
I am facing this error after I updated the llama-index package.

Error code: 400 - {'error': {'message': "'name' is a required property - 'functions.0'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
37 comments
Please stand by @openmind as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
The error message you're seeing suggests that the 'name' property is missing in the first function of the 'functions' array you're passing to the LlamaIndex API.

In the LlamaIndex API, when you're making a function call, you need to define a class with the required properties and then convert it to an OpenAI function using the to_openai_function method. Here's an example from the Llama API documentation:

Plain Text
from pydantic import BaseModel
from llama_index.llms.openai_utils import to_openai_function

class Song(BaseModel):
    """A song with name and artist"""
    name: str
    artist: str

song_fn = to_openai_function(Song)


In this example, 'Song' is the name of the function. When you call the complete method, you need to pass this function in the 'functions' array:

Plain Text
llm = LlamaAPI(api_key=api_key)
response = llm.complete("Generate a song", functions=[song_fn])


Please ensure that you're correctly defining and passing the function in your code. If the problem persists, it would be helpful to review the exact code that's causing the error.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@Logan M , Function calling is not working
How did you setup your function/tool?
Expecting value: line 1 column 1 (char 0)
Getting this error
why do we need to use api key from llama-index?
it was working with openai api key before
@Logan M
Don't listen to kapa there, it's using some other LLM, not openai
In your code, how did you setup the function/tool? Exactly like above?
llm = LlamaAPI(api_key="key")
func_response = llm.complete(func_prompt, functions=[compilance_fn])
function_call = func_response.additional_kwargs["function_call"]

This is what kapa said.
func_response = OpenAI(model="gpt-4").complete(func_prompt, functions=[compilance_fn])

Using this code, it worked well with the older version, but it doesn't work in the latest version.
Kapa is crazy, ignore it.

In any case, newer versions of llama-index use a newer openai client.

It looks like this now

Plain Text
from pydantic import BaseModel
from llama_index.llms import OpenAI
from llama_index.llms.openai_utils import to_openai_tool

class Song(BaseModel):
    """A song with name and artist"""
    name: str
    artist: str

song_fn = to_openai_tool(Song)

llm = OpenAI(model="gpt-4")
response = llm.complete(func_prompt, tools=[song_fn])
just replace functions with tools?
func_response = OpenAI(model="gpt-4").complete(func_prompt, tools=[compilance_fn])
And a different function here
song_fn = to_openai_tool(Song)
let me check
function_call = func_response.additional_kwargs["function_call"]
this is necessary?
not really, that's only used to tell which function was used.

If you need it, it might be under a different key now, not function_call
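
A minimal sketch of what reading that new key would look like, assuming the 0.9.x response shape where the call metadata lives under tool_calls (names follow the snippets later in this thread):

Plain Text
# Sketch: with the newer openai client, the call metadata is a list of tool
# calls rather than a single "function_call" entry.
tool_call = func_response.additional_kwargs["tool_calls"][0]
print(tool_call.function.name)       # which tool/function was called
print(tool_call.function.arguments)  # JSON string of the arguments
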
class Compilance(BaseModel):
    """Compilance score between requirement and answer, Score should be number from 1 to 10."""
    score: str

compilance_fn = to_openai_tool(Compilance)
func_response = OpenAI(model="gpt-4").complete(func_prompt, tools=[compilance_fn])

Full code, and not working, @Logan M
Plain Text
>>> response = llm.complete("Tell me a song", tools=[song_fn])
>>> song = Song.parse_raw(response.additional_kwargs['tool_calls'][0].function.arguments)
>>> song
Song(name='Bohemian Rhapsody', artist='Queen')
>>> 
what version are you using?
llama-index==0.9.11.post1, openai==1.3.5
llama-index==0.9.11

I am using this version
let me try to update llama-index version again
Full code

Plain Text
from pydantic import BaseModel
from llama_index.llms import OpenAI
from llama_index.llms.openai_utils import to_openai_tool
 
class Song(BaseModel):
    """A song with name and artist"""
    name: str
    artist: str
 
song_fn = to_openai_tool(Song)

llm = OpenAI(model="gpt-4")
response = llm.complete("Tell me a song", tools=[song_fn])
song = Song.parse_raw(response.additional_kwargs['tool_calls'][0].function.arguments)
print(song)
not sure what to tell you otherwise
Song.parse_raw(response.additional_kwargs['tool_calls'][0].function.arguments)

Here, how to access the name?
response.additional_kwargs['tool_calls'][0].function.name
score = func_response.additional_kwargs['tool_calls'][0].function.score

not working. but
@Logan M , ah, finally solved
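
For completeness, a hedged sketch of what the working version presumably looks like, following the Song.parse_raw pattern above: the score is read by parsing the tool-call arguments back into the Compilance model, not via .function.score (func_prompt is assumed to be defined as in the earlier snippets):

Plain Text
from pydantic import BaseModel
from llama_index.llms import OpenAI
from llama_index.llms.openai_utils import to_openai_tool

class Compilance(BaseModel):
    """Compilance score between requirement and answer, Score should be number from 1 to 10."""
    score: str

compilance_fn = to_openai_tool(Compilance)

llm = OpenAI(model="gpt-4")
func_response = llm.complete(func_prompt, tools=[compilance_fn])

# Parse the JSON arguments of the first tool call back into the pydantic model,
# then read the field from the parsed object.
result = Compilance.parse_raw(
    func_response.additional_kwargs["tool_calls"][0].function.arguments
)
score = result.score
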