The error "`dumps_kwargs` keyword arguments are no longer supported", which you're seeing in structured prediction output, is not directly addressed in the given information about LlamaIndex. The `structured_predict` method is available in various LLM classes, including OpenAI and Ollama. Here's a basic example of how structured prediction is typically used:

```python
from llama_index.core.bridge.pydantic import BaseModel, Field

class SomeStructure(BaseModel):
    field1: str = Field(description="Description of field1")
    field2: int = Field(description="Description of field2")

result = llm.structured_predict(
    output_cls=SomeStructure,
    prompt=some_prompt_template,
    **prompt_args,
)
```
Regarding the `dumps_kwargs` error itself, it's possible that there have been changes in the API or its dependencies that are not reflected in the provided knowledge sources, so it's worth confirming that your installed versions of LlamaIndex and Pydantic are compatible. For reference, here is the Pydantic model in question:

```python
from llama_index.core.bridge.pydantic import BaseModel, Field

class ConversationTitleDescription(BaseModel):
    """A representation of title and description of conversation"""

    title: str = Field(
        description="A unique title for the conversation."
    )
    description: str = Field(
        description="A unique description for the conversation."
    )
```
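As a quick sanity check, you can print the installed versions of Pydantic and LlamaIndex. This is a minimal sketch; the exact attribute names are assumptions based on current package layouts:

```python
import pydantic
import llama_index.core

# The dumps_kwargs error originates in Pydantic v2's deprecated .json() method,
# so the Pydantic major version is the first thing to confirm.
print("pydantic:", pydantic.VERSION)
print("llama-index-core:", llama_index.core.__version__)
```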
Coming back to the "`dumps_kwargs` keyword arguments are no longer supported" error you're encountering: this error is not directly related to the LlamaIndex structured prediction functionality as described in the available documentation. The message `dumps_kwargs keyword arguments are no longer supported` suggests that there might be an issue with the serialization of the Pydantic model, possibly related to recent changes in Pydantic or its integration with LlamaIndex.
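For context, this is the message Pydantic v2 raises when extra `json.dumps`-style keyword arguments are passed to the deprecated `.json()` method. A minimal sketch, independent of LlamaIndex (the `Item` model is just an illustration):

```python
from pydantic import BaseModel

class Item(BaseModel):
    name: str

item = Item(name="example")

# Pydantic v1 forwarded extra keyword arguments to json.dumps:
#   item.json(indent=2)   # worked in v1
# Under Pydantic v2 the same call raises:
#   TypeError: `dumps_kwargs` keyword arguments are no longer supported.
# The v2 replacement is model_dump_json:
print(item.model_dump_json(indent=2))
```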
Double-check that you're calling the `structured_predict` method correctly. Based on the examples in the knowledge sources, it should look something like this:

```python
from llama_index.core.prompts import PromptTemplate

llm = YourLLMInstance()  # Replace with your actual LLM instance
prompt_tmpl = PromptTemplate("Your prompt text here {some_variable}")

result = llm.structured_predict(
    ConversationTitleDescription,
    prompt_tmpl,
    some_variable="Your value here",
)
```
You currently have `json_mode=False`. According to the Ollama documentation, JSON mode can be helpful when trying to ensure all responses are valid JSON, which is particularly useful for structured outputs. You might want to consider setting `json_mode=True`:

```python
from llama_index.llms.ollama import Ollama

model = Ollama(model="llama3:8b", json_mode=True, temperature=0)
```
You're using the `as_structured_llm` method to create a structured LLM with your `ConversationTitleDescription` class, and your usage of `PromptTemplate` and `ChatMessage` appears correct based on the examples in the knowledge sources:

```python
response = sllm.chat([input_msg])
print(response.message.content)
```

To get the parsed Pydantic object rather than the text, read the response's `raw` attribute:

```python
output_obj = response.raw
print(output_obj)
```

Streaming also works with the structured LLM:

```python
response_gen = sllm.stream_chat([input_msg])
for r in response_gen:
    print(r.message.content)
```
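For completeness, here is one way the `sllm` and `input_msg` used above might be constructed; this mirrors the setup that appears later in this thread, and the example message is an assumption:

```python
from llama_index.llms.ollama import Ollama
from llama_index.core.llms import ChatMessage
from llama_index.core.prompts import PromptTemplate

llm = Ollama(model="llama3:8b", json_mode=True, request_timeout=600)
sllm = llm.as_structured_llm(ConversationTitleDescription)

prompt = PromptTemplate(DESCRIBE_CONVERSATION_RETURN_TITLE_DESCRIPTION)
struct_prompt = prompt.format(user_message="Tell me a joke")
input_msg = ChatMessage.from_str(struct_prompt)
```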
A couple of things to verify: make sure the `ConversationTitleDescription` class is correctly defined as shown in your previous message, and that the `DESCRIBE_CONVERSATION_RETURN_TITLE_DESCRIPTION` template is properly formatted to elicit a response that matches your Pydantic model. Alternatively, you can call the `structured_predict` method directly, as shown in the OpenAI examples:

```python
result = model.structured_predict(
    ConversationTitleDescription,
    prompt_template,
    user_message=data['message'],
)
print(result)
```
It looks like the closing `}` is missing from the model's output, which is causing the JSON parsing to fail. This issue can be addressed by making a few adjustments to your code. First, setting `json_mode=True` can help ensure that the responses are valid JSON:

```python
model = Ollama(model="llama3:8b", json_mode=True, temperature=0)
```

If the output is still occasionally malformed, you can trim the response down to the outermost braces before parsing:

```python
import json

def _marshal_output_to_json(output: str) -> str:
    output = output.strip()
    left = output.find("{")
    right = output.rfind("}")
    output = output[left : right + 1]
    return output

# After getting the response
json_output = _marshal_output_to_json(str(response))
parsed_output = json.loads(json_output)
print(parsed_output)
```

Another option is to read the parsed object from the response's `raw` attribute:

```python
output_obj = response.raw
print(output_obj)
```
You could also switch to `structured_predict`. Calling the `structured_predict` method directly might handle the parsing more robustly:

```python
result = model.structured_predict(
    ConversationTitleDescription,
    prompt_template,
    user_message=data['message'],
)
print(result)
```

If you need streaming, the structured LLM supports that as well:

```python
response_gen = sllm.stream_chat([input_msg])
for r in response_gen:
    print(r.message.content)
```
Note that `dumps_kwargs` is not mentioned anywhere in the LlamaIndex codebase. With the setup below:

```python
llm = Ollama(model="llama3:8b", json_mode=True, request_timeout=600)
sllm = llm.as_structured_llm(ConversationTitleDescription)

prompt = PromptTemplate(DESCRIBE_CONVERSATION_RETURN_TITLE_DESCRIPTION)
struct_prompt = prompt.format(user_message=data['message'])
input_msg = ChatMessage.from_str(struct_prompt)

response = sllm.chat([input_msg])
print(response)  # TODO
```

the printed response looks like this:

```
assistant: {"title":"Laughter is the Best Medicine","description":"A humorous conversation about a joke"}
```
The `assistant: ` prefix comes from stringifying the chat response; the parsed Pydantic object is available on the response's `raw` attribute:

```python
output_obj = response.raw
print(output_obj)
```

If using the `raw` attribute doesn't solve the issue, you can remove the prefix manually before parsing:

```python
import json

response_str = str(response).replace("assistant: ", "")
parsed_output = json.loads(response_str)
print(parsed_output)
```
Alternatively, use `structured_predict` directly. Instead of `chat`, try the `structured_predict` method, which is designed specifically for structured outputs:

```python
result = model.structured_predict(
    ConversationTitleDescription,
    prompt_template,
    user_message=data['message'],
)
print(result)
```
And keep the `json_mode` setting enabled so the model is constrained to produce valid JSON:

```python
model = Ollama(model="llama3:8b", json_mode=True, temperature=0)
```