scottyakc
Joined September 25, 2024
Trying to replicate the LlamaParse example here: https://github.com/run-llama/llama_parse/blob/main/examples/demo_advanced.ipynb

But instead of OpenAI, I'm trying to use LM Studio on my local machine:
from llama_index.core import Settings
from llama_index.core.node_parser import MarkdownElementNodeParser
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai import OpenAI

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

llm = OpenAI(
    api_key="NULL",
    api_base="http://localhost:1234/v1",
    temperature=0.2,
)

Settings.llm = llm
Settings.embed_model = embed_model

node_parser = MarkdownElementNodeParser(llm=llm, num_workers=8)

...

sub_query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    llm=llm,
    use_async=True,
)
response = sub_query_engine.query(
    "Which fund has the smallest minimum investment size?"
)
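One possible cause (an assumption on my part, not confirmed): `SubQuestionQueryEngine.from_defaults` sees an `OpenAI` LLM and uses its function-calling question generator, but LM Studio's OpenAI-compatible server answers with plain text and no `tool_calls`. A sketch of a workaround is to wrap the local endpoint in `OpenAILike` and declare that it does not support function calling, so LlamaIndex should fall back to its prompt-based question generator (assumes the `llama-index-llms-openai-like` package is installed; the model name is a placeholder for whatever LM Studio has loaded):

```python
# Workaround sketch, not a confirmed fix. is_function_calling_model=False
# tells LlamaIndex not to expect OpenAI-style tool calls from this endpoint.
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="local-model",                  # placeholder for LM Studio's model
    api_base="http://localhost:1234/v1",
    api_key="NULL",
    temperature=0.2,
    is_chat_model=True,
    is_function_calling_model=False,
)
```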

The API call is received by the server, but when the response comes back, LlamaIndex throws the error below:

ValueError: Expected tool_calls in ai_message.additional_kwargs, but none found.
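For context, a minimal stdlib sketch of the shape behind this error: the OpenAI question generator expects the model to answer with a function/tool call, so it looks for a `tool_calls` entry in the message's `additional_kwargs` and raises when a plain-text completion has none. The function and dict shapes below are illustrative, not the library's real classes:

```python
# Hypothetical sketch of the failing check, not LlamaIndex's actual code.
def extract_tool_calls(additional_kwargs: dict) -> list:
    tool_calls = additional_kwargs.get("tool_calls")
    if not tool_calls:
        raise ValueError(
            "Expected tool_calls in ai_message.additional_kwargs, but none found."
        )
    return tool_calls

# An OpenAI model with function calling returns something like this:
function_calling_reply = {
    "tool_calls": [{"function": {"name": "SubQuestions", "arguments": "{...}"}}]
}

# A plain-text completion (e.g. from LM Studio) carries no tool_calls:
plain_text_reply = {}
```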

Anyone experience this?