BerndPra
Joined October 4, 2024
I am trying to use llama-index-agent-openai with Ollama as the backend. The code
Plain Text
agent = OpenAIAgent.from_tools(
    tools=[my_tool], llm=llm, verbose=True, system_prompt=SCHEMA_PROMPT
)

raises the exception: ValueError("llm must be a OpenAI instance") when the llm is provided by:
Plain Text
llm = Ollama(
    model=settings.ollama_model,
    base_url=settings.ollama_base_url,
    request_timeout=120.0,
)

Is this a bug or is there a workaround? Thanks for any help!
1 comment
I am trying to use llama-index-graph-stores-neo4j==0.3.5. When I instantiate the store:
Plain Text
# graph store instance
graph_store = Neo4jPGStore(
    username=settings.neo4j_username,
    password=settings.neo4j_password.get_secret_value(),
    url=settings.neo4j_uri,
)

Neo4j throws:
Plain Text
neo4j.exceptions.ClientError: {code: Neo.ClientError.Procedure.ProcedureNotFound} {message: There is no procedure with the name `apoc.meta.subGraph` registered for this database instance. Please ensure you've spelled the procedure name correctly and that the procedure is properly deployed.}

I have the following plugins installed: apoc-5.24.0-extended.jar, apoc.jar, neo4j-graph-data-science-2.11.0.jar.
I tried neo4j:5.23.0-community and neo4j:5.24.2-community with the same issue. Does anyone know what I am missing?
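Since `apoc.meta.subGraph` lives in APOC core (not the extended jar), one thing worth checking is whether APOC core is actually loaded and allowlisted. If you run the official Docker image, a docker-compose sketch along these lines may help; the password and ports are placeholders, and the exact setting names should be checked against your Neo4j version:

```yaml
services:
  neo4j:
    image: neo4j:5.24.2-community
    environment:
      NEO4J_AUTH: neo4j/changeme             # placeholder credentials
      NEO4J_PLUGINS: '["apoc"]'              # lets the image install and enable APOC core
      NEO4J_dbms_security_procedures_unrestricted: "apoc.*"
      NEO4J_dbms_security_procedures_allowlist: "apoc.*"
    ports:
      - "7474:7474"
      - "7687:7687"
```

After startup, `SHOW PROCEDURES YIELD name WHERE name STARTS WITH 'apoc.meta'` in the Neo4j browser should confirm whether the procedure is registered.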
9 comments
The simple example logs:
Plain Text
DEBUG:llama_agents.control_plane.server - Sending task ef3bd09a-274b-4848-a6ca-7527709e3a07 to services: [QueueMessage(id_='10ab27b2-b089-4d17-8350-5bd9b35bc6ef', publisher_id='default', data={'task_id': 'ef3bd09a-274b-4848-a6ca-7527709e3a07', 'history': [{'role': <MessageRole.USER: 'user'>, 'content': "What's the secret fact?", 'additional_kwargs': {}}], 'result': "I'm not sure what specific secret fact you're referring to. Could you provide more details or clarify what you mean?", 'data': {}}, action=<ActionTypes.COMPLETED_TASK: 'completed_task'>, stats=QueueMessageStats(publish_time=None, process_start_time=None, process_end_time=None), type='human')]
INFO:llama_agents.message_queues.base - Publishing message to 'human' with action 'ActionTypes.COMPLETED_TASK'
DEBUG:llama_agents.message_queues.base - Message: {'id_': '10ab27b2-b089-4d17-8350-5bd9b35bc6ef', 'publisher_id': 'ControlPlaneServer-a450c4fc-37ae-4beb-b571-3039e47760a6', 'data': {'task_id': 'ef3bd09a-274b-4848-a6ca-7527709e3a07', 'history': [{'role': <MessageRole.USER: 'user'>, 'content': "What's the secret fact?", 'additional_kwargs': {}}], 'result': "I'm not sure what specific secret fact you're referring to. Could you provide more details or clarify what you mean?", 'data': {}}, 'action': <ActionTypes.COMPLETED_TASK: 'completed_task'>, 'stats': {'publish_time': None, 'process_start_time': None, 'process_end_time': None}, 'type': 'human'}
DEBUG:llama_agents.control_plane.server - Task ef3bd09a-274b-4848-a6ca-7527709e3a07 created
INFO:llama_agents.message_queues.simple - Successfully published message 'control_plane' to consumer.
INFO:llama_agents.message_queues.simple - Successfully published message 'human' to consumer.
I'm not sure what specific secret fact you're referring to. Could you provide more details or clarify what you mean?
1 comment
I would appreciate some help with AzureOpenAI path settings. Our proxy requires a path like https://ai-proxy.lab.mycompany.com/openai/deployments/gpt-35-turbo-1106/chat/completions?api-version=2023-12-01-preview.
By setting OPENAI_API_BASE to https://ai-proxy.lab.mycompany.com/openai/deployments/gpt-35-turbo-1106, LlamaIndex calls https://ai-proxy.lab.mycompany.com/openai/deployments/gpt-35-turbo-1106/chat/completions, but the query parameter ?api-version=2023-12-01-preview is missing and the call fails. How can I add it? Thank you for any help.
5 comments
When I use LlamaParse on PDF documents with ResultType.MD, I get multiple documents with Markdown text as a result. Is there a way to save a complete single Markdown file for each parsed PDF? I can distinguish the results, e.g. by the metadata field file_name, but in what order?
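The per-file documents can be stitched back together by grouping on the file_name metadata; the assumption here, worth verifying against your output, is that the returned list preserves page order within each file. A sketch using plain dicts as stand-ins for the Document objects (only the text content and the file_name metadata field from the question are assumed):

```python
from collections import defaultdict

# Stand-ins for the parsed Document objects: each carries Markdown text
# and a metadata dict with "file_name", as described in the question.
documents = [
    {"text": "# Page 1 of a", "metadata": {"file_name": "a.pdf"}},
    {"text": "# Page 1 of b", "metadata": {"file_name": "b.pdf"}},
    {"text": "# Page 2 of a", "metadata": {"file_name": "a.pdf"}},
]

# Group page texts per source file, keeping the list's original order.
pages_by_file = defaultdict(list)
for doc in documents:
    pages_by_file[doc["metadata"]["file_name"]].append(doc["text"])

# One complete Markdown string per parsed PDF.
markdown_files = {
    name: "\n\n".join(pages) for name, pages in pages_by_file.items()
}
```

Each entry can then be written out, e.g. `Path(name).with_suffix(".md").write_text(markdown_files[name])`, giving one Markdown file per PDF.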