elmatero
Joined September 25, 2024
Hello folks!
I am using QueryEngineTool and I am getting this error because of a long description:
2023-12-12 13:49:03,913 - llama_index.chat_engine.types - WARNING - Encountered exception writing response to history: Error code: 400 - {'error': {'message': "'Tool for bla bla bla bla bla' is too long - 'tools.0.function.description'", 'type': 'invalid_request_error', 'param': None, 'code': None}} (types.py:116)

Do you know what the character limit for the description is?
1 comment
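The limit behind this error is on the OpenAI side: each tool's function description is capped at 1024 characters (the documented limit at the time of this error). A minimal sketch of trimming a description before building the tool, so the 400 error never fires:

```python
# Sketch: trim a tool description to OpenAI's limit on
# 'tools.N.function.description' (1024 characters).
MAX_DESCRIPTION_LEN = 1024

def safe_description(text: str, limit: int = MAX_DESCRIPTION_LEN) -> str:
    """Return the description unchanged if it fits, else truncate with an ellipsis."""
    if len(text) <= limit:
        return text
    return text[: limit - 3] + "..."

long_desc = "Tool for " + "bla " * 500  # well over the limit
print(len(safe_description(long_desc)))  # 1024
```

Pass the result as the `description` of the QueryEngineTool; anything essential beyond the cap is better moved into the tool's prompt or the system prompt.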
I am getting this error:
openai.error.RateLimitError: Rate limit reached for gpt-3.5-turbo in organization on tokens per min. Limit: 160000 / min. Current: 159190 / min. Contact us through our help center at help.openai.com if you continue to have issues.
I am creating an index:
vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
index.storage_context.persist("./webpage_data_index")
Can I use gpt-3.5-turbo-16k in this code? How can I configure that?
3 comments
Hi guys. I am creating a new query engine with CSV data. I create a custom document with a metadata keyword.
When I query with the word Parotiditis I get: "Parotiditis is a medical condition that is being referred to in the given context." But one node has the following metadata: 'bd87575f-6aca-4ebe-bbae-0320521c6ced': RefDocInfo(node_ids=['7fc5ef03-033e-479f-8c9d-7d03fab082f4'], metadata={'keyword': 'Ac IgM Parotiditis'})
What can I do to retrieve the node with that metadata keyword?
4 comments
Hello. Is there a way to make a bot answer a question using more than one function call?
3 comments
I got this error:
Error: This model's maximum context length is 4097 tokens. However, your messages resulted in 4318 tokens (2877 in the messages, 1441 in the functions). Please reduce the length of the messages or functions.
And my memory is configured like this:
_memory = ChatMemoryBuffer.from_defaults(
    token_limit=2000, chat_history=chat_history
)
14 comments
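The `token_limit` on ChatMemoryBuffer only caps the stored chat history; the system prompt, the new user message, and the function/tool schemas (1441 tokens in the error above) all count against the same 4097-token window. A back-of-the-envelope budget using the numbers from the error, with the completion reserve as an assumption:

```python
# Rough budget for a 4k-context model: history + prompts + tool schemas
# + completion must all fit in the window.
CONTEXT_WINDOW = 4097
functions_tokens = 1441   # from the error message above
completion_budget = 500   # tokens reserved for the model's answer (assumption)

# Upper bound for the memory's token_limit before the window overflows:
safe_token_limit = CONTEXT_WINDOW - functions_tokens - completion_budget
print(safe_token_limit)  # 2156
```

In practice the limit must be lower still, to leave room for the system prompt and the incoming message, so shrinking the function descriptions is usually the bigger win.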
I have another question. I don't know if there is a better way to implement this function calling, or whether another tool would be better. The problem with this code is that if the specialty variable passed is, for example, "bloods donation", the bot is not going to find it and will return the default message.
2 comments
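One low-tech fix for the exact-match problem is to fuzzy-match the incoming specialty against the known list, so near-misses like "bloods donation" still resolve. A sketch using the standard library; the specialty list here is made up for illustration:

```python
# Sketch: fuzzy-match the user-supplied specialty instead of exact matching.
import difflib
from typing import Optional

SPECIALTIES = ["blood donation", "endoscopy", "cardiology"]  # illustrative list

def resolve_specialty(raw: str, cutoff: float = 0.6) -> Optional[str]:
    """Return the closest known specialty, or None if nothing is close enough."""
    matches = difflib.get_close_matches(raw.lower(), SPECIALTIES, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(resolve_specialty("bloods donation"))  # blood donation
print(resolve_specialty("xyz"))              # None
```

The function-calling handler can then look up the resolved name and only fall back to the default message when `None` comes back.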
Hello, I have one question: is there a way to make a bot agent use at least one tool (a query engine or a function call)?
4 comments
Hi! I am saving the messages in the database so that if we have to upgrade production, we can get them back. The problem is that I can't configure them in the new agent:
From this code:
print("chat send", chat_history)
self._agent = OpenAIAgent.from_tools(
    _all_tools,
    llm=llm,
    callback_manager=callback_manager,
    memory=_memory,
    system_prompt=TEXT_QA_SYSTEM_PROMPT.content,
    chat_history=chat_history,
)
print("chat history", self._agent.chat_history)
I get:
chat send [ChatMessage(role=<MessageRole.USER: 'user'>, content='elmatero', additional_kwargs={}), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content='Buenos días, mi nombre es Juan Pablo. ¿En qué lo puedo ayudar?', additional_kwargs={}), ChatMessage(role=<MessageRole.USER: 'user'>, content='elmatero', additional_kwargs={}), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content='Buenos días, mi nombre es Juan Pablo. ¿En qué lo puedo ayudar?', additional_kwargs={})]
chat history []
And the format I am passing to the agent is the same one I get from self._agent.chat_history.
5 comments
How can I choose the "best" embed_model for my use case?
3 comments
Hello folks, how are you doing?
I am using FnRetrieverOpenAIAgent to get information from different query engines. The problem is that, when I run with verbose on, I see it makes a summary of the text I am retrieving.
And I don't like how this summary is done. Is there a way to use only the context, without having OpenAI make a summary first?
I tried ContextRetrieverOpenAIAgent, but since it also takes the retriever as a required argument, it does the same thing.
4 comments
Hello, I have a basic question because I am not understanding something. I'm trying to use ChromaDB with Docker:

import chromadb

remote_db = chromadb.HttpClient()
chroma_collection = remote_db.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)
This is the code I want to make work. I am creating a ChromaDB collection in one backend, where the documents are stored.
From another backend I want to create this index and run queries.
My question is: why do I need to pass documents in the last line of the code?
Doesn't ChromaDB already store this data along with the related vectors?
8 comments
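Chroma does store the text alongside the vectors, so the backend that only queries does not need the documents at all: `VectorStoreIndex.from_vector_store` rebuilds the index from the existing collection. A sketch against the snippet above (llama_index 0.9-era API; assumes the Docker Chroma server is reachable and `embed_model` is the same one used at ingestion):

```python
# Sketch: query-side backend — reconnect to the existing collection and
# rebuild the index WITHOUT re-ingesting documents.
import chromadb
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

remote_db = chromadb.HttpClient()  # assumes the Docker Chroma server is up
chroma_collection = remote_db.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# No `documents` argument: Chroma already holds the text and the vectors.
index = VectorStoreIndex.from_vector_store(
    vector_store, service_context=service_context
)
query_engine = index.as_query_engine()
```

`from_documents` is only for the ingesting backend; using it on the query side would re-embed and re-insert everything.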
Hi, how are you?
=== Calling Function ===
Calling function: endoscopia with args: ¡Hola! Para sacar (............) ponibles?
Error chatbot: Expecting value: line 1 column 1 (char 0)
I am getting this error; it seems the ¡ character is causing me problems. Do you have any ideas how I can solve it?
23 comments
Is it possible to increase the number of tools used by a FnRetrieverOpenAIAgent? From 2 to 5, for example? And to always use one elementary tool?
18 comments
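With FnRetrieverOpenAIAgent the number of candidate tools comes from the object retriever, e.g. `obj_index.as_retriever(similarity_top_k=5)`. "Always include one elementary tool" is not built in, but the selection logic is simple enough to sketch in plain Python (the names here are illustrative, not library API):

```python
# Sketch of the "top-k retrieved tools plus one mandatory tool" idea.
def pick_tools(retrieved: list, mandatory, k: int = 5) -> list:
    """Keep the mandatory tool first, then up to k retrieved ones, no duplicates."""
    tools = [mandatory]
    for t in retrieved:
        if t != mandatory and len(tools) < k + 1:
            tools.append(t)
    return tools

print(pick_tools(["a", "b", "c"], "base"))  # ['base', 'a', 'b', 'c']
```

In the agent this would mean retrieving with `similarity_top_k=5` and merging the result with the elementary tool before the tool list reaches the LLM.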
Hi, I am trying to create a Retrieval-augmented openAi agent (https://gpt-index.readthedocs.io/en/stable/examples/agent/openai_agent_retrieval.html#retrieval-augmented-openai-agent)
I used to use an OpenAIAgent. I don't understand how to use a query_engine_tool with it. I am receiving an error: Error: Tool with name query_engine_tool not found
I am using the FnRetrieverOpenAIAgent wrong and I don't know how to use it right. I couldn't find anything in the docs. This is my current code:
self._user_id = user_id
self.project_id = index_dir
self._tools = {tool.metadata.name: tool for tool in tools}
_vector_store = FaissVectorStore.from_persist_dir(f"./llama_index/{index_dir}")
_storage_context = StorageContext.from_defaults(
    vector_store=_vector_store,
    persist_dir=f"./llama_index/{index_dir}",
)
_index = load_index_from_storage(storage_context=_storage_context)
_memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000, chat_history=chat_history
)
similarity_top_k = 7
_retriever = VectorIndexRetriever(index=_index, similarity_top_k=similarity_top_k)
_query_engine = RetrieverQueryEngine(retriever=_retriever)
query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=_query_engine,
)
_all_tools = [query_engine_tool]
for tool in tools:
    _all_tools.append(tool)
tool_mapping = SimpleToolNodeMapping.from_objects(_all_tools)
obj_index = ObjectIndex.from_objects(
    _all_tools,
    tool_mapping,
    VectorStoreIndex,
)
self.token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode,
)
callback_manager = CallbackManager([self.token_counter])
self._agent = FnRetrieverOpenAIAgent.from_retriever(
    retriever=obj_index.as_retriever(),
    llm=llm,
    callback_manager=callback_manager,
    memory=_memory,
    system_prompt=TEXT_QA_SYSTEM_PROMPT.content,
    verbose=True,
)
self._chat_history = chat_history
24 comments
Hi! How can I make an agent always use the QueryEngineTool? Or use it when a function call returns a particular response?
2 comments
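At the OpenAI API level, forcing a specific function is done with the `tool_choice` request field, and newer llama_index versions expose this as `OpenAIAgent.chat(..., tool_choice="<tool name>")`. The dict below is just the raw request shape, shown for reference (the tool name is the default one from the posts above):

```python
# Sketch: the raw OpenAI request field that forces one specific function.
forced = {
    "tool_choice": {
        "type": "function",
        "function": {"name": "query_engine_tool"},
    }
}
print(forced["tool_choice"]["function"]["name"])  # query_engine_tool
```

For the conditional case ("only when a function call returns a particular response"), that routing has to live in application code: inspect the first call's result, then issue a second `chat` with the forced `tool_choice`.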
Hi!
Is there an easy way to combine a CondenseQuestionChatEngine with ChatGPT functions like the ones in this example?
https://gpt-index.readthedocs.io/en/stable/examples/agent/openai_agent.html
16 comments
Hi! I want to know if there is a way to get the processed question that the CondenseQuestionChatEngine generates.
When I run the code with verbose=True:
self.chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=self.query_engine, verbose=True, memory=self.memory
)
I get in the console: Querying with: How are you?
I want to save that question so I can do fine-tuning with the processed question rather than the original one.
2 comments
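Since `verbose=True` prints exactly "Querying with: &lt;question&gt;", one low-tech way to save the condensed question is to capture stdout around the chat call and parse it back out. A self-contained sketch; `fake_chat` stands in for the real `chat_engine.chat(...)` call:

```python
# Sketch: capture the verbose "Querying with: ..." line to recover the
# condensed question for fine-tuning data.
import io
import re
from contextlib import redirect_stdout

def fake_chat(message: str) -> None:
    """Stand-in for chat_engine.chat(message) with verbose=True."""
    print("Querying with: How are you?")

buf = io.StringIO()
with redirect_stdout(buf):
    fake_chat("hi, how r u")

match = re.search(r"Querying with: (.+)", buf.getvalue())
condensed = match.group(1) if match else None
print(condensed)  # How are you?
```

A cleaner long-term option is a custom callback handler or subclassing the engine, but stdout capture needs no knowledge of the engine's internals.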
Hi there. When I try to run the following example: https://gpt-index.readthedocs.io/en/latest/getting_started/starter_example.html#build-and-query-index I get an error saying I don't have the OPENAI_API_KEY configured. Do I really need this key for this simple example?
9 comments
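Yes: the starter example uses OpenAI for both the embeddings and the LLM by default, so a key is required. Setting it in the environment before building the index is enough; the value below is a placeholder, not a real key:

```python
# Set the key the starter example expects (placeholder value shown).
import os

os.environ["OPENAI_API_KEY"] = "sk-your-key-here"  # placeholder, not a real key
print("OPENAI_API_KEY" in os.environ)  # True
```

The alternative, if no key is available, is swapping in local models (e.g. a local embedding model and LLM via the ServiceContext), but that is a bigger change than the starter example covers.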