2023-12-12 13:49:03,913 - llama_index.chat_engine.types - WARNING - Encountered exception writing response to history: Error code: 400 - {'error': {'message': "'Tool for bla bla bla bla bla' is too long - 'tools.0.function.description'", 'type': 'invalid_request_error', 'param': None, 'code': None}} (types.py:116)
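The 400 above means the tool's function description exceeds OpenAI's length limit for `tools[*].function.description` (commonly reported as 1024 characters; verify against the current API docs). One workaround is to truncate the description before building the tool. `truncate_description` is a hypothetical helper, not part of llama_index:

```python
# OpenAI rejects tool/function descriptions over a fixed character limit;
# 1024 characters is the commonly reported cap (check current docs).
MAX_DESCRIPTION_CHARS = 1024

def truncate_description(description: str, limit: int = MAX_DESCRIPTION_CHARS) -> str:
    """Trim a tool description so the OpenAI API accepts it."""
    if len(description) <= limit:
        return description
    # Cut and mark the truncation so it is visible when inspecting the tool.
    return description[: limit - 3] + "..."
```

Applying this to whatever string you pass as the tool description should make the 400 go away, at the cost of losing the tail of the text.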
openai.error.RateLimitError: Rate limit reached for gpt-3.5-turbo in organization on tokens per min. Limit: 160000 / min. Current: 159190 / min. Contact us through our help center at help.openai.com if you continue to have issues.
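A common way to ride out `RateLimitError` is exponential backoff with jitter. A generic stdlib sketch — in real code you would catch `openai.error.RateLimitError` specifically rather than bare `Exception`:

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on failure with exponentially growing delays."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the original error
            # base, 2*base, 4*base, ... plus jitter to avoid retry bursts
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Since the log shows you are already at 159190 of 160000 tokens/min, reducing request size (shorter history, fewer tools) helps as much as retrying.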
from llama_index import StorageContext, VectorStoreIndex
from llama_index.vector_stores import FaissVectorStore

vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
index.storage_context.persist("./webpage_data_index")
Parotiditis
I get: "Parotiditis is a medical condition that is being referred to in the given context."
But one of the nodes has the following metadata: 'bd87575f-6aca-4ebe-bbae-0320521c6ced': RefDocInfo(node_ids=['7fc5ef03-033e-479f-8c9d-7d03fab082f4'], metadata={'keyword': 'Ac IgM Parotiditis'}),
Error: This model's maximum context length is 4097 tokens. However, your messages resulted in 4318 tokens (2877 in the messages, 1441 in the functions). Please reduce the length of the messages or functions.
_memory = ChatMemoryBuffer.from_defaults(
    token_limit=2000, chat_history=chat_history
)
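The context-length error counts both the messages and the JSON schemas of the functions/tools (2877 + 1441 above), so lowering `token_limit` on the memory buffer only shrinks the message side. For exact counts use `tiktoken`; a rough stdlib sketch of the budget check (the ~4 chars/token ratio and the 500-token reply reserve are assumptions):

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English);
    use tiktoken for exact counts."""
    return max(1, len(text) // 4)

def fits_context(messages, function_schema_tokens, context_limit=4097, reply_reserve=500):
    """Return True if message text plus tool schemas leave room for the reply."""
    used = sum(approx_tokens(m) for m in messages) + function_schema_tokens
    return used + reply_reserve <= context_limit
```

If `fits_context` is False even with a small memory buffer, the tool schemas themselves (the 1441 tokens) are the part to cut down.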
blood donation
the bot is not going to find it and will return the default message.
print("chat send", chat_history)
self._agent = OpenAIAgent.from_tools(
    _all_tools,
    llm=llm,
    callback_manager=callback_manager,
    memory=_memory,
    system_prompt=TEXT_QA_SYSTEM_PROMPT.content,
    chat_history=chat_history,
)
print("chat history", self._agent.chat_history)
chat send [ChatMessage(role=<MessageRole.USER: 'user'>, content='elmatero', additional_kwargs={}), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content='Buenos días, mi nombre es Juan Pablo. ¿En qué lo puedo ayudar?', additional_kwargs={}), ChatMessage(role=<MessageRole.USER: 'user'>, content='elmatero', additional_kwargs={}), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, content='Buenos días, mi nombre es Juan Pablo. ¿En qué lo puedo ayudar?', additional_kwargs={})]
chat history []
self._agent.chat_history
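One thing worth checking (an assumption about `from_tools` internals, so verify against your llama_index version): when an explicit `memory` is passed, `chat_history` is typically only used to seed a *default* memory buffer and is otherwise ignored, which would explain the empty `self._agent.chat_history`. The suspected precedence, modelled in plain Python:

```python
def resolve_memory(memory=None, chat_history=None):
    """Model of the suspected precedence (not llama_index's actual code):
    an explicit memory object wins; chat_history only seeds a default."""
    if memory is not None:
        return memory  # chat_history is silently ignored here
    return {"chat_history": list(chat_history or [])}
```

If this holds, the history has to live inside the `ChatMemoryBuffer` you pass as `memory=`, and the separate `chat_history=` argument can be dropped.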
I am using FnRetrieverOpenAIAgent to get information about different query engines. The problem is that, when I run it with verbose=True, I can see that it makes a summary of the text that I am retrieving. I also tried ContextRetrieverOpenAIAgent, but since it also takes a retriever as a required argument, it behaves the same way.
import chromadb
from llama_index import ServiceContext, StorageContext, VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

remote_db = chromadb.HttpClient()
chroma_collection = remote_db.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context
)
=== Calling Function ===
Calling function: endoscopia with args: ¡Hola! Para sacar (............) ponibles?
Error chatbot: Expecting value: line 1 column 1 (char 0)
The "¡" character is causing me problems. Do you have any ideas how I can solve this?

Error: Tool with name query_engine_tool not found
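`Expecting value: line 1 column 1 (char 0)` is `json.loads` failing on the function-call arguments: the model emitted plain text starting with `¡` instead of a JSON object. Until the model is steered back to JSON (e.g. with a stricter system prompt), the call site can be guarded. A sketch — falling back to an `{"input": ...}` dict is an assumption about what your tool accepts:

```python
import json

def safe_parse_args(raw: str) -> dict:
    """Parse an OpenAI function-call arguments string; wrap plain text
    so non-JSON output doesn't crash the chatbot."""
    try:
        parsed = json.loads(raw)
        return parsed if isinstance(parsed, dict) else {"input": raw}
    except json.JSONDecodeError:
        return {"input": raw}
```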
self._user_id = user_id
self.project_id = index_dir
self._tools = {tool.metadata.name: tool for tool in tools}
_vector_store = FaissVectorStore.from_persist_dir(
    f"./llama_index/{index_dir}"
)
_storage_context = StorageContext.from_defaults(
    vector_store=_vector_store,
    persist_dir=f"./llama_index/{index_dir}",
)
_index = load_index_from_storage(storage_context=_storage_context)
_memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000, chat_history=chat_history
)
similarity_top_k = 7
_retriever = VectorIndexRetriever(
    index=_index, similarity_top_k=similarity_top_k
)
_query_engine = RetrieverQueryEngine(retriever=_retriever)
query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=_query_engine,
    name="vector_query_engine",  # illustrative; defaults to "query_engine_tool" if omitted
    description="Answers questions over the indexed documents.",
)
_all_tools = [query_engine_tool]
for tool in tools:
    _all_tools.append(tool)
tool_mapping = SimpleToolNodeMapping.from_objects(_all_tools)
obj_index = ObjectIndex.from_objects(
    _all_tools,
    tool_mapping,
    VectorStoreIndex,
)
self.token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode,
)
callback_manager = CallbackManager([self.token_counter])
self._agent = FnRetrieverOpenAIAgent.from_retriever(
    retriever=obj_index.as_retriever(),
    llm=llm,
    callback_manager=callback_manager,
    memory=_memory,
    system_prompt=TEXT_QA_SYSTEM_PROMPT.content,
    verbose=True,
)
self._chat_history = chat_history
self.chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=self.query_engine, verbose=True, memory=self.memory
)
Querying with: How are you?
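`Querying with: How are you?` shows the `CondenseQuestionChatEngine` rewriting every message, greetings included, into a standalone query before hitting the query engine. If that is unwanted, the condense prompt can be customised. A configuration sketch — `condense_question_prompt` is the parameter name of `CondenseQuestionChatEngine.from_defaults` in recent llama_index releases, so check your version:

```python
from llama_index.prompts import PromptTemplate

# Pass this as condense_question_prompt=... to
# CondenseQuestionChatEngine.from_defaults.
CUSTOM_CONDENSE_PROMPT = PromptTemplate(
    "Given the following conversation and a follow up message, rephrase "
    "the follow up as a standalone question. If it is a greeting or "
    "small talk, return it unchanged.\n"
    "<Chat History>\n{chat_history}\n"
    "<Follow Up Message>\n{question}\n"
    "<Standalone question>\n"
)
```

The `{chat_history}` and `{question}` template variables match the ones used by the engine's default condense prompt.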