Find answers from the community

erizvi
Joined September 25, 2024
Hi All, I'm having a hard time extracting dates from user queries. I'm using an auto-retriever to filter by metadata, with Qdrant as the vector DB and OpenAI GPT-4. The issue is that the dates need to be converted to integers (Unix timestamps), since Qdrant doesn't have a date type. In my VectorStoreInfo > MetadataInfo field description I've stated that the value needs to be a Unix timestamp, but the values returned by GPT-4 are not quite correctly converted. They are close but not exact; for example, it converts 04/01/2023 to 1682995200, which is some time in May.
7 comments
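For comparison, doing the conversion in code rather than asking GPT-4 to compute the timestamp avoids the drift entirely; a minimal sketch, assuming the dates are MM/DD/YYYY and should be interpreted as UTC midnight:

```python
from datetime import datetime, timezone

# Parse the date explicitly (assuming MM/DD/YYYY here) and pin it to UTC
# so the resulting timestamp doesn't shift with the local timezone.
dt = datetime.strptime("04/01/2023", "%m/%d/%Y").replace(tzinfo=timezone.utc)
print(int(dt.timestamp()))  # 1680307200 (April 1, 2023 00:00:00 UTC)
```

One option along these lines is to have the LLM emit the date as plain text and do the timestamp conversion deterministically before building the Qdrant filter.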
Can I make LLM workflows in llama-index, like having questions first be screened by an agent to ensure that the question contains all relevant information before querying the DB and synthesizing a response? Then, if the query passes inspection by the first agent, send the query to a second agent that selects yet another appropriate agent to route the query to?
2 comments
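The screen-then-route flow described above can be expressed as a small skeleton; this is a plain-Python sketch, where `screen_query` and `route_query` are placeholder functions (in practice each would be an LLM or agent call, not the keyword heuristics used here):

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    ok: bool
    missing: str = ""

# Agent 1 (placeholder): check the question is complete before touching the DB.
def screen_query(question: str) -> Verdict:
    if "release" not in question.lower():
        return Verdict(False, "which release are you asking about?")
    return Verdict(True)

# Agent 2 (placeholder): pick a downstream engine to route to.
def route_query(question: str) -> str:
    return "release-notes-engine"

def handle(question: str) -> str:
    verdict = screen_query(question)
    if not verdict.ok:
        return f"Please clarify: {verdict.missing}"
    return f"routed to {route_query(question)}"

print(handle("What changed?"))                 # asks for clarification
print(handle("What changed in release 2.1?"))  # routed to release-notes-engine
```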
Hi All, I'm trying to use an AutoRetriever with a VectorStoreIndex as a chat_engine with mode = "openai". The solution I implemented was to subclass the VectorStoreIndex class and override the as_retriever method to return a VectorIndexAutoRetriever instead of the default VectorIndexRetriever. The implementation just seems very convoluted, and it feels like there would probably be a better solution. Has anyone ever worked with a chat engine, vector index, and auto-retriever together? Any alternative suggestions?
6 comments
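The factory-method override itself can stay quite small; the sketch below shows the shape in plain Python, with `Index`, `DefaultRetriever`, and `AutoRetriever` as stand-ins for VectorStoreIndex, VectorIndexRetriever, and VectorIndexAutoRetriever (not the real llama_index classes):

```python
class DefaultRetriever:
    def __init__(self, index):
        self.index = index

class AutoRetriever(DefaultRetriever):
    def __init__(self, index, vector_store_info):
        super().__init__(index)
        self.vector_store_info = vector_store_info

class Index:
    def as_retriever(self, **kwargs):
        return DefaultRetriever(self)

class AutoRetrieverIndex(Index):
    """Index whose default retriever is the auto-retriever, so anything
    built on as_retriever (e.g. a chat engine) picks it up for free."""
    def __init__(self, vector_store_info):
        self.vector_store_info = vector_store_info

    def as_retriever(self, **kwargs):
        return AutoRetriever(self, self.vector_store_info)

idx = AutoRetrieverIndex(vector_store_info={"fields": ["release_date"]})
print(type(idx.as_retriever()).__name__)  # AutoRetriever
```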
Hi, does anyone know how LlamaIndex's chat engine works? Specifically, does it query the index for each user interaction and then use the configured LLM to produce a response, or does it figure out whether the answer to a new user query is contained in the chat history (including any context retrieved from the index previously)?
9 comments
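The answer depends on the chat mode, but the common "condense question" strategy does hit the index on every turn: the chat history plus the new message is first rewritten into one standalone question, and that question is what gets retrieved against. A rough sketch, with the LLM rewrite replaced by a placeholder string template:

```python
def condense(history: list[str], new_message: str) -> str:
    # Placeholder for the LLM call that rewrites history + follow-up
    # into a single self-contained question.
    return f"{new_message} (context: {'; '.join(history)})"

history = ["What is new in release 2.1?"]
standalone = condense(history, "Were there any bug fixes?")
# The standalone question is then sent to the index, so under this
# strategy retrieval happens on every interaction.
print(standalone)
```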
Hi there, I'm new to llama-index. I'm trying to use UnstructuredElementNodeParser to store base nodes and index nodes in Qdrant, but I have a question: how do I store both the base nodes and the mappings that I get from calling get_base_nodes_and_mappings in Qdrant?
1 comment
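One common pattern is to index only the base nodes in Qdrant and persist the id-to-node mappings separately (a docstore or a file), reloading them when building a RecursiveRetriever, since Qdrant payloads can't hold the node objects themselves. A minimal sketch of the persistence half, with a plain dict standing in for the real mapping:

```python
import pickle
import tempfile
from pathlib import Path

# Stand-in for the mapping returned by get_base_nodes_and_mappings;
# the real value maps index-node ids to node objects.
node_mappings = {"table-0": {"summary": "placeholder table summary"}}

# Persist alongside the vector store (Qdrant holds only the base nodes).
path = Path(tempfile.gettempdir()) / "node_mappings.pkl"
path.write_bytes(pickle.dumps(node_mappings))

# Reload later when constructing the recursive retriever.
restored = pickle.loads(path.read_bytes())
print(restored == node_mappings)  # True
```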
Hi all, is there any sample code that shows how a query pipeline can be integrated with an agent, preferably using AWS Bedrock?
2 comments
I'm working with Qdrant and LlamaIndex. I have a bunch of release-note documents in HTML format that I was able to parse using UnstructuredElementNodeParser and persist to the Qdrant vector DB via VectorStoreIndex.

However, the user can ask for all the changes between two dates, in which case I need to retrieve the relevant documents within that date range, so the process is twofold: 1) determine which relevant documents to query; 2) query those documents for more context.

I'm using chat_engine with the "openai" and "context" modes; neither seems to do this. It also seems that the vector store doesn't really store document names, which would be helpful since the document names contain their release dates.

Is there a way to do this type of hierarchical search? Would I be able to use RecursiveRetriever for this purpose, and if so, how would I configure it and use it with a chat engine?
3 comments
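For step 1 of the twofold process above, storing each document's release date as a Unix timestamp in its node metadata makes the range check a simple numeric filter (which Qdrant can evaluate as a gte/lte pair). A minimal sketch with plain dicts standing in for indexed documents:

```python
from datetime import datetime, timezone

def to_ts(d: str) -> int:
    # Release dates stored as Unix timestamps, since Qdrant payloads
    # have no native date type.
    return int(datetime.strptime(d, "%Y-%m-%d")
               .replace(tzinfo=timezone.utc).timestamp())

# Step 1: narrow to documents whose release date falls in the range.
docs = [
    {"name": "release_2023-01-15.html", "ts": to_ts("2023-01-15")},
    {"name": "release_2023-04-01.html", "ts": to_ts("2023-04-01")},
    {"name": "release_2023-09-30.html", "ts": to_ts("2023-09-30")},
]
lo, hi = to_ts("2023-03-01"), to_ts("2023-06-30")
in_range = [d["name"] for d in docs if lo <= d["ts"] <= hi]
print(in_range)  # ['release_2023-04-01.html']
```

In LlamaIndex terms, the same numeric comparison would be expressed as metadata filters on the retriever rather than a Python loop; the loop here just shows the filtering logic.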