Hi All, I'm trying to use an AutoRetriever with a VectorStoreIndex as a chat_engine with mode = "openai". The solution I implemented was to subclass the VectorStoreIndex class and override the as_retriever method to return a VectorIndexAutoRetriever instead of the default VectorIndexRetriever. The implementation just seems very convoluted, and it feels like there is probably a better solution. Has anyone ever worked with a chat engine, vector index, and auto retriever together? Any alternative suggestions?
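Roughly, the subclass looks like this (a simplified sketch, not my exact code; the class name is made up, and I'm assuming the metadata schema gets passed in as vector_store_info at construction time, with imports from the post-0.10 llama-index package layout):
Python
# Simplified sketch of the subclass approach (illustrative, not exact code).
from llama_index.core import VectorStoreIndex
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.vector_stores import VectorStoreInfo


class AutoRetrievingVectorStoreIndex(VectorStoreIndex):
    """VectorStoreIndex whose as_retriever() returns an auto retriever."""

    def __init__(self, *args, vector_store_info: VectorStoreInfo, **kwargs):
        super().__init__(*args, **kwargs)
        # Metadata schema the auto retriever can infer filters against.
        self._vector_store_info = vector_store_info

    def as_retriever(self, **kwargs):
        # Return a VectorIndexAutoRetriever instead of the default
        # VectorIndexRetriever that the base class would build.
        return VectorIndexAutoRetriever(
            self,
            vector_store_info=self._vector_store_info,
            **kwargs,
        )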
Hey erizvi, curious to know how that way worked out?
I think what you might want to try instead is the following:

  1. Construct a RetrieverQueryEngine using VectorIndexAutoRetriever as the retriever.
  2. Using the query_engine created in step 1, define a QueryEngineTool via QueryEngineTool.from_defaults(query_engine=query_engine).
  3. Construct your OpenAIAgent (which is what gets built when using chat_mode = "openai") through:
Python
OpenAIAgent.from_tools(
    tools=[query_engine_tool],
    llm=llm,
    **kwargs,
)
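Putting the three steps together, a rough sketch (assuming you already have an index built as a VectorStoreIndex and an OpenAI llm; the tool name, description, and metadata schema below are placeholders):
Python
# Rough end-to-end sketch of steps 1-3. Imports assume the post-0.10
# llama-index package layout; `index` and `llm` are assumed to exist.
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.retrievers import VectorIndexAutoRetriever
from llama_index.core.tools import QueryEngineTool
from llama_index.core.vector_stores import MetadataInfo, VectorStoreInfo

# Step 1: auto retriever wrapped in a RetrieverQueryEngine.
vector_store_info = VectorStoreInfo(
    content_info="description of your documents",  # placeholder
    metadata_info=[
        MetadataInfo(name="category", type="str", description="doc category"),
    ],
)
retriever = VectorIndexAutoRetriever(index, vector_store_info=vector_store_info)
query_engine = RetrieverQueryEngine.from_args(retriever, llm=llm)

# Step 2: expose the query engine as a tool.
query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="auto_retrieve",  # placeholder name
    description="Semantic search with auto-inferred metadata filters",
)

# Step 3: build the agent (the same thing chat_mode="openai" builds).
agent = OpenAIAgent.from_tools(tools=[query_engine_tool], llm=llm)
print(agent.chat("your question here"))
This way you stay entirely on documented APIs and skip the subclass.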
Hi andrei, it did work fine, but the implementation seemed very messy to me.
What you have suggested makes a lot of sense and seems more straightforward than my approach. I will try that. Thanks!
With my approach, I had to go through the llama-index library code and essentially copy it and override things (good for learning, but not ideal as a consumer of the library). Your approach can be implemented with just knowledge from the documentation (ideal as a consumer of the library).
Yes! Great, glad it makes sense. I was asking from the perspective of perhaps us adding the custom class to the library. I was just trying to gauge a t-shirt size for it given your recent experience. 🙂