Vector stores and Indexes

If using a vector store index, what is the advantage of LlamaIndex over LangChain vector stores?
22 comments
See this page here for different use cases/indexes to use:
https://gpt-index.readthedocs.io/en/latest/use_cases/queries.html

With LlamaIndex, you have full control over how your data is indexed, structured, and queried. You can also create more complex data structures, like wrapping many indexes with a top-level index. This is just scratching the surface of what's possible.

I recommend reading a few more docs pages. In addition to the one I linked above, here are a few more:
https://gpt-index.readthedocs.io/en/latest/how_to/index_structs/composability.html

https://gpt-index.readthedocs.io/en/latest/how_to/query/query_transformations.html
I was also thinking it would be interesting to have a virtual index that searches the web. This is because I want context from custom web searches as well as from indexes, and to be able to assign a priority or size, so it would make sense for a tree to have some indexes that are searches.
Or is there a better way to implement this?
Super interesting idea 🤔 this sounds like it could be added as a sort of data loader in LlamaHub?
For example, these loaders are all available to pull data from https://llamahub.ai/
but the data loaders just create documents to be indexed, no?
this would do the search only at query time
so it’s more like an index interface
the query function searches the web instead of the documents
returns the results and LlamaIndex does its thing of summarizing or trimming to size
it returns x number of results based on the same parameters that are used in a vector store to retrieve x number of most similar docs
only difference is that they come from live web search (I will use several custom searches for some of the indexes in the tree )
this avoids having to scrape and index some websites periodically to index
but maybe the results are not as good as doing semantic search on the scraped websites…
maybe in that case it could get x web results and do semantic search on those to get the top k, but that is another level
Interesting 🤔🤔 would be very cool if you made a PR for this actually! Would be similar to Bing chat almost
When I have already loaded my local data into Pinecone with GPTPineconeIndex, then the next time I want to query the index, do I need to load the data from Pinecone with PineconeReader first? Is there an easier way?
You can initialize the client and pass it in, along with an empty array for the documents/nodes!

Then it will connect with the existing documents in the pinecone index
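The "empty array" pattern from the answer above looks roughly like this. This is a sketch based on the older `GPTPineconeIndex` API from around when this thread was written (current LlamaIndex releases spell this differently, e.g. a `PineconeVectorStore` wrapped by a `VectorStoreIndex`); the API key, environment, and index name are placeholders, and actually running it requires `pinecone-client`, `llama_index`, and live Pinecone credentials:

```python
def connect_to_existing_pinecone_index():
    # Imports are inside the function because this sketch needs
    # pinecone-client, llama_index, and live credentials to actually run.
    import pinecone
    from llama_index import GPTPineconeIndex

    # Placeholders -- substitute your own key, environment, and index name.
    pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
    pinecone_index = pinecone.Index("my-index")

    # The key step: pass an empty document list, so nothing is re-ingested.
    # The index object just connects to the vectors already in Pinecone.
    index = GPTPineconeIndex([], pinecone_index=pinecone_index)
    return index
```

After that, `index.query(...)` retrieves against the existing vectors without re-loading anything through a reader.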
I see. In this way, will all the data in the Pinecone index be loaded, or only the matched data at query time? Thanks a lot.
I'm not sure, depends how pinecone manages and loads vectors.

At query time though, you can trust it to work I think haha
okay, let me have a try.
isn’t Bing search more like a LangChain agent with a search tool? it’s different from mixing document retrieval with custom web searches; what I suggest is more personalized for specific domains