Hi Jerry, thank you for your answer! Hmm, sounds great, but does this bring any major performance benefits? Currently, I have a vector database with chunked context embeddings and I am doing similarity search based on cosine distance with users' queries. Could you maybe describe the benefits in more detail? I am asking because this feature is part of a larger project, which is based on Node.js, and if I decide to incorporate GPT index I would have to create a separate Python microservice. Thank you!
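(For context, the retrieval setup described above boils down to something like the following sketch. `embed()` is a hypothetical stand-in for whatever embedding model and vector database are actually being used; the point is just the cosine-similarity top-k lookup.)

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical placeholder for a call to an embedding model.
    raise NotImplementedError

def top_k_chunks(query: str, chunks: list[str],
                 chunk_vectors: np.ndarray, k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are closest to the query by cosine similarity."""
    q = embed(query)
    # Cosine similarity = dot product of L2-normalized vectors.
    q = q / np.linalg.norm(q)
    m = chunk_vectors / np.linalg.norm(chunk_vectors, axis=1, keepdims=True)
    scores = m @ q
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]
```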
Hey @nnnn, thanks for the follow-up comment. There are a few considerations here:
  • What types of queries are you running? From my experience + others', embedding-top-k retrieval helps to provide answers to fact-based queries but not others (summarization queries, queries where you want to synthesize across heterogeneous data sources). In gpt index, the list index is good for summarization, and people have also tried defining graphs over their data to enforce that certain information is fed in to generate results (see the sketch after this list).
  • As a general note, if you're building an app that depends on data, you're going to want good data structure support, and as a Python package gpt index can provide that. TypeScript's data structure support doesn't seem as good from what I can tell.
  • We offer abstractions for response synthesis, so you don't have to worry about token limits when feeding retrieved context to the LLM.
  • We offer abstractions around vector store support that make it easy to swap out different vector stores if you want to test them out.
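(The summarization-vs-retrieval distinction above looks roughly like the sketch below. Exact class and method names changed across gpt index / llama_index releases, so treat this as an illustrative sketch against an early gpt_index version rather than the definitive API.)

```python
from gpt_index import GPTListIndex, GPTSimpleVectorIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()

# List index: synthesizes over all chunks (here with tree summarization),
# which suits summarization queries that need to see the whole corpus.
list_index = GPTListIndex(documents)
summary = list_index.query(
    "Summarize these documents.", response_mode="tree_summarize"
)

# Vector index: embedding-based top-k retrieval, which suits fact-based queries,
# similar to the cosine-similarity setup described in the question.
vector_index = GPTSimpleVectorIndex(documents)
answer = vector_index.query("What does the documentation say about rate limits?")
```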
let me know if that makes sense!