Find answers from the community

djl0.
Hey everyone. I'm looking to use the OpenAI Batch API (half the price) to generate embeddings in bulk. Is it feasible to pass precomputed embeddings directly to LlamaIndex, and could anyone point me in a good direction for this?
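
One approach, assuming the llama_index internals haven't changed: attach the precomputed vectors to nodes yourself, since nodes that already carry an embedding are generally skipped by the embedding step at index-build time. A minimal sketch, where batch_results is a hypothetical stand-in for the (text, vector) pairs parsed from the batch job's output file:

```python
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import TextNode

# Stand-in for the (text, embedding) pairs parsed from the
# batch job's output file (hypothetical data).
batch_results = [
    ("Example chunk of text.", [0.1, 0.2, 0.3]),
]

nodes = [
    TextNode(text=text, embedding=vector)
    for text, vector in batch_results
]

# Nodes with an embedding already set should not be re-embedded,
# so no embedding API calls happen at build time.
index = VectorStoreIndex(nodes=nodes)
```

Note that the query string still gets embedded at query time, so the configured embed model has to be the same one you used in the batch job, or the vector spaces won't line up.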
4 comments
djl0.

Vector store

The documentation says that vector stores don't use a docstore. Is there any way to list the source files from a vector store, similar to getting file_path from index.docstore.docs.values()?
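
One hedged way to get at this: with an external vector store the docstore is empty, but the node metadata (including file_path when documents were loaded with SimpleDirectoryReader) is usually stored on each record in the vector store itself, so you can enumerate source files from there. A sketch against Chroma directly, where the path, collection name, and metadata key all depend on your setup:

```python
import chromadb

# Placeholders: adjust the path and collection name to your deployment.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_collection("my_collection")

# Pull only the metadata for every record in the collection.
records = collection.get(include=["metadatas"])
file_paths = {
    md["file_path"]
    for md in records["metadatas"]
    if md and "file_path" in md
}
print(sorted(file_paths))
```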
2 comments
djl0.

Chroma

Hey everyone. I've been looking to switch from an index persisted to disk (the standard setup from the examples) to Chroma, which I assumed was the recommended route for more mature apps. But I've noticed that Chroma has been moved to legacy and none of the examples in the documentation have been updated. Is this a sign that I shouldn't treat Chroma as a good standard option anymore? Should I stick with the more basic persisting of the index?

thanks
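
For context, the Chroma wiring under discussion looks roughly like this; a sketch assuming the llama-index-vector-stores-chroma integration package is installed, with placeholder paths and collection name:

```python
import chromadb
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Persistent Chroma client and a collection to hold the index data.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("quickstart")

# Point LlamaIndex's storage at the Chroma collection.
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```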
6 comments
I have a general question about how it works. When I started using LlamaIndex a few months ago, we had to pass a service context to VectorStoreIndex.from_documents. I would pass the same OpenAI model (let's say gpt-3.5-turbo) there as I would when querying.

My question is: is that first piece the same as the embedding model? What I'm seeing as the default embed model seems to be a different type of model from gpt-3.5-turbo, gpt-4o, etc., so what do I need to know in terms of compatibility? Say I want to leverage gpt-4o: can I use the default embed model and only pass gpt-4o into the query engine?

TIA!
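
For anyone landing here later: the LLM and the embedding model are separate, independently configured components, so pairing the default embed model with gpt-4o at query time should be fine. The one hard compatibility rule is that the embed model used at query time must match the one used to build the index. A sketch assuming the modular llama-index-llms-openai and llama-index-embeddings-openai packages, with illustrative model names:

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# Embedding model: turns text and queries into vectors.
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
# LLM: only synthesizes the final answer from retrieved chunks.
Settings.llm = OpenAI(model="gpt-4o")

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)  # uses embed_model
query_engine = index.as_query_engine()              # retrieval uses embed_model, answer uses llm
response = query_engine.query("your question here")
```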
5 comments