Rust-Ninja-Sabi
Joined September 25, 2024
Looking for an example with Chroma for managing documents.

Retrieving Document 1 and Generating Embedding: The text of Document 1 is retrieved from ChromaDB and an embedding is generated using LlamaIndex.

ChatGPT Interaction: A request is sent to the ChatGPT API to obtain a summary of Document 1.

Adding Document 2: The text and embedding of Document 2 are added to ChromaDB.

ChatGPT Interaction: An optional request is sent to the ChatGPT API to generate a comparison between Document 1 and Document 2.

Updating Document 1: The text of Document 1 is updated, a new embedding is generated, and it is stored in ChromaDB.

ChatGPT Interaction: An optional request is sent to the ChatGPT API to summarize the changes made to Document 1.

Deleting Document 2: Document 2 is deleted from ChromaDB.
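The seven steps above can be sketched end-to-end. In this minimal sketch a plain dict stands in for the ChromaDB collection so the example runs anywhere; the equivalent `chromadb` calls are shown in comments. The embedding function is a toy placeholder for something like `OpenAIEmbedding().get_text_embedding()`, and the ChatGPT calls are indicated but not executed, since they need an API key. All document texts here are made-up examples.

```python
def embed(text: str) -> list[float]:
    # placeholder embedding; real code would call an embedding model
    return [float(len(text)), float(sum(map(ord, text)) % 997)]

store: dict[str, dict] = {}  # stand-in for: collection = client.create_collection("docs")

# 1. Add Document 1 together with its embedding
doc1 = "Rust is a systems programming language."
store["doc1"] = {"text": doc1, "embedding": embed(doc1)}
# chromadb: collection.add(ids=["doc1"], documents=[doc1], embeddings=[embed(doc1)])

# Retrieve Document 1's text back from the store
retrieved = store["doc1"]["text"]
# chromadb: collection.get(ids=["doc1"])["documents"][0]

# 2. ChatGPT: summarize Document 1, e.g.
# openai_client.chat.completions.create(model="gpt-4o-mini",
#     messages=[{"role": "user", "content": f"Summarize:\n{retrieved}"}])

# 3. Add Document 2
doc2 = "Chroma is an open-source vector database."
store["doc2"] = {"text": doc2, "embedding": embed(doc2)}

# 4. ChatGPT (optional): compare Document 1 and Document 2 -- same call pattern as step 2

# 5. Update Document 1 with new text and a fresh embedding
doc1 = "Rust is a fast, memory-safe systems language."
store["doc1"] = {"text": doc1, "embedding": embed(doc1)}
# chromadb: collection.update(ids=["doc1"], documents=[doc1], embeddings=[embed(doc1)])

# 6. ChatGPT (optional): summarize the changes to Document 1 -- same call pattern

# 7. Delete Document 2
del store["doc2"]
# chromadb: collection.delete(ids=["doc2"])

print(sorted(store))  # ['doc1']
```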
I want to see all the prompts that LlamaIndex sends and the answers it receives. Any examples?
How to use Ollama for Embeddings?
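A minimal sketch, assuming a local Ollama server with an embedding model pulled (e.g. `ollama pull nomic-embed-text`). The executable part only builds the JSON payload for Ollama's `/api/embeddings` endpoint; the LlamaIndex wrapper (`OllamaEmbedding`, from the `llama-index-embeddings-ollama` package) is shown as comments so the snippet runs without extra dependencies.

```python
import json

# Payload for: POST http://localhost:11434/api/embeddings
payload = json.dumps({"model": "nomic-embed-text", "prompt": "hello world"})

# In LlamaIndex, the same server is wrapped by OllamaEmbedding:
# from llama_index.embeddings.ollama import OllamaEmbedding
# from llama_index.core import Settings
# Settings.embed_model = OllamaEmbedding(
#     model_name="nomic-embed-text",
#     base_url="http://localhost:11434",
# )
# vector = Settings.embed_model.get_text_embedding("hello world")
```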
I'm using index.as_chat_engine(..). It doesn't use only data from my vector store; it also uses data from elsewhere. Can I avoid that? I use chat_mode="condense_plus_context" and this prompt: "\nInstruction: Use the previous chat history, or the context above, to interact and help the user. Don't use any other information." But it does not work correctly.
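One lever for this: the "condense_plus_context" mode accepts a custom `context_prompt`, which must contain the `{context_str}` placeholder that the retrieved chunks are injected into. A stricter prompt (sketch below; the exact wording is an assumption and should be tuned for your model) can reduce, though not fully prevent, answers drawn from the LLM's own training data.

```python
# Stricter context prompt for CondensePlusContextChatEngine; {context_str}
# is filled in by LlamaIndex with the retrieved chunks.
context_prompt = (
    "The following is context from the user's documents:\n"
    "{context_str}\n"
    "Instruction: Answer ONLY from the chat history and the context above. "
    "If the answer is not in the context, say that you don't know. "
    "Do not use any outside knowledge."
)

# Usage (assumes an existing `index`; shown as comments so the snippet is standalone):
# chat_engine = index.as_chat_engine(
#     chat_mode="condense_plus_context",
#     context_prompt=context_prompt,
# )
```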
Is it possible to query OpenAI with LlamaIndex directly, without using RAG?
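Yes: LlamaIndex's LLM wrappers can be called directly, with no index or retriever involved. A sketch, assuming the `llama-index-llms-openai` package is installed and `OPENAI_API_KEY` is set; the API calls are shown as comments since they need a key, and only the model-name variable is executed here.

```python
model = "gpt-4o-mini"  # any OpenAI chat model name

# from llama_index.llms.openai import OpenAI
# llm = OpenAI(model=model)
# response = llm.complete("What is Rust?")  # single-shot completion, no RAG
# print(response.text)
```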