I'm looking for an example of managing documents with Chroma. The intended workflow:
Retrieving Document 1 and generating an embedding: The text of Document 1 is retrieved from ChromaDB and an embedding is generated using LlamaIndex.
ChatGPT interaction: A request is sent to the ChatGPT API to obtain a summary of Document 1.
Adding Document 2: The text and embedding of Document 2 are added to ChromaDB.
ChatGPT Interaction: An optional request is sent to the ChatGPT API to generate a comparison between Document 1 and Document 2.
Updating Document 1: The text of Document 1 is updated, a new embedding is generated, and it is stored in ChromaDB.
ChatGPT Interaction: An optional request is sent to the ChatGPT API to summarize the changes made to Document 1.
Deleting Document 2: Document 2 is deleted from ChromaDB.
I'm using index.as_chat_engine(...). It does not only use data from my vector store; it also pulls in knowledge from elsewhere (the model's own training data). Can I avoid that? I use chat_mode="condense_plus_context" with this prompt: "Instruction: Use the previous chat history, or the context above, to interact and help the user. Don't use any other information." But it does not work correctly.
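One approach is to pass a stricter context_prompt that tells the model to refuse when the answer is not in the retrieved context. A sketch, assuming `index` is an existing LlamaIndex VectorStoreIndex and a llama-index version whose condense_plus_context mode accepts a custom `context_prompt` (the template must contain the `{context_str}` placeholder):

```python
# Strict template for condense_plus_context: the "{context_str}" placeholder
# is filled with the retrieved documents at query time.
STRICT_CONTEXT_PROMPT = (
    "You are a helpful assistant.\n"
    "Here are the relevant documents for the context:\n"
    "{context_str}\n"
    "Instruction: Answer using ONLY the context above and the previous chat "
    "history. If the answer is not contained there, reply: \"I don't know "
    "based on the provided documents.\" Do not use any outside knowledge."
)

def build_strict_chat_engine(index):
    """Return a condense_plus_context chat engine restricted to retrieved context."""
    return index.as_chat_engine(
        chat_mode="condense_plus_context",
        context_prompt=STRICT_CONTEXT_PROMPT,
    )
```

Be aware that no prompt can hard-guarantee the model ignores its training data; an explicit refusal instruction like the one above, combined with temperature=0 on the underlying LLM, reduces the leakage considerably but does not eliminate it.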