Find answers from the community

Updated 6 months ago

Langchain

At a glance

The community member reported that calling the db.save_local("faiss_index") function deleted their old vectors and stored only the newly created ones; they want to save new vectors in append mode so that both the old and new vectors remain usable. The comments suggest looking for an insert function or reading the documentation. The poster added that they are a beginner in Python and are confused by their current project, which integrates ChatGPT with their own data and includes functionality such as memory to store the current conversation, and they asked whether LlamaIndex can handle a large amount of data. Another community member confirmed that it can and shared links to the LlamaIndex documentation on concepts and on agents, the latter being relevant to the conversation functionality.

Useful resources
Hi, when I used the db.save_local("faiss_index") function to save my vectors, it deleted my old vectors and stored only the newly created ones, but I want to save the new vectors in append mode so that I can utilise both my old and new vectors. Can someone assist me with this?
code: using langchain

import sys

from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

def StoreNewData():
    # extract_text_from_documents_in_directory and UPLOAD_DIR are
    # defined elsewhere in the poster's project.
    all_text_data = extract_text_from_documents_in_directory(UPLOAD_DIR)
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=200,
        length_function=len,
    )
    chunks = text_splitter.split_text(text=all_text_data)

    embeddings = OpenAIEmbeddings()
    # FAISS.from_texts builds a brand-new index, so save_local
    # overwrites whatever was previously stored at "faiss_index".
    db = FAISS.from_texts(chunks, embedding=embeddings)
    db.save_local("faiss_index")

StoreNewData()
sys.exit()
5 comments
Hmm, yeah, not 100% sure how it works for langchain stuff (this is a llama index discord lol)

Maybe there's some kind of insert function? You might have to read some docs or source code
Actually, I am a beginner in Python, and I am confused by my current project,
which integrates ChatGPT with my own data and trains it with functionality like memory to store the current conversation.
Is it possible to do this using LlamaIndex with a large amount of data?
Yup, this is possible with llama index! I recommend reading some of our docs
https://gpt-index.readthedocs.io/en/latest/getting_started/concepts.html

And then also the agents will help with the conversation stuff
https://gpt-index.readthedocs.io/en/latest/core_modules/agent_modules/agents/root.html