Langchain vector store

Hello people! Is it possible to build an index using LlamaIndex and then use it as (or convert it to) a vector store for use by LangChain? I know there are some parts that are backwards compatible, but I don't know if this is possible! Thanks in advance
Hmmm I don't think this is really possible.

But you can definitely use LlamaIndex as a custom tool inside LangChain agents!
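
For example, something along these lines might work (a minimal sketch; the tool name and docs path are illustrative, and it assumes the older GPTSimpleVectorIndex-era llama_index API):
Plain Text
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Build the index from a local docs directory (path is illustrative)
documents = SimpleDirectoryReader("docs").load_data()
index = GPTSimpleVectorIndex.from_documents(documents)

# Wrap the index's query interface as a LangChain tool
tools = [
    Tool(
        name="DocsIndex",
        func=lambda q: str(index.query(q)),
        description="Useful for answering questions about the documentation.",
    )
]

agent = initialize_agent(
    tools, OpenAI(temperature=0), agent="zero-shot-react-description", verbose=True
)
agent.run("How do I configure the loader?")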
Interesting, interesting! Okay, well maybe you could help if I give you an example of my current setup?

How I'm creating my vector store:
Plain Text
from langchain.document_loaders import ReadTheDocsLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

def ingest_docs():
    # Load the scraped ReadTheDocs pages from the local "docs" directory
    loader = ReadTheDocsLoader("docs")
    raw_documents = loader.load()
    # Split into overlapping 1000-character chunks for retrieval
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=200,
    )
    documents = text_splitter.split_documents(raw_documents)
    # Embed the chunks and index them in an in-memory FAISS store
    embeddings = OpenAIEmbeddings()
    vectorstore = FAISS.from_documents(documents, embeddings)
    return vectorstore
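
(The app route below needs this store at startup; one common way to hand it over is a pickle-based round trip. A sketch, with an illustrative file name:)
Plain Text
import pickle

# Persist the FAISS store so the web app can load it at startup
vectorstore = ingest_docs()
with open("vectorstore.pkl", "wb") as f:
    pickle.dump(vectorstore, f)

# Later, in the app's startup hook
with open("vectorstore.pkl", "rb") as f:
    vectorstore = pickle.load(f)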

How I'm setting up my agent:
Plain Text
............
(in app route)
await websocket.accept()
question_handler = QuestionGenCallbackHandler(websocket)
stream_handler = StreamingLLMCallbackHandler(websocket)
qa_chain = get_chain(vectorstore, question_handler, stream_handler)
...........

from langchain.callbacks.base import AsyncCallbackManager
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.vectorstores.base import VectorStore

def get_chain(
    vectorstore: VectorStore, question_handler, stream_handler
) -> ConversationalRetrievalChain:
    manager = AsyncCallbackManager([])
    question_manager = AsyncCallbackManager([question_handler])
    stream_manager = AsyncCallbackManager([stream_handler])

    # Non-streaming LLM that condenses the follow-up question into a standalone one
    question_gen_llm = OpenAI(
        temperature=0,
        verbose=True,
        callback_manager=question_manager,
    )
    # Streaming LLM that answers over the retrieved documents
    streaming_llm = OpenAI(
        streaming=True,
        callback_manager=stream_manager,
        verbose=True,
        temperature=0,
    )

    question_generator = LLMChain(
        llm=question_gen_llm, prompt=CONDENSE_QUESTION_PROMPT, callback_manager=manager
    )
    doc_chain = load_qa_chain(
        streaming_llm, chain_type="stuff", prompt=QA_PROMPT, callback_manager=manager
    )

    qa = ConversationalRetrievalChain(
        retriever=vectorstore.as_retriever(),
        combine_docs_chain=doc_chain,
        question_generator=question_generator,
        callback_manager=manager,
    )
    return qa
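
(For context, a chain built this way is typically driven from the websocket receive loop, roughly like this sketch; the chat_history handling and variable names are assumptions:)
Plain Text
# Inside the websocket receive loop (sketch; names are illustrative)
chat_history = []
question = await websocket.receive_text()
result = await qa_chain.acall(
    {"question": question, "chat_history": chat_history}
)
chat_history.append((question, result["answer"]))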


How would I then integrate an index created like this:
Plain Text
GPTSimpleVectorIndex.from_documents(docs)


Thanks in advance 🙏
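
One possible bridge, building on the custom-tool idea above: wrap the LlamaIndex index behind LangChain's retriever interface so it can stand in for vectorstore.as_retriever() in get_chain. A minimal sketch, with a hypothetical LlamaIndexRetriever adapter, assuming the older llama_index response API where source nodes expose source_text:
Plain Text
from langchain.schema import BaseRetriever, Document

class LlamaIndexRetriever(BaseRetriever):
    """Hypothetical adapter exposing a LlamaIndex index as a LangChain retriever."""

    def __init__(self, index):
        self.index = index

    def get_relevant_documents(self, query: str):
        # Query the index and convert its source nodes into LangChain Documents
        response = self.index.query(query)
        return [
            Document(page_content=node.source_text)
            for node in response.source_nodes
        ]

    async def aget_relevant_documents(self, query: str):
        return self.get_relevant_documents(query)

# Then, in get_chain:
# qa = ConversationalRetrievalChain(retriever=LlamaIndexRetriever(index), ...)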