----------------------
Based on the information provided in the knowledge sources, you can use the `QueryFusionRetriever` with multiple retrievers, such as a `BM25Retriever` and a retriever built from a `VectorStoreIndex` backed by a vector store. Here is an example of how you can set it up:
```python
import chromadb
import nest_asyncio

from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.core.storage.docstore import SimpleDocumentStore
from llama_index.retrievers.bm25 import BM25Retriever
from llama_index.vector_stores.chroma import ChromaVectorStore

# QueryFusionRetriever can run retrievers asynchronously; nest_asyncio
# allows that inside an already-running event loop (e.g. a notebook).
nest_asyncio.apply()

# Keep the nodes in a docstore so BM25Retriever can read them back.
docstore = SimpleDocumentStore()
docstore.add_documents(nodes)  # `nodes` are your parsed document nodes

# Persist the dense vectors in Chroma.
db = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = db.get_or_create_collection("dense_vectors")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

storage_context = StorageContext.from_defaults(
    docstore=docstore, vector_store=vector_store
)
index = VectorStoreIndex(nodes=nodes, storage_context=storage_context)

# Fuse results from the dense (vector) and sparse (BM25) retrievers.
retriever = QueryFusionRetriever(
    [
        index.as_retriever(similarity_top_k=2),
        BM25Retriever.from_defaults(
            docstore=index.docstore, similarity_top_k=2
        ),
    ],
    num_queries=1,  # disable query generation; use the original query only
    use_async=True,
)
```
(Source: GitHub)
For the `CondensePlusContextChatEngine`, you can initialize it from default parameters with your fused retriever as shown below:
```python
from llama_index.core.chat_engine import CondensePlusContextChatEngine

chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=retriever,
    # other parameters as needed
)
```
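`CondensePlusContextChatEngine` first condenses the chat history plus the new message into a standalone question, then retrieves context for that question and answers from it. A minimal, self-contained sketch of that two-step flow, with stub functions standing in for the LLM and the retriever (everything here is illustrative, not the library's internals):

```python
def condense(history, message):
    # Stub: a real engine asks an LLM to rewrite the message as a
    # standalone question using the chat history.
    if history and message.lower().startswith(("it", "that", "they")):
        return f"{history[-1][0]} ({message})"
    return message

def chat(history, message, retrieve, answer):
    question = condense(history, message)  # step 1: condense to standalone
    context = retrieve(question)           # step 2: retrieve context for it
    reply = answer(question, context)      # step 3: answer using the context
    history.append((message, reply))
    return reply

# Toy retriever and answerer over a hypothetical two-document corpus.
docs = {"chroma": "Chroma stores dense vectors.", "bm25": "BM25 is sparse."}
retrieve = lambda q: [t for k, t in docs.items() if k in q.lower()]
answer = lambda q, ctx: " ".join(ctx) if ctx else "No context found."

history = []
print(chat(history, "What is Chroma?", retrieve, answer))
# → Chroma stores dense vectors.
```

In the real engine, `chat_engine.chat("...")` performs the same condense-then-retrieve loop, with your `QueryFusionRetriever` supplying the context.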
(Source: LlamaIndex Docs)