
GPU

Hi, I'm using a custom embedding. When updating the index, is there a way to make it use the GPU?
Because I have a large dataset, and updating the embeddings in the document store on the CPU takes really long:
# Index the documents using LlamaIndex and the custom embedding
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
)
What embedding model are you using? Most use the GPU automatically if you have CUDA installed.
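For example, if it's a HuggingFace/sentence-transformers model, you can also pin it to the GPU explicitly when building the service context. A rough sketch, assuming your version's HuggingFaceEmbedding accepts a device argument (the model name below is just a placeholder):

from llama_index import ServiceContext
from llama_index.embeddings import HuggingFaceEmbedding

# Placeholder model name; swap in the custom embedding model you are actually using
embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    device="cuda",  # run embedding inference on the GPU instead of the CPU
)

service_context = ServiceContext.from_defaults(embed_model=embed_model)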
Yeah, I fixed that issue. I have some questions about single-step query decomposition, if that's OK with you @Logan M. If I have one index, not multiple, how does the decomposition work? What controls how the query is decomposed?
Because it generated only one query:
Current query: The manager of a SARL to whom I provided services during its creation refuses to pay me, on the grounds that the invoiced work predates the company's incorporation. Does he have the right to refuse?
New query: What is a SARL?
In the output, is that tied to the number of indexes, or is it something you can configure?
And I want to ask whether the default decompose prompt can be modified or passed as an argument.
Are you using the sub question query engine?

It just shows the LLM the initial query and the name/description of the sub-indexes, and asks it to write sub-queries and direct them to an appropriate index.
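Roughly, the setup looks like this. A sketch assuming a single vector index wrapped as a tool (the tool name and description are placeholders):

from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.tools import QueryEngineTool, ToolMetadata

# The LLM sees each tool's name/description when deciding which sub-questions to write
tools = [
    QueryEngineTool(
        query_engine=index.as_query_engine(service_context=service_context),
        metadata=ToolMetadata(
            name="legal_docs",  # placeholder name
            description="Answers questions about company formation and invoicing",  # placeholder
        ),
    ),
]

sub_question_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=tools,
    service_context=service_context,
)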
Thank you @Logan M.
I'm using the tutorial provided for single-step query decomposition:
from llama_index.indices.query.query_transform.base import (
    DecomposeQueryTransform,
)

decompose_transform = DecomposeQueryTransform(
    service_context.llm_predictor,
    verbose=True,
)

from llama_index.query_engine.transform_query_engine import TransformQueryEngine

query_engine = index.as_query_engine(service_context=service_context)
transform_metadata = {"index_summary": index.index_struct.summary}
transformed_query_engine = TransformQueryEngine(
    query_engine,
    decompose_transform,
    transform_metadata=transform_metadata,
)
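On the prompt question: DecomposeQueryTransform can also take a custom prompt when you construct it. A rough sketch, assuming the decompose_query_prompt argument and the {context_str}/{query_str} variables used by the default prompt (worth verifying against your version):

from llama_index.prompts import PromptTemplate
from llama_index.indices.query.query_transform.base import DecomposeQueryTransform

# Hypothetical custom prompt; {context_str} is assumed to receive the index summary,
# {query_str} the original question
custom_decompose_prompt = PromptTemplate(
    "The original question is: {query_str}\n"
    "The knowledge source is described as: {context_str}\n"
    "Write one simpler sub-question that this source can answer.\n"
)

decompose_transform = DecomposeQueryTransform(
    service_context.llm_predictor,
    decompose_query_prompt=custom_decompose_prompt,
    verbose=True,
)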