Find answers from the community

balanp
Offline, last seen 3 months ago
Joined September 25, 2024
How do I use a pretrained LLM from Hugging Face?
10 comments
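For reference, a minimal sketch of loading a pretrained Hugging Face LLM in LlamaIndex via the llama-index-llms-huggingface package; the model name is illustrative:

```python
from llama_index.llms.huggingface import HuggingFaceLLM

# Downloads the model and tokenizer from the Hugging Face Hub on first use.
llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",    # illustrative model
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    context_window=3900,
    max_new_tokens=256,
    device_map="auto",
)
print(llm.complete("What is retrieval-augmented generation?"))
```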
If my fine-tuned model involves an extra step, say mean pooling of the model output, and it is saved in a Hugging Face repository, how can I use this model in, say, the Sub Question Query Engine?
2 comments
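Assuming the fine-tuned model here is an embedding model, one way to add a custom step like mean pooling is to wrap the Hugging Face checkpoint in a BaseEmbedding subclass; the class name is hypothetical and the pooling shown is a standard attention-mask mean:

```python
from typing import List

import torch
from transformers import AutoModel, AutoTokenizer

from llama_index.core.bridge.pydantic import PrivateAttr
from llama_index.core.embeddings import BaseEmbedding


class MeanPooledEmbedding(BaseEmbedding):
    """Hypothetical wrapper that mean-pools token embeddings from a HF repo."""

    _model: AutoModel = PrivateAttr()
    _tokenizer: AutoTokenizer = PrivateAttr()

    def __init__(self, model_name: str, **kwargs) -> None:
        super().__init__(**kwargs)
        self._tokenizer = AutoTokenizer.from_pretrained(model_name)
        self._model = AutoModel.from_pretrained(model_name)

    def _embed(self, text: str) -> List[float]:
        inputs = self._tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = self._model(**inputs).last_hidden_state  # (1, seq, dim)
        mask = inputs["attention_mask"].unsqueeze(-1)
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens
        return pooled[0].tolist()

    def _get_query_embedding(self, query: str) -> List[float]:
        return self._embed(query)

    def _get_text_embedding(self, text: str) -> List[float]:
        return self._embed(text)

    async def _aget_query_embedding(self, query: str) -> List[float]:
        return self._embed(query)
```

Setting `Settings.embed_model = MeanPooledEmbedding(model_name="...")` then makes every index underneath a SubQuestionQueryEngine use the custom pooling for retrieval.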
For the purpose of fine-tuning embeddings, my TextNode has both metadata and text. So, when preparing the training and validation datasets for fine-tuning, will metadata+text pair with the generated questions to form a datapoint, or will it be text only?
36 comments
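Whether the dataset builder includes metadata can depend on the library version, so the safest check is to inspect what content a node actually exposes under each metadata mode (a minimal sketch):

```python
from llama_index.core.schema import MetadataMode, TextNode

node = TextNode(
    text="LlamaIndex supports embedding fine-tuning.",
    metadata={"source": "docs", "section": "fine-tuning"},
)

# Content as the embedding model sees it (metadata + text by default):
print(node.get_content(metadata_mode=MetadataMode.EMBED))
# Text only, no metadata:
print(node.get_content(metadata_mode=MetadataMode.NONE))
```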
For fine-tuning my embedding model I need to send custom prompts describing the data contained in the nodes and the kind of questions I want generated. How can I do this using generate_qa_embedding_pairs?
17 comments
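In recent LlamaIndex versions, generate_qa_embedding_pairs accepts a qa_generate_prompt_tmpl argument; the template must keep the {context_str} and {num_questions_per_chunk} placeholders. A sketch with an illustrative domain instruction, assuming `nodes` comes from your ingestion pipeline and an OpenAI key is configured:

```python
from llama_index.finetuning import generate_qa_embedding_pairs
from llama_index.llms.openai import OpenAI

# Custom template: {context_str} and {num_questions_per_chunk} are required;
# the persona and instructions are illustrative.
CUSTOM_QA_TMPL = """\
Context information is below.

---------------------
{context_str}
---------------------

You are an examiner for clinical documentation. Using only the context
above, generate {num_questions_per_chunk} questions a clinician might
ask. Questions only, no answers, one per line.
"""

# `nodes` is your list of TextNode objects from ingestion.
dataset = generate_qa_embedding_pairs(
    nodes=nodes,
    llm=OpenAI(model="gpt-4o-mini"),
    qa_generate_prompt_tmpl=CUSTOM_QA_TMPL,
    num_questions_per_chunk=2,
)
dataset.save_json("train_dataset.json")
```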
I have a fine-tuned embedding model that is different from my fine-tuned LLM. How can I connect the two in SubQuestionQueryEngine so that my index retrieves relevant documents rather than embeddings and sends those documents to the LLM?
11 comments
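For context: the embedding model is only used to retrieve the top nodes, and it is the retrieved node text (not the vectors) that gets inserted into the LLM prompt, so the two models never need to share an embedding space. A sketch wiring both in through Settings; the model names are illustrative:

```python
from llama_index.core import Document, Settings, VectorStoreIndex
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

Settings.embed_model = HuggingFaceEmbedding(model_name="me/my-finetuned-embed")
Settings.llm = HuggingFaceLLM(model_name="me/my-finetuned-llm")

# The index retrieves with the fine-tuned embeddings; answers come from the LLM.
index = VectorStoreIndex.from_documents([Document(text="Some domain text.")])
tool = QueryEngineTool(
    query_engine=index.as_query_engine(),
    metadata=ToolMetadata(name="docs", description="Domain documentation"),
)
engine = SubQuestionQueryEngine.from_defaults(query_engine_tools=[tool])
response = engine.query("Compare X and Y across the documentation.")
```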
@kapa.ai I am getting the error "You need to use a chat model in order the use role blocks like with user():! Perhaps you meant to use the TransformersChat class?" How can I deal with this error? I am using the llama-index-question-gen-guidance library.
2 comments
I am installing llama-index-question-gen-guidance in order to use guidance with an open-source LLM (https://docs.llamaindex.ai/en/stable/examples/output_parsing/guidance_sub_question.html), but upon executing from llama_index.question_gen.guidance import GuidanceQuestionGenerator I get the error ImportError: cannot import name 'LLM' from 'llama_index.core.llms'. Installing llama-index-question-gen-guidance also breaks my llama_index installation. Any advice?
3 comments
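The linked example targets the pre-0.1 guidance API, which still shipped a guidance.llms module; in newer guidance releases that module is gone, and version skew between llama-index-question-gen-guidance and the installed llama-index core can surface as exactly this kind of ImportError. For reference, the usage shown on the linked page assumes the older guidance (and an OpenAI model rather than an open-source one):

```python
from guidance.llms import OpenAI as GuidanceOpenAI  # pre-0.1 guidance API
from llama_index.question_gen.guidance import GuidanceQuestionGenerator

question_gen = GuidanceQuestionGenerator.from_defaults(
    guidance_llm=GuidanceOpenAI("text-davinci-003"),
    verbose=False,
)
```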
@kapa.ai I am getting a "No valid JSON found in output" ValueError in SubQuestionOutputParser.parse(self, output). What might be the reason?
2 comments
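For context, the parser raises this when it cannot locate a JSON payload in the raw LLM output. The default sub-question prompt asks the model to respond with roughly the shape sketched below (field values are illustrative), so an LLM that chats around, wraps, or truncates the JSON triggers the error; a stronger or chat-tuned model, or a structured question generator, is the usual remedy:

```python
# Roughly what SubQuestionOutputParser.parse() expects to find in the
# LLM output (values illustrative):
expected_output = {
    "items": [
        {"sub_question": "What was Uber's revenue growth?", "tool_name": "uber_10k"},
        {"sub_question": "What was Lyft's revenue growth?", "tool_name": "lyft_10k"},
    ]
}
```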
@kapa.ai How can I run a Hugging Face embedding model across multiple GPUs?
2 comments
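LlamaIndex's HuggingFaceEmbedding targets a single device; one common workaround is to embed with sentence-transformers' multi-process pool across several GPUs and hand the vectors to LlamaIndex yourself (a sketch; the model name and device list are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # illustrative model
sentences = ["first passage", "second passage"] * 1000

# Spawns one worker process per listed GPU and shards the batch across them.
pool = model.start_multi_process_pool(target_devices=["cuda:0", "cuda:1"])
embeddings = model.encode_multi_process(sentences, pool)
model.stop_multi_process_pool(pool)
print(embeddings.shape)  # (num_sentences, embedding_dim)
```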
@kapa.ai Where can I find the VectorStore module source code? Is it imported from llama_index.core? I need to adjust context_window.
2 comments
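Worth noting: context_window is a property of the LLM (or the global Settings), not of the vector store, so there is nothing to patch in the VectorStore source. A sketch, assuming a Hugging Face LLM; the model name is illustrative:

```python
from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceLLM

# Either set it on the LLM itself...
Settings.llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",  # illustrative
    context_window=3900,
)
# ...or override it globally for prompt budgeting.
Settings.context_window = 3900
```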
@kapa.ai I am getting the following error when extracting the top 5 results from an index query engine for a query:
ValueError: Calculated available context size -42 was not non-negative.
2 comments
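For context, this error usually means the assembled prompt (system text plus retrieved chunks) already exceeds the configured context_window, leaving a negative token budget for the answer. Raising the window or retrieving fewer/smaller chunks typically resolves it (a sketch; values illustrative):

```python
from llama_index.core import Document, Settings, VectorStoreIndex

# Make sure the window exceeds prompt size plus reserved output tokens.
Settings.context_window = 4096
Settings.num_output = 256

index = VectorStoreIndex.from_documents([Document(text="Some domain text.")])
# Fewer retrieved chunks shrink the prompt if the window cannot grow.
query_engine = index.as_query_engine(similarity_top_k=3)
```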