Find answers from the community

MarioZ
Joined September 25, 2024
I would like to use a model from a local directory, Mistral for instance. According to this source: https://gpt-index.readthedocs.io/en/latest/examples/llm/llama_2_llama_cpp.html I can load it from a URL or a path. LlamaCPP downloads the model into /tmp/llama_index/models/. If I pass a model path, will an already-downloaded model be recognized so it doesn't have to be downloaded again? Secondly, how do I change the download location from /tmp to a directory of my choice?

from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    # You can pass in the URL to a GGML model to download it automatically
    model_url=model_url,
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path=None,
)
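On the second question: one option, assuming your llama_index version resolves its cache directory from the LLAMA_INDEX_CACHE_DIR environment variable (it may not in every release), is to point that variable at the directory you want before the model is downloaded:

import os

# assumption: llama_index consults this variable when choosing its cache
# directory, so downloads land here instead of /tmp/llama_index/models/
os.environ["LLAMA_INDEX_CACHE_DIR"] = "/home/user/llama_models"  # hypothetical directory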
1 comment
MarioZ

Asyncio

In Colab I'm getting
RuntimeError: asyncio.run() cannot be called from a running event loop
for this code:
response_str = response.response
for source_node in response.source_nodes:
    eval_result = evaluator.evaluate(response=response_str, contexts=[source_node.get_content()])
    print(str(eval_result.passing))
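This error usually means the notebook already has an event loop running and something in the evaluation path calls asyncio.run(). A common workaround in Colab/Jupyter is nest_asyncio; a minimal sketch, applied once before the evaluation:

import nest_asyncio

# Colab/Jupyter already run an event loop; this patches asyncio so that
# asyncio.run() can be called from inside the running loop
nest_asyncio.apply()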
2 comments
MarioZ

Metadata

Do you add metadata to help the LLM in RAG?
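For reference, a minimal sketch of attaching metadata to a document in LlamaIndex; the import follows the legacy path from the docs linked above, and the field values are hypothetical (older releases may use extra_info instead of metadata):

from llama_index import Document

doc = Document(
    text="Quarterly revenue grew 12%...",  # hypothetical content
    metadata={"source": "report.md", "section": "finance"},  # hypothetical fields
)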
4 comments
I need help with RAG. I've provided a markdown table with six columns, the last two containing money values. The setup uses the Llama 2 7B LLM and HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2"), with a service context chunk size of 1024. Unfortunately, I'm not getting any correct information: the model mixes up columns, is unable to sum by the ID column, and doesn't recognize the last row. Any advice would be greatly appreciated.
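One way to sidestep this class of failure is to stop asking the LLM to do arithmetic over chunked table text: parse the table yourself and aggregate in code, then hand the result to the model. A minimal sketch with pandas, using hypothetical column names:

import pandas as pd
from io import StringIO

# hypothetical six-column table in the shape described above
table_md = """| ID | Item | Qty | Date | Cost | Total |
|----|------|-----|------|------|-------|
| 1 | A | 2 | 2024-01 | 10.0 | 20.0 |
| 1 | B | 1 | 2024-02 | 5.0 | 5.0 |
| 2 | C | 3 | 2024-03 | 4.0 | 12.0 |"""

df = pd.read_csv(StringIO(table_md), sep="|", skipinitialspace=True).iloc[:, 1:-1]
df.columns = [c.strip() for c in df.columns]
df = df[~df["ID"].str.startswith("-")].copy()  # drop the markdown separator row
df["ID"] = df["ID"].str.strip()
df["Total"] = pd.to_numeric(df["Total"])
print(df.groupby("ID")["Total"].sum())  # per-ID sums computed in code, not by the LLM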
7 comments
Yes, I know about 'model_path'. I wanted to automate the process: if the model exists, use model_path; if not, use model_url.
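For reference, a minimal sketch of that check; the paths and URL are hypothetical:

import os
from llama_index.llms import LlamaCPP

model_path = "/home/user/models/mistral-7b.gguf"   # hypothetical local file
model_url = "https://example.com/mistral-7b.gguf"  # hypothetical download URL

# use the local copy when it exists, otherwise let LlamaCPP download it
if os.path.exists(model_path):
    llm = LlamaCPP(model_path=model_path)
else:
    llm = LlamaCPP(model_url=model_url)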
14 comments