ranjanj4
Joined September 25, 2024
ranjanj4 · Milvus

Hi, can someone tell me how to load an index from "milvus_local.db" in Milvus? For comparison, this is what I use in ChromaDB:
Plain Text
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore

db = chromadb.PersistentClient(path="./chroma_db_mini")
chroma_collection = db.get_or_create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
1 comment
Is anyone using vLLM? It always throws this error:
Plain Text
from llama_index.llms.vllm import VllmServer

llm = VllmServer(
    api_url="http://localhost:8000/v1", max_new_tokens=100, temperature=0, device="auto"
)

Traceback (most recent call last):
  File "/home/jovyan/gen-ai-tm500-llm-nb2-l40s-workspace/TM-500/main_async.py", line 626, in <module>
    main()
  File "/home/jovyan/gen-ai-tm500-llm-nb2-l40s-workspace/TM-500/main_async.py", line 526, in main
    reformulated_query = llm.complete(formatted_prompt)
  File "/home/jovyan/gen-ai-tm500-llm-nb2-l40s-workspace/my_env/lib/python3.10/site-packages/llama_index/core/instrumentation/dispatcher.py", line 260, in wrapper
    result = func(*args, **kwargs)
  File "/home/jovyan/gen-ai-tm500-llm-nb2-l40s-workspace/my_env/lib/python3.10/site-packages/llama_index/core/llms/callbacks.py", line 429, in wrapped_llm_predict
    f_return_val = f(_self, *args, **kwargs)
  File "/home/jovyan/gen-ai-tm500-llm-nb2-l40s-workspace/my_env/lib/python3.10/site-packages/llama_index/llms/vllm/base.py", line 427, in complete
    output = get_response(response)
  File "/home/jovyan/gen-ai-tm500-llm-nb2-l40s-workspace/my_env/lib/python3.10/site-packages/llama_index/llms/vllm/utils.py", line 9, in get_response
    return data["text"]
KeyError: 'text'
1 comment
I am using CondenseQuestionChatEngine for conversations. However, when the user abruptly switches topics, it sometimes condenses the question into an incoherent or nonsensical standalone question. How can I address this issue effectively?
9 comments
I fine-tuned a Mistral model and I have the LoRA weights. How do I load them in LlamaIndex? Can anyone please help?
1 comment
ranjanj4 · Prompt

Did anyone experience LlamaIndex giving just a direct answer? What should I do to get a better, more detailed answer?
1 comment
Thanks for sharing. It says "Repo model meta-llama/Meta-Llama-3-8B-Instruct is gated. You must be authenticated to access it."
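Gated repos need two steps: request access on the model page at huggingface.co, then authenticate locally before downloading. A sketch using huggingface_hub, where the token value is a placeholder:

```python
# Sketch: authenticate to the Hugging Face Hub so gated repos can be
# downloaded. The token string is a placeholder -- create a real one at
# https://huggingface.co/settings/tokens after access to the repo is granted.
from huggingface_hub import login

login(token="hf_your_token_here")
# Equivalent CLI:  huggingface-cli login
# Equivalent env:  export HF_TOKEN=hf_your_token_here
```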
3 comments
Hello everyone. After parsing a PDF into structured data, I have segmented its content into nodes consisting of text chunks, with their corresponding headings and subheadings preserved as metadata. Using the LlamaIndex framework, I have generated vectors for each node to facilitate a semantic search. However, I'm encountering an issue where the search for specific content, denoted by version numbers, lacks precision. For instance, a query for 'Issues resolved in Version 4.1' incorrectly retrieves nodes related to 'Version 4.1.1'. What strategies can I employ within LlamaIndex to improve the accuracy of my searches, ensuring that the results strictly correspond to the exact version number specified in the query?

Issues resolved in Version 4.0
Changes/Enhancements in Version 4.0
Issues resolved in Version 4.0.1
Changes/Enhancements in Version 4.0.1
Issues resolved in Version 4.0.2
Changes/Enhancements in Version 4.0.2
New Feature in Version 4.0.2
Issues resolved in Version 4.0.3
Changes/Enhancements in Version 4.0.3
Issues resolved in Version 4.0.4
Issues resolved in Version 4.1
Changes/Enhancements in Version 4.1
New features in Version 4.1
Issues resolved in Version 4.1.1
Changes/Enhancements in Version 4.1.1

I tried metadata extraction and searching, but the problem persists. How can I do this in LlamaIndex?
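In LlamaIndex this is usually handled with exact-match metadata filters (e.g. MetadataFilters with ExactMatchFilter passed to the retriever) rather than relying on embeddings to tell 4.1 apart from 4.1.1. A library-free sketch of the same exact-match idea; the "version" metadata key and the helper name are hypothetical:

```python
# Sketch: extract the version string from the query and keep only nodes whose
# "version" metadata equals it exactly, so "4.1" no longer matches "4.1.1".
import re

def exact_version_filter(query: str, nodes: list) -> list:
    match = re.search(r"\b(\d+(?:\.\d+)*)\b", query)
    if not match:
        return nodes  # no version in the query: nothing to filter on
    version = match.group(1)
    return [n for n in nodes if n["metadata"].get("version") == version]

nodes = [
    {"text": "Issues resolved in Version 4.1",   "metadata": {"version": "4.1"}},
    {"text": "Issues resolved in Version 4.1.1", "metadata": {"version": "4.1.1"}},
]
print(exact_version_filter("Issues resolved in Version 4.1", nodes))
```

The same exact-match constraint applies at retrieval time when the version is extracted from each chunk's heading into node metadata and the retriever is given the corresponding filter.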
5 comments