Find answers from the community

fabian
Joined September 25, 2024
Embedding


Hi, thank you for all your help over the past days. Unfortunately, I have another, more generic question about retrievers. I have very long prompts, and for retrieval I want to weight one specific sentence more heavily than the rest, e.g. have the embedding of that sentence count double, or something similar. How would I go about doing something like this?
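A minimal sketch of one way to do this, assuming a pre-0.10 llama_index API and that retriever is an existing retriever built from your index (e.g. index.as_retriever()): embed the key sentence separately, blend the vectors with your chosen weights, and hand the precomputed embedding to the retriever via a QueryBundle.

import numpy as np
from llama_index.schema import QueryBundle
from llama_index.embeddings import OpenAIEmbedding  # assumption: any embed model works here

embed_model = OpenAIEmbedding()

full_prompt = "... the long prompt ..."
key_sentence = "... the sentence that should count double ..."

v_full = np.array(embed_model.get_text_embedding(full_prompt))
v_key = np.array(embed_model.get_text_embedding(key_sentence))

# Weighted average: the key sentence counts twice as much as the rest.
v_query = (v_full + 2.0 * v_key) / 3.0
v_query = v_query / np.linalg.norm(v_query)  # renormalize for cosine similarity

# Retrievers use a precomputed embedding when the QueryBundle carries one.
bundle = QueryBundle(query_str=full_prompt, embedding=v_query.tolist())
nodes = retriever.retrieve(bundle)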
8 comments
I am trying to load and query a persisted Neo4j graph DB.

My current code to open the Graph DB looks like this:

service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
graph_store = Neo4jGraphStore(
    username="neo4j",
    password="password",
    url="bolt://localhost:7687",
    database="neo4j",
)
graph_storage_context = StorageContext.from_defaults(graph_store=graph_store)
graph_index = KnowledgeGraphIndex(
    storage_context=graph_storage_context,
    kg_triplet_extract_fn=extract_triplets,
    service_context=service_context,
    verbose=True,
)
However, this only returns an error, which I do not know how to approach:

File "/home/fabian/Desktop/RAG/scripts/medium.py", line 118, in <module>
graph_index = KnowledgeGraphIndex(
File "/home/fabian/Desktop/RAG/.venv/lib/python3.10/site-packages/llama_index/indices/knowledge_graph/base.py", line 81, in init
super().init(
File "/home/fabian/Desktop/RAG/.venv/lib/python3.10/site-packages/llama_index/indices/base.py", line 47, in init
raise ValueError("One of nodes or index_struct must be provided.")
ValueError: One of nodes or index_struct must be provided.

Can you please help me and provide a working snippet of code?
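A minimal sketch of how this is often resolved, assuming the pre-0.10 llama_index API: the KnowledgeGraphIndex constructor itself requires nodes or an index_struct, so to attach to triples that already live in Neo4j, build the index via from_documents with an empty list instead of calling the constructor directly.

# Empty document list: nothing new is extracted, the index simply
# wraps the existing Neo4j graph store.
graph_index = KnowledgeGraphIndex.from_documents(
    [],
    storage_context=graph_storage_context,
    service_context=service_context,
)

If the index structs were also persisted locally with persist_dir, load_index_from_storage on a storage context built with that persist_dir is the other common route.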
9 comments
@Logan M @WhiteFang_Jr
Hi, hope you are doing well. I was wondering: now that I have a list of nodes with scores, is there a way to give them to a synthesizer? I have tried a lot of things, to no avail. I cannot plug them directly into a query engine, as I need to add one node by hand (patient information).
My current approach is:
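A minimal sketch of one way to hand scored nodes to a synthesizer (not the poster's original code; it assumes the pre-0.10 llama_index API and that retrieved_nodes, patient_info, query_str and service_context are already defined):

from llama_index.response_synthesizers import get_response_synthesizer
from llama_index.schema import NodeWithScore, TextNode

synthesizer = get_response_synthesizer(
    response_mode="tree_summarize",
    service_context=service_context,
)

# Hand-built node (e.g. the patient information) appended to the retrieved ones.
patient_node = NodeWithScore(node=TextNode(text=patient_info), score=1.0)

response = synthesizer.synthesize(query_str, nodes=retrieved_nodes + [patient_node])
print(response)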
2 comments
@Logan M Getting an error when updating prompts:

Hello again, still trying to get RAG to work with 100 percent German prompts while using an open-source LLM, to keep patient data as private as possible.

After running
prompts_dict = query_engine.get_prompts()
print(list(prompts_dict))

I found that I was using two prompts: ['response_synthesizer:text_qa_template', 'response_synthesizer:refine_template']

Changing the first prompt worked great and my answers became German more often. However, when I try to change the refine_template prompt, I get an error:

File "/home/fabian/Desktop/RAG/.venv/lib/python3.10/site-packages/llama_index/llms/llm.py", line 165, in _get_messages
messages = prompt.format_messages(llm=self, prompt_args) File "/home/fabian/Desktop/RAG/.venv/lib/python3.10/site-packages/llama_index/prompts/base.py", line 185, in format_messages prompt = self.format(kwargs)
File "/home/fabian/Desktop/RAG/.venv/lib/python3.10/site-packages/llama_index/prompts/base.py", line 170, in format
prompt = self.template.format(**mapped_all_kwargs)
KeyError: 'existing_answer'

My code currently looks like this:
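A minimal sketch of a refine template that avoids this KeyError (not the original code; it assumes the pre-0.10 llama_index API, and the German wording is illustrative). The refine template must keep all three variables the synthesizer fills in at runtime: {query_str}, {existing_answer} and {context_msg}; a KeyError usually means the custom template and these variables no longer match.

from llama_index.prompts import PromptTemplate

refine_tmpl = PromptTemplate(
    "Die ursprüngliche Frage lautet: {query_str}\n"
    "Die bisherige Antwort lautet: {existing_answer}\n"
    "Zusätzlicher Kontext:\n{context_msg}\n"
    "Verfeinere die bisherige Antwort auf Deutsch, falls der neue Kontext "
    "hilfreich ist; andernfalls gib die bisherige Antwort unverändert zurück."
)

query_engine.update_prompts(
    {"response_synthesizer:refine_template": refine_tmpl}
)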
2 comments
@Logan M I want to change all LlamaIndex prompts to a second language.

Currently I am using a graph DB (Neo4j) with tree_summarize as my response mode, but since the document I am retrieving my nodes from is in German, I need my prompts to be in German too, so the model does not use any language other than German: it has been shown that switching languages inside the prompt decreases the accuracy of the answers. Is there a way to change the prompts in LlamaIndex?

Here is my basic retrieval code:

llm = Ollama(model="mixtral", request_timeout=180.0)
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
graph_store = Neo4jGraphStore(
    username="neo4j",
    password="password",
    url="bolt://localhost:7687",
    database="neo4j",
)

from llama_index import StorageContext, load_index_from_storage

storage_context = StorageContext.from_defaults(
    persist_dir="./graph_storage", graph_store=graph_store
)
index = load_index_from_storage(storage_context)

query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    service_context=service_context,
)

with open("Data/p1.txt") as f:
    data = f.read()

response = query_engine.query(data)
print(response)
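A minimal sketch of how the prompts on this engine can be swapped for German ones, assuming the pre-0.10 llama_index API (the German wording is illustrative). With response_mode="tree_summarize", get_prompts() typically exposes a 'response_synthesizer:summary_template' slot whose template uses {context_str} and {query_str}:

from llama_index.prompts import PromptTemplate

print(list(query_engine.get_prompts()))  # inspect which slots this engine uses

german_summary = PromptTemplate(
    "Kontextinformationen aus mehreren Quellen:\n{context_str}\n"
    "Beantworte die folgende Frage ausschließlich auf Deutsch und nur "
    "anhand dieser Informationen: {query_str}\n"
)
query_engine.update_prompts(
    {"response_synthesizer:summary_template": german_summary}
)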
20 comments