
@Logan M I want to change all LlamaIndex prompts to a second language.

Currently I am using a graph DB (Neo4j) and tree_summarize as my retriever, but since the document I am retrieving my nodes from is in German, I need my prompts to be in German too, so that the model does not use any language other than German; it has been shown that switching languages inside the prompt decreases the accuracy of the answers. Is there a way to change the prompts in LlamaIndex?

Here is my basic retrieval code:

from llama_index import ServiceContext, StorageContext, load_index_from_storage
from llama_index.llms import Ollama
from llama_index.graph_stores import Neo4jGraphStore

llm = Ollama(model="mixtral", request_timeout=180.0)
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)

graph_store = Neo4jGraphStore(
    username="neo4j",
    password="password",
    url="bolt://localhost:7687",
    database="neo4j",
)

storage_context = StorageContext.from_defaults(
    persist_dir="./graph_storage", graph_store=graph_store
)
index = load_index_from_storage(storage_context)

query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    service_context=service_context,
)

with open("Data/p1.txt") as f:
    data = f.read()

response = query_engine.query(data)
print(response)
Hi @WhiteFang_Jr, thank you for the answer. Unfortunately, when adding:
prompts_dict = query_engine.get_prompts()
print(list(prompts_dict.keys()))

with open("Data/p1.txt") as f:
    data = f.read()

response = query_engine.query(data)
print(response)

to my script, I do not get any prompts returned, so I do not know how to change them using the information in your second link. Are you sure .get_prompts() works with Neo4j and graph retrieval?

The most important prompt I am looking for is the combination prompt for the index and the query, probably something like this:

"answer the query {query} with the help of the context {context}"
You get an empty list when you run the get_prompts() method?
Try this:

Plain Text
# Define the prompt
from llama_index.prompts import PromptTemplate
qa_prompt_tmpl_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query in the style of a Shakespeare play.\n"
    "Query: {query_str}\n"
    "Answer: "
)

qa_prompt_tmpl = PromptTemplate(qa_prompt_tmpl_str)

# Add the template to the query engine
query_engine = index.as_query_engine(
    text_qa_template=qa_prompt_tmpl,
)
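
One caveat not spelled out above: with response_mode="tree_summarize", the engine merges per-node answers using a separate summary template, so overriding text_qa_template alone may not cover every LLM call. A minimal sketch, assuming as_query_engine forwards summary_template to the response synthesizer as in the 0.9.x API:

Plain Text
query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    text_qa_template=qa_prompt_tmpl,
    summary_template=qa_prompt_tmpl,  # tree_summarize merges node answers with this template
    service_context=service_context,
)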
@WhiteFang_Jr this is part of the code and my output. I have not tried your second suggestion; I will continue with that one. But I wanted to give you feedback on the get_prompts() method, which does not seem to work in my case...
Attachment: Screenshot_from_2024-01-15_13-36-50.png
Modify this as per your requirement
@WhiteFang_Jr @Logan M Thank you for the answer; however, I am still a bit confused. The link you sent contains no prompts, only references to deprecated prompts. I searched and found the original prompts in the prompts folder of the LlamaIndex library (see screenshot).

Should I change these prompts to adapt them to another language?
Attachment: Screenshot_from_2024-01-15_14-45-15.png
No need to change them there, as that is the library's source code.

You can update the prompt following this link: https://docs.llamaindex.ai/en/stable/module_guides/models/prompts/usage_pattern.html#updating-prompts



Plain Text
from llama_index.prompts import PromptTemplate

updated_summary_prompt_str = (
    "MODIFY THIS AS PER YOUR REQUIREMENT \n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    'SUMMARY:"""\n'
)
smry_prompt_tmpl = PromptTemplate(updated_summary_prompt_str)

query_engine.update_prompts(
    {"response_synthesizer:summary_template": smry_prompt_tmpl}
)
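
To get the whole pipeline into German, the same update_prompts mechanism can carry translated templates. A sketch along those lines; the German wording below is only an illustrative translation (not from the LlamaIndex source), and the prompt keys follow the response_synthesizer:... naming used above:

Plain Text
from llama_index.prompts import PromptTemplate

# Illustrative German QA template (own translation, not from the library)
qa_prompt_de = PromptTemplate(
    "Kontextinformationen stehen unten.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Beantworte die Frage ausschließlich auf Deutsch und nur anhand des "
    "Kontexts, ohne Vorwissen.\n"
    "Frage: {query_str}\n"
    "Antwort: "
)

# Illustrative German summary template; with tree_summarize this is the
# template that matters, while text_qa_template applies to other modes
summary_prompt_de = PromptTemplate(
    "Kontextinformationen aus mehreren Quellen stehen unten.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Beantworte die Frage ausschließlich auf Deutsch anhand dieser "
    "Informationen, ohne Vorwissen.\n"
    "Frage: {query_str}\n"
    "Antwort: "
)

query_engine.update_prompts(
    {
        "response_synthesizer:text_qa_template": qa_prompt_de,
        "response_synthesizer:summary_template": summary_prompt_de,
    }
)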
@WhiteFang_Jr Thank you for the answer, and sorry to bother you again. I guess this is more complex than I anticipated. I followed the prompt-update guide; however, my answers are still in English. I even gave a system prompt in German, to no avail. The query and the graph are in German as well.
Attachment: Screenshot_from_2024-01-15_15-55-21.png
Try printing the LLM inputs; perhaps something wasn't changed correctly.

Put this at the top of your script to see the raw llm inputs/outputs
Plain Text
import llama_index

llama_index.set_global_handler("simple")
@Logan M Thank you for your answer and for the help. I enabled the global handler and I guess I am having some further problems. The answer gets generated at least a dozen times with different context from my graph DB (Neo4j).

Sometimes the LLM returns answers in German, sometimes in English. Here is one example of the answers returned by the LLM.

Sorry for the lengthy output. If possible, could you recommend a way to generate only one precise output instead of a dozen or so?

Thank you
@Logan M @WhiteFang_Jr After reading most of this and finding hints that my graph might contain some English relations: it actually does. This might be enough to push my LLM towards speaking English. I would have to redo the graph (again), I guess.

Does one of you know how I could set the prompt that generates the graph to German? This is necessary for creating a monolingual graph.
@fabian since you are using tree_summarize, it will essentially end up making one request per retrieved node. So it seems to me that it retrieved quite a few documents, and therefore needs to read all the content and build a tree of summaries to return a final answer.
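
If the goal is one concise answer rather than a dozen intermediate ones, two knobs help: retrieving fewer nodes, and using a response mode that packs the retrieved text into as few LLM calls as possible. A sketch, with similarity_top_k assumed to be forwarded to the retriever by the 0.9.x as_query_engine pass-through:

Plain Text
query_engine = index.as_query_engine(
    include_text=True,
    response_mode="compact",   # pack retrieved text into as few LLM calls as possible
    similarity_top_k=3,        # retrieve fewer nodes to summarize (assumed pass-through kwarg)
    service_context=service_context,
)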
You can set the template on the constructor. Here is the default:

Plain Text
DEFAULT_KG_TRIPLET_EXTRACT_TMPL = (
    "Some text is provided below. Given the text, extract up to "
    "{max_knowledge_triplets} "
    "knowledge triplets in the form of (subject, predicate, object). Avoid stopwords.\n"
    "---------------------\n"
    "Example:"
    "Text: Alice is Bob's mother."
    "Triplets:\n(Alice, is mother of, Bob)\n"
    "Text: Philz is a coffee shop founded in Berkeley in 1982.\n"
    "Triplets:\n"
    "(Philz, is, coffee shop)\n"
    "(Philz, founded in, Berkeley)\n"
    "(Philz, founded in, 1982)\n"
    "---------------------\n"
    "Text: {text}\n"
    "Triplets:\n"
)

index = KnowledgeGraphIndex.from_documents(
    documents,
    ...,
    # the template argument expects a prompt template, so wrap the string
    kg_triple_extract_template=PromptTemplate(DEFAULT_KG_TRIPLET_EXTRACT_TMPL),
)
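
To answer the question about a monolingual graph: a German version of this same template can be passed at construction time. A sketch, reusing documents and service_context from the snippets above; the German wording is only an illustrative translation:

Plain Text
from llama_index.prompts import PromptTemplate

# Illustrative German triplet-extraction template (own translation)
kg_extract_tmpl_de = PromptTemplate(
    "Unten steht ein Text. Extrahiere daraus bis zu "
    "{max_knowledge_triplets} "
    "Wissens-Tripel in der Form (Subjekt, Prädikat, Objekt). "
    "Vermeide Stoppwörter und antworte ausschließlich auf Deutsch.\n"
    "---------------------\n"
    "Beispiel:\n"
    "Text: Alice ist Bobs Mutter.\n"
    "Tripel:\n(Alice, ist Mutter von, Bob)\n"
    "---------------------\n"
    "Text: {text}\n"
    "Tripel:\n"
)

index = KnowledgeGraphIndex.from_documents(
    documents,
    kg_triple_extract_template=kg_extract_tmpl_de,
    service_context=service_context,
)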
Maybe a 🌶️ take, but do you actually need a knowledge graph? Despite the hype on Twitter, I find they rarely work well, and they are quite slow/expensive to create.
@Logan M Thank you, I was recommended one, and you are right, I followed the "hype". But as you might have already noticed, this is a very large and complex medical guideline (380 pages) that I am working with. Specifically, these are the official German guidelines on breast cancer treatment, and my prompts are patient files with diagnosed breast cancer. The prompt contains questions about how to continue treatment, and therefore an answer must adhere to the guideline. I hoped to achieve better results when using a KG, as the vector stores delivered mediocre results.
The vector-store-generated output overlaps with our hospital's tumor board around 60% of the time. So I wanted to check whether adding a KG on top of that could bolster these metrics.
I would also be open to general advice, but thank you again for your answers and time. I'll try to continue from here.
I think adding some form of hybrid search + reranking will probably be an easier path to better results imo (although you would still have to change the prompts either way to maintain German output 😅)
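
For reference, a hybrid-search-plus-reranking setup along those lines might look like the sketch below. It is not a drop-in: vector_index is an assumed vector index over the same documents, qa_prompt_de is the German template from earlier, hybrid mode only works on vector stores that support it (e.g. Weaviate or Qdrant), and the reranker model is just an example choice:

Plain Text
from llama_index.postprocessor import SentenceTransformerRerank

# Cross-encoder reranker; the model name is an example choice
rerank = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-6-v2", top_n=5
)

query_engine = vector_index.as_query_engine(
    similarity_top_k=20,               # over-retrieve, then let the reranker cut down
    vector_store_query_mode="hybrid",  # requires a store with hybrid (keyword + vector) support
    node_postprocessors=[rerank],
    text_qa_template=qa_prompt_de,     # keep the German template either way
    service_context=service_context,
)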