I have a graph stored in a Neo4j graph database

I have a graph stored in a Neo4j graph database. Is it possible to query the graph with LlamaIndex? I've only seen examples that build a new graph out of documents, not ones that query an existing graph.
The graph I already have was built by storing entities and the relations between them.
Let me know how you find it -- the example uses Nebula, but it should work fine for Neo4j.
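For reference, pointing LlamaIndex at an existing Neo4j graph looks roughly like this (a sketch, not tested here; the connection details are placeholders, and the imports assume the 0.8.x-era API with ServiceContext):
Plain Text
from llama_index import ServiceContext
from llama_index.graph_stores import Neo4jGraphStore
from llama_index.query_engine import KnowledgeGraphQueryEngine
from llama_index.storage.storage_context import StorageContext

# Connect to the existing graph -- credentials/URL are placeholders
graph_store = Neo4jGraphStore(
    username="neo4j",
    password="password",
    url="bolt://localhost:7687",
    database="neo4j",
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)
service_context = ServiceContext.from_defaults()

# Translates natural-language questions into Cypher against the existing graph
query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    service_context=service_context,
    verbose=True,
)
response = query_engine.query("In what movies did Tom Hanks act?")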
@Logan M is it possible to modify the prompt to use the llama2-chat format? I'm using a CustomLLM that wraps llama2-chat over an HTTP REST API, and the prompts the query engine sends don't use the template llama2-chat expects.
Plain Text
Query:  In what movies Tom Hanks acted in?
[DEBUG] LLM Prompt:  A question is provided below. Given the question, extract up to 5 keywords from the text. Focus on extracting the keywords that we can use to best lookup answers to the question. Avoid stopwords.
---------------------
In what movies Tom Hanks acted in?
---------------------
Provide keywords in the following comma-separated format: 'KEYWORDS: <keywords>'
Plain Text
****************************************************************************************************
[DEBUG] LLM response:  
Hint: You can extract up to 5 keywords.

Please select up to 5 keywords from the given question.
************************************************************
The llama2-chat prompt template looks like:
Plain Text
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
  
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
  
There's a llama in my garden 😱 What should I do? [/INST]
Yes -- in the service context, set the query-wrapper prompt

Plain Text
from llama_index import ServiceContext
from llama_index.prompts import Prompt

service_context = ServiceContext.from_defaults(..., query_wrapper_prompt=Prompt("[INST] {query_str} [/INST] "))
Oh, and there's a similar parameter for the system prompt
Plain Text
from llama_index import ServiceContext
from llama_index.prompts import Prompt

service_context = ServiceContext.from_defaults(..., query_wrapper_prompt=Prompt("[INST] {query_str} [/INST] "), system_prompt="<<SYS>> ... <</SYS>>")
Alternatively, you could set this up in your CustomLLM instead, but in my experience this approach works OK with llama2.
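If you go the CustomLLM route, something like this would apply the template inside the LLM wrapper itself (a sketch following the CustomLLM docs of that era; the REST endpoint, its JSON fields, and the template string are assumptions, not from this thread):
Plain Text
from typing import Any

import requests
from llama_index.llms import CustomLLM, CompletionResponse, CompletionResponseGen, LLMMetadata
from llama_index.llms.base import llm_completion_callback

LLAMA2_CHAT_TEMPLATE = "<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{prompt} [/INST] "

class Llama2OverHTTP(CustomLLM):
    api_url: str = "http://localhost:8080/generate"  # hypothetical REST endpoint
    system: str = "You are a helpful, respectful and honest assistant."

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(context_window=4096, num_output=256, model_name="llama-2-chat")

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Wrap every prompt in the llama2-chat template before sending it out
        wrapped = LLAMA2_CHAT_TEMPLATE.format(system=self.system, prompt=prompt)
        resp = requests.post(self.api_url, json={"prompt": wrapped})  # assumed request schema
        return CompletionResponse(text=resp.json()["text"])  # assumed response schema

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        raise NotImplementedError("streaming not supported by this wrapper")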
πŸ‘ what is the variable placeholder called for system?
No placeholder -- it gets automatically prepended to the normal prompt. Then everything gets wrapped by the query wrapper:
Plain Text
******
Query:  In what movies Tom Hanks acted in?
[DEBUG] LLM Prompt:  <s>[INST] <<SYS>> ... <</SYS>>

A question is provided below. Given the question, extract up to 5 keywords from the text. Focus on extracting the keywords that we can use to best lookup answers to the question. Avoid stopwords.
---------------------
In what movies Tom Hanks acted in?
---------------------
Provide keywords in the following comma-separated format: 'KEYWORDS: <keywords>'
 [/INST]
****************************************************************************************************
[DEBUG] LLM response:    Sure! Here are 5 keywords that can be used to lookup answers to the question "In what movies has Tom Hanks acted?"

KEYWORDS: Tom Hanks, movies, acted, filmography, roles
The [INST] part is fine, but I'm not sure about the system prompt.
Another example where the system prompt doesn't seem to be working:
Plain Text
[DEBUG] LLM Prompt:  <s>[INST] <<SYS>> ... <</SYS>>

Context information is below.
---------------------
query_str: In what movies Tom Hanks acted in?
graph_store_query:   MATCH (p:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN m.title
graph_store_response: [{'m.title': 'Apollo 13'}, {'m.title': "You've Got Mail"}, {'m.title': 'A League of Their Own'}, {'m.title': 'Joe Versus the Volcano'}, {'m.title': 'That Thing You Do'}, {'m.title': 'The Da Vinci Code'}, {'m.title': 'Cloud Atlas'}, {'m.title': 'Cast Away'}, {'m.title': 'The Green Mile'}, {'m.title': 'Sleepless in Seattle'}, {'m.title': 'The Polar Express'}, {'m.title': "Charlie Wilson's War"}]
graph_schema: 
        Node properties are the following:
...
 [/INST]
I guess the context information should go inside <<SYS>>, shouldn't it?
Should it? Tbh I'm not sure with llama2 lol

You could add the sys to the query wrapper instead

Plain Text
from llama_index import ServiceContext
from llama_index.prompts import Prompt

service_context = ServiceContext.from_defaults(..., query_wrapper_prompt=Prompt("[INST] <<SYS>> [normal system prompt here] <</SYS>> {query_str} [/INST] "))
In my experience, llama2 is very verbose. As you saw, it performs the task but adds annoying explanations 😅
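For reference, a query wrapper that reproduces the official llama2-chat layout exactly (newlines included, with the system text kept outside the user turn) would look something like this -- an untested sketch against the same ServiceContext API:
Plain Text
from llama_index import ServiceContext
from llama_index.prompts import Prompt

# Mirrors the <s>[INST] <<SYS>> ... <</SYS>> ... [/INST] layout shown above
llama2_wrapper = Prompt(
    "<s>[INST] <<SYS>>\n"
    "You are a helpful, respectful and honest assistant.\n"
    "<</SYS>>\n\n"
    "{query_str} [/INST] "
)
service_context = ServiceContext.from_defaults(query_wrapper_prompt=llama2_wrapper)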