Can you list all default prompt template vars?

@kapa.ai Can you list all default prompt template vars?
Please stand by @林爸 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
The default prompt template variables in LlamaIndex are schema_str, info_str, and query_str. These placeholders let you customize the prompt template.

Here is an example of how these variables are used in a prompt template:

Python
prompt_tmpl_str = """\
Your goal is to structure the user's query to match the request schema provided below.

<< Structured Request Schema >>
When responding use a markdown code snippet with a JSON object formatted in the following schema:

{schema_str}

The query string should contain only text that is expected to match the contents of documents. Any conditions in the filter should not be mentioned in the query as well.

Make sure that filters only refer to attributes that exist in the data source.
Make sure that filters take into account the descriptions of attributes.
Make sure that filters are only used as needed. If there are no filters that should be applied return [] for the filter value.
If the user's query explicitly mentions number of documents to retrieve, set top_k to that number, otherwise do not set top_k.
Do NOT EVER infer a null value for a filter. This will break the downstream program. Instead, don't include the filter.
"""


In this excerpt, schema_str is filled with the request schema; the remainder of the default template (not shown here) uses info_str to describe the data source and query_str to carry the user's query.
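To illustrate, here is a minimal sketch of filling such a template yourself. It assumes LlamaIndex's PromptTemplate wrapper (the exact import path varies by version), and the schema value is a hypothetical placeholder:

Python
from llama_index.prompts import PromptTemplate

prompt_tmpl = PromptTemplate(prompt_tmpl_str)

# Hypothetical schema string for illustration; in practice this is
# generated from the metadata schema of your data source
example_schema = '{"query": "string", "filters": "list of filters", "top_k": "int | null"}'

formatted = prompt_tmpl.format(schema_str=example_schema)
print(formatted)  # the fully formatted prompt, ready to send to an LLM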

For more information, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai What is the configuration for seeing the completed prompts in the console?
Let me check the knowledge base for you. One moment! :books:
----------------------
To see completed prompts in the console, you can print them with Python's print() function. For example, if a call that completes a prompt stores its result in a variable named response, you can print it with print(response).

In the context of LlamaIndex, if you're using the llm.complete() method to complete prompts, you can print the response as follows:

Python
response = llm.complete(prompt)  # llm is a LlamaIndex LLM instance; prompt is the string to complete
print(response)  # writes the completion text to the console


This will print the completion to the console. For more detailed output, you can set the logging level to DEBUG.
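As a minimal sketch, DEBUG logging can be enabled through Python's standard logging module, which LlamaIndex uses internally:

Python
import logging
import sys

# Route all log records, including DEBUG-level ones, to stdout
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

With this in place, the detailed log output appears in the console alongside your printed responses.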

For more information, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------