Find answers from the community

Updated 12 months ago

How do I find the original prompt templates for various response synthesizers? How do I know which ones to modify and how to name them?

At a glance
The community members are discussing how to find and modify the original prompt templates for various response synthesizers in the LlamaIndex library. The key points are:

- LlamaIndex uses default prompt templates like text_qa_template and refine_template for different tasks.

- The original prompt templates can be found in the LlamaIndex GitHub repository.

- To access the prompts used within a module, the get_prompts() method can be used.

- To modify the prompts, custom prompts can be passed during query-time.

The community members also discuss an issue with the InstructorEmbedding class and the sentence_transformers package, but no clear solution is provided.

Useful resources
@kapa.ai How do I find the original prompt templates for various response synthesizers? How do I know which ones to modify and how to name them. Example text_qa_template refine_template
10 comments
Please stand by @cablecutter as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
LlamaIndex uses a set of default prompt templates that are used for various tasks such as building the index, insertion, traversal during querying, and synthesizing the final answer. The most commonly used prompts are text_qa_template and refine_template.

  • text_qa_template is used to get an initial answer to a query using retrieved nodes.
  • refine_template is used when the retrieved text does not fit into a single LLM call with response_mode="compact" (the default), or when more than one node is retrieved using response_mode="refine".
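The compact/refine flow described above can be sketched in plain Python. This is a simplified stand-in, not the actual LlamaIndex implementation; `llm` here is any callable that maps a prompt string to an answer:

```python
# Simplified sketch of the refine loop (NOT the real LlamaIndex code).
# The first chunk is answered with something like text_qa_template; every
# subsequent chunk refines the running answer, playing the refine_template role.
def synthesize(chunks, query, llm):
    # Initial answer from the first retrieved chunk (text_qa_template role).
    answer = llm(f"Context: {chunks[0]}\nQuery: {query}\nAnswer:")
    for chunk in chunks[1:]:
        # Fold each additional chunk into the existing answer (refine_template role).
        answer = llm(
            f"Existing answer: {answer}\n"
            f"New context: {chunk}\n"
            f"Refine the answer to: {query}\nRefined answer:"
        )
    return answer
```

With a single chunk the refine step never runs, which is why `response_mode="compact"` first tries to fit all retrieved text into one LLM call.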
You can find the original prompt templates in the LlamaIndex GitHub repository. Here are the links to the default prompt templates and chat model specific prompts:

  • Default prompt templates
  • Chat model specific prompts
To access the prompts used within a module, you can call get_prompts on many modules in LlamaIndex. This will return a flat list of prompts used within the module and nested submodules. For instance:

Plain Text
query_engine = index.as_query_engine(response_mode="compact")
prompts_dict = query_engine.get_prompts()
print(list(prompts_dict.keys()))


This might return keys like ['response_synthesizer:text_qa_template', 'response_synthesizer:refine_template']. Note that prompts are prefixed by their sub-modules as “namespaces”.
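The namespacing can be sketched as a flattening step. This is a simplified stand-in, not LlamaIndex's actual implementation:

```python
# Simplified stand-in for how get_prompts() flattens sub-module prompts into
# one dict keyed by "submodule_name:prompt_name".
def flatten_prompts(module_name, prompts):
    return {f"{module_name}:{name}": template for name, template in prompts.items()}

synthesizer_prompts = {
    "text_qa_template": "<qa template>",
    "refine_template": "<refine template>",
}
flat = flatten_prompts("response_synthesizer", synthesizer_prompts)
print(list(flat.keys()))
# ['response_synthesizer:text_qa_template', 'response_synthesizer:refine_template']
```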

To modify the prompts, you can pass in custom prompts directly during query-time. Here's an example of how to do it:

Plain Text
query_engine = index.as_query_engine(
    text_qa_template=custom_qa_prompt, refine_template=custom_refine_prompt
)
For more details on which classes use which prompts, you can visit the Query class references and the reference documentation for a full set of all prompts.
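To see what a custom template has to provide: a `text_qa_template` is ultimately a format string with `{context_str}` and `{query_str}` placeholders. Here is a pure-Python sketch of the fill step; the template text below is a stand-in, not the actual LlamaIndex default:

```python
# Stand-in template with the two placeholders a custom text_qa_template must expose.
TEXT_QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

def build_qa_prompt(context_str, query_str):
    """Fill the template the way a response synthesizer would before calling the LLM."""
    return TEXT_QA_TEMPLATE.format(context_str=context_str, query_str=query_str)

prompt = build_qa_prompt("Defaults live in the repo.", "Where are the defaults defined?")
print(prompt)
```

A custom prompt that omits either placeholder will fail at query time, so keep both names when writing `custom_qa_prompt`.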

Sources:
  • LlamaIndex Documentation - Prompts ()
  • LlamaIndex Documentation - Usage Pattern ()
  • LlamaIndex GitHub - Default Prompts ()
  • LlamaIndex GitHub - Chat Prompts ()
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai Are the InstructorEmbeddings not working right now?

I have tried multiple examples including https://docs.llamaindex.ai/en/stable/examples/embeddings/huggingface.html#huggingfaceembedding
And
https://docs.llamaindex.ai/en/stable/examples/embeddings/custom_embeddings.html

Both result in the error:

/usr/local/lib/python3.10/dist-packages/sentence_transformers/SentenceTransformer.py in __init__(self, model_name_or_path, modules, device, cache_folder, trust_remote_code, revision, token, use_auth_token)
    192
    193     if is_sentence_transformer_model(model_name_or_path, token, cache_folder=cache_folder, revision=revision):
--> 194         modules = self._load_sbert_model(
    195             model_name_or_path,
    196             token=token,

TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
Let me check the knowledge base for you. One moment! :books:
----------------------
Based on the error message you provided, it seems like there's a compatibility issue between the InstructorEmbedding class and the sentence_transformers package. The error is raised because the _load_sbert_model method in the INSTRUCTOR class is receiving an unexpected keyword argument 'token'.

Unfortunately, the extracts provided do not contain specific information on how to resolve this issue. The likely cause is a version mismatch: newer sentence_transformers releases pass a token keyword when loading models, but the INSTRUCTOR subclass overrides _load_sbert_model with an older signature that does not accept it.

Here are a few general suggestions that might help:

  1. Check your package versions: Make sure the installed sentence_transformers and InstructorEmbedding versions are compatible with each other. You might need to update or downgrade one of these packages.
  2. Check the method signature: Look at the _load_sbert_model method in the INSTRUCTOR class and see which arguments it accepts. If 'token' is not one of them, the installed sentence_transformers is newer than the INSTRUCTOR subclass expects.
  3. Check the documentation or source code: The documentation or source code for the InstructorEmbedding class and the sentence_transformers package might provide more information on how to use these resources correctly.
If these suggestions don't resolve the issue, I recommend reaching out to the maintainers of the InstructorEmbedding class or the sentence_transformers package for further assistance.
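A quick way to act on suggestion 1 is to print the installed versions before deciding what to pin or downgrade. The distribution names below are the PyPI names involved; adjust to your environment:

```python
# Print installed versions of the packages involved in the mismatch.
from importlib.metadata import PackageNotFoundError, version

def installed_version(dist):
    """Return the installed version string for a distribution, or None if absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

for dist in ("sentence-transformers", "InstructorEmbedding", "llama-index"):
    print(f"{dist}: {installed_version(dist) or 'not installed'}")
```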
----------------------
If token is expected, how should token be defined in the argument? What token is expected here?
@kapa.ai If token is expected, how should token be defined in the argument? What token is expected here?
Let me check the knowledge base for you. One moment! :books:
----------------------
Based on the information provided in the extracts, it's not clear what the 'token' argument in the _load_sbert_model method refers to. The error message suggests that this argument is not expected by the method, which is causing the TypeError.

In general, a 'token' in the context of language models and transformers often refers to a piece of text that the model has been trained to understand. However, without more specific information from the method's documentation or source code, it's hard to say what 'token' should be in this context.

Here are a few general possibilities:

  1. Authentication token: If the method is trying to access a remote resource, 'token' could be an authentication token used to verify the request.
  2. Text token: In the context of language models, a 'token' often refers to a piece of text that the model has been trained to understand.
  3. Model token: Some methods might use a 'token' to refer to a specific model or version of a model.
To resolve this issue, you would need to refer to the documentation or source code of the _load_sbert_model method to understand what the 'token' argument is supposed to be. If the method's documentation or source code is not clear, you might need to reach out to the maintainers of the InstructorEmbedding class or the sentence_transformers package for further assistance.
----------------------