Anthropic default prompt in llamaindex

Hello gents.
I am running notebooks from the LlamaIndex / DeepLearning.AI short course "Building and Evaluating Advanced RAG".

I am running the sentence window retrieval notebook, where a sentence-window retrieval technique is used along with a reranker. The final context is then passed to an LLM for answer generation.

The notebook runs fine when the LLM is left as GPT-3.5.
The query is:
Plain Text
What are the keys to building a career in AI?

GPT-3.5's answer is:
Plain Text
Final Response: The keys to building a career in AI are learning foundational technical skills, working on projects, and finding a job, all of which is supported by being part of a community.


However, when I change the LLM to Anthropic's claude-2, it answers in a very odd way, and I suspect this is due to the default Anthropic prompt in LlamaIndex.

Here is the response:
Plain Text
Final Response: Unfortunately I cannot directly reference the given context in my answer. However, it seems there are a few important elements discussed that could contribute to building a career in AI. Developing technical skills, working on projects, finding the right jobs, overcoming challenges like imposter syndrome, and being part of a supportive community all appear to play a role. Consistently making progress and effort each day also seems to be a key mindset. I apologize that I cannot directly reference the context, but I aimed to provide a helpful perspective based on the information provided.


I used get_prompts() to see which prompt is being used:
Plain Text
Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {query_str}
Answer: 
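For illustration, that default template can be reproduced and filled as a plain Python format string, which shows exactly what gets sent to the LLM (the context and query values here are placeholders, not the notebook's actual retrieval output):

```python
# The default LlamaIndex text-QA template shown above, reproduced as a plain
# Python format string. Filling it in shows exactly what the LLM receives.
# The context value below is an illustrative placeholder.
DEFAULT_QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

filled = DEFAULT_QA_TMPL.format(
    context_str="(retrieved sentence-window context)",
    query_str="What are the keys to building a career in AI?",
)
print(filled)
```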


However, this prompt format doesn't work well with Claude, which is why I suspect it's the prompt.

Has anyone else noticed this? Should I just customize the prompt?
5 comments
Yea, I would just customize the prompt 🙂

We don't really have model-specific prompts (yet), just kind of general defaults.
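A minimal sketch of one way to customize it, assuming the claude-2-era "Human:/Assistant:" turn convention; the wording is my own, not the course's or LlamaIndex's. In LlamaIndex you would wrap the string in a `PromptTemplate` and install it via `query_engine.update_prompts(...)`; here only the raw template string is built and filled:

```python
# Sketch of a Claude-friendly QA template using the "Human:/Assistant:" turn
# format that claude-2 expects. The exact wording is an assumption.
# In LlamaIndex you would wrap this in a PromptTemplate and install it with
# something like:
#   query_engine.update_prompts(
#       {"response_synthesizer:text_qa_template": PromptTemplate(CLAUDE_QA_TMPL)}
#   )
CLAUDE_QA_TMPL = (
    "\n\nHuman: Here is some context information:\n"
    "<context>\n"
    "{context_str}\n"
    "</context>\n"
    "Answer the query below using only this context, not prior knowledge. "
    "Answer directly; do not apologize or talk about the context itself.\n"
    "Query: {query_str}\n"
    "\n\nAssistant:"
)

filled = CLAUDE_QA_TMPL.format(
    context_str="(retrieved sentence-window context)",
    query_str="What are the keys to building a career in AI?",
)
print(filled)
```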
Oh ok. I have actually been a Claude beta user since March 2023, so I can probably rewrite the Claude-specific prompt properly. Is there a way I can push a Claude-specific prompt to the codebase, or is that not currently possible?
Or I'll just create a GitHub issue and close it with the answer, in case someone has the same problem.
Yea, right now model-specific prompts aren't quite a thing (but they're planned for the future!)

First step is centralizing our prompts into a single registry/prompt-manager concept. Then we can easily swap prompt packs in and out for different languages and models.
ok sounds good. thank you for answering so quickly 🫑