my prompt template just ends in `return json`

My prompt template just ends in `return json`. I'm using {context_str} and {query_str} correctly, I think; they are being included in the prompt output that goes over the wire.
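For reference, a minimal sketch of that kind of template, assuming the standard llama_index PromptTemplate; the template wording and the trailing "return JSON" instruction here are illustrative, not the poster's actual template:
Plain Text
from llama_index.prompts import PromptTemplate

# Hypothetical text-QA template using the standard placeholders.
qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the query using only the context above.\n"
    "Query: {query_str}\n"
    "Return JSON.\n"
)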
I'm looking at anthropic_utils.py and see the prompt pieces; is there an easy way to override this?
A simple way to override ASSISTANT_PREFIX would be valuable.
Looks like the answer is messages_to_prompt, but I'm looking for the right syntax here; I keep getting an error saying it expects an instance of MessagesToPromptType
(obscure pydantic error here)
here's my code,
Plain Text
..
from llama_index.llms import ChatMessage
..

llm = Bedrock(model="anthropic.claude-instant-v1",
              temperature=0.001,
              profile_name=PROFILE_NAME,
              max_tokens=max_output_tokens,
              context_size=max_input_tokens,
              messages_to_prompt=[ChatMessage(role="assistant", content=' {"answers":')]
              )
@Logan M I hate to ping you, but any idea?
llama-index 0.9.44
messages_to_prompt should be a callable
Plain Text
def messages_to_prompt(messages):
  return "\n".join([str(x) for x in messages])
something like that
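Concretely, wiring that callable into the constructor (rather than passing a list of ChatMessage objects) would look roughly like this; a sketch reusing the same Bedrock arguments as the snippet above:
Plain Text
def messages_to_prompt(messages):
    # Collapse the chat messages into a single prompt string.
    return "\n".join([str(x) for x in messages])

llm = Bedrock(model="anthropic.claude-instant-v1",
              temperature=0.001,
              profile_name=PROFILE_NAME,
              max_tokens=max_output_tokens,
              context_size=max_input_tokens,
              messages_to_prompt=messages_to_prompt)  # a callable, not a list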
Now I am getting `prompt must start with "Human:" turn after an optional system prompt` when trying to override messages_to_prompt.
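That error comes from Claude's required turn format: the Bedrock Anthropic models expect an optional system preamble followed by alternating `Human:` / `Assistant:` turns, ending with an open `Assistant:` turn. A rough sketch of a formatter that produces this shape (the role handling below is an assumption, not the library's built-in converter):
Plain Text
def messages_to_prompt(messages):
    # Claude's classic prompt format: optional system text first, then
    # "\n\nHuman:" / "\n\nAssistant:" turns, ending with an open
    # "\n\nAssistant:" turn for the model to complete.
    system = "".join(m.content for m in messages if m.role == "system")
    prompt = system
    for m in messages:
        if m.role == "user":
            prompt += f"\n\nHuman: {m.content}"
        elif m.role == "assistant":
            prompt += f"\n\nAssistant: {m.content}"
    # Leave an open assistant turn; a prefill such as ' {"answers":'
    # could be appended after the colon if desired.
    prompt += "\n\nAssistant:"
    return prompt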