I'm trying to get the model to return JSON. I use the {context_str} and {query_str} correctly, I think; they are being included in the prompt output that goes over the wire. I read through anthropic_utils.py and can see the prompt pieces. Is there an easy way to override this? Here is what I tried:

```python
from llama_index.llms import Bedrock, ChatMessage

llm = Bedrock(
    model="anthropic.claude-instant-v1",
    temperature=0.001,
    profile_name=PROFILE_NAME,
    max_tokens=max_output_tokens,
    context_size=max_input_tokens,
    # attempt: seed the assistant turn so the completion starts as JSON
    messages_to_prompt=[ChatMessage(role="assistant", content=' {"answers":')],
)
```
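For reference, in llama_index `messages_to_prompt` is a callable that maps the chat messages to the final prompt string, not a list of messages, so passing `[ChatMessage(...)]` above is probably why the override isn't taking effect. Below is a minimal sketch of a callable that rebuilds the Human/Assistant layout and pre-fills the assistant turn with the JSON fragment. The `HUMAN_PREFIX`/`ASSISTANT_PREFIX` strings and the `json_seeded_prompt` helper are my assumptions modeled on Anthropic's completion format, not the exact code in anthropic_utils.py, so double-check them against your installed version:

```python
from typing import Sequence

from llama_index.llms import Bedrock, ChatMessage

# Assumed prefixes mirroring Anthropic's Human/Assistant completion format;
# verify against what anthropic_utils.py actually emits in your version.
HUMAN_PREFIX = "\n\nHuman:"
ASSISTANT_PREFIX = "\n\nAssistant:"


def json_seeded_prompt(messages: Sequence[ChatMessage]) -> str:
    """Flatten chat messages into a completion prompt, then open the
    assistant turn with the JSON fragment the model should continue."""
    prompt = ""
    for message in messages:
        if message.role == "system":
            prompt += f" {message.content}"
        elif message.role == "user":
            prompt += f"{HUMAN_PREFIX} {message.content}"
        else:  # assistant
            prompt += f"{ASSISTANT_PREFIX} {message.content}"
    # Pre-fill the assistant turn so the completion starts as JSON.
    return f'{prompt}{ASSISTANT_PREFIX} {{"answers":'


llm = Bedrock(
    model="anthropic.claude-instant-v1",
    temperature=0.001,
    profile_name=PROFILE_NAME,  # defined elsewhere in my setup
    max_tokens=max_output_tokens,
    context_size=max_input_tokens,
    messages_to_prompt=json_seeded_prompt,  # a callable, not a message list
)
```

With this in place, the model's completion should continue directly after ` {"answers":`, so remember to prepend that fragment back onto the response before parsing it as JSON.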