We're using an internal tool to assess various open source LLMs against GPT-3.5. Is there a way to retrieve the exact prompt / prompt chain that was fed to OpenAI via llama_index (like the stuff you see when verbose is set to True and the logger is set to DEBUG)? This way we can create a test set for comparison.
3 comments
Try checking out the llama logger; it will return a list of helpful stuff 👌

Check out the bottom of this notebook
https://github.com/jerryjliu/llama_index/blob/main/examples/vector_indices/SimpleIndexDemo.ipynb
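For reference, here is a minimal sketch of using the llama logger with the legacy (ServiceContext-era) llama_index API, along the lines of that notebook. The data path and query string are placeholders, and exact module paths and log dict keys may differ between versions:

```python
# Sketch only: assumes a legacy llama_index release where ServiceContext and
# LlamaLogger are available; adjust imports to match your installed version.
from llama_index import GPTVectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.logger import LlamaLogger

# Attach a LlamaLogger via the service context so every LLM call is recorded.
llama_logger = LlamaLogger()
service_context = ServiceContext.from_defaults(llama_logger=llama_logger)

# Build an index and run a query as usual ("data" and the question are placeholders).
documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")

# get_logs() returns a list of dicts containing the formatted prompts sent to
# OpenAI and the raw responses (key names vary by version) -- useful for
# building a test set to replay against other models.
for entry in llama_logger.get_logs():
    print(entry)
```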
🤟 @Logan M