Does anyone know if there is a way to print the result of a custom prompt template? The following is my current implementation, and I'd like to be able to view the actual resulting prompt that gets sent to the LLM. Thanks in advance!

Plain Text
query_engine = index.as_query_engine(text_qa_template=Prompt(CHAT_PROMPT), verbose=True)
chat_engine = index.as_chat_engine(
    query_engine=query_engine,
    chat_mode="context",
    chat_history=chat_history,
    query_str=query,
    similarity_top_k=2,
    verbose=True,
)
7 comments
The easiest way to see what gets sent to the LLM is probably the token counting handler

At the bottom of the notebook, you can see how to view every LLM input/output after running a query:
https://gpt-index.readthedocs.io/en/stable/examples/callbacks/TokenCountingHandler.html#token-counting-handler
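Roughly something like this (a minimal sketch based on that notebook, assuming the ServiceContext-era API from that docs version; the ./data directory and the query string are just placeholders):

Python
import tiktoken

from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.callbacks import CallbackManager, TokenCountingHandler

# Record every LLM call (prompt and completion) via the callback manager.
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([token_counter])
)

documents = SimpleDirectoryReader("./data").load_data()  # placeholder data dir
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = index.as_query_engine()
response = query_engine.query("What is this document about?")  # placeholder query

# Each recorded event holds the exact prompt string that went to the LLM,
# so you can check whether your custom template was actually used.
for event in token_counter.llm_token_counts:
    print(event.prompt)
    print(event.completion)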
Damn man, you've got all the answers, you're gonna put the AIs out of a job lol! Much appreciated!
LOL I know the codebase/docs inside and out πŸ˜‰
@Logan M I can tell lol. Also, that solution totally worked, but I've run into some issues trying to pass a custom prompt template to a chat_engine. I'm glad I can see the prompt output now though, because I didn't realize it was still using the default prompt. I can't tell if the issue is because I'm using context mode for the chat engine, but I feel like I've tried every possible configuration to get it to use my custom prompt, and I'm kind of stuck at this point. Is it just not possible to pass a custom prompt to a chat engine directly?
Actually, I may just try to use a normal query engine. Do you know if there are any other benefits to using a chat engine rather than a query engine, other than the added statefulness of the chat_history? I'm currently handling chat history on the client side, so I'm not necessarily utilizing that functionality if that's all it's useful for.
Since the context chat engine only uses the retriever, the text_qa_template never comes into play; the only place for a custom prompt is the system prompt:

Plain Text
index.as_chat_engine(chat_mode="context", system_prompt="Talk like a pirate.")
The benefit is just having the chat history. You could also manually manage it and insert the chat history into the query or the prompt template πŸ‘
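For example, here's a rough sketch of folding client-side history into a plain query engine's text_qa_template (reusing the index from above; the prompt wording and history format are illustrative, not from this thread):

Python
from llama_index.prompts import Prompt

# Hypothetical client-side history: (user, assistant) turns.
chat_history = [
    ("What is LlamaIndex?", "A framework for building LLM apps over your data."),
]
history_str = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in chat_history)

# Fold the history into the QA template so a stateless query engine still
# sees the conversation. {context_str} and {query_str} are the placeholders
# the text_qa_template expects.
CHAT_PROMPT = (
    "Chat history:\n" + history_str + "\n\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context and the chat history, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

query_engine = index.as_query_engine(text_qa_template=Prompt(CHAT_PROMPT))
print(query_engine.query("And how do I install it?"))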