The community member is using index.as_chat_engine() with OpenAI models and TextNode objects of PDF page texts, with id_ attributes that point to PDF file pages. They want the ChatEngine to perform retrieval on the embeddings of the textual pages, but when the engine sends the retrieved nodes to the model, they want it to send them as images of the PDF pages. They ask whether it is possible to alter or extend the ChatEngine behavior to achieve this.
In the comments, one community member suggests building a custom chat engine using the language model and retriever directly. Another notes that this is a fun use case for workflows, though it can be done without them.
I am using index.as_chat_engine() with OpenAI models. I use TextNode objects of PDF page texts with id_ attributes that point to PDF file pages. When the user queries something, I want this ChatEngine to perform retrieval on the embedding of that textual page, but when the engine sends the retrieved nodes to the model, I want it to send them as images of those PDF pages.
Is it somehow possible to alter/extend this ChatEngine behaviour so that it works as described?
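The custom-chat-engine approach suggested in the comments can be sketched roughly as follows. This is a sketch under assumptions, not a confirmed implementation: it assumes the node id_ values follow a hypothetical "filename.pdf::page_N" convention and that the page images have been pre-rendered to disk; the parsing helper and directory layout are illustrative, and the commented llama-index calls are shown as an outline rather than an exact, verified API.

```python
import os

# Hypothetical id_ convention: "report.pdf::page_3" means page 3 of report.pdf.
# Your own id_ format will differ; adapt the parsing below accordingly.
def node_id_to_image_path(node_id: str, image_dir: str = "page_images") -> str:
    """Map a TextNode id_ to the path of that PDF page's pre-rendered image."""
    filename, _, page = node_id.partition("::")
    stem = os.path.splitext(filename)[0]
    return os.path.join(image_dir, f"{stem}_{page}.png")

# The custom chat loop would then look roughly like this: retrieve over the
# text embeddings as usual, but build the multimodal prompt from page images
# instead of the node text (sketch only -- verify names against your
# llama-index version):
#
#   retriever = index.as_retriever(similarity_top_k=3)
#   nodes = retriever.retrieve(user_query)            # text-based retrieval
#   blocks = [TextBlock(text=user_query)] + [
#       ImageBlock(path=node_id_to_image_path(n.node.id_))  # swap text for images
#       for n in nodes
#   ]
#   response = llm.chat([ChatMessage(role="user", blocks=blocks)])
```

The key point is that retrieval and prompt construction are decoupled: the retriever still scores the text embeddings, and only afterwards are the retrieved id_ values translated into image inputs for the multimodal model.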