Hello everyone!!! I would like some help/opinions: what "parsing instructions" would you use with LlamaParse to extract only the text and tables from a textbook like the one in the image?
I have tried again and again with multiple instructions, but it's obvious that I can't use this great feature properly. I mainly use the "accurate" and "premium" modes. Any suggestions are welcome! Thank you very much.
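For context, this is the kind of instruction I have been experimenting with (just my own wording, not from the docs):

```
This document is a textbook. Extract only the body text and the tables.
Ignore page headers, footers, page numbers, figures, and image captions.
Output tables as markdown tables, preserving column headers.
```

I pass it via the `parsing_instruction` parameter when constructing the parser, e.g. `LlamaParse(parsing_instruction=..., result_type="markdown", premium_mode=True)`, but I am not sure my phrasing is what the parser expects.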
Hello @Logan M! Sorry to bother you. Is it possible that the reranker is skipped when using a chat_engine (context mode, with a memory buffer) on top of a query_engine? I am using Cohere as the reranker, as specified in the example notebook. I checked the API key usage in the Cohere dashboard and it doesn't seem to receive any calls.
# Define the query engine with the reranker as a postprocessor
query_engine = RetrieverQueryEngine.from_args(
    callback_manager=callback_manager,
    retriever=retriever,
    response_mode="compact",
    verbose=True,
    node_postprocessors=[reranker],
)
memory = ChatMemoryBuffer.from_defaults(chat_history=messages or [])
## Define the chat engine on the query engine, with a memory buffer and context mode
chat_engine = index.as_chat_engine(
    query_engine=query_engine,
    chat_mode="context",
    memory=memory,
    verbose=True,
)
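For reference, I also tried passing the reranker directly to the chat engine, in case context mode builds its own retriever and ignores the query engine's postprocessors (my assumption is that extra kwargs are forwarded to `ContextChatEngine.from_defaults`):

```python
# Sketch (assumption): in "context" mode the chat engine retrieves with the
# index's own retriever, so the reranker must be attached here rather than
# on the query_engine for it to run at all.
chat_engine = index.as_chat_engine(
    chat_mode="context",
    memory=memory,
    node_postprocessors=[reranker],
    verbose=True,
)
```

With this variant I would expect calls to show up in the Cohere dashboard, but please correct me if that's not how context mode works.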
Hello everyone! For llama-index, is it possible to use the capabilities of a knowledge graph (property graphs) in a workflow that already uses QueryFusionRetriever, by fusing BM25 + vector retrieval?
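To make the question concrete, here is roughly what I have in mind (a sketch only; `vector_retriever`, `bm25_retriever`, and `pg_index` are assumed to be built elsewhere, with `pg_index` a PropertyGraphIndex):

```python
from llama_index.core.retrievers import QueryFusionRetriever

# Sketch: since the property-graph index exposes a standard retriever,
# it could in principle be added as a third source in the fusion,
# alongside BM25 and vector retrieval.
fusion_retriever = QueryFusionRetriever(
    [vector_retriever, bm25_retriever, pg_index.as_retriever()],
    similarity_top_k=5,
    num_queries=1,  # disable extra query generation
    mode="reciprocal_rerank",
)
```

Is that a reasonable way to combine them, or is there a recommended pattern for mixing graph retrieval into a fusion setup?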