
https://docs.llamaindex.ai/en/stable/

At a glance

Community members report that the Pandas query engine does not work reliably with open-source LLMs such as Zephyr 7B, and they are looking for guidance on working with CSV files and building a RAG chatbot. Commenters note that getting open-source LLMs to produce specific or parseable output takes significant work to make reliable. One member found that the CodeLlama instruct model works but generates output slowly, and also asked how to enable memory/chat history in the Pandas query engine.

Useful resources
https://docs.llamaindex.ai/en/stable/examples/pipeline/query_pipeline_pandas.html

The Pandas query engine is not working properly with open-source LLMs like Zephyr 7B. Is there any guide for working with CSV files and building a RAG chatbot?
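For context, the pattern behind a pandas query engine can be sketched in plain Python: the LLM is shown the DataFrame schema and the question, asked to return a single pandas expression, and the engine evaluates that expression against the DataFrame. This is a minimal illustration, not LlamaIndex's actual implementation; `fake_llm` and the column names are stand-ins.

```python
import pandas as pd

# Toy data standing in for a loaded CSV (hypothetical columns).
df = pd.DataFrame({"city": ["Paris", "Rome", "Oslo"],
                   "population": [2_100_000, 2_800_000, 700_000]})

def fake_llm(prompt: str) -> str:
    """Stand-in for Zephyr/CodeLlama: returns a pandas expression."""
    return "df.loc[df['population'].idxmax(), 'city']"

def pandas_query(question: str, df: pd.DataFrame, llm) -> object:
    # Build a prompt from the schema plus the question, ask the LLM
    # for one pandas expression, then evaluate it against df.
    prompt = (f"You are given a DataFrame `df` with columns "
              f"{list(df.columns)}.\n"
              f"Return one pandas expression answering: {question}")
    code = llm(prompt).strip()
    return eval(code, {"df": df, "pd": pd})

print(pandas_query("Which city is largest?", df, fake_llm))  # → Rome
```

Because the model's reply is eval'd as code, the whole pipeline is only as reliable as the model's ability to emit a clean, correct expression, which is exactly where smaller open models tend to struggle.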
3 comments
Did you set up the prompts properly for Zephyr? In any case, using open-source LLMs for very specific outputs, or outputs you need to parse, requires a lot of work to make reliable.
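One concrete part of that "work to make reliable" is normalizing the model's reply before evaluating it: instruct-tuned open models often wrap code in markdown fences or prepend prose. A hedged sketch of such a cleanup step (the helper name and heuristics are illustrative, not a LlamaIndex API):

```python
import re

def extract_pandas_expr(reply: str) -> str:
    """Pull a bare pandas expression out of a chatty model reply.

    Open models often wrap code in markdown fences or add
    explanatory prose, so the output has to be normalized
    before it can be eval'd.
    """
    # Prefer a fenced code block if the model emitted one.
    m = re.search(r"```(?:python)?\s*(.+?)\s*```", reply, re.DOTALL)
    if m:
        return m.group(1).strip()
    # Otherwise take the first line that mentions `df`.
    for line in reply.splitlines():
        if "df" in line:
            return line.strip()
    return reply.strip()

# Typical chatty reply from an instruct model:
reply = "Sure! Here you go:\n```python\ndf['age'].mean()\n```"
print(extract_pandas_expr(reply))  # → df['age'].mean()
```

Heuristics like these are brittle, which is why a carefully formatted prompt for the specific model (e.g. Zephyr's chat template) matters so much.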
The CodeLlama instruct model seems to work but takes a long time to produce output. Is there any guide on making generation faster?
Also, how can memory/chat history be enabled in the Pandas query engine?
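On the memory question: a stateless query engine can be given chat history by wrapping it and prepending prior turns to each new question. This is one possible approach under the assumption that the engine exposes a `.query(str)` method, not a built-in LlamaIndex feature; the wrapper and stub classes below are hypothetical.

```python
class ChatMemoryWrapper:
    """Wrap a stateless query engine with simple chat history.

    Prior turns are prepended to each new question so follow-ups
    like "and the average?" keep their context. `engine` is any
    object with a .query(str) method (assumed interface).
    """
    def __init__(self, engine, max_turns: int = 5):
        self.engine = engine
        self.history: list[tuple[str, str]] = []
        self.max_turns = max_turns

    def chat(self, question: str) -> str:
        # Render the most recent turns as context for the engine.
        context = "\n".join(f"User: {q}\nAnswer: {a}"
                            for q, a in self.history[-self.max_turns:])
        prompt = f"{context}\nUser: {question}" if context else question
        answer = str(self.engine.query(prompt))
        self.history.append((question, answer))
        return answer

class EchoEngine:  # stub standing in for a real pandas query engine
    def query(self, prompt: str) -> str:
        return f"saw {prompt.count('User:')} user turn(s)"

bot = ChatMemoryWrapper(EchoEngine())
first = bot.chat("How many rows?")    # no prior context on turn one
second = bot.chat("And columns?")     # turn two carries the history
```

Capping the history at `max_turns` keeps the prompt short, which also helps with the slow-generation problem, since generation time grows with prompt length.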