cmsimike
Hi! I'm trying to understand the difference in output I'm seeing between using OpenAI and a model hosted on Hugging Face (wrapped in HuggingFaceLLM when I create an LLM for the service context) with the NLSQLTableQueryEngine. If I use OpenAI directly, I get a whole preamble of:
"Given an input question, first create a syntactically correct sqlite query to run, then look at the results of the query and return the answer"
plus a lot more prompt text prepping OpenAI for a response. But if I wrap a model in the HuggingFaceLLM class, only the SQL tables get dumped out to the LLM.
Overall performance of non-OpenAI models has been poor in comparison (which might be a "no-duh" comment, even using the 70B Llama 2) when using NLSQLTableQueryEngine, but I just want to make sure I've explored this path as much as I can before falling back to OpenAI.
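
For reference, here is a minimal sketch of the setup being described, assuming the ServiceContext-era llama_index API (0.9.x; import paths differ in newer releases). The model name, database URL, and table name are placeholders, not from the original post.

```python
from sqlalchemy import create_engine
from llama_index import ServiceContext, SQLDatabase
from llama_index.llms import HuggingFaceLLM
from llama_index.indices.struct_store.sql_query import NLSQLTableQueryEngine

# Wrap a Hugging Face model for llama_index (placeholder model name)
llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-70b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-70b-chat-hf",
    context_window=4096,
    max_new_tokens=256,
    device_map="auto",
)

# Service context carrying the custom LLM; embed_model="local" avoids
# a fallback to OpenAI embeddings (requires sentence-transformers)
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")

# Placeholder SQLite database and table
engine = create_engine("sqlite:///example.db")
sql_database = SQLDatabase(engine, include_tables=["city_stats"])

# Text-to-SQL query engine using the wrapped model
query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    service_context=service_context,
)

response = query_engine.query("Which city has the highest population?")
print(response)
```

(The quoted preamble appears to come from llama_index's default text-to-SQL prompt template, so the comparison above is really about how that prompt reaches each LLM wrapper.)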
If you don't mind me asking, what makes you think it would be challenging? I ask only because, if this is more effort than it's worth, I might try another approach; I don't want to spin my wheels here, fighting against the grain.