Has anyone faced issues with llama3 with the Ollama integration with LlamaIndex?
Same prompt, no custom parameters, different responses from ollama run llama3 and llm.complete.

It feels way dumber when called through llm.complete.

I want llama3 to fix a JSON string. With ollama run llama3 it returns perfect JSON, while llm.complete mangles the structure and returns only 10-20% of the string.
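For anyone trying to reproduce this, here is a minimal sketch of the llm.complete path. The model name, timeout, and the broken JSON input are placeholder assumptions; pinning temperature to 0 at least removes sampling variance from the comparison, since differing default sampling parameters are one plausible source of the mismatch.

```python
# Minimal repro sketch: calling llama3 through LlamaIndex's Ollama integration.
# Assumes a local Ollama server and the llama-index-llms-ollama package.
from llama_index.llms.ollama import Ollama

llm = Ollama(
    model="llama3",
    request_timeout=120.0,  # assumption: a generous timeout for longer outputs
    temperature=0.0,        # pin sampling so runs are comparable
)

broken_json = '{"name": "Ada", "age": 36'  # placeholder malformed input
prompt = "Fix this JSON and return only the corrected JSON:\n" + broken_json

response = llm.complete(prompt)
print(response.text)
```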
I saw you opened a GitHub issue. If you provide a reproducible example, happy to take a look.
Provided on GitHub.
Posted an update. Tbh it worked well for me lol, but curious to see if using chat() and updating helps for you.
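For comparison, a rough sketch of the chat() path mentioned above, under the same assumptions as the earlier snippet. The json_mode flag is an assumption about the integration's option for requesting JSON-formatted output from Ollama; drop it if your version doesn't support it.

```python
# Sketch of the chat() path: routes the request through Ollama's chat API
# with explicit messages instead of a bare completion prompt.
from llama_index.core.llms import ChatMessage
from llama_index.llms.ollama import Ollama

llm = Ollama(
    model="llama3",
    request_timeout=120.0,
    json_mode=True,  # assumption: asks Ollama for JSON-formatted output
)

messages = [
    ChatMessage(role="system", content="You repair malformed JSON."),
    ChatMessage(role="user", content='Fix this JSON: {"name": "Ada", "age": 36'),
]

response = llm.chat(messages)
print(response.message.content)
```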