has anyone faced issues with llama3 via the Ollama integration in LlamaIndex? same prompt, no custom parameters, but i get different responses from ollama run llama3 and llm.complete
the model feels way dumber when called through llm.complete
i'm asking llama3 to fix a malformed json: ollama run llama3 returns a perfect json, while llm.complete messes up the structure and returns only 10-20% of the string
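for reference, this is roughly the call i mean (a minimal sketch, not my exact code; the prompt, broken json, and timeout value are placeholders):

```python
# minimal repro sketch -- request_timeout is just a value i picked,
# and the broken json below is a hypothetical stand-in for my real input
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3", request_timeout=120.0)

broken_json = '{"name": "test", "values": [1, 2, 3'  # placeholder malformed input
response = llm.complete(
    f"Fix this JSON and return only the corrected JSON:\n{broken_json}"
)
print(response.text)  # comes back truncated / restructured for me
```

running the same prompt interactively with ollama run llama3 gives the full corrected json every time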