I'm kinda curious, why is LlamaIndex so tightly integrated with OpenAI as opposed to open-source alternatives? It seems a bit misleading when Llama is in the name
There's a decent amount of support in the library these days for open-source stuff. Probably the biggest glaring issue is that the defaults are OpenAI, which can sometimes be frustrating to change
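For reference, overriding those defaults currently looks something like this. This is just a sketch assuming a recent 0.9.x release, with Ollama serving a local LLM and a HuggingFace embedding model; the specific model names are only illustrative:

```python
from llama_index import (
    ServiceContext,
    VectorStoreIndex,
    SimpleDirectoryReader,
    set_global_service_context,
)
from llama_index.llms import Ollama
from llama_index.embeddings import HuggingFaceEmbedding

# Replace the OpenAI defaults with open-source models
service_context = ServiceContext.from_defaults(
    llm=Ollama(model="llama2"),  # local LLM served via Ollama
    embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
)

# Set it once globally so you don't have to pass it into every index/query call
set_global_service_context(service_context)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
```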
Hoping to fix that in the coming weeks by replacing the service context with proper global settings
I see, thanks for the response! Hoping the newer open-source models are able to meet those expectations. If not all from a single model, then at least through different specialized models for specific tasks.