Getting started with LlamaIndex for making LLM calls and deploying
At a glance
The community member is new to LlamaIndex and is looking for guidance on how to build a chain of LLM (Large Language Model) calls with loops, deploy it, and observe how it behaves when handling multiple requests at the same time. Another community member asks whether this can be done with structured outputs from OpenAI and whether LlamaIndex provides wrappers for that. The response confirms that it can, by passing strict=True, and links to the relevant LlamaIndex documentation.
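A minimal sketch of what that structured-output call might look like, assuming a recent llama-index install with the OpenAI integration (llama-index-llms-openai) and an OPENAI_API_KEY in the environment. The model name and the Invoice schema here are illustrative, not from the thread:

```python
from pydantic import BaseModel


class Invoice(BaseModel):
    """Illustrative schema the LLM is asked to fill in."""
    vendor: str
    total: float


def extract_invoice(text: str) -> Invoice:
    # Lazy imports so the schema above is usable without llama-index installed.
    from llama_index.core.prompts import PromptTemplate
    from llama_index.llms.openai import OpenAI

    # strict=True asks OpenAI to enforce the JSON schema exactly,
    # per the LlamaIndex docs linked in the thread.
    llm = OpenAI(model="gpt-4o-mini", strict=True)
    return llm.structured_predict(
        Invoice,
        PromptTemplate("Extract the invoice fields from: {text}"),
        text=text,
    )
```

Calling `extract_invoice("Acme Corp billed $12.50")` would return a validated `Invoice` instance rather than raw text, which makes the looping chain described above easier to wire together.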