Getting started with LlamaIndex for making LLM calls and deploying

At a glance

The community member is new to LlamaIndex and is looking for guidance on how to start making a chain of LLM (Large Language Model) calls with loops, and on deploying them to see how they behave under multiple concurrent requests. Another community member asks whether this can be done with structured outputs from OpenAI, and whether LlamaIndex provides wrappers for that. The response indicates that yes, this can be done by passing strict=True, and provides a link to the relevant LlamaIndex documentation.

Useful resources
I am very new to LlamaIndex. Where should I start for:
  1. making a chain of LLM calls with loops
  2. deploying and seeing how it behaves with multiple requests at the same time
Could you please point me to the core concepts/docs that would help?
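For the first point, chaining LLM calls in a loop mostly comes down to feeding each response back into the next prompt. Below is a minimal sketch, assuming the `llama-index-llms-openai` package is installed and `OPENAI_API_KEY` is set; the prompt wording, step count, and the `refine_prompt`/`run_chain` helper names are illustrative, not part of LlamaIndex's API.

```python
def refine_prompt(previous_answer: str) -> str:
    """Build the next prompt in the chain from the prior LLM output."""
    return f"Improve the following answer, fixing any errors:\n\n{previous_answer}"

def run_chain(llm, question: str, steps: int = 3) -> str:
    """Call the LLM repeatedly, feeding each answer back in.

    `llm` is any object with a .complete(prompt) method returning an
    object with a .text attribute (LlamaIndex LLMs fit this shape).
    """
    answer = llm.complete(question).text
    for _ in range(steps - 1):
        answer = llm.complete(refine_prompt(answer)).text
    return answer

if __name__ == "__main__":
    # Requires: pip install llama-index-llms-openai
    from llama_index.llms.openai import OpenAI
    llm = OpenAI(model="gpt-4o-mini")  # model name is an example
    print(run_chain(llm, "Explain what a vector index is."))
```

Because `run_chain` only relies on the `.complete(...)` shape, you can swap in any LlamaIndex LLM class, or a stub for local testing, without changing the loop.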
Can I do this with structured outputs from OpenAI as well? Does LlamaIndex provide wrappers for that too?
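Per the answer above, passing strict=True enables OpenAI's strict structured-output mode. A hedged sketch of how this might look with LlamaIndex's structured-LLM wrapper is below; the `Invoice` schema and prompt are made up for illustration, and the exact response attributes may vary by LlamaIndex version, so check the current docs.

```python
from pydantic import BaseModel, Field

class Invoice(BaseModel):
    """Example schema the LLM is asked to fill; fields are illustrative."""
    vendor: str = Field(description="Name of the billing vendor")
    total: float = Field(description="Invoice total in USD")

if __name__ == "__main__":
    # Requires: pip install llama-index-llms-openai
    from llama_index.llms.openai import OpenAI

    # strict=True asks OpenAI to enforce the JSON schema exactly
    llm = OpenAI(model="gpt-4o-mini", strict=True)

    # as_structured_llm wraps the LLM so completions are parsed
    # into the given Pydantic model
    sllm = llm.as_structured_llm(Invoice)
    response = sllm.complete("Acme Corp billed $120.50 for hosting.")
    invoice = response.raw  # parsed Invoice instance (per current docs)
    print(invoice.vendor, invoice.total)
```

The Pydantic model doubles as validation: even without the LLM, `Invoice(vendor="Acme Corp", total=120.5)` enforces the field types, which is what strict mode guarantees on the API side.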