Updated 2 months ago

Hey guys - quick one - is there a way of using LlamaIndex without using RAG? i.e. just for generating a response using Azure OpenAI?
The LLM objects all have complete() and chat() methods -- you can call those directly, without building an index or query engine.
awesome, sorry to sound dumb, do you have an example?
from llama_index.core.llms import ChatMessage  # import path for llama-index >= 0.10

resp = llm.complete("Hello world")
resp = llm.chat([ChatMessage(role="user", content="Hello world")])
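For the Azure OpenAI case in the original question, a minimal sketch might look like the following. The deployment name, endpoint, API key, and API version are all placeholders for your own Azure resource, and the imports assume llama-index >= 0.10 with the llama-index-llms-azure-openai integration package installed:

```python
# Sketch: calling a LlamaIndex LLM directly, with no RAG pipeline involved.
# Assumes: pip install llama-index-llms-azure-openai
from llama_index.core.llms import ChatMessage
from llama_index.llms.azure_openai import AzureOpenAI

# All values below are placeholders -- substitute your own Azure deployment details.
llm = AzureOpenAI(
    engine="my-deployment-name",   # the deployment name from the Azure portal
    model="gpt-4o",
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-15-preview",
)

# One-shot completion
resp = llm.complete("Hello world")
print(resp.text)

# Chat-style call
chat_resp = llm.chat([ChatMessage(role="user", content="Hello world")])
print(chat_resp.message.content)
```

Note that `engine` refers to the Azure deployment name, which can differ from the underlying `model` name; this is a common source of confusion when configuring Azure OpenAI.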