Hi

Hi,
I am new to LlamaIndex.
I want to use LlamaIndex for my task, but in most of the tutorials I have gone through, people are using OpenAI. I have a Falcon model that is exposed as an API, and I created a function that takes a context, a question, and a model name as input and returns the generated text as output, but I am confused about how to use this with LlamaIndex.
Can someone please guide me on how I can declare my LLM?
17 comments
Hi @Logan M
You mean like this class ---> class OurLLM(CustomLLM):
Yea, exactly. Then the complete calls would call your API
Hi @Logan M
I am getting an error; can you please help me?
For your reference, I have attached the screenshot.
Attachment
image.png
Ah, I think the input is intended to be a Path object:

from pathlib import Path
load_data(file=Path(...))
Thanks @Logan M, now it is working.
Hi @Logan M,
I am trying to create a multimodal chatbot using LlamaIndex for the backend and Gradio for the frontend. I have already created the frontend; can you please guide me through the backend steps?
Hi @Logan M
What are the best values for these two parameters?
Plain Text
context_window = 2048
num_output = 128
because sometimes I am getting an incomplete response. I have attached the screenshot; please guide me on how I can do better.
Attachment
image.png
It's tough when using a model like Falcon, which has such a small context window; 2048 is the max for Falcon.

You can try increasing num_output a bit, to maybe 256. You'll also want to make sure your LLM is set up to actually generate 256 tokens too.
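A quick sketch of why these two numbers trade off (values assumed from the thread; in legacy llama_index they are typically passed to ServiceContext.from_defaults): LlamaIndex reserves num_output tokens of the context window for the answer, so the rest is the budget for the prompt itself.

```python
# Assumed values from the thread: Falcon's 2048-token window,
# with num_output bumped up from 128 to 256.
context_window = 2048
num_output = 256

# Tokens left for the prompt (query + retrieved chunks) after
# reserving num_output tokens for the generated answer:
prompt_budget = context_window - num_output
print(prompt_budget)  # 1792
```

So raising num_output reduces truncated answers but shrinks how much retrieved context fits in each prompt.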
Hi @Logan M
Can you please help me sort out the error below?
Attachment
image.png
documents is not a list (it's just a single document), so you need to wrap it in a list:
.from_documents([documents])
Thanks, it's working.
Hi @Logan M
How can I do QA if my data is available in a database?
Can you please guide me on how to do this?