Find answers from the community

farez
Hi all. Need some pointers on using llama-index with a proxy (pythonanywhere.com requires that API calls go through their proxy).

How do I get llama-index, or more specifically, the pinecone library, to call Pinecone through a proxy?

Here's the error and explanation from pythonanywhere on the issue: https://help.pythonanywhere.com/pages/403ForbiddenError/#errno-111-connection-refused
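In case it helps anyone landing on this thread: PythonAnywhere routes all outbound traffic through an HTTP proxy, and most Python HTTP stacks (requests, urllib3, httpx — used under the hood by the Pinecone and OpenAI clients) honor the standard proxy environment variables. A minimal sketch, assuming PythonAnywhere's documented proxy address (`proxy.server:3128` — verify against their help pages):

```python
import os

# PythonAnywhere's documented proxy address (an assumption here --
# double-check their help pages for the current host/port).
PROXY = "http://proxy.server:3128"

# requests, urllib3, and httpx all pick these up automatically,
# so clients built on them will route through the proxy.
os.environ["HTTP_PROXY"] = PROXY
os.environ["HTTPS_PROXY"] = PROXY

# Newer Pinecone clients also accept an explicit proxy setting, e.g.:
# from pinecone import Pinecone
# pc = Pinecone(api_key="...", proxy_url=PROXY)  # check your client version
```

Setting the environment variables before any client is constructed is the least invasive option, since it needs no llama-index or Pinecone configuration changes.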
3 comments
Hi all. Complete newbie here, spent the whole day looking for an answer, so I hope someone can help.

I've been able to create a chatbot, but now I want to be able to store the chat memory somewhere so that the user can continue their chat later in a different session.

As I understand it, LlamaIndex only keeps the memory within the same session. If so, what's the recommended method for saving the chat memory in a database and retrieving it later during another chat session?

Thank you!
10 comments
Hi all. LlamaIndex newbie here! Hope I can get some guidance from you guys here on a particular use case. I have been working on this with LlamaIndex, but stuck on a specific aspect of it. Details below:

I'm building a tool that helps people enquire about and submit applications. The kind that requires a lot of paperwork, e.g. resident or work visa applications.

When using the tool, we assume that the applicant could either be starting from zero, or they have a specific issue they need clarification on.

If starting from zero, they would be in advice-seeking mode, where they just need to find out what they need to do, what forms to fill in, who they need to talk to, and what the process looks like.

If they're asking about a specific issue, they just need answers and advice on how to complete a particular form or task, for example.

In both cases, the tool should be able to act as an advisor, asking the applicant questions until it fully understands what the applicant needs. Only then will it provide appropriate advice, including which forms to fill in, what documents they need to provide, where to submit them, etc.

The RAG part is fine. I've managed to ingest various documentation and use the Sub Question Query Engine to answer complex questions.

But I'm now stuck on the interactive Q&A part. How do I get the tool (agent?) to figure out what level of information the user needs and guide them appropriately via Q&A? It won't work if it's just the user asking questions and the agent answering them, because the agent should also be able to surface categories of information that the user isn't aware of.

Is the Sub Question Query Engine still the appropriate tool for this? Is there another feature of LlamaIndex I should be looking at (e.g. a chat engine in agent mode)? Or is it just a case of writing the right prompts?

Any tips appreciated. Thanks!
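One common way to frame the "agent asks the questions" part is slot filling: define the facts the advisor must know before it can advise, and have the loop ask about whichever required fact is still missing, instead of only reacting to the user's questions. A minimal, LLM-free sketch of that control loop (the slot names and questions are made up for illustration, not from any real application process):

```python
from typing import Optional

# Required facts the advisor needs before giving full advice, each paired
# with the clarifying question to ask while the fact is still unknown.
# (Slot names here are hypothetical examples.)
REQUIRED_SLOTS = {
    "application_type": "Are you applying for a resident visa or a work visa?",
    "nationality": "What is your nationality?",
    "current_status": "Are you currently in the country, and on what status?",
}

def next_question(known: dict) -> Optional[str]:
    """Return the next clarifying question, or None once all slots are filled."""
    for slot, question in REQUIRED_SLOTS.items():
        if not known.get(slot):
            return question
    return None  # enough context gathered; safe to generate advice

def ready_to_advise(known: dict) -> bool:
    return next_question(known) is None
```

In a LlamaIndex setup the same idea can live in the system prompt ("keep asking clarifying questions until you know X, Y, and Z") or be exposed as a tool an agent calls to check what is still missing; once the context is complete, the Sub Question Query Engine can answer over the ingested documents as before.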
12 comments
Hi all. I'm thinking of moving our SaaS's AI backend to LlamaIndex. But I can't seem to find any licensing details. Is it ok to use LlamaIndex for commercial purposes, in our SaaS (we're bunni.ai)? Thanks.

/cc @leifjerami
2 comments