korzhov_dm
Offline, last seen 3 months ago
Joined September 25, 2024
hey!

Is it possible to set up a custom retry?
1 comment
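The post doesn't say which call needs retrying, so here is a generic, library-agnostic sketch of a custom retry decorator with exponential backoff; the function and parameter names are illustrative, not from any particular library.

```python
import time


def retry(max_attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Retry a function with exponential backoff between attempts."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except retry_on:
                    # Re-raise on the final attempt instead of swallowing it.
                    if attempt == max_attempts - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator
```

Any flaky call (an LLM request, a vector store query) can then be wrapped with `@retry(max_attempts=3, retry_on=(TimeoutError,))` without changing the call site.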
hey)

Is there any info on how to add memory to my current LlamaIndex index (GPTVectorStoreIndex) and add a system message for my bot?
59 comments
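A minimal sketch of one way to get both memory and a system message, assuming a llama_index version whose `as_chat_engine` accepts `chat_mode` and `system_prompt` (the helper name is hypothetical):

```python
def build_chat_bot(index, system_prompt):
    """Wrap a vector index in a chat engine with history and a persona."""
    # "context" chat mode keeps the conversation history (memory) across
    # turns and retrieves from the index on every turn; system_prompt sets
    # the bot's system message.
    return index.as_chat_engine(
        chat_mode="context",
        system_prompt=system_prompt,
    )

# Usage (assumes `index` is your GPTVectorStoreIndex):
# chat_engine = build_chat_bot(index, "You are a helpful support bot.")
# print(chat_engine.chat("hey!"))
```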
Hey guys!

I ran into a problem: when I use a Pinecone index and try to query it, I get the error shown in the image below.

Maybe you know where I made a mistake.
13 comments
Any ideas why the reference you provided contains this?

import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

I noticed that such code appears only in the Pinecone example.
1 comment
What about other metadata, like category?
1 comment
Guys, one question:

I have 2 databases: the first for documents (a lot of documents) and the second for tickets (a lot of tickets). My idea is to create 2 indexes, one for each, and somehow combine them so I can ask questions in one single place. I have looked at ListIndex, but I think it is not what I need.

So, that's my question :)

Sorry for mentioning you, but you always give me advice that instantly works)))
8 comments
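One way to query two indexes from a single place is a sub-question engine that routes parts of a question to each index. This is a hedged sketch assuming a llama_index version that ships `SubQuestionQueryEngine` and `QueryEngineTool`; the function name and tool descriptions are illustrative.

```python
def build_combined_engine(docs_index, tickets_index):
    """Combine a documents index and a tickets index behind one engine."""
    from llama_index.query_engine import SubQuestionQueryEngine
    from llama_index.tools import QueryEngineTool, ToolMetadata

    tools = [
        QueryEngineTool(
            query_engine=docs_index.as_query_engine(),
            metadata=ToolMetadata(name="docs", description="Company documents"),
        ),
        QueryEngineTool(
            query_engine=tickets_index.as_query_engine(),
            metadata=ToolMetadata(name="tickets", description="Support tickets"),
        ),
    ]
    # The engine decomposes the user's question into sub-questions, sends
    # each one to the right index, and synthesizes a single answer.
    return SubQuestionQueryEngine.from_defaults(query_engine_tools=tools)
```

Unlike ListIndex (which just concatenates), this keeps the two data sources separate and picks between them per question.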
Guys, hey!

I have used both LangChain and LlamaIndex for indexing and noticed one major difference: LangChain gives me an answer based on several sources (not only one document) and returns those sources, while LlamaIndex always gives me a response based on one single document. Is it possible to get the same behavior as LangChain, i.e. answers based on several sources?
10 comments
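In llama_index the number of retrieved sources is controlled by `similarity_top_k` on the query engine, so raising it bases the answer on several documents. A minimal sketch (the helper name is hypothetical; `response_mode="tree_summarize"` is one of the library's synthesis modes):

```python
def build_multi_source_engine(index, top_k=5):
    """Answer from the top_k retrieved chunks instead of a single one."""
    # Retrieve `top_k` chunks across documents and synthesize one answer
    # over all of them.
    return index.as_query_engine(
        similarity_top_k=top_k,
        response_mode="tree_summarize",
    )

# Usage:
# response = build_multi_source_engine(index).query("...")
# `response.source_nodes` then lists every source that was used, much like
# LangChain's returned sources.
```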
Hey guys!

I have a CSV with content and some metadata, like a link to the content and the title of the content.

I already wrote code that lets me ask questions and get responses. But I have one simple question: how can I display not only the answers, but also the metadata of the answers, like the title and link?
17 comments
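A sketch of the display side, assuming each CSV row was ingested as a Document with `metadata={"title": ..., "link": ...}` attached; the `format_answer` helper is hypothetical, and the llama_index part is shown in the usage comment because it needs a live query engine:

```python
def format_answer(answer_text, sources):
    """Combine an answer with the title/link metadata of its sources."""
    lines = [answer_text]
    for meta in sources:
        # Fall back to "?" if a row was ingested without title or link.
        lines.append(f"- {meta.get('title', '?')}: {meta.get('link', '?')}")
    return "\n".join(lines)

# Usage with a real query engine: each retrieved chunk carries its
# document's metadata on `node.metadata`.
# response = query_engine.query("...")
# print(format_answer(str(response),
#                     [n.node.metadata for n in response.source_nodes]))
```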
hey!

I loaded documents following these docs (https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/PineconeIndexDemo.html) and now I have a question: how can I load documents via Pinecone instead of creating the indexes every time?

I already have all the indexes in my Pinecone account, so how can I query them with LlamaIndex and get quick access to them?
7 comments
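A hedged sketch of attaching to an already-populated Pinecone index instead of re-ingesting documents, assuming llama-index with the Pinecone integration installed and the `pinecone` client already initialized with your API key and environment (the function and index names are illustrative):

```python
def load_existing_pinecone_index(index_name):
    """Build a queryable index on top of an existing Pinecone index."""
    import pinecone
    from llama_index import VectorStoreIndex
    from llama_index.vector_stores import PineconeVectorStore

    vector_store = PineconeVectorStore(pinecone_index=pinecone.Index(index_name))
    # No documents are loaded or embedded here: the embeddings already
    # live in Pinecone, so this is fast.
    return VectorStoreIndex.from_vector_store(vector_store=vector_store)

# Usage (hypothetical index name):
# index = load_existing_pinecone_index("my-docs")
# print(index.as_query_engine().query("What is in my docs?"))
```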
Hey @Logan M!

How are you doing?

Can you assist me a bit, please?

I've been trying to use Anthropic in the basic Q&A flow, but I guess I did something wrong :(

Here is my code example:
from llama_index import LLMPredictor
from llama_index.llms import Anthropic
llm = Anthropic(api_key=api_key)
llm_predictor = LLMPredictor(llm=llm)
2 comments
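A hedged sketch of wiring Anthropic into the basic Q&A flow, assuming a llama_index version where the LLM is passed through a `ServiceContext` rather than used as a bare `LLMPredictor` (the function name is hypothetical):

```python
def build_anthropic_query_engine(documents, api_key):
    """Index documents and answer questions with Anthropic's Claude."""
    from llama_index import ServiceContext, VectorStoreIndex
    from llama_index.llms import Anthropic

    # Attach the LLM via a ServiceContext so the index uses Claude for
    # response synthesis instead of the default OpenAI model.
    service_context = ServiceContext.from_defaults(llm=Anthropic(api_key=api_key))
    index = VectorStoreIndex.from_documents(
        documents, service_context=service_context
    )
    return index.as_query_engine()
```

The common mistake with the snippet above is building the `LLMPredictor` but never passing it into the index, so queries silently keep using the default LLM.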
Hey @Logan M!

Is it possible to set up streaming answers for gpt-4 / gpt-3.5?
2 comments
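A minimal sketch, assuming a llama_index version where `as_query_engine` accepts `streaming=True` and streaming responses expose `print_response_stream` (the helper name is hypothetical):

```python
def build_streaming_engine(index):
    """Return a query engine that streams tokens as they are generated."""
    # With streaming=True, query() returns a streaming response object
    # instead of waiting for the full completion.
    return index.as_query_engine(streaming=True)

# Usage:
# response = build_streaming_engine(index).query("...")
# response.print_response_stream()  # prints tokens as they arrive
```

This works with any streaming-capable LLM backend, including gpt-4 and gpt-3.5.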