Find answers from the community

aszaiman1
Offline, last seen 3 months ago
Joined September 25, 2024
I had a quick question: how can I access the embeddings inside an index for comparison? I have tried printing output.source_nodes[0].node.embedding, but it returns None rather than the actual embedding.
5 comments
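A note on the question above: retrieved source nodes typically come back with embedding set to None because most vector stores do not send the stored vectors along with the nodes (with the default in-memory store, the raw vectors usually live on the index's vector store object, though the exact attribute names vary by LlamaIndex version). Once you do have two vectors, the comparison itself is just cosine similarity. A minimal self-contained sketch, using toy vectors standing in for embeddings pulled out of a store:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors:
    # dot(a, b) / (|a| * |b|).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for vectors fetched from a vector store.
emb_query = [0.1, 0.3, 0.5]
emb_node = [0.2, 0.1, 0.4]

print(cosine_similarity(emb_query, emb_query))  # identical vectors: close to 1.0
print(cosine_similarity(emb_query, emb_node))
```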
aszaiman1 · Faiss
Also, when building a Faiss index, would it use the vector query retriever or the Faiss reader?
2 comments
Hi, sorry, I had another question. My code was working well yesterday, but when I ran it today I got the error "RecursionError: maximum recursion depth exceeded while calling a Python object", and I do not know where it's coming from. Was there an update that could have caused this?
6 comments
Hi, I had a quick question about the Tree Summarize function: does it only summarize the top_k results it was initialized with, or more? Also, what is the base LLM that is used? Tree Summarize works even though I have not provided an API key.
27 comments
Hi, this is another beginner question: is there any way to call upon an existing collection rather than recreating and reindexing it?
31 comments
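One common answer to the question above is persisting the index to disk once and reloading it on later runs instead of re-embedding everything. A hedged sketch following the legacy top-level llama_index API (import paths and the persist directory name vary by version and setup):

```python
# Hedged sketch: persist an index on the first run, then reload the
# existing collection on later runs instead of re-embedding.
# Exact import paths vary between llama_index versions.
from llama_index import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

PERSIST_DIR = "./storage"  # hypothetical persist location

try:
    # Reuse the existing collection if it was persisted earlier.
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)
except FileNotFoundError:
    # First run: build the index once and persist it for next time.
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=PERSIST_DIR)
```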
Hi, I was wondering whether changing the query mode in a custom retriever affects the way the similarity is calculated, and whether there is a good way of changing this. Thanks so much!
2 comments
Hi, I hope you are all doing well; it's been a little while. I had a couple of questions about source_nodes. First, is the entire source node sent to the LLM to generate a response? Second, is there any way to access the specific text field? I have tried .source_nodes['text'], but it has not worked.
7 comments
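On the text-field part of the question above: source_nodes is a list of node-with-score objects, not a dict, which is why the ['text'] lookup fails. A self-contained stand-in illustration (the FakeNode classes below only mirror the shape of the real objects; actual attribute names can differ between LlamaIndex versions):

```python
from dataclasses import dataclass

# Stand-in classes mirroring the shape of a retrieval response's
# source_nodes list; the real classes have more fields.
@dataclass
class FakeNode:
    text: str

@dataclass
class FakeNodeWithScore:
    node: FakeNode
    score: float

source_nodes = [FakeNodeWithScore(FakeNode("retrieved chunk"), 0.87)]

# source_nodes is a list, so dict-style lookup raises a TypeError:
try:
    source_nodes["text"]
except TypeError as exc:
    print("dict-style lookup failed:", exc)

# Index into the list first, then reach the node's text attribute:
print(source_nodes[0].node.text)  # prints "retrieved chunk"
```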
Some of my chat responses are getting cut off when they are output. Is there a way to get around this without streaming the message?
4 comments
Would it be possible to preprocess and postprocess the sources the retriever retrieves before sending them to the LLM?
30 comments
And what is the fastest chat engine?
2 comments
Sorry to be back, but does anyone know how to pass LLMs into different chat engines? I have been struggling all afternoon. I believe for context engines you can pass a service context, but I'm not sure.
1 comment
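Two patterns have existed for wiring an LLM into a chat engine, depending on the llama_index version in use. A hedged sketch of both (import paths and accepted keyword arguments vary by version; `index` is assumed to be an existing index):

```python
# Hedged sketch: passing an LLM into a chat engine.
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")

# Newer-style: pass the llm directly when building the chat engine.
chat_engine = index.as_chat_engine(chat_mode="condense_question", llm=llm)

# Older-style: bundle the llm into a ServiceContext instead.
# from llama_index import ServiceContext
# service_context = ServiceContext.from_defaults(llm=llm)
# chat_engine = index.as_chat_engine(service_context=service_context)
```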
Why would someone do this:
index_struct = self.index.index_struct
vector_store_index = VectorStoreIndex(index_struct=index_struct)
and set the index equal to this, rather than just using the original index?
33 comments
Would it be possible to create a condense chat engine with a custom retriever? My idea would be to build a retriever query engine and then specify the custom retriever as the retriever for that engine.
4 comments
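The idea in the question above matches a pattern LlamaIndex supports: wrap the custom retriever in a RetrieverQueryEngine, then hand that query engine to CondenseQuestionChatEngine. A hedged sketch (import paths vary by version; MyCustomRetriever is a hypothetical custom retriever class):

```python
# Hedged sketch: condense-question chat engine on top of a custom retriever.
# Import paths differ between llama_index versions.
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.chat_engine import CondenseQuestionChatEngine

custom_retriever = MyCustomRetriever()  # hypothetical custom retriever

query_engine = RetrieverQueryEngine.from_args(retriever=custom_retriever)
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
)
```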
I just tried building a couple of collections and checked my usage, and it did not go up. Is the API key only there so the service context is initialized with an LLM, but it doesn't actually use the key to build?
7 comments