I had a quick question: how can I access the embeddings inside an index so I can compare them? I have tried printing output.source_nodes[0].node.embedding, but it returns None rather than the actual embedding.
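For reference, here's a minimal sketch of what I'm running (the data directory and query string are placeholders):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# placeholder data directory
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
output = query_engine.query("What does the document say?")

# prints None instead of the embedding vector
print(output.source_nodes[0].node.embedding)
```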
Hi, sorry, I had another question. My code was working very well yesterday, but when I ran it today I got the error "RecursionError: maximum recursion depth exceeded while calling a Python object" and I don't know where it's coming from. Was there an update that could have caused this?
Hi, I had a quick question about the Tree Summarize response mode: does it only summarize over the top_k results it was initialized with, or over more nodes? Also, what is the base LLM that gets used? Tree Summarize works even though I have not provided an API key.
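Roughly what I'm doing (paths and the query are placeholders):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# does tree_summarize only summarize over these top-k retrieved nodes?
query_engine = index.as_query_engine(
    similarity_top_k=3,
    response_mode="tree_summarize",
)
response = query_engine.query("Summarize the document")
print(response)
```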
Hi, I was wondering whether changing the query mode in a custom retriever affects the way similarity is calculated, and whether there is a good way of changing this. Thanks so much.
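To make it concrete, this is the kind of thing I mean (a sketch, assuming index is an existing VectorStoreIndex):

```python
from llama_index.retrievers import VectorIndexRetriever
from llama_index.vector_stores.types import VectorStoreQueryMode

# does switching the mode here change how similarity is computed?
retriever = VectorIndexRetriever(
    index=index,  # an existing VectorStoreIndex
    similarity_top_k=5,
    vector_store_query_mode=VectorStoreQueryMode.DEFAULT,  # vs. e.g. MMR or HYBRID
)
nodes = retriever.retrieve("example query")
```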
Hi, I hope you guys are well; it's been a little while. I had a couple of questions about source_nodes. First, is the entire source node sent to the LLM to generate a response? Second, is there a way to access the specific text field? I have tried .source_nodes['text'], but that has not worked.
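Here's what I tried (query_engine is an engine I built earlier):

```python
response = query_engine.query("example query")

# this fails, since source_nodes is a list rather than a dict:
# print(response.source_nodes['text'])

# is something like this the intended way to reach the text field?
print(response.source_nodes[0].node.text)
```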
Sorry to be back, but does anyone know how to pass LLMs into the different chat engines? I have been struggling all afternoon. I believe you can pass a service context to context chat engines, but I'm not sure about the rest.
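This is the only variant I've gotten to work so far (a sketch, assuming index already exists; the model name is just an example):

```python
from llama_index import ServiceContext
from llama_index.chat_engine import ContextChatEngine
from llama_index.llms import OpenAI

# passing the LLM via a service context works for the context chat engine
service_context = ServiceContext.from_defaults(llm=OpenAI(model="gpt-3.5-turbo"))
chat_engine = ContextChatEngine.from_defaults(
    retriever=index.as_retriever(),  # an existing index
    service_context=service_context,
)
# but what is the equivalent for the other chat engines?
```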
Why would someone do this: index_struct = self.index.index_struct; vector_store_index = VectorStoreIndex(index_struct=index_struct), and set the index to this new object rather than just using the existing index?
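Spelled out, the pattern I'm asking about looks like this (self.index here being an existing VectorStoreIndex on some class):

```python
from llama_index import VectorStoreIndex

# grab the index struct off an existing index...
index_struct = self.index.index_struct
# ...and rebuild a fresh index object around it
vector_store_index = VectorStoreIndex(index_struct=index_struct)
```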
Would it be possible to create a condense question chat engine with a custom retriever? My idea would be to use a retriever query engine as the query engine and specify the custom retriever as the retriever for that engine.
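Something like this sketch is what I have in mind (MyCustomRetriever is a toy wrapper, and index is built earlier):

```python
from llama_index.chat_engine import CondenseQuestionChatEngine
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.retrievers import BaseRetriever

class MyCustomRetriever(BaseRetriever):
    """Toy custom retriever that just wraps an existing vector retriever."""

    def __init__(self, vector_retriever):
        self._vector_retriever = vector_retriever
        super().__init__()

    def _retrieve(self, query_bundle):
        # real logic (filtering, reranking, etc.) would go here
        return self._vector_retriever.retrieve(query_bundle)

# wrap the custom retriever in a query engine, then hand that to the chat engine
query_engine = RetrieverQueryEngine.from_args(
    retriever=MyCustomRetriever(index.as_retriever()),
)
chat_engine = CondenseQuestionChatEngine.from_defaults(query_engine=query_engine)
```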
I just tried building a couple of collections, then checked my usage, and it did not go up. Is the LLM only there so the service context gets initialized with one, while the build itself doesn't actually use the API key?
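For context, the build step I ran looks roughly like this (documents is a list I loaded earlier):

```python
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.llms import OpenAI

service_context = ServiceContext.from_defaults(llm=OpenAI())

# building the collection: does this step hit the OpenAI API at all,
# or is the key only used later at query time?
index = VectorStoreIndex.from_documents(
    documents,  # documents loaded earlier
    service_context=service_context,
)
```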