Find answers from the community

hansson0728
Offline, last seen 2 months ago
Joined September 25, 2024
hansson0728
·

Legacy

Just found out, the long hard way, that the BM25Retriever in the legacy package is not working. The other one does (llama_index.retrievers.bm25).
4 comments
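For reference, a minimal sketch of the maintained retriever, assuming a list of `nodes` already exists:

from llama_index.retrievers.bm25 import BM25Retriever

# Build the retriever directly from your nodes
bm25_retriever = BM25Retriever.from_defaults(nodes=nodes, similarity_top_k=5)
results = bm25_retriever.retrieve("my query")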
Okay, so I have my custom retriever to query my indexes and return nodes, and I want to combine it with a chat memory and use it as a chat engine. How would I go about doing that? I don't want to use a response synthesizer, since I want to keep my LLM calls to a minimum. Ideas?
2 comments
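One way to do this is ContextChatEngine, which stuffs the retrieved context into the system prompt and makes a single LLM call per turn. A minimal sketch, assuming `my_retriever` and `llm` are the custom retriever and LLM from the question:

from llama_index.core.chat_engine import ContextChatEngine
from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(token_limit=3000)

# One LLM call per chat turn: retrieve, inject context, answer
chat_engine = ContextChatEngine.from_defaults(
    retriever=my_retriever,
    memory=memory,
    llm=llm,
)
response = chat_engine.chat("What do the docs say about X?")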
Okay, so I have a pipeline with a Redis cache. Anyone know how I can remove a single document from the cache, without removing the whole cache?
7 comments
Any ideas on how to normalize BM25 scores against vector scores without using the Reciprocal Rerank Fusion Retriever, since I collect my nodes myself?
1 comment
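A common approach is min-max normalization of each result set before merging. A minimal sketch over lists of NodeWithScore objects, where `vector_nodes` and `bm25_nodes` are assumed to come from your own retrieval:

def min_max_normalize(nodes_with_scores):
    """Rescale the scores of a NodeWithScore list into [0, 1] in place."""
    scores = [n.score or 0.0 for n in nodes_with_scores]
    lo, hi = min(scores), max(scores)
    for n in nodes_with_scores:
        n.score = ((n.score or 0.0) - lo) / (hi - lo) if hi > lo else 1.0
    return nodes_with_scores

# Normalize each result set separately, then merge however you like
vector_nodes = min_max_normalize(vector_nodes)
bm25_nodes = min_max_normalize(bm25_nodes)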
The prev/next node postprocessor does not work with the docstore created by the "redis fullstack" example? Or maybe I am doing something wrong?
5 comments
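For reference, the postprocessor only works if the docstore holds the same nodes, with prev/next relationships intact, as the ones the retriever returns. A minimal sketch, assuming `docstore` and `retrieved_nodes` exist:

from llama_index.core.postprocessor import PrevNextNodePostprocessor

# Fetch one neighbor in each direction for every retrieved node
postprocessor = PrevNextNodePostprocessor(docstore=docstore, num_nodes=1, mode="both")
nodes = postprocessor.postprocess_nodes(retrieved_nodes)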
I want to use the SimpleDirectoryReader and I have a CSV, but it is ;-delimited. Can I define this somehow?
5 comments
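One way, assuming the PandasCSVReader file reader is installed: its pandas_config dict is passed straight through to pandas.read_csv, so a custom delimiter can be set there. `./data` is a placeholder path:

from llama_index.core import SimpleDirectoryReader
from llama_index.readers.file import PandasCSVReader

# pandas_config goes straight to pandas.read_csv, so sep=";" handles the delimiter
parser = PandasCSVReader(pandas_config={"sep": ";"})
reader = SimpleDirectoryReader(input_dir="./data", file_extractor={".csv": parser})
documents = reader.load_data()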
What am I doing wrong here:
index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store,
    service_context=service_context,
)

retriever = index.as_retriever(
    vector_store_kwargs={"filter": {"file_name": FileName}},
    similarity_top_k=1000,
)

The filter is not applied.
1 comment
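The likely issue is that the raw vector_store_kwargs filter format is backend-specific; the portable route is the filters argument on as_retriever. A sketch, reusing the `index` and `FileName` from the question:

from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

retriever = index.as_retriever(
    filters=MetadataFilters(filters=[ExactMatchFilter(key="file_name", value=FileName)]),
    similarity_top_k=1000,
)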
Is it possible to list how many nodes in a vector store belong to a specific document?
3 comments
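If the nodes also live in the index's docstore, one way is to count ref_doc_id values there (this will not work for vector stores that bypass the docstore). A sketch, with a hypothetical document id:

from collections import Counter

# Count nodes per source document via each node's ref_doc_id
counts = Counter(node.ref_doc_id for node in index.docstore.docs.values())
print(counts["my-doc-id"])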
Anyone have some good reading material on using LlamaIndex with llama.cpp, i.e. a local LLM on low-spec devices, for RAG? I am looking for ways to optimize my requests as much as possible before sending them to the LLM, since in my case the LLM is currently the bottleneck. Mostly looking into RAG features.
9 comments
I have a vector DB created with an ingestion pipeline. After the DB has been populated, can I augment it with, for example, more metadata such as TitleExtractor, without rebuilding the whole index?
2 comments
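A hedged sketch of one possible approach, assuming the nodes are still available in the docstore: run the extractor over them, merge the metadata, and re-insert. Note that extract() triggers LLM calls, and how re-insertion deduplicates depends on the vector store:

from llama_index.core.extractors import TitleExtractor

# Pull the existing nodes back out of the docstore
nodes = list(index.docstore.docs.values())

# extract() returns one metadata dict per node; merge each into its node
title_extractor = TitleExtractor(llm=llm)
for node, metadata in zip(nodes, title_extractor.extract(nodes)):
    node.metadata.update(metadata)

# Re-insert the updated nodes; upsert/duplicate behavior is backend-specific
index.insert_nodes(nodes)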
I am building a Docker app that will be completely air-gapped. What is a good solution for an embedder?
6 comments
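A common answer is a local HuggingFace embedding model baked into the image, so nothing is downloaded at runtime. A sketch with a hypothetical model path:

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Point model_name at a directory copied into the image at build time,
# so no download is attempted at runtime
embed_model = HuggingFaceEmbedding(model_name="/models/bge-small-en-v1.5")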
When I have created a ChromaDB with vector indexes, how can I get the filename metadata so I can compare it against incoming docs, to prevent indexing the same document several times? Anyone?
3 comments
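One way is to query the underlying chromadb collection directly for its metadata. A sketch, assuming `chroma_collection` is the collection behind the vector store and the documents carry a file_name metadata key:

# Fetch only the metadata of everything already indexed
existing = chroma_collection.get(include=["metadatas"])
indexed_files = {m.get("file_name") for m in existing["metadatas"]}

# Skip incoming docs whose filename is already present
new_docs = [d for d in documents if d.metadata.get("file_name") not in indexed_files]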
I have Python llama.cpp running in a container exposing the API. How can I connect to it with LlamaIndex? When I import it (from llama_index.llms import LlamaCPP), it wants to run LlamaCPP on my local host, but I want to connect to another host.
13 comments
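The llama.cpp server exposes an OpenAI-compatible API, so one option is to point an OpenAI-style client at it rather than running the model in-process. A sketch using OpenAILike, with a hypothetical host and port:

from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="local-model",                      # mostly cosmetic for llama.cpp
    api_base="http://llamacpp-host:8080/v1",  # your container's endpoint
    api_key="not-needed",
    is_chat_model=True,
)
print(llm.complete("Hello"))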
Is chat memory broken since the last update?
3 comments
hansson0728
·

Question

I have a bunch of documents in a vector store, and when I do retrieval from the store, the score is the same for all chunks received from each document: 5 chunks from docA have the exact same score, then 5 chunks from docB have the exact same score. As I understand it, the embedding is on the document level. But then how can I know which of the 5 chunks from docA really has the best match score? I could do reranking... but let's say I do top_k 200 and I have 3 documents with 100 chunks each: docA returns 100 chunks with the same score and docB returns 100 chunks with the same score, but in reality 1 chunk of the 100 chunks from docC is the "right" chunk... I don't understand.
9 comments
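If per-document score ties are the symptom, a cross-encoder reranker is one way to get per-chunk scores, since it scores each chunk against the query individually. A sketch, assuming `retrieved_nodes` is the tied result list:

from llama_index.core.postprocessor import SentenceTransformerRerank

# Each chunk is scored against the query on its own, breaking document-level ties
reranker = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-6-v2", top_n=10
)
reranked = reranker.postprocess_nodes(retrieved_nodes, query_str="my query")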
Hmm, I am working on a project where I am very limited in GPU/CPU resources, and I want to limit my LLM calls as much as possible, focusing more on retrieval.
How can I pass my retrievals to the LLM without using the response_synthesizer, which calls the LLM several times depending on the retrieved results?
3 comments
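The "compact" response mode already reduces calls, but the synthesizer can be skipped entirely: concatenate the retrieved text and make one completion call. A sketch, assuming `retrieved_nodes`, `llm`, and `question` exist:

# Stuff all retrieved text into a single prompt and make one LLM call
context = "\n\n".join(n.get_content() for n in retrieved_nodes)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
answer = llm.complete(prompt)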
Anyone know how I can use HierarchicalNodeParser as a transformer in an ingestion pipeline (Redis)?
2 comments
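The parser is a regular transformation, so it can sit in the transformations list; the usual caveat is that parent nodes need to live in a docstore if auto-merging retrieval is the goal. A sketch, assuming `embed_model` and `documents` exist and the Redis cache/stores are wired in as in the original setup:

from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import HierarchicalNodeParser

pipeline = IngestionPipeline(
    transformations=[
        HierarchicalNodeParser.from_defaults(chunk_sizes=[2048, 512, 128]),
        embed_model,
    ],
    # plus the Redis cache / vector store / docstore from the original setup
)
nodes = pipeline.run(documents=documents)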
hansson0728
·

Agent

Anyone have any ideas on a minimal model to use as an agent?
1 comment
Anyone have any information on how to run/create OpenAI agents running against a local LLM (llama.cpp)?
6 comments
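The OpenAI-specific agent depends on the OpenAI function-calling API, which a plain llama.cpp server does not provide; ReActAgent works through prompting alone, so it can run against any completion-capable LLM. A sketch with a hypothetical local endpoint and an assumed `tools` list:

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="local-model",
    api_base="http://llamacpp-host:8080/v1",  # hypothetical llama.cpp server
    api_key="not-needed",
    is_chat_model=True,
)

# ReAct needs no function-calling support, only text completion
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)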
I've read about the Redis ingestion pipeline and the Redis index + docstore, and I would really like to combine them, in other words, build the summary and keyword indexes in the ingestion pipeline. Is that possible?
2 comments
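One possible pattern: pipeline.run() returns the transformed nodes, so the same nodes can be fed into additional indexes afterwards. A sketch, assuming `pipeline`, `documents`, and a Redis-backed `storage_context` exist:

from llama_index.core import SummaryIndex

# The pipeline handles caching/embedding; reuse its output nodes for extra indexes
nodes = pipeline.run(documents=documents)
summary_index = SummaryIndex(nodes, storage_context=storage_context)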
hansson0728
·

Storage

I am trying a lot of things now, and I was wondering which kinds of indexes I can store in PG. I saw the documentation for the vector DB, but what about the docstore, summary indexes, and so forth?
29 comments
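For the non-vector pieces there are Postgres-backed stores shipped as separate integration packages (llama-index-storage-docstore-postgres, llama-index-storage-index-store-postgres). A sketch with a placeholder connection string:

from llama_index.core import StorageContext
from llama_index.storage.docstore.postgres import PostgresDocumentStore
from llama_index.storage.index_store.postgres import PostgresIndexStore

uri = "postgresql://user:pass@localhost:5432/llamaindex"  # placeholder
storage_context = StorageContext.from_defaults(
    docstore=PostgresDocumentStore.from_uri(uri=uri),
    index_store=PostgresIndexStore.from_uri(uri=uri),
)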
Anyone know where I can find more info on chat storage? Persisting to a DB, recalling history, loading sessions, and so forth?
15 comments
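For reference, a minimal sketch of chat persistence with SimpleChatStore: keep one chat_store_key per session, persist to disk, and reload later (RedisChatStore and other backends follow the same pattern):

from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.storage.chat_store import SimpleChatStore

chat_store = SimpleChatStore()
memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user-123",  # one key per session/user
)

# Persist all sessions to disk and reload them later
chat_store.persist(persist_path="chat_store.json")
loaded = SimpleChatStore.from_persist_path(persist_path="chat_store.json")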