Find answers from the community

emmepra
Offline, last seen 3 months ago
Joined September 25, 2024
hey guys, I was wondering if any of you have ever thought about asking a model to return the most relevant/most-cited topics across its context. I tried, but it failed, since the model can't correctly query the DB (ChromaDB in this case) for this specific task. Has anyone worked on this and can share suggestions?

Thanks!
35 comments
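One workaround (not from the thread; a sketch that assumes the chunks carry a `topic` metadata field, which is a hypothetical name) is to skip the LLM for the aggregation step and tally topics directly from the chunk metadata, since ChromaDB doesn't expose an aggregate query for this:

```python
from collections import Counter

def top_topics(metadatas, k=3):
    """Tally topic labels across chunk metadata and return the k most cited.

    `metadatas` mimics the list of per-chunk metadata dicts that a
    chromadb `collection.get()` call returns under the "metadatas" key.
    """
    counts = Counter(m["topic"] for m in metadatas if "topic" in m)
    return counts.most_common(k)

# Simulated metadata, as if fetched from the collection:
chunks = [
    {"topic": "retrieval"},
    {"topic": "embeddings"},
    {"topic": "retrieval"},
    {"source": "intro.md"},  # no topic label; skipped
]
print(top_topics(chunks, k=2))  # -> [('retrieval', 2), ('embeddings', 1)]
```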
Hey there! One quick question: if I have an agent with multiple function tools, how can I let it take the output of one function and use it as the input of the next?
11 comments
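Outside of any particular agent framework, the underlying pattern is just piping one tool's return value into the next call; a minimal hand-rolled sketch (the tool names are made up for illustration):

```python
def fetch_user(user_id: int) -> dict:
    # Hypothetical first tool: look up a user record.
    return {"id": user_id, "city": "Milan"}

def get_weather(city: str) -> str:
    # Hypothetical second tool: consumes the first tool's output.
    return f"Sunny in {city}"

def run_chain(user_id: int) -> str:
    user = fetch_user(user_id)        # step 1
    return get_weather(user["city"])  # step 2 uses step 1's output

print(run_chain(42))  # -> Sunny in Milan
```

An agent framework automates the same hand-off: the model reads the first tool's result from the conversation state and passes it as the argument of the next tool call.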
I'm currently deploying Celery as an ETL management system, what do you think about using ingestion pipeline workers?
11 comments
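Framework choice aside, the worker pattern under discussion is a queue of ingestion jobs consumed by a pool of workers; a stdlib-only sketch of that shape (no Celery, purely illustrative):

```python
import queue
import threading

def ingest(doc: str) -> str:
    # Stand-in for a real transform/embed/upsert step.
    return doc.upper()

def worker(jobs: queue.Queue, results: list) -> None:
    while True:
        doc = jobs.get()
        if doc is None:  # sentinel: shut down this worker
            jobs.task_done()
            break
        results.append(ingest(doc))
        jobs.task_done()

jobs: queue.Queue = queue.Queue()
results: list = []
threads = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(2)]
for t in threads:
    t.start()
for doc in ["a", "b", "c"]:
    jobs.put(doc)
for _ in threads:
    jobs.put(None)
jobs.join()
for t in threads:
    t.join()
print(sorted(results))  # -> ['A', 'B', 'C']
```

Celery gives you the same producer/worker split with persistence, retries, and horizontal scaling on top.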
hey guys, hope everything is going great!
Just a quick question: in my app I would like to stream the model response but also return the retrieved context nodes, which I believe could be returned the moment it starts generating the response. Is there any standard way to do so? Thanks a lot!
4 comments
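One common shape for this (a generic sketch, not a specific LlamaIndex API) is to emit the retrieved nodes as the first event of the stream and only then yield tokens, which works because retrieval finishes before generation starts; the `retrieve`/`generate` stubs below are placeholders:

```python
def retrieve(question: str) -> list:
    # Stand-in retriever; a real app would query its index here.
    return ["node-1", "node-2"]

def generate(question: str, nodes: list):
    # Stand-in token stream from the model.
    yield from ["Hello", " world"]

def stream_with_sources(question: str):
    """Yield a ('sources', nodes) event first, then ('token', text) events."""
    nodes = retrieve(question)  # retrieval completes before generation
    yield ("sources", nodes)
    for token in generate(question, nodes):
        yield ("token", token)

events = list(stream_with_sources("what's new?"))
print(events[0])  # -> ('sources', ['node-1', 'node-2'])
```

The client can render the source nodes immediately from the first event while the rest of the stream fills in the answer.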
that would be of absolute value
4 comments
emmepra · Jinai
I have a little issue with ChromaDB server embeddings; QueryEngine is returning: Type is not JSON serializable: numpy.float64. I found this https://github.com/run-llama/llama_index/pull/11458/files which seems to fix the issue, but it's llama_index.core only. What can I do if I'm using legacy (needed for JinaAI, which isn't included in core)? thanks a lot
9 comments
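As a stopgap that avoids patching the library, any NumPy scalar can be converted to a native Python value before serialization via its `.item()` method; a generic `json.dumps` fallback hook along those lines (this is not the linked PR's fix, and the `FakeFloat64` class is just a stand-in so the sketch runs without NumPy):

```python
import json

def to_builtin(obj):
    """json.dumps `default` hook: unwrap NumPy-style scalars via .item()."""
    if hasattr(obj, "item"):
        return obj.item()
    raise TypeError(f"Type is not JSON serializable: {type(obj).__name__}")

class FakeFloat64:
    # Stand-in for numpy.float64 so the sketch runs without NumPy installed.
    def __init__(self, value):
        self._value = value
    def item(self):
        return self._value

payload = {"score": FakeFloat64(0.87)}
print(json.dumps(payload, default=to_builtin))  # -> {"score": 0.87}
```

The same hook works on real `numpy.float64` values, which also expose `.item()`.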
Same here, try importing: from llama_index.core.indices import VectorStoreIndex
1 comment
I'm trying to deploy my IngestionPipeline using ChromaVectorStore and TextEmbeddingsInference, both containerized as Docker services, but I'm getting the following issues:

Plain Text
ValidationError                           Traceback (most recent call last)
...
ValidationError: 2 validation errors for IngestionPipeline
transformations -> 1
  Can't instantiate abstract class TransformComponent with abstract method __call__ (type=type_error)
vector_store
  Can't instantiate abstract class BasePydanticVectorStore with abstract methods add, client, delete, query (type=type_error)


This is how I set up the pipeline:

Python
vector_store = ChromaVectorStore(host="localhost", port=8000, chroma_collection="articles")

ingestion_pipeline = IngestionPipeline(
    transformations=[
        TokenTextSplitter(chunk_size=512),
        TextEmbeddingsInference(
            base_url='http://localhost:8001',
            embed_batch_size=10,
            model_name="BAAI/bge-small-en-v1.5"
        ),
    ],
    vector_store=vector_store,
)
10 comments
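For what the traceback itself means: pydantic is refusing to instantiate abstract base classes, which typically happens when legacy and `.core` imports get mixed so the objects you pass no longer subclass the bases the pipeline expects. The error's mechanism in plain Python (illustrative only, not the LlamaIndex classes):

```python
from abc import ABC, abstractmethod

class TransformBase(ABC):
    # Analogous to TransformComponent: subclasses must implement __call__.
    @abstractmethod
    def __call__(self, nodes):
        ...

class GoodSplitter(TransformBase):
    def __call__(self, nodes):
        return nodes

class BadSplitter(TransformBase):
    # Missing (or mismatched) __call__: the class stays abstract.
    pass

GoodSplitter()  # fine
try:
    BadSplitter()
except TypeError as exc:
    print(exc)  # Can't instantiate abstract class BadSplitter ...
```

Checking that ChromaVectorStore, TokenTextSplitter, and TextEmbeddingsInference all come from the same package generation (all core or all legacy) is the usual first step.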
How can I use local models from HF for summarization in LlamaIndex?
18 comments
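LlamaIndex specifics aside, a local HF summarizer is usually driven through `transformers.pipeline("summarization", ...)`; a sketch with the model import kept lazy so the chunking helper runs on its own (the model name is just a common choice, not prescribed):

```python
def chunk_text(text: str, max_chars: int = 2000) -> list:
    """Split text into chunks small enough for a summarization model."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_local(text: str, model_name: str = "sshleifer/distilbart-cnn-12-6") -> str:
    # Lazy import so chunk_text works even without transformers installed.
    from transformers import pipeline
    summarizer = pipeline("summarization", model=model_name)
    parts = [summarizer(c, truncation=True)[0]["summary_text"] for c in chunk_text(text)]
    return " ".join(parts)

print(chunk_text("abcdef", max_chars=4))  # -> ['abcd', 'ef']
```

The first call downloads the model weights locally; after that, everything runs offline.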