Hamed
Any idea how to integrate observability platforms (Langfuse / Arize) with LlamaIndex (TS)?

Langfuse docs are missing for both Python and TS, but I found the Python integration for Arize. My backend is in Node, though :/
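
For reference, the Python integration I found for Arize (Phoenix) looks roughly like this; a minimal sketch, and the module paths are assumptions that vary by llama-index version:
Python
import phoenix as px
from llama_index.core import set_global_handler

# Launch the local Phoenix UI and route all LlamaIndex callback
# events to it; queries are then traced automatically.
px.launch_app()
set_global_handler("arize_phoenix")

Langfuse does ship a generic JS/TS SDK (the langfuse npm package), but as far as I can tell there's no documented LlamaIndex.TS hook, so events would have to be logged manually.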
2 comments
Has anyone succeeded in using the BM25 retriever with a VectorStoreIndex? The documentation says it can receive nodes, a docstore, or an index, but when passing an index (a PGVectorStoreIndex), it throws a ZeroDivisionError (rightfully, as index.docstore is empty).
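
For reference, a minimal sketch of the nodes-based construction the docs describe; here `nodes` is assumed to be the node list kept around from ingestion, since the PG-backed index's docstore is empty:
Python
from llama_index.retrievers.bm25 import BM25Retriever

# Build BM25 over the nodes directly instead of over the index,
# so it never touches the empty index.docstore.
bm25_retriever = BM25Retriever.from_defaults(
    nodes=nodes,
    similarity_top_k=5,
)
results = bm25_retriever.retrieve("example query")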
1 comment
Hi everyone! Has anyone managed to get source_nodes before the response is generated in streaming cases?

sec-insight does something similar by showing the SubQuery questions before the response is generated, but the code is way too complex for my understanding (it uses anyio and a different channel for each process). I was wondering if there's a simpler way?

I tried something like this, but it doesn't work :/
Python
import json
import time

streaming_response = chat_engine.stream_chat(last_message.content)

# Emit the sources payload before consuming the token generator;
# the dict has to be yielded, not just built.
message = {
    "type": "sources",
    "data": [str(node) for node in streaming_response.source_nodes],
}
yield json.dumps(message)
yield "Context received, before LLM call: " + str(time.time())
yield "\n\n"

for token in streaming_response.response_gen:
    yield token


I'm using this for streaming in a FastAPI backend.
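
For context, the surrounding FastAPI wiring is roughly this; a minimal sketch where the route, the request shape, and the `chat_engine` setup are placeholders, not the actual backend:
Python
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/api/chat")
async def chat(content: str):
    def event_generator():
        streaming_response = chat_engine.stream_chat(content)
        # Sources first, then the streamed tokens.
        yield json.dumps({
            "type": "sources",
            "data": [str(node) for node in streaming_response.source_nodes],
        })
        yield "\n\n"
        for token in streaming_response.response_gen:
            yield token

    return StreamingResponse(event_generator(), media_type="text/plain")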
7 comments