The community member is having an issue with Langfuse and LlamaIndex, where certain steps like embedding the user's question and looking up documents in Qdrant do not have their own spans. They are asking if there is a way to get these to show up, and if they are missing a config or option somewhere.
The comments suggest that the community member should pass the callback manager into everything, including the embedding model, since, unlike the LLM, the embedding model does not receive it by default. Another community member confirms that this resolved the issue, but notes that the logic is "kind of janky".
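The suggested fix can be sketched with a toy model. These classes are hypothetical stand-ins, not the real LlamaIndex/Langfuse API; the point is only the wiring pattern: a callback manager that reaches the LLM by default still has to be handed to the embedding model explicitly, or its spans never get recorded.

```python
# Toy sketch — hypothetical classes, not the real LlamaIndex/Langfuse API.

class CallbackManager:
    """Collects span names as events fire."""
    def __init__(self):
        self.spans = []

    def on_event(self, name):
        self.spans.append(name)


class EmbedModel:
    def __init__(self, callback_manager=None):
        # Easy to miss: without this argument, embeds produce no spans.
        self.cb = callback_manager

    def embed(self, text):
        if self.cb is not None:
            self.cb.on_event("embedding")
        return [0.0, 0.0, 0.0]


class LLM:
    def __init__(self, callback_manager):
        self.cb = callback_manager

    def complete(self, prompt):
        self.cb.on_event("llm")
        return f"answer: {prompt}"


# Callback manager not passed to the embed model -> embedding span missing.
cb = CallbackManager()
EmbedModel().embed("question")
LLM(cb).complete("question")
print(cb.spans)  # ['llm']

# Passing it into everything, embedding included, records both spans.
cb = CallbackManager()
EmbedModel(callback_manager=cb).embed("question")
LLM(cb).complete("question")
print(cb.spans)  # ['embedding', 'llm']
```

In the real integration the same idea applies: the tracing handler's callback manager must be set on the embedding model as well, not only on the LLM.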
The community member also wants to be able to share the documents used in producing an answer and certain parts of the metadata about them, and asks if there is a way to get the doc list and metadata from the higher-level APIs when making a query. They also wonder if this could be accomplished with QueryPipeline and Agents.
After integrating Langfuse with LlamaIndex, I noticed that certain steps did not have their own spans, notably the embedding of the user's question and the subsequent lookup of docs in Qdrant.
Is there a way to get these to show up? Am I missing a config or option somewhere?
yeah, janky is a good word for it, but there are times when one wants to get into the guts to see how things work and come together. That's partly a learning exercise, but I also want to be able to share the documents used in producing an answer and certain parts of the metadata about them.
Is there any way to get the doc list & metadata used from the higher-level APIs when making a query?
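For the doc-list question: in LlamaIndex's high-level query API, the retrieved chunks and their metadata are exposed on the response object as `response.source_nodes`, a list of scored nodes. Here is a minimal sketch of pulling the metadata out, using stand-in dataclasses rather than the real LlamaIndex classes:

```python
# Stand-in dataclasses mimicking the shape of a query response —
# not the real LlamaIndex types, just an illustration of the access pattern.
from dataclasses import dataclass, field


@dataclass
class Node:
    text: str
    metadata: dict


@dataclass
class NodeWithScore:
    node: Node
    score: float


@dataclass
class Response:
    response: str
    source_nodes: list = field(default_factory=list)


def sources_summary(resp):
    """Collect the retrieval score and metadata for each source chunk."""
    return [
        {"score": round(ns.score, 3), **ns.node.metadata}
        for ns in resp.source_nodes
    ]


resp = Response(
    response="42",
    source_nodes=[
        NodeWithScore(Node("…", {"file_name": "guide.pdf", "page": 3}), 0.81),
        NodeWithScore(Node("…", {"file_name": "faq.md"}), 0.74),
    ],
)
print(sources_summary(resp))
# [{'score': 0.81, 'file_name': 'guide.pdf', 'page': 3},
#  {'score': 0.74, 'file_name': 'faq.md'}]
```

With a real query engine you would do the same walk over `response.source_nodes` after calling `query(...)`; what lands in each node's metadata depends on how the documents were ingested.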