Find answers from the community

Hi all,
I am building a knowledge graph manually following the instructions here:
https://docs.llamaindex.ai/en/stable/examples/index_structs/knowledge_graph/KnowledgeGraphDemo.html
As you can see in the output, it states the following:
"ERROR:llama_index.indices.knowledge_graph.retrievers:Index was not constructed with embeddings, skipping embedding usage..."

Why is the index constructed without embeddings? If there are no embeddings, how does the query engine retrieve relevant nodes?
5 comments
Is there a way to get OpenAIAgent to return retrieved nodes?
5 comments
Is there a way to call OpenAIAgent or OpenAIAssistantAgent so that the output of the underlying tool is returned instead of the synthesized response from the llm?
2 comments
Is there a way to retrieve nodes based on metadata information only, and not the main text?
5 comments
Hi all, I built a multi-document agent according to this tutorial: https://docs.llamaindex.ai/en/stable/examples/agent/multi_document_agents-v1/, but I am getting very bad performance on the ObjectIndex retriever. I am not sure what index it is using to retrieve the right agent. Does anyone have an idea? Thanks.
4 comments
Is there a way to define custom similarity scores for querying an index? I want to use a weighted score like similarity_score = (0.25)similarity_with_content + (0.75)similarity_with_metadata
2 comments
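LlamaIndex does not expose a weighted content/metadata similarity out of the box, so one workaround is to retrieve with the two similarities computed separately and re-score the results yourself. The sketch below is illustrative, not a LlamaIndex API: `content_sim` and `metadata_sim` are assumed to be similarities you computed per node (e.g. by embedding the text chunk and the metadata string separately).

```python
# Hypothetical post-retrieval re-scoring step; field names are assumptions.
def weighted_score(content_sim: float, metadata_sim: float,
                   w_content: float = 0.25, w_metadata: float = 0.75) -> float:
    """Blend two similarity scores with fixed weights."""
    return w_content * content_sim + w_metadata * metadata_sim

def rerank(nodes: list, top_k: int = 5) -> list:
    """Sort node dicts by the blended score, highest first."""
    return sorted(
        nodes,
        key=lambda n: weighted_score(n["content_sim"], n["metadata_sim"]),
        reverse=True,
    )[:top_k]
```

With a 0.75 weight on metadata, a node with high metadata similarity outranks one with high content similarity, which is the behavior the weighted formula above asks for.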
Can I combine multiple indexes into one vector index?
4 comments
Hi all, what is the difference between using llm.predict_and_call([function1, function2], ...) and using an agent with FunctionCallingAgentWorker([function1, function2], llm=llm, ...)?
6 comments
Hi all, is there a way to have the intermediate or final layers of a Query Pipeline output a Pydantic object? For example, is there a way I can use OpenAIPydanticProgram in my Query Pipeline? I tried and got the following error:
AttributeError: 'OpenAIPydanticProgram' object has no attribute 'set_callback_manager'
2 comments
Is there a way to set_global_handler so that outputs are saved to a local directory?
2 comments
When using a query pipeline, is there a way to save intermediate outputs to a local folder?
2 comments
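Query pipelines don't persist intermediate outputs for you; a simple stdlib-only workaround is to wrap each step so its result is also written to a local folder as JSON before being passed on. The step and folder names below are assumptions for illustration:

```python
import json
from pathlib import Path

def saving_step(name: str, fn, out_dir: str = "pipeline_outputs"):
    """Wrap a pipeline step so its output is also written to <out_dir>/<name>.json."""
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        folder = Path(out_dir)
        folder.mkdir(parents=True, exist_ok=True)
        (folder / f"{name}.json").write_text(json.dumps(result))
        return result
    return wrapper
```

Each wrapped step behaves exactly like the original function, so it can be dropped into an existing pipeline without changing the downstream inputs.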
Is there a way to customize the scoring algorithm used by a retriever?
9 comments
Hi all, in my query pipeline, I have a node that outputs a list of items. In the next node, I would like to take the list of items as input and send separate queries to the llm for each item; then link the outputs from all those queries into a final node. Is there a way to do this? Thanks in advance.
2 comments
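The fan-out/fan-in pattern described above can be sketched independently of any pipeline framework: run one query per item, then merge the answers in a final step. Here `query_fn` stands in for a per-item LLM call and `join_fn` for the final combining node; both names are illustrative assumptions:

```python
def fan_out(items, query_fn, join_fn):
    """Run one query per item in the list, then merge all answers in a final step."""
    answers = [query_fn(item) for item in items]  # one call per item
    return join_fn(answers)                        # link all outputs into one node
```

A framework-level equivalent would wire the list-producing node to N parallel query nodes and feed their outputs into one joining node; the sketch above does the same thing with plain function composition.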
Querying

Hi all, I am having trouble using the SubQuestionQueryEngine to perform multi-document queries. This is my scenario: I have constructed a query engine using multiple documents, let's say documents A and B. I then ask the query engine to retrieve some information from document A, then use the output from A as input for a second query on document B. This is like using chains in Langchain. The SubQuestionQueryEngine does well in retrieving relevant data from A or B, but doesn't use the output from A in the second query on B. Any suggestions? Thanks in advance.
2 comments
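SubQuestionQueryEngine decomposes one question into independent sub-questions; it does not thread the answer from document A into the query on document B. A workaround is to run the two queries sequentially yourself. In this sketch, `engine` is any object exposing a `.query()` method, and the template placeholder is an assumption:

```python
def chained_query(engine, first_question: str, followup_template: str) -> str:
    """Query document A, then substitute its answer into a second query on B."""
    first_answer = str(engine.query(first_question))
    return str(engine.query(followup_template.format(answer=first_answer)))
```

This mirrors the chain-style flow from Langchain mentioned in the question: the second prompt is only constructed after the first answer is available.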
@kapa.ai In Llama Index, is there a way to do a hybrid search by specifying how much the metadata and the text chunk should influence the similarity score?
2 comments
@kapa.ai how can I see the list of keywords extracted by a SimpleKeywordTableIndex?
2 comments
@kapa.ai My retriever always returns two nodes even though the top_k is set to 10 nodes. Any ideas why?
5 comments
Hi all, I am trying to get fine-grained control over which indexes are returned based on a query. I am thinking of defining a custom similarity score with Weaviate or Elasticsearch. Is it possible to use LlamaIndex for retrieving from a vector database using a custom similarity score? Thanks.
18 comments
@kapa.ai Is there a way to define a custom similarity score for determining the similarity between a query and items in a vector store?
3 comments
Nodes

Hi all, I am trying to apply metadata filters to my queries. I have tried using a VectorIndexAutoRetriever but it doesn't perform well. I am getting better results using an OpenAIAgent to infer filter arguments for a function tool that applies the filter. The only problem with using the OpenAIAgent is that I need it to return nodes, but it only gives me a response object. Is there a way to have the OpenAIAgent return the nodes returned by the function it called? Thanks in advance.
22 comments
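One workaround for getting the raw nodes out of an agent's tool call, when the response object alone isn't enough: wrap the tool function in a closure that stashes its raw return value before handing it back to the agent. All names below are illustrative assumptions, and it is worth checking first whether your agent's response object already exposes the retrieved nodes via a `source_nodes` attribute:

```python
captured = {}

def capturing(fn, name: str):
    """Wrap a tool function so its raw result is also stored in `captured`."""
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        captured[name] = result  # keep the raw nodes for inspection later
        return result
    return wrapper
```

You would then build the agent's FunctionTool from the wrapped function instead of the original; after the agent runs, `captured` holds the unmodified tool output alongside the synthesized response.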
Hi all, I'm trying to build a tool that would let users query/interact with hundreds of documents. In some cases, users are looking for a single document (e.g., "Summarize document 5"). In other cases, their query requires the tool to iterate over all documents and retrieve specific information (e.g., "How many documents mentioned X?"). What I am thinking of doing is building functions for each type of retrieval; for example, when looking for a single document, use an AutoRetriever with metadata filters. Then I would set up an agent to select which retriever is appropriate based on the query. Before I start building, I wanted to know if this is a good approach, or is there a better way? Thanks in advance.
2 comments
@kapa.ai Is there a way to save nodes?
8 comments
Hi all,
Let's say you have a query pipeline where each step outputs an intermediate file which is used as input for the next step. If you already have one or more of the intermediate files available (e.g., from a previous failed run), is there a way to instruct the query pipeline to skip that step and just use the available output? Thanks.
1 comment
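A stdlib-only way to get this resume-from-cache behavior, independent of the pipeline framework: gate each step on whether its output file already exists on disk. The step and folder names are illustrative assumptions:

```python
import json
from pathlib import Path

def run_step(name: str, fn, cache_dir: str = "intermediate"):
    """Skip a step whose output already exists on disk; otherwise run and cache it."""
    path = Path(cache_dir) / f"{name}.json"
    if path.exists():
        return json.loads(path.read_text())  # reuse the earlier run's output
    result = fn()
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(result))
    return result
```

On a re-run after a failure, every step whose JSON file survived is skipped, and only the missing steps actually execute.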
Hi all, I recently upgraded from gpt-3.5-turbo to gpt-4-turbo, and I am starting to get a lot of timeout errors. I increased the timeout limit to 10 minutes (OpenAI(..., timeout=600)), but I noticed that the call still fails with a timeout error after about 5 minutes. Has anyone run into a similar issue? Thanks.
2 comments
When using Query Pipelines, is there a way to format the outputs of the intermediate steps? At some steps, I would like to output a JSON string and save the intermediate output. I am currently using the prompt to generate JSON output, but it is not consistent. Thanks in advance.
9 comments