
andy
Joined September 25, 2024

Retrieve

If I create an index from documents and the documents are split further into nodes via SentenceSplitter(), how does that work when I try to get the MRR and hit rate using LlamaIndex? The ground-truth context will never be equal to the retrieved context, because it was broken down via SentenceSplitter.
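As I understand LlamaIndex's retrieval evaluation, the ground-truth/retrieved comparison is done on node IDs rather than on raw text: the evaluation QA pairs are generated from the already-split nodes, so each expected context is one of the same nodes the retriever can return. A minimal pure-Python sketch of that metric logic (not the library's internals; function and ID names are mine):

```python
def hit_rate_and_mrr(expected_ids, retrieved_ids_per_query):
    """Compute hit rate and MRR by matching node IDs, not raw text.

    expected_ids: the single ground-truth node ID for each query.
    retrieved_ids_per_query: the ranked list of retrieved node IDs per query.
    """
    hits, reciprocal_ranks = 0, 0.0
    for expected, retrieved in zip(expected_ids, retrieved_ids_per_query):
        if expected in retrieved:
            hits += 1
            # Rank is 1-based position of the ground-truth node.
            reciprocal_ranks += 1.0 / (retrieved.index(expected) + 1)
    n = len(expected_ids)
    return hits / n, reciprocal_ranks / n

# The ground-truth node comes from the same SentenceSplitter pass, so
# exact text equality is never needed -- only the node ID must match.
hit_rate, mrr = hit_rate_and_mrr(
    ["node-1", "node-7"],
    [["node-3", "node-1"], ["node-7", "node-2"]],
)
```

So the splitter is not a problem as long as the evaluation dataset was built from the post-split nodes rather than from the original documents.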
1 comment
How does the auto-retriever pass an updated query string (with metadata removed from the query string) for semantic search at query time?
Where in the code does it do that? Is it LLM-based?


https://docs.llamaindex.ai/en/stable/examples/retrievers/auto_vs_recursive_retriever/

The link above states:

Metadata Filters + Auto-Retrieval: Tag each document with the right set of metadata. During query-time, use auto-retrieval to infer metadata filters along with passing through the query string for semantic search.

Can someone point me to where in the code the query string is parsed for semantic search?
1 comment
andy

Vector

When I do VectorStoreIndex.from_documents(), where under the hood in the source code are the embeddings created?
1 comment
andy

Neo4j embeddings

Is there a way to store embeddings in existing Neo4j graphs using the new LlamaIndex PropertyGraphIndex framework?
2 comments
If I already have an existing Neo4j graph, can I use the PropertyGraphIndex to load the Neo4j graph but also leverage a vector store as well? The examples online don't show how this would work with a graph created outside LlamaIndex.
8 comments
Does LlamaIndex support batch requests, so I can have multiple queries in one request?
2 comments
What is the best way to batch requests to OpenAI using LlamaIndex?
1 comment
andy

Nodes

How do I get nodes from an existing VectorStoreIndex?

index.docstore.docs.values() is always an empty dictionary…
14 comments
Where is the default response synthesizer for the PropertyGraph stuff? @WhiteFang_Jr @Logan M
5 comments
I'm using a query engine in LlamaIndex, and the following line of code:

Plain Text
response = query_engine.query('What was the first direct-to-video title produced by the company that co-purchased the rights to "Hustle & Flow"?')


Keeps giving me this error: Retrying llama_index.llms.openai.base.OpenAI._chat in 0.6104585767511872 seconds as it raised InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'b27959c7-65a9-46fc-8c84-8cb13ffc30a3'}.

Until it eventually fails. But when I remove the quotes around "Hustle & Flow", it works.

I'm using gpt-4o as the LLM - does anyone know why this is happening? Is it because of special characters or something?
@Logan M @WhiteFang_Jr
3 comments
andy

Calls

I get the following trace when using the callback manager. I'm just doing a query from the query_engine, and I'm seeing that it hits the LLM more than once, making the response time longer. Why is that? What can cause that to happen? @WhiteFang_Jr @Logan M

Plain Text
********
Trace: query
    |_query -> 6.464402 seconds
      |_synthesize -> 5.598681 seconds
        |_templating -> 2.5e-05 seconds
        |_llm -> 2.454756 seconds
        |_templating -> 2.8e-05 seconds
        |_llm -> 3.094954 seconds
********
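Two `llm` spans under a single `synthesize` usually mean the response synthesizer needed more than one pass, e.g. the retrieved text did not fit into one prompt window, so a refine/compact-style synthesizer answers from the first chunk and then refines with the next. A stand-alone sketch of that loop (a simplification with stub names, not LlamaIndex's actual synthesizer code):

```python
def synthesize(query, chunks, llm):
    """Refine-style synthesis: one LLM call per context chunk.

    The first call answers from the first chunk; each later call asks
    the LLM to refine the running answer using the next chunk.
    """
    calls = 0
    answer = None
    for chunk in chunks:
        if answer is None:
            prompt = f"Answer {query!r} using: {chunk}"
        else:
            prompt = f"Refine {answer!r} for {query!r} using: {chunk}"
        answer = llm(prompt)
        calls += 1
    return answer, calls

# Stub LLM: two retrieved chunks -> two LLM calls, matching the
# two |_llm spans in the trace above.
answer, calls = synthesize("q", ["chunk one", "chunk two"], lambda p: "ans")
```

If this is the cause, reducing `similarity_top_k`, using smaller chunks, or switching the response mode can bring it back down to a single LLM call.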
1 comment
andy

Graph

Where in the source code does calling PropertyGraphIndex.from_documents() create embeddings to store in Neo4j nodes?
1 comment
How can you pass in metadata filters at query time in LlamaIndex? Is that supported?
For example, something like this pseudocode:

Plain Text
# Define the query with metadata filters
query = {
    "query": "content",
    "filters": {
        "author": "Alice",
        "date": "2023-01-03"
    }
}

# Execute the query
results = query_engine.search(query)
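Conceptually the filtering is an exact match of each filter key against node metadata. A minimal pure-Python sketch of that semantics (illustrative only, not the LlamaIndex API; the node dicts and function name are mine):

```python
def filter_nodes(nodes, filters):
    """Keep only nodes whose metadata exactly matches every filter."""
    return [
        node for node in nodes
        if all(node["metadata"].get(k) == v for k, v in filters.items())
    ]

nodes = [
    {"text": "post a", "metadata": {"author": "Alice", "date": "2023-01-03"}},
    {"text": "post b", "metadata": {"author": "Bob", "date": "2023-01-03"}},
]
matched = filter_nodes(nodes, {"author": "Alice", "date": "2023-01-03"})
```

In the library itself, my understanding is that recent llama-index versions accept a `MetadataFilters` object (built from exact-match filters) via the `filters=` argument to `as_retriever()` / `as_query_engine()`, with the filtering pushed down to the vector store; the import path has moved between releases, so check the current docs.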
2 comments
Does LlamaIndex support batching requests? For example, asking 32 questions in one request rather than 32 unique calls?

OpenAI has something like this:

https://platform.openai.com/docs/guides/rate-limits/error-mitigation

but I'm not sure about LlamaIndex.
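The chat API still takes one question per request, so the usual mitigation is to issue the calls concurrently rather than in one request. A sketch using asyncio.gather, with a stub async function standing in for the engine's async query method (I believe query engines expose `aquery`, but the stub here keeps the example self-contained):

```python
import asyncio

async def batch_query(questions, aquery):
    """Fire all queries concurrently and gather the answers in order."""
    return await asyncio.gather(*(aquery(q) for q in questions))

# Stub standing in for something like query_engine.aquery.
async def fake_aquery(question):
    await asyncio.sleep(0)  # placeholder for network latency
    return f"answer to {question}"

answers = asyncio.run(batch_query(["q1", "q2", "q3"], fake_aquery))
```

asyncio.gather preserves input order, so `answers[i]` corresponds to `questions[i]`; for 32 questions you may also want a semaphore to cap concurrency against rate limits.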
1 comment
What's the best way to persist a Qdrant vector store index and also load it from the persist directory? The support for persisting with Qdrant is unclear to me.
3 comments
Is there functionality for refining Text2Cypher for knowledge graphs in llama-index? I'm looking at KnowledgeGraphQueryEngine(), and I essentially want to reprompt if the Cypher query output results in an error.

I looked into the Refine code, but I'm not sure it fits my use case.
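Refine is about iteratively refining a text answer over context chunks, not about error-correcting a generated query, so it likely isn't the right fit. A common workaround is a hand-rolled retry loop that feeds the database error back into the next generation prompt. All names below are stubs, not llama-index APIs:

```python
def query_with_repair(question, generate_cypher, run_cypher, max_retries=3):
    """Generate Cypher; on a database error, reprompt with the error text."""
    prompt = question
    for _ in range(max_retries):
        cypher = generate_cypher(prompt)
        try:
            return run_cypher(cypher)
        except Exception as err:
            # Feed the failure back so the next generation can fix it.
            prompt = f"{question}\nPrevious query failed with: {err}"
    raise RuntimeError("could not produce a valid Cypher query")

# Stubs: the first generation is invalid, the repaired one succeeds.
attempts = []
def generate_cypher(prompt):
    attempts.append(prompt)
    return "BAD" if len(attempts) == 1 else "MATCH (n) RETURN n"

def run_cypher(cypher):
    if cypher == "BAD":
        raise ValueError("Invalid input 'BAD'")
    return ["row"]

result = query_with_repair("who directed X?", generate_cypher, run_cypher)
```

In practice `generate_cypher` would wrap the engine's text-to-Cypher prompt and `run_cypher` the Neo4j driver call; the key idea is just the error message flowing back into the reprompt.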
1 comment