haedamon
Joined September 25, 2024
@Logan M I am referencing the thread you replied to last week: https://discord.com/channels/1059199217496772688/1059200010622873741/1304241311255101451
***
I still want to modify my prompt with custom metadata from each source node. Something like this:
Plain Text
Please answer only with the information given in context:
"""
Metadata: 
{ "book": "xyz", "page": "3" }
This is a chapter on the history of Rome etc
{ "book": "gef", "page": "374" }
This is a chapter on Athens etc ...
"""
Question: {} 

What is the best approach to creating a custom template?
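A sketch of one approach, assuming the stock text_qa_template mechanism: define a PromptTemplate with the standard context_str and query_str variables and pass it to the query engine. Per-node metadata appears inside context_str as long as it is attached to the nodes and not listed in their excluded_llm_metadata_keys.
Python
from llama_index.core import PromptTemplate

# context_str / query_str are the standard variables the response
# synthesizer fills in at query time; node metadata is rendered into
# context_str according to each node's metadata settings.
qa_template = PromptTemplate(
    'Please answer only with the information given in context:\n'
    '"""\n'
    '{context_str}\n'
    '"""\n'
    'Question: {query_str}\n'
)

# "index" is assumed to be an existing VectorStoreIndex.
query_engine = index.as_query_engine(text_qa_template=qa_template)
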
4 comments
I am using VectorStoreIndex. When I generate a response, I want to see where in a node the answer comes from. Checking the source_node score is not enough; I want to know which sentences or paragraphs from the context the answer was pulled from.
I believe I can get this by customizing the prompt to insert the extra metadata associated with each node, then adding an instruction like "pull the relevant metadata and the sentences the answer was generated from".
I'm not sure whether I can do this at a high level, or whether I need to build a response synthesizer from scratch. Any help on this?
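For reference, a minimal sketch of a built-in option that comes close to this: CitationQueryEngine splits source nodes into numbered chunks and instructs the LLM to cite them inline.
Python
from llama_index.core.query_engine import CitationQueryEngine

# "index" is assumed to be an existing VectorStoreIndex.
query_engine = CitationQueryEngine.from_args(
    index,
    citation_chunk_size=256,  # granularity of the citable chunks
)

response = query_engine.query("What is the history of Rome?")
print(response)                  # answer with [1], [2]-style citations
print(response.source_nodes[0])  # the chunk behind citation [1]
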
5 comments
Can different types of indexes, such as VectorStoreIndex and SummaryIndex, share the same vector store, e.g. PGVectorStore?
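A sketch of the usual pattern, assuming both indexes are built over one shared StorageContext; note that SummaryIndex keeps its data in the docstore/index store, so the vector store itself is only used by the VectorStoreIndex.
Python
from llama_index.core import StorageContext, SummaryIndex, VectorStoreIndex

# "pg_vector_store" and "documents" are assumed to exist already.
storage_context = StorageContext.from_defaults(vector_store=pg_vector_store)

vector_index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
summary_index = SummaryIndex.from_documents(
    documents, storage_context=storage_context
)
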
2 comments
llama-index 0.10.19+ has the following errors (I haven't checked prior versions):
Plain Text
def get_pg_storage_context():
    from llama_index.core import StorageContext
    from llama_index.storage.docstore.postgres import PostgresDocumentStore
    from llama_index.storage.index_store.postgres import PostgresIndexStore
    from llama_index.vector_stores.postgres import PGVectorStore

    storage_context = StorageContext.from_defaults(
        docstore=PostgresDocumentStore.from_uri(uri="postgres://....."),
        index_store=PostgresIndexStore.from_uri(uri="postgres://..."),
        vector_store=PGVectorStore.from_uri(uri="postgres://..."),
    )
    return storage_context

This raises the following exception for the docstore:
Plain Text
File ~/.virtualenvs/grizzly_3.10/lib/python3.10/site-packages/llama_index/storage/docstore/postgres/base.py:5
      3 from llama_index.core.storage.docstore.keyval_docstore import KVDocumentStore
      4 from llama_index.core.storage.docstore.types import DEFAULT_BATCH_SIZE
----> 5 from llama_index.storage.kvstore.postgres import PostgresKVStore

ModuleNotFoundError: No module named 'llama_index.storage.kvstore'

The same issue occurs in llama_index/storage/index_store/postgres/base.py:4.

PostgresKVStore actually lives in core, at: from llama_index.core.storage.kvstore.postgres_kvstore import PostgresKVStore
Fixing the bad imports allows the function above to run, but I don't know if there is anything else wrong with PostgresDocumentStore and PostgresIndexStore. Should I submit a PR for this?
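One thing worth checking before patching imports, assuming the usual 0.10 namespace-package split: llama_index.storage.kvstore.postgres is provided by a separate integration package, so the ModuleNotFoundError can also mean that package simply isn't installed:
Plain Text
pip install llama-index-storage-kvstore-postgres
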
4 comments
I've added documents to an index, specifying a custom doc_id for each document.
a) How do I determine if a doc_id already exists within the index?
b) How do I get a list of all documents within an index?
c) How do I delete a document from the index?
I've tried looking this up (https://github.com/run-llama/llama_index/issues/3255), but it seems the API has changed and the answers are obsolete.
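A sketch of one way to do all three, assuming the docstore-backed APIs in recent llama-index releases (method names may vary by version, and ref_doc_info is not implemented for every vector store integration):
Python
# "index" is assumed to be an existing index with a populated docstore.

# a) check whether a doc_id already exists in the docstore
print(index.docstore.document_exists("my-doc-id"))

# b) list ingested documents: ref_doc_info maps doc_id -> RefDocInfo
print(list(index.ref_doc_info.keys()))

# c) delete a document (and its nodes) from the index
index.delete_ref_doc("my-doc-id", delete_from_docstore=True)
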
2 comments
Does PGVectorStore support FilterCondition.OR?!
3 comments
In my example above, I'm using SimpleDataStore as the storage context.
This ChromaDB example, https://docs.llamaindex.ai/en/stable/examples/vector_stores/chroma_metadata_filter.html, shows that multiple metadata filters with an OR condition are supported.

Does this mean that not all storage contexts support FilterCondition.OR?!
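For reference, a sketch of the filter construction in question, using the names from llama_index.core.vector_stores; whether the OR condition is honored depends on the particular vector store integration:
Python
from llama_index.core.vector_stores import (
    FilterCondition,
    MetadataFilter,
    MetadataFilters,
)

# Match nodes whose "book" metadata is either "xyz" or "gef".
filters = MetadataFilters(
    filters=[
        MetadataFilter(key="book", value="xyz"),
        MetadataFilter(key="book", value="gef"),
    ],
    condition=FilterCondition.OR,
)

# "index" is assumed to be an existing VectorStoreIndex.
retriever = index.as_retriever(filters=filters)
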
4 comments
Topic: nested MetadataFilters
I am attempting to use MetadataFilters (similar to the approach this article describes: https://docs.llamaindex.ai/en/stable/examples/multi_tenancy/multi_tenancy_rag.html)
Recently, nested MetadataFilters were introduced in version 0.10.19
https://github.com/run-llama/llama_index/pull/11778
which fixes:
***
Notably, this changes the code in llama_index/core/vector_stores/types.py::MetadataFilters:
Plain Text
class MetadataFilters(BaseModel):
    """Metadata filters for vector stores."""

    # Exact match filters and Advanced filters with operators like >, <, >=, <=, !=, etc.
    filters: List[Union[MetadataFilter, ExactMatchFilter, "MetadataFilters"]]
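As an illustration, a sketch of the nesting the recursive filters field above permits, assuming the filter names from llama_index.core.vector_stores:
Python
from llama_index.core.vector_stores import (
    FilterCondition,
    MetadataFilter,
    MetadataFilters,
)

# book == "xyz" AND (page == "3" OR page == "374")
nested_filters = MetadataFilters(
    filters=[
        MetadataFilter(key="book", value="xyz"),
        MetadataFilters(  # inner group is OR'ed, then AND'ed with "book"
            filters=[
                MetadataFilter(key="page", value="3"),
                MetadataFilter(key="page", value="374"),
            ],
            condition=FilterCondition.OR,
        ),
    ],
    condition=FilterCondition.AND,
)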

However, when I run the code below, I encounter the following errors
9 comments