Tibiritabara
Offline, last seen 3 months ago
Joined September 25, 2024
Hey hey team, there is an orphan reference to ChatFireworks in llama_index.core.bridge.langchain when using llama-index v0.10.41 and llama-index-llms-langchain 0.1.4. I created a bug report here: https://github.com/run-llama/llama_index/issues/13868
1 comment
Hey team, thank you so much for the hard work you put into upgrading everything. Just a quick heads up: the postgres integrations llama-index-storage-kvstore-postgres, llama-index-vector-stores-postgres, and llama-index-storage-index-store-postgres all work. The one that is not working is llama-index-storage-docstore-postgres; it throws the following exception:

Plain Text
Because no versions of llama-index-storage-docstore-postgres match >0.1.0,<0.2.0
 and llama-index-storage-docstore-postgres (0.1.0) depends on llama-index-core (0.10.0), llama-index-storage-docstore-postgres (>=0.1.0,<0.2.0) requires llama-index-core (0.10.0).
So, because project depends on both llama-index-core (^0.10.1) and llama-index-storage-docstore-postgres (^0.1.0), version solving failed.


Any guidance would be greatly appreciated, and thank you so much for your help and support.
2 comments
So, I am currently implementing a multi-user solution using PGVectorStore together with the recently created PostgresDocumentStore and PostgresIndexStore for key-value storage. From what I have read, should each user have a different index stored in the PostgresIndexStore? The only example available on multi-user/multi-tenant applications on the LlamaIndex blog (https://blog.llamaindex.ai/building-multi-tenancy-rag-system-with-llamaindex-0d6ab4e0c44b) talks about metadata filtering and does not bring in the concept of separate indexes for this purpose. For that reason, I still struggle to understand the idea behind the IndexStore, as most of the examples and tutorials available use only the VectorStore.

Having asked this, after digging around the web a bit, I noticed that PGVectorStore does not seem to work as expected with the IndexStore, as seen in this issue: https://github.com/run-llama/llama_index/issues/7360. Should I pursue the integration of those three stores? I would greatly appreciate any guidance or support.
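
For reference, this is roughly the setup I have in mind: all three stores backed by the same Postgres database, one shared index, and per-user isolation done through metadata filters (as in the blog post) rather than per-user indexes. The connection details, table names, and the user_id key below are placeholders, and the exact constructor parameters may differ between versions:

Python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
from llama_index.storage.docstore.postgres import PostgresDocumentStore
from llama_index.storage.index_store.postgres import PostgresIndexStore
from llama_index.vector_stores.postgres import PGVectorStore

PG_URI = "postgresql://user:password@localhost:5432/llama"  # placeholder

# The docstore keeps node content, the index store keeps index metadata,
# and PGVectorStore keeps the embeddings, all in the same database.
storage_context = StorageContext.from_defaults(
    docstore=PostgresDocumentStore.from_uri(PG_URI),
    index_store=PostgresIndexStore.from_uri(PG_URI),
    vector_store=PGVectorStore.from_params(
        host="localhost",
        port="5432",
        user="user",
        password="password",
        database="llama",
        table_name="embeddings",
        embed_dim=1536,  # must match the embedding model
    ),
)

# Documents are tagged with their tenant at ingestion time...
docs = [Document(text="...", metadata={"user_id": "alice"})]
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)

# ...and each user's retriever filters on that tag, so one index serves everyone.
retriever = index.as_retriever(
    filters=MetadataFilters(filters=[ExactMatchFilter(key="user_id", value="alice")])
)

That mirrors the metadata-filtering approach from the blog post; whether the docstore and index store should additionally be partitioned per user is exactly the part I am unsure about.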
5 comments

Postgres

While reading the llama-index release notes, I noticed that v0.9.37 added a postgres docstore, as seen in #10233.

How does this differ from the current PGVectorStore, and what would be the use case for this postgres docstore?
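
To make the question concrete, here is how I currently picture the split, using the generic in-memory docstore rather than the new postgres implementation from #10233 (import paths are the current ones; 0.9.x lives under llama_index directly), so please correct me if this is off:

Python
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore

# A docstore is a key-value store for the nodes themselves: full text,
# metadata, and relationships, addressable by node id.
nodes = SentenceSplitter().get_nodes_from_documents([Document(text="...")])
docstore = SimpleDocumentStore()  # presumably the postgres docstore backs this with a table
docstore.add_documents(nodes)
node = docstore.get_node(nodes[0].node_id)

# A vector store, by contrast, holds embeddings and answers similarity queries;
# it returns matching node ids/content rather than acting as the system of
# record for every node.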
4 comments
Hey hey folks! I hope you are having a fantastic week, and I wish you a fantastic weekend ahead. I have a question: I am creating a list of nodes and retrieving their mappings, similar to how it is done in this create_llama example: https://github.com/run-llama/create_llama_projects/blob/5c136d5b561f7e2b806ecce9d802bdcf9f8d9c80/embedded-tables/backend/app/utils/index.py#L138. The nodes are already stored in OpenSearch.

Now, the question is: how do I store the node_mappings effectively? In the example the mappings are pickle files, but those do not scale easily when loading thousands of mappings for thousands of files. Is there native support for a database to store them, or does someone have an example of them stored in a scalable solution (redis, opensearch, pg, s3)? Thank you so much to everyone for the help and support.
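
For what it's worth, the direction I am leaning toward is persisting the mapped nodes in a document store instead of pickling them, and rebuilding the node mapping dict from it at query time. A minimal sketch, assuming the postgres docstore and a placeholder connection string (not validated against the linked example):

Python
from llama_index.core.schema import IndexNode, TextNode
from llama_index.storage.docstore.postgres import PostgresDocumentStore

PG_URI = "postgresql://user:password@localhost:5432/llama"  # placeholder

# Stand-ins for the nodes the example builds; in practice these come from the
# table/element extraction step before they are written to OpenSearch.
base_node = TextNode(text="full table content ...")
index_node = IndexNode(text="summary of the table", index_id=base_node.node_id)

# Ingestion: persist the mappings in the docstore instead of a pickle file.
docstore = PostgresDocumentStore.from_uri(PG_URI)
docstore.add_documents([base_node, index_node])

# Query time: rebuild the dict that gets passed as node_dict= to the
# RecursiveRetriever in the example. docstore.docs loads everything; for very
# large sets, individual ids can be fetched with docstore.get_node instead.
node_mappings = docstore.docs

That would cover pg (and redis, via the equivalent RedisDocumentStore), but I am not sure whether this is the intended pattern or whether there is something more native for node mappings.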
3 comments