PwnosaurusRex
Offline, last seen last month
Joined September 25, 2024
PwnosaurusRex

Tool

This seems like a really awesome way to avoid tools (e.g., to run a SQL query or a similarity search), but I can't find any way to add memory and convert it to a chat engine, even looking at the base methods... Is it possible?

https://docs.llamaindex.ai/en/stable/examples/query_engine/pgvector_sql_query_engine/
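
A sketch of one approach (an assumption, not confirmed against this specific engine): since the engine in the linked example is a regular query engine, it may be possible to wrap it in a CondenseQuestionChatEngine with a ChatMemoryBuffer to get memory on top.

Python
from llama_index.core.chat_engine import CondenseQuestionChatEngine
from llama_index.core.memory import ChatMemoryBuffer

# Assumption: `query_engine` is the PGVectorSQLQueryEngine built in the
# linked example. CondenseQuestionChatEngine rewrites each chat turn into
# a standalone question and forwards it to the wrapped query engine.
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
    memory=memory,
)
response = chat_engine.chat("Which city has the highest population?")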
1 comment
I'm trying to add metadata filtering with LanceDB. I have it working fine using their package directly, as outlined here and here.

However, if I try to use MetadataFilters from LlamaIndex with LanceDB, I always get no results... Thoughts? Something to do with this section of code?

Example query below... I tried the key metadata.theme as well.

Python
from llama_index.core.vector_stores import (
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

filters = MetadataFilters(
    filters=[
        MetadataFilter(
            key="theme", operator=FilterOperator.EQ, value="Fiction"
        ),
    ]
)
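
For reference, a sketch of how these filters would be applied at query time, assuming `index` is a VectorStoreIndex backed by the LanceDB store (whether the key needs flattening or prefixing is exactly the open question here):

Python
# Sketch: pass the MetadataFilters from above through the retriever.
# Assumes `index` is a VectorStoreIndex over LanceDBVectorStore.
retriever = index.as_retriever(similarity_top_k=5, filters=filters)
nodes = retriever.retrieve("What fiction is in the collection?")
for node in nodes:
    print(node.score, node.metadata)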
8 comments
Quick question: does QueryFusionRetriever generate num_queries queries for each retriever defined? In the example here, 3 are generated, but for some reason I get 6 (3 for BM25 and 3 for vector?). I would prefer just the 3, since they're almost always the same...

Plain Text
Generated queries:
1. What were the major events or milestones in the history of Interleafe and Viaweb?
2. Can you provide a timeline of the key developments and achievements of Interleafe and Viaweb?
3. What were the successes and failures of Interleafe and Viaweb as companies?
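
For context, a sketch of the setup being described, assuming `vector_retriever` and `bm25_retriever` already exist; note that num_queries counts the original query, so num_queries=4 yields 3 generated queries:

Python
from llama_index.core.retrievers import QueryFusionRetriever

# Sketch, assuming `vector_retriever` and `bm25_retriever` are built already.
# num_queries includes the original query, so num_queries=4 generates 3 more;
# each query (original + generated) is then run against each retriever.
retriever = QueryFusionRetriever(
    [vector_retriever, bm25_retriever],
    num_queries=4,
    mode="reciprocal_rerank",
    use_async=True,
    verbose=True,
)
nodes = retriever.retrieve("What happened at Interleaf and Viaweb?")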
8 comments
Anyone here have a good doc on what these options mean? I know default = vector and sparse = BM25(F?)... If I use QueryFusionRetriever to build a "hybrid", is that functionally the same as just calling hybrid below? What about semantic_hybrid? How about text_search... is that just a keyword search? I thought sparse is basically text search but does some extra work to better weight certain things (e.g., overuse of a keyword)?

https://docs.llamaindex.ai/en/latest/api_reference/storage/vector_store/?h=vectorstorequery#llama_index.core.vector_stores.types.VectorStoreQueryMode

Python
    DEFAULT = "default"
    SPARSE = "sparse"
    HYBRID = "hybrid"
    TEXT_SEARCH = "text_search"
    SEMANTIC_HYBRID = "semantic_hybrid"
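
For reference, a sketch of where the mode plugs in; which modes a given vector store actually honors is backend-specific, which is presumably why the docs are thin here:

Python
# Sketch: the query mode is passed through the retriever to the vector store.
# Unsupported modes are backend-dependent and may error or fall back to dense.
retriever = index.as_retriever(
    vector_store_query_mode="hybrid",
    similarity_top_k=5,
    alpha=0.5,  # dense vs. sparse weighting, where the backend supports it
)
nodes = retriever.retrieve("hybrid search example query")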
22 comments
QQ: I turned on logging when using a Hugging Face embedding, and I see it always connects to Hugging Face, even if I already have the model cached. It looks like it grabs tokenizer_config.json and config.json each time, even though I see those files in the cache folder. Any way to tell it to stop?

Python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
# loads BAAI/bge-small-en
# embed_model = HuggingFaceEmbedding()
# loads BAAI/bge-small-en-v1.5
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
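
One thing that may help (worth testing, and not specific to LlamaIndex): huggingface_hub and transformers respect offline environment variables, which skip the remote config checks when the files are already cached.

Python
import os

# Assumption: setting these before the model loads forces cache-only mode,
# so the tokenizer_config.json / config.json fetches are skipped.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")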
8 comments
Hey all, I know when I run index = VectorStoreIndex.from_documents(documents) I can pass show_progress, but that just shows the overall node progress. Is it possible to print one of the metadata fields so I can see which file/node is being processed? I have a file that is causing failures, but I'm not sure which one it is, and even with full logging turned on I couldn't find it.
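
One way to isolate the offending file, sketched under the assumption that `documents` came from a reader that populates file_name metadata: skip from_documents and insert one document at a time, printing the source before each insert.

Python
from llama_index.core import VectorStoreIndex

# Sketch: insert documents one at a time so a failure can be attributed
# to a specific source file (assumes the reader set "file_name" metadata).
index = VectorStoreIndex(nodes=[])
for doc in documents:
    name = doc.metadata.get("file_name", "<unknown>")
    print("indexing:", name)
    try:
        index.insert(doc)
    except Exception as exc:
        print("failed on:", name, "->", exc)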
1 comment
PwnosaurusRex

Thanks

Just wanted to say thanks to @Logan M and the rest of the team for the AMAZING support provided here. The documentation on the site also makes this the most approachable library I've ever used, and it's great having ready-to-use examples for all the workflows. It makes testing and tweaking much easier and faster!
1 comment
Hey all, anyone here using create-llama as a starting point? I did for a quick test of building a backend and it worked great! I'm looking for some advice: what's the easiest way to add the nodes and corresponding metadata that were returned? I want to keep the streaming response, and I don't think I can modify index.as_chat_engine... Just build a custom chat engine object like this?

https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/usage_pattern/#low-level-composition-api
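
A sketch of what may make this unnecessary, assuming `chat_engine` is the engine built via the linked low-level pattern: the streaming response object also carries source_nodes, so the node metadata can be read off it without giving up streaming.

Python
# Sketch: stream the tokens, then read the retrieved nodes and their
# metadata off the same response object once the stream is consumed.
response = chat_engine.stream_chat("Summarize the key findings")
for token in response.response_gen:
    print(token, end="", flush=True)

for node in response.source_nodes:
    print(node.metadata, node.score)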
2 comments
QQ on vector stores. I've been playing first with the native JSON storage method, then ChromaDB and LanceDB. Using the same dataset, which generated a 50 MB JSON file, all these methods seem to take about a minute to load the index, and they don't max out the memory or CPU on the machine during the load.

Do these "serverless" methods all just take a long time to load at first? Is the entire dataset loaded into memory?

If I move to Postgres or similar, I assume the "index load" will be much faster?
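
For comparison, a sketch of reattaching to an already-populated store instead of rebuilding the index, which is usually where the load time goes (Chroma shown; the path and collection name are placeholders):

Python
import chromadb
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Sketch: reconnect to an existing Chroma collection. from_vector_store
# avoids re-parsing a large docstore JSON on startup.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("my_collection")
vector_store = ChromaVectorStore(chroma_collection=collection)
index = VectorStoreIndex.from_vector_store(vector_store)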
10 comments
SimpleDirectoryReader chunks a PDF into nodes by default (one per page); how do I control the node sizing? Do I need to call out each file type explicitly? Or can I use something like SimpleNodeParser to override the defaults?
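
For reference, a sketch of the usual division of labor: the reader only produces Documents (one per PDF page), and the node parser in the transformations controls chunk size (SentenceSplitter being the current equivalent of the older SimpleNodeParser):

Python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# Sketch: the reader yields page-level Documents; the splitter controls
# how those get chunked into nodes. Chunk sizes here are illustrative.
documents = SimpleDirectoryReader("./data").load_data()
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=64)
index = VectorStoreIndex.from_documents(documents, transformations=[splitter])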
33 comments
Hey folks, just getting my feet wet. I really like ChromaDB for giving me a SQLite file I can look at and understand. Are there any other SQLite-backed vector stores that also support hybrid search? Or is Weaviate in Docker the next step?
21 comments