Pujari12
Offline, last seen 3 months ago
Joined September 25, 2024
Hi, I am getting this error while trying to import download_loader -
ImportError: cannot import name 'download_loader' from 'llama_index' (unknown location)
How do I solve this?
8 comments
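A minimal sketch of the usual fix, assuming llama-index >= 0.10, where the monolithic package was split up and the old top-level import was removed; readers are now installed as their own packages and imported directly (llama-index-readers-file and PDFReader below are just one example):

# pip install llama-index-readers-file
from pathlib import Path

from llama_index.readers.file import PDFReader

# load a local PDF via the reader's own package instead of download_loader
reader = PDFReader()
documents = reader.load_data(file=Path("paper.pdf"))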
Pujari12 · Bm25
Hi, while trying to use bm25_retriever = BM25Retriever.from_defaults(docstore=index.docstore, similarity_top_k=2), I am getting a ZeroDivisionError. I am using Weaviate as the vector store and the loaded index does have a good number of documents. How can I resolve this error?
5 comments
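A minimal sketch of a likely cause, assuming llama-index 0.10.x: with a remote vector store such as Weaviate the nodes live in Weaviate, not in the local docstore, so index.docstore is empty and BM25 ends up dividing by zero documents. Passing nodes explicitly avoids relying on the docstore (documents below stands for whatever you originally loaded):

from llama_index.core.node_parser import SentenceSplitter
from llama_index.retrievers.bm25 import BM25Retriever

# rebuild nodes from the source documents instead of the (empty) docstore
nodes = SentenceSplitter().get_nodes_from_documents(documents)

bm25_retriever = BM25Retriever.from_defaults(
    nodes=nodes,
    similarity_top_k=2,
)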
Pujari12 · Nodes
Hi, while trying to run the fusion_retriever notebook, I am getting this error when initializing the BM25 retriever - TypeError: BM25Retriever.from_defaults() got an unexpected keyword argument 'nodes'
How do I solve it?
4 comments
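A minimal sketch, assuming the TypeError comes from a version mismatch between the installed core package and the BM25 retriever package; upgrading both usually restores the nodes keyword the notebook uses:

# pip install -U llama-index llama-index-retrievers-bm25
from llama_index.retrievers.bm25 import BM25Retriever

bm25_retriever = BM25Retriever.from_defaults(
    nodes=nodes,          # the nodes built earlier in the notebook
    similarity_top_k=2,
)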
Hi, I am trying to customize the SentenceWindowNodeParser, e.g. change the chunk_length and the splitter. How can I do it?
7 comments
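A minimal sketch, assuming llama-index 0.10.x. SentenceWindowNodeParser has no chunk_length parameter; the closest knobs are window_size and a custom sentence_splitter callable (my_splitter below is a hypothetical example):

from llama_index.core.node_parser import SentenceWindowNodeParser

def my_splitter(text: str) -> list[str]:
    # hypothetical splitter: naive split on periods
    return [s.strip() + "." for s in text.split(".") if s.strip()]

node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=5,                  # sentences of context on each side
    sentence_splitter=my_splitter,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)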
What's the best way to load a tweet? The LlamaHub twitter package isn't working.
6 comments
Hi, is it possible to get this type of response in LlamaIndex - where the source data is mentioned along with the answer, i.e. getting the source data for each part of the response? One obvious way, I guess, is prompting the LLM in "create and refine" mode to mention source metadata along with the answer.
1 comment
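A minimal sketch of one built-in option, assuming llama-index 0.10.x and an existing index: CitationQueryEngine numbers the retrieved chunks as sources and instructs the LLM to cite them inline, so each part of the answer points back to its source text (the query string below is hypothetical):

from llama_index.core.query_engine import CitationQueryEngine

query_engine = CitationQueryEngine.from_args(
    index,
    similarity_top_k=3,
    citation_chunk_size=512,  # granularity of the numbered sources
)

response = query_engine.query("What does the paper say about attention?")
print(response)                        # answer with inline [n] markers
print(response.source_nodes[0].text)   # the text behind citation [1]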
Hi guys, I am trying the Qdrant vector store but getting a timeout error while running the code below -
client = QdrantClient(url="http://xxxxxxxxxxxxxxxxxxxxx")

# create our vector store with hybrid indexing enabled
# batch_size controls how many nodes are encoded with sparse vectors at once
vector_store = QdrantVectorStore(
    "llama2_paper", client=client, enable_hybrid=True, batch_size=20
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)
Settings.chunk_size = 512

index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)
How do I solve this?
10 comments
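A minimal sketch of a common mitigation, assuming the timeout happens inside the Qdrant client during indexing: qdrant-client accepts a timeout argument (in seconds), and a smaller batch_size shrinks each upsert:

from qdrant_client import QdrantClient

from llama_index.vector_stores.qdrant import QdrantVectorStore

client = QdrantClient(
    url="http://xxxxxxxxxxxxxxxxxxxxx",  # URL elided in the original post
    timeout=120,                          # seconds; default is much lower
)

vector_store = QdrantVectorStore(
    "llama2_paper", client=client, enable_hybrid=True, batch_size=8
)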
Do I need to reload an index after every ingestion?
16 comments
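A minimal sketch, assuming llama-index 0.10.x and an existing in-memory VectorStoreIndex named index: a live index can be updated in place with insert(), so there is no need to rebuild or reload it after each ingestion:

from llama_index.core import Document

# the existing index stays queryable; no reload needed
index.insert(Document(text="newly ingested text"))
response = index.as_query_engine().query("What was just ingested?")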
Hi, I just saw this article - https://medium.com/enterprise-rag/open-sourcing-rule-based-retrieval-677946260973 - which talks about rule-based/deterministic ways to tell the model to use a particular chunk. Is there anything similar to this in LlamaIndex?
4 comments
Seems like FilterOperator.IN isn't working - can someone confirm?
1 comment
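A minimal sketch of how FilterOperator.IN is wired up, assuming llama-index 0.10.x; note that each vector store backend implements its own subset of operators, so an unsupported one can fail or silently return nothing (the key and values below are hypothetical):

from llama_index.core.vector_stores import (
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

filters = MetadataFilters(
    filters=[
        MetadataFilter(
            key="author",               # hypothetical metadata key
            value=["alice", "bob"],     # match any value in the list
            operator=FilterOperator.IN,
        )
    ]
)
retriever = index.as_retriever(filters=filters)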
Hi, I am using the simple fusion retriever along with the RetrieverQueryEngine. I want to do metadata filtering, for which I am using a node postprocessor. But in this approach we retrieve first and then filter (if I'm not wrong). Can we do it the other way round, i.e. filter first and then retrieve? Hope my question makes sense lol
1 comment
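A minimal sketch of the filter-first direction, assuming llama-index 0.10.x: filters passed to as_retriever() are pushed down to the vector store, so candidates are narrowed before similarity search rather than post-processed afterwards (the key below is hypothetical):

from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

filters = MetadataFilters(
    filters=[ExactMatchFilter(key="source", value="docs")]  # hypothetical key
)
retriever = index.as_retriever(filters=filters, similarity_top_k=5)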
I have created a custom node_postprocessor and passed it as an argument to the query engine as -
query_engine = RetrieverQueryEngine.from_args(retriever, node_postprocessors=[postprocessor])
However, while querying the engine I am getting this error -
RuntimeError Traceback (most recent call last)
<ipython-input-51-cc65093c9740> in <cell line: 1>()
----> 1 response = query_engine.query(
2 "is sarthak a good boy"
3 )

22 frames
/usr/lib/python3.10/asyncio/runners.py in run(main, debug)
31 """
32 if events._get_running_loop() is not None:
---> 33 raise RuntimeError(
34 "asyncio.run() cannot be called from a running event loop")
35

RuntimeError: asyncio.run() cannot be called from a running event loop
6 comments
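A minimal sketch of the usual notebook fix: Jupyter/Colab already run an event loop, so anything that calls asyncio.run() internally raises this RuntimeError, and nest_asyncio patches the loop to allow the nested call:

# pip install nest-asyncio
import nest_asyncio

nest_asyncio.apply()  # run once, before querying

response = query_engine.query("is sarthak a good boy")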
Hi, I am using the QueryFusionRetriever -
retriever = QueryFusionRetriever(
[index_1.as_retriever(), index_2.as_retriever()],
similarity_top_k=5,
num_queries=4, # set this to 1 to disable query generation
use_async=True,
verbose=True,
# query_gen_prompt="...", # we could override the query generation prompt here
)
Can we inject our own query into the generated queries list and then proceed with retrieval? Also, when filtering, will the filter be applied to both indexes? More importantly, how do I do filtering with this retriever?
6 comments
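A minimal sketch for the filtering part, assuming llama-index 0.10.x: QueryFusionRetriever takes no filters itself, but each wrapped retriever can carry its own, so the filter is applied inside both indexes during retrieval (the filter key below is hypothetical):

from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

filters = MetadataFilters(
    filters=[ExactMatchFilter(key="lang", value="en")]  # hypothetical filter
)

retriever = QueryFusionRetriever(
    [
        index_1.as_retriever(filters=filters),
        index_2.as_retriever(filters=filters),
    ],
    similarity_top_k=5,
    num_queries=4,
)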
Hi, suppose I have two indexes and I want a query to use both of them to get the answer. How do I do it? Also, I don't want to use tooling.
1 comment
Hi, suppose I have 100k indexes - should I load all of them into memory to run queries? Is there a more optimized way to do it?
6 comments
Hi guys, as far as I understand, we need to load an index every time before querying against it, and this process creates latency. To work around it, I am loading all indexes into memory first and then querying. Is there any other way to do this? What if the number of bots in my system is dynamic?
7 comments
Pujari12 · Multi-modal
Hi, is there any tutorial on multi-modal use cases using LlamaIndex and Weaviate? Or any tutorial on how to store an image document in Weaviate using LlamaIndex?
10 comments
Hi, does anyone have any resources/demo notebooks for creating a recommendation chatbot using LlamaIndex?
1 comment
Hi guys, I am trying to run this example notebook - https://docs.llamaindex.ai/en/stable/examples/pipeline/query_pipeline_memory/ - and I am getting this error at the end -
How do I fix it? @Logan M
2 comments
Hi guys, I am trying to use QueryFusionRetriever with MultiModalVectorStoreIndex. I am using Azure OpenAI for the embeddings and the LLM. However, I am getting this error -
8 comments
I'm just following the notebook; currently using this -
import base64

import requests

from llama_index.core.schema import ImageDocument

base64str = base64.b64encode(response.content).decode("utf-8")
base64str2 = base64.b64encode(requests.get(image_url).content).decode("utf-8")

image_document = ImageDocument(image=base64str, image_mimetype="image/jpeg")
10 comments
Hi guys, I am trying to run this example notebook - https://docs.llamaindex.ai/en/stable/examples/multi_modal/azure_openai_multi_modal/?h=azureopenaimultimodal
While trying to get the model to respond, e.g.:
complete_response = azure_openai_mm_llm.complete(
    prompt="Describe the images as an alternative text",
    image_documents=[image_document],
)
I am getting this error - BadRequestError: Error code: 400 - {'error': {'message': 'Invalid content type. image_url is only supported by certain models.', 'type': 'invalid_request_error', 'param': 'messages.[0].content.[1].type', 'code': None}}
How do I resolve this?
8 comments
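A minimal sketch, assuming the 400 error means the Azure deployment behind the client is not vision-capable; pointing engine at a gpt-4o / gpt-4-vision deployment is the usual fix (all names and values below are hypothetical placeholders):

from llama_index.multi_modal_llms.azure_openai import AzureOpenAIMultiModal

azure_openai_mm_llm = AzureOpenAIMultiModal(
    engine="my-gpt-4o-deployment",  # must be a vision-capable deployment
    model="gpt-4o",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_key="...",
    api_version="2024-02-15-preview",
    max_new_tokens=300,
)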
Hi guys, I am using the simple fusion retriever with Qdrant as the vector DB. While trying to run nodes_with_scores = retriever.retrieve("How do I setup a chroma vector store?"), I am getting this error -
17 comments
Guys, any thoughts on this? Is it possible to get this type of response in LlamaIndex - where the source data is mentioned along with the answer, i.e. getting the source data for each part of the response? One obvious way, I guess, is prompting the LLM in "create and refine" mode to mention source metadata along with the answer.
7 comments
Hi, I am looking to save web pages in an index. What's the best way to do it? I saw a few of the data loaders in llama_hub but they aren't working.
4 comments
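A minimal sketch with one web reader that is maintained as its own package, assuming llama-index 0.10.x (the URL below is just an example):

# pip install llama-index-readers-web
from llama_index.core import VectorStoreIndex
from llama_index.readers.web import SimpleWebPageReader

documents = SimpleWebPageReader(html_to_text=True).load_data(
    urls=["https://docs.llamaindex.ai/en/stable/"]
)
index = VectorStoreIndex.from_documents(documents)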