Find answers from the community

anupamaze
Joined September 25, 2024
LookupError:
**
Resource stopwords not found.
Please use the NLTK Downloader to obtain the resource

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 5
    from llama_index.legacy.llms import ChatMessage, MessageRole
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/llms/__init__.py", line 12, in <module>
from llama_index.legacy.llms.ai21 import AI21
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/llms/ai21.py", line 14, in <module>
from llama_index.legacy.llms.base import llm_chat_callback, llm_completion_callback
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/llms/base.py", line 25, in <module>
from llama_index.legacy.core.query_pipeline.query_component import (
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/core/query_pipeline/query_component.py", line 23, in <module>
from llama_index.legacy.core.response.schema import Response
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/core/response/schema.py", line 7, in <module>
from llama_index.legacy.schema import NodeWithScore
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/schema.py", line 17, in <module>
from llama_index.legacy.utils import SAMPLE_TEXT, truncate_text
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/utils.py", line 89, in <module>
globals_helper = GlobalsHelper()
File "/home/adminuser/venv/lib/python3.9/site-packages/llama_index/legacy/utils.py", line 62, in __init__
nltk.download("stopwords", download_dir=self._nltk_data_dir)
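A common fix is to fetch the corpus once, before importing libraries that assume it is present. A minimal sketch, assuming `nltk` is installed and the default download location is writable (the helper name is illustrative):

```python
import nltk

def ensure_stopwords(download_dir=None):
    """Download the 'stopwords' corpus if it cannot be found locally."""
    try:
        # Skip the download on repeated runs if the corpus already exists
        # somewhere on nltk's search path.
        nltk.data.find("corpora/stopwords")
    except LookupError:
        nltk.download("stopwords", download_dir=download_dir)
```

Calling `ensure_stopwords()` at the top of the Streamlit script, before the `llama_index` imports, should prevent the LookupError above.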
8 comments
While trying to create an index from nodes

I am running into this error when the number of documents is large

TypeError Traceback (most recent call last)
Input In [11], in <cell line: 1>()
----> 1 index = VectorStoreIndex(nodes, service_context=service_context)
      2 index.storage_context.persist("./storage_all"+str(len(documents)))

File ~/anaconda3/lib/python3.9/site-packages/llama_index/indices/vector_store/base.py:52, in VectorStoreIndex.__init__(self, nodes, index_struct, service_context, storage_context, use_async, store_nodes_override, insert_batch_size, show_progress, **kwargs)
     50 self._store_nodes_override = store_nodes_override
     51 self._insert_batch_size = insert_batch_size
---> 52 super().__init__(
     53     nodes=nodes,
     54     index_struct=index_struct,
     55     service_context=service_context,
     56     storage_context=storage_context,
     57     show_progress=show_progress,
     58     **kwargs,
     59 )

File ~/anaconda3/lib/python3.9/site-packages/llama_index/indices/base.py:51, in BaseIndex.__init__(self, nodes, index_struct, storage_context, service_context, show_progress, **kwargs)
49 raise ValueError("Only one of nodes or index_struct can be provided.")
50 # This is to explicitly make sure that the old UX is not used
---> 51 if nodes is not None and len(nodes) >= 1 and not isinstance(nodes[0], BaseNode):
52 if isinstance(nodes[0], Document):
53 raise ValueError(
54 "The constructor now takes in a list of Node objects. "
55 "Since you are passing in a list of Document objects, "
56 "please use from_documents instead."
57 )

TypeError: 'dict_values' object is not subscriptable
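The traceback suggests `nodes` was passed as a dict view (for example, the result of `index.docstore.docs.values()`) rather than a list. Dict views support iteration but not indexing, which is exactly what `nodes[0]` in the constructor needs. A minimal reproduction and fix, with a plain dict standing in for the docstore:

```python
# docstore.docs is a mapping of node ID -> node; its .values() view
# cannot be indexed, which is what triggers the TypeError above.
docs = {"id-1": "node one", "id-2": "node two"}
values = docs.values()

try:
    values[0]
except TypeError as exc:
    print(exc)  # 'dict_values' object is not subscriptable

# Converting the view to a list restores indexing, so it can be passed
# anywhere a list of nodes is expected.
nodes = list(docs.values())
assert nodes[0] == "node one"
```

So `VectorStoreIndex(list(index.docstore.docs.values()), ...)` rather than passing the view directly should avoid this error.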
15 comments
# persist the nodes
index.storage_context.persist()
....

# load the persisted nodes
index = load_index_from_storage(StorageContext.from_defaults(), service_context=service_context)

# get the nodes
nodes = index.docstore.docs

# use them further in your retriever
2 comments
When I try to retrieve the nodes from the index and use them in the BM25 retriever, I keep getting an error.
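For context, BM25 ranks candidate texts by term frequency, damped by document length and weighted by corpus-wide rarity of each query term; a BM25 retriever simply ranks node texts by this score. A tiny self-contained sketch of the scoring function (not llama-index's implementation, with default `k1`/`b` values chosen conventionally):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each whitespace-tokenized doc against the query terms."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(tokenized)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            # Document frequency: how many docs contain the term at all.
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            # Rarer terms get a higher inverse-document-frequency weight.
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            freq = tf[term]
            # Term-frequency saturation, normalized by document length.
            score += idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * len(tokens) / avgdl))
        scores.append(score)
    return scores
```

A doc matching more query terms scores higher; docs sharing no terms with the query score zero, which is why BM25 needs the actual node texts (as a list, not a dict view) to work with.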
2 comments
How does a higher or lower value of prediction_threshold impact the generation of entities?
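Conceptually (a sketch, not llama-index's code), the extractor keeps an entity only when the model's confidence meets `prediction_threshold`, so raising it yields fewer but higher-precision entities, while lowering it yields more but noisier ones:

```python
# Hypothetical (entity, confidence) predictions from an extraction model.
predictions = [("Paris", 0.93), ("bank", 0.41), ("Einstein", 0.88), ("run", 0.12)]

def extract_entities(predictions, prediction_threshold):
    """Keep only entities whose confidence meets the threshold."""
    return [ent for ent, score in predictions if score >= prediction_threshold]

# A high threshold keeps only confident predictions...
assert extract_entities(predictions, 0.5) == ["Paris", "Einstein"]
# ...while a low threshold lets low-confidence noise through.
assert extract_entities(predictions, 0.1) == ["Paris", "bank", "Einstein", "run"]
```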
2 comments
ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
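T5 is an encoder-decoder (seq2seq) architecture, so it is not covered by `AutoModelForCausalLM`, which only maps decoder-only configurations. Loading it through `AutoModelForSeq2SeqLM` (or the concrete `T5ForConditionalGeneration` class) should avoid this ValueError. A sketch, assuming the `transformers` library is installed; the checkpoint name is an assumption:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def load_t5(checkpoint="t5-small"):  # checkpoint name is illustrative
    # Swap AutoModelForCausalLM for the seq2seq auto class; the rest of
    # the loading code can stay the same.
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
    return tokenizer, model
```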
37 comments
While retrieving text using vector similarity, I am seeing irrelevant text that has some similarity also getting pulled into the context. I am using nodes with entities. Is there a way to filter out such nodes/text?
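One common mitigation is a similarity cutoff: drop retrieved nodes whose score falls below a threshold before they reach the context (llama-index exposes this idea as a node postprocessor with a `similarity_cutoff` setting). A minimal sketch over `(text, score)` pairs; the 0.7 threshold is an arbitrary assumption to tune per dataset:

```python
def apply_similarity_cutoff(results, cutoff=0.7):
    """Keep only retrieved (text, score) pairs at or above the cutoff."""
    return [(text, score) for text, score in results if score >= cutoff]

retrieved = [
    ("passage about the queried entity", 0.86),
    ("loosely related passage", 0.55),
]
filtered = apply_similarity_cutoff(retrieved)
assert filtered == [("passage about the queried entity", 0.86)]
```

Too high a cutoff starves the context of useful passages, so it is worth inspecting the score distribution on real queries before fixing a value.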
7 comments
And it's so time-consuming to generate nodes every time on the same data.
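Since the nodes are derived deterministically from the same data, they only need to be generated once and can be cached on disk. `storage_context.persist()` is the library-native route, but the idea can be sketched with plain `pickle` (the cache file name and helper are illustrative):

```python
import os
import pickle

def get_nodes(build_fn, cache_path="nodes.pkl"):
    """Return cached nodes if present; otherwise build and cache them."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    nodes = build_fn()  # the expensive step runs only on a cache miss
    with open(cache_path, "wb") as f:
        pickle.dump(nodes, f)
    return nodes
```

The first run pays the full node-generation cost; subsequent runs on the same data load from disk instead.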
10 comments
Is there a way to include a memory buffer along with a hybrid retriever in the Condense Plus Context chat engine?
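Conceptually, a chat-engine memory buffer just keeps the most recent turns within a token budget and hands them to the condense step (llama-index's `ChatMemoryBuffer` plays this role and can generally be passed to chat engines via a `memory` argument). A minimal sketch of that trimming, with word counts standing in for real token counts:

```python
def trim_memory(messages, token_limit, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages whose combined cost fits the budget."""
    kept, used = [], 0
    # Walk backwards from the newest message, stopping once the budget fills.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > token_limit:
            break
        kept.append(msg)
        used += cost
    # Restore chronological order for the prompt.
    return list(reversed(kept))
```

The retriever choice (hybrid or otherwise) is orthogonal: memory governs which past turns are condensed into the standalone question, while the retriever governs which nodes become context.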
17 comments