Find answers from the community

ashishsha
Offline, last seen 4 weeks ago
Joined September 25, 2024
Which library does LlamaIndex's SimpleDirectoryReader use internally to read PDF files?
3 comments
I wonder if I am doing something incorrect -- all I did was add a bunch of metadata properties.
15 comments
- can you please help
11 comments
AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
21 comments
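The error message above spells out the fix: supply the key in code or via the OPENAI_API_KEY environment variable. A minimal sketch of the environment-variable route (the key value below is a placeholder, not a real key, and `require_api_key` is this example's own helper, not an OpenAI function):

```python
import os

# Set the key before any OpenAI-backed call is made; the library reads
# OPENAI_API_KEY from the environment, as the error message describes.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # placeholder, not a real key

def require_api_key():
    """Fail fast with a clear message instead of a late AuthenticationError."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("No API key provided; set OPENAI_API_KEY")
    return key

print(require_api_key())
```

Checking for the key at startup surfaces the problem immediately, rather than deep inside the first LLM call.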
It is failing here:

# Fetch up-to-date library from remote repo if loader_id not found
if loader_id is None:
    library_raw_content, _ = _get_file_content(loader_hub_url, "/library.json")
    library = json.loads(library_raw_content)
    if loader_class not in library:
        raise ValueError("Loader class name not found in library")

    loader_id = library[loader_class]["id"]
    extra_files = library[loader_class].get("extra_files", [])
    # Update cache
    with open(library_path, "w") as f:
        f.write(library_raw_content)
7 comments
Not sure what the error ValueError("Vector store is required for vector store query.") means.
22 comments
Question - how do I extract the source docs from the chat response?
5 comments
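For the question above: LlamaIndex chat and query responses carry their retrieved sources on a `source_nodes` attribute (the AgentChatResponse post further down this page mentions the same field). A sketch using stand-in classes, since no live engine is available here; the real objects come back from the engine's chat/query call:

```python
# Stand-in classes mirroring the shape of a LlamaIndex response object.
class NodeWithScore:
    def __init__(self, text, score):
        self.text = text
        self.score = score

class ChatResponse:
    def __init__(self, response, source_nodes):
        self.response = response
        self.source_nodes = source_nodes  # retrieved source docs live here

resp = ChatResponse(
    "Plans start at $10/month.",
    [NodeWithScore("Pricing page excerpt...", 0.76)],
)

# Extract (text, score) pairs for the source documents behind the answer
sources = [(n.text, n.score) for n in resp.source_nodes]
print(sources)  # [('Pricing page excerpt...', 0.76)]
```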
The constructor accepts the chat history, but setting it afterwards is not allowed.
1 comment
Has anyone come across a scenario where the OpenAI completion API returns a partial response? The context has the entire information, yet the API somehow skips the last few lines and summarizes only about 70% of the context.
5 comments
Question - how can I get the chat history from the bot?
6 comments
Did an upgrade from 0.6.7 to the latest 0.7.8, and the app does not boot any more (running it as a Flask app under gunicorn). Last few lines of the error: File "/Users/ashish/opt/anaconda3/envs/LLMTools/lib/python3.9/site-packages/langchain/document_loaders/github.py", line 37, in <module>
class GitHubIssuesLoader(BaseGitHubLoader):
File "pydantic/main.py", line 198, in pydantic.main.ModelMetaclass.__new__
File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 663, in pydantic.fields.ModelField._type_analysis
File "pydantic/fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 668, in pydantic.fields.ModelField._type_analysis
File "/Users/ashish/opt/anaconda3/envs/LLMTools/lib/python3.9/typing.py", line 852, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
17 comments
Hi all - question: is the ReAct chat agent built on top of the LangChain chat agent?
9 comments
Another question - when an external store is specified, does it store everything in Weaviate? In the case of local storage we create three different files.
1 comment
A dumb question - what happens when we recreate a vector index with an external vector store like Weaviate? Does it add new objects every time we recreate the index (even if the underlying document is the same)? I see it is defining classes with the node id (Gpt_Index_5128082748824505963_Node), so I think it keeps adding objects under new classes every time we recreate an index? I would like more control over this - can I specify the class for the object I am going to add?
42 comments
Any guidance on chunk size... how small is too small? I believe the default is pretty big.
1 comment
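To make the chunk-size trade-off concrete, here is a naive character-based chunker. It is illustrative only: real LlamaIndex splitters work on tokens and respect sentence boundaries, and `chunk_text` is this example's own function, not a library API.

```python
def chunk_text(text, chunk_size, overlap=0):
    """Split text into fixed-size chunks; overlap carries context across cuts."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(chunk_text("abcdefghij", 4))             # ['abcd', 'efgh', 'ij']
print(chunk_text("abcdefghij", 4, overlap=1))  # ['abcd', 'defg', 'ghij', 'j']
```

Smaller chunks retrieve more precisely but lose surrounding context; larger chunks keep context but dilute the embedding signal.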
Hi all, just created this app: https://www.nlp-tools.app/ . You can create knowledge bases and query them via composite indexes: add multiple indexes to the knowledge base, convert them to composite indexes, and of course query them. Currently local files and web pages are supported; working on a few more. Please try it out - any feedback is appreciated. DM me if you run into issues and need to discuss something.
3 comments
Question - I have observed that the similarity score difference between a valid answer and an invalid answer is not as large as I was expecting. For example, I asked "what are the pricing plans" and the vector search comes up with a 0.76 score, but when I ask "what are your birthday party plans" the similarity score is 0.67. I believe both of these questions are talking about "plans" - is that the reason there is so little difference between the similarity scores?
14 comments
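The observation above is expected behavior for embedding search: cosine similarity compares whole-sentence vectors, and two questions sharing a salient word ("plans") land in nearby regions of embedding space. A toy illustration with hand-made 3-d vectors (the numbers are invented for the example, not real embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Invented "embeddings": a shared dominant first component stands in for
# the shared word "plans"; the smaller components differ by topic.
doc_pricing = [0.9, 0.4, 0.1]
q_pricing   = [0.9, 0.5, 0.0]
q_birthday  = [0.9, 0.0, 0.5]

print(round(cosine_similarity(doc_pricing, q_pricing), 2))
print(round(cosine_similarity(doc_pricing, q_birthday), 2))
```

Because both query vectors share the dominant component with the document, the two scores sit close together; absolute thresholds on raw similarity are therefore fragile, and the gap between scores matters more than either score alone.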
Hi - question: how do I handle a use case where a search in the vector DB does not return any nodes and results in an essentially empty context? I would like to intercept the call and not make the LLM call. Is there an API I can use?
4 comments
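One way to do what the question asks is to run retrieval yourself first and only hand off to the LLM when something came back. The sketch below uses a stub retriever; `answer_or_fallback`, `StubRetriever`, and the `min_nodes` threshold are this example's own constructs, not a LlamaIndex API.

```python
class StubRetriever:
    """Stands in for a real vector-store retriever."""
    def __init__(self, nodes):
        self._nodes = nodes

    def retrieve(self, query):
        return self._nodes

def answer_or_fallback(retriever, synthesize, query, min_nodes=1):
    """Skip the LLM call entirely when retrieval comes back empty."""
    nodes = retriever.retrieve(query)
    if len(nodes) < min_nodes:
        return "Sorry, I couldn't find anything relevant."
    return synthesize(query, nodes)

empty = StubRetriever([])
print(answer_or_fallback(empty, lambda q, n: "llm answer", "pricing?"))
# prints: Sorry, I couldn't find anything relevant.
```

The same guard can also enforce a minimum similarity score before synthesizing, which avoids out-of-context answers from weak matches.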
@Logan M AgentChatResponse has empty sources[] and source_nodes[]. Is this by design?
2 comments
@Logan M question - are there any memory modules available that can be injected into bots, something like a RollingWindow memory? If there is one, can you direct me to the notebook?
10 comments
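A rolling-window memory of the kind the question describes is straightforward to sketch. The class below is hypothetical (the name comes from the question itself, not from an existing LlamaIndex module):

```python
from collections import deque

class RollingWindowMemory:
    """Keep only the most recent `window` chat messages."""
    def __init__(self, window=4):
        self._messages = deque(maxlen=window)  # old messages fall off the front

    def add(self, role, content):
        self._messages.append((role, content))

    def get(self):
        return list(self._messages)

mem = RollingWindowMemory(window=2)
mem.add("user", "hi")
mem.add("assistant", "hello")
mem.add("user", "what are the pricing plans?")
print(mem.get())  # oldest message ("hi") has been evicted
```

A `deque` with `maxlen` does the eviction automatically, so the memory's token footprint stays bounded no matter how long the conversation runs.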
LlamaHub

Running into an issue - No module named 'llama_index.utilities' on this line: DatabaseReader = download_loader('DatabaseReader')
2 comments
@Logan M question - do recency post processors always look at the query first to see if the post processor should be applied? Wondering if there is a way to always apply the post processor to get the recent nodes, or any other way to achieve the same thing?
2 comments
Question - what is the best way to add additional properties to the Weaviate store? Llama has a default schema with 4 properties - text, ref_doc_id, node_info, relationships. What if I want to add additional properties that can be filtered on later? How do I do that?
15 comments
How do I terminate the processing and respond with a custom response if there are no matching nodes in the vector DB? Currently it still makes a call to the LLM and gets some out-of-context response. Are there any hooks / node post processors I can inject?
1 comment
A basic question - how can I summarize over a bunch of text nodes? I have already selected the nodes from Weaviate; all I want is to summarize over them. The router query works fine, but it makes a bunch of LLM calls (showing summaries to the LLM, then picking one node, then another LLM call for the summary, with a few embedding calls in between) that I want to avoid. Is there a llama component I can use?
1 comment