Hi all. I have a question: I am trying to use the unstructured loader
node_parser = UnstructuredElementNodeParser(llm=llm)
The LLM is local, and if I use any other parser there isn't a problem.
But I get this error:
Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
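For context, a minimal sketch of the setup being described (the specific local LLM, model name, and data directory are assumptions; any local llama-index LLM reproduces the same call path):

from llama_index import SimpleDirectoryReader
from llama_index.llms import Ollama  # stand-in for whatever local LLM is in use
from llama_index.node_parser import UnstructuredElementNodeParser

llm = Ollama(model="llama2")  # hypothetical local model
documents = SimpleDirectoryReader("./data").load_data()  # "./data" is a placeholder
node_parser = UnstructuredElementNodeParser(llm=llm)
nodes = node_parser.get_nodes_from_documents(documents)  # this call raises the error above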
Is the node parser actually the one raising the error?
It does not use embeddings
Do you have the full traceback?
Yeah, I can copy and paste it
File "/home/adctme/.local/lib/python3.10/site-packages/llama_index/embeddings/utils.py", line 50, in resolve_embed_model
This is where it comes from, essentially
Right, but can I see further up in the trace? Like, what's leading to that being called?
Traceback (most recent call last):
  File "/home/adctme/Downloads/kmcraft/llamadex/start_un.py", line 107, in <module>
    index = node_parser.get_nodes_from_documents(documents)
  File "/home/adctme/.local/lib/python3.10/site-packages/llama_index/node_parser/interface.py", line 61, in get_nodes_from_documents
    nodes = self._parse_nodes(documents, show_progress=show_progress, **kwargs)
  File "/home/adctme/.local/lib/python3.10/site-packages/llama_index/node_parser/relational/unstructured_element.py", line 294, in _parse_nodes
    nodes = self.get_nodes_from_node(node)
  File "/home/adctme/.local/lib/python3.10/site-packages/llama_index/node_parser/relational/unstructured_element.py", line 279, in get_nodes_from_node
    extract_table_summaries(table_elements, self.llm, self.summary_query_str)
  File "/home/adctme/.local/lib/python3.10/site-packages/llama_index/node_parser/relational/unstructured_element.py", line 126, in extract_table_summaries
    service_context = ServiceContext.from_defaults(llm=llm)
  File "/home/adctme/.local/lib/python3.10/site-packages/llama_index/service_context.py", line 188, in from_defaults
    embed_model = resolve_embed_model(embed_model)
  File "/home/adctme/.local/lib/python3.10/site-packages/llama_index/embeddings/utils.py", line 50, in resolve_embed_model
    raise ValueError(
ValueError:
Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

Consider using embed_model='local'.
Visit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules
Try updating your llama-index package (pip install -U llama-index); this is fixed in a newer release
yep let me check
For other modules that actually use embeddings (like a vector store index), you can set embed_model in the service context
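For example, a minimal sketch (the variable names are placeholders; embed_model="local" downloads a small HuggingFace embedding model instead of calling OpenAI):

from llama_index import ServiceContext, VectorStoreIndex

# "local" resolves to a local HuggingFace embedding model, so no OpenAI key is needed
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")
index = VectorStoreIndex(nodes, service_context=service_context)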
wow that definitely fixed it
Yep, that is what I was doing for the embed_model
I am getting this as a message
Embeddings have been explicitly disabled. Using MockEmbedding.
That is expected, it's fine 🙂
It's using a summary index under the hood, and it specifically disables the embeddings for that small piece because they aren't needed (this is what was causing the key error before)
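In other words, the fixed version does something like the following internally (a sketch, not the library's exact code):

from llama_index import ServiceContext

# Passing embed_model=None explicitly disables embeddings; llama-index
# swaps in a MockEmbedding and logs the message quoted above.
service_context = ServiceContext.from_defaults(llm=llm, embed_model=None)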
Ah, ok. Thank you.
I am probably going to try to store it in a Chroma database
Hopefully I run into no issues 🙂
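For reference, the usual llama-index pattern for that (a sketch with assumed names; the collection name and persistence path are placeholders):

import chromadb
from llama_index import StorageContext, VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

# Persist the parsed nodes into a local Chroma collection
chroma_client = chromadb.PersistentClient(path="./chroma_db")
collection = chroma_client.get_or_create_collection("my_docs")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(nodes, storage_context=storage_context, service_context=service_context)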