
AttributeError: type object 'LLMMetadata' has no attribute 'model_fields'

Receiving the above error when using the NVIDIA LLMs library.
8 comments
Do you have the full traceback? What version of llama-index-core do you have?
Name: llama-index-core
Version: 0.10.68.post1
Summary: Interface between LLMs and your data
Home-page: https://llamaindex.ai
Author: Jerry Liu
Author-email: jerry@llamaindex.ai
License: MIT
Location: c:\Users\fsunavala\AppData\Local\Programs\Python\Python311\Lib\site-packages
Requires: aiohttp, dataclasses-json, deprecated, dirtyjson, fsspec, httpx, nest-asyncio, networkx, nltk, numpy, pandas, pillow, pydantic, PyYAML, requests, SQLAlchemy, tenacity, tiktoken, tqdm, typing-extensions, typing-inspect, wrapt
Required-by: llama-index, llama-index-agent-openai, llama-index-callbacks-wandb, llama-index-cli, llama-index-embeddings-azure-openai, llama-index-embeddings-nvidia, llama-index-embeddings-openai, llama-index-indices-managed-llama-cloud, llama-index-llms-azure-openai, llama-index-llms-nvidia, llama-index-llms-openai, llama-index-llms-openai-like, llama-index-multi-modal-llms-openai, llama-index-program-openai, llama-index-question-gen-openai, llama-index-readers-file, llama-index-readers-huggingface-fs, llama-index-readers-llama-parse, llama-index-readers-web, llama-index-vector-stores-azureaisearch, llama-parse
Note: you may need to restart the kernel to use updated packages.
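As an aside (not from the thread): llama-index-core also exposes a `__version__` attribute, which is handy for confirming which build the running kernel has actually imported after an upgrade:
Plain Text
# Cross-check the version the current kernel has imported
# (a stale kernel can keep an old build loaded after `pip install -U`).
import llama_index.core
print(llama_index.core.__version__)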
Plain Text
# Settings enables global configuration as a singleton object throughout your application.
# Here, it is used to set the LLM, embedding model, and text splitter configurations globally.
from llama_index.core import Settings
from llama_index.llms.nvidia import NVIDIA

# Here we are using mixtral-8x7b-instruct-v0.1 model from API Catalog
Settings.llm = NVIDIA()
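For anyone hitting the same thing: `model_fields` is the pydantic v2 style attribute, so this AttributeError usually means the installed llama-index-llms-nvidia expects a newer llama-index-core (with pydantic v2 models) than the 0.10.x core shown above. A quick, hypothetical check of which API your core's LLMMetadata exposes:
Plain Text
# Diagnostic sketch (not from the thread): does the installed core's
# LLMMetadata expose the pydantic v2 class attribute the NVIDIA LLM
# integration is reaching for?
from llama_index.core.llms import LLMMetadata

print(hasattr(LLMMetadata, "model_fields"))  # False -> core is too old for the integration
print(hasattr(LLMMetadata, "__fields__"))    # pydantic v1 style attribute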
Let me try upgrading to the latest llama-index-core.
This seems to work just fine:
Plain Text
from llama_index.core import Settings
from llama_index.embeddings.nvidia import NVIDIAEmbedding

Settings.embed_model = NVIDIAEmbedding(model="nvidia/nv-embedqa-e5-v5")
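(Not from the thread, but a quick smoke test of the globally configured embed model, assuming NVIDIA_API_KEY is set in the environment:)
Plain Text
# Embed a short string through Settings.embed_model to confirm it is wired up.
vector = Settings.embed_model.get_text_embedding("hello world")
print(len(vector))  # embedding dimensionality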
negative 😦
@dev_advocate would really appreciate any advice here ^
Nvm, got it fixed by upgrading to the latest llama-index-core.
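For anyone landing here with the same error, the fix described above boils down to upgrading the core package alongside the NVIDIA integrations (package names taken from the pip output earlier in the thread), then restarting the kernel:
Plain Text
# Upgrade core together with the NVIDIA integrations so their pydantic
# expectations line up, then restart the kernel.
%pip install -U llama-index-core llama-index-llms-nvidia llama-index-embeddings-nvidia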