
At a glance

The community member is hitting an error when calling VectorStoreIndex.from_documents() from the llama_index library: a NotFoundError with code 404 and the message "Resource not found". They have shared their code and setup, which uses the AzureOpenAI language model and Azure embeddings, and have tried various configurations, including passing the engine and deployment names, but the error persists.

The solution provided by another community member is to define a service context and set it as the global service context: create a ServiceContext object with the llm and embed_model, then register it with set_global_service_context(). This resolved the issue for the community member.

Hey guys
Using openai 1.6.1
While doing
index = VectorStoreIndex.from_documents(documents)

Getting this error
NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

Same error with this as well.
index = VectorStoreIndex.from_documents(documents, llm=llm1, embed_model=embeddings, prompt_helper=prompt_helper)
Could you share your code if possible?
What llm are you using (llama_index.llms) and how did you set it up?
import os
from dotenv import load_dotenv
from PyPDF2 import PdfReader
from llama_index import VectorStoreIndex, SimpleDirectoryReader, PromptHelper
from llama_index.llms import AzureOpenAI
from openai import OpenAI
from llama_index.embeddings import AzureOpenAIEmbeddings
import openai

load_dotenv()

openai.api_key = "YOUR_OPENAI_API_KEY"
openai.api_base = "YOUR_OPENAI_API_BASE"
openai.api_version = "2023-07-01-preview"

llm1 = AzureOpenAI(
    azure_deployment="gpt-35-turbo",
    azure_endpoint=openai.api_base,
    api_key=openai.api_key,
    api_version="2023-07-01-preview",
)

embeddings = AzureOpenAIEmbeddings(
    deployment="text-embedding-ada-002",
    model="text-embedding-ada-002",
    openai_api_base=openai.api_base,
    openai_api_type="azure",
    openai_api_key=openai.api_key,
    openai_api_version="2023-07-01-preview",
)

# Define prompt helper

max_input_size = 3000
num_output = 256
chunk_size_limit = 1000
max_chunk_overlap = 20

prompt_helper = PromptHelper(context_window=500, num_output=num_output, chunk_size_limit=chunk_size_limit)

documents = SimpleDirectoryReader('../data/qna/').load_data()

index = VectorStoreIndex.from_documents(documents)
Do you have an Azure subscription? Are you sure about your deployment name?
I know I never name them the exact model name
Using a FAISS db I have done these things
Now trying it with llama_index
AZURE_KWARGS: dict = {
    "api_key": AZURE_OPENAI_API_KEY,
    "azure_endpoint": AZURE_API_BASE_URL,
    # "api_type": AZURE_API_TYPE,
    "api_version": AZURE_API_VERSION,
    "reuse_client": False,
}

Used it like this and it worked
def gpt_35(self) -> AzureOpenAI:
    """Return the GPT-3.5 model."""
    return AzureOpenAI(
        engine="OneBotGPT351106",
        model="gpt-3.5-turbo-1106",
        temperature=self.GPT_TEMP,
        additional_kwargs=self.GPT_KWARGS,
        **shared_settings.AZURE_KWARGS,
    )
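For reference, a self-contained sketch of that pattern (the ModelFactory class, the GPT_TEMP and GPT_KWARGS defaults, and the placeholder credentials are assumptions; the AzureOpenAI arguments mirror the snippet above):

Python
from llama_index.llms import AzureOpenAI

# Assumed placeholder credentials; substitute your own Azure OpenAI values
AZURE_KWARGS: dict = {
    "api_key": "YOUR_AZURE_OPENAI_API_KEY",
    "azure_endpoint": "https://YOUR-RESOURCE.openai.azure.com/",
    "api_version": "2023-07-01-preview",
    "reuse_client": False,
}

class ModelFactory:  # hypothetical wrapper class implied by `self` above
    GPT_TEMP: float = 0.0   # assumed default temperature
    GPT_KWARGS: dict = {}   # assumed extra generation kwargs

    def gpt_35(self) -> AzureOpenAI:
        """Return the GPT-3.5 model bound to its Azure deployment."""
        return AzureOpenAI(
            engine="OneBotGPT351106",   # the Azure deployment name, not the model name
            model="gpt-3.5-turbo-1106",
            temperature=self.GPT_TEMP,
            additional_kwargs=self.GPT_KWARGS,
            **AZURE_KWARGS,
        )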
You're missing the engine no?
I tried using engine inside AzureOpenAI as well, still facing the error
Try and check if your llm and embed model are working.


Plain Text
print(llm.complete("hi"))

print(embeddings.get_text_embedding("hi"))
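If either call fails, wrapping them makes it obvious which half is misconfigured (a sketch reusing the variable names from the code above):

Python
# A NotFoundError (404) from either call means the corresponding
# Azure deployment name or endpoint is wrong.
try:
    print(llm1.complete("hi"))
except Exception as e:
    print("LLM call failed:", e)

try:
    print(embeddings.get_text_embedding("hi"))
except Exception as e:
    print("Embedding call failed:", e)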
Is engine different from deployment name?
No, it is the same
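In other words, engine should be the name you gave the deployment in Azure (a hypothetical name below), while model is the underlying model behind it:

Python
from llama_index.llms import AzureOpenAI

llm1 = AzureOpenAI(
    engine="my-gpt35-deployment",  # hypothetical: your deployment name from Azure OpenAI Studio
    model="gpt-35-turbo",          # the underlying model behind that deployment
    azure_endpoint=openai.api_base,
    api_key=openai.api_key,
    api_version="2023-07-01-preview",
)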
Where is your service_context defined?
I didn't define it here, but when I tried using a service context it didn't work last time
Try this once:

Plain Text
from llama_index import ServiceContext
from llama_index import set_global_service_context
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model)
set_global_service_context(service_context)
Do this after defining the llm and embedding
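Putting it together with the variable names from the original code, the full flow would look roughly like this (a sketch; ServiceContext and set_global_service_context are the pre-0.10 llama_index API used throughout this thread):

Python
from llama_index import ServiceContext, VectorStoreIndex, set_global_service_context

service_context = ServiceContext.from_defaults(
    llm=llm1,                # the AzureOpenAI llm defined earlier
    embed_model=embeddings,  # the Azure embedding model defined earlier
)
set_global_service_context(service_context)

# from_documents now picks up the Azure models instead of the
# default OpenAI client, which was the call raising the 404 here.
index = VectorStoreIndex.from_documents(documents)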
It's working great man, thanks!