
Updated last year

Subject: Inquiry: Retrieving Index by ID

At a glance

A community member created an index using GPTVectorStoreIndex.from_documents in the llama_index library and asked whether an index can be retrieved by its ID. The responses explain that the index is not stored anywhere unless it is persisted to a directory, so persisting is required in order to retrieve it later. The community member also hit an error when trying to use the "gpt-4o-mini" model, which the installed llama-index version does not recognize as a valid OpenAI model name.

Subject: Inquiry: Retrieving Index by ID in llama_index

Dear LlamaIndex Team,

After creating an index in llama_index using GPTVectorStoreIndex.from_documents, I'm curious if there's a way to retrieve an index by its ID. Could you please provide guidance on this?

Thank you for your assistance.
7 comments
You mean retrieve the saved index?
If I don't store indices locally using llama-index, are they stored on your system? If so, can I retrieve a specific index by its ID?
With LlamaIndex's VectorStoreIndex, the index is not stored anywhere unless you persist it.

You'll have to persist the index, and then you can load it back up by doing the following:
index.storage_context.persist(persist_dir="<persist_dir>")


To load this up, simply do:
Plain Text
from llama_index import StorageContext, load_index_from_storage

# rebuild the storage context from the persist directory
storage_context = StorageContext.from_defaults(persist_dir="<persist_dir>")

# load a single index
# need to specify index_id if multiple indexes are persisted to the same directory
index = load_index_from_storage(storage_context)


Find more here: https://docs.llamaindex.ai/en/latest/module_guides/storing/save_load.html
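Conceptually, a persist directory holding several indices behaves like a keyed store: each index is saved under its index_id, and loading needs that key when more than one index shares the directory. A minimal stand-alone sketch of that idea in plain Python (illustrative only, not the llama_index API):

```python
import json
import tempfile
from pathlib import Path

def persist_indices(persist_dir: str, indices: dict) -> None:
    """Save several indices to one directory, keyed by index_id."""
    path = Path(persist_dir) / "index_store.json"
    path.write_text(json.dumps(indices))

def load_index_by_id(persist_dir: str, index_id: str):
    """Load a single index back from the directory by its index_id."""
    store = json.loads((Path(persist_dir) / "index_store.json").read_text())
    if index_id not in store:
        raise KeyError(f"No index with id {index_id!r} in {persist_dir}")
    return store[index_id]

# Usage: two indices share one persist directory, so loading needs an id.
with tempfile.TemporaryDirectory() as d:
    persist_indices(d, {"vector_index": ["doc1", "doc2"], "list_index": ["doc3"]})
    print(load_index_by_id(d, "vector_index"))  # ['doc1', 'doc2']
```

If only one index is persisted to the directory, load_index_from_storage can find it without an explicit index_id, which is why the id argument is optional in the snippet above.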
Thanks for the response. Just to clarify: if I create an index using LlamaIndex's VectorStoreIndex, do I need to save it to local storage to retrieve it later? If it isn't stored locally, is there any way to retrieve the index from your system?
Yeah, if you want to retrieve it for later use you'll have to store it. It gets stored locally if you persist it. For storing it somewhere else, you might want to look into vector stores such as Chroma, Pinecone, or Weaviate: https://docs.llamaindex.ai/en/stable/module_guides/storing/vector_stores.html#vector-stores
Thanks for the response that helped a lot.
#new_issue

Hello llama-index team. As the user mentioned above, I'm creating an index from my documents (text), storing that index, and then creating an engine from it to ask my queries. In that process I'm using the gpt-3.5-turbo model, but I want to use gpt-4o-mini. However, the call raises the exception shared below.

Code:

from llama_index import ServiceContext
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0.8, model="gpt-4o-mini")
service_context = ServiceContext.from_defaults(llm=llm)



Exception:

Traceback (most recent call last):
File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\src\nlp\gpt.py", line 94, in get_chat_engine_on_stimuli
service_context = ServiceContext.from_defaults(llm=llm)
File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\service_context.py", line 195, in from_defaults
llm_metadata=llm_predictor.metadata,
File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\llm_predictor\base.py", line 153, in metadata
return self._llm.metadata
File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\llms\openai.py", line 222, in metadata
context_window=openai_modelname_to_contextsize(self._get_model_name()),
File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\llms\openai_utils.py", line 195, in openai_modelname_to_contextsize
raise ValueError(
ValueError: Unknown model 'gpt-4o-mini'. Please provide a valid OpenAI model name in: gpt-4, gpt-4-32k, gpt-4-1106-preview, gpt-4-vision-preview, gpt-4-0613, gpt-4-32k-0613, gpt-4-0314, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-3.5-turbo-1106, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, gpt-3.5-turbo-0301, text-davinci-003, text-davinci-002, gpt-3.5-turbo-instruct, text-ada-001, text-babbage-001, text-curie-001, ada, babbage, curie, davinci, gpt-35-turbo-16k, gpt-35-turbo, gpt-35-turbo-1106, gpt-35-turbo-0613, gpt-35-turbo-16k-0613
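The ValueError is raised before any API call is made: the installed llama-index version keeps a hardcoded model-name-to-context-size table (openai_modelname_to_contextsize in openai_utils.py), and gpt-4o-mini was released after that version shipped, so the lookup fails. Upgrading llama-index to a release whose table includes gpt-4o-mini resolves it. A simplified sketch of the failing lookup, with an illustrative (not the library's actual) table:

```python
# Simplified sketch of the lookup that raises the error above.
# The real table lives in llama_index/llms/openai_utils.py; the
# entries and sizes below are illustrative, not the full list.
MODEL_CONTEXT_SIZES = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
}

def modelname_to_contextsize(model_name: str) -> int:
    """Return the context window for a known model, else raise ValueError."""
    if model_name not in MODEL_CONTEXT_SIZES:
        raise ValueError(
            f"Unknown model {model_name!r}. Please provide a valid "
            f"OpenAI model name in: {', '.join(MODEL_CONTEXT_SIZES)}"
        )
    return MODEL_CONTEXT_SIZES[model_name]

print(modelname_to_contextsize("gpt-3.5-turbo"))  # 4096
```

Because the check is purely a dictionary lookup in the client library, any model name released after your installed version will trigger the same error even though the name is valid on OpenAI's side.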