Ajinkya
Joined September 25, 2024
Code I'm using:


To create the index:

from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import OpenAI as llama_OpenAI


def create_index_on_dir(directory_path: str, index_save_loc: str, logger=log, req_id: str = ""):
    try:
        # llm = llama_OpenAI(temperature=0.1, model="gpt-3.5-turbo")
        llm = llama_OpenAI(temperature=1, model="gpt-4")
        service_context = ServiceContext.from_defaults(llm=llm)
        # Load every file in the directory and embed it into a vector index.
        documents = SimpleDirectoryReader(directory_path).load_data()
        index = VectorStoreIndex.from_documents(documents, service_context=service_context)
        # Persist to disk so the index can be reloaded without re-embedding.
        index.storage_context.persist(persist_dir=index_save_loc)
        return index
    except Exception as ex:
        logger.exception(f"Exception occurred while creating the index: {ex} {req_id}")
        return None

To create an engine on the index:
def create_engine_from_index(index, logger=log, req_id: str = "", sys_prompt=system_prompt):
    try:
        # "context" chat mode retrieves relevant chunks from the index and
        # injects them into the system prompt on every turn.
        engine = index.as_chat_engine(chat_mode="context", system_prompt=sys_prompt)
        return engine
    except Exception as ex:
        logger.exception(f"Exception occurred while creating the chat engine from the index: {ex} {req_id}")
        return None

To ask a question or queries:

import time

from openai import RateLimitError  # openai>=1.0; on older clients: from openai.error import RateLimitError


def ask_engine_que(engine, question, logger=log, req_id: str = "", retry=10):
    try:
        response = engine.chat(question)
        return response.response
    except RateLimitError as rex:
        if retry > 0:
            # Rate limits are usually transient: back off, then retry.
            time.sleep(20)
            return ask_engine_que(engine, question, logger, req_id, retry - 1)
        logger.exception(f"RateLimitError occurred while asking the question: {rex} {req_id}")
        return ""
    except Exception as ex:
        logger.exception(f"Exception occurred while asking the question: {ex} {req_id}")
        return ""
19 comments
Hi everyone, I'm looking for a solution to the following problem: I have some transcriptions and I want to create a chat index from them using Claude, save that index somewhere, then create an engine from the saved index, ask the engine questions, and get answers.
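
Not an official answer, but a minimal sketch of that flow with the legacy llama_index API, assuming the Anthropic LLM integration is available and ANTHROPIC_API_KEY is set; the paths and model name are placeholders, and note that embeddings still default to OpenAI unless embed_model is also configured:

from llama_index import (
    ServiceContext, SimpleDirectoryReader, StorageContext,
    VectorStoreIndex, load_index_from_storage,
)
from llama_index.llms import Anthropic

# Build the index from transcription files with Claude as the LLM, then persist it.
llm = Anthropic(model="claude-2")  # placeholder model name
service_context = ServiceContext.from_defaults(llm=llm)
documents = SimpleDirectoryReader("./transcripts").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
index.storage_context.persist(persist_dir="./claude_index")

# Later: reload the saved index and chat with it.
storage_context = StorageContext.from_defaults(persist_dir="./claude_index")
index = load_index_from_storage(storage_context, service_context=service_context)
engine = index.as_chat_engine(chat_mode="context")
print(engine.chat("What did the speaker say about pricing?").response)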
3 comments
Ajinkya · #new_issue

Hello team llama-index, as mentioned above, I'm creating the index from documents (text), storing that index, and then creating an engine from it to ask my queries. In that process I'm using the gpt-3.5-turbo model, but I want to switch to gpt-4o-mini; however, the call raises the exception shared below.

Code:

llm = OpenAI(temperature=0.8, model="gpt-4o-mini")
service_context = ServiceContext.from_defaults(llm=llm)



Exception:


Traceback (most recent call last):
  File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\src\nlp\gpt.py", line 94, in get_chat_engine_on_stimuli
    service_context = ServiceContext.from_defaults(llm=llm)
  File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\service_context.py", line 195, in from_defaults
    llm_metadata=llm_predictor.metadata,
  File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\llm_predictor\base.py", line 153, in metadata
    return self._llm.metadata
  File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\llms\openai.py", line 222, in metadata
    context_window=openai_modelname_to_contextsize(self._get_model_name()),
  File "C:\Users\ajinkya\Desktop\ISP_Qual_Pro\isp_qual_pro_api\env\lib\site-packages\llama_index\llms\openai_utils.py", line 195, in openai_modelname_to_contextsize
    raise ValueError(
ValueError: Unknown model 'gpt-4o-mini'. Please provide a valid OpenAI model name in: gpt-4, gpt-4-32k, gpt-4-1106-preview, gpt-4-vision-preview, gpt-4-0613, gpt-4-32k-0613, gpt-4-0314, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-3.5-turbo-1106, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, gpt-3.5-turbo-0301, text-davinci-003, text-davinci-002, gpt-3.5-turbo-instruct, text-ada-001, text-babbage-001, text-curie-001, ada, babbage, curie, davinci, gpt-35-turbo-16k, gpt-35-turbo, gpt-35-turbo-1106, gpt-35-turbo-0613, gpt-35-turbo-16k-0613
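
The likely cause: the installed llama-index release predates gpt-4o-mini, so its model-to-context-window table doesn't know the name. Upgrading is the usual fix; a minimal sketch of the modern (>= 0.10) API, where Settings replaces ServiceContext, assuming a pip-based environment and a placeholder document path:

# pip install -U llama-index llama-index-llms-openai
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai import OpenAI

# Recent releases know gpt-4o-mini's context window, so this no longer raises.
Settings.llm = OpenAI(temperature=0.8, model="gpt-4o-mini")

documents = SimpleDirectoryReader("./docs").load_data()  # placeholder path
index = VectorStoreIndex.from_documents(documents)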
2 comments
Hello team LlamaIndex, we are getting the below error while enabling the model:

Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
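
Worth noting: despite the 429 status, insufficient_quota is a billing problem, not a transient rate limit, so a sleep-and-retry loop (like ask_engine_que above) will never recover from it. A small sketch, assuming openai>=1.0 where the parsed error code is exposed on the exception, for telling the two cases apart:

from openai import RateLimitError  # openai>=1.0

def should_retry(err: RateLimitError) -> bool:
    # Transient rate limits and quota exhaustion both arrive as HTTP 429,
    # but "insufficient_quota" means the plan is out of credit: fail fast.
    return getattr(err, "code", None) != "insufficient_quota"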
5 comments
Subject: Inquiry: Retrieving Index by ID in llama_index

Dear LlamaIndex Team,

After creating an index in llama_index using GPTVectorStoreIndex.from_documents, I'm curious if there's a way to retrieve an index by its ID. Could you please provide guidance on this?

Thank you for your assistance.
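
A minimal sketch of one way to do this with the legacy llama_index API; the persist directory and index ID here are hypothetical:

from llama_index import StorageContext, load_index_from_storage

# index built via GPTVectorStoreIndex.from_documents(...) as in the question.
# Optionally give it a stable ID before persisting; otherwise an ID is
# auto-generated and visible as index.index_id.
index.set_index_id("my_transcripts")  # hypothetical ID
index.storage_context.persist(persist_dir="./storage")

# Later: load that specific index back by its ID.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, index_id="my_transcripts")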
7 comments