
Zen
Joined December 23, 2024
Hi, when calling a simple piece of code to predict, if there is insufficient quota on the OpenAI account, the code hits the endpoint multiple times, which also triggers a 429 Too Many Requests error. How can I prevent these repeated hits and just get a single exception about insufficient funds? Thanks
```python
# Import paths vary by llama_index version; these match the pre-0.10 layout.
from llama_index.llms import OpenAI
from llama_index import Prompt

llm = OpenAI(temperature=0, model=model_name, api_key=ai_key,
             callback_manager=callback_manager)
response = llm.predict(Prompt(prompt))
```
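A likely culprit is the OpenAI client's built-in retry logic: on a 429 it retries several times before surfacing the error. Recent versions of both the `openai` client and llama_index's `OpenAI` wrapper accept a `max_retries` parameter, so `max_retries=0` should fail fast (verify against your installed version). The fail-fast idea itself, as a self-contained sketch: `call_once`, `InsufficientQuotaError`, and `fake_api` are illustrative names, not part of either library.

```python
class InsufficientQuotaError(Exception):
    """One descriptive exception raised instead of retrying."""

def call_once(api_call, *args, **kwargs):
    """Invoke the API exactly once; never retry on a quota failure."""
    try:
        return api_call(*args, **kwargs)
    except RuntimeError as exc:  # stand-in for openai.RateLimitError
        # With retries disabled we surface one clear exception instead
        # of hammering the endpoint until 429s pile up.
        raise InsufficientQuotaError(
            "OpenAI reports insufficient quota; check plan and billing."
        ) from exc

# Demo with a fake endpoint that always rejects the request.
calls = {"count": 0}

def fake_api(prompt):
    calls["count"] += 1
    raise RuntimeError("429: insufficient_quota")

try:
    call_once(fake_api, "hello")
except InsufficientQuotaError as err:
    print(calls["count"], err)  # the endpoint was hit exactly once
```

The same shape applies to the real client: disable the library's retries, then wrap the single call in your own `try`/`except` for the quota error.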
5 comments
Hi guys,
after updating all the LlamaIndex libs I ran into this warning: "ServiceContext is deprecated. Use llama_index.settings.Settings". After checking the documentation, my impression is that I can pass the parameters I used in the ServiceContext directly into, say, VectorStoreIndex. I didn't find the chunk_size parameter there, though. How can I pass it? Thanks!
2 comments