Hello guys, I used the code below.

# Imports assumed for llama-index 0.7.x; exact paths may differ slightly by version.
from llama_index import get_response_synthesizer
from llama_index.indices.postprocessor import SentenceTransformerRerank
from llama_index.query_engine import RetrieverQueryEngine

rerank = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-2-v2", top_n=3
)
response_synthesizer = get_response_synthesizer(streaming=True)
try:
    vector_retriever = self._vector_index.as_retriever(similarity_top_k=10)
    keyword_retriever = self._keyword_index.as_retriever()
    # CustomRetriever combines the vector and keyword retrievers
    custom_retriever = CustomRetriever(vector_retriever, keyword_retriever)
    custom_query_engine = RetrieverQueryEngine(
        retriever=custom_retriever,
        response_synthesizer=response_synthesizer,
        node_postprocessors=[rerank],
    )
    response = custom_query_engine.query(query)
    answer = generatorize_response(response.response_gen)
except Exception:
    raise  # except block was truncated in the original snippet

But I am getting this error:
tenacity.RetryError: RetryError[<Future at 0x7c4b17a48610 state=finished raised APIRemovedInV1>]

on this line of execution: response = custom_query_engine.query(query).
What LLM are you using? What embedding model? Seems like "API Removed"
If you are using text-davinci-003, that model no longer exists on OpenAI.
llama-index==0.7.21,
ms-marco-MiniLM-L-2-v2,
and I am using gpt-3.5-turbo instead of text-davinci-003.
Oh wow, that's an old version of llama-index. It's probably using a very old version of the openai client, which calls an endpoint that has been removed.
That's like a year old 😅
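For context, APIRemovedInV1 is what the openai Python package raises from version 1.0 onward when code still goes through the old module-level endpoints, and llama-index's tenacity retry wrapper is what surfaces it as the RetryError in the traceback. A minimal sketch of the old vs. new call style (model and prompt here are just placeholders):

import openai
from openai import OpenAI

# Legacy module-level call used by old openai SDKs (and old llama-index);
# with openai >= 1.0 installed, this raises APIRemovedInV1.
openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)

# Client-based call expected by openai >= 1.0 (what newer llama-index uses).
client = OpenAI()
client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)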
Please give me a suggestion to resolve this issue.
Update llama-index so that it uses a newer version of the openai client
You can probably update to v0.8.x or v0.9.x without any breaking changes
v0.10.x will require modifying all your imports
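For anyone hitting this later: upgrading within the same import layout is roughly pip install -U "llama-index<0.10", while moving to 0.10+ means switching to the llama_index.core namespace. A rough sketch of what the import changes look like (paths assume the post-0.10 core/integration split; check the migration guide for your exact setup):

# Pre-0.10 imports (what the snippet above uses)
# from llama_index import get_response_synthesizer
# from llama_index.indices.postprocessor import SentenceTransformerRerank
# from llama_index.query_engine import RetrieverQueryEngine

# 0.10+ equivalents live under llama_index.core
from llama_index.core import get_response_synthesizer
from llama_index.core.postprocessor import SentenceTransformerRerank
from llama_index.core.query_engine import RetrieverQueryEngine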
@Logan M
Thanks! That's helpful.