I'm trying to use the Azure OpenAI API to run the query engine and got the following error:

---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
<ipython-input-41-f3d4578d9508> in <cell line: 1>()
      6         break
      7
----> 8 response = query_engine.query(query_text)
      9 response_ext = Markdown(f"{response.response}").data
     10

33 frames
/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py in _interpret_response_line(self, rbody, rcode, rheaders, stream)
    761         stream_error = stream and "error" in resp.data
    762         if stream_error or not 200 <= rcode < 300:
--> 763             raise self.handle_error_response(
    764                 rbody, rcode, resp.data, rheaders, stream_error=stream_error
    765             )

InvalidRequestError: Resource not found

But the LLM itself is running fine. Any idea how to resolve it?
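Not an official fix, but with the pre-1.0 openai SDK (the `api_requestor.py` in the traceback), "Resource not found" usually means the Azure `api_base`, `api_version`, or deployment name doesn't match your Azure resource. A minimal sketch of the Azure-specific settings, with placeholder names that you'd replace with values from your Azure portal:

```python
import os

def configure_azure_openai():
    """Sketch for the pre-1.0 openai SDK: Azure needs all four settings below.

    The endpoint URL and api_version here are placeholders/assumptions --
    substitute the values from your Azure portal. A wrong api_version or
    deployment name is the usual cause of "Resource not found".
    """
    import openai  # deferred so this file imports even without the package

    openai.api_type = "azure"
    openai.api_base = os.environ.get(
        "AZURE_OPENAI_ENDPOINT", "https://YOUR-RESOURCE.openai.azure.com/"
    )
    openai.api_version = "2023-05-15"  # must be a version your resource supports
    openai.api_key = os.environ.get("AZURE_OPENAI_API_KEY", "")
```

Also make sure the LLM object you hand to llama_index is constructed with your Azure *deployment* name (often an `engine=` or `deployment_name=` parameter, depending on the version), not just the model name.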
Tried it. It's not working well. Can I try using the similarity score instead? Is there anywhere I can find sample code for version 0.6? What's the default similarity_top_k we're using?
I am using the Google Vertex AI API as the LLM and a HF embedding model. After completing the embedding, the files are stored on a local drive. I try to retrieve those files using the following instruction:
index = load_index_from_storage(storage_context=storage_context)
and got the following error:

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
<ipython-input-12-a5b638486930> in <cell line: 4>()
      2 storage_context = StorageContext.from_defaults(persist_dir='/content/drive/MyDrive/data/vectordb')
      3 # load index
----> 4 index = load_index_from_storage(storage_context=storage_context)

7 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for OpenAI
__root__
  Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
How can I set up load_index_from_storage so it doesn't use OpenAI as the default? Thanks.
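One approach that works on the 0.6/0.7-era API, sketched here with placeholder arguments: `load_index_from_storage` falls back to the OpenAI defaults unless you hand it a service context carrying your own LLM and embedding model (the embed model must be the same one used to build the index, or the query-time vectors won't match). Exact import paths vary by version:

```python
def load_index_without_openai(persist_dir, llm, embed_model):
    """Sketch (llama-index 0.6/0.7-era): reload a persisted index without
    the OpenAI defaults.

    `llm` would be your Vertex AI wrapper and `embed_model` the same HF
    embedding model used at build time -- both are assumptions about
    your setup, passed in rather than hard-coded here.
    """
    # deferred imports so this file loads without llama-index installed
    from llama_index import (
        ServiceContext,
        StorageContext,
        load_index_from_storage,
    )

    service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
    storage_context = StorageContext.from_defaults(persist_dir=persist_dir)
    return load_index_from_storage(
        storage_context, service_context=service_context
    )
```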
I have close to 1,000 txt documents that need to be indexed into a vector DB, and the whole process will take a pretty long time. Is there any way I can see the % progress of the indexing?
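Newer llama-index releases accept a `show_progress=True` flag on `from_documents` (check whether your version has it), but a version-independent sketch is to insert documents one at a time through a small wrapper that reports percent complete:

```python
def iter_with_progress(items, report=print):
    """Yield items unchanged, reporting percent complete after each one."""
    total = len(items)
    for i, item in enumerate(items, start=1):
        yield item
        report(f"indexed {i}/{total} ({100 * i // total}%)")


# usage sketch: build an empty index, then insert with progress, e.g.
#   index = GPTVectorStoreIndex.from_documents([])
#   for doc in iter_with_progress(documents):
#       index.insert(doc)
```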
Hi all, I have collected my meeting transcripts from the recordings. Can anyone show me simple code that uses LlamaIndex to go through all the chunks/nodes and produce summary information about the meeting, like the key topics discussed, key decisions made, and the actions to be taken with ownership and due dates, etc.?
Hi Logan, I have a long meeting transcript and want to use LlamaIndex to help me summarize the discussion. Is it possible? Is there any sample code that can be shared? Thanks.
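A sketch of one way to do it on the 0.6/0.7-era API (class names and import paths vary by version): a list index visits every node, so combined with `response_mode="tree_summarize"` nothing in the transcript is skipped, and the per-chunk summaries are merged. The prompt wording is just an example:

```python
def summarize_meeting(transcript_path):
    """Sketch (llama-index 0.6/0.7-era): summarize a long transcript
    chunk by chunk. ListIndex iterates over all nodes, and
    tree_summarize merges the partial summaries into one answer.
    """
    # deferred imports so this file loads without llama-index installed
    from llama_index import ListIndex, SimpleDirectoryReader

    documents = SimpleDirectoryReader(input_files=[transcript_path]).load_data()
    index = ListIndex.from_documents(documents)
    query_engine = index.as_query_engine(response_mode="tree_summarize")
    return query_engine.query(
        "Summarize this meeting: list the key topics discussed, "
        "the key decisions made, and the action items with owner and due date."
    )
```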
Hi, thanks for sharing. Is there any simple, complete Python example that shows how to load and index a PDF and search its content using the OpenAI API?
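Not official sample code, but a minimal sketch, assuming a 0.6/0.7-era llama-index with `OPENAI_API_KEY` set in the environment and `pypdf` installed (SimpleDirectoryReader uses it to parse PDFs; older releases spell the index class `GPTVectorStoreIndex`):

```python
def ask_pdf(pdf_path, question):
    """Sketch: index a single PDF and run one query against it.

    Assumes OPENAI_API_KEY is set; embedding and answering both go
    through the OpenAI API by default.
    """
    # deferred imports so this file loads without llama-index installed
    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader(input_files=[pdf_path]).load_data()
    index = VectorStoreIndex.from_documents(documents)
    query_engine = index.as_query_engine()
    return query_engine.query(question)
```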
I have a QnA bot running on llama-index==0.7.20, and I want to enhance it without upgrading to a higher version. How can I find the docs for that specific version, like the sample code, etc.?
I used to use the following call to get a response:

response = index.query(
    query,
    service_context=service_context,
    response_mode="compact",
    similarity_top_k=1,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.75)],
    text_qa_template=QNA_PROMPT,
    refine_template=REFINE_PROMPT,
)

What will the equivalent code look like with query_engine.query? And what imports will be needed?
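A sketch of the same call migrated to the query-engine API (0.6+), assuming the kwargs kept their names; the postprocessor import path shown is the 0.6/0.7-era one. (If you omit `similarity_top_k`, the default is 2.)

```python
def build_query_engine(index, service_context, qna_prompt, refine_prompt):
    """Sketch: the kwargs that used to go to index.query move onto
    as_query_engine; the query itself then takes only the query string.
    """
    # deferred import so this file loads without llama-index installed
    from llama_index.indices.postprocessor import SimilarityPostprocessor

    return index.as_query_engine(
        service_context=service_context,
        response_mode="compact",
        similarity_top_k=1,
        node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.75)],
        text_qa_template=qna_prompt,
        refine_template=refine_prompt,
    )


# usage sketch:
#   query_engine = build_query_engine(index, service_context, QNA_PROMPT, REFINE_PROMPT)
#   response = query_engine.query(query)
```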
For the PDFs/docs being embedded, is there any way to show the source page numbers, apart from the file name? I saw one demo using LangChain; can LlamaIndex support that feature?
Hi all, I am doing a project with a locally installed Llama 2 behind the following simple API interface:
{
    "input": "how is weather in new york",
    "context": "new york is hot in these days"
}
I input the query, and the context should come from the vector DB. How can I integrate this with the existing LlamaIndex library without changing too much of my code? @WhiteFang_Jr
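One low-churn approach (a sketch, not the official integration): keep LlamaIndex only as the retriever, and build the `{"input", "context"}` payload for your local endpoint yourself. The endpoint URL and the node shape are assumptions about your setup:

```python
def build_payload(query, nodes):
    """Sketch: turn retrieved nodes into the {"input", "context"} payload
    the local Llama 2 endpoint shown above expects.

    `nodes` can be whatever your retriever returns; we assume each item
    either has a get_content() method or can be stringified.
    """
    context = "\n\n".join(
        n.get_content() if hasattr(n, "get_content") else str(n) for n in nodes
    )
    return {"input": query, "context": context}


def ask_local_llama(query, retriever, endpoint="http://localhost:8000/generate"):
    """Sketch: retrieve context with a llama_index retriever, then POST
    to the local API. The endpoint URL is a placeholder.
    """
    import json
    from urllib.request import Request, urlopen

    nodes = retriever.retrieve(query)
    payload = json.dumps(build_payload(query, nodes)).encode()
    req = Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())
```

A retriever can typically be obtained from an existing index with something like `index.as_retriever(similarity_top_k=2)`, so the rest of your code stays unchanged.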
Good morning, everyone. I wanted to share that I've come across some issues with my code this morning. Specifically, I've noticed that GPTSimpleVectorIndex is no longer available and needs to be changed to GPTVectorStoreIndex. Additionally, the way to store and retrieve data has changed, and I need to use the following code: storage_context = StorageContext.from_defaults(persist_dir="./storage"), followed by index = load_index_from_storage(storage_context).
I also noticed that there are now three JSON files created, rather than just one that I can name myself. Unfortunately, I'm still having trouble with my first index.query. Is there a way for me to revert to an older version of llama_index? Also, could someone point me to documentation that outlines the differences and the code changes needed? Thank you!
Is there any way I can uninstall the latest llama_index and revert to an older version that still supports GPTSimpleVectorIndex?