Find answers from the community

autratec
Joined September 25, 2024
I tried to use the Azure OpenAI API to run the query engine and got the following error:
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
<ipython-input-41-f3d4578d9508> in <cell line: 1>()
6 break
7
----> 8 response = query_engine.query(query_text)
9 response_ext = Markdown(f"{response.response}").data
10

33 frames
/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py in _interpret_response_line(self, rbody, rcode, rheaders, stream)
761 stream_error = stream and "error" in resp.data
762 if stream_error or not 200 <= rcode < 300:
--> 763 raise self.handle_error_response(
764 rbody, rcode, resp.data, rheaders, stream_error=stream_error
765 )

InvalidRequestError: Resource not found

But the LLM itself runs fine. Any idea how to resolve it?
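For Azure, the old 0.x openai SDK needs the endpoint, api_version, and the *deployment* name (not just the model name); "Resource not found" usually means one of those does not match the Azure portal. A minimal sketch, assuming llama-index 0.7/0.8-era imports, with placeholder resource and deployment names:

```python
import openai
from llama_index.llms import AzureOpenAI  # 0.7/0.8-era import path

# "Resource not found" usually means one of these three values is wrong:
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # your Azure endpoint
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-AZURE-KEY"

llm = AzureOpenAI(
    engine="my-gpt35-deployment",  # the deployment name from the Azure portal
    model="gpt-35-turbo",
)
```

Note that if the index also embeds via Azure, the embedding model must point at its own Azure deployment as well.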
4 comments
Tried it; it's not working well. Can I try using a similarity score instead? Is there anywhere I can find sample code for version 0.6? What's the default similarity_top_k we are using?
4 comments
I am using the Google Vertex AI API as the LLM and a HuggingFace embedding model. After completing the embedding, the files are stored on a local drive. I tried to retrieve those files using the following instructions:

# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir='/content/drive/MyDrive/data/vectordb')

# load index
index = load_index_from_storage(storage_context=storage_context)

and got the following error:
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
<ipython-input-12-a5b638486930> in <cell line: 4>()
2 storage_context = StorageContext.from_defaults(persist_dir='/content/drive/MyDrive/data/vectordb')
3 # load index
----> 4 index = load_index_from_storage(storage_context=storage_context)

7 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for OpenAI
root
Did not find openai_api_key, please add an environment variable OPENAI_API_KEY which contains it, or pass openai_api_key as a named parameter. (type=value_error)

How can I set up load_index_from_storage so it does not use OpenAI as the default? Thanks.
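In the 0.6–0.8 releases, load_index_from_storage accepts a service_context; building one around your own LLM and embeddings stops the OpenAI fallback. A sketch, assuming vertex_llm is your existing Vertex AI wrapper:

```python
from llama_index import (
    LangchainEmbedding,
    ServiceContext,
    StorageContext,
    load_index_from_storage,
)
from langchain.embeddings import HuggingFaceEmbeddings

# The service context carries the LLM and embed model; without it,
# load_index_from_storage builds a default one that expects OPENAI_API_KEY.
service_context = ServiceContext.from_defaults(
    llm=vertex_llm,  # your Vertex AI LLM wrapper (assumed already defined)
    embed_model=LangchainEmbedding(HuggingFaceEmbeddings()),
)

storage_context = StorageContext.from_defaults(
    persist_dir='/content/drive/MyDrive/data/vectordb'
)
index = load_index_from_storage(
    storage_context=storage_context,
    service_context=service_context,
)
```

The embed model must be the same one used at indexing time, or the stored vectors will not match query vectors.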
6 comments
Did you try Google Cloud Run?
3 comments
Hi all, I am reading the code of img2vec. Does LlamaIndex have a similar feature or an integration for it?
1 comment
I have close to 1,000 txt documents that need to be indexed into a vector DB, and the whole process will take a pretty long time. Is there any way I can see the % progress of the indexing?
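Recent 0.7/0.8 releases accept a show_progress flag on VectorStoreIndex.from_documents, which prints tqdm progress bars while nodes are parsed and embedded; a sketch, assuming the documents live under ./txt_docs:

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./txt_docs").load_data()

# show_progress=True prints a progress bar while nodes are parsed and embedded
index = VectorStoreIndex.from_documents(documents, show_progress=True)
```

If your installed version predates the flag, it is silently the only change needed after upgrading.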
1 comment
Hi all, I have collected my meeting transcripts from the recordings. Can anyone show me simple code that uses LlamaIndex to go through all the chunks/nodes and produce summary information about the meeting, such as the key topics discussed, the key decisions made, and the actions to be taken with ownership and due dates?
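One approach is a list index, which visits every node, combined with the tree_summarize response mode so per-chunk answers are merged upward into one summary. A sketch, assuming the transcripts are plain-text files in ./meeting_transcripts:

```python
from llama_index import SimpleDirectoryReader, ListIndex

documents = SimpleDirectoryReader("./meeting_transcripts").load_data()

# A list index touches every chunk, so nothing in the meeting is skipped
index = ListIndex.from_documents(documents)

query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query(
    "Summarize this meeting: key topics discussed, key decisions made, "
    "and each action item with its owner and due date."
)
print(response.response)
```

tree_summarize builds the answer bottom-up over all retrieved chunks, which suits long transcripts better than the default refine mode.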
8 comments
Hi Logan, I have a long meeting transcript and want to use LlamaIndex to help me summarize the discussion. Is it possible? Is there any sample code that can be shared? Thanks.
2 comments
I have an index file which is 120 MB. I just converted an 1,100-page income tax act into that index file.
15 comments
Hi, thanks for sharing. Is there any simple, complete Python code showing how to load and index a PDF and search its content using the OpenAI API?
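A minimal end-to-end sketch, assuming OPENAI_API_KEY is set and a my_doc.pdf file exists (SimpleDirectoryReader picks a PDF parser from the file extension):

```python
import os
from llama_index import SimpleDirectoryReader, VectorStoreIndex

os.environ["OPENAI_API_KEY"] = "sk-..."  # your key

# load: the reader parses the PDF into Document objects
documents = SimpleDirectoryReader(input_files=["./my_doc.pdf"]).load_data()

# index: chunks the documents and embeds them via the OpenAI API
index = VectorStoreIndex.from_documents(documents)

# search: the query engine retrieves the best chunks and answers
query_engine = index.as_query_engine()
print(query_engine.query("What is this document about?"))
```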
1 comment
I have a Q&A bot running on llama-index==0.7.20, and I want to enhance it without upgrading to a higher version. How can I find the docs related to that specific version, such as the sample code?
2 comments
I used to use the following format to get a response:
response = index.query(
    query,
    service_context=service_context,
    response_mode="compact",
    similarity_top_k=1,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.75)],
    text_qa_template=QNA_PROMPT,
    refine_template=REFINE_PROMPT,
)
What would the code look like using query_engine.query? What imports would be needed?
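In the post-0.6 API the same keyword arguments move onto index.as_query_engine, which forwards them to the retriever and response synthesizer. A sketch, assuming the same index, service_context, and prompt templates as in the old snippet (the SimilarityPostprocessor import path shown is the 0.7/0.8-era one):

```python
from llama_index.indices.postprocessor import SimilarityPostprocessor

query_engine = index.as_query_engine(
    service_context=service_context,
    response_mode="compact",
    similarity_top_k=1,
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.75)],
    text_qa_template=QNA_PROMPT,
    refine_template=REFINE_PROMPT,
)
response = query_engine.query(query)
```

The query engine is reusable, so build it once and call query() per question.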
2 comments
We are doing that. What's the challenge or question you have?
2 comments
What's the latest solution for embedding text and pictures together from a Word or PDF file, and displaying the picture as needed in the reply?
7 comments
I need some suggestions here. I want to do some real-time embedding and semantic search. Can the LlamaIndex framework support it? Is there any sample code to share?
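Yes, the default in-memory vector index supports incremental inserts, so newly arriving text becomes searchable immediately; a minimal sketch:

```python
from llama_index import Document, VectorStoreIndex

index = VectorStoreIndex.from_documents([])  # start with an empty index

# As new text arrives, embed and insert it on the fly...
index.insert(Document(text="new chat message arriving right now"))

# ...and it is immediately available to semantic search
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("what message just arrived?")
```

For higher insert rates, the same insert() call works against external vector-store backends instead of the in-memory store.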
6 comments
For those PDF/doc files being embedded, is there any way to show the source page numbers, apart from the file name? I saw a demo using LangChain; can LlamaIndex support that feature?
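Yes: the PDF loader used by SimpleDirectoryReader records a page label per chunk, and every response keeps the nodes it was built from. A sketch (the exact metadata key names depend on your loader and llama_index version):

```python
response = query_engine.query("...your question...")

# Each source node carries the metadata captured at load time
for source in response.source_nodes:
    print(
        source.node.metadata.get("file_name"),
        "page", source.node.metadata.get("page_label"),
        "score", source.score,
    )
```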
3 comments
I saw the following message. How is this normally solved? Re-index again?

9 frames
/usr/local/lib/python3.10/dist-packages/llama_index/langchain_helpers/text_splitter.py in __init__(self, separator, chunk_size, chunk_overlap, tokenizer, backup_separators, callback_manager)
38 """Initialize with parameters."""
39 if chunk_overlap > chunk_size:
---> 40 raise ValueError(
41 f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
42 f"({chunk_size}), should be smaller."

ValueError: Got a larger chunk overlap (-4) than chunk size (-45), should be smaller.
1 comment
Hi all, I am doing a project with a locally installed Llama 2 and the following simple API interface:

{
    "input": "how is weather in new york",
    "context": "new york is hot in these days"
}

The input query and context should come from the vector DB. How can I integrate it with the existing LlamaIndex library without changing too much of my code? @WhiteFang_Jr
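One low-churn route is to subclass CustomLLM, so llama_index can drive the local endpoint like any other LLM. A sketch using the 0.7/0.8-era interface, where the URL and JSON field names are placeholders for your actual API:

```python
import requests
from llama_index.llms import CompletionResponse, CustomLLM, LLMMetadata
from llama_index.llms.base import llm_completion_callback

class LocalLlama2(CustomLLM):
    """Wrap the local HTTP endpoint so llama_index can call it like any LLM."""

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(model_name="local-llama2")

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs) -> CompletionResponse:
        # The prompt llama_index sends already contains the retrieved context
        r = requests.post(
            "http://localhost:8000/generate",        # placeholder endpoint
            json={"input": prompt, "context": ""},
        )
        return CompletionResponse(text=r.json()["output"])  # placeholder field

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs):
        raise NotImplementedError("streaming not supported by this endpoint")
```

Pass an instance of LocalLlama2 into your ServiceContext and the rest of the existing retrieval code stays unchanged.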
4 comments
Just to add on to it: how can I provide the list of documents the AI used as references to form the answer? Any sample code would be welcome.
36 comments
Azure: am I using GPT-3.5?
5 comments
Hi all, for the lines below:

new_index = VectorStoreIndex.from_documents(
    documents,
    service_context=service_context,
)

can it handle a PDF directly, or do I need to convert it to CSV or TXT first? Thanks
3 comments
There is only one line: embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
1 comment
Any idea when the integration with Google Bard will be available?
1 comment
Good morning, everyone. I wanted to share that I've come across some issues with my code this morning. Specifically, I've noticed that GPTSimpleVectorIndex is no longer available and needs to be changed to GPTVectorStoreIndex. Additionally, the way to store and retrieve data has changed, and I need to use the following code: storage_context = StorageContext.from_defaults(persist_dir="./storage"), followed by index = load_index_from_storage(storage_context).

I also noticed that there are now three json files that are created, rather than just one that I can name myself. Unfortunately, I'm still having trouble with my first index.query. Is there a way for me to revert back to an older version of llama_index? Additionally, would someone be able to point me in the direction of documentation that outlines the differences and changes needed in the code? Thank you!

Is there any way I can uninstall the latest llama_index and revert to an old version which still supports GPTSimpleVectorIndex?
2 comments
When we are using GPT-3.5 as the LLM, how do we set up the system context? Any suggestions for including the previous query and response as short-term memory?
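One option in the 0.7/0.8 releases is a chat engine, which keeps the running conversation for you and condenses each new question with the prior turns; a minimal sketch over an existing index:

```python
# condense_question rewrites each follow-up using the chat history,
# so previous queries and responses act as short-term memory
chat_engine = index.as_chat_engine(chat_mode="condense_question")

print(chat_engine.chat("What deductions does the act allow?"))
print(chat_engine.chat("And what are the limits on those?"))  # resolved from history
```

For a system-style instruction, one common approach in that era is to bake it into the text_qa_template prompt used by the underlying query engine.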
5 comments