Find answers from the community

AnoDy
Offline, last seen 3 months ago
Joined September 25, 2024
I installed 0.8.63.post2, but I still can't use "gpt-4-1106-preview" and "gpt-3.5-turbo-1106" =(((( Why?
3 comments
Why do question marks, spaces, and periods affect which LlamaIndex nodes are chosen?
For example, if I don't put a question mark at the end of the query, the correct text fragment is selected. If there is a question mark, the retrieved fragment doesn't match the question; the same happens with a trailing space, period, etc.
2 comments
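Embedding models are sensitive to surface form, so a trailing "?" or extra whitespace produces a slightly different query vector and can change which nodes rank highest. One workaround is to normalize the query before retrieval. A minimal sketch — `normalize_query` is an illustrative helper, not a LlamaIndex API:

```python
import re

def normalize_query(text: str) -> str:
    """Strip trailing punctuation and collapse whitespace so that
    'What is X?' and 'What is X' embed (nearly) identically."""
    text = text.strip()
    text = re.sub(r"[?!.\s]+$", "", text)  # drop trailing ?, !, ., spaces
    text = re.sub(r"\s+", " ", text)       # collapse internal whitespace
    return text

print(normalize_query("How do I persist an index? "))
# → How do I persist an index
```

The normalized string is what you would then pass to the query engine.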
AnoDy
·

Top_K

Hello, everyone. I'm interested in the "similarity_top_k" argument.

Is it possible to make it dynamic? That is, if only one paragraph is really relevant, similarity_top_k == 1; if several are relevant, similarity_top_k == 2, and so on.
4 comments
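A dynamic top-k can be approximated by over-retrieving and then keeping only results above a similarity cutoff, so the effective k varies per query. A plain-Python sketch of the idea (not the LlamaIndex API; in LlamaIndex the usual route is a larger similarity_top_k combined with a similarity-cutoff node postprocessor — check your version's docs for the exact class):

```python
def dynamic_top_k(scored_nodes, cutoff=0.75, max_k=5):
    """Keep only nodes whose similarity clears a threshold, so the
    effective top_k grows and shrinks per query.
    scored_nodes: list of (node_text, similarity), sorted descending."""
    kept = [n for n, score in scored_nodes[:max_k] if score >= cutoff]
    return kept or [scored_nodes[0][0]]  # always return at least the best hit

hits = [("chunk A", 0.91), ("chunk B", 0.82), ("chunk C", 0.40)]
print(dynamic_top_k(hits))  # → ['chunk A', 'chunk B']
```

Tuning the cutoff takes some experimentation: too high and you lose relevant context, too low and it degenerates back to a fixed top_k.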
Hello, everyone. Could someone explain how GPTVectorStoreIndex searches for suitable "pieces of text" in a document?
And how can I make this selection more accurate? Sometimes it finds irrelevant passages.
44 comments
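At a high level, GPTVectorStoreIndex embeds every chunk into a vector, embeds the query the same way, and returns the chunks whose vectors are most similar to the query vector (typically by cosine similarity). Accuracy usually improves through better chunking, tuning similarity_top_k, or reranking. A toy illustration of the similarity step (the vectors here are made up; real embeddings have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# toy "embeddings": each chunk and the query become vectors
chunks = {"pricing page": [0.9, 0.1], "api docs": [0.2, 0.8]}
query = [0.85, 0.15]

best = max(chunks, key=lambda name: cosine(query, chunks[name]))
print(best)  # → pricing page
```

Irrelevant passages usually mean the query vector happens to sit close to the wrong chunk; rephrasing the query or shrinking chunk size often helps.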
Hi guys! I tried to use gpt-4o but got an error:
Unknown model 'gpt-4o'. Please provide a valid OpenAI model name in: gpt-4, gpt-4-32k, gpt-4-1106-preview, gpt-4-0125-preview, gpt-4-turbo-preview, gpt-4-vision-preview, gpt-4-0613, gpt-4-32k-0613, gpt-4-0314, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, gpt-3.5-turbo-0301, text-davinci-003, text-davinci-002, gpt-3.5-turbo-instruct, text-ada-001, text-babbage-001, text-curie-001, ada, babbage, curie, davinci, gpt-35-turbo-16k, gpt-35-turbo, gpt-35-turbo-0125, gpt-35-turbo-1106, gpt-35-turbo-0613, gpt-35-turbo-16k-0613
1 comment
AnoDy
·

Vector Store

I am currently using this type of vector storage:
index = GPTVectorStoreIndex.from_documents(
    documents, service_context=service_context
)
index.storage_context.persist(persist_dir=index_name)

But when I upload files of about 40 MB, indexing takes a very long time and a response takes up to 2-3 minutes. Can Weaviate solve this problem?
2 comments
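Weaviate can speed up similarity search at scale, but if the minutes are being spent re-building the index on every run, the bigger win is loading the persisted index instead of re-embedding everything. A sketch of the load-or-build pattern — `build_index` and `load_index` are stand-ins for the real LlamaIndex calls (`GPTVectorStoreIndex.from_documents` and `load_index_from_storage` respectively):

```python
import os

def build_index(documents):   # stand-in for GPTVectorStoreIndex.from_documents(...)
    return {"docs": documents}

def load_index(persist_dir):  # stand-in for load_index_from_storage(...)
    return {"loaded_from": persist_dir}

def get_index(documents, persist_dir):
    """Build once, then reuse the persisted index on later runs."""
    if os.path.isdir(persist_dir):
        return load_index(persist_dir)       # fast path: no re-embedding
    index = build_index(documents)           # slow path: embed everything
    os.makedirs(persist_dir, exist_ok=True)  # index.storage_context.persist(...)
    return index
```

On the first run this pays the full embedding cost; every run after that only reads from disk.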
Hello friends. My project uses LlamaIndex version 0.8.66, and processing one request takes about 26 seconds. Is it faster on a newer version of LlamaIndex? 26 seconds is just too long...
26 comments
Hello friends. I'm wondering whether it is possible to set a custom separator character for the text splitter, beyond "chunk_overlap" and "chunk_size". I saw in LlamaIndex cloud that there is a menu where you can set the text splitter, but I don't see it in the documentation. The only way I found is to reassign the "get_nodes_from_node" function from the llama_index/node_parser/node_utils.py file.
2 comments
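The splitting itself is just "cut on a separator, then pack pieces into chunks", so a custom separator can be implemented in a few lines without patching llama_index internals. An illustrative sketch (`split_text` is not a LlamaIndex API; chunk overlap is omitted for brevity):

```python
def split_text(text, separator="\n\n", chunk_size=200):
    """Split on a custom separator, then greedily pack the pieces
    into chunks of at most chunk_size characters."""
    pieces = [p for p in text.split(separator) if p.strip()]
    chunks, current = [], ""
    for piece in pieces:
        if current and len(current) + len(separator) + len(piece) > chunk_size:
            chunks.append(current)   # current chunk is full, start a new one
            current = piece
        else:
            current = current + separator + piece if current else piece
    if current:
        chunks.append(current)
    return chunks
```

Newer LlamaIndex versions also expose splitter classes with separator-style arguments (e.g. a sentence splitter); check your version's node-parser documentation before rolling your own.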
@kapa.ai I want to generate questions for my nodes with the metadata key "questions_this_excerpt_can_answer".
2 comments
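LlamaIndex ships a QuestionsAnsweredExtractor that populates exactly this metadata key (check your version's metadata-extractor docs for the import path). The mechanism is simple enough to sketch with a stubbed LLM call — `fake_llm` is a stand-in, not a real API:

```python
def fake_llm(prompt):  # stand-in for a real LLM call
    return "1. What does this excerpt describe?"

def annotate_nodes(nodes, num_questions=3):
    """Attach 'questions_this_excerpt_can_answer' metadata to each node."""
    for node in nodes:
        prompt = (
            f"Given this excerpt:\n{node['text']}\n\n"
            f"Generate {num_questions} questions this excerpt can answer."
        )
        node["metadata"]["questions_this_excerpt_can_answer"] = fake_llm(prompt)
    return nodes

nodes = [{"text": "LlamaIndex persists indexes to disk.", "metadata": {}}]
annotate_nodes(nodes)
```

At query time the generated questions are embedded along with the node text, which helps question-shaped queries land on the right node.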
Hello guys. I'm facing a problem with cost.

Let me explain in more detail. I have a document that LlamaIndex works with. The document is 200+ pages long.
On every slightest change to the file, the whole document is re-indexed. Is there any way to index only the part of the document that was updated, and refresh only that part of the LlamaIndex index?

Because re-indexing a 200+ page document every time hits my wallet hard...
6 comments
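One way to avoid re-embedding all 200+ pages is to hash each chunk and re-index only the chunks whose hash changed. (LlamaIndex's refresh_ref_docs follows a similar idea at the document level, re-processing only changed documents — check your version's docs.) A sketch of the diffing step:

```python
import hashlib

def chunk_hash(text):
    """Stable fingerprint of a chunk's content."""
    return hashlib.sha256(text.encode()).hexdigest()

def diff_chunks(old_hashes, new_chunks):
    """Return only the chunks that changed (or are new), so only those
    need to be re-embedded. old_hashes maps chunk index -> hash."""
    changed, new_hashes = [], {}
    for i, chunk in enumerate(new_chunks):
        h = chunk_hash(chunk)
        new_hashes[i] = h
        if old_hashes.get(i) != h:
            changed.append((i, chunk))
    return changed, new_hashes
```

Persist `new_hashes` alongside the index; on the next edit, only the entries in `changed` are sent to the embedding API, so a one-paragraph edit costs one embedding call instead of a full re-index.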
AnoDy
·

Ukrainian

Hello, friends. I have created an assistant bot for a service.

It should communicate only in Ukrainian, but from time to time it answers in English. How can I strictly limit it to Ukrainian only?
2 comments
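A hard guarantee is difficult with LLMs, but two layers help: a system prompt that demands Ukrainian (e.g. "Відповідай лише українською мовою."), plus an output check that regenerates when the reply drifts into English. The check below is a crude illustrative heuristic (counting Cyrillic vs Latin letters), not production language detection:

```python
import re

UKRAINIAN_RE = re.compile(r"[а-щьюяєіїґА-ЩЬЮЯЄІЇҐ]")

def looks_ukrainian(text):
    """Crude check: more Cyrillic (Ukrainian) letters than Latin ones."""
    cyr = len(UKRAINIAN_RE.findall(text))
    lat = len(re.findall(r"[A-Za-z]", text))
    return cyr > lat

def enforce_ukrainian(reply, regenerate):
    """If the model slipped into English, ask it to try again."""
    return reply if looks_ukrainian(reply) else regenerate()
```

In practice you would cap the number of regeneration attempts and log the failures; a dedicated language-detection library is more robust than this letter count.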
Hi everyone. As you may already know, there is a new language, Mojo (a slightly "better" version of Python), and Mojo can easily use Python modules.

So, I have a question for the LlamaIndex developers. Today LlamaIndex is rather slow (something like 20 seconds per request). Maybe Mojo could be used to make it faster?
2 comments
Hi. Does anybody know an analogue of LlamaIndex for Rust?
2 comments