
Razerback.LABS
Joined September 25, 2024
Razerback.LABS
Hey everyone, I'm trying to integrate our application, which uses LlamaIndex, with a Qdrant vector store, and I'm running into issues uploading documents. The main problem: when I pass the QdrantVectorStore object as the vector_store argument to VectorStoreIndex.from_vector_store(), the QdrantVectorStore is not considered a VectorStore object but a BasePydanticVectorStore instead. They don't seem to be compatible, and you cannot cast a QdrantVectorStore to a VectorStore or vice versa. So how are we supposed to create an index from an existing QdrantVectorStore? Is there another way to do it? This is the code I'm using. Any ideas?
2 comments
I'm having a very strange issue with the SentenceSplitter node parser. When I use a chunk_overlap of 0 I have no issues, but with any positive value I always get an error that the chunk_overlap is greater than the chunk_size, when it definitely is not larger. For example, a chunk_overlap of 8 is considered larger than a chunk_size of 160, as shown here:

2024-07-04 10:11:54 Traceback (most recent call last):
2024-07-04 10:11:54 File "/app/main.py", line 27, in <module>
2024-07-04 10:11:54 init_settings()
2024-07-04 10:11:54 File "/app/app/settings.py", line 41, in init_settings
2024-07-04 10:11:54 Settings.node_parser = SentenceSplitter(chunk_size=Settings.chunk_size, chunk_overlap=Settings.chunk_overlap)
2024-07-04 10:11:54 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-07-04 10:11:54 File "/usr/local/lib/python3.11/site-packages/llama_index/core/node_parser/text/sentence.py", line 81, in __init__
2024-07-04 10:11:54 raise ValueError(
2024-07-04 10:11:54 ValueError: Got a larger chunk overlap (8) than chunk size (160), should be smaller.

I don't understand how it thinks 8 is larger than 160?
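Not an official diagnosis, but one common way this exact symptom appears is when the settings are read from environment variables or config files and arrive as strings: Python compares strings lexicographically, so "8" > "160" is True, while the error message prints the values without quotes. A minimal sketch of this hypothesis (the SentenceSplitter guard itself is paraphrased, not its actual source):

```python
# Hypothetical reproduction: values read from the environment are
# strings unless explicitly converted.
chunk_size = "160"    # e.g. os.environ.get("CHUNK_SIZE")
chunk_overlap = "8"   # e.g. os.environ.get("CHUNK_OVERLAP")

# String comparison is lexicographic: '8' sorts after '1', so this
# mirrors a "chunk overlap larger than chunk size" check passing wrongly.
print(chunk_overlap > chunk_size)            # True  (lexicographic)
print(int(chunk_overlap) > int(chunk_size))  # False (numeric)
```

If this is the cause, casting with int() when you populate Settings.chunk_size and Settings.chunk_overlap should make the error disappear.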
6 comments
Razerback.LABS
GPT-4o

I was trying to test the GPT-4o model with my application and hit this error:

File "/usr/local/lib/python3.11/site-packages/llama_index/llms/openai/utils.py", line 198, in openai_modelname_to_contextsize
raise ValueError(
ValueError: Unknown model 'gpt-4o'. Please provide a valid OpenAI model name in: gpt-4, gpt-4-32k, gpt-4-1106-preview, gpt-4-0125-preview, gpt-4-turbo-preview, gpt-4-vision-preview, gpt-4-0613, gpt-4-32k-0613, gpt-4-0314, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, gpt-3.5-turbo-0301, text-davinci-003, text-davinci-002, gpt-3.5-turbo-instruct, text-ada-001, text-babbage-001, text-curie-001, ada, babbage, curie, davinci, gpt-35-turbo-16k, gpt-35-turbo, gpt-35-turbo-1106, gpt-35-turbo-0613, gpt-35-turbo-16k-0613


Does LlamaIndex not support GPT-4o, or is this a version error I'm encountering?
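Judging from the traceback, the model name is looked up in a static table, so a library build released before GPT-4o simply has no entry for it. A paraphrased sketch of that behavior (the table below is a hypothetical subset, not the real llama_index source):

```python
# Hypothetical, abbreviated context-size table, mimicking the lookup in
# openai_modelname_to_contextsize.
CONTEXT_SIZES = {
    "gpt-4": 8192,
    "gpt-4-turbo-preview": 128000,
    "gpt-3.5-turbo": 16385,
}

def modelname_to_contextsize(model: str) -> int:
    # Unknown names raise, just like the error in the traceback above.
    if model not in CONTEXT_SIZES:
        raise ValueError(f"Unknown model {model!r}.")
    return CONTEXT_SIZES[model]

try:
    modelname_to_contextsize("gpt-4o")
except ValueError as e:
    print(e)  # Unknown model 'gpt-4o'.
```

So this is most likely a version issue: upgrading (e.g. `pip install -U llama-index llama-index-llms-openai`) should pull in a release whose table includes gpt-4o.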
1 comment
I'm working on a chat system where a document is indexed and the index is persisted to the storage directory; later I want to load the index back from the SimpleIndexStore within the storage directory and query against it. For some reason I can get it to work correctly in a notebook test, but not in my create-llama-app application. Any ideas why this isn't working?
8 comments