alex-feel
Joined September 25, 2024
Hi LlamaIndex Community,

I'm currently working on integrating LlamaIndex with Qdrant for a project. I've encountered an issue where my data doesn't appear in the specified Qdrant collection after running through the ingestion pipeline. I've confirmed that the collection exists and that there are no errors in the logs. The HTTP response from Qdrant is 200, indicating successful communication.

Here's a brief overview of what I'm doing:

  1. I've set up an ingestion pipeline using LlamaIndex, which processes documents and is supposed to index them into Qdrant.
  2. The collection in Qdrant is already created, and the environment variables for QDRANT_API_KEY and QDRANT_URL are correctly set.
  3. The logs show successful processing of documents and chunks being added, but when I check the collection in Qdrant, it's empty.
I've double-checked the collection name and ensured the QdrantVectorStore is correctly configured. There are no errors in the debug logs, and the process finishes with an exit code 0, suggesting that the script completes successfully.

Am I missing something in the setup, or is there a step I've overlooked that's preventing the data from being indexed in Qdrant? Any insights or suggestions would be greatly appreciated.

Thank you in advance for your help!
20 comments
Hi everyone, I'm trying to ensure that I'm using the Gemini Pro 1.0 model in LlamaIndex, particularly because starting May 2nd, Google is implementing charges for using their models, with distinct costs for versions 1.0 and 1.5 as detailed here. The terms also reflect this change. I couldn't find the specific code in LlamaIndex that differentiates between these versions. Could anyone guide me on how to explicitly select Gemini Pro 1.0 to avoid higher charges? Thanks!
3 comments