JoséMiguelVilches
Joined October 11, 2024
Hi guys, how are you? I wanted to ask you something. I've been trying to use 'npx create-llama' to read a large set of CSV files (43 files in total, 7 GB of data, with individual file sizes ranging from 10 MB to 1 GB) and use an LLM to gain some insights.

The problem is that managing all these files has been really difficult. I've tried different approaches:

1. Creating a local database and importing all the file data into tables. It takes forever to load the data, and in the end I can only query one table.
2. Reading the files directly from the folder. This also takes forever during the embedding step, especially when interacting with the backend.
3. Creating a vector index with Pinecone and attempting to read this index from the app.
3.1. I'm currently stuck here. My confusion is about the flow: I created the index (which took forever, and that was just 80 MB, using Python), and now I think I need to bring this index into LlamaCloud so my app can point to LlamaCloud.
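For context on approach 2, this is roughly the kind of batching I mean before the embedding step: splitting each big CSV into fixed-size row chunks so a file never has to be embedded in one go. This is just a minimal sketch using the standard-library csv module; the chunk size and the way I stringify rows are arbitrary placeholders, not what create-llama actually does.

```python
import csv


def csv_to_chunks(path, rows_per_chunk=50):
    """Split a CSV file into text chunks of at most rows_per_chunk rows,
    so a large file can be embedded piece by piece instead of all at once."""
    chunks = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # first row is assumed to be column names
        rows = []
        for row in reader:
            # render each row as "col=value" pairs so the text keeps its schema
            rows.append(", ".join(f"{h}={v}" for h, v in zip(header, row)))
            if len(rows) == rows_per_chunk:
                chunks.append("\n".join(rows))
                rows = []
        if rows:  # flush the final, possibly smaller, chunk
            chunks.append("\n".join(rows))
    return chunks
```

Each returned chunk would then become one document/node to embed, instead of the whole 1 GB file.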
What do you think about point number 3? Does it make sense? Do people usually handle this differently? Could you help me, please? I'm a bit lost.
Thanks!!