I am a bit confused here. I have a client who wants to chat with his data, and that data will be scientific papers. Before LlamaIndex I was taking the long road: OpenAI, embeddings, saving to Supabase, then querying. So I'm not sure how to start with LlamaIndex. I'd appreciate a slightly more advanced example if there is one.
@mahdicodex999 : The pipeline I wrote consumes general content, embeds it, and stores the result in pgvector. The nice thing about that is I can use it directly on Supabase without having to stand up any other services. You can write a single fast query joining your embeddings with your other relational content.
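To make that concrete, here is a minimal sketch of that kind of pipeline in LlamaIndex using its Supabase pgvector integration. Assumptions: the `llama-index` and `llama-index-vector-stores-supabase` packages are installed, `OPENAI_API_KEY` and a `SUPABASE_CONNECTION_STRING` env var are set, your papers live in a `./papers` directory, and the collection name `papers` is just a placeholder. Imports are deferred inside the function so the sketch reads top to bottom as the three steps: load, embed + store, query.

```python
# Sketch: ingest scientific papers, embed them, and store vectors in
# Supabase pgvector via LlamaIndex. Not a drop-in implementation --
# connection string, directory, and collection name are assumptions.

def build_index(papers_dir: str, connection_string: str, collection: str = "papers"):
    """Load documents from papers_dir, embed them, and persist to pgvector."""
    from llama_index.core import (
        SimpleDirectoryReader,
        StorageContext,
        VectorStoreIndex,
    )
    from llama_index.vector_stores.supabase import SupabaseVectorStore

    # 1. Load the papers (PDF/text) from disk.
    documents = SimpleDirectoryReader(papers_dir).load_data()

    # 2. Point LlamaIndex at Supabase's pgvector as the vector store.
    vector_store = SupabaseVectorStore(
        postgres_connection_string=connection_string,
        collection_name=collection,
    )
    storage_context = StorageContext.from_defaults(vector_store=vector_store)

    # 3. Embedding + storage happen here (OpenAI embeddings by default).
    return VectorStoreIndex.from_documents(documents, storage_context=storage_context)


if __name__ == "__main__":
    import os

    index = build_index("./papers", os.environ["SUPABASE_CONNECTION_STRING"])
    # Ask questions over the stored papers.
    query_engine = index.as_query_engine()
    print(query_engine.query("Summarize the main findings of these papers."))
```

Because the embeddings end up in an ordinary Postgres table, you can still join them against your other relational tables with plain SQL, which is exactly the advantage described above.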
@WhiteFang_Jr always has great answers, but I wanted to jump in to say it's pretty straightforward with llama-index. I tend to adopt boring technologies that just work, and I keep things as simple as possible.