Guys, in Jerry's latest Part 3 video, is there still a way to add a table schema and table context, or is that no longer necessary? I'm confused because the examples in the videos change every week; don't we still have to pass some default prompt like in Part 2?
Also, we no longer use node_mapping, VectorIndex, or similarity_top_k. Does Part 3 handle all of that by itself? 🥴 🥴
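Whatever abstraction builds the prompt, the model still ultimately sees the schema plus any context string you provide. A minimal, framework-agnostic sketch (the function and prompt wording below are made up for illustration, not llama_index internals):

```python
# Hypothetical sketch of what a text-to-SQL prompt still needs to contain,
# regardless of which framework abstraction assembles it: the table schema
# plus any extra context you want the LLM to see. All names here are made up.

def build_text_to_sql_prompt(question: str, schema: str, table_context: str = "") -> str:
    """Assemble a minimal text-to-SQL prompt from a schema and optional context."""
    parts = [
        "Given the table schema below, write a SQL query that answers the question.",
        f"Schema:\n{schema}",
    ]
    if table_context:
        # Human-written hints about the table still help the model,
        # even when the framework wires the prompt together for you.
        parts.append(f"Context:\n{table_context}")
    parts.append(f"Question: {question}\nSQL:")
    return "\n\n".join(parts)

prompt = build_text_to_sql_prompt(
    question="How many orders were placed in 2023?",
    schema="orders(id INTEGER, user TEXT, placed_at DATE)",
    table_context="placed_at is stored as ISO-8601 text.",
)
print(prompt)
```

So even if the newer API wires this up for you, the schema and context still have to reach the prompt somewhere.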
What is the main objective of using Text-to-SQL with QueryPipeline to get my answer from the LLM, versus not using QueryPipeline? The AI won't be any smarter; what is the point of using it aside from "visualizing" the links between components?
In one of the latest llama_index videos on text-to-SQL with QueryPipeline, is there any way to keep a chat history so the AI gets more context with every follow-up question, or is it strictly one question at a time for now?
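The pipeline itself doesn't retain memory between runs; what makes follow-ups work is carrying past turns into the next prompt yourself. A minimal, framework-agnostic sketch (the class and method names are hypothetical, not a llama_index API):

```python
# Minimal sketch of chat history for text-to-SQL: keep past (question, answer)
# pairs and prepend them to the next prompt. The model itself never gets
# "smarter"; carried-over history is what gives follow-ups their context.
# All names below are hypothetical.

class SqlChatMemory:
    def __init__(self, max_turns: int = 5):
        self.turns: list[tuple[str, str]] = []
        self.max_turns = max_turns

    def add(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))
        # Keep only the most recent turns so the prompt stays small.
        self.turns = self.turns[-self.max_turns:]

    def render(self, new_question: str) -> str:
        history = "\n".join(f"Q: {q}\nA: {a}" for q, a in self.turns)
        return f"Previous turns:\n{history}\n\nNew question: {new_question}"

memory = SqlChatMemory()
memory.add("How many customers are there?", "There are 42 customers.")
print(memory.render("And how many of them are in Texas?"))
```

The rendered string would then be fed into the text-to-SQL prompt in place of the bare question.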
Is there a way with llama_index and the LLM to return only part of the data depending on which user is logged in? For example, always adding WHERE user = '123' to the SQL clause when that user is logged in. I tried adding it to the context, but the generated SQL query doesn't respect it.
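Prompt context is only advisory, so a security filter like that has to be enforced outside the model, either in the database (a per-user view or row-level security) or in code before execution. A naive, stdlib-only sketch of the code-side approach (function name and column name are illustrative; a real implementation should use a proper SQL parser such as sqlglot):

```python
import re

# The LLM treats prompt context as a suggestion, so a row-level filter like
# WHERE user = '123' must be enforced outside the model. The most robust fix
# is a per-user database view or row-level security; this naive sketch just
# patches the generated SQL in Python. It only handles simple single-table
# SELECT statements.

def enforce_user_filter(sql: str, user_id: str) -> str:
    """AND a user filter into a generated SELECT statement."""
    condition = f"user = '{user_id}'"
    if re.search(r"\bWHERE\b", sql, re.IGNORECASE):
        # AND the filter onto the existing WHERE clause.
        return re.sub(r"(?i)\bWHERE\b", f"WHERE {condition} AND", sql, count=1)
    # No WHERE clause: insert one before GROUP BY / ORDER BY / LIMIT if present.
    m = re.search(r"(?i)\b(GROUP BY|ORDER BY|LIMIT)\b", sql)
    if m:
        i = m.start()
        return f"{sql[:i]}WHERE {condition} {sql[i:]}"
    return f"{sql.rstrip(';')} WHERE {condition}"

print(enforce_user_filter("SELECT * FROM sales", "123"))
# SELECT * FROM sales WHERE user = '123'
```

Never rely on the prompt alone for access control; the model can and will ignore it.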
Hi guys, I have a question about one of the latest videos/docs from llama_index: LLMs for Advanced Question-Answering over Tabular/CSV/SQL Data (Building Advanced RAG, Part 2).
Jerry Liu indexes all his files first (he has tons of CSV files), but when I tried to do the same on my SQL database, even my small table took ages to index (it has 27,000 rows; the other one is about 1 million rows). I know he does that so he can use get_table_context_and_rows_str to give the AI some relevant rows.
How can I do this? Am I supposed to export my SQL table data to CSV first? Is that why it's taking so long?
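For table context you usually don't need to embed every row; a schema description plus a small LIMIT sample is often enough, and it's fast regardless of table size. A sketch using an in-memory SQLite table standing in for the 27k/1M-row tables (function and table names are made up):

```python
import sqlite3

# You don't need to export to CSV or index every row. For table context, a
# small sample usually suffices: pull the column names plus a handful of rows
# with LIMIT and hand that string to the LLM. This uses an in-memory SQLite
# table as a stand-in; names are illustrative.

def sample_table_context(conn: sqlite3.Connection, table: str, n: int = 3) -> str:
    cur = conn.execute(f"SELECT * FROM {table} LIMIT {n}")  # fast: reads only n rows
    cols = [d[0] for d in cur.description]
    rows = cur.fetchall()
    lines = [f"Table {table} columns: {', '.join(cols)}", "Sample rows:"]
    lines += [str(row) for row in rows]
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, customer TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [(1, "alice", 9.5), (2, "bob", 3.0), (3, "carol", 7.2)])
print(sample_table_context(conn, "sales"))
```

Embedding every row only pays off if you want per-row retrieval; for giving the model "a few relevant rows" of flavor, sampling like this scales to the million-row table too.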
My question is: the generated query never seems to do an inner join. If I ask "give me the sales by customer and which state they are in", it won't find the answer; the state is of course in the customer table. Is it just me, or does it never join tables?
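In my experience the model only joins when the schema it sees spells out the relationship between the tables. A hedged sketch of including foreign-key information in the schema description, using SQLite's PRAGMA statements (table and column names are illustrative):

```python
import sqlite3

# Text-to-SQL models rarely join unless the schema they see spells out the
# relationship. This sketch extracts foreign keys with SQLite's PRAGMAs and
# includes them in the schema description, so the model knows sales can be
# joined to customer. Table/column names are illustrative.

def describe_schema_with_fks(conn: sqlite3.Connection, tables: list[str]) -> str:
    lines = []
    for table in tables:
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        lines.append(f"{table}({', '.join(cols)})")
        for fk in conn.execute(f"PRAGMA foreign_key_list({table})"):
            # fk row layout: (id, seq, ref_table, from_col, to_col, ...)
            lines.append(f"  {table}.{fk[3]} references {fk[2]}.{fk[4]}")
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, state TEXT)")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, customer_id INTEGER "
             "REFERENCES customer(id), amount REAL)")
print(describe_schema_with_fks(conn, ["sales", "customer"]))
```

With a line like `sales.customer_id references customer.id` in the prompt, the model has an explicit join path instead of having to guess one.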