Meow Meow
Offline, last seen 3 months ago
Joined September 25, 2024
Meow Meow · Sql

Is it possible to get some help?
3 comments
Is there any way with text-to-SQL, once the AI creates the query, to manipulate it BEFORE running it?

I tried adding a WHERE clause to the context; it works, but not reliably.
3 comments
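One general pattern (independent of any particular library) is to split generation and execution into two steps, with a hook in between where you can print, validate, or rewrite the SQL string before it touches the database. A minimal sketch with sqlite3 and a hypothetical `run_llm_sql` helper:

```python
import sqlite3

def run_llm_sql(conn, generated_sql):
    """Inspect and optionally edit LLM-generated SQL before executing it."""
    sql = generated_sql.strip().rstrip(";")
    # Guardrail: only let plain SELECT statements through.
    if not sql.lower().startswith("select"):
        raise ValueError(f"refusing to run non-SELECT statement: {sql!r}")
    # This is the hook point: print, log, or rewrite `sql` here,
    # before it ever reaches the database.
    print("about to run:", sql)
    return conn.execute(sql).fetchall()

# Toy database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1), (2)")

rows = run_llm_sql(conn, "SELECT x FROM t ORDER BY x;")
print(rows)  # [(1,), (2,)]
```

Since the rewrite happens in your own code rather than in the prompt context, it applies every time, not "sometimes".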
Is there a way to use ReAct but also implement retries and chat history?

One thing I don't like about the docs is that every example is independent; it's very hard to mix anything together.
2 comments
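The two pieces compose fine in principle: keep one rolling message list that every call sees, and wrap the agent call in a retry loop that appends a failure note to that same list. A library-agnostic toy sketch (`react_with_retry` and the fake LLM are hypothetical stand-ins, not llama_index APIs):

```python
def react_with_retry(llm, question, history, max_retries=3):
    """Keep one rolling history list and retry when the agent step fails."""
    history.append({"role": "user", "content": question})
    for attempt in range(max_retries):
        answer = llm(history)          # the agent always sees the full history
        if answer is not None:         # treat None as a failed/invalid step
            history.append({"role": "assistant", "content": answer})
            return answer
        history.append({"role": "system",
                        "content": f"attempt {attempt + 1} failed, try again"})
    raise RuntimeError("agent gave no valid answer after retries")

# Fake LLM that fails once, then succeeds, to exercise the retry path.
calls = {"n": 0}
def fake_llm(messages):
    calls["n"] += 1
    return None if calls["n"] == 1 else "42"

history = []
answer = react_with_retry(fake_llm, "what is 6 * 7?", history)
print(answer)        # 42
print(len(history))  # user msg + failure note + assistant answer = 3
```

Because the history list outlives each call, follow-up questions in the same session automatically carry the earlier turns.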
Guys, in Jerry's latest Part 3 video, is there a way to add table schema and table context, or is that not necessary anymore? I'm confused because the example videos change every week. Don't we have to pass some default prompt like in Part 2?

Also, we no longer use node_mapping, VectorIndex, or similarity_top_k... does Part 3 do all of that by itself? 🥴
11 comments
Why do I still see 0.9.48 when I do pip install llama_index? Isn't 0.10 out?
46 comments
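A stale pin or a cached wheel in the environment is the usual cause: `pip install` without `--upgrade` keeps whatever version is already installed. The PyPI name is `llama-index` (pip normalizes `llama_index` to the same package), and 0.10 restructured it into a core package plus integrations, so an explicit upgrade is the safest path:

```shell
# Upgrade to the latest release instead of keeping the cached/pinned one.
pip install --upgrade llama-index

# Confirm which version actually ended up in the environment.
pip show llama-index
```

If `pip show` still reports an old version, check that the `pip` you are running belongs to the same virtualenv as the Python you import from.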
What is the main benefit of using text-to-SQL with QueryPipeline to get my answer from the LLM, versus not using QueryPipeline? The AI won't be any smarter, so what is the point, aside from "visualizing" the links?
3 comments
In one of the latest llama_index videos on text-to-SQL with QueryPipeline, is there any way to have a chat history so the AI gets a bit smarter with each follow-up question, or is it just one question at a time for now?
8 comments
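Even without library support, you can get follow-up awareness by keeping the previous question/answer turns yourself and prepending them to each new text-to-SQL prompt. A minimal sketch (the `build_prompt` helper and the example turns are hypothetical):

```python
def build_prompt(history, question):
    """Prepend prior Q/A turns so each new question carries context."""
    lines = []
    for q, a in history:
        lines.append(f"Previous question: {q}")
        lines.append(f"Previous answer: {a}")
    lines.append(f"Current question: {question}")
    return "\n".join(lines)

history = []
# First turn: no context yet.
p1 = build_prompt(history, "total sales in 2023?")
history.append(("total sales in 2023?", "$1.2M"))
# Second turn: the model now sees the first exchange, so "broken down
# by month" can be resolved against the earlier question.
p2 = build_prompt(history, "and broken down by month?")
print(p2)
```

The model doesn't actually get "smarter" between turns; it just sees more context, which is usually enough for follow-up questions like "and by month?".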
Is there a way, with llama_index and the LLM, to return only a subset of the data depending on which user is logged in? For example, always adding WHERE user = '123' to the SQL clause when that user is logged in. I tried adding it to the context, but the generated SQL query doesn't respect it.
26 comments
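A security filter like this is safer enforced in code than in the prompt, since the LLM can ignore context instructions. One robust trick is to wrap whatever SQL the model produced in a subquery and apply the user filter outside it, with the user id coming from your auth layer as a bound parameter. A sketch with sqlite3 (`scope_to_user` and the toy schema are hypothetical; the inner query must project the `user` column for this to work):

```python
import sqlite3

def scope_to_user(generated_sql):
    """Wrap LLM-generated SQL so a mandatory per-user filter always applies."""
    inner = generated_sql.strip().rstrip(";")
    # The filter lives outside the LLM's query, so it cannot be "forgotten".
    return f"SELECT * FROM ({inner}) AS q WHERE q.user = ?"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total INTEGER, user TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 10, "123"), (2, 20, "123"), (3, 99, "999")])

llm_sql = "SELECT id, total, user FROM orders;"   # pretend the LLM wrote this
# The user id comes from your login session, never from the model.
rows = conn.execute(scope_to_user(llm_sql), ("123",)).fetchall()
print(rows)  # only user 123's orders
```

For stronger isolation, the same idea can be pushed into the database itself (per-user views, or row-level security on engines that support it), so even a malformed query can't leak other users' rows.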
guys,

qp = QP(
    modules={
        "input": InputComponent(),
        "table_retriever": obj_retriever,
        "table_output_parser": table_parser_component,
        "text2sql_prompt": text2sql_prompt,
        "text2sql_llm": llm,
        "sql_output_parser": sql_parser_component,
        "sql_retriever": sql_retriever,
        "response_synthesis_prompt": response_synthesis_prompt,
        "response_synthesis_llm": llm,
    },
    verbose=True,
)

How do I print the SQL the pipeline generates?
Is there a way to do qp.[somethinghere] to get the SQL? I would like to show it in my Streamlit app.
17 comments
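With verbose=True the pipeline already prints each module's output to stdout, and some versions expose a run-with-intermediates variant (check the docs for your version; I'm not asserting its exact signature here). A library-agnostic fallback that always works is to wrap the SQL-producing step in a small capture helper and stash its output for your Streamlit app. Sketch (the `capture` helper and `text2sql_step` stand-in are hypothetical):

```python
captured = {}

def capture(name, fn):
    """Wrap a pipeline step so its output is stashed for display later."""
    def wrapped(*args, **kwargs):
        out = fn(*args, **kwargs)
        captured[name] = out   # remember the intermediate result
        return out             # pass it through unchanged
    return wrapped

# Stand-in for the text2sql step of the pipeline.
def text2sql_step(question):
    return f"SELECT count(*) FROM city_stats -- for: {question}"

step = capture("sql", text2sql_step)
answer = step("how many cities are there?")
print(captured["sql"])  # ready to render in the Streamlit app
```

Because the wrapper passes the value through unchanged, the rest of the pipeline behaves exactly as before.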
Hi guys, I have a question on one of the latest videos/docs from llama_index, LLMs for Advanced Question-Answering over Tabular/CSV/SQL Data (Building Advanced RAG, Part 2).

Jerry Liu indexes all his files first (he has tons of CSV files). I was trying to do the same on my SQL database: one of my tables only has 27,000 rows (the other is about 1 million rows), and even the small table takes ages to index. I know he does that so he can then use get_table_context_and_rows_str to give the AI some relevant rows.

How can I do it? Am I supposed to save my SQL table data into CSV? Is that why it's taking so long?
4 comments
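If the goal is just "give the AI some relevant rows", you often don't need to embed every row: pulling the schema plus a LIMIT-n sample per table is nearly free, whatever the table size. A sketch with sqlite3 (the `table_context_with_sample` helper and the toy `customer` table are hypothetical):

```python
import sqlite3

def table_context_with_sample(conn, table, n=3):
    """Build a table-context string from the schema plus a few sample rows.

    Fetching LIMIT n rows is cheap even on large tables, unlike embedding
    every row; for most schema questions a handful of examples is enough.
    """
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
    rows = conn.execute(f"SELECT * FROM {table} LIMIT {n}").fetchall()
    lines = [f"Table {table} columns: {', '.join(cols)}", "Sample rows:"]
    lines += [str(r) for r in rows]
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT, state TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                 [(i, f"name{i}", "NY") for i in range(1000)])

ctx = table_context_with_sample(conn, "customer")
print(ctx)
```

Full row-level indexing only pays off when questions need to match specific cell values (e.g. fuzzy entity names); for that case, indexing one column of distinct values is still far cheaper than indexing whole rows.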
Does anyone know?
3 comments
Hey guys, I followed the llama_index examples exactly. I'm using 2 tables, and I add them to the metadata and mapping like this:

sql_database = SQLDatabase(engine, include_tables=all_table_names)
table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = []

for table_name in all_table_names:
    table_schema_objs.append(
        SQLTableSchema(table_name=table_name, context_str=table_contexts[table_name])
    )

obj_index = ObjectIndex.from_objects(
    table_schema_objs,
    table_node_mapping,
    VectorStoreIndex,
)
query_engine = SQLTableRetrieverQueryEngine(
    sql_database,
    obj_index.as_retriever(similarity_top_k=1),
    service_context=service_context,
)

My question is: it seems the generated query will NEVER do an inner join. If I ask "give me the sales by customer and which state they are in", it won't find the answer; the state is of course in the customer table. Is it me, or does it never join tables?
11 comments
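A likely culprit is `similarity_top_k=1` in the retriever above: it hands the LLM exactly one table schema per question, so the model literally cannot write a join; raising it (e.g. `similarity_top_k=2`) puts both schemas in the prompt. A toy keyword "retriever" (hypothetical, just to show the effect of top_k, not the real embedding-based one):

```python
def retrieve_tables(question, schemas, top_k):
    """Toy retriever: score each schema by word overlap, keep the top_k."""
    q_words = set(question.lower().split())
    scored = sorted(schemas,
                    key=lambda s: -len(q_words & set(s.lower().split())))
    return scored[:top_k]

schemas = [
    "sales table: order_id amount customer_id",
    "customer table: customer_id name state",
]
question = "give me the sales by customer and which state are they in"

one = retrieve_tables(question, schemas, top_k=1)
two = retrieve_tables(question, schemas, top_k=2)
print(one)  # a single schema: no join is possible with this prompt
print(two)  # both schemas: the LLM can now write the join
```

The real retriever scores by embedding similarity rather than word overlap, but the limit works the same way: the join can only appear if every table it needs was retrieved.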