df(['col1'] == 'val1' & ['col2'] == 'val2')['col3']  <-- this is what PandasQueryEngine was doing

Pandas Instructions:
df_uk[df_uk['Level 1'] == 'Business Travel'][df_uk['Level 2'] == 'Petrol car']['GHG Conversion Factor 2020']
Pandas Output: There was an error running the output as Python code. Error message: name 'df_uk' is not defined
Traceback (most recent call last):
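For comparison, the chained selections the engine emitted are fragile; the conventional way to express this filter is a single boolean mask over both conditions. A minimal sketch on a toy dataframe (the rows and the factor values below are made-up stand-ins, not values from the actual workbook):

```python
import pandas as pd

# Toy stand-in for the real Excel sheet; the numeric factors are dummies.
df_uk = pd.DataFrame({
    "Level 1": ["Business Travel", "Business Travel", "Freighting goods"],
    "Level 2": ["Petrol car", "Diesel car", "HGV"],
    "GHG Conversion Factor 2020": [0.27, 0.24, 0.95],
})

# Combine both conditions into one mask instead of chaining two [] selections.
mask = (df_uk["Level 1"] == "Business Travel") & (df_uk["Level 2"] == "Petrol car")
factors = df_uk.loc[mask, "GHG Conversion Factor 2020"]
print(factors.iloc[0])
```

Note the parentheses around each comparison: `&` binds tighter than `==` in Python, which is exactly why the unparenthesized version at the top fails.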
# Read one sheet of an Excel workbook and preview a few rows
import os
import pandas as pd

df_uk = pd.read_excel(os.getcwd() + "/data/file.xlsx", sheet_name="data")
df_uk.sample(5)
query_engine = PandasQueryEngine(df=df_uk, verbose=True)
prompts = query_engine.get_prompts()
new_prompt = PromptTemplate(
    """\
You are working with a pandas dataframe in Python.
The name of the dataframe is `df_uk`.
This is the result of `print(df_uk.head())`:
{df_str}

Follow these instructions:
{instruction_str}
Query: {query_str}

Expression: """
).partial_format(
    instruction_str=instruction_str,
    df_str=df_uk.head(5),
)
query_engine.update_prompts({"pandas_prompt": new_prompt})
workflows, in particular RAG with re-ranking and vector DBs. In the linked example, https://docs.llamaindex.ai/en/stable/examples/workflow/rag/ , instead of a VectorStoreIndex I am using a MilvusVectorStore and pass the new_index in def ingest in the RAGWorkflow class.

vector_store = MilvusVectorStore(
    uri="http://localhost:19530",  # set local / docker / k8s
    dim=384,
    collection_name=collection_name,
    overwrite=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
new_index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[23], line 4
      1 # Run a query
      2 result = await w.run(query="What is the conversion factor for Business Travel by Diesel car in miles?", index=uk_index)
----> 4 async for chunk in result.async_response_gen():
      5     print(chunk, end="", flush=True)

AttributeError: 'VectorStoreIndex' object has no attribute 'async_response_gen'

INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"

The provided information does not include details about Business Travel by Diesel car, so a conversion factor for that specific category cannot be determined from the available data.
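The traceback suggests the workflow step here handed back the `VectorStoreIndex` object itself rather than a streaming response, so calling `async_response_gen()` on it fails. A minimal, library-agnostic guard to diagnose this (the `DummyIndex` class is a stand-in for illustration, not part of llama_index):

```python
# Stand-in mimicking what the workflow actually returned: an object
# without a streaming generator attached.
class DummyIndex:
    pass

result = DummyIndex()

# Only stream when the result actually exposes a response generator;
# otherwise report the type so the mismatch is visible.
if hasattr(result, "async_response_gen"):
    outcome = "stream chunks"
else:
    outcome = f"not streamable: {type(result).__name__}"

print(outcome)  # → not streamable: DummyIndex
```

In the working example further down, `w.run(...)` resolves to a streaming result, which is why the same `async for` loop succeeds there.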
<llama_index.core.indices.vector_store.base.VectorStoreIndex at 0x7fd7be5a1d50>
?VectorStoreIndex
w = RAGWorkflow()
result = await w.run(query="How was Llama2 trained?", index=index)
async for chunk in result.async_response_gen():
    print(chunk, end="", flush=True)
VectorStoreIndex.from_documents(..., storage_context=storage_context)