Find answers from the community

Alwiiiiiiiin
Joined September 25, 2024
Hi guys,
Does anyone know how to get rid of this error in Workflow?

"WorkflowTimeoutError: Operation timed out after 10.0 seconds"
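A hedged note for readers hitting the same error: LlamaIndex Workflows default to a 10-second timeout, and the `Workflow` constructor accepts a `timeout` argument (or `None` to disable it). A minimal sketch, assuming a workflow class of your own:

```python
from llama_index.core.workflow import Workflow, StartEvent, StopEvent, step

class MyWorkflow(Workflow):
    @step
    async def run_step(self, ev: StartEvent) -> StopEvent:
        # ... long-running work here ...
        return StopEvent(result="done")

# Raise the default 10 s limit, or pass timeout=None to disable it entirely.
w = MyWorkflow(timeout=120.0, verbose=True)
# result = await w.run()
```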
14 comments
Hi guys,
Are there any multi-agent guidelines for LlamaIndex to run agents in parallel with different LLMs, with memory in the loop?
Something like CrewAI?
I read the CrewAI example with LlamaIndex, but it is too simple and ignores many things.
11 comments
Alwiiiiiiiin · Cuda

Hello,

I fine-tuned Llama3.1-8B for the Text2SQL task, and now I have two questions:

1) How can I load the locally saved finetuned model in LlamaIndex?
2) How can I use quantization to load the model on my GPU?

I tried pushing the model to Hugging Face and loading it with the HuggingFaceLLM class in LlamaIndex (as with the other LLMs); however, the model is not loaded onto the GPU.
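A sketch of one way to handle both questions at once, assuming the fine-tuned model was saved with `save_pretrained` to a local directory (the path below is hypothetical): `HuggingFaceLLM` forwards `model_kwargs` to `from_pretrained`, so a bitsandbytes quantization config can be passed through.

```python
import torch
from transformers import BitsAndBytesConfig
from llama_index.llms.huggingface import HuggingFaceLLM

# 4-bit quantization so the 8B model fits on a smaller GPU
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

llm = HuggingFaceLLM(
    model_name="./finetuned-llama3.1-8b-text2sql",   # hypothetical local path
    tokenizer_name="./finetuned-llama3.1-8b-text2sql",
    device_map="auto",
    model_kwargs={"quantization_config": quant_config},
)
```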
6 comments
Hi guys,

I'm using a ReAct agent with Llama3. Here is my agent:

memory = ChatMemoryBuffer.from_defaults(token_limit=6000)
agent_all = ReActAgent.from_tools(query_engine_tools, memory=memory, verbose=True, max_iteration=20)
response = agent_all.chat(query)

The issue is that when I send a query, the logs show the model reaches the answer correctly after 4 or 5 iterations; however, when it is supposed to print the response, I get this error:
ValueError: Reached max iterations.

Can anyone help me?
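One thing worth checking, offered as a hedged guess: the keyword accepted by `ReActAgent.from_tools` is `max_iterations` (plural), so `max_iteration=20` may be silently ignored, leaving the agent at its default limit. A sketch of the corrected call (`query_engine_tools` and `query` are defined elsewhere in the post):

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(token_limit=6000)
agent_all = ReActAgent.from_tools(
    query_engine_tools,      # defined elsewhere
    memory=memory,
    verbose=True,
    max_iterations=20,       # note the plural spelling
)
response = agent_all.chat(query)  # query defined elsewhere
```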
2 comments
Has anyone here worked on creating a pipeline for chatting with SQL using open-source LLMs and gotten good results?
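For anyone looking for a starting point, a minimal sketch of such a pipeline with LlamaIndex's `NLSQLTableQueryEngine`, assuming an open-source LLM has already been set on `Settings.llm` (the database and table names below are placeholders):

```python
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

engine = create_engine("sqlite:///example.db")              # placeholder database
sql_database = SQLDatabase(engine, include_tables=["my_table"])

query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=["my_table"],
)
# response = query_engine.query("How many rows match ...?")
```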
6 comments
Hi @Logan M

I believe this issue is caused by an incompatibility between the 'google-generativeai' and 'llama-index-llms-gemini' libraries. Have you found a solution for this?

I've tried uninstalling and reinstalling them, but the problem persists. Do you know which specific versions work correctly with LlamaIndex?

Thanks in advance
9 comments
Hi everyone,
Do you know how to handle this error after updating to v0.11?

'cannot import name 'validator' from 'llama_index.core.bridge.pydantic''
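Context that may help: v0.11 moved LlamaIndex to Pydantic v2, where `validator` was replaced by `field_validator`, and the bridge module re-exports the v2 names. A sketch of the updated import, assuming the validator logic can be ported:

```python
# Pydantic v2 style, as re-exported by the v0.11 bridge
from llama_index.core.bridge.pydantic import BaseModel, field_validator

class MyModel(BaseModel):
    name: str

    @field_validator("name")
    @classmethod
    def name_not_empty(cls, v: str) -> str:
        if not v:
            raise ValueError("name must not be empty")
        return v
```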
1 comment
Alwiiiiiiiin

@Logan M

Hi,
I am using PandasQueryEngine, and it works well with open-source LLMs in both a Jupyter notebook and a Python file.
However, I have a very weird issue:
when I install all the same libraries on a new PC, the code works in a Jupyter notebook but does not work as a Python file!
Here is the error when running it as a Python file on the new PC:

df[df['Technique ID'] == 'T1087']
- response: df[df['Technique ID'] == 'T1087']
Traceback (most recent call last):
  File "./.venv/lib/python3.10/site-packages/llama_index/experimental/query_engine/pandas/output_parser.py", line 40, in default_output_processor
    tree = ast.parse(output)
  File "/usr/lib/python3.10/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 2
    - response: df[df['Technique ID'] == 'T1087']
    ^^^^^^^^^^
SyntaxError: illegal target for annotation
Pandas Output: There was an error running the output as Python code. Error message: illegal target for annotation (<unknown>, line 2)

It seems that it forgets to add parentheses after df[]!

Can you please help me in this regard?
15 comments
Hi @Logan M

I have seen there is a chat engine in LlamaIndex; however, in all the examples it is used with indexes. Is there any solution to save the chat history of the results of the NLSQL query engine or the Pandas query engine?
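One pattern that may fit, offered as a hedged pointer: `CondenseQuestionChatEngine` wraps any query engine (including an NLSQL or Pandas one) and keeps the chat history in a memory buffer. A sketch, assuming `sql_query_engine` was built elsewhere:

```python
from llama_index.core.chat_engine import CondenseQuestionChatEngine
from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(token_limit=3000)
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=sql_query_engine,   # e.g. an NLSQLTableQueryEngine built elsewhere
    memory=memory,
    verbose=True,
)
# response = chat_engine.chat("And how does that compare to last year?")
```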
7 comments
@kapa.ai
How can I efficiently read, parse, and index a CSV file so that the column headers are integrated into the index entry for each row? That way, I can ask questions about the content of different columns for each row.
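One common recipe, sketched with only the standard library: render every row as a self-describing text chunk by prefixing each value with its column header, then wrap each chunk in a LlamaIndex `Document` (that last step is left commented out here).

```python
import csv
import io

def rows_to_texts(csv_text: str) -> list:
    """Render each CSV row as 'header: value, header: value, ...' so the
    column names travel with the data into every indexed chunk."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [", ".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

sample = "name,age,city\nAda,36,London\nAlan,41,Manchester\n"
texts = rows_to_texts(sample)
# from llama_index.core import Document, VectorStoreIndex
# docs = [Document(text=t) for t in texts]
# index = VectorStoreIndex.from_documents(docs)
```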
3 comments
Alwiiiiiiiin

@Logan M
Hi, are there any examples of using JsonQueryEngine with open-source LLMs?
I tried a few of them, and they all produce "wrong json path" errors, as those models are not good at generating JSON.
2 comments
Alwiiiiiiiin

Hi guys,
I have a set of JSON files in my directory, and I am trying to index them via:
filename_fn = lambda filename: {"file_name": os.path.splitext(os.path.basename(filename))[0]}
documents = SimpleDirectoryReader("./myfile", file_metadata=filename_fn, filename_as_id=True).load_data(show_progress=True)
Settings.chunk_size = 2048
nodes = Settings.node_parser.get_nodes_from_documents(documents, show_progress=True)


How can I index them so that, for any query I ask, the LLM looks only at the similarity between the query and the metadata to find the best match, and then retrieves all the data related to that entry?

I have tried:

index = VectorStoreIndex(nodes=nodes, show_progress=True)
query_engine = index.as_query_engine(similarity_top_k = )
response = query_engine.query("query")

But it retrieves wrong data.
18 comments
@kapa.ai
I have a set of JSON files. How can I implement an indexing pipeline which can add the name of JSON files as their metadata?
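This is essentially the `file_metadata` pattern from the earlier post, sketched here as a plain function (only the commented-out reader call needs LlamaIndex):

```python
import os

def file_metadata(path: str) -> dict:
    # Attach each file's base name (without the .json extension) as metadata
    return {"file_name": os.path.splitext(os.path.basename(path))[0]}

# from llama_index.core import SimpleDirectoryReader
# documents = SimpleDirectoryReader(
#     "./myfiles", file_metadata=file_metadata, filename_as_id=True
# ).load_data()
```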
9 comments
How can we chat with data obtained from a website via Selenium in LlamaIndex, without storing and indexing that data on disk? I mean chatting on the fly with the data produced by the other part of the code.
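A hedged sketch of one way: wrap the scraped text in an in-memory `Document` and build a `VectorStoreIndex` on the fly (nothing is persisted unless `persist` is called), then chat over it. The variable `page_text_from_selenium` is a placeholder for the string the Selenium code produces.

```python
from llama_index.core import Document, VectorStoreIndex

scraped_text = page_text_from_selenium   # placeholder: string produced by the Selenium code

index = VectorStoreIndex.from_documents([Document(text=scraped_text)])  # in-memory only
chat_engine = index.as_chat_engine(chat_mode="condense_question")
# response = chat_engine.chat("What does the page say about ...?")
```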
5 comments
Has anyone here executed this code successfully?

"https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/tables/chain_of_table/chain_of_table.ipynb"

I am getting the "AttributeError: 'NoneType' object has no attribute 'group'" when running "response = query_engine.query("Who won best Director in the 1972 Academy Awards?")"
4 comments
Have you ever tried any implementation of LlamaIndex with the Google Gemma model?
The following examples give me the "json" error when using Gemma:

https://github.com/run-llama/llama_index/blob/main/docs/examples/query_engine/SQLAutoVectorQueryEngine.ipynb
https://github.com/run-llama/llama_index/blob/main/docs/examples/query_engine/SQLJoinQueryEngine.ipynb

Generally speaking, how can we tailor our custom LLM to work well with all the features of LlamaIndex, as the GPT and Claude models do?

Thanks in advance for your response.
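Part of the answer, offered as a hedged pointer: open models are very sensitive to their chat template, and `HuggingFaceLLM` accepts a `messages_to_prompt` hook so the model sees its expected turn format. A sketch for a Gemma-style template (the exact control tokens below are assumptions to verify against the model card):

```python
from llama_index.llms.huggingface import HuggingFaceLLM

def messages_to_prompt(messages):
    # Gemma-style turn formatting; verify the control tokens on the model card
    prompt = ""
    for m in messages:
        role = "model" if m.role == "assistant" else "user"
        prompt += f"<start_of_turn>{role}\n{m.content}<end_of_turn>\n"
    return prompt + "<start_of_turn>model\n"

llm = HuggingFaceLLM(
    model_name="google/gemma-7b-it",
    tokenizer_name="google/gemma-7b-it",
    device_map="auto",
    messages_to_prompt=messages_to_prompt,
)
```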
10 comments
Alwiiiiiiiin · Json

Hello guys,

I am having trouble running the following example that utilizes LlamaIndex to retrieve data from both an SQL table and Wikipedia:
"https://github.com/run-llama/llama_index/blob/main/docs/examples/query_engine/SQLAutoVectorQueryEngine.ipynb"

This code functions seamlessly with GPT-3.5 and Chromadb. However, I attempted to substitute the GPT model with Gemma as:

from transformers import AutoTokenizer, AutoModelForCausalLM
from llama_index.core import Settings, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.huggingface import HuggingFaceLLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")
Settings.llm = HuggingFaceLLM(model_name="google/gemma-7b-it", tokenizer_name="google/gemma-7b-it", device_map="auto")  # replace "auto" with "cuda" if you have a GPU with enough memory
Settings.tokenizer = tokenizer
Settings.embed_model = "local:BAAI/bge-small-en-v1.5"
and running the query as:

response = query_engine.query(
    "Tell me about the arts and culture of the city with the highest population"
)

but I got the following error:


JSONDecodeError: Extra data: line 7 column 1 (char 210)

During handling of the above exception, another exception occurred:

ScannerError                             Traceback (most recent call last)
File c:\Users\.conda\envs\llamaindex_py3.10\lib\site-packages\llama_index\core\output_parsers\selection.py:84, in SelectionOutputParser.parse(self, output)
...
[
  {
    "choice": 2,
    "reason": "The question is about the arts and culture of a city, so the most relevant choice is (2) Useful for answering semantic questions about different cities."
  }
]
Can someone assist me with this? Is there anything wrong in using Gemma with the created SQL data in the code?
4 comments