Find answers from the community

COdXG
Joined September 25, 2024
Any way to clear LlamaIndex memory, since it is remembering RAG data from the past?
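If "memory" here means the chat history rather than the persisted index, one possible approach is a minimal sketch like the following, assuming the history lives in a ChatMemoryBuffer attached to a chat engine built from an existing index (the token_limit value is illustrative):

from llama_index.core.memory import ChatMemoryBuffer

# assumes `index` is an existing VectorStoreIndex built elsewhere
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)
chat_engine = index.as_chat_engine(memory=memory)

# drop everything the engine has remembered so far
chat_engine.reset()
# or clear the buffer directly
memory.reset()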
5 comments
I have a question: if I am using agents with llama_index and the user input contains grammar or spelling errors, how can I fix that or tell the agent to understand it?
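One workable pattern (a sketch, not a built-in feature; the clean_query helper and the choice of Gemini as the LLM are illustrative) is to have an LLM normalize the input before the agent sees it:

from llama_index.llms.gemini import Gemini

llm = Gemini()  # in practice, reuse whatever LLM the agent already runs on

def clean_query(user_input: str) -> str:
    # ask the LLM to fix spelling/grammar without changing the meaning
    prompt = (
        "Rewrite the following question with correct spelling and grammar, "
        "changing nothing else:\n" + user_input
    )
    return llm.complete(prompt).text

corrected = clean_query("wat is the capitol of canada?")
# response = agent.chat(corrected)  # agent defined elsewhere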
4 comments
COdXG · Csv
I am getting this error and I don't see any way to fix it:
Traceback (most recent call last):
  File "/Users/ahmednadiir/Desktop/agency/app.py", line 11, in <module>
    from quran import quran_engine
  File "/Users/ahmednadiir/Desktop/agency/quran.py", line 19, in <module>
    quran_csv = CSVReader().load_data(file=csv_path);
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ahmednadiir/Desktop/agency/localEvir/lib/python3.11/site-packages/llama_index/readers/file/tabular/base.py", line 48, in load_data
    return [Document(text="\n".join(text_list), metadata=extra_info)]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ahmednadiir/Desktop/agency/localEvir/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Document
extra_info
  none is not an allowed value (type=type_error.none.not_allowed)

code:
import os
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage
from llama_index.readers.file import CSVReader

def get_index(data, index_name):
    index = None
    if not os.path.exists(index_name):
        print("building index", index_name)
        index = VectorStoreIndex.from_documents(data, show_progress=True)
        index.storage_context.persist(persist_dir=index_name)
    else:
        index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=index_name)
        )
    return index

csv_path = os.path.join("data", "quran-english-tafsir.csv")
quran_csv = CSVReader().load_data(file=csv_path)
quran_index = get_index(quran_csv, "quran")
quran_engine = quran_index.as_query_engine()
quran_engine.query()
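The validation error comes from metadata=extra_info being None when the Document is built inside the reader. A possible workaround (an untested sketch) is to pass an explicit empty dict so load_data never hands None to the Document:

from pathlib import Path
from llama_index.readers.file import CSVReader

csv_path = Path("data") / "quran-english-tafsir.csv"
# an explicit dict keeps metadata from being None inside load_data
quran_csv = CSVReader().load_data(file=csv_path, extra_info={})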
9 comments
Does CSVReader exist in llama_index:
from llama_index.core.readers import CSVReader
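For comparison, the code in the CSV question above imports it from the file-readers package rather than from llama_index.core.readers (the package below is the separately installed reader distribution):

# pip install llama-index-readers-file
from llama_index.readers.file import CSVReader

reader = CSVReader()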
7 comments
Does LlamaIndex 🦙 support the Google Gemini API instead of the OpenAI API?
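The Gemini question later in this thread already uses the integration; a minimal sketch (the llama-index-llms-gemini package name and the GOOGLE_API_KEY variable are assumptions) looks like:

# pip install llama-index-llms-gemini
import os
from llama_index.core import Settings
from llama_index.llms.gemini import Gemini

# the key can also be supplied via the GOOGLE_API_KEY environment variable
Settings.llm = Gemini(api_key=os.environ["GOOGLE_API_KEY"])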
12 comments
COdXG · when I do this:
from llama_index.core.llms.ollama import Ollama
I get an error:
Traceback (most recent call last):
  File "/Users/ahmednadiir/Desktop/agency/main.py", line 6, in <module>
    from llama_index.core.llms.ollama import Ollama
ModuleNotFoundError: No module named 'llama_index.core.llms.ollama'

although I ran: pip install llama-index-llms-ollama
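That pip package installs the integration under llama_index.llms.ollama, not llama_index.core.llms.ollama, so the import path is the likely culprit; a sketch (the model name and timeout are illustrative):

# pip install llama-index-llms-ollama
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3", request_timeout=120.0)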
5 comments
I am using the Ollama local model API, which I tested using the OpenAI API request format; the only thing I changed was the baseUrl to point at my API, and it worked. But now that I am using LlamaIndex, I put my API key in the .env file, and when I run the code it says the API key is invalid. I was just wondering if I can change the LlamaIndex baseUrl to my local model API.
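If the local endpoint is an Ollama server, one possible route (a sketch; the base_url value and model name are placeholders) is to point the Ollama LLM class at the local URL instead of relying on an OpenAI key:

from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(
    model="llama3",                     # whichever model is pulled locally
    base_url="http://localhost:11434",  # replace with your local API URL
)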
17 comments
@Logan M why do I keep getting this error:

Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY. Original error: No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys

I am using Google Gemini and I hard-coded the API key in it, and I also put it in the .env file as: API_KEY = "api-is-here"
My code is:
import os
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage
from llama_index.readers.file import PDFReader
from dotenv import load_dotenv
from llama_index.core import Settings
from llama_index.llms.gemini import Gemini

Settings.llm = Gemini()
load_dotenv()

def get_index(data, index_name):
    index = None
    if not os.path.exists(index_name):
        print("building index", index_name)
        index = VectorStoreIndex.from_documents(data, show_progress=True)
        index.storage_context.persist(persist_dir=index_name)
    else:
        index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=index_name)
        )
    return index

pdf_path = os.path.join("data", "the-tafsir-of-the-quran.pdf")
canada_pdf = PDFReader().load_data(file=pdf_path)
canada_index = get_index(canada_pdf, "canada")
canada_engine = canada_index.as_query_engine()
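Setting Settings.llm to Gemini still leaves the embedding model at its OpenAI default, which is what raises the OPENAI_API_KEY message when the index is built. A possible fix (a sketch; GeminiEmbedding comes from the separately installed llama-index-embeddings-gemini package, and API_KEY matches the .env entry above) is to set Settings.embed_model as well, after loading the .env file:

# pip install llama-index-embeddings-gemini
import os
from dotenv import load_dotenv
from llama_index.core import Settings
from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.llms.gemini import Gemini

load_dotenv()  # load the key before constructing the models
api_key = os.getenv("API_KEY")

Settings.llm = Gemini(api_key=api_key)
Settings.embed_model = GeminiEmbedding(api_key=api_key)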
12 comments