Hey, I'm a total beginner in AI (this is my first project). I'd like some help with the code I'm working on.
I want to build something that extracts info from my files using local LLMs like Vicuna or Alpaca, instead of OpenAI.
I know the code should look something like this, for example with PDFs:
from pathlib import Path
import requests
import shutil

from llama_index import GPTVectorStoreIndex, LLMPredictor, ServiceContext, download_loader
- connection to the LLM (OpenAI or a custom/local one). I don't know how to do this part because I can't find any example; every one I find is different. Then:
- "plugins" (loaders) from llamahub.ai to give access to the documents
PDF_NAME = '...'
file = requests.get('web_address_to_pdf/{}'.format(PDF_NAME), stream=True)
with open(PDF_NAME, 'wb') as location:
    shutil.copyfileobj(file.raw, location)
PDFReader = download_loader("PDFReader")
loader = PDFReader()
documents = loader.load_data(file=Path(PDF_NAME))  # load the PDF we just downloaded
# llm_predictor is the part I'm missing: how do I build one from a local model?
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
response = index.as_query_engine().query("prompt")
print(response)
If you know how to solve this, I'd love to hear it! 🙂