Hello, I am testing something and I need a little help.
I have, let's say, 30 users, and every user has his own type of data that he wants to embed. Now I want every user to fine-tune their stuff individually. How can I achieve this? Do I need a separate LLM for every single user, or is it possible to have one LLM while everyone has their own fine-tuning/embeddings and other stuff?
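You generally don't need a separate LLM per user. A common pattern is one shared LLM for answering queries, with only the vector index (the embedded documents) kept separate per user. A minimal sketch of that layout, assuming a hypothetical per-user directory scheme (the `user_{id}/storage` naming is just an illustration, not a LlamaIndex convention):

```python
import tempfile
from pathlib import Path

def user_storage_dir(base: Path, user_id: str) -> Path:
    """Return (and create) the per-user index directory.

    One shared LLM answers queries for everyone; only the
    vector index / embeddings live in a separate folder per user.
    """
    d = base / f"user_{user_id}" / "storage"
    d.mkdir(parents=True, exist_ok=True)
    return d

# Example: 30 users share one LLM, each with an isolated index folder.
base = Path(tempfile.mkdtemp())
dirs = [user_storage_dir(base, str(i)) for i in range(30)]
print(len(set(dirs)))  # 30 distinct per-user storage directories
```

At query time you load only the requesting user's index from their directory and hand it to the one shared LLM, so cost scales with indexes, not with model copies.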
I am trying to install llama-index on pythonanywhere.com hosting. It has Python version 3.10. When I install it on my local machine, which has Python 3.11, it runs well. It also runs well in a Google Colab notebook. But as soon as I try to install or deploy it on the live server, it doesn't work. I tried it on a GoDaddy hosting server as well, but it didn't work at all. llama-index is installed on pythonanywhere.com, but when I import it in my app.py file it gives the errors below.
Can someone tell me if I can also embed the images from my PDFs, and query about those images? It doesn't seem to be working for me, and all the images in the PDF are ignored. Could you also help me find the right way to embed or process the images? Any other technique, library, or anything that could be handy?
@Logan M can you please help with this? I am trying to embed the images and text, but this snippet is not working, even though I copied it from a notebook provided by LlamaIndex.
Hello guys, I have a data source from which I embed the files. But at the moment it re-embeds everything whenever a new file is added to the folder. How can I embed only the new file and push the new data into the current vector store? And one more thing: I am embedding PDF and DOC files, and they have images too. How can I embed the images as well? Is it possible to embed everything in one single vector store, so I can query about the images too?
Can someone help me with a small issue? I want to upload files to a folder that will be embedded. What I want is to embed only the newly added file instead of re-embedding the whole folder.
I am trying to insert a new document into existing embeddings but am unable to do that. Otherwise I need to recreate the embeddings for all the files, which increases the cost.
Can someone guide me on this particular thing?
I have a folder /files, and a folder /storage that keeps the vectorized data. Now if I add a new PDF to the /files folder, how can I embed just the new file into the existing vector store files?
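One library-agnostic way to embed only new files is to keep a small manifest of filenames already processed and diff it against the folder on each run; the actual embed-and-insert call is left as a placeholder, since it depends on your vector store's insert/update API. A sketch (manifest location and naming are assumptions, not a fixed convention):

```python
import json
import tempfile
from pathlib import Path

def new_files(files_dir: Path, manifest_path: Path) -> list[Path]:
    """Return files in files_dir not yet recorded in the manifest,
    then update the manifest so the next run skips them."""
    seen = set()
    if manifest_path.exists():
        seen = set(json.loads(manifest_path.read_text()))
    fresh = [p for p in sorted(files_dir.iterdir())
             if p.is_file() and p.name not in seen]
    manifest_path.write_text(json.dumps(sorted(seen | {p.name for p in fresh})))
    return fresh

# Demo: the manifest lives outside /files so it is never picked up itself.
files_dir = Path(tempfile.mkdtemp())
manifest = Path(tempfile.mkdtemp()) / "manifest.json"
(files_dir / "a.pdf").touch()
(files_dir / "b.pdf").touch()
first = new_files(files_dir, manifest)   # both files are new on the first run
(files_dir / "c.pdf").touch()
second = new_files(files_dir, manifest)  # only the newly added file
print([p.name for p in second])
```

For each path returned, load just that document and call your index's insert method, rather than rebuilding the whole store; only the new file incurs embedding cost.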
Hello mates, just doing some brainstorming and trying some new stuff on the learning side. I am wondering if there is a way we can inherit embeddings.
Say we have a folder "Main_embedding" that contains the index saved on disk, and I am using this embedding to query questions.
Now I have a user, User_1. He has a folder "Private_embedding", his own chatbot, and his own embedded documents.
Same for "User_2".
Now, can we inherit the embeddings from "Main_embeddings" along with the user-specific embeddings for their chatbots?
Like User_1 gets "Main_embeddings + /user1/Private_embeddings" and User_2 gets "Main_embeddings + /user2/Private_embeddings".
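Rather than literally making one index inherit from another, a simpler pattern is to keep the shared and private indexes separate and merge results at query time: retrieve from both, score everything, and take the overall top hits. A toy sketch with cosine similarity over two in-memory "stores" (the store contents and 2-dimensional vectors are invented purely for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def query_stores(query_vec, stores, top_k=2):
    """Score every (text, vector) entry across all stores and
    return the top_k texts overall -- shared + private combined."""
    scored = [(cosine(query_vec, vec), text)
              for store in stores for text, vec in store]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

main_store = [("company handbook", [1.0, 0.0]), ("shared FAQ", [0.9, 0.1])]
user1_private = [("user_1 contract", [0.0, 1.0])]

# User_1 queries Main_embeddings + their Private_embeddings together;
# User_2 would pass their own private store instead.
hits = query_stores([0.1, 1.0], [main_store, user1_private])
print(hits)
```

This gives each user the effect of "Main_embeddings + Private_embeddings" without duplicating the shared index per user; the same merge-at-retrieval idea applies when the stores are real persisted vector indexes.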
Hello mates, can you tell me something about PDF vectorization?
First: is it possible to directly vectorize a PDF, instead of pulling out the text first and then doing the vectorization?
Second: if I have a folder /data containing a PDF that needs to be vectorized, and I then add a second file to the same folder, do I need to repeat the process for both files, or can the second file be added to the current vector store that is saved locally on disk?
Or is it possible to create multiple vector stores and then use them all together?
I was trying some stuff with llama-index, and it was working well, but then I upgraded the libraries to the latest versions. My Python version is 3.10, but llama-index stopped working and started throwing a bulk of errors in the console. Then I downgraded to the following:
langchain==0.0.174 llama-index==0.6.9
Now it is working well again.
Maybe this solution can help a few people facing the same issue.
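To make the downgrade stick across deployments, the pins from the message above can go into a requirements.txt (versions taken directly from the message; verify they are the right pair for your own setup):

```
langchain==0.0.174
llama-index==0.6.9
```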