Find answers from the community

Deleted User
Offline, last seen 3 months ago
Joined September 25, 2024
Hello everyone! I am trying to reuse a locally persisted store and upload it into a running Weaviate instance. What I currently did was to load the DocumentStore and use VectorStoreIndex.from_documents(), but that recreates the embeddings instead of reusing the ones from local storage. Is there a way to push the previously created local store directly into Weaviate? Context: we commit small persisted stores into version control and want to push them into a running vector store provider (like Weaviate). Any ideas? Thanks!
4 comments
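One way to approach this, sketched under assumptions rather than as a verified recipe: load the persisted StorageContext, re-attach each node's stored embedding from the default SimpleVectorStore, and add the nodes to a WeaviateVectorStore. The persist directory, Weaviate URL, and index_name are placeholders, and details like the simple store's get() and the add() signature have changed across llama_index releases.

import weaviate
from llama_index import StorageContext
from llama_index.vector_stores import WeaviateVectorStore

# Load the previously persisted docstore + simple vector store (no re-embedding happens here).
storage_context = StorageContext.from_defaults(persist_dir="./storage")
docstore = storage_context.docstore
local_vector_store = storage_context.vector_store

# Re-attach the persisted embeddings to their nodes.
nodes = []
for node_id, node in docstore.docs.items():
    node.embedding = local_vector_store.get(node_id)  # stored embedding for this node
    nodes.append(node)

# Push the nodes, embeddings included, into the running Weaviate instance.
client = weaviate.Client("http://localhost:8080")
weaviate_store = WeaviateVectorStore(weaviate_client=client, index_name="LlamaIndex")
weaviate_store.add(nodes)
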
Does anyone know how to do a similarity search between embeddings? Let's say I have this:
embedding_model = HuggingFaceEmbedding(model_name="embedder")
index = VectorStoreIndex.from_documents(documents, embed_model=embedding_model, storage_context=storage_context)

I want to use the same embedding_model to encode a query and find the most similar document in my vector store
3 comments
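A minimal sketch, assuming the index and embedding_model from the snippet above (the query string is made up): the retriever embeds the query with the index's embed_model, i.e. the same HuggingFace model, and returns the closest nodes with scores.

retriever = index.as_retriever(similarity_top_k=3)
results = retriever.retrieve("my search query")
for hit in results:
    print(hit.score, hit.node.get_content()[:200])

# If you want the raw vector, the same model can also embed the query directly:
query_embedding = embedding_model.get_query_embedding("my search query")
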
Hi everyone. I would like to build a question-answering app that retrieves embeddings from a vector store and uses them as context in the prompt to answer a question. In this app I am not using OPENAI_API_KEY, as my LLM is from the Hugging Face Hub. Specifically, I created my LLM instance ("llm") with HuggingFacePipeline and provided it to the following:
Plain Text
llm_predictor = LLMPredictor(llm=llm)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context) 

However, GPTVectorStoreIndex throws
Plain Text
AuthenticationError: No API key provided.

Could anyone help me implement a vector store index without OPENAI_API_KEY?? (Or is a vector store index necessary to build an app if I am going to have an external vector store like FAISS or Pinecone?) Thank you in advance 🙏
8 comments
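The AuthenticationError usually comes from the embedding model, not the LLM: ServiceContext falls back to OpenAI embeddings unless you pass an embed_model. A sketch in the legacy ServiceContext API used above (the model name is a placeholder; newer releases configure this through Settings instead):

from llama_index import GPTVectorStoreIndex, ServiceContext, LLMPredictor
from llama_index.embeddings import HuggingFaceEmbedding

llm_predictor = LLMPredictor(llm=llm)  # the HuggingFacePipeline LLM from the question
embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")

service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    embed_model=embed_model,  # overrides the OpenAI embedding default, so no API key is needed
)
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)

An external vector store like FAISS or Pinecone does not replace the index; it just becomes the backing store that the vector store index writes to through a StorageContext.
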
Hey guys, I am already one hour into creating embeddings with Pinecone. How long should it take?
7 comments
A mod on Ollama just banned me for 1 week for asking a simple question, because he didn't know the answer lol. Who is this @endollama:0.05b_instruct? The worst thing about Discord is mods with ego: if they ever feel stupid about anything (his own fault), someone else gets it.
1 comment
It is not perfect, and I'd like it if you could show me what I could do to make it better.
2 comments
Can LlamaIndex be used in a large production environment where users can upload their files and "chat with their data"? What would the average cost be? I am planning to use it alongside unstructured.io.
13 comments
Deleted User · Excel
Does anyone know how I can make inference better? I'm using an XLSX file and I would like it to answer the questions more accurately. I'm receiving a lot of wrong results, probably because gpt-3.5 is not very good at math. 😢
2 comments
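One common workaround, sketched with an assumed file name and the legacy import path: instead of embedding spreadsheet rows and asking gpt-3.5 to do arithmetic over retrieved text, load the XLSX into a DataFrame and let the LLM write pandas code against it, so pandas does the math.

import pandas as pd
from llama_index.query_engine import PandasQueryEngine

df = pd.read_excel("data.xlsx")  # hypothetical spreadsheet
query_engine = PandasQueryEngine(df=df, verbose=True)  # LLM generates pandas expressions, pandas computes the result
response = query_engine.query("What is the total revenue per region?")
print(response)
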
Hello everyone, I'm wondering if someone could clarify whether we need an OpenAI key to convert a PDF file to a vector file?
2 comments
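You don't, as long as you swap in a local embedding model; the OpenAI key requirement only comes from the default OpenAI embeddings. A sketch with placeholder paths and model name, using the legacy ServiceContext API:

from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.embeddings import HuggingFaceEmbedding

documents = SimpleDirectoryReader("./pdfs").load_data()  # parses the PDF into Documents
embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")
service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=None)  # no OpenAI calls

index = VectorStoreIndex.from_documents(documents, service_context=service_context)
index.storage_context.persist(persist_dir="./storage")  # writes the "vector file" to disk
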
I liked this talk about LlamaIndex 🤩. Although it is not shared here in the Discord nor in the repo, it gives a really good overview of some of LlamaIndex's capabilities.
https://www.youtube.com/watch?v=YN6n5hvmsx8
1 comment
Hello there, LlamaIndex Discord!!
What are your approaches to answering "enumerate questions" from documents, where each example or part of the answer sits in a different part of the document? How can the retriever and the LLM collaborate to reach the final answer?

Agents can already answer multi-hop questions, yet they fall short on "enumerate questions".
1 comment
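One approach, sketched with an assumed index and made-up numbers: retrieve a much wider set of chunks so the scattered items all make it into context, and use a synthesizer that aggregates over every retrieved chunk instead of just the top hit.

query_engine = index.as_query_engine(
    similarity_top_k=10,               # cast a wide net, since the items are spread across the document
    response_mode="tree_summarize",    # synthesize the answer over all retrieved chunks
)
response = query_engine.query("Enumerate all the examples of X mentioned in the document.")
print(response)

If the items live in clearly separable sections, another option is to split the question into sub-questions (one per section or sub-topic) and merge the answers, for example with a sub-question query engine built on per-section tools.
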