Find answers from the community

walid
Offline, last seen 2 months ago
Joined September 25, 2024
Does anyone have examples of using Pinecone to store data once and then answering queries later without keeping a reference to the created index?

I've seen this example: https://github.com/jerryjliu/gpt_index/blob/main/examples/vector_indices/PineconeIndexDemo.ipynb, but it uses an index created in a previous step.
How do you query Pinecone's index directly without loading the documents again?
1 comment
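One common pattern is a sketch along these lines, assuming llama-index 0.6-style APIs (`PineconeVectorStore`, `StorageContext`); the index name `"quickstart"` and the environment are placeholders. The idea is to connect to the already-populated Pinecone index and wrap it without passing any documents:

```python
# Sketch: query an existing Pinecone index without re-loading documents.
# Assumes llama-index ~0.6 and pinecone-client; "quickstart" is a placeholder name.
import os

import pinecone
from llama_index import GPTVectorStoreIndex, StorageContext
from llama_index.vector_stores import PineconeVectorStore

pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-west1-gcp")
pinecone_index = pinecone.Index("quickstart")  # populated in an earlier run

vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build the index over the existing vector store; note the empty document list.
index = GPTVectorStoreIndex([], storage_context=storage_context)
response = index.as_query_engine().query("What did the author do growing up?")
```

Since all embeddings already live in Pinecone, nothing is re-embedded here; the empty list just tells LlamaIndex not to ingest anything new. Worth double-checking the exact class names against the docs for your version.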
What's the benefit of using LlamaIndex over comparing vectors directly with cosine similarity?
11 comments
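For context on what "comparing vectors with cosine similarity" means without any framework, here is a minimal pure-Python sketch; the embeddings are made-up toy vectors (real ones would come from an embedding model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three chunks and a query (placeholders, not real model output).
chunks = {
    "chunk-1": [0.9, 0.1, 0.0],
    "chunk-2": [0.1, 0.8, 0.3],
    "chunk-3": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]

# Retrieval by hand: rank chunks by similarity to the query.
ranked = sorted(chunks, key=lambda k: cosine_similarity(chunks[k], query), reverse=True)
print(ranked[0])  # -> chunk-1, the vector closest in direction to the query
```

Roughly, a framework's value is everything *around* this similarity step — chunking documents, producing and storing embeddings, and assembling retrieved chunks into an LLM prompt — rather than the similarity computation itself.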
Any update on this https://github.com/jerryjliu/gpt_index/issues/271 ?
I'm trying to initialize a GPTPineconeIndex to use it for querying only (without loading any documents or recreating the index). However, the PineconeIndexStruct class is no longer available. Is there an alternative that avoids re-loading the documents?
1 comment
Hey there, I am migrating from llama-index 0.4 to 0.6, and I'm having trouble translating the syntax from the old version to the new one.

How would one write the following in the newer versions?


Plain Text
# Indexing
# this should directly index documents into Elasticsearch
client = ElasticsearchVectorClient()
GPTOpensearchIndex(documents, client=client, chunk_size_limit=1024)

# Querying
# this should ask the query 'q' on the Elasticsearch index, using the qa & refinement templates provided.
# and with the LLM Predictor provided
client = ElasticsearchVectorClient()
index = GPTOpensearchIndex([], client=client)
llm_predictor = LLMPredictor(llm=ChatOpenAI(
    temperature=0, model_name="gpt-3.5-turbo"))

similarity_top_k = 1
index.query(q, similarity_top_k=similarity_top_k,
               llm_predictor=llm_predictor,
               text_qa_template=CHAT_QA_PROMPT,
               refine_template=CHAT_REFINE_PROMPT)
13 comments
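For reference, a sketch of roughly how the snippet above maps onto the 0.6-style API. This assumes 0.6's vector-store names (`OpensearchVectorStore`, `OpensearchVectorClient`) and import paths, which are worth verifying against the docs for your exact version; the endpoint, index name, and embedding dimension are placeholders, and `documents`, `q`, `CHAT_QA_PROMPT`, and `CHAT_REFINE_PROMPT` are the same objects as in the original snippet:

```python
from langchain.chat_models import ChatOpenAI
from llama_index import (GPTVectorStoreIndex, LLMPredictor, ServiceContext,
                         StorageContext)
from llama_index.vector_stores import OpensearchVectorClient, OpensearchVectorStore

# Indexing: the client is now wrapped in a vector store + storage context,
# and chunk_size_limit / llm_predictor move into a service context.
client = OpensearchVectorClient(endpoint="http://localhost:9200",
                                index="my-index", dim=1536)
vector_store = OpensearchVectorStore(client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(
    llm_predictor=LLMPredictor(
        llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")),
    chunk_size_limit=1024,
)
index = GPTVectorStoreIndex.from_documents(
    documents, storage_context=storage_context, service_context=service_context)

# Querying: index.query(...) became a query engine.
query_engine = index.as_query_engine(
    similarity_top_k=1,
    text_qa_template=CHAT_QA_PROMPT,
    refine_template=CHAT_REFINE_PROMPT,
)
response = query_engine.query(q)
```

The main structural change is that per-query settings (templates, top-k, predictor) now belong to the query engine and service context rather than being passed to each `query()` call.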
Is there a way to pre-package my app with the needed loaders? Instead of calling download_loader, I want to install them with pip and import them at runtime without downloading anything.
14 comments
Happened after I upgraded to 0.5.x
22 comments
walid

Import typing

I'm getting this error when I try to import llama_index:
Plain Text
ImportError: cannot import name 'Protocol' from 'typing' (/usr/lib/python3.7/typing.py)


Does anyone know the cause?
2 comments
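The likely cause: `typing.Protocol` was only added in Python 3.8, and the traceback shows Python 3.7 (`/usr/lib/python3.7/typing.py`). The usual fix is to upgrade Python; a sketch of the version-guarded fallback some libraries use (`typing_extensions` backports `Protocol` to older versions):

```python
import sys

# typing.Protocol exists only on Python 3.8+; on 3.7 it lives in typing_extensions.
if sys.version_info >= (3, 8):
    from typing import Protocol
else:
    from typing_extensions import Protocol  # pip install typing_extensions

class Greeter(Protocol):
    """Structural type: anything with a matching greet() method conforms."""
    def greet(self, name: str) -> str: ...
```

If the library you import does the bare `from typing import Protocol` with no fallback, the only options are upgrading the interpreter or pinning an older library version that still supported 3.7.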
Any thoughts here?
4 comments
Getting the following warnings when querying OpenSearch:
Plain Text
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

Token indices sequence length is longer than the specified maximum sequence length for this model (3324 > 1024). Running this sequence through the model will result in indexing errors
1 comment
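The second warning means the text handed to a local model's tokenizer exceeds that model's 1024-token window (3324 > 1024). Independent of any particular library, the standard remedy is to split long token sequences into windows no longer than the model maximum; a generic sketch (the 1024 limit mirrors the warning, the token ids are placeholders):

```python
def chunk_tokens(token_ids, max_len=1024):
    """Split a token-id sequence into consecutive windows of at most max_len tokens."""
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]

# 3324 tokens, as in the warning -> four windows: 1024 + 1024 + 1024 + 252.
windows = chunk_tokens(list(range(3324)))
print([len(w) for w in windows])  # [1024, 1024, 1024, 252]
```

In LlamaIndex terms, this is what a smaller chunk size (e.g. the `chunk_size_limit` setting) accomplishes at indexing time, so each chunk stays inside the model's window.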
Any experience with using LlamaIndex in production? I'd like people to upload their own data and then query it.

My concern is that loading all that data into memory is infeasible.
Is there an on-disk-only setting that prevents the process from loading every index into memory and using too much of it?
5 comments
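A sketch of one relevant pattern, assuming llama-index 0.6's storage APIs (`StorageContext`, `load_index_from_storage`); the `./storage` directory is a placeholder:

```python
from llama_index import GPTVectorStoreIndex, StorageContext, load_index_from_storage

# Build once, then write the index data to disk.
index = GPTVectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")

# Later (e.g. in another process): reload from disk instead of rebuilding.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```

Note that reloading still pulls the persisted stores into memory for the index being opened; to keep per-process memory bounded across many users, the usual route is an external vector database (Pinecone, OpenSearch, etc.) so embeddings stay server-side and only query results enter the Python process.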