Find answers from the community

Updated last year

Hey there, I am migrating from llama-index 0.4 to 0.6

Hey there, I am migrating from llama-index 0.4 to 0.6, and I am having trouble translating the syntax from the old version to the new one.

How would one write the following in the newer versions?


Plain Text
# Indexing
# this should directly index documents into Elasticsearch
client = ElasticsearchVectorClient()
GPTOpensearchIndex(documents, client=client, chunk_size_limit=1024)

# Querying
# this should ask the query 'q' on the Elasticsearch index, using the qa & refinement templates provided.
# and with the LLM Predictor provided
client = ElasticsearchVectorClient()
index = GPTOpensearchIndex([], client=client)
llm_predictor = LLMPredictor(llm=ChatOpenAI(
    temperature=0, model_name="gpt-3.5-turbo"))

similarity_top_k = 1
index.query(q, similarity_top_k=similarity_top_k,
               llm_predictor=llm_predictor,
               text_qa_template=CHAT_QA_PROMPT,
               refine_template=CHAT_REFINE_PROMPT)
I've tried the following for indexing and it doesn't seem to work properly

Plain Text
from llama_index import SimpleDirectoryReader, VectorStoreIndex, StorageContext, ServiceContext
from llama_index.vector_stores import OpensearchVectorStore

documents = SimpleDirectoryReader('./data').load_data()
client = ElasticsearchVectorClient()
vector_store = OpensearchVectorStore(client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(chunk_size_limit=1024)

index = VectorStoreIndex.from_documents(documents, storage_context=storage_context, service_context=service_context)
dang, I just had it all typed out and it disappeared 😦 Will type again.... lol
What I wrote above works now; I had something faulty in other code.

I wonder if that is the correct way of doing it though?
That is the correct way, I think! You are ahead of me lol, I should have read that haha

For the query, it will look like this

Plain Text
query_engine = index.as_query_engine(similarity_top_k=1, text_qa_template=CHAT_QA_PROMPT, refine_template=CHAT_REFINE_PROMPT)
response = query_engine.query("my query")
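As a side note, the response object that comes back can be printed directly or inspected for the chunks that were retrieved (a small sketch, assuming llama-index ~0.6):

Plain Text
print(response)               # the synthesized answer text
print(response.source_nodes)  # the retrieved chunks the answer was built from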
Ah nice! Thanks a lot @Logan M, you're always helpful!

In your example, how would you initialize that index with Elasticsearch as the backend?

I'm trying this but apparently there's nothing called from_vector_store

Plain Text
client = ElasticsearchVectorClient(account_id=account_id,
                                   app_id=app_id)

vector_store = OpensearchVectorStore(client)
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)


Also, the main refactor here is:
  1. that storage backends are provided through the storage_context parameter
  2. configurations specific to the query/indexer are provided through the service_context parameter
Is my understanding correct?
the from_vector_store is very new, I can't remember if it's on the latest PyPI version yet or not 👀 But that's mainly used to connect to a vector db that's already been populated. Setting up the storage like you did before is the initial way to do it
And yea, that's the main refactor there! Additionally, you might have noticed the as_query_engine() thing too, which kind of separates the query from the index a bit
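To make that split concrete, here's a rough sketch of how the two contexts fit together (assuming llama-index ~0.6; the LLMPredictor/ChatOpenAI setup is carried over from the original question, and client/documents are the ones defined in the earlier snippet):

Plain Text
# storage_context: where nodes/embeddings live (the vector store backend)
# service_context: how documents are chunked and queried (LLM, chunk size, etc.)
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, chunk_size_limit=1024)
storage_context = StorageContext.from_defaults(vector_store=OpensearchVectorStore(client))

index = VectorStoreIndex.from_documents(documents, storage_context=storage_context, service_context=service_context)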
Yeah, I want to use a storage that is already populated.

I basically have two main endpoints in my app, one for indexing, and the other for querying.

And the querying endpoint should use the pre-populated Elasticsearch index
Any alternatives to from_vector_store that can use the prepopulated index?
the alternative is setting up the vector store with the storage context again (so you set up the client to connect to the existing vector db created with llama-index) and then you can do something like this

Plain Text
index = VectorStoreIndex([], storage_context=storage_context)
Ah interesting, so it's the old way (where we had to pass an empty list)

Makes sense 🤔 thanks a lot man!
Yea! from_vector_store is just making that look less awkward (and automates the storage_context setup)
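For reference, once a release that ships from_vector_store is installed, the two spellings should be roughly equivalent (a sketch, reusing the Opensearch client/store setup from above):

Plain Text
vector_store = OpensearchVectorStore(client)

# explicit form: empty node list plus a storage_context pointing at the existing store
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex([], storage_context=storage_context)

# shorthand form: builds the storage_context for you
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)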
Noice! Much cleaner than the old interface, though sadly it's a very confusing and breaking change