Hey there, I'm migrating from llama-index 0.4 to 0.6 and I'm having trouble translating the syntax from the old version to the new one.
How would one write the following in the newer version?
# Indexing
# this should directly index documents into Elasticsearch
client = ElasticsearchVectorClient()
index = GPTOpensearchIndex(documents, client=client, chunk_size_limit=1024)
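Here is my best guess at the 0.6 equivalent of the indexing part, pieced together from the docs — untested, so please correct me. I'm assuming `ElasticsearchVectorClient` (my own wrapper) can still be handed to `OpensearchVectorStore`, and that `chunk_size_limit` now lives on a `ServiceContext`:

```python
# Untested sketch of 0.6-style indexing; ElasticsearchVectorClient is my
# own wrapper, so the exact wiring may be off.
from llama_index import GPTVectorStoreIndex, ServiceContext, StorageContext
from llama_index.vector_stores import OpensearchVectorStore

client = ElasticsearchVectorClient()  # my existing client wrapper
vector_store = OpensearchVectorStore(client)

# the vector store now goes in via a StorageContext,
# and chunking config via a ServiceContext
storage_context = StorageContext.from_defaults(vector_store=vector_store)
service_context = ServiceContext.from_defaults(chunk_size_limit=1024)

index = GPTVectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
)
```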
# Querying
# this should ask the query 'q' on the Elasticsearch index, using the qa & refinement templates provided.
# and with the LLM Predictor provided
client = ElasticsearchVectorClient()
index = GPTOpensearchIndex([], client=client)
llm_predictor = LLMPredictor(llm=ChatOpenAI(
    temperature=0, model_name="gpt-3.5-turbo"))
similarity_top_k = 1
index.query(q, similarity_top_k=similarity_top_k,
            llm_predictor=llm_predictor,
            text_qa_template=CHAT_QA_PROMPT,
            refine_template=CHAT_REFINE_PROMPT)
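And here is what I've pieced together for the query side in 0.6 — also untested. My understanding is that the per-query kwargs moved to `as_query_engine()` and the LLM predictor now hangs off a `ServiceContext`; I'm keeping the old pattern of building the index over an empty node list on top of the existing vector store, and `CHAT_QA_PROMPT` / `CHAT_REFINE_PROMPT` are my own templates:

```python
# Untested sketch of 0.6-style querying against an existing
# Elasticsearch-backed vector store.
from langchain.chat_models import ChatOpenAI
from llama_index import (GPTVectorStoreIndex, LLMPredictor,
                         ServiceContext, StorageContext)
from llama_index.vector_stores import OpensearchVectorStore

client = ElasticsearchVectorClient()  # my existing client wrapper
vector_store = OpensearchVectorStore(client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# the LLM predictor is now configured on the ServiceContext
llm_predictor = LLMPredictor(llm=ChatOpenAI(
    temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# empty node list: the data already lives in the vector store
index = GPTVectorStoreIndex([], storage_context=storage_context,
                            service_context=service_context)

# query kwargs moved from index.query(...) to as_query_engine(...)
query_engine = index.as_query_engine(
    similarity_top_k=1,
    text_qa_template=CHAT_QA_PROMPT,      # my own QA template
    refine_template=CHAT_REFINE_PROMPT,   # my own refine template
)
response = query_engine.query(q)
```

Is this roughly the intended translation, or is there a more direct way to point a `GPTVectorStoreIndex` at an existing store?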