I just updated LlamaIndex and now get this error when calling `query_engine = index.as_query_engine()` on a vector index: ImportError: cannot import name 'PromptTemplate' from 'llama_index.prompts' (/Users/jana/work/LlamaIndexLangChain/jupyter/myenv/lib/python3.9/site-packages/llama_index/prompts/__init__.py). Is anyone experiencing the same issue and knows how to resolve it?
Can you correct me if I am wrong: only the vector store index uses embeddings and a vector store when creating an index. But when querying a list index, we can also use an embedding-based query, and in that case the embeddings of the nodes are created at query time? Where else does LlamaIndex use embeddings? Not in a tree index or keyword table index? And also, when is it best to use the list index (when creating a synthesized answer?), when the vector index, when the tree index (summaries?), and when the keyword index? I'm not sure I understand what the best practices are. Sorry for the basic questions, I just want to understand.
Hi, I created a list index and a vector index, and then query engines for a few txt files:

```python
list_tool = QueryEngineTool.from_defaults(
    query_engine=list_query_engine,
    description="Useful for summarization of the podcast",
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=chat_engine,
    description="Useful for retrieving specific context related to the podcast topic",
)
```
I'm using a RouterQueryEngine with a PydanticSingleSelector to switch between them.
The query about specific content executes, but the query about the summary results in a RuntimeError: asyncio.run() cannot be called from a running event loop.
What could cause this? If I run the first query again, it returns the final response.
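For what it's worth, I can reproduce the error with the standard library alone, which makes me think it's about nested event loops (I'm running this in Jupyter, where the kernel already keeps a loop running). The `try_nested` coroutine below is just my own repro, not LlamaIndex code:

```python
import asyncio

# Minimal repro outside LlamaIndex: asyncio.run() refuses to start when
# another event loop is already running on the current thread, which is
# exactly the situation inside a Jupyter notebook.
async def try_nested():
    try:
        # This nested asyncio.run() mirrors what the summarizing query
        # engine appears to do internally.
        asyncio.run(asyncio.sleep(0))
        return None
    except RuntimeError as err:
        return str(err)

message = asyncio.run(try_nested())
print(message)  # "asyncio.run() cannot be called from a running event loop"

# In a notebook, the workaround that is usually suggested is:
#   import nest_asyncio
#   nest_asyncio.apply()
# which patches the running loop so nested asyncio.run() calls succeed.
```

Does that match what RouterQueryEngine is doing under the hood for the summary path?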
EntityExtractor for some reason does not extract any entities for me. I have installed span-marker, nltk, and the punkt tokenizer. The metadata stays empty. What can I check? I'm using it with a long txt file with content.
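For reference, this is roughly how I'm wiring it up (the file name is made up, and the module paths are from the version I have installed, so they may differ in yours); maybe I'm plugging the extractor in wrong:

```python
from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.node_parser.extractors import MetadataExtractor, EntityExtractor

# EntityExtractor runs a span-marker NER model over each node; if the
# prediction threshold is too high, or the model never downloads,
# the entity metadata stays empty.
entity_extractor = EntityExtractor(
    prediction_threshold=0.5,  # lowering this might reveal low-confidence entities
)
metadata_extractor = MetadataExtractor(extractors=[entity_extractor])
node_parser = SimpleNodeParser.from_defaults(metadata_extractor=metadata_extractor)

# "podcast.txt" is a placeholder for my long txt file.
documents = SimpleDirectoryReader(input_files=["podcast.txt"]).load_data()
nodes = node_parser.get_nodes_from_documents(documents)

# Inspect what (if anything) was extracted before building any index.
for node in nodes[:3]:
    print(node.metadata)
```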
I'm reading about LlamaIndex data agents: https://medium.com/llamaindex-blog/data-agents-eed797d7972f One of the use cases agents can solve is: "Calling any external service API in a structured fashion. They can either process the response immediately or index/cache this data for future use." Is there any example of how I would do this? I want to connect to newsapi, read the news, and store it in a VectorIndex if it is not there yet.
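To make the question concrete, this is the kind of thing I had in mind, without the agent layer yet. The NEWSAPI_KEY environment variable, the endpoint parameters, and the dedup-by-URL idea are my own assumptions, not from the article:

```python
import json
import os
import urllib.request

from llama_index import Document, VectorStoreIndex

# Fetch headlines from NewsAPI (https://newsapi.org); the query parameters
# here are illustrative, and the API key comes from an env var I set myself.
url = (
    "https://newsapi.org/v2/top-headlines?country=us&apiKey="
    + os.environ["NEWSAPI_KEY"]
)
with urllib.request.urlopen(url) as resp:
    articles = json.load(resp)["articles"]

# Use the article URL as a stable doc_id so the index can tell which
# articles it has already seen.
documents = [
    Document(
        text=f"{a['title']}\n\n{a.get('description') or ''}",
        doc_id=a["url"],
    )
    for a in articles
]

index = VectorStoreIndex.from_documents([])  # or a persisted index loaded from disk
index.refresh_ref_docs(documents)  # my understanding: inserts only unseen/changed docs
```

Is that the right shape, and would an agent tool then just wrap the fetch step?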
Hi, I have a problem with refreshing an index. I load the index from disk, then read the documents and call index.refresh_ref_docs. I pass in the service_context since I use QuestionsAnsweredExtractor for the document and node metadata. The problem is that the questions are being generated for all the documents, not only the added ones. How can I update the index and generate questions only for new and updated documents?
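For completeness, here is roughly what I'm doing (the paths are placeholders). My guess is that refresh_ref_docs relies on stable doc ids plus document hashes, so maybe that's where it goes wrong for me:

```python
from llama_index import SimpleDirectoryReader, StorageContext, load_index_from_storage

# Load the persisted index (pass your service_context with the
# QuestionsAnsweredExtractor here, as before).
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# refresh_ref_docs() matches incoming documents against stored ones by
# doc_id and skips those whose hash is unchanged. With the default reader
# settings every load_data() call generates fresh random ids, so every
# document looks new and the extractor runs on all of them. Passing
# filename_as_id=True keeps the ids stable across runs.
documents = SimpleDirectoryReader("./data", filename_as_id=True).load_data()
index.refresh_ref_docs(documents)
```

Is stable-id loading the intended way to make refresh skip unchanged documents, or is there another mechanism?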
Hi @Logan M, would it be possible to get a simple overview of the pros and cons of each query engine type? For example, when to use the Router Query Engine instead of the Retriever Query Engine instead of the Sub Question Query Engine. I would really appreciate an overview with use cases for each query engine. Thank you!
Hi, I'm researching LlamaIndex and want to use it in a company with in-house documentation hosted on Confluence. As I read through the documentation, I can use LlamaIndex without an LLM, but for some indexes LlamaIndex still uses an LLM to create embeddings and summaries. Am I right or not? And is this a security risk if I have sensitive information? Thank you for clearing this up for me.