
Nam
Hi, I am currently running LlamaIndex 0.8.62 in Google Colab. I used LlamaCPP to load Llama 2 13B Chat (and other models in GGUF format). After the first couple of successful query calls using a VectorStoreIndex as the query engine, the responses I get after that are always "Empty Response". I have also experimented both with and without node postprocessors (SentenceEmbeddingOptimizer and SentenceTransformerRerank). How can I solve this problem?

P.S.: My temporary workaround is to check whether response == "Empty Response" and, if so, re-run query_engine.query(question), since it always returns "Empty Response" the first time.
4 comments
Nam

Numpy

Hello, I am using SimpleNodeParser, specifically:
from llama_index.node_parser import SimpleNodeParser

node_parser = SimpleNodeParser.from_defaults()
but I get this error:
AttributeError: module 'numpy.linalg._umath_linalg' has no attribute '_ilp64'

Can anyone tell me how to fix it?
16 comments
Nam

Call

Hi everyone, why is the response from VectorStoreIndex.from_documents(documents, service_context=service_context).as_query_engine() so different from RetrieverQueryEngine(retriever=vector_retriever, response_synthesizer=response_synthesizer)?
I mean, when I use RetrieverQueryEngine with get_response_synthesizer, the information in the response is more accurate and compact. I even ran each approach three times to see whether the two could ever produce the same response.
1 comment