Find answers from the community

Senna
Joined September 25, 2024
Does anyone know if there's a way to query new nodes on every message in chat_engine?

from llama_index.chat_engine import CondenseQuestionChatEngine

query_engine = index.as_query_engine()
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True,
)
response = chat_engine.chat("what about the deeplink issue?")
print(response)


Here's my code, and it doesn't retrieve anything related to the deeplink. Instead, it queries something related to my custom chat history.
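A likely explanation: CondenseQuestionChatEngine first rewrites the new message against the chat history, then sends the rewritten question to the query engine, so an unrelated custom_chat_history can pull retrieval off topic. The toy sketch below (not the real LlamaIndex internals; `condense` is a made-up stand-in) illustrates the effect and why clearing the history before a new topic helps:

```python
# Toy stand-in for the condense-question step: rewrite the new message
# in the context of the most recent chat-history entry.
def condense(history, message):
    if not history:
        return message
    return f"In the context of {history[-1]}: {message}"

history = ["billing refunds"]  # unrelated custom history
print(condense(history, "what about the deeplink issue?"))
# The retrieval query now mentions "billing refunds", so retrieved
# nodes match the history, not the deeplink question.

history.clear()  # analogous to resetting the chat engine before a new topic
print(condense(history, "what about the deeplink issue?"))
```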
3 comments
Senna · Retrieval

If my prompt is in English, will the query engine be able to retrieve information in another language that has a similar meaning?
1 comment
How does the query engine determine which node or chunk to choose? I'm talking about VectorStoreIndex.
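Conceptually, a vector-store retriever embeds the query, scores every stored chunk by cosine similarity against the query vector, and returns the top-k chunks. A minimal sketch of that scoring step, with hand-made toy vectors in place of real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embedded" chunks; real vectors would come from an embedding model.
chunks = {
    "chunk about deeplinks": [0.9, 0.1, 0.0],
    "chunk about billing":   [0.1, 0.9, 0.0],
    "chunk about logging":   [0.2, 0.2, 0.9],
}

def top_k(query_vec, k=2):
    """Return the k chunk names most similar to the query vector."""
    scored = sorted(chunks.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

print(top_k([1.0, 0.0, 0.0]))  # a deeplink-like query vector
```

Because similarity is computed on embeddings rather than raw words, this is also why cross-lingual retrieval can work when the embedding model is multilingual.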
6 comments
Is hardware an issue? My cloud function only has 500 MB of RAM.
4 comments
Senna · Pdf error

Hi @Logan M, can you drop some wisdom about this 😩 please
14 comments
Senna · Csv

What's the best way for LlamaIndex to analyze a CSV file accurately? I want GPT to come up with insights from large CSV files. Think of it like the ChatGPT code interpreter, but without the coding part.
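One common workaround: LLMs are unreliable at arithmetic over raw CSV text, so compute the numbers in code first and hand the model a compact summary to interpret. A minimal stdlib sketch of that pre-aggregation step (the inline CSV is made-up example data):

```python
import csv
import io
import statistics

# Made-up example data standing in for a real CSV file.
csv_text = """amount,category
190.00,food
42.50,travel
190.00,food
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
amounts = [float(r["amount"]) for r in rows]

# Deterministic summary; an LLM would then be asked to interpret this
# dict instead of the raw rows.
summary = {
    "rows": len(rows),
    "total": round(sum(amounts), 2),
    "mean": round(statistics.mean(amounts), 2),
}
print(summary)
```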
2 comments
Does anyone know what the best use case for a knowledge graph is? I tried feeding it a product description and asking about the product, but it doesn't seem to give a right answer.
1 comment
I'm a front-end guy and new to this stuff.

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from llama_index.query_engine import RetrieverQueryEngine

documents = SimpleDirectoryReader('branch-data', recursive=True).load_data()
index = GPTVectorStoreIndex.from_documents(documents)

retriever = index.as_retriever(retriever_mode='embedding')
query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='tree_summarize')


This code loads my 5 MB of text files, and it took 6 minutes on my MacBook Pro. I'm thinking of using Google Compute Engine to run it much, much faster. Can anyone share what config I should look for: high CPU or GPU?
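One thing worth checking before buying hardware: if the embeddings come from a hosted API, ingestion time is usually dominated by network round-trips rather than local CPU or GPU, so batching and parallelizing the embedding requests can help more than a bigger machine. A sketch of that idea with a stub in place of the real embedding call (`embed_batch` is a made-up placeholder, not a LlamaIndex API):

```python
from concurrent.futures import ThreadPoolExecutor

def embed_batch(batch):
    """Stub embedding call; replace with a real API request per batch."""
    return [[float(len(t))] for t in batch]

texts = [f"chunk {i}" for i in range(10)]

# Group chunks into batches of 4 and embed the batches concurrently.
batches = [texts[i:i + 4] for i in range(0, len(texts), 4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = [vec for vecs in pool.map(embed_batch, batches) for vec in vecs]

print(len(results))  # one vector per chunk
```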
2 comments
@kapa.ai Is the LlamaIndex data ingestion process a CPU-based or GPU-based algorithm? I'm trying to run it on a Google Cloud instance.
2 comments
What's the best way to ingest big CSV data (10 MB) into LlamaIndex? I tried it, but it hallucinates a bit. The response makes up numbers that don't exist and is wrong at counting things (e.g. the number of transactions with amount 190.00).
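Exact-count questions like "how many transactions have amount 190.00" are a poor fit for similarity-based retrieval, since the answer depends on every row, not the few chunks that get retrieved. A deterministic count in plain Python sidesteps the hallucination entirely (the inline CSV is made-up example data):

```python
import csv
import io

# Made-up example data standing in for a real transactions CSV.
csv_text = """id,amount
1,190.00
2,50.00
3,190.00
"""

# Count rows matching the exact amount in code, not via the LLM.
count = sum(1 for r in csv.DictReader(io.StringIO(csv_text))
            if float(r["amount"]) == 190.00)
print(count)  # 2
```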
8 comments
Senna · Code

Meanwhile, copying and pasting the code into ChatGPT directly leads to a much better result.
11 comments