Lau Fla
Offline, last seen 2 months ago
Joined September 25, 2024
What is the difference between defining chunk_size_limit within ServiceContext.from_defaults and defining it within PromptHelper.from_llm_predictor? So far I have only defined the chunk size within SimpleNodeParser, as part of the text_splitter:
Plain Text
splitter = TokenTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
node_parser = SimpleNodeParser(text_splitter=splitter, include_extra_info=False, include_prev_next_rel=True)
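For reference, here is a minimal sketch of the three places the chunk size can live, assuming 0.6.x-era APIs (import paths and kwarg names may differ between versions): the node parser's chunk size controls how documents are split into nodes at index time, PromptHelper's chunk_size_limit caps how much text gets packed into each LLM prompt at query time, and ServiceContext.from_defaults(chunk_size_limit=...) is a shortcut that feeds both defaults.
Plain Text
# hypothetical sketch, llama_index ~0.6.x style; import paths may vary by version
from llama_index import ServiceContext, LLMPredictor, PromptHelper
from llama_index.node_parser import SimpleNodeParser
from llama_index.langchain_helpers.text_splitter import TokenTextSplitter

llm_predictor = LLMPredictor()  # default OpenAI LLM

# 1) index time: how documents are split into nodes
splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=20)
node_parser = SimpleNodeParser(text_splitter=splitter)

# 2) query time: caps the text packed into each prompt
prompt_helper = PromptHelper.from_llm_predictor(llm_predictor, chunk_size_limit=1024)

# 3) shortcut: ServiceContext.from_defaults(chunk_size_limit=...) would build
#    default versions of both; here the explicit objects are passed instead
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    node_parser=node_parser,
    prompt_helper=prompt_helper,
)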
5 comments
I'm interested in trying the HyDE technique to query my graph (LlamaDocs: https://tinyurl.com/4dcwm7fm). The example is for a single index, not a graph. What adaptations are needed:
1) to the query_configs when using HyDE?
2) How can I compare results with HyDE vs. without HyDE in the Playground?
Thanks!
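For what it's worth, the single-index HyDE pattern from the docs wraps a query engine in a TransformQueryEngine; one possible adaptation (a hedged sketch, not verified against graph query_configs) is to apply the same wrapper to the graph's query engine and query both engines to compare:
Plain Text
from llama_index.indices.query.query_transform import HyDEQueryTransform
from llama_index.query_engine.transform_query_engine import TransformQueryEngine

hyde = HyDEQueryTransform(include_original=True)

# 'graph' is the existing composed graph from the surrounding code
query_engine = graph.as_query_engine()
hyde_query_engine = TransformQueryEngine(query_engine, hyde)

# compare the two answers side by side
print(query_engine.query("my question"))
print(hyde_query_engine.query("my question"))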
5 comments
How would you approach the whole query-decompose part? I don't know whether a given query will be complex or straightforward. Is there a way (like the agents in LangChain) to let the LLM decide what approach is needed for a given query?
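One pattern in that direction is a router: an LLM selector picks a query engine per question based on short descriptions. A hedged sketch, assuming a llama_index version that ships RouterQueryEngine / QueryEngineTool, with vector_index and graph standing in for your own objects:
Plain Text
from llama_index.query_engine import RouterQueryEngine
from llama_index.selectors.llm_selectors import LLMSingleSelector
from llama_index.tools import QueryEngineTool

# one tool per "approach"; the selector LLM reads the descriptions to choose
simple_tool = QueryEngineTool.from_defaults(
    query_engine=vector_index.as_query_engine(),
    description="Useful for straightforward, single-step questions.",
)
complex_tool = QueryEngineTool.from_defaults(
    query_engine=graph.as_query_engine(),
    description="Useful for questions that need to be decomposed across several sources.",
)

router_engine = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[simple_tool, complex_tool],
)
response = router_engine.query("your question here")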
1 comment
QUESTION that arises from the examples in the ServiceContext docs (Key Components > Customization > ServiceContext) about kwargs:

#1
LLM(model=text-davinci-003, max_tokens=256)
SimpleNodeParser(chunk_size=1024, chunk_overlap=20)
PromptHelper(context_window=4096, num_output=256, chunk_overlap_ratio=0.1, chunk_size_limit=None)
no chunk size in ServiceContext

#2
LLM(model=gpt-3.5-turbo, max_tokens not defined)
SimpleNodeParser & PromptHelper not defined
ServiceContext(chunk_size=512)

The confusion:
  • Both models have roughly the same max token window of 4096 (± 1 token). It is defined explicitly in #1 but not in #2, why?
  • #2 doesn't define a node parser, but I guess ServiceContext(chunk_size=512) passes this on to the default node parser, which would be like doing SimpleNodeParser(chunk_size=512, chunk_overlap=0). Am I wrong?
  • Please help me understand the difference in #1 between LLM(max_tokens=256) and PromptHelper(num_output=256). The docs say "Number of outputs for the LLM" or "set number of output tokens" somewhere, but I don't understand what this means in practice. Does it define the length of the final answer?
  • I already chunked the nodes and only load a saved index from disk, so is the splitter at that stage meant for the user's input/question before it gets embedded?
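For concreteness, a hedged sketch of how those two doc examples could be spelled out explicitly (parameter names are taken from the docs snippets quoted above; the comments about defaults are my reading, not verified). num_output reserves room in the prompt for the answer, so it normally mirrors the LLM's max_tokens, while max_tokens is what actually limits how long the generated answer can be.
Plain Text
from llama_index import ServiceContext, LLMPredictor, PromptHelper
from llama_index.node_parser import SimpleNodeParser
from llama_index.langchain_helpers.text_splitter import TokenTextSplitter  # path may vary by version
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# Example #1: everything spelled out
llm_predictor = LLMPredictor(llm=OpenAI(model_name="text-davinci-003", max_tokens=256))
node_parser = SimpleNodeParser(
    text_splitter=TokenTextSplitter(chunk_size=1024, chunk_overlap=20)
)
prompt_helper = PromptHelper(
    context_window=4096,      # the model's total token window
    num_output=256,           # tokens reserved for the answer; mirrors max_tokens above
    chunk_overlap_ratio=0.1,
)
ctx1 = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    node_parser=node_parser,
    prompt_helper=prompt_helper,
)

# Example #2: only a chunk size; node parser and prompt helper fall back to defaults
ctx2 = ServiceContext.from_defaults(
    llm_predictor=LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo")),
    chunk_size=512,   # forwarded to the default node parser (older releases called this chunk_size_limit)
)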
6 comments
HELP, I updated packages and everything broke. Did anything change since version 0.6.14 in how prompt_helper is used?

I get an error from
Plain Text
prompt_helper = PromptHelper.from_llm_predictor(llm_predictor=llm_predictor, chunk_size_limit=1024)


I returned to version 0.6.14 and kept langchain version 0.0.205, but then choosing the newer gpt-3.5-turbo-16k model gives this error:

Plain Text
[ERROR] ValueError: Unknown model: gpt-3.5-turbo-16k. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, davinci, text-davinci-003, text-davinci-002, code-davinci-002, code-davinci-001, code-cushman-002, code-cushman-001


Then returning to the regular gpt-3.5-turbo model suddenly gives me the error "nltk package not found, please run pip install nltk", even though nltk is the first item in the requirements.txt file for my project. I'm confused.
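In case it helps anyone else: newer releases expose the plain PromptHelper constructor that the ServiceContext docs quote above, so one workaround sketch (not verified against any specific version) is to build the prompt helper directly instead of via PromptHelper.from_llm_predictor and hand it to the service context:
Plain Text
from llama_index import PromptHelper, ServiceContext

# built directly instead of PromptHelper.from_llm_predictor(...)
prompt_helper = PromptHelper(
    context_window=4096,       # use 16384 for gpt-3.5-turbo-16k
    num_output=256,
    chunk_overlap_ratio=0.1,
    chunk_size_limit=1024,
)
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,   # the existing predictor from the snippet above
    prompt_helper=prompt_helper,
)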
14 comments
Regarding the new example for the SQL Auto Vector Query Engine (https://gpt-index.readthedocs.io/en/latest/examples/query_engine/SQLAutoVectorQueryEngine.html), I think I found a small mistake:
Plain Text
# define node parser and LLM
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-4", streaming=True))
service_context = ServiceContext.from_defaults(chunk_size_limit=1024, llm_predictor=llm_predictor)
text_splitter = TokenTextSplitter(chunk_size=service_context.chunk_size_limit)
node_parser = SimpleNodeParser(text_splitter=text_splitter)


Shouldn't the service_context also include node_parser=node_parser?
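If that is indeed the intent, the fix would presumably just be to build the node parser first and pass it in; a sketch, assuming ServiceContext.from_defaults accepts a node_parser kwarg as it does elsewhere in the docs:
Plain Text
text_splitter = TokenTextSplitter(chunk_size=1024)
node_parser = SimpleNodeParser(text_splitter=text_splitter)
service_context = ServiceContext.from_defaults(
    chunk_size_limit=1024,
    llm_predictor=llm_predictor,
    node_parser=node_parser,
)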
1 comment
General question on time to index. I'm trying to index the same documents with different index types (GPTSimpleVectorIndex, GPTKnowledgeGraphIndex, GPTSimpleKeywordTableIndex).

Vector and Keyword were pretty fast, but KnowledgeGraph is so slow that I get the impression it's stuck, running for over 10 minutes! I tried knowledge-graph indexing both with and without embeddings, but it doesn't matter!
13 comments
I just discovered something weird, maybe someone can share some insights...

I indexed 3 different knowledge bases with GPTSimpleVectorIndex; let's call them index1, index2 & index3.

My goal is to build a graph on top, but for now I checked them separately. I use gpt-3.5-turbo as my model of choice.

llm_predictor_gpt3 = LLMPredictor(llm=ChatOpenAI(temperature=0.2, model_name='gpt-3.5-turbo', max_tokens=2000))

So I asked the same questions querying index1, then index2 followed by index3. The answer I got at the end led me to understand that the bot remembered the last 2 queries!

Even though I did not use any memory object (as in langchain), the bot knew that I already asked this question and that it received new information since I queried another index.

Question: Does that mean that if my bot uses one OpenAI API key for everything, questions and answers from different users may bleed into one another? User A asks question 1, then user B asks question 2, but the answer takes question 1 (and answer 1) into consideration...
I'm not even sure if that is a bad thing, but how can I avoid this?
19 comments
Regarding composability/graphs, why is a summary needed when building on a ListIndex? From my understanding, in a TreeIndex the summary helps the LLM route the question to the right index, but a ListIndex checks all parts anyway, so what is the summary used for in the List case?
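For context, this is the call in question: composing over a list index still asks for one summary per child index. A hedged sketch, assuming the 0.6.x-era ComposableGraph.from_indices signature, with index1/index2/index3 being existing child indices:
Plain Text
from llama_index import GPTListIndex
from llama_index.indices.composability import ComposableGraph

graph = ComposableGraph.from_indices(
    GPTListIndex,
    [index1, index2, index3],
    index_summaries=[
        "Summary of knowledge base 1",
        "Summary of knowledge base 2",
        "Summary of knowledge base 3",
    ],
)
query_engine = graph.as_query_engine()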
2 comments