Find answers from the community

itsgeorgep
Offline, last seen last month
Joined September 25, 2024
Noob question: when do I need LlamaIndex and when would I need LangChain? I learned about LlamaIndex first but just found LangChain... it seems like it does most of the stuff LlamaIndex does?
2 comments
I'm asking this question:

give me as much detail about these documents as you can

and I'm getting this response:
Plain Text
The new context information provided is a list of numerical values, which are likely the embeddings_dict mentioned in the original answer. These values may be used to represent the document in a numerical format for machine learning or natural language processing purposes. However, this information does not provide any additional details about the document store or the specific document mentioned in the original answer. Therefore, the original answer remains the same.


here's my code:
Plain Text
import json

from llama_index import Document, GPTSimpleVectorIndex, LLMPredictor
from langchain.chat_models import ChatOpenAI

def queryIndex(indexes, query):
    catsIndex = Document(text=json.dumps(indexes[0]))
    dogsIndex = Document(text=json.dumps(indexes[1]))

    combined = GPTSimpleVectorIndex([catsIndex, dogsIndex])

    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=250))
    return combined.query(query, mode="default", llm_predictor=llm_predictor)


When I pass in indexes, it's a list of dicts.

anyone have any idea what I'm doing wrong here?
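For what it's worth, one likely culprit: `json.dumps` on dicts that contain raw embedding vectors hands the LLM long lists of floats, which matches the "list of numerical values" response above. A minimal sketch that keeps only human-readable fields before building the `Document` objects (the key names here are hypothetical; adjust to your actual dict schema):

```python
import json

def strip_embeddings(record: dict) -> str:
    """Serialize a record for indexing, dropping float-vector fields
    (raw embeddings) that an LLM can't read as prose."""
    readable = {
        k: v for k, v in record.items()
        if not (isinstance(v, list) and v and isinstance(v[0], float))
    }
    return json.dumps(readable)

# Hypothetical record shape, for illustration only
cat_record = {"name": "Whiskers", "species": "cat",
              "embedding": [0.12, 0.87, 0.33]}
print(strip_embeddings(cat_record))  # {"name": "Whiskers", "species": "cat"}
```

The filtered string can then be passed to `Document(text=...)` so the index embeds readable text rather than pre-computed numbers.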
18 comments
I think I'm missing something about how tokens are calculated. I'm getting this:
This model's maximum context length is 4097 tokens, however you requested 4499 tokens (3499 in your prompt; 1000 for the completion). Please reduce your prompt; or completion length.

But my prompt was like 15 words... how's it possible that it used so many?
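The count makes more sense once you remember that the "prompt" the API sees is the retrieved document chunks plus the library's internal question-answering template, not just the 15-word question, and `max_tokens` is reserved on top of that. The arithmetic from the error message:

```python
MAX_CONTEXT = 4097        # gpt-3.5-turbo window, per the error message
prompt_tokens = 3499      # question + injected context chunks + template
completion_tokens = 1000  # max_tokens reserved for the answer
requested = prompt_tokens + completion_tokens
overflow = requested - MAX_CONTEXT
print(requested, overflow)  # 4499 402
```

Shrinking either the retrieved context or the requested completion length brings the total back under the limit.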
1 comment
Noob question: I'm using the Paul Graham essay example. I ran a prompt once. If I run another prompt, will it have the context of my previous prompt? (like a ChatGPT window would)
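No, not by default: each `query()` call is stateless; the index retrieves context from your documents, not from earlier prompts. To get ChatGPT-style memory you have to carry the conversation yourself. A minimal sketch, where `chat` is a hypothetical helper (not a llama_index API) wrapping any index with a `query(prompt)` method:

```python
history = []  # (question, answer) pairs carried across calls

def chat(index, question):
    # Prepend earlier turns so the model sees the conversation so far.
    prior = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    prompt = f"{prior}\nQ: {question}" if prior else question
    answer = index.query(prompt)
    history.append((question, str(answer)))
    return answer
```

Note this grows the prompt with every turn, so long conversations will eventually hit the model's context limit and need truncation or summarization.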
1 comment