Issue with the transition from Service_context to Settings.
[Attachment: image.png]
I've set llm and embed_model in the code with Settings; see the following code.


from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=20)
Settings.num_output = 512
Settings.context_window = 3900
Also loading the index from a StorageContext.
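For context, loading a persisted index usually looks something like this (the persist_dir path here is a placeholder; adjust it to wherever the index was saved):

```python
from llama_index.core import StorageContext, load_index_from_storage

# Point the storage context at the directory where the index was persisted
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# Rebuild the index object from the persisted docstore/index store/vector store
index = load_index_from_storage(storage_context)
```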
[Attachment: image.png]
Running pip freeze | grep llama-index
[Attachment: image.png]
I just fixed this actually
please pip install -U llama-index llama-index-core
Awesome, it's working now. Follow-up question, given the change, do I have to reindex my doc for RAG?
you should not need to reindex, existing indexes will work πŸ™
(if they dont, please let me know!)
Will get back to you.
I already restarted the index, and it's much faster. Did you guys change any code around
SimpleDirectoryReader and
VectorStoreIndex.from_documents?
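For reference, the indexing path being asked about is roughly the following (the "./data" and "./storage" paths are placeholders, not from the thread):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load raw files from a directory into Document objects
documents = SimpleDirectoryReader("./data").load_data()

# Chunk the documents into nodes, embed them, and build the index
index = VectorStoreIndex.from_documents(documents)

# Persist so the index can be reloaded later without re-embedding
index.storage_context.persist(persist_dir="./storage")
```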
Hmmm not that I'm aware of at a high level. But if it seems to be working, then I'll take it πŸ™‚
hmmmm, yeah, when I tried to index it last night, it took 2 hrs; now it's only taking 15 mins...
Not sure what happened.
But it's faster so that's good.
Hmmm. Maybe double check that queries/retrieval are working as expected lol
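A quick way to do that sanity check, assuming index is an already-built VectorStoreIndex (the query string is just an example):

```python
# Run an end-to-end query through the LLM
query_engine = index.as_query_engine()
response = query_engine.query("A question your documents should answer")
print(response)

# Also inspect retrieval directly to confirm sensible chunks come back
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("A question your documents should answer")
for node in nodes:
    print(node.score, node.text[:100])
```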
Actually, there was an issue with SimpleDirectoryReader
that I fixed in this latest update
It was causing a ton of nodes to get created, which might be why it was slow last night
Got it, ok, that makes sense.
Let me try it out.