Updated last year

documents = SimpleDirectoryReader('data').load_data()
parser = SimpleNodeParser.from_defaults()
nodes = parser.get_nodes_from_documents(documents)
index = VectorStoreIndex(nodes)
query_engine = index.as_query_engine()
response = query_engine.query("What is this book all about?")
print(response)

With this code I'm facing a RateLimitError.

If I do this without nodes, like this...

documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Short summary of the book?")
print(response)

I'm not getting a RateLimitError.

Why is that? Any workaround? 🤨
9 comments
I think it might be a weird coincidence 😅

Do you have a paid OpenAI account? The free tier is severely rate limited.
nope, free 🥲
Try using local embeddings for now
Plain Text
from llama_index import ServiceContext, set_global_service_context

service_context = ServiceContext.from_defaults(embed_model="local:BAAI/bge-small-en-v1.5")

set_global_service_context(service_context)
It will download that model (~200MB) and run it locally for embeddings
but still call openai for generating the responses
should help get around the rate limits
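Separately from local embeddings (and not something suggested in this thread), a common general workaround for a RateLimitError is to retry the failing call with exponential backoff. A minimal pure-Python sketch, using a stand-in RateLimitError class rather than the real OpenAI exception:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for the error an API raises when you hit its rate limit."""


def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on RateLimitError."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Wait base_delay, 2*base_delay, 4*base_delay, ... plus jitter.
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)


# Example: a call that fails twice with a rate limit, then succeeds.
calls = {"n": 0}

def flaky_embed():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429: too many requests")
    return [0.1, 0.2, 0.3]

print(with_backoff(flaky_embed, base_delay=0.01))
```

This only spaces requests out; on a free account with very low limits, local embeddings (as above) remain the more practical fix.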
@Logan M is SimpleDirectoryReader also creating/parsing the nodes? I'm confused by that part.
It's creating documents (which is basically the same as nodes tbh). Different loaders will create documents differently
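To make the documents-vs-nodes point concrete (an illustrative sketch, not llama_index's actual parser): a loader returns whole documents, and a node parser just chops each document into smaller chunks, with one embedding request per chunk:

```python
def split_into_nodes(text: str, chunk_size: int = 50) -> list[str]:
    """Toy node parser: split one document's text into fixed-size chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# One "document", roughly what a loader like SimpleDirectoryReader returns...
doc = "x" * 120
# ...becomes several "nodes" after parsing; each node is embedded separately.
nodes = split_into_nodes(doc)
print(len(nodes))  # 3 chunks of up to 50 characters each
```

More nodes means more embedding calls, which is why a heavily rate-limited account feels the difference between the two code paths.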