
Is there any content about how to index faster?
Are you using a vector index? You can increase the embedding batch size when creating the index so that it builds a little faster:

Plain Text
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index import ServiceContext, GPTVectorStoreIndex

service_context = ServiceContext.from_defaults(embed_model=OpenAIEmbedding(embed_batch_size=50))

index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
do you know the default embed batch size? @Logan M
Looks like it's 10
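To see why a bigger batch helps: each batch is one round trip to the embedding API, so raising the batch size cuts the number of requests roughly in proportion. A rough sketch in plain Python (no llama_index needed; the node count of 1000 is just an illustrative assumption):

```python
import math

def num_embedding_requests(num_nodes: int, batch_size: int) -> int:
    """Each request embeds up to batch_size texts, so building the index
    takes ceil(num_nodes / batch_size) round trips to the embedding API."""
    return math.ceil(num_nodes / batch_size)

# Hypothetical index built from 1000 text chunks:
print(num_embedding_requests(1000, 10))  # default batch size -> 100 requests
print(num_embedding_requests(1000, 50))  # embed_batch_size=50 -> 20 requests
```

The per-request overhead (latency, connection setup) is what you save; the total number of tokens embedded stays the same either way.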
@Logan M last question, sorry haha. I stored my indexes on disk, so now when I run a query will I only be charged for the tokens of the prompt, or what? Sorry, very noob
Yea! So now that it's stored on disk, the only token usage will be
  • embedding tokens for the question text (using text-embedding-ada-002)
  • LLM tokens for the query + internal prompts + response text (using text-davinci-003 by default here)
The biggest cost by far will be the LLM (embeddings are very cheap compared to it anyway)
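A back-of-the-envelope sketch of why the LLM dominates the per-query cost. The prices and token counts below are illustrative assumptions (roughly the then-current $0.02 / 1K tokens for text-davinci-003 and $0.0004 / 1K tokens for text-embedding-ada-002; check current pricing):

```python
# Assumed prices, USD per 1K tokens (illustrative, not authoritative):
EMBED_PRICE_PER_1K = 0.0004  # text-embedding-ada-002
LLM_PRICE_PER_1K = 0.02      # text-davinci-003

def query_cost(question_tokens: int, prompt_tokens: int, response_tokens: int):
    """Only the question is embedded at query time; the LLM is billed for the
    full prompt (retrieved context + internal prompts) plus the response."""
    embed_cost = question_tokens / 1000 * EMBED_PRICE_PER_1K
    llm_cost = (prompt_tokens + response_tokens) / 1000 * LLM_PRICE_PER_1K
    return embed_cost, llm_cost

# e.g. a 20-token question, ~1500 tokens of retrieved context + prompts,
# and a 200-token answer:
embed_cost, llm_cost = query_cost(20, 1500, 200)
print(f"embedding: ${embed_cost:.6f}, LLM: ${llm_cost:.4f}")
```

With these assumed numbers the LLM side is a few thousand times more expensive than embedding the question, which is why the stored index barely adds to the bill.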
gotcha, you're the man ty!
@Logan M there's no way to use an api key like this?
[Attachment: image.png]
I don't want to set it in os.environ
I want to pass it per request
In the docs/source code I only found this:
[Attachment: image.png]
Yea, that's for embeddings. You can leave the rest of the arguments as default if you aren't using Azure, just set the api key.

You will also need to set the api key in the LLM definition too:

Plain Text
from llama_index import ServiceContext, LLMPredictor, load_index_from_storage
from langchain.llms import OpenAI
# if you want to use gpt-3.5 or gpt-4, use ChatOpenAI
# from langchain.chat_models import ChatOpenAI

service_context = ServiceContext.from_defaults(
  llm_predictor=LLMPredictor(llm=OpenAI(model_name='text-davinci-003', temperature=0, openai_api_key="<key>")),
  embed_model=embed_model,  # your embedding model with the api key set, as above
)

index = load_index_from_storage(storage_context, service_context=service_context)