OpenAI error

Yep, sure, it's pretty much the same example from the docs:
Plain Text
import torch  # only needed if the commented model_kwargs line below is enabled
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import HuggingFaceLLM

# system_prompt and query_wrapper_prompt are defined earlier, as in the docs example
llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.7, "do_sample": False},
    system_prompt=system_prompt,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="facebook/opt-350m",
    model_name="facebook/opt-350m",
    device_map="auto",
    stopping_ids=[50278, 50279, 50277, 1, 0],
    tokenizer_kwargs={"max_length": 4096},
    # uncomment this if using CUDA to reduce memory usage
    # model_kwargs={"torch_dtype": torch.float16}
)
documents = SimpleDirectoryReader("./documents").load_data()
service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm)
index = VectorStoreIndex.from_documents(
    documents, service_context=service_context
)
7 comments
You haven't passed an embedding model, so it is using the OpenAI embedding by default.
You'll need to pass an open-source embed model if you wish to avoid OpenAI completely.
Sharing some sample code:
Plain Text
from llama_index import LangchainEmbedding, ServiceContext
from langchain.embeddings.huggingface import HuggingFaceEmbeddings

# Wrap a local sentence-transformers model so embeddings are computed locally,
# with no OpenAI calls
embed_model = LangchainEmbedding(
    HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
)
service_context = ServiceContext.from_defaults(
    chunk_size=1024, llm=llm, embed_model=embed_model
)


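For completeness, a minimal usage sketch, assuming the llm and documents from the first snippet above; with the local embed model in the service context, both indexing and querying should run without touching OpenAI (the query string here is just a hypothetical example):
Plain Text
# Rebuild the index using the service_context that carries the local embed model
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# Query the index; retrieval uses the HuggingFace embeddings, generation uses the local LLM
query_engine = index.as_query_engine()
response = query_engine.query("What are these documents about?")
print(response)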
You can try this @Mouhand Alkadri
@WhiteFang_Jr Thank you very much, that was really helpful, it worked