Huggingface inference

Hello! I'm wondering if there's any way to use LlamaIndex with the Hugging Face Inference API, for example to use falcon-7b-instruct?
Yes! You can set up the LLM using the LangChain modules, then just pass it into a service context:

Plain Text
from llama_index import ServiceContext, set_global_service_context

llm = <setup from langchain>  # any LangChain LLM wrapper works here

# Falcon-7b's context window is 2048 tokens, so set it explicitly and lower the chunk size to match
service_context = ServiceContext.from_defaults(llm=llm, context_window=2048, chunk_size=512)

set_global_service_context(service_context)
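For the `<setup from langchain>` step, one way to reach falcon-7b-instruct over the Hugging Face Inference API is LangChain's `HuggingFaceHub` wrapper. This is a sketch against the legacy `llama_index`/`langchain` APIs shown above; module paths and parameters may differ in newer releases, and the `model_kwargs` values are illustrative choices, not requirements.

```python
# Sketch: wiring a Hugging Face Inference API model into LlamaIndex via
# LangChain's HuggingFaceHub wrapper (legacy ServiceContext API).

def build_service_context(hf_api_token: str):
    # Imports are kept inside the function so the sketch only needs
    # `pip install langchain llama-index` when actually called.
    from langchain.llms import HuggingFaceHub
    from llama_index import ServiceContext, set_global_service_context

    # falcon-7b-instruct as hosted on the Hugging Face Inference API.
    llm = HuggingFaceHub(
        repo_id="tiiuae/falcon-7b-instruct",
        huggingfacehub_api_token=hf_api_token,
        model_kwargs={"temperature": 0.1, "max_new_tokens": 256},
    )

    # Falcon-7b has a 2048-token context window, so shrink the chunk
    # size accordingly, as in the snippet above.
    service_context = ServiceContext.from_defaults(
        llm=llm, context_window=2048, chunk_size=512
    )
    set_global_service_context(service_context)
    return service_context
```

Calling `build_service_context("<your HF token>")` then makes every subsequent LlamaIndex index use the hosted Falcon model by default.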