----------------------
To connect LlamaIndex to the API of a predictive model, you would typically use a Large Language Model (LLM) integration. The specific steps vary depending on the API and the model you're using.
For instance, LlamaIndex supports integrations with various models and APIs such as OpenAI, Hugging Face, PaLM, and more. It also supports running local models. Here's an example of how you can use a local LLM such as Llama 2:
```python
from llama_index import ServiceContext

# "local" resolves to a default local model served through LlamaCPP
service_context = ServiceContext.from_defaults(llm="local")
```
This will use llama2-chat-13B with LlamaCPP, and assumes you have llama-cpp-python installed. A full LlamaCPP usage guide is available in the LlamaIndex documentation.
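To see that service context used end to end, here is a minimal sketch against the legacy llama_index API; the `data` directory and the query string are placeholder assumptions, and `embed_model="local"` is added so the whole pipeline runs offline:

```python
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader

# Assumes a ./data directory containing your documents (hypothetical path)
documents = SimpleDirectoryReader("data").load_data()

# "local" for both the LLM and the embeddings keeps everything on-machine
service_context = ServiceContext.from_defaults(llm="local", embed_model="local")

index = VectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the documents."))  # hypothetical query
```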
For more specific integrations, LlamaIndex provides guides for integrating with Monster API, Ollama (Llama 2 7B), Predibase, and Replicate (Llama 2 13B), among others.
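As one concrete example, the Ollama integration plugs into the same ServiceContext pattern. This is a minimal sketch assuming the legacy llama_index API and a local Ollama server that has already pulled the llama2 model:

```python
from llama_index import ServiceContext
from llama_index.llms import Ollama

# Assumes Ollama is running locally and `ollama pull llama2` has been done
llm = Ollama(model="llama2")
service_context = ServiceContext.from_defaults(llm=llm)
```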
Please refer to the specific guide that matches your predictive model's API for detailed instructions. If your model's API is not listed, you may need to create a custom LLM integration; the Custom LLM How-To guide in the LlamaIndex documentation covers this in detail.
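The general shape of such an integration is to subclass CustomLLM and route completions to your model's API. The sketch below follows the pattern from the Custom LLM guide against the legacy llama_index API; the `call_my_api` helper, the model name, and the stub response are hypothetical stand-ins for your own API client:

```python
from typing import Any
from llama_index.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.llms.base import llm_completion_callback


def call_my_api(prompt: str) -> str:
    # Hypothetical: replace with a real request to your predictive model's API
    return "stub response for: " + prompt


class MyPredictiveModel(CustomLLM):
    context_window: int = 4096
    num_output: int = 256
    model_name: str = "my-predictive-model"  # hypothetical name

    @property
    def metadata(self) -> LLMMetadata:
        # Advertise the model's limits so LlamaIndex can size prompts correctly
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        return CompletionResponse(text=call_my_api(prompt))

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Fake streaming by emitting the full response one character at a time
        text = ""
        for char in call_my_api(prompt):
            text += char
            yield CompletionResponse(text=text, delta=char)
```

An instance can then be passed to ServiceContext.from_defaults(llm=MyPredictiveModel()) just like the built-in integrations above.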
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------