TogetherAI
The TogetherAI pack's constructor exposes two model parameters, with these defaults: `embedding_model: str = "togethercomputer/m2-bert-80M-8k-retrieval"` and `generative_model: str = "mistralai/Mixtral-8x7B-Instruct-v0.1"`.
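Both values can also be passed explicitly when the pack is instantiated. The sketch below is an assumption-heavy illustration, not confirmed API: the class name `TogetherAIQueryEnginePack`, the `./together_pack` directory, and the `documents` argument all mirror the Ollama pack pattern shown later, and a Together AI API key is assumed to be read from the environment.

```python
import os

from llama_index import SimpleDirectoryReader
from llama_index.llama_pack import download_llama_pack

# hypothetical pack name, mirroring the OllamaQueryEnginePack pattern below
TogetherAIQueryEnginePack = download_llama_pack("TogetherAIQueryEnginePack", "./together_pack")

# assumption: the pack picks up the Together AI key from the environment
os.environ["TOGETHER_API_KEY"] = "<your-key>"

documents = SimpleDirectoryReader("./data").load_data()

# pass the two model parameters explicitly (shown here with the default values)
pack = TogetherAIQueryEnginePack(
    documents=documents,
    embedding_model="togethercomputer/m2-bert-80M-8k-retrieval",
    generative_model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
```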
How do we know which `embedding_model` and `generative_model` to use? One way is to download a pack into a local directory and read its source to see exactly how the models are wired up. For example, to fetch the OllamaQueryEnginePack:

```python
from llama_index.llama_pack import download_llama_pack

# download and install dependencies, comment if already downloaded
OllamaQueryEnginePack = download_llama_pack("OllamaQueryEnginePack", "./ollama_pack")
```
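Before looking inside, here is a minimal usage sketch of the downloaded pack. It assumes a local Ollama server is running at the default address and that a model such as `mistral` has already been pulled; `model` and `base_url` appear in the `base.py` line quoted below, while the `documents` argument is an assumption.

```python
from llama_index import SimpleDirectoryReader

# index a few local files with the pack
documents = SimpleDirectoryReader("./data").load_data()

ollama_pack = OllamaQueryEnginePack(
    model="mistral",                    # any model already pulled into Ollama
    base_url="http://localhost:11434",  # Ollama's default endpoint
    documents=documents,
)

# run() routes the question through the pack's query engine
response = ollama_pack.run("Summarize these documents in two sentences.")
print(response)
```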
Inside the downloaded pack's `base.py`, the LLM is constructed from the pack's `model` and `base_url` settings:

```python
llm = Ollama(model=self._model, base_url=self._base_url)
```
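The same two settings can also be used to build an `Ollama` LLM directly, outside the pack. A small sketch, assuming the pre-0.10 import paths used above and a locally pulled `mistral` model:

```python
from llama_index.llms import Ollama

# construct the LLM with the same two settings the pack passes through;
# the model name and URL here are examples
llm = Ollama(model="mistral", base_url="http://localhost:11434")

print(llm.complete("Reply with a one-sentence greeting."))
```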