The community member is asking how to use an API URL as a Large Language Model (LLM) instead of using the Hugging Face library. The comments suggest that the community member can wrap the API requests with a custom LLM, as shown in an example from the LlamaIndex documentation. Another community member confirms that the API call can go in place of the "dummy_response" in that example, and that the community member would need to make the request with a library like requests. However, there is an open question about whether an LLM is required for building the service context, and what the data would look like if an LLM is not used.
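For reference, the approach described above could look roughly like the sketch below. It follows the CustomLLM pattern from the LlamaIndex documentation, but the endpoint URL, the request payload ({"prompt": ...}), and the response shape ({"text": ...}) are assumptions about how the model is served, and import paths vary between LlamaIndex versions (this uses the llama_index.core layout).

```python
from typing import Any

import requests
from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class APIBackedLLM(CustomLLM):
    """Custom LLM that forwards prompts to a hosted model endpoint."""

    # Hypothetical endpoint; replace with the actual model URL.
    api_url: str = "http://23.34.56:8001/model"
    context_window: int = 3900
    num_output: int = 256
    model_name: str = "custom-api-model"

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Instead of returning a hard-coded dummy_response,
        # send the prompt to the API and return its completion.
        resp = requests.post(self.api_url, json={"prompt": prompt}, timeout=60)
        resp.raise_for_status()
        # Assumes the endpoint returns JSON like {"text": "..."}.
        return CompletionResponse(text=resp.json()["text"])

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Minimal non-streaming fallback: yield the full completion once.
        yield self.complete(prompt, **kwargs)
```

The resulting instance could then be passed as the llm when building the service context (for example, ServiceContext.from_defaults(llm=APIBackedLLM()) in older LlamaIndex versions), rather than relying on a Hugging Face model.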
Hi @Logan M, thanks for reaching out. Here, in place of dummy_response, I can give my API URL, right? i.e. dummy_response = 'HTTP://23.34.56:8001/model'