tanaysai · Llms

If an index was created with one LLM, and you load that index with another LLM, would it work?
I have some indexes that are mixed up.

Looking directly at the JSON files, there seems to be no way of knowing which LLM they correspond to.
I tried loading those indexes and calling things like index._service_context.llm_predictor.get_llm_metadata(), but that doesn't seem to indicate which LLM the index was created with.

On a prior thread here, it was suggested that I could define the LLM, create a custom service context with that LLM, and pass it in while loading the index from storage; what I'm doing looks roughly like the sketch below.
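A minimal sketch of that (the model name is a placeholder, and this assumes a llama_index version where ServiceContext.from_defaults accepts an llm directly):

from llama_index import ServiceContext, StorageContext, load_index_from_storage
from llama_index.llms import HuggingFaceLLM

# Placeholder model; substitute whichever LLM the index should use.
llm = HuggingFaceLLM(
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
)
service_context = ServiceContext.from_defaults(llm=llm)

# Rebuild the storage context from the persisted directory, then load
# the index with the custom service context attached.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)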

While doing that, I see an error like the one below (likely because this index was created with a different LLM than the current one):

File "/home/paperspace/wynk/wynkenv/lib/python3.9/site-packages/torch/nn/functional.py", line 2515, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
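(Note: this particular RuntimeError is usually not about which LLM built the index; it typically means a model loaded in float16 is running on CPU, where half-precision LayerNorm isn't implemented. A workaround sketch, assuming the HuggingFaceLLM setup above, is to force float32 weights when no GPU is available:)

import torch
from llama_index.llms import HuggingFaceLLM

llm = HuggingFaceLLM(
    model_name="StabilityAI/stablelm-tuned-alpha-3b",  # placeholder model
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    # Load weights in float32 so CPU kernels like LayerNorm work;
    # keep float16 only when running on a GPU.
    model_kwargs={"torch_dtype": torch.float32},
)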
4 comments
For indexes generated via StableLM, I expect very little OpenAI credit usage (embeddings only). But I see davinci LLM usage in the OpenAI report, so I suspect some of the indexes are older versions generated with an OpenAI LLM. Hence I want to identify those and regenerate them with StableLM; one heuristic I'm considering is sketched below.
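One heuristic sketch (this assumes the default SimpleVectorStore persistence layout, where vector_store.json holds an embedding_dict mapping node ids to vectors): the embedding dimensionality hints at the embedding model, since OpenAI's text-embedding-ada-002 produces 1536-dimensional vectors while most local HF embedding models use other sizes.

import json

# Path is an assumption; point it at the persisted index directory.
with open("./storage/vector_store.json") as f:
    data = json.load(f)

embedding_dict = data.get("embedding_dict", {})
if embedding_dict:
    first_vector = next(iter(embedding_dict.values()))
    print(f"{len(embedding_dict)} vectors, dimension {len(first_vector)}")
    # 1536 dims suggests OpenAI text-embedding-ada-002;
    # 384/768 dims suggest a sentence-transformers model.

Note that this only identifies the embedding model, not the LLM; davinci usage comes from whichever LLM the service context supplies at build or query time, which the persisted JSON does not record.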
5 comments
Is HuggingFaceLLMPredictor deprecated? I can see it in the changelog, but the migration guide link is broken.

Changelog : https://gpt-index.readthedocs.io/en/latest/development/changelog.html#new-features
Migration Guide (Broken): https://gpt-index.readthedocs.io/how_to/customization/llms_migration_guide.html

Should we use HuggingFaceLLM instead of HuggingFaceLLMPredictor? But the import doesn't seem to work:
from llama_index.llms import HuggingFaceLLM

Also, all examples of HuggingFaceLLM and HuggingFaceLLMPredictor pass temperature as a param, but in the most recent version both of them report temperature as an unexpected param.

Any working example for initializing a HF LLM with temperature?
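For reference, the pattern that appears to work in recent versions is to pass temperature through generate_kwargs (forwarded to the underlying model.generate call) rather than as a top-level constructor argument. A sketch, with the StableLM model names taken from the llama_index docs:

from llama_index.llms import HuggingFaceLLM

llm = HuggingFaceLLM(
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    context_window=2048,
    max_new_tokens=256,
    # Sampling params go through generate_kwargs, not the constructor.
    generate_kwargs={"temperature": 0.7, "do_sample": True},
    device_map="auto",
)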
3 comments