For such indexes generated via StableLM

For such indexes generated via StableLM, I am expecting very little usage of OpenAI credits (only for embeddings). But I see davinci LLM usage in my OpenAI report, so I suspect some of the indexes are older versions that were generated with an OpenAI LLM. Hence I want to identify and regenerate them using StableLM.
Hmm, there's no way to know when loading the index, sadly.

(However, make sure you still pass in the service context when loading from disk, to avoid defaulting to davinci)
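A minimal sketch of what that looks like with the legacy ServiceContext API (the persist_dir, StableLM model name, and HuggingFaceLLM wiring here are assumptions for illustration; exact kwargs vary across llama_index versions):

```python
from llama_index import ServiceContext, StorageContext, load_index_from_storage
from llama_index.llms import HuggingFaceLLM

# Assumed StableLM variant; substitute whatever model the indexes
# were actually built with.
llm = HuggingFaceLLM(
    model_name="stabilityai/stablelm-tuned-alpha-3b",
    tokenizer_name="stabilityai/stablelm-tuned-alpha-3b",
)

# Passing the service context at load time prevents the index from
# falling back to the default OpenAI davinci LLM.
service_context = ServiceContext.from_defaults(llm=llm)
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)
```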
If you added the LLM as metadata to your nodes/documents, then I suppose you could tell from there (see the sketch below). Or just change where you save the index 😅
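If you go that route, a hedged sketch of the tagging idea (the metadata key is a made-up convention, not a LlamaIndex standard):

```python
from llama_index import Document

# Stamp each document with the LLM it was built with, under a
# made-up metadata key, so you can tell indexes apart later.
doc = Document(
    text="...",
    metadata={"built_with_llm": "stablelm"},
)

# Later, after loading the index (as in the previous sketch),
# inspect the stored nodes for the tag:
for node in index.docstore.docs.values():
    print(node.metadata.get("built_with_llm"))
```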
Thanks Logan. Good idea. It didn't occur to me that I can override the service context and pass a new LLM while loading from disk.
@Logan M - would it be possible to print the name of the LLM from the loaded index? (Assuming the index is loaded from storage without passing any custom service context.)

I tried index._service_context.llm_predictor.get_llm_metadata()

It seems to only show the details below:

LLMMetadata(context_window=4097, num_output=-1)

Is there a similar helper function in the service context to get the name or other details of the model used in that index?
I thiiiink you can print index._service_context.llm_predictor._llm.model_name
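For example (a sketch only; _service_context, llm_predictor, and _llm are private attributes, so this depends on the llama_index version and on which LLM class the predictor wraps):

```python
from llama_index import StorageContext, load_index_from_storage

# Load without a custom service context, so the index keeps
# whatever defaults it falls back to.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

# On a default OpenAI-backed setup this prints something like
# "text-davinci-003"; attribute names may differ between versions.
print(index._service_context.llm_predictor._llm.model_name)
```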