Some finetuned LLMs will not always return a string. The one I am using returns

{'score': 0.85932, 'answer': 'the answer'}
This is actually the format I would prefer, since I am using the accumulate response mode over a SummaryIndex, which lets me compare confidence scores and select the best answer. Is there a way to do this? Right now, whenever my CustomLLM's `_call` method returns anything other than a string, I get an error from langchain_core/language_models/llms.py.
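For now, the only workaround I can think of is serializing the dict to a JSON string inside `_call` (so the base class's string check passes) and parsing it back out when comparing the accumulated answers. A minimal sketch of that idea, outside of LangChain and with hypothetical names:

```python
import json

def call_workaround(result: dict) -> str:
    # Hypothetical stand-in for CustomLLM._call: serialize the model's
    # dict output so the base LLM class still receives a string.
    return json.dumps(result)

# Downstream, parse each accumulated response and keep the best one.
responses = [
    call_workaround({'score': 0.85932, 'answer': 'the answer'}),
    call_workaround({'score': 0.41, 'answer': 'a worse answer'}),
]
parsed = [json.loads(r) for r in responses]
best = max(parsed, key=lambda r: r['score'])
print(best['answer'])  # highest-confidence answer
```

This keeps `_call` type-compliant, but I would prefer a supported way to return structured output directly.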