Updated 10 months ago

Some finetuned LLMs will not always return a string. The one I am using returns:

```
{'score': 0.85932, 'answer': 'the answer'}
```

This is actually what I would prefer returned, since I am using the accumulate method over a SummaryIndex, so I can compare confidences and select the best answer. Is there a way to do this? Right now, when my CustomLLM's _call method returns anything other than a string, I get an error in langchain_core/language_models/llms.py
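One workaround, sketched below: have `_call` serialize the dict with `json.dumps` so langchain's string check passes, and decode it back downstream where the score is needed. This is a minimal sketch, not the real model — `ScoredAnswerLLM` and the hard-coded output are hypothetical stand-ins, and in practice the class would subclass langchain's `LLM` base class.

```python
import json

class ScoredAnswerLLM:
    """Hypothetical stand-in for a custom LLM; only the _call shape is shown.
    In practice this would subclass langchain's LLM base class."""

    def _call(self, prompt, stop=None):
        # Placeholder for whatever the finetuned model actually returns
        model_output = {"score": 0.85932, "answer": "the answer"}
        # Return a JSON string instead of the raw dict, so langchain's
        # string-only requirement on _call is satisfied
        return json.dumps(model_output)
```

Downstream code then calls `json.loads` on the response text to recover the score and answer.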
4 comments
Maybe there is a better import to use for my class? For example, something other than `from langchain.llms.base import LLM`.
You could create a custom LLM with llama-index, but it will probably hit the same string limitation

You could always parse the string back into json later?
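For example, assuming each accumulated response string is valid JSON (the strings below are made up for illustration), picking the best answer by confidence is straightforward:

```python
import json

# Hypothetical response strings from an accumulate query over a SummaryIndex;
# each is assumed to be a JSON-encoded {"score": ..., "answer": ...} payload.
responses = [
    '{"score": 0.85932, "answer": "the answer"}',
    '{"score": 0.41210, "answer": "a weaker answer"}',
]

# Parse each string back into a dict and keep the highest-confidence answer
parsed = [json.loads(r) for r in responses]
best = max(parsed, key=lambda d: d["score"])
print(best["answer"])  # prints: the answer
```

If the model instead emits Python-repr dicts with single quotes, `ast.literal_eval` from the standard library can parse those safely where `json.loads` would fail.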

If you need hyper-custom stuff, you can always design a custom query pipeline or a custom query engine

https://docs.llamaindex.ai/en/stable/module_guides/querying/pipeline/root.html
https://docs.llamaindex.ai/en/stable/examples/query_engine/custom_query_engine.html#defining-a-custom-query-engine
ok. that makes sense.