Hi team, in LangChain we have the following Runnable method. Do we have anything similar to Runnable functions in LlamaIndex?

@abstractmethod
def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
    """Transform a single input into an output. Override to implement.

    Args:
        input: The input to the Runnable.
        config: A config to use when invoking the Runnable.
            The config supports standard keys like 'tags' and 'metadata' for
            tracing purposes, 'max_concurrency' for controlling how much work
            to do in parallel, and other keys. Please refer to RunnableConfig
            for more details.

    Returns:
        The output of the Runnable.
    """
2 comments
Maybe a better question is: what is your use case, and what are you trying to do?
@logan thanks for asking. I found the root cause: this was to execute two processes in parallel, and I did not give the right prompt, which is why I ended up seeing this error. Updating the prompt resolved the error that occurred while making a call to the LLM.
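
For anyone who hits a similar error when fanning out LLM calls: one way to run two requests in parallel in LlamaIndex is the async completion API. A minimal sketch, assuming the OpenAI integration; the model name and both prompts are illustrative:

import asyncio

from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")  # illustrative model choice

async def main() -> None:
    # acomplete() is the async counterpart of complete(); gathering the
    # two coroutines lets both LLM calls run concurrently.
    summary, outline = await asyncio.gather(
        llm.acomplete("Summarize retrieval-augmented generation in one line."),
        llm.acomplete("List three components of a RAG pipeline."),
    )
    print(summary.text)
    print(outline.text)

asyncio.run(main())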