Find answers from the community

sphex
Offline, last seen 2 months ago
Joined September 25, 2024
Plain Text
from pydantic import BaseModel

from llama_index.core.llms import ChatMessage

# `llm` and the CLASSIFICATION prompt template are defined elsewhere.
class classify(BaseModel):
    check_docs: int
    web_search: int
    mentioned_company: str


classify_llm = llm.as_structured_llm(output_cls=classify)
fprompt = CLASSIFICATION.format(
    user_input="How can I contact your firm?")
response = classify_llm.chat([ChatMessage(
    role="user", content=fprompt)])

# response.raw holds the parsed classify object
print(response.raw.check_docs)
print(response.raw.web_search)
print(response.raw.mentioned_company)

Is there a way to do this without setting up a structured LLM every time for a different output class?
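One option that may avoid rebuilding a structured LLM for every output class is llm.structured_predict, which takes the output class and prompt per call. A minimal sketch, assuming your llm exposes structured_predict and using a stand-in prompt in place of CLASSIFICATION:
Plain Text
from pydantic import BaseModel
from llama_index.core import PromptTemplate

class classify(BaseModel):
    check_docs: int
    web_search: int
    mentioned_company: str

# The template text here is a placeholder for your CLASSIFICATION prompt.
prompt = PromptTemplate("Classify the following user input: {user_input}")

# structured_predict handles the structured-output plumbing per call,
# so the same llm can be reused with different output classes.
result = llm.structured_predict(
    classify, prompt, user_input="How can I contact your firm?")

print(result.check_docs)
print(result.web_search)
print(result.mentioned_company)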
5 comments
Is there a way to implement ColPali using LlamaIndex? I couldn't find it in the docs.
2 comments
Hello, I am trying to use HTML-tag-style output formatting in the MultiModalLLMCompletionProgram, since it is much more reliable than JSON output. I can modify the output-format prompt, and even though I get the expected results, it raises an error saying it could not extract a JSON string from the output.

Plain Text
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.program import MultiModalLLMCompletionProgram

# FrameAnalysisOutput, prompt_template_str, mm_model, and image_docs
# are defined elsewhere.
format_string = '''
    Format your response using the following HTML-like tags:
    <people_count>[number]</people_count>
    <background>[description]</background>

    Do not include any other text or preamble.
    '''

output_parser = PydanticOutputParser(
    output_cls=FrameAnalysisOutput, pydantic_format_tmpl=format_string)

llm_program = MultiModalLLMCompletionProgram.from_defaults(
    output_parser=output_parser,
    prompt_template_str=prompt_template_str,
    multi_modal_llm=mm_model,
    image_documents=image_docs,
    verbose=True,
)


Plain Text
Starting MultiModalLLMCompletionProgram...
> Raw output: <people_count>2</people_count>
<background>A person sitting on a couch, covered by a pink blanket. They are positioned against white striped curtains.</background>
Error during frame analysis: Could not extract json string from output: <people_count>2</people_count>
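Since PydanticOutputParser always tries to pull a JSON blob out of the completion, one workaround is a custom output parser that reads the HTML-like tags directly. This is a rough sketch only: the FrameAnalysisOutput fields are guessed from the tags, and the BaseOutputParser import path is an assumption.
Plain Text
import re

from pydantic import BaseModel
from llama_index.core.types import BaseOutputParser  # assumed import path

class FrameAnalysisOutput(BaseModel):
    # Fields guessed from the <people_count> and <background> tags.
    people_count: int
    background: str

class TagOutputParser(BaseOutputParser):
    """Parses the HTML-like tags instead of expecting JSON."""

    def parse(self, output: str) -> FrameAnalysisOutput:
        def grab(tag: str) -> str:
            match = re.search(rf"<{tag}>(.*?)</{tag}>", output, re.DOTALL)
            if match is None:
                raise ValueError(f"Missing <{tag}> tag in: {output}")
            return match.group(1).strip()

        return FrameAnalysisOutput(
            people_count=int(grab("people_count")),
            background=grab("background"),
        )

    def format(self, query: str) -> str:
        # Append the tag-formatting instructions to the prompt.
        return query + "\n" + format_string

Passing output_parser=TagOutputParser() to MultiModalLLMCompletionProgram.from_defaults should then skip the JSON extraction step entirely.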
9 comments
Hello, I am trying to run a multimodal model using Ollama (minicpm-v). I want to run this model in parallel to process the same query over multiple images at the same time. Is this possible? I know that Ollama has some concurrency parameters for running multiple models, but I couldn't get it to work. I tried the "Parallel Execution of Same Event Example" cookbook workflow, but I failed and got this error: Error during frame analysis: Ollama does not support async completion.
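Since the Ollama integration raises on async completion, one workaround is to fan the synchronous calls out over a thread pool instead of using the async workflow. A sketch under the assumption that mm_model, query, and image_docs match the earlier snippets and that mm_model.complete accepts image_documents:
Plain Text
from concurrent.futures import ThreadPoolExecutor

def analyze(image_doc):
    # Each thread issues its own blocking request to the Ollama server.
    return mm_model.complete(query, image_documents=[image_doc])

# Run the same query over several images at once without async support.
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(analyze, image_docs))

for response in responses:
    print(response.text)

Whether the requests actually run in parallel on the server side still depends on Ollama's own concurrency settings (e.g. OLLAMA_NUM_PARALLEL).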
11 comments