When using a query pipeline, is there a way to save intermediate outputs to a local folder?

Please stand by @node0 as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
Hmm, I don't know enough to give you a confident answer yet. The extracts provided do not mention a way to save intermediate outputs of a query pipeline to a local folder. However, LlamaIndex does support streaming outputs, which means you can process each output as it becomes available.

Here's an example of how to use streaming with LlamaIndex:

Python
from llama_index.core import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)
prompt_str2 = """\
Here's some text:

{text}

Can you rewrite this with a summary of each movie?
"""
prompt_tmpl2 = PromptTemplate(prompt_str2)
llm = OpenAI(model="gpt-3.5-turbo")
# Wrap the LLM as a query component with streaming enabled
llm_c = llm.as_query_component(streaming=True)

# Chain: prompt 1 -> LLM (streamed) -> prompt 2 -> LLM (streamed)
p = QueryPipeline(
    chain=[prompt_tmpl, llm_c, prompt_tmpl2, llm_c], verbose=True
)

# Run the pipeline; the final component streams its output token by token
output = p.run(movie_name="The Dark Knight")
for o in output:
    print(o.delta, end="")

In this example, the outputs are printed as they become available. You could modify this code to save each output to a local folder instead of printing it, for example by writing each chunk to a file as it streams (see the sketch below). However, this would be a custom solution and not a built-in feature of LlamaIndex.
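As a rough illustration, here is a minimal sketch of that custom approach. It reuses the pipeline p from the example above; the folder name pipeline_outputs and the file name related_movies.txt are placeholders chosen for this sketch, not anything provided by LlamaIndex.

Python
from pathlib import Path

# Hypothetical local folder for saving outputs (not a LlamaIndex feature)
output_dir = Path("pipeline_outputs")
output_dir.mkdir(exist_ok=True)

output = p.run(movie_name="The Dark Knight")
with open(output_dir / "related_movies.txt", "w") as f:
    for o in output:
        print(o.delta, end="")  # still stream to the console
        f.write(o.delta)        # ...and append each chunk to the file

The same pattern applies to any step in the chain: wherever you can iterate over a component's output, you can write it to disk as you go.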

For more information, you may want to check the LlamaIndex documentation.
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------