----------------------
Hmm, I don't know enough to give you a confident answer yet. The extracts provided do not mention a way to save intermediate outputs of a query pipeline to a local folder. However, LlamaIndex does support streaming outputs, which lets you process each chunk of the output as it becomes available.
Here's an example of how to use streaming with LlamaIndex:
# Imports below assume a recent LlamaIndex release (the llama_index.core package layout)
from llama_index.core import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

# First prompt: generate a list of movies related to the input movie
prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)

# Second prompt: rewrite the generated list with a summary of each movie
prompt_str2 = """\
Here's some text:

{text}

Can you rewrite this with a summary of each movie?
"""
prompt_tmpl2 = PromptTemplate(prompt_str2)

llm = OpenAI(model="gpt-3.5-turbo")
llm_c = llm.as_query_component(streaming=True)

# Chain the two prompts with the streaming LLM component
p = QueryPipeline(
    chain=[prompt_tmpl, llm_c, prompt_tmpl2, llm_c], verbose=True
)
output = p.run(movie_name="The Dark Knight")

# The final output is a streaming response; print each delta as it arrives
for o in output:
    print(o.delta, end="")
In this example, the outputs are printed as they become available. You could modify this code to save each output to a local folder instead of printing it. However, this would be a custom solution and not a built-in feature of LlamaIndex.
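As a rough sketch of what that custom solution could look like (this is not a built-in LlamaIndex feature, and the folder and file names below are just placeholders), you could collect the streamed deltas and append them to a file in a local folder while still printing them:

import os

# Hypothetical helper, not part of LlamaIndex: stream the pipeline output
# and write each delta to a text file in a local folder as it arrives.
def save_streaming_output(output, folder="pipeline_outputs", filename="final_output.txt"):
    os.makedirs(folder, exist_ok=True)  # create the folder if it doesn't exist
    path = os.path.join(folder, filename)
    with open(path, "w", encoding="utf-8") as f:
        for o in output:
            delta = o.delta or ""  # guard against empty/None chunks
            print(delta, end="")   # still stream to the console
            f.write(delta)         # and persist the chunk to disk
    return path

# Usage with the pipeline `p` from the example above
output = p.run(movie_name="The Dark Knight")
save_streaming_output(output)

Note that this only saves the streamed final output of the chain; it does not capture the intermediate outputs between the pipeline steps.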
For more information, you may want to check the LlamaIndex documentation.
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------