I want to take some text and reformat it using an LLM, what is the best way to do this?

@kapa.ai I want to take some text and reformat it using an LLM. What is the best way to do this, should I use the acomplete method?
Please stand by @cmosguy as I search the knowledge base. I normally take about 15 seconds or so. :wave:
----------------------
Based on the knowledge sources provided, I can offer guidance on using an LLM to reformat text with LlamaIndex. The acomplete method you mentioned is indeed one way to accomplish this task, but there are a few options available depending on your specific needs.

For text reformatting using an LLM, you can use either the complete method for synchronous operations or the acomplete method for asynchronous operations. Here's a brief overview:

  1. Synchronous method:
    You can use the complete method to send a prompt to the LLM and get a response. This is suitable for single, non-streaming text completions.
Example:
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI()
response = llm.complete("Reformat the following text: [Your text here]")
print(response.text)
```


  2. Asynchronous method:
    The acomplete method is useful when you want to perform asynchronous operations, which can be beneficial for handling multiple requests or integrating with asynchronous workflows.
Example:
```python
import asyncio
from llama_index.llms.openai import OpenAI

async def reformat_text():
    llm = OpenAI()
    response = await llm.acomplete("Reformat the following text: [Your text here]")
    print(response.text)

asyncio.run(reformat_text())
```


Both methods are valid approaches, and the choice between them depends on your specific use case and whether you need asynchronous capabilities.

If you're working with multiple text chunks or need to process them in parallel, the asynchronous acomplete method might be more efficient. However, for simple, one-off text reformatting tasks, the synchronous complete method is straightforward and sufficient.
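To illustrate the parallel case, here is a minimal sketch of fanning out several chunks with `asyncio.gather`. The `reformat_chunk` coroutine is a hypothetical stand-in: in real use its body would be `await llm.acomplete(...)` as in the examples above; here it just upper-cases each chunk so the sketch runs without an API key.

```python
import asyncio

async def reformat_chunk(chunk: str) -> str:
    # Stand-in for a real LLM call such as:
    #     resp = await llm.acomplete(f"Reformat the following text: {chunk}")
    #     return resp.text
    return chunk.upper()

async def reformat_all(chunks: list[str]) -> list[str]:
    # gather() runs all coroutines concurrently and returns results
    # in the same order as the input chunks
    return await asyncio.gather(*(reformat_chunk(c) for c in chunks))

chunks = ["first chunk", "second chunk", "third chunk"]
results = asyncio.run(reformat_all(chunks))
print(results)
```

Because `gather` preserves input order, you can zip the results back to their source chunks without extra bookkeeping.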
Remember to properly set up your LLM and any necessary API keys before using these methods. The examples above use OpenAI, but LlamaIndex supports various LLM providers, so you can choose the one that best fits your needs.
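As a small aid for that setup step, here is a hedged sketch of checking for the key up front. The environment variable name `OPENAI_API_KEY` is the one the OpenAI provider conventionally reads; the helper itself is hypothetical, not part of LlamaIndex.

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    # Hypothetical helper: fetch the provider key from the environment
    # and fail with a clear message instead of a confusing error later.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before constructing the LLM")
    return key

# Usage (hypothetical): llm = OpenAI(api_key=require_api_key())
```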

For more detailed information on using LLMs with LlamaIndex, you can refer to the LlamaIndex documentation on Using LLMs.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how do I get the string of the response from the reformatted text from the acomplete method?
Let me check the knowledge base for you again. One moment! :books:
----------------------
To get the string of the response from the reformatted text using the acomplete method, you can access the text attribute of the CompletionResponse object returned by the method. Here's how you can do it:

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI()
# acomplete must be awaited inside an async function (or a running event loop)
resp = await llm.acomplete("Reformat the following text: [Your text here]")
reformatted_text = resp.text
print(reformatted_text)
```


This approach is consistent across different LLM implementations in LlamaIndex. For example, the same pattern works for other LLMs like Cohere, Anthropic, or custom implementations:

  1. For Cohere:
```python
from llama_index.llms.cohere import Cohere

llm = Cohere(model="command", api_key=api_key)
resp = await llm.acomplete("Reformat the following text: [Your text here]")
reformatted_text = resp.text
print(reformatted_text)
```
[Source](<https://docs.llamaindex.ai/en/latest/examples/llm/cohere/#async>)

  2. For Anthropic:
```python
from llama_index.multi_modal_llms.anthropic import AnthropicMultiModal

llm = AnthropicMultiModal()
resp = await llm.acomplete("Reformat the following text: [Your text here]", image_documents=[])
reformatted_text = resp.text
print(reformatted_text)
```
[Source](<https://docs.llamaindex.ai/en/latest/api_reference/multi_modal_llms/anthropic/#llama_index.multi_modal_llms.anthropic.AnthropicMultiModal>)

The CompletionResponse object returned by acomplete contains the reformatted text in its text attribute. By accessing resp.text, you get the string representation of the LLM's response, which in this case is the reformatted text.
Remember that acomplete is an asynchronous method, so you need to use it within an async function or with an event loop.
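That reminder can be sketched both ways: in a plain script you start the event loop yourself with `asyncio.run`, while in a Jupyter notebook a loop is already running so you can `await` at the top level. The `main` coroutine below is a stand-in that returns a fixed string so the sketch runs without an LLM.

```python
import asyncio

async def main() -> str:
    # In real use this would be:
    #     resp = await llm.acomplete("Reformat the following text: ...")
    #     return resp.text
    return "reformatted"

# In a script: start an event loop explicitly
result = asyncio.run(main())
print(result)

# In a notebook, the kernel's loop is already running, so use instead:
#     result = await main()
```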
----------------------