Hi Team, how do I achieve the following LangChain code in LlamaIndex?

Python
final_prompt = ChatPromptTemplate.from_template(context_prompt)

chain = final_prompt | MODEL.llm | StrOutputParser()

response = chain.invoke(
    {"input": question, "history": "\n".join(memory)},
    config={"callbacks": [MODEL.callback]},
)
For just a string prompt

Python
from llama_index.core.prompts import PromptTemplate
from llama_index.llms.openai import OpenAI

prompt = PromptTemplate("Some topic {topic}")

formatted_prompt = prompt.format(topic="ghosts")

llm = OpenAI(model="gpt-4o-mini")
response = llm.complete(formatted_prompt)
print(str(response))


For a prompt with chat messages

Python
from llama_index.core.prompts import ChatPromptTemplate
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

prompt = ChatPromptTemplate.from_messages([
  ChatMessage(role="system", content="Talk like a pirate."),
  ChatMessage(role="user", content="Tell me a joke about {topic}.")
])

formatted_messages = prompt.format_messages(topic="Dogs")

llm = OpenAI(model="gpt-4o-mini")
response = llm.chat(formatted_messages)
print(response.message.content)
Thanks @Logan M, however how do I pass the question along with the history and callback, like below?

Python
response = chain.invoke(
    {"input": question, "history": "\n".join(memory)},
    config={"callbacks": [MODEL.callback]},
)
I don't know what your callback is doing, so I can't translate that part directly.
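
If it's just for tracing or debugging, the closest analogue in LlamaIndex is a CallbackManager. A minimal sketch, assuming a built-in debug handler is enough (swap in whatever your MODEL.callback actually does):

Python
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler

# LlamaDebugHandler is just a built-in event logger; substitute
# whatever your MODEL.callback was actually doing
Settings.callback_manager = CallbackManager([LlamaDebugHandler()])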

If you want memory, you'll have to manage it yourself:

Python
from llama_index.core.llms import ChatMessage
from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(llm=llm)
memory.put(ChatMessage(role="user", content="some_message"))

# get the latest buffer (trimmed to fit the token limit)
messages = memory.get()

# get the full history
all_messages = memory.get_all()

# combine memory with your prompt
new_message = ChatMessage(role="user", content=formatted_prompt)
messages = memory.get() + [new_message]

response = llm.chat(messages)

# record the new turn in memory
memory.put(new_message)
memory.put(response.message)
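
Putting those pieces together, a rough equivalent of your chain.run(...) call might look like this. It's only a sketch: it assumes your context_prompt has {history} and {input} placeholders, as in your LangChain template, and that question holds the user's message.

Python
from llama_index.core.prompts import PromptTemplate

# context_prompt and question come from your own code
prompt = PromptTemplate(context_prompt)

# flatten the chat history into a string for the {history} slot
history_str = "\n".join(str(m) for m in memory.get())
formatted_prompt = prompt.format(history=history_str, input=question)

response = llm.complete(formatted_prompt)
print(str(response))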
@Logan M - okay got it, thanks again. How about the following if I want to use a provider other than OpenAI? llm = OpenAI(model="gpt-4o-mini")
Then you can install and use any other LLM. For example:

pip install llama-index-llms-ollama

Python
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3.1:latest", request_timeout=120)
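
The same pattern works for any other provider, e.g. Anthropic (a sketch; use whichever model name your account actually has access to):

Python
# pip install llama-index-llms-anthropic
from llama_index.llms.anthropic import Anthropic

# the model name here is illustrative
llm = Anthropic(model="claude-3-5-sonnet-latest")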


We should have a notebook in our docs for every LLM we support.