Updated 3 months ago

Hi all, I'm new to LlamaIndex and looking to set up a chat functionality with the following features:

  • Uses gpt-4o as the LLM
  • Streams its responses
  • Has a system prompt or document that defines its behavior (refining the user's search query)
  • Includes conversation memory/history recall
  • Can trigger a custom component for specific company or person queries, which activates an already existing search pipeline (with the refined query as input)
Could someone help me with a basic Python setup to get me started? Thanks!
2 comments
Hi, I would suggest that you go through the docs: https://docs.llamaindex.ai/en/stable/
Start building your project, and if you face any issues, I'll be happy to help!
Hi WhiteFang, thanks. I've been digging deeper into the documentation today, but I couldn't figure out how to set up a streaming question-answer chat using OpenAI's gpt-4o model, one that has one or more tools at hand and knows when to trigger them during the conversation.

In the documentation, the chat engine assumes an index, and an index assumes nodes. But what I need is a bit simpler: a basic chat without any knowledge base, with only a system prompt. The key, however, is that it should know when to trigger an agent (like when ChatGPT triggers DALL-E because you ask it to generate an image). I don't know how to get started with such a chatbot. The query pipeline that should be triggered is already working.

Any more specific tips or snippets that might help?