
Martin
Joined September 25, 2024
Martin

Workflow

Hi all, I created an agent that makes an external API call and retrieves JSON data of contact persons. I run into problems in my workflow when this API output is passed back to the LLM.

Is there a specific way I can have the LLM handle the returned JSON without any complications?
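
Roughly what I'm doing, as a simplified sketch (the contact data, model name, and step names here are placeholders, not my real code):

```python
import json

from llama_index.core.workflow import Event, StartEvent, StopEvent, Workflow, step
from llama_index.llms.openai import OpenAI


class ContactsEvent(Event):
    # raw JSON string returned by the external API
    contacts_json: str


class ContactsWorkflow(Workflow):
    @step
    async def fetch_contacts(self, ev: StartEvent) -> ContactsEvent:
        # stand-in for the real external API call
        data = [{"name": "Jane Doe", "email": "jane@example.com"}]
        return ContactsEvent(contacts_json=json.dumps(data))

    @step
    async def handle_contacts(self, ev: ContactsEvent) -> StopEvent:
        llm = OpenAI(model="gpt-4o")
        # this hand-off of the API output back to the LLM is where it breaks
        response = await llm.acomplete(
            "These contact persons were returned as JSON:\n"
            f"{ev.contacts_json}\n"
            "Summarize who they are."
        )
        return StopEvent(result=str(response))
```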
1 comment
Martin

Google Colab

When installing: pip install --upgrade llama-index-core llama-index-llms-openai llama-index-utils-workflow

I do not have llama_index.core.workflow available
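
For reference, this is the failing import, simplified (run in a fresh Colab cell right after the pip install above):

```python
# after: !pip install --upgrade llama-index-core llama-index-llms-openai llama-index-utils-workflow
# this import is what I don't have available:
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step
```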

Do I need to install anything else @Logan M?

(im following: https://colab.research.google.com/drive/1GhF8uBC2LrnYf195CcTe_e5K8Ai6Z4ta#scrollTo=3cBku4_C0CQk)

Thanks!
5 comments
Martin
Hi all, I'm new to LlamaIndex and looking to set up chat functionality with the following features:

  • Uses gpt-4o as the LLM
  • Streams
  • Has a system prompt or document to define its functionality (perfecting the user's search query)
  • Includes conversation memory/history recall
  • Can trigger a custom component for specific company or person queries, which activates an already existing search pipeline (with this perfected query as input)
Could someone help me with a basic Python setup to get me started? Thanks!
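
Roughly the shape I have in mind, as a sketch (assuming the llama-index-agent-openai package; company_search is a placeholder for the existing search pipeline):

```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def company_search(query: str) -> str:
    """Look up a specific company or person."""
    # placeholder: feed the perfected query into the existing search pipeline
    return f"search results for {query!r}"


agent = OpenAIAgent.from_tools(
    tools=[FunctionTool.from_defaults(fn=company_search)],
    llm=OpenAI(model="gpt-4o"),
    # conversation memory/history recall
    memory=ChatMemoryBuffer.from_defaults(token_limit=4000),
    # system prompt defining the query-perfecting behaviour
    system_prompt=(
        "Rewrite the user's request into a precise search query, and call "
        "company_search for company or person lookups."
    ),
)

# streaming output
response = agent.stream_chat("Who works at Acme Corp?")
for token in response.response_gen:
    print(token, end="")
```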
2 comments
Martin

Generator

I'm trying to have the LLM stream the output but get this message:
<generator object llm_chat_callback.<locals>.wrap.<locals>.wrapped_llm_chat.<locals>.wrapped_gen at 0x766381b12340>

How should I have done this properly? (Without the selected code in the script, the code runs fine.)
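
For context, a simplified version of what I did, plus what I suspect it should be instead (assuming stream_chat from llama-index-llms-openai):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o")
messages = [ChatMessage(role="user", content="Tell me a joke.")]

# what I did: printing the stream object just shows the generator repr
stream = llm.stream_chat(messages)
print(stream)  # <generator object ... wrapped_gen at 0x...>

# what I suspect is needed: iterate the generator and print each delta
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="")
```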
3 comments