llm.complete() actually calls the chat endpoint, and chat engines call llm.chat()
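A toy illustration of that pattern (hypothetical class name, not LlamaIndex's actual implementation): for chat models, complete() is just a convenience wrapper that packages the prompt as a single user message and routes it through the chat endpoint.

```python
class ChatLLM:
    """Toy LLM sketch: complete() delegates to the chat endpoint."""

    def chat(self, messages):
        # stand-in for a real chat-endpoint call
        return f"chat-reply to {messages[-1]['content']}"

    def complete(self, prompt):
        # complete() wraps the prompt as one user message and calls chat(),
        # mirroring the behavior described above
        return self.chat([{"role": "user", "content": prompt}])
```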
I didn't notice if import phoenix had a way of detecting whether it's running already or not, but that would be useful.

There is! You can always retrieve the active session via px.active_session()
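A minimal sketch of the launch-only-if-needed pattern, assuming Phoenix's px.active_session() (returns the current session or None) and px.launch_app(); ensure_session is a hypothetical helper name, and it takes the imported module as an argument so the logic is easy to test:

```python
def ensure_session(px):
    """Return the active Phoenix session, launching one only if needed.

    Pass the imported phoenix module: import phoenix as px; ensure_session(px).
    """
    session = px.active_session()  # Session instance, or None if nothing is running
    if session is None:
        session = px.launch_app()  # start Phoenix only when not already running
    return session
```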
Wish it would show me, in addition to what it already does, the actual LLM endpoint call