Would GPT Index be appropriate for this? And if so, how would I provide the previous conversational responses as continuing context to the LLM, as well as enable it to make these functional calls internally within the GPT Index architecture?
Look into LangChain (which GPT-Index is built on top of)
LangChain is used to recursively call an LLM for answers (sounds like you want an Agent with some Memory)
GPT-Index is used to manage getting and passing large amounts of data to an LLM
So if you wanted to have a conversation about a book, you'd use GPT-Index to generate an Index of the book, and then give a LangChain Agent a Tool that allows it to query that Index
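Very roughly, that combination looks like the sketch below. It assumes the early gpt_index/LangChain-style APIs (GPTSimpleVectorIndex, initialize_agent); exact class and method names have shifted between releases, and "./book" is just a hypothetical folder of text files.

```python
from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI

# Build an index over the book with GPT-Index
documents = SimpleDirectoryReader("./book").load_data()  # hypothetical folder of .txt chapters
index = GPTSimpleVectorIndex.from_documents(documents)

# Expose the index to a LangChain Agent as a Tool it can call
tools = [
    Tool(
        name="Book Index",
        func=lambda q: str(index.query(q)),
        description="Answers questions about the contents of the book.",
    )
]

agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent="zero-shot-react-description",
    verbose=True,
)
agent.run("Who betrays the protagonist, and in which chapter?")
```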
Have a look at @Krrish@LiteLLM.ai's blog below - I think that covers most of what you need; you'll just need to add in the Memory for the LLM agent
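For the Memory part, the usual pattern (again assuming that era's LangChain API) is to switch to the conversational agent type and hand it a ConversationBufferMemory, so earlier turns are replayed into each new prompt:

```python
from langchain.memory import ConversationBufferMemory

# Keep the running conversation and feed it back to the agent on every call
memory = ConversationBufferMemory(memory_key="chat_history")

chat_agent = initialize_agent(
    tools,                       # same index-backed Tool list as above
    OpenAI(temperature=0),
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)

chat_agent.run("Who is the main character?")
chat_agent.run("What motivates them?")  # "them" resolves via the stored chat history
```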