Is the llama_index networks component (also) designed to set up a client-server model, e.g. a Streamlit app that talks to a LlamaIndex service to answer queries?
I want to separate the web service more from the backend, so that, for example, the web service doesn't need access to secrets (credentials for the LLM, the vector database, and so on).
The main reason is that even if llama_index provides a server-client mode, I will have custom logic around the usual llama_index API that I also want to keep out of the web service part, simply because it isn't web-service related.
Sounds like you've given it some good thought and landed on something that makes sense for you -- kudos!
Do you think you could give feedback on why wrapping your query engine in a ContributorService, which exposes it behind a FastAPI REST app, and then modifying that app to your needs, would not work?
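For concreteness, here's a minimal sketch of that pattern as I understand it from the llama-index-networks README. Treat the import paths, the ContributorServiceSettings name, and the `./data` directory as assumptions to check against your installed version:

```python
# contributor.py -- backend process: holds the secrets (LLM key, vector DB
# credentials) plus any custom query logic, away from the web service.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.networks import ContributorService, ContributorServiceSettings

# Build the query engine; any non-webservice logic lives here.
documents = SimpleDirectoryReader("./data").load_data()  # hypothetical data dir
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap it in a ContributorService, which exposes it as a FastAPI app.
settings = ContributorServiceSettings()
service = ContributorService(config=settings, query_engine=query_engine)
app = service.app  # a regular FastAPI instance

# Because `app` is plain FastAPI, you can add your own routes or middleware
# here before serving it, e.g.: uvicorn contributor:app --port 8000
```

Your Streamlit front end would then only need the service's URL; the LLM and vector-database secrets stay on the contributor side.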