I think it makes sense! Although I'm not sure the fine-tuning is necessary 🤔 unless you're trying to embed new knowledge into a model... but I'm not sure how well that works.
If you want to record the exact inputs/outputs sent to OpenAI, you'll want to use the LlamaLogger (since llama_index takes your query and pairs it with various prompt templates depending on the situation)
Check it out at the bottom of the notebook
https://github.com/jerryjliu/llama_index/blob/main/examples/vector_indices/SimpleIndexDemo.ipynb
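A minimal sketch of what that might look like, assuming the `ServiceContext`/`LlamaLogger` API from that era of llama_index (the `data` directory and the query string are placeholders — this needs an `OPENAI_API_KEY` set to actually run):

```python
from llama_index import GPTSimpleVectorIndex, ServiceContext, SimpleDirectoryReader
from llama_index.logger import LlamaLogger

# Attach a LlamaLogger via the service context so every LLM call is recorded
llama_logger = LlamaLogger()
service_context = ServiceContext.from_defaults(llama_logger=llama_logger)

# Build the index from a local "data" folder (placeholder path)
documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)

response = index.query("What did the author do growing up?")

# Each log entry includes the fully templated prompt that was sent to OpenAI
# and the raw completion that came back
for log in llama_logger.get_logs():
    print(log)
```

The key point is that the prompt you see in the logs is the template-wrapped version (query + retrieved context stuffed into a QA/refine template), not just the raw query you passed in.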