I have deployed a Python-based LlamaIndex application in AWS Lambda to act as the backend for a Slack bot. The Lambda init is taking more than 10 seconds, which causes Slack to time out and re-send the message.
I suspect the delay is due to this line in the logs: "[nltk_data] Downloading package punkt to /tmp/llama_index."
Is there a way to package the punkt model as part of the Docker build process and tell llama_index to use it instead of downloading it at runtime?
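
Something like this is what I had in mind, but I'm not sure whether llama_index will actually look at `NLTK_DATA` rather than its own cache directory (the `requirements.txt` / `app.py` names and the `/var/task/nltk_data` path are just placeholders from my setup):

```dockerfile
FROM public.ecr.aws/lambda/python:3.11

COPY requirements.txt .
RUN pip install -r requirements.txt

# Download punkt into the image at build time instead of at cold start
RUN python -m nltk.downloader -d /var/task/nltk_data punkt

# Hoping NLTK (and therefore llama_index) picks up the baked-in copy from here
ENV NLTK_DATA=/var/task/nltk_data

COPY app.py .
CMD ["app.handler"]
```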