The post explains that the community member's employer is paranoid about certain technologies, even though the member personally doesn't see them as a risk. The comments discuss using LlamaIndex with local models, noting that open-source models currently work well only for basic use cases, and that more complex query engines may not perform as well on them. The comments also raise the resource requirements of running local models, suggesting that a cheap AWS instance is unlikely to be sufficient. When asked about the best local model for basic use cases, community members recommend the 7B Falcon model, noting that models beyond the 7-13B range have much steeper hardware requirements, although 4-bit quantization can reduce the memory footprint; a sketch of this setup follows below.
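To make the recommendation concrete, here is a minimal sketch of the kind of setup the thread describes: LlamaIndex driving a locally hosted, 4-bit quantized Falcon-7B for a basic query use case. Exact import paths and class names vary across llama-index releases, and the directory path and question here are placeholders, so treat this as illustrative rather than definitive.

```python
# Sketch: a basic local RAG pipeline with LlamaIndex and Falcon-7B,
# 4-bit quantized via bitsandbytes to fit on a single modest GPU.
import torch
from transformers import BitsAndBytesConfig
from llama_index import SimpleDirectoryReader, ServiceContext, VectorStoreIndex
from llama_index.llms import HuggingFaceLLM

# 4-bit quantization roughly quarters the memory needed for the weights,
# which is what makes a 7B model practical without expensive hardware.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

llm = HuggingFaceLLM(
    model_name="tiiuae/falcon-7b-instruct",
    tokenizer_name="tiiuae/falcon-7b-instruct",
    context_window=2048,
    max_new_tokens=256,
    device_map="auto",
    model_kwargs={
        "quantization_config": quant_config,
        "trust_remote_code": True,  # Falcon requires custom model code
    },
)

# embed_model="local" keeps embeddings on-device too, so no data
# leaves the machine -- the point of the employer's constraint.
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")

# "./data" is a placeholder directory of documents to index.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# A plain query engine covers the "basic use case" discussed in the thread;
# fancier engines (sub-question, multi-step) lean harder on the LLM and are
# where the comments suggest local models start to fall short.
response = index.as_query_engine().query("What does the report conclude?")
print(response)
```

Even quantized, a 7B model generally wants a GPU with around 6-8 GB of VRAM (or generous CPU RAM and patience), which is consistent with the thread's warning that a cheap AWS instance won't cut it.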