Where would I need to look, or what would I need to modify, in the Python BasicChatEngine backend generated by the create-llama starter (run-llama/create-llama on GitHub) in order to make use of an embedding's metadata fields stored in a vector store?
Would I need to do anything specific for LlamaIndex to be aware of the metadata and use it when querying? For example, would I need to pass some kind of filter-fields object to `index = VectorStoreIndex.from_vector_store(store)`?