The post asks whether the in-memory vector store in LlamaIndex stores embeddings in Word2vec format. The comments indicate that it does not: embeddings are created with a different model, the bge-small embedding model loaded via Hugging Face. Community members also note that Word2vec is an older approach and ask what type of model the in-memory vector store actually uses.
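To make the distinction concrete, the store itself is separate from the model that produces the vectors: an in-memory vector store just holds dense vectors and retrieves by similarity, regardless of whether those vectors came from bge-small, Word2vec, or anything else. Below is a minimal, hypothetical sketch of that idea in plain Python — the class name and hand-written placeholder vectors are illustrative, not LlamaIndex internals:

```python
import math

# Minimal in-memory vector store sketch: it holds (text, vector) pairs
# and retrieves by cosine similarity. In a real pipeline the vectors
# would come from an embedding model (e.g. bge-small); the short
# hand-written vectors here are placeholders for illustration only.
class InMemoryVectorStore:
    def __init__(self):
        self._docs = []  # list of (text, vector) pairs kept in memory

    def add(self, text, vector):
        self._docs.append((text, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def query(self, vector, top_k=1):
        # Rank stored documents by cosine similarity to the query vector
        scored = sorted(self._docs, key=lambda d: self._cosine(vector, d[1]), reverse=True)
        return [text for text, _ in scored[:top_k]]

store = InMemoryVectorStore()
store.add("about cats", [1.0, 0.0, 0.1])
store.add("about cars", [0.0, 1.0, 0.1])
print(store.query([0.9, 0.1, 0.0]))  # → ['about cats']
```

The point of the sketch is that swapping the embedding model changes the vectors, not the store, which is why the store has no "Word2vec format" of its own.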