I.e. embedding_model.get_text_embedding(documents[0].text) returns a value; however, when I build an index with VectorStoreIndex.from_documents and query it via index.as_retriever().retrieve('foo')[0].embedding, the result is null/empty.
It's not that it's empty; llama-index just doesn't populate the embedding field of the node when retrieving (the embedding is stored separately from the node).
There could be a PR to attach the embedding to the node for the vector db you are using.
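The separation described above can be sketched with plain dicts (illustrative only, not llama-index internals): the store keeps embeddings keyed by node id, and retrieval hands back nodes without copying the embedding in unless something attaches it explicitly.

```python
# Toy model of a vector store that keeps embeddings separate from nodes.
# All names here are illustrative; this is not the llama-index API.
embedding_store = {"node-1": [0.1, 0.2, 0.3]}
nodes = [{"id": "node-1", "text": "foo", "embedding": None}]

def attach_embeddings(nodes, embedding_store):
    """Copy each node's stored embedding back onto the node dict."""
    for node in nodes:
        node["embedding"] = embedding_store.get(node["id"])
    return nodes

retrieved = attach_embeddings(nodes, embedding_store)
# retrieved[0]["embedding"] is now populated instead of None
```

This mirrors what such a PR would do: after similarity search, look up each hit's embedding in the store and set it on the returned node.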
```
result = self.index.as_retriever(similarity_top_k=10).retrieve(query)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/llama_index/indices/base_retriever.py", line 22, in retrieve
    return self._retrieve(str_or_query_bundle)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/llama_index/indices/vector_store/retrievers/retriever.py", line 81, in _retrieve
    if query_bundle.embedding is None and len(query_bundle.embedding_strs) > 0:
```
Wow - indeed this was the right pointer. I have no idea why inside docker the message is not a string (it indeed is a string in the local setup), but it is a chainlit message which needs to be unwrapped. @disiok thanks!
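A minimal sketch of the unwrapping fix described above, assuming the chainlit message object exposes its text via a content attribute (the helper name unwrap_query is hypothetical):

```python
def unwrap_query(message):
    """Return a plain string query whether we got a str or a message object."""
    if isinstance(message, str):
        return message
    # Under chainlit the handler receives a Message object rather than a
    # plain string; assume its text lives in a `content` attribute.
    return getattr(message, "content", str(message))
```

Passing the result of unwrap_query(message) into retrieve() keeps the same code working both locally (plain string) and inside the docker/chainlit setup.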