@Teemu one last question, in case you know the answer. I already tried the ask-kapa bot, but it wasn't sure. I'm not performing text generation at the moment; I'm only trying to run the indexing subroutine, using the code snippet below. As far as I can tell, no calls should be made to llama.cpp, yet the program still complains:
Could not import llama_cpp library.
The snippet:
from os import getenv

from langchain.embeddings import HuggingFaceEmbeddings
from llama_index import (
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    set_global_service_context,
)
from llama_index.vector_stores import OpensearchVectorClient, OpensearchVectorStore

# Set up the embedding model
embed_model = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2"
)
service_context = ServiceContext.from_defaults(embed_model=embed_model)
# optionally set a global service context
set_global_service_context(service_context)
# http endpoint for your cluster (opensearch required for vector index usage)
endpoint = getenv("OPENSEARCH_ENDPOINT", "http://localhost:9200")
# index to demonstrate the VectorStore impl
idx = getenv("OPENSEARCH_INDEX", "gpt-index-demo")
# load some sample data
documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data()
# OpensearchVectorClient stores text in this field by default
text_field = "content"
# OpensearchVectorClient stores embeddings in this field by default
embedding_field = "embedding"
# OpensearchVectorClient encapsulates logic for a
# single opensearch index with vector search enabled
# note: all-mpnet-base-v2 produces 768-dim embeddings, so dim must be 768 (not 1536)
client = OpensearchVectorClient(
    endpoint, idx, 768, embedding_field=embedding_field, text_field=text_field
)
vector_store = OpensearchVectorStore(client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# initialize an index using our sample data and the client we just created
index = VectorStoreIndex.from_documents(
    documents=documents,
    storage_context=storage_context,
    service_context=service_context,
)
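One thing I might try: my understanding (an assumption, not verified) is that ServiceContext.from_defaults resolves a default LLM when none is passed, and without an OpenAI key it falls back to a local llama.cpp model, which would explain the import error even though indexing never generates text. A minimal sketch, assuming llm=None is accepted here to skip LLM resolution:

```python
# Sketch under the assumption that ServiceContext.from_defaults accepts
# llm=None to skip constructing a default LLM (the step that appears to
# pull in llama_cpp when no OpenAI key is configured).
from langchain.embeddings import HuggingFaceEmbeddings
from llama_index import ServiceContext, set_global_service_context

embed_model = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2"
)
# llm=None: indexing only needs the embedding model, so no LLM is built
service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=None)
set_global_service_context(service_context)
```

If that works, the rest of the snippet (OpensearchVectorClient, StorageContext, VectorStoreIndex.from_documents) should run unchanged.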