
nbivs
I have used this to run a custom embedding search on my data. It is a direct copy from https://gpt-index.readthedocs.io/en/latest/how_to/customization/embeddings.html. When I test this on the Paul Graham essay, I get a response that is almost identical to the example from OpenAI. I am curious how I am getting a response at all: I am only specifying an embedding model and have not entered an OpenAI key. So how is LlamaIndex generating a response here, and what is the expected response?

from llama_index import VectorStoreIndex, SimpleDirectoryReader
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from llama_index import LangchainEmbedding, ServiceContext

# load the HF embedding model from langchain
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# load the documents and build the index
documents = SimpleDirectoryReader('../paul_graham_essay/data').load_data()
new_index = VectorStoreIndex.from_documents(
    documents,
    service_context=service_context,
)

# queries will use the same embed_model
query_engine = new_index.as_query_engine(
    verbose=True,
)
response = query_engine.query("<query_text>")
print(response)
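
For what it's worth, here is a minimal sketch of how I could check the retrieval step on its own, with no LLM involved at all. This assumes the same legacy llama_index retriever API as the snippet above (as_retriever / retrieve); the similarity_top_k value of 2 is just an illustration:

# Sketch: inspect what the embedding model retrieves,
# bypassing any LLM synthesis step entirely.
# similarity_top_k=2 is an arbitrary illustrative value.
retriever = new_index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve("<query_text>")
for node_with_score in nodes:
    # score is the embedding similarity; get_text() returns the chunk text
    print(node_with_score.score, node_with_score.node.get_text()[:200])

If the retrieved chunks look right, then the embedding side is working as configured, and any final answer text would have to be coming from whatever LLM the query engine falls back to.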