I have used this to run a custom embedding search on my data. It is a direct copy from https://gpt-index.readthedocs.io/en/latest/how_to/customization/embeddings.html. When I test it on the Paul Graham essay, I get a response that is almost identical to the example from OpenAI. I am curious how I am getting a response at all: I am only configuring an embedding model and have not entered an OpenAI key. So how is LlamaIndex generating a response here, and what is the expected response?

from llama_index import VectorStoreIndex, SimpleDirectoryReader, LangchainEmbedding, ServiceContext
from langchain.embeddings.huggingface import HuggingFaceEmbeddings

# load in HF embedding model from langchain
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# load index
documents = SimpleDirectoryReader('../paul_graham_essay/data').load_data()
new_index = VectorStoreIndex.from_documents(
    documents,
    service_context=service_context,
)

# query will use the same embed_model
query_engine = new_index.as_query_engine(
    verbose=True,
)
response = query_engine.query("<query_text>")
print(response)
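
(For context: only embed_model is overridden above, so ServiceContext.from_defaults still fills in its default LLM, OpenAI's, for response synthesis, and that client reads its key from the environment. A minimal check, assuming that default is in play:)

import os

# The HF embeddings run locally, but synthesis still goes through the default
# OpenAI LLM, which reads OPENAI_API_KEY from the environment; if this prints
# None, query() should fail at the synthesis step.
print(os.environ.get("OPENAI_API_KEY"))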
4 comments
Well, it's calling someone's OpenAI account lol
unless it had some responses cached or something?
If your OpenAI key is not present, it would crash when calling the query with this setup.
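(To isolate the part that genuinely runs locally, here is a minimal sketch, assuming the same index and llama_index version as above, where retrieved nodes expose .score and .node.get_text(). Retrieval only touches embed_model; it is as_query_engine() that additionally needs an LLM for synthesis:)

# Retrieval alone uses only the local HuggingFace embeddings -- no OpenAI key needed.
retriever = new_index.as_retriever()
nodes = retriever.retrieve("<query_text>")
for node_with_score in nodes:
    # print similarity score and a snippet of each retrieved chunk
    print(node_with_score.score, node_with_score.node.get_text()[:200])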
Thanks for your quick response