JohnnyT
Llama Index

Hello everyone,

I have read all the articles on the LlamaIndex website, and I apologize for asking what may seem like a noob question.
My goal is to perform question answering and semantic search over a large set of articles (tens of thousands).

OpenAI's cookbook (https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb) teaches us to:

  1. Turn articles into embeddings using the embedding API
  2. Turn our query into embeddings using the embedding API
  3. Compare the similarity between the two vectors
  4. Find the article that matches the query the most
  5. Inject that article into the prompt as context and send it to the completion/chat API
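For concreteness, steps 2–4 above can be sketched roughly like this. The toy 3-dimensional vectors are stand-ins; in practice each vector would come from the embedding API (e.g. `client.embeddings.create(...)`), and the article titles and numbers here are made up for illustration:

```python
# Sketch of the cookbook's retrieval steps (2-4) using cosine similarity.
# In practice the vectors come from OpenAI's embedding API, e.g.:
#   resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
# Here, toy 3-dimensional vectors stand in for real embeddings.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Step 1 (assumed already done): article embeddings, keyed by title.
article_embeddings = {
    "GPU programming": [0.9, 0.1, 0.0],
    "Sourdough baking": [0.0, 0.2, 0.9],
    "CUDA kernels": [0.5, 0.5, 0.2],
}

# Step 2 (assumed already done): the query embedding.
query_embedding = [0.85, 0.2, 0.05]

# Steps 3-4: rank articles by similarity to the query and pick the best match.
ranked = sorted(
    article_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
best_title, _ = ranked[0]
print(best_title)  # the best-matching article; its text gets injected into the prompt (step 5)
```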
My questions are:

  1. Can I think of LlamaIndex as a convenience wrapper for these steps?
  2. What is the main difference between following the steps provided by OpenAI and using LlamaIndex, if I want to achieve question answering or semantic search?
Initially, I thought LlamaIndex was a convenience wrapper for these steps, but it appears that LlamaIndex transforms my data into an index. I'm still not entirely sure how I can leverage this index data structure.
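As a rough illustration of what an index buys over re-running the pipeline ad hoc: embeddings are computed and stored once at build time, then reused across every query. The class below is a toy sketch, not LlamaIndex's actual implementation, and the articles and vectors are invented for the example:

```python
# A toy in-memory vector index, illustrating the "build once, query many times"
# idea behind index structures. This is NOT LlamaIndex's implementation; it is
# a minimal sketch, with hand-made 2-D vectors standing in for real embeddings.
import math

class ToyVectorIndex:
    def __init__(self):
        self._entries = []  # list of (text, vector) pairs, stored at build time

    def add(self, text, vector):
        # In a real system the vector would come from an embedding model,
        # computed once here rather than on every query.
        self._entries.append((text, vector))

    def query(self, query_vector, top_k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        # Rank stored entries against the query vector and return the best texts.
        ranked = sorted(self._entries, key=lambda e: cos(query_vector, e[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]

index = ToyVectorIndex()
index.add("Article about GPUs", [1.0, 0.0])
index.add("Article about bread", [0.0, 1.0])
print(index.query([0.9, 0.1]))  # -> ['Article about GPUs']
```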

Thank you all!
10 comments