xingxing
Joined September 25, 2024
How does embedding work in the context of an LLM? I understand that embedding 'digests' text into vectors that can be used for matching/search. But what role does the LLM play in response generation? Say I have a Python project 'digested' through embeddings (using OpenAI's embedding API, via LlamaIndex), but when I try to ask questions about the code, my project obviously doesn't contain all the info about Python.
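For context, the retrieval-augmented flow the question describes can be sketched roughly like this. Note this is a minimal, self-contained illustration: the `embed` function below is a toy stand-in for a real embedding model (such as OpenAI's embedding API), and the chunks, question, and prompt format are all hypothetical, not LlamaIndex's actual internals.

```python
import math

# Toy stand-in for an embedding model. Real embeddings are dense
# learned vectors; this hash-like version is only illustrative.
def embed(text: str) -> list[float]:
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product
    # is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# 1. Index time: split the project into chunks and embed each one.
chunks = [
    "def load_config(path): ...  # reads YAML settings",
    "class Worker: ...  # background job runner",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Query time: embed the question and retrieve the closest chunk(s).
question = "How is the config file loaded?"
q_vec = embed(question)
best_chunk, _ = max(index, key=lambda item: cosine(q_vec, item[1]))

# 3. Generation time: the retrieved chunks are pasted into the prompt,
#    and the LLM answers using both that context and its own general
#    Python knowledge. The embeddings only do retrieval; the LLM does
#    the actual answering.
prompt = f"Context:\n{best_chunk}\n\nQuestion: {question}\nAnswer:"
print(prompt)
```

The key point is step 3: your project's embeddings never need to contain "all the info about Python". They only select relevant snippets of your code, and the LLM combines those snippets with the Python knowledge already baked into its weights when it generates the answer.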