Hello,

Do you have to use the ChatGPT API just once to create a vector index file?

Or does it have to be done every time? Because that would be expensive 😄

I am building a chatbot with LangChain and need a knowledge base built from local documents instead of using the cloud.

edit: I think yes, you just generate it one time.
BUT do you upload/pass it every time (at every request/input of a user) to the ChatGPT API then?
Yes, you can. You can persist the data in LlamaIndex, and on the next run check whether the data is already present; if so, load it to avoid indexing again.

https://docs.llamaindex.ai/en/stable/getting_started/starter_example.html#storing-your-index
Thank you @WhiteFang_Jr. I just edited my text:

BUT do you upload/pass the data/JSON every time (at every request/input of a user) to the ChatGPT API after that? I think yes, right? Or maybe just at the beginning of the conversation, into the prompt?
No, you don't have to pass it to ChatGPT from your side. You can use chat engines from LlamaIndex, and they will take care of maintaining the context of the conversation.


https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/root.html
Thank you. So LlamaIndex has an NLP component to match the right questions to the user's input?
In a Q&A JSON, for example.
Yes. Basically, LlamaIndex uses a similarity algorithm (vector similarity search).


The flow, in simple terms, is like this:

  • you provide the docs
  • the docs are chunked into parts
  • vectors are created for these chunks
  • when you ask a query, the most relevant chunk gets picked up
  • the chunk + your query is passed to the LLM to generate the answer
All of this is handled by LlamaIndex.
Thank you very much! πŸ˜‰