Awesome thank you, yea I was just checking out your message above mine haha
So how is this index different from a fine-tuned model, say directly with OpenAI?
As for fine-tuning, llama-index does not do that. It simply sends user queries, along with relevant context, to an LLM using prompt templates.
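Roughly what that looks like under the hood (a pure-Python sketch, no llama-index needed — the keyword retriever and template names here are made up for illustration; real indexes use embeddings):

```python
import re

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real index ranks by embedding similarity."""
    q = tokenize(query)
    return sorted(chunks, key=lambda c: -len(q & tokenize(c)))[:top_k]

PROMPT_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context}\n"
    "---------------------\n"
    "Given the context, answer the question: {query}\n"
)

def build_prompt(query: str, chunks: list[str]) -> str:
    # Stuff the most relevant chunks into the template -- this prompt is
    # what actually gets sent to the LLM; no training happens anywhere.
    context = "\n".join(retrieve(query, chunks))
    return PROMPT_TEMPLATE.format(context=context, query=query)

docs = [
    "Our store is open 9am-5pm on weekdays.",
    "Refunds are processed within 14 days.",
    "The company was founded in 1999.",
]
print(build_prompt("When are you open?", docs))
```

The model stays frozen; only the prompt changes per query.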
cause man this is so much easier
ahhh ok, so it's using just a standard model, say "text-davinci-003", and just engineering the prompt?
cause from my understanding wouldn't you have to train a custom model?
Training is only for really specific use cases.
Generally, with LLMs like what OpenAI has, you can just ask the model to do something and it will follow your instructions. No fine-tuning needed 💪
I gotcha, I'm helping build a custom chatbot for a company that will be interacted with by thousands of customers a day, so I'm trying to find the most scalable option while still providing high-quality responses
interesting i'll check this out. Thank you so much for your help!
one last question if you will: is there a way to see the token_usage from a query?
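Yep — if I remember right, llama-index versions from that era expose something like `index.llm_predictor.last_token_usage` after a query, but double-check against your version. The general pattern is just reading the usage numbers the API sends back; here's a minimal sketch assuming an OpenAI-style `usage` dict (`fake_llm_call` is a stand-in, not a real API call):

```python
class TokenCounter:
    """Accumulates prompt/completion token counts across queries."""

    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, response: dict) -> None:
        # OpenAI-style responses carry a "usage" dict with token counts.
        usage = response.get("usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total(self) -> int:
        return self.prompt_tokens + self.completion_tokens

def fake_llm_call(prompt: str) -> dict:
    # Stand-in response; a real call would return the model's actual counts.
    return {
        "text": "answer",
        "usage": {"prompt_tokens": len(prompt.split()), "completion_tokens": 5},
    }

counter = TokenCounter()
counter.record(fake_llm_call("How do refunds work for annual plans?"))
print(counter.total)
```

Handy for estimating cost per query before you scale up.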
amazing!!! that's great to know. Man can't believe more people aren't using this lol
I'm doing exactly the same. I'm currently building a chatbot for a company's customer service. I already began and had some good results, but I want to improve it with all the new features and the new models
hell yea that's awesome! Are you fine-tuning or using a custom LLM with llama index?
I'm not fine-tuning. I'm using llama-index to index, store the data, and query the index. Then I just add a last layer with the OpenAI gpt-3.5-turbo model to monitor the answer, add memory, and make the model act as I wish.
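That "last layer with memory" pattern can be sketched like this (pure Python; `query_index` is a stand-in for a real llama-index query, and the system prompt wording is just illustrative):

```python
def query_index(question: str) -> str:
    # Stand-in: a real app would run the llama-index query here.
    return "Refunds are processed within 14 days."

def build_messages(history: list[dict], question: str) -> list[dict]:
    # Retrieved context goes into a system message, the chat history is
    # replayed, and the user's new question goes last -- this messages list
    # is what you'd pass to a chat model like gpt-3.5-turbo.
    context = query_index(question)
    system = {
        "role": "system",
        "content": f"Answer as a polite support agent. Use this context: {context}",
    }
    return [system, *history, {"role": "user", "content": question}]

history: list[dict] = []
messages = build_messages(history, "How long do refunds take?")
# messages is now ready to send to the chat completions endpoint;
# append the model's reply (and the user turn) to history for memory.
print(messages[0]["role"], len(messages))
```

Appending each exchange back onto `history` is what gives the bot its memory between turns.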
Amazing, I think this is the route I'm going as well. Creating and querying the index makes for much higher quality responses, in my opinion, than a fine-tuned model. I need to learn Django though to create the backend
@Logan M got your repo pulled down and set up the Flask backend along with Streamlit, however it's missing a secrets.toml file.
Is Streamlit used to build up the front-end, or should I go into the flask_react directory directly and run it with npm start?
Sorry, I am pretty clueless with this whole tech stack (Python & Flask) and haven't used Docker much.
I gotta say tho, you did an absolutely phenomenal job getting all of this set up. It was so easy to follow your steps in the README. Congrats, I will definitely share it amongst my friends!
Ah, the streamlit and flask_react folders are separate folders 💪
As in, separate projects, they aren't related or dependent on each other 😅 I made the react front-end from scratch
Meanwhile, streamlit provides an easy way to build a UI using python, but it's definitely more for experimentation and small tests compared to react