
Updated 4 months ago

Node, React, fine tuning

Hey everyone, I recently used the simpleCSV loader with LlamaIndex and I gotta say it was incredibly easy, and the results from just a simple query were astonishing given the little data I provided.

But how is this different than, say, fine-tuning a GPT-3 model directly? I did that with about 50 prompt/completion pairs, and the results were nowhere near as good as the custom-knowledge LlamaIndex ones. In fact, quite poor.

And the reason I wanted to fine-tune the model was so I could use it in a MERN application (which is what I'm familiar with). How can I access this model in my own requests, say with Node.js?

apologies if this is a dumb question lol
24 comments
The best way to use it with Node is a Python API (like Flask or FastAPI). I have a React + Flask example here: https://github.com/logan-markewich/llama_index_starter_pack
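To make the shape of that concrete, here is a minimal sketch of a Flask endpoint a Node/React frontend could call over HTTP. The `query_index` function and the `/query` route are stand-ins, not the starter pack's actual code or the real llama_index API:

```python
# Minimal sketch of a Flask backend a Node/React app can call over HTTP.
# query_index is a placeholder for a real LlamaIndex query.
from flask import Flask, jsonify, request

app = Flask(__name__)

def query_index(question: str) -> str:
    # Placeholder: a real app would load an index and run a query against it
    return f"(answer for: {question})"

@app.route("/query")
def query():
    question = request.args.get("text", "")
    if not question:
        return jsonify({"error": "missing ?text= parameter"}), 400
    return jsonify({"response": query_index(question)})

# To serve locally: app.run(port=5601), then from Node:
#   fetch("http://localhost:5601/query?text=...")
```

The Node side then just needs a `fetch`/`axios` GET against that URL, so the Python process stays a thin service behind the MERN app.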
Awesome, thank you! Yea, I was just checking out your message above mine haha
So how is this index different from a fine-tuned model, say directly with OpenAI?
As for fine-tuning, llama index does not do that. It simply sends user queries, along with relevant context, to an LLM using prompt templates.
cause man this is so much easier
ahhh ok, so it's using just a standard model, say "text-davinci-003", and just engineering the prompt?
cause from my understanding wouldn't you have to train a custom model?
Training is only for really specific use cases.

Generally, with LLMs like what OpenAI has, you can just ask the model to do something and it will follow your instructions. No fine-tuning needed 💪
I recommend checking out the docs, like this page explaining how things work under the hood. Very helpful!

https://gpt-index.readthedocs.io/en/latest/guides/index_guide.html
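The "retrieve relevant context, then fill a prompt template" pattern described above can be sketched roughly as follows. This is a toy illustration only: real indexes rank chunks with embeddings, not the word-overlap score used here, and the template text is an assumption, not llama_index's actual prompt:

```python
# Toy illustration of retrieval + prompt templating (no fine-tuning).
# A real index scores chunks with embeddings; word overlap is a stand-in.

PROMPT_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context}\n"
    "---------------------\n"
    "Given the context information, answer the query: {query}\n"
)

def retrieve(query: str, chunks: list[str], top_k: int = 1) -> list[str]:
    # Rank chunks by how many words they share with the query
    def overlap(chunk: str) -> int:
        return len(set(query.lower().split()) & set(chunk.lower().split()))
    return sorted(chunks, key=overlap, reverse=True)[:top_k]

def build_prompt(query: str, chunks: list[str]) -> str:
    context = "\n".join(retrieve(query, chunks))
    return PROMPT_TEMPLATE.format(context=context, query=query)
```

The filled-in prompt is then sent to a stock model like "text-davinci-003", which is why no training step is needed.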
I gotcha. I'm helping build a custom chatbot for a company that will be interacted with by thousands of customers a day, so I'm trying to find the most scalable option while providing high-quality responses
i'll check this out
Oh cool! Llama index is definitely a great choice then.

For chatbots, I'd also look into how llama index can integrate with langchain (either as a tool, or memory, or both)

https://github.com/jerryjliu/llama_index/blob/main/examples/langchain_demo/LangchainDemo.ipynb
interesting i'll check this out. Thank you so much for your help!
one last question if you will, is there a way to see the token_usage from a query?
It gets printed to the console, so you could redirect it to a file and parse it if you need it. Otherwise the logs are good for visual debugging

You can also predict token usage before running the actual query https://github.com/jerryjliu/llama_index/blob/main/examples/cost_analysis/TokenPredictor.ipynb
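For a very rough back-of-the-envelope version of that prediction (this is NOT llama_index's TokenPredictor, just a common heuristic): English text in OpenAI tokenizers averages around 4 characters per token, so you can ballpark prompt cost before sending anything:

```python
# Crude token estimate, assuming ~4 characters per token for English text.
# Useful only for ballpark cost planning; real predictors use the tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))

def estimate_query_tokens(prompt: str, expected_answer_chars: int = 1000) -> int:
    # Prompt tokens plus a guess at the completion length
    return estimate_tokens(prompt) + estimate_tokens("x" * expected_answer_chars)
```

The linked TokenPredictor notebook does this properly against the actual model, so prefer it for anything billing-sensitive.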
amazing!!! that's great to know. Man can't believe more people aren't using this lol
I'm doing exactly the same. I'm currently building a chatbot for a company's customer service. I already began and had some good results, however I want to improve it with all the new features and the new models
hell yea that's awesome! Are you fine-tuning or using a custom LLM with llama index?
I'm not fine-tuning. I'm using llama-index to index and store the data and query the index. Then I just add a last layer with the OpenAI gpt-3.5-turbo model to monitor the answer, add memory, and make the model act as I wish.
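That "last layer with memory" idea can be sketched as a thin wrapper that keeps recent turns and assembles the chat-format message list for gpt-3.5-turbo. The class and its structure are illustrative assumptions, not the commenter's actual code; only the `{"role": ..., "content": ...}` message shape is the real chat API format:

```python
# Sketch of a thin chat layer: system prompt + trimmed history + the new
# question with retrieved context. Sending to the model is left as a stub.

SYSTEM_PROMPT = "You are a helpful customer-service assistant."

class ChatSession:
    def __init__(self, max_turns: int = 5):
        self.history = []          # list of {"role", "content"} dicts
        self.max_turns = max_turns

    def build_messages(self, question: str, context: str) -> list[dict]:
        # Retrieved index context rides along inside the user turn
        user_turn = f"Context:\n{context}\n\nQuestion: {question}"
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]
        messages += self.history[-2 * self.max_turns:]   # trimmed memory
        messages.append({"role": "user", "content": user_turn})
        return messages

    def record(self, question: str, answer: str) -> None:
        self.history.append({"role": "user", "content": question})
        self.history.append({"role": "assistant", "content": answer})
```

The resulting message list is what gets passed to the chat completion call, with the system prompt doing the "act as I wish" part.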
Amazing, I think this is the route I'm going as well. Creating and querying the index produces much higher quality responses in my opinion than a fine-tuned model. I need to learn Django tho to create the backend
@Logan M got your repo pulled down and set up the Flask backend along with Streamlit, however it's missing a secrets.toml file.

Is Streamlit used to build up the front-end, or should I go into the flask_react directory directly and run it with npm start?

Sorry, I am pretty clueless with this whole tech stack (Python & Flask) and haven't used Docker much.

I gotta say tho, you did an absolutely phenomenal job getting all of this set up. It was so easy to follow your steps in the README. Congrats, I will definitely share it amongst my friends!
Ah, the streamlit and flask_react folders are separate folders 💪
As in, separate projects, they aren't related or dependent on each other 😅 I made the react front-end from scratch

Meanwhile, streamlit provides an easy way to build a UI using python, but it's definitely more for experimentation and small tests compared to react
You can comment out the mentions of the secret file and just hardcode the values for now, or you can read more about streamlit secrets here: https://docs.streamlit.io/streamlit-community-cloud/get-started/deploy-an-app/connect-to-data-sources/secrets-management
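For reference, a Streamlit secrets file lives at `.streamlit/secrets.toml` and holds plain key/value pairs that the app reads via `st.secrets`; the key name below is only an example, not one the repo necessarily uses:

```toml
# .streamlit/secrets.toml — key name is illustrative
openai_api_key = "sk-..."
```

In the app it would then be read as `st.secrets["openai_api_key"]`.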