the very first one you built, I believe - the basic query bot with GPT-3 that you could ask stuff about a book
I've literally not made any changes since back when it was called gpt-index
Heh yes! If you pull in the latest changes, and upgrade your versions to match the requirements.txt, it should work
If you have an existing saved index.json file, you should probably delete it and start fresh
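something like this (assuming your clone tracks the default branch and the saved index lives at the repo root - adjust the path if yours is elsewhere):

```shell
# pull the latest llama-index example changes
git pull

# upgrade installed packages to match the pinned requirements
pip install --upgrade -r requirements.txt

# remove the stale saved index so the app rebuilds it from scratch
rm index.json
```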
oh shit I do have an existing json file that I made a while ago - why should I start fresh?
There were a lot of breaking changes introduced around v0.5.0 of llama index
If the index isn't very big, it's easier to start fresh.
If it will cost a lot to index again, there is a migration tool you can try
when deploying the app on streamlit it's now asking me to install pytorch, is this normal? lol
do I need to set the requirements.txt file to specific versions? so is just having "langchain" in there not good enough?
just having langchain can be ok sometimes, but with python it's best practice to pin versions in your requirements (otherwise a package might update and break everything lol)
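e.g. a pinned requirements.txt looks like this (the version numbers here are just illustrative - use whatever `pip freeze` reports for your working setup):

```
# requirements.txt -- pin the exact versions the app was tested against
langchain==0.0.154
llama-index==0.6.0
streamlit==1.22.0
```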
oh I think it's asking me to install pytorch and transformers because I just changed the name of the index that it should be importing, to force the app to build a new index lol
also, is this now running gpt-3.5-turbo? does it "remember" past queries from the conversation?
It does not remember past queries. For that, you'd want to use langchain, with llama index as a tool inside langchain
(and it is using gpt 3.5 turbo)
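roughly like this - a sketch against the ~v0.5/0.6-era APIs (the tool name, description, and `index.json` path are just placeholders; adjust the imports to your installed versions):

```python
# sketch: llama_index as a langchain tool, so the agent keeps chat memory
from langchain.agents import Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from llama_index import GPTSimpleVectorIndex

# load the index you already built and saved
index = GPTSimpleVectorIndex.load_from_disk("index.json")

# expose the index's query method as a tool the agent can call
tools = [
    Tool(
        name="book_index",
        func=lambda q: str(index.query(q)),
        description="Answers questions about the indexed document.",
    )
]

# the buffer memory is what gives the bot its "remembers past queries" behavior
memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent="conversational-react-description",
    memory=memory,
)

print(agent.run("What does the book say about X?"))
```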
something is fucked up, either 3.5 turbo or the embeddings or something. Because I'm asking something that is in the document I indexed, but it's giving back this answer
ohhh yea :PSadge: OpenAI did something to GPT 3.5. In general the model seems like a huge step down lately; it's having trouble following the instructions in the internal prompts...
is there a way to make it go back to davinci 003? since it's a query bot a chat model is kinda wasted vs a raw completion model
I'm trying to research some fixes in my spare time, but my conspiracy theory is that OpenAI scammed everyone by offering gpt-3.5 for super cheap, and then downgraded it so people are forced to switch the apps they built to more expensive models LOL
Yea you can switch back easily since that is the default. Just remove the llm_predictor argument from the service context object, and it will default to davinci-003
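i.e. something like this (a sketch for the ~v0.5/0.6-era API - with no llm_predictor, the service context falls back to the default completion model, text-davinci-003):

```python
from llama_index import GPTSimpleVectorIndex, ServiceContext

# chat-model version -- what the app has now:
# from langchain.chat_models import ChatOpenAI
# from llama_index import LLMPredictor
# llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo"))
# service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# davinci version -- just drop the llm_predictor argument and the
# default completion model (text-davinci-003) is used instead
service_context = ServiceContext.from_defaults()

index = GPTSimpleVectorIndex.load_from_disk(
    "index.json", service_context=service_context
)
```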
is there a way to easily change the base prompt for the bot so I can tell it "You are a bot that should reply only based on the context provided below... etc" "If you don't have the answer in the context provided say I don't know" and such?
I'd also like it to not summarize the responses so much. The document I fed it has a bunch of definitions, so having longer answers is not that bad
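yes - the template string itself is plain Python, and (in the ~v0.5/0.6-era API) you'd wrap it in a QuestionAnswerPrompt and pass it at query time. The exact wording below is just an example:

```python
# a custom QA prompt: llama_index fills in {context_str} and {query_str}
QA_TEMPLATE = (
    "You are a bot that replies only based on the context provided below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question using only that context. Give full definitions "
    "rather than short summaries. If the answer is not in the context, "
    "say \"I don't know\".\n"
    "Question: {query_str}\n"
)

# sanity check: the template formats with the two fields llama_index expects
preview = QA_TEMPLATE.format(
    context_str="(retrieved text)", query_str="(your question)"
)

# then, with llama_index (~v0.5/0.6 API), something like:
# from llama_index import QuestionAnswerPrompt
# qa_prompt = QuestionAnswerPrompt(QA_TEMPLATE)
# response = index.query("What is X?", text_qa_template=qa_prompt)
```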