Hi Logan M, I haven't upgraded llama index for a month

Hi @Logan M, I haven't upgraded llama-index for a month, and it's good to see so many new features. Today I tried to upgrade and ran into an issue going from 0.8.0 to 0.8.5.
My Q&A feature uses a tree index with gpt-3.5-turbo. It worked well up to 0.7.24, but starting with 0.8.0 it returns a lot more false positives. For example, instead of getting 10 "yes" answers across a batch of questions, I now see 50, of which 40 are false positives. Do I need to change anything for the breaking changes? I have been setting my own defaults (gpt-3.5-turbo, temperature 0.01, and a tree retriever), so the changed defaults should not affect me. But the prompt changes do; to me, the new tree prompts do not work as well as before. Please advise. Thanks in advance.
5 comments
Can you explain a bit more what features you've been using? You mentioned a tree retriever, how did you set that up?
The main things that changed that would affect you are
  • the default text_qa template
  • the default refine template
  • the default tree_summarize template
Between 0.8.0 and 0.8.5, some more minor tweaks were also made in response to user feedback -- I would make sure you are testing with the latest version (overriding these templates yourself is sketched below).
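For reference, a minimal sketch (not from the thread) of pinning your own text_qa and refine templates on the query engine so the changed 0.8.x defaults no longer apply. It assumes llama-index 0.8.x; the prompt wording is illustrative rather than the pre-0.8 defaults, and `retriever` / `service_context` are assumed to be built as in the snippets further down.

from llama_index import Prompt
from llama_index.query_engine import RetrieverQueryEngine

# Illustrative question-answering template; NOT the library's old default text.
text_qa_template = Prompt(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question using only the context above. "
    "Answer strictly yes or no: {query_str}\n"
)

# Illustrative refine template; NOT the library's old default text.
refine_template = Prompt(
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Refine the existing answer (only if needed) using the context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
)

# Pass the custom templates when building the query engine.
query_engine = RetrieverQueryEngine.from_args(
    retriever,
    service_context=service_context,
    text_qa_template=text_qa_template,
    refine_template=refine_template,
)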
I have tested with 0.8.0 and 0.8.5.post1; both have the same issue. I have been using the default prompt templates.
Some code snippets:
# Model settings entry used for this index:
ModelChoice.GPT35: {"model_name": "gpt-3.5-turbo", "chunk_size": 1024},

# LLM and retriever setup:
predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name=model_settings["model_name"]))
retriever = TreeSelectLeafRetriever(index=self.index, child_branch_factor=self.tree_child_branch_factor)

# Index construction:
reader = SimpleDirectoryReader(input_dir=self.input_dir)
documents = reader.load_data()
index = TreeIndex.from_documents(documents, service_context=service_context)
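For context, here is a self-contained sketch (not from the thread) of how these pieces fit together on llama-index 0.8.x with langchain's ChatOpenAI. The directory path, the child_branch_factor of 2, the sample question, and the exact import paths are assumptions rather than details from the thread.

from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext, SimpleDirectoryReader, TreeIndex
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.retrievers import TreeSelectLeafRetriever

# Non-default LLM settings matching the snippet above
# (gpt-3.5-turbo, temperature 0.01, chunk size 1024).
predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=predictor, chunk_size=1024)

# Build the tree index over a local directory of documents (placeholder path).
documents = SimpleDirectoryReader(input_dir="./docs").load_data()
index = TreeIndex.from_documents(documents, service_context=service_context)

# Query through the leaf-selection retriever used in the snippet above.
retriever = TreeSelectLeafRetriever(index=index, child_branch_factor=2)
query_engine = RetrieverQueryEngine.from_args(retriever, service_context=service_context)
response = query_engine.query("Does the report mention feature X? Answer yes or no.")
print(response)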