
Updated 2 years ago

Logan M I have a couple of questions

At a glance
@Logan M I have a couple of questions regarding your "Discover LlamaIndex: Bottoms-Up Development with LLMs (Part 3, Evaluation)" YouTube video:

  1. First of all, amazing video, very informative, but I am not able to find the link to the Jupyter notebook in this video.
  2. Also, in this video you created a big_document by concatenating all the smaller ones, citing the reason that it will lead to fewer LLM calls, but won't the context window become too big then? Is this a good way to improve the performance of our LLM-based apps?
  3. How are you able to print the time taken at the bottom of each run? πŸ₯Ή
1 comment
  1. Link: https://github.com/run-llama/llama_docs_bot/blob/main/3_eval_baseline/3_eval_basline.ipynb
  2. Nah, the generator chunks each document into nodes and generates a question per node.
  3. I think that was just part of the Jupyter notebook extension in VS Code πŸ™‚
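To illustrate the answer about chunking: even after concatenating everything into one big document, the question generator splits the text into fixed-size chunks ("nodes") and makes one LLM call per node, so no single call exceeds the context window. The sketch below is a simplified, self-contained illustration of that idea; the function names (`chunk_text`, `generate_questions`) and the character-based chunking are assumptions for demonstration, not the actual LlamaIndex API, and the per-node question is a stub where a real generator would call an LLM.

```python
def chunk_text(text: str, chunk_size: int = 1024) -> list[str]:
    """Split text into chunks of at most chunk_size characters (illustrative node splitting)."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def generate_questions(documents: list[str], chunk_size: int = 1024) -> list[str]:
    """Produce one stub question per node; a real generator would prompt an LLM per node."""
    big_document = "\n\n".join(documents)  # concatenation, as in the video
    nodes = chunk_text(big_document, chunk_size)
    # Each question only ever "sees" one bounded chunk, keeping context small.
    return [f"Question about node {i} ({len(node)} chars)" for i, node in enumerate(nodes)]

docs = ["a" * 1500, "b" * 700]
questions = generate_questions(docs, chunk_size=1024)
print(len(questions))  # one question per ~1024-char node
```

The point is that the number of LLM calls scales with the number of nodes, not the number of original files, while each call's context stays bounded by the chunk size.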