Common issues

At a glance
Looking for thoughts on four common issues I see
Hey all! I'm working on a few chatbots built with LlamaIndex and GPT, using a collection of (1000s of) blog posts as a data source. Really impressed right out of the box, but as I continue to work I've found a few common ways in which responses are bad. I'm working through mitigating each issue, all of which I think are very solvable.

Issues
  1. Failing to account for recency. Can I somehow get my bot to prioritize more recent context when the same thing is mentioned many times? Maybe I can store the date in some metadata?
  2. Requiring very specifically worded questions. E.g., ask two questions that mean the same thing to a human; the bot will find the answer for one but not the other.
  3. Aggregating vs. non-aggregating index. I'm using a simple vector index. Some questions would benefit from an index that could aggregate info from across my blog posts; others wouldn't. How can I balance this?
  4. How to handle subjective questions for which there is nothing in the context. I think this comes down to prompt engineering.
If you have any thoughts on the above, please let me know, I'd love to hear them. I'm sure I'm missing some easy improvements.
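For issue 1, one workaround is to store a publish date in each chunk's metadata and re-rank retrieved chunks by recency before they reach the LLM. A hand-rolled sketch of that idea (the field names and structure here are illustrative, not LlamaIndex's actual node format):

```python
from datetime import date

# Hypothetical retrieved chunks, each carrying a "date" metadata field.
retrieved = [
    {"text": "Old take on feature X", "score": 0.91, "date": date(2021, 3, 1)},
    {"text": "Updated take on feature X", "score": 0.89, "date": date(2023, 1, 15)},
    {"text": "Unrelated post", "score": 0.55, "date": date(2022, 6, 10)},
]

def rerank_by_recency(chunks, top_k=2):
    """Keep the top_k most similar chunks, then order them newest-first
    so the most recent context appears first in the prompt."""
    most_similar = sorted(chunks, key=lambda c: c["score"], reverse=True)[:top_k]
    return sorted(most_similar, key=lambda c: c["date"], reverse=True)

context = rerank_by_recency(retrieved)
print([c["text"] for c in context])
# -> ['Updated take on feature X', 'Old take on feature X']
```

The 2023 chunk wins even though its similarity score is slightly lower, which is the behavior the question is after.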

More info
I wrote about this in depth on my website https://www.mattambrogi.com/posts/chat-bots/
Super interesting summary from a power user, thanks for taking the time to write that up!

My general impression is that 1 and 2 might be related to how embeddings (currently) work πŸ€”
Yeah, I think all of these are to be expected given how LlamaIndex / embeddings work. But I think there are probably workarounds to mitigate them.
@matt_a this is awesome feedback!!

Quick thoughts re: recency, we've added a few modules to process by recency: https://gpt-index.readthedocs.io/en/latest/how_to/query/node_postprocessor.html#recency-postprocessors
would any of those modules work for your use case?
in general, this feedback is super detailed, going to save this so we can better improve the tool πŸ™‚
@jerryjliu0 Thanks for sharing this. Are there any code snippets for this? Seems like it would work great, but I'm not seeing how I would actually use it or specify where to look for the date.

Planning to set up a baseline using the evaluation modules, go through each one of these issues, and see if I can improve my bot. Will share what works as I go.
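A minimal harness for that kind of before/after experiment might look like this (the question set and `ask` stub are stand-ins for the real bot, not LlamaIndex's evaluation modules):

```python
# Sketch of a baseline harness: run a fixed question set through the bot
# and record the answers, so a later run with a changed index can be
# diffed against it. `engine` is a placeholder for the query engine.
def run_baseline(questions, engine):
    return {q: engine(q) for q in questions}

questions = [
    "What did the most recent post say about feature X?",
    "How do I configure feature X?",  # reworded variant of the same intent
]

baseline = run_baseline(questions, engine=lambda q: f"stub answer to: {q}")
for q, a in baseline.items():
    print(q, "->", a)
```

Including reworded variants of the same question in the set is a cheap way to track issue 2 across experiments.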
This is awesome, excited to try this out.
Let us know your feedback!
@jerryjliu0 @Logan M could either of you explain when I might want to use the Embedding Recency vs Fixed Recency post processor?

Running some experiments with these now.
Fixed recency is for when your docs/nodes have specific dates that you want to filter on

Embedding recency just prioritizes the most recently inserted data; it's useful when you are inserting things often (the LlamaIndex Discord help channel would be a good use case for embedding recency, if you were constantly pulling in new messages)
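To make that distinction concrete, here's a hand-rolled sketch of the two behaviors (illustrative only; the real postprocessors are in the docs linked above): one filters on a date stored in the node itself, the other on insertion order.

```python
from datetime import date

# Nodes carry both a document date (for fixed recency) and an
# insertion order (for embedding recency). Field names are illustrative.
nodes = [
    {"text": "2021 blog post", "date": date(2021, 1, 1), "inserted_at": 2},
    {"text": "2023 blog post", "date": date(2023, 1, 1), "inserted_at": 0},
    {"text": "2022 blog post", "date": date(2022, 1, 1), "inserted_at": 1},
]

def fixed_recency(nodes, top_k=1):
    """Keep the top_k nodes with the newest *document* date."""
    return sorted(nodes, key=lambda n: n["date"], reverse=True)[:top_k]

def embedding_recency(nodes, top_k=1):
    """Keep the top_k most recently *inserted* nodes."""
    return sorted(nodes, key=lambda n: n["inserted_at"], reverse=True)[:top_k]

print(fixed_recency(nodes)[0]["text"])      # -> 2023 blog post
print(embedding_recency(nodes)[0]["text"])  # -> 2021 blog post (inserted last)
```

For blog posts with known publish dates, fixed recency is the natural fit; embedding recency matters when insertion time is the best available proxy for freshness.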
Oh thanks a bunch, that was not clear from the docs