Tmeister
Offline, last seen 3 months ago
Joined September 25, 2024
Tmeister
·

In memory

Thank you, Logan. I understand now what you meant by "keep it in memory." In the PHP world I came from ;), we used to use Redis, for example, to keep data in memory. What would be the Python or LlamaIndex way?
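To make the contrast concrete, here is a minimal pure-Python sketch (not the LlamaIndex API; all names below are illustrative) of what "keep it in memory" means: unlike PHP, where each request starts a fresh process (hence Redis), a long-running Python process can simply hold the built index object in a variable and reuse it. In LlamaIndex itself the equivalent is keeping the index object alive between queries; the exact persist/load helpers vary by version.

```python
# In-process "Redis": a module-level dict that caches the built index.
_index_cache = {}

def build_index(documents):
    """Stand-in for an expensive index build (e.g. embedding documents)."""
    return {"docs": list(documents)}

def get_index(key, documents):
    """Build the index once, then serve every later call from memory."""
    if key not in _index_cache:
        _index_cache[key] = build_index(documents)
    return _index_cache[key]

idx1 = get_index("kb", ["doc a", "doc b"])
idx2 = get_index("kb", ["doc a", "doc b"])
assert idx1 is idx2  # second call reused the cached object, no rebuild
```

The same pattern is what web frameworks mean by "load the index at startup": build it once when the server boots, then every request handler queries the shared object.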
3 comments
Tmeister
·

Cut off

Hey there, I'm playing with LlamaIndex and the GPTSimpleVectorIndex. So far, I've been able to create a custom index and query it, but when I query the index, the answer looks like it has been cut off. I think it may be related to the max_tokens value. Should I use LLMPredictor to set a bigger max_tokens value?

It would be great if you could point me in the right direction here.
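For anyone hitting the same thing: a truncated answer is usually the completion token budget, not the index. The model stops emitting tokens once `max_tokens` is spent, mid-sentence if necessary, so raising it (in older LlamaIndex versions, by passing an `LLMPredictor` wrapping an LLM configured with a larger `max_tokens`, as guessed above; exact wiring varies by version) is the usual fix. A toy sketch of the truncation behavior, with whitespace splitting standing in for a real tokenizer:

```python
# Toy "LLM" that demonstrates max_tokens truncation; not LlamaIndex code.
def fake_complete(answer: str, max_tokens: int) -> str:
    tokens = answer.split()            # toy tokenizer: one word = one token
    return " ".join(tokens[:max_tokens])

full_answer = "The index supports incremental updates as well as queries"
print(fake_complete(full_answer, 4))   # budget too small: answer is cut off
print(fake_complete(full_answer, 50))  # generous budget: full answer returned
```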
1 comment
Hey there, I'm new here, and I have a question to help me learn a little about training and consuming a model.

My idea is to create a model (not sure if a model is the best way) based on specific documents, pages, etc.

That "model" would be updated every 2 weeks or so; meanwhile, we can query the "model" to get answers based on the data submitted without creating an index every time. Does this make sense?

What would be the best way to accomplish this?
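One common pattern for the update-every-two-weeks idea is persist-then-reload: build the index once, save it to disk, load it for queries, and rebuild only when it goes stale. A dependency-free sketch of that pattern follows; the file format and function names are illustrative, not the LlamaIndex API (which has its own persist/load calls that differ across versions).

```python
import json
import os
import time

INDEX_PATH = "index.json"
MAX_AGE = 14 * 24 * 3600  # two weeks in seconds

def build_index(documents):
    """Stand-in for an expensive index build (embedding, chunking, etc.)."""
    return {"docs": list(documents)}

def load_or_rebuild(documents, path=INDEX_PATH, max_age=MAX_AGE):
    """Reuse the persisted index while it is fresh; rebuild when stale."""
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < max_age:
        with open(path) as f:
            return json.load(f)       # fresh enough: serve the saved index
    index = build_index(documents)    # stale or missing: rebuild from source
    with open(path, "w") as f:
        json.dump(index, f)
    return index
```

A scheduled job (cron, etc.) can simply delete or overwrite the persisted file every two weeks, and the query path stays the same.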
4 comments