• 0 Posts
  • 20 Comments
Joined 20 days ago
Cake day: December 23rd, 2025

  • Is this why they’re trying to obtain so damn many memory components? So they can ship the entire dataset the statistical models rely on to consumers?

    I really didn’t think they could be more wasteful, but clearly I was wrong: they want to eat up bandwidth like a fish drinks water. Or maybe they want to actually SHIP the DATABASE?

    Here is what would change if your AI could remember better: nothing. LLMs are statistical models. The next most likely token, given the sample data, would rarely if ever change just because a new prompt got memorized. Normally the models need to be retrained whenever the underlying data changes; if you could bypass that, you would probably lose accuracy, and even if you didn’t, you still wouldn’t gain any tangible accuracy.
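
    To spell out what “next most likely token” means mechanically, here is a minimal sketch. Everything in it is made up for illustration (toy tokens, toy scores, no real model or API); the point is that the probabilities come from frozen weights, and a “remembered” prompt only changes the input, never those weights.

    ```python
    import math

    def softmax(logits):
        """Turn raw scores into a probability distribution over tokens."""
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical scores the frozen weights assign to candidate next
    # tokens for some prompt. Better "memory" just means a longer prompt;
    # these scores only move if the weights are retrained.
    logits_from_frozen_weights = {"cat": 2.1, "dog": 1.9, "car": 0.3}

    probs = softmax(logits_from_frozen_weights)
    print(probs)                      # {'cat': 0.50..., 'dog': 0.41..., 'car': 0.08...}
    print(max(probs, key=probs.get))  # greedy decoding picks 'cat'
    ```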

  • Honey, we all know they’re hitting the ceiling OpenAI predicted in its 2020 paper on AI scaling laws, as corrected by DeepMind’s 2022 follow-up paper. It’s trash now and it will always be trash; throwing near-infinite power and compute time at it to reach 94% is just an exercise in futility.
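
    For reference, that ceiling falls straight out of the parametric loss the 2022 DeepMind follow-up (the Chinchilla paper) fits; the constants below are its published fitted values, quoted from memory, so treat the exact numbers as approximate:

    ```latex
    % Chinchilla parametric loss: N = parameter count, D = training tokens.
    % As N and D grow without bound, L(N, D) approaches the irreducible
    % term E -- the ceiling no amount of extra compute can break through.
    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
    \qquad E \approx 1.69,\ A \approx 406.4,\ B \approx 410.7,\
    \alpha \approx 0.34,\ \beta \approx 0.28
    ```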