racemaniac@lemmy.dbzer0.com to Technology@lemmy.world • Nvidia falls 14% in premarket trading as China's DeepSeek triggers global tech sell-off (English)
5 days ago
That's the claim: it has apparently been trained using a fraction of the compute power of the GPT models while achieving similar results.
They’ll probably do that, but that’s assuming we aren’t past the point of diminishing returns.
The current LLMs are pretty basic in how they work, and it could be that with the current training approach we're near the limit of what they'll ever be capable of. They'll of course invest a billion in training a new generation, but if it's only marginally better than the current one, they won't keep pouring billions into it.