swiftywizard@discuss.tchncs.de to Programmer Humor@programming.dev · edited 2 days ago
Lavalamp too hot (image, discuss.tchncs.de) · 60 comments
Feathercrown@lemmy.world · edited 7 hours ago
Hmm, interesting theory. However:
We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.
LLMs are running at a loss right now; the company would lose more money than it gains from you, so there is no motive.
Jerkface (any/all)@lemmy.ca · 5 hours ago
It was proposed less as a hypothesis about reality than as virtue signalling (in the original sense).
MotoAsh@piefed.social · 6 hours ago
Of course there’s a technical reason for it, but they have an incentive to try to sell even a shitty product.