The hell are you talking about? It’s right there in the article. But maybe you didn’t read it?
Ad hominem attacks like the ones you’re using are a sign you don’t have anything useful to say.
Stupid users send private keys and other secrets to their AIs all the time. This is a big fucking threat to US global imperialism.
The US trusts OpenAI (even if it shouldn’t) not to send hackers after US companies. It definitely doesn’t trust Chinese companies to show the same restraint.
Nah, I’m speaking from the perspective of the US, since the article is about US policy. The decision-making is obvious when you’re thinking at a national-protectionist level.
Obviously privacy violations are bad for the user regardless. Never trust your corporations or government!
Well yeah, it’s obviously more of a risk to send data directly to your rival than to keep it internal. Both are risky, but one is much, much worse.
Interesting conclusion: LLMs are inherently 1D in nature, and ARC is a 2D task. LLMs can emulate 2D reasoning for sufficiently small tasks, but suffer badly as the size of the task increases. This is like asking humans to solve 4D problems.
This is probably a fundamental limitation in LLM architecture and will need to be solved someday, presumably by something completely different.
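To make the 1D point concrete, here’s a minimal sketch (Python; the grid is a made-up example, not from the article) of what a 2D grid looks like once it’s serialized into the token stream an LLM actually sees:

```python
# Minimal sketch: how a 2D ARC-style grid becomes the 1D sequence an LLM sees.
# The grid is a made-up example, not from the article.

grid = [
    [1, 0, 2],
    [0, 3, 0],
    [2, 0, 1],
]
width = len(grid[0])

# Row-major serialization, the usual way a grid is fed to an LLM as text.
tokens = [cell for row in grid for cell in row]
print(tokens)  # [1, 0, 2, 0, 3, 0, 2, 0, 1]

def seq_index(row: int, col: int) -> int:
    """Where a grid cell lands in the flattened token sequence."""
    return row * width + col

# Horizontal neighbors stay adjacent in the sequence...
print(seq_index(0, 1) - seq_index(0, 0))  # 1

# ...but vertical neighbors end up `width` tokens apart, and that gap
# grows with the grid width, so 2D adjacency has to be inferred.
print(seq_index(1, 0) - seq_index(0, 0))  # 3
```

Row-major order keeps horizontal neighbors adjacent, but vertical neighbors drift apart as the grid widens, which is why small grids are manageable and large ones fall apart.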
If bitnet takes off, that’s very good news for everyone.
The problem isn’t AI, it’s AI that’s so resource-intensive to host that only corporations with big datacenters can do it.
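For context on why bitnet matters here: BitNet-style models constrain weights to {-1, 0, +1}, roughly 1.58 bits each, which is what would make self-hosting plausible on modest hardware. A rough sketch of the absmean ternary quantization described in the BitNet b1.58 paper (numpy; the weight matrix here is hypothetical):

```python
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-5) -> tuple[np.ndarray, float]:
    """Absmean ternary quantization, as sketched in the BitNet b1.58 paper:
    scale by the mean absolute weight, then round and clip to {-1, 0, +1}."""
    scale = np.abs(w).mean() + eps
    w_q = np.clip(np.rint(w / scale), -1, 1)
    return w_q, scale

# Hypothetical weight matrix, just for illustration.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4, 4))

w_q, scale = ternary_quantize(w)
print(w_q)  # entries are only -1.0, 0.0, or 1.0

# A matmul against ternary weights reduces to additions and subtractions;
# rescaling by `scale` approximates the original w @ x.
x = rng.normal(size=4)
y = (w_q @ x) * scale
```

Ternary weights cut memory by roughly 10x versus fp16 and turn matmuls into adds and subtracts, which is exactly the kind of thing that would let people run capable models outside big datacenters.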
I’d argue it’s part of “the fediverse” but not “The Fediverse”.