I bet he just wants a card to self host models and not give companies his data, but the amount of vram is indeed ridiculous.
honored to be the -200th
I could see this coming a parsec away lol.
How’s this different from a self-reflecting agent you can write today? That’s what I thought when o1 was announced, while some people were excited, even calling it some sort of proto-AGI.
OpenAI is great at raising money and just feeds from the hype.
ha, not going to happen
yeah, I’ve been wanting a card like that to run local models since 2020 when I got a 3080. Back then I’d have spent a bit more to get one with the same performance but some 20GB of VRAM.
Nowadays, if they released an RX 9070 with at least 24GB at a price between the 16GB model and an RTX 5080 (also 16GB), that would be neat.
They said smart, not a good person.
I hate it, but that’s what happens without competition.
I’m already annoyed when someone is using their phone in the dark and doesn’t adjust the brightness settings.
If you do this during night flights, sincerely, fuck you.
won’t someone please think of the shareholders?
Until last week, you absolutely NEEDED an NVIDIA GPU with CUDA support to run all AI models.
mate, that means they are using PTX directly. If anything, they are more dependent on NVIDIA and the CUDA platform than anyone else.
to simplify: they are bypassing the CUDA API, not the NVIDIA instruction set architecture and not CUDA as a platform.
I wish that was true, but this doesn’t threaten any monopoly
aah I see them now
I don’t think anyone is saying CUDA as in the platform, but as in the API for higher level languages like C and C++.
PTX is a close-to-metal ISA that exposes the GPU as a data-parallel computing device and, therefore, allows fine-grained optimizations, such as register allocation and thread/warp-level adjustments, something that CUDA C/C++ and other languages cannot enable.
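For what "using PTX directly" looks like in practice: NVIDIA's CUDA C++ toolchain lets you embed PTX instructions via inline assembly. This is a minimal illustrative sketch (the kernel and names are made up for the example, not taken from any real codebase):

```cuda
#include <cstdio>

// Toy kernel: increment each element, with the add done in hand-written
// PTX (add.s32) instead of the C++ '+' operator. "=r"/"r" constraints
// bind 32-bit registers, per NVIDIA's inline-PTX syntax.
__global__ void add_one(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int out;
        asm("add.s32 %0, %1, 1;" : "=r"(out) : "r"(data[i]));
        data[i] = out;
    }
}
```

Writing whole kernels in raw PTX (rather than sprinkling it in like this) gives the finer control over registers and warp behavior mentioned above, but either way the result is still PTX, compiled by NVIDIA's tools for NVIDIA hardware.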
lmao my workplace encourages use / exploration of LLMs when useful, but that’s stupid
what is this, snake for ants?
insanity is also relying on a single 2FA device, ffs
using a password manager without 2FA is insanity, glad they’re doing it
Probably a mistake, considering the current generation follows the RX 7_00 naming pattern.