That’s a very toxic attitude.
Inference is, in essence, the process of generating the AI's response. So when you run an LLM locally, you are using your GPU only for inference.
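For illustration, here is a minimal local-inference sketch using Hugging Face transformers. It assumes you have torch and transformers installed, a CUDA GPU with enough VRAM, and that you pick an open-weights checkpoint (I'm using Mistral 7B Instruct here just as an example; any model works):

```python
# Minimal local LLM inference sketch (assumptions: torch + transformers
# installed, a GPU with enough VRAM, and an open-weights model of your choice).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on consumer GPUs
    device_map="auto",          # place the weights on the GPU automatically
)

prompt = "Explain what LLM inference is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Inference = the forward passes that generate the response, token by token.
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Everything after `from_pretrained` finishes is pure inference: no gradients, no training, just your GPU generating tokens.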
The whole startup industry relies on investors to cover their costs for years while they operate at a loss, in order to capture a bigger market share. Look at Netflix, Facebook, WhatsApp, etc.
So by buying an account you are increasing their market share.
But feel free to use Mistral, DeepSeek, etc.; that would be better.
Please show me an LLM that is really open source. My understanding is that most of the open models are open weights only. For the record, Mistral is also releasing open-weights models.
What is amazing in this case is that they achieved it at a fraction of the inference cost that OpenAI is paying.
Plus, they are a lot cheaper. But I am pretty sure that the American government will ban them in no time, citing national security concerns, etc.
Nevertheless, I think we need more open source models.
Not to mention that NVIDIA also needs to be brought back down to earth.
And still, their share price is just a fraction of Tesla's.
I read the article and it felt strongly opinionated. I would personally wait for independent reviews of the capabilities of both GPT-4 and Gemini Ultra, but I dare say that we, as consumers of AI, can only benefit from increased competition in the sector, which pushes prices down and model quality up.
The problem is that NVIDIA consistently gimps the mid-range, making it a very unattractive proposition.