That’s not quite it either.
The model itself is just a giant ball of math. They made a thing that can pass an English question through the collected knowledge of much of humanity a few dozen times and have it crap out a reasonable English answer.
The open source part is kind of a misnomer. They explained how they cooked the meal but not the ingredient list.
To complete the analogy, their astounding claim is that they managed to cook the meal with less fire than anyone else has by a factor of like 1000.
But the model itself is inherently safe. It’s not like a binary that can carry a virus or do crazy crap. Even convincing it to give deliberately nefarious answers is frankly beyond our capabilities so far.
The dangerous part that Proton is looking at, and honestly is a given for any hosted AI, is the hosting server side of things. You make your requests to their servers, their servers feed the requests into the model, and they return you the output.
If you ask their web servers for information about Tiananmen Square, they will block you.
You can, however, download the model and run it yourself, and there aren’t any security issues there. It will tell you anything you want to know about Tiananmen Square.
You should try the comparison between the larger models and the distilled models yourself before you pass judgment. I suspect you’re going to be surprised by the output.
All of these models are basically generating possible outcomes from noise. So if you ask the same model the same question in five different sessions, you’re going to get five different variations on an answer.
You will find that, scored x out of five, the models are not that significantly different.
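To see why the same question gives different answers in different sessions, here’s a toy sketch. The token distribution below is made up for illustration (a real model produces it from its logits), but the mechanism is the same: each session samples from the same probability distribution with its own random state, so outputs vary unless you pin the seed.

```python
import random

# Toy next-token distribution for one prompt: (answer, probability).
# These values are invented for illustration, not from any real model.
dist = [("Paris", 0.55), ("the capital, Paris", 0.25), ("Paris, France", 0.20)]

def sample(rng):
    """Pick one answer according to its probability, like a model's sampler."""
    r = rng.random()
    cum = 0.0
    for tok, p in dist:
        cum += p
        if r < cum:
            return tok
    return dist[-1][0]

# Five "sessions": each has its own random state, so answers can differ.
answers = [sample(random.Random()) for _ in range(5)]
print(answers)

# Pinning the seed pins the outcome: same seed, same answer every time.
print(sample(random.Random(0)) == sample(random.Random(0)))
```

Hosted services generally don’t expose the seed, which is why you see fresh variations on every run.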
For certain cases larger models are advantageous: if you need a model to return a substantial amount of content, say a chapter of a story, larger models will definitely give you better output and better variation.
But if you’re asking it to help you with a piece of code or explain some historical event, the average 14B model, which will fit on any computer with a decent video card, will give you a perfectly serviceable answer.
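As a rough back-of-envelope check on whether a 14B model fits on a consumer card, you can estimate the weight footprint from parameter count times bytes per parameter. This ignores activation and KV-cache overhead, so treat the numbers as a floor, not a promise:

```python
# Rough size of just the weights for an N-billion-parameter model.
def weight_gb(params_billions, bytes_per_param):
    return params_billions * 1e9 * bytes_per_param / 1024**3

# Common quantization levels: full fp16, 8-bit, and 4-bit.
for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"14B @ {label}: ~{weight_gb(14, bpp):.1f} GB")
```

At 4-bit quantization a 14B model’s weights come to roughly 6.5 GB, which is why it squeezes onto an 8 GB consumer GPU, while fp16 (~26 GB) does not.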