Bringing back racism and good vibes under “scientific principle.”
Yeah, nothing says “this person will repay their loans” like looking at their face and nothing fucking else.
I love how in Portuguese you can just call it "capetalismo" (capeta = devil)
Dystopian neutrality in the article.
without discriminating on grounds of protected characteristics
AI classification is trained via supervised learning (the right answers are predetermined). MechaHitler, for fascist nationalism's sake, will rate Obama's face as a poor employee and Trump's as the bestest employee.
Open training data sets would be subject to: 1. zero competitive advantage for the model, and 2. massive complaints about any specific piece of training data.
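To make that concrete, here's a minimal sketch (assuming scikit-learn and NumPy; the features, labels, and numbers are all invented for illustration) of the supervised-learning point: a classifier trained on biased labels just learns the bias back and reports it as a "prediction":

```python
# A supervised classifier can only learn the "right answers" it is handed.
# If the historical labels encode bias, the model reproduces that bias
# with a veneer of objectivity. Everything here is toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # actual job-relevant ability
group = rng.integers(0, 2, size=n)    # an irrelevant facial attribute

# Biased historical labels: "good employee" partly depends on group, not skill.
label = ((skill + 2.0 * group + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The model dutifully learns that the irrelevant attribute "predicts" success.
print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on group:", round(model.coef_[0][1], 2))  # large weight on the biased feature
```

The model weights whatever correlates with the labels it was given; if the labelers were bigoted, the "objective" output is bigotry laundered through math.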
For some jobs, psychopathy AND loyalty are desirable traits, even though they can be opposites. Honesty, integrity, and intelligence can be desirable traits, or obstacles to desperate loyalty. My point is that if many traits are determined by faces, much more training data is needed to detect them. And then the human hiring decision rests on matching 10 or 30 traits to an (impossibly unique) position, where the direct manager only cares about loyalty without too much talent, but higher-level managers might prefer a candidate with the potential to replace that direct manager, and all of them care about race or pregnancy risk, and then comes post-training on some "illegal characteristics".
A Gattaca situation, where everyone either has an easy time getting a great job and moving to a greater one, OR is shut out of all jobs, creates a self-contradicting prediction on "loyalty/desperation" controllability traits. If job duties are changed to include blow job services, then surely the agreeable make better employees, despite any facial tics responding to the suggestion.
Silent human "illegal discrimination" is not eliminated or changed, but the new capability, letting a computer do the interviewing and waste more interviewees' time at no human cost to the employer, is why this will be a success. A warehousing company recently analyzed facial expressions to gauge attention to safety, and this leads to "The AI punishments to your life will continue until you smile more." Elysium's automated parole-incident interview is a good overview of the dystopia.
A core classification problem is correlation vs. causation. Sunspots and miniskirts have been correlated with stock market returns to some degree, but it tends to be a tenuous connection, not guaranteed to hold up over time or to have any meaningful relevance whatsoever. It's easy to oversell models.
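A quick illustration of why that kind of "signal" is easy to oversell (a toy sketch assuming only NumPy; the series names are stand-ins, not real data): two completely independent random walks routinely show a sizable in-sample correlation.

```python
# Two independent random walks with no causal link whatsoever still
# produce impressive-looking correlations if you search even a little.
import numpy as np

rng = np.random.default_rng(42)
best = 0.0
for _ in range(100):
    sunspots = np.cumsum(rng.normal(size=250))  # stand-in for any time series
    returns = np.cumsum(rng.normal(size=250))   # an unrelated series
    r = np.corrcoef(sunspots, returns)[0, 1]
    best = max(best, abs(r))

# A "predictor" with a strong |r| usually turns up despite zero causation.
print(f"strongest spurious correlation in 100 tries: {best:.2f}")
```

That found correlation evaporates out of sample, which is exactly the fate awaiting a face-to-performance model.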
“If he’s black, get him out of here”
Cool. Literal Nazi shit, but now with AI 😵💫
Basically the slogan for the 2020s
Cool. Literal Nazi shit, still powered by IBM.
Yeah but it’s cool cause some rich white guy taught the computer to be racist for him, so you can’t complain.
But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance?
I hate this so much, because spouting statistics is the number one go-to of idiot racists and other bigots trying to justify their prejudices. The whole fucking point is that judging someone's value based on physical attributes outside their control is fucking evil, and increasing the accuracy of your algorithm only makes it all the more insidious.
The Economist has never been shy to post some questionable kneejerk shit in the past, but this is approaching a low even for them. Not only do they give the concept credibility, but they’re even going out of their way to dishonestly paint it as some sort of progressive boon for the poor.
But what if bias was not the reason? What if ~~your face gave genuinely useful clues about your probable performance~~ we just agreed to redefine "bias" as something else, despite this fitting the definition of the word perfectly, just so I can claim this isn't biased?
this should be grounds for a prison sentence. open support for Nazism shouldn’t be covered by free speech laws.
Actually, what if slavery wasn’t such a bad idea after all? Lmao they never stop trying to resurrect class warfare and gatekeeping.
I’m not sure we’re going to make it y’all.
That image reminds me of a meme from "Scientific diagrams that look like shitposts". It was titled something like "Mask of Damascus(?)/Triagones(?) - Acquire it (from a prisoner(?)) with a scimitar!"
This is so absurd it almost feels like it isn't real. But indeed, the article appears when I look it up.
It's very Nazi Germany real, actually.
I always pity the Germans, who don't deserve this but have carried the shame since the war, and it's worse now that Nazis have become an international club.
Wow. If a black box analysis of arbitrary facial characteristics is more meritocratic than the status quo, that speaks volumes about the nightmare hellscape shitshow of policy, procedure and discretion that resides behind the current set of ‘metrics’ being used.
Spoken like somebody with the sloping brow of a common criminal.
I really must commend you for overcoming your natural murderous inclinations and managing to become a useful member of society despite the depression in your frontal lobe. Keep resisting those dark temptations!
The Unmentionables?
Do we lump all the teenagers with acne in the incel category, and put them in prison? I’m just asking questions.
Because HR is already using “phrenology”.
The gamification of hiring is largely a result of businesses de-institutionalizing Human Resources. If you were hired on at a company like Exxon or IBM in the 1980s, there was an enormous professionalized team dedicated to sourcing prospective hires, vetting them, and negotiating their employment.
Now, we've automated so much of the process and gutted so much of the actual professionalized vetting and onboarding that it's a total crapshoot as to whom you're getting. Applicants aren't trying to impress a recruiter; they're just aiming to win the keyword-search lottery. Businesses aren't looking to cultivate talent long term, just to fill contract positions at below-contractor rates.
So we get an influx of pseudo-science to substitute for what had been a real sociological science of hiring: people promising quick and easy answers to complex and difficult questions, on the premise that they can accelerate staff churn without driving up the cost of doing business.
Gotcha. This is replacing one nonsense black box with a different one, then. That makes a depressing kind of sense. No evidence needed, either!
All of that being typed, I’m aware that the ‘If’ in my initial response is doing the same amount of heavy lifting as the ‘Some might argue’ in the article. Barring the revelation of some truly extraordinary evidence, I don’t accept the premise.
A primary application of "AI" is providing black boxes that enable the extremely privileged to wield arbitrary control with impunity.
"Imagine appearing for a job interview and, without saying a single word, being told that you are not getting the role because your face didn't fit. You would assume discrimination, and might even contemplate litigation. But what if bias was not the reason?"
Uh… guys…
Discrimination: the act, practice, or an instance of unfairly treating a person or group differently from other people or groups on a class or categorical basis
Prejudice: an adverse opinion or leaning formed without just grounds or before sufficient knowledge
Bias: to give a settled and often prejudiced outlook to
Judging someone’s ability without knowing them, based solely on their appearance, is, like, kinda the definition of bias, discrimination, and prejudice. I think their stupid angle is “it’s not unfair because what if this time it really worked though!” 😅
I know this is the point, but there's no way this could possibly end up as anything other than a lazily written, comically clichéd sci-fi future where there's an underclass of, like, "class gammas" who have gamma face, and then the betas that blah blah, whereas the alphas are the most perfect ughhhhh. It's not even a huge leap; it's fucking inevitable. That's the outcome of this.
I should watch Gattaca again…
Like every corporate entity, they’re trying to redefine what those words mean. See, it’s not “insufficient knowledge” if they’re using an AI powered facial recognition program to get an objective prediction, right? Right?
The most generous thing I can think of is that facial structure is not a protected class in the US, so they're saying it's technically okay to discriminate against it.
People see me in cargo pants, polo shirt, a smartphone in my shirt pocket, and sometimes tech stuff in my (cargo) pants pockets and they assume that I am good at computers. I have an IT background and have been on the Internet since March of 1993 so they are correct. I call it the tech support uniform. However, people could dress similarly to try to fool people.
People will find ways, maybe makeup and prosthetics or AI modifications, to try to fool this system. Maybe they will learn to fake emotions. This system is a tool, not a solution.
Yeah, but is it useful to rob the Mona Lisa?
Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure”
TL;DR: as soon as you have a system like this, people will game it…
I think their stupid angle is “it’s not unfair because what if this time it really worked though!”
I think their angle is "it's not unfair because the computer says so!" Automated bias. Offloading liability to an AI.