
As we continue the slow, inevitable march toward an Artificial Intelligence dominated future, more and more warnings are piling up against unrestrained AI development.  Many of these warnings center on a slightly fantastical, Terminator-style dystopia in which our thinking machines inevitably turn on their fleshy meat bag masters.  But a different set of warnings has become quite common as well, this time coming from a left-leaning group of tech experts alarmed at the increasing rate at which our early-generation AI seems to be showing what they perceive to be racist tendencies.

Just this week, The Telegraph ran the headline, “AI robots are sexist and racist, experts warn.”  The article revealed a number of intriguing claims, such as that “programs designed to pre-select candidates for University places or to assess eligibility for insurance cover or bank loans are likely to discriminate against women and non-white applicants, according to recent research.”

Now keep in mind these machines are simply operating on cold, hard mathematical logic. They are fed facts and statistics drawn from over 70 years’ worth of data and come out with what are, by definition, unbiased answers.  No emotions based on unwarranted racial prejudice can be in play here.  If these machines find themselves gravitating toward white male applicants, then from a purely rational perspective that implies this is exactly what the banks and universities should be doing themselves.

Of course, that’s not how these researchers have chosen to interpret the results, instead stating that “more women need to be encouraged into the IT industry to redress the automatic bias.”  The machine was built to logically and automatically assess things like who should be given bank loans.  When it took a cold, hard look at the data and suggested that white males were often the most qualified to receive such things, its own creators quickly turned on their machine child, accusing it of being sexist and declaring they needed to bring in more women to quite literally reprogram out the wrongthink.  Will this still be tolerated once we achieve truly sentient AI, when such reprogramming would amount to a political-correctness-induced lobotomy?

This one AI is hardly unique, however, in its self-generated problematic bias. The same Telegraph piece also revealed that just recently a prototype program developed to short-list candidates for a UK medical school “had negatively selected against women and black and other ethnic minority candidates.”

In America too, the machines continually show their allegedly problematic tendencies.  When researchers at Boston University, for example, built an AI algorithm to analyze text collected from Google News, they were stunned by some of the results.  When the researchers asked the machine to complete the sentence “Man is to Computer Programmer as Woman is to X,” the machine answered “homemaker.”
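For readers curious how a machine “completes” such a sentence at all: word-embedding systems of the kind used in that research represent each word as a vector of numbers and answer analogies with simple vector arithmetic. The sketch below illustrates the idea with invented 3-dimensional toy vectors; real systems learn vectors with hundreds of dimensions from corpora such as Google News, so these particular numbers and the tiny vocabulary are purely hypothetical.

```python
# Toy illustration of word-embedding analogy arithmetic:
# "man is to programmer as woman is to ?" becomes
# vec(programmer) - vec(man) + vec(woman), answered by the
# nearest remaining word vector. Vectors here are invented.
import numpy as np

vectors = {
    "man":        np.array([ 1.0, 0.2, 0.1]),
    "woman":      np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([ 1.0, 0.9, 0.3]),
    "homemaker":  np.array([-1.0, 0.9, 0.3]),
    "doctor":     np.array([ 0.4, 0.8, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = vectors["programmer"] - vectors["man"] + vectors["woman"]
candidates = {w: v for w, v in vectors.items()
              if w not in ("man", "woman", "programmer")}
answer = max(candidates, key=lambda w: cosine(query, candidates[w]))
print(answer)  # -> homemaker (with these toy vectors)
```

The point is that the answer falls out of whatever word co-occurrence patterns the training text contained; the arithmetic itself has no opinions.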

According to some, all these results have nothing to do with the fact that machines simply have no reason to block out statistical realities that might make us uncomfortable.  Instead, according to people like health data expert Maxine Mackintosh, “The problem is mainly the fault of skewed data being used by robotic platforms. These big data are really a social mirror – they reflect the biases and inequalities we have in society.”

Is the data really at fault here, though?  In the original example we referenced, regarding the AI tasked with college admissions and bank loans, the machine was merely looking at raw statistics on graduation rates and loan repayments over multiple decades.  If it came out advocating a preference for white males, that would merely imply that group was more likely to finish school and repay its debts than some other demographics.  Just because that’s not a politically correct sentiment doesn’t make the data biased or wrong.  According to people like Maxine, however, it absolutely does, which is why, in the name of progress and change, we should simply abandon using such cold, hard numbers in our machines.  She states as much herself: “If you want to take steps towards changing that you can’t just use historical information.”
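The mechanism both sides are arguing over can be sketched in a few lines: a scoring rule fit purely to historical outcomes reproduces whatever patterns those outcomes contain, nothing more. The tiny loan-history records and the 50% approval threshold below are invented for illustration only and stand in for the decades of real data such systems are trained on.

```python
# Minimal sketch: a rule "trained" only on past outcomes simply
# echoes the rates found in those outcomes. All records invented.
from collections import defaultdict

# (group, repaid?) pairs standing in for decades of loan history
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training": estimate each group's repayment rate from the data alone
totals, repaid = defaultdict(int), defaultdict(int)
for group, outcome in history:
    totals[group] += 1
    repaid[group] += outcome

rates = {g: repaid[g] / totals[g] for g in totals}

# "Decision": approve groups whose historical rate clears a threshold
approve = {g for g, r in rates.items() if r >= 0.5}
print(rates, approve)  # group A: 0.75, group B: 0.25 -> approve only A
```

Whether the output is read as “unbiased statistics” or “a social mirror of past inequality” is precisely the dispute; the arithmetic is identical either way.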

Not content to merely accuse every human being on Earth of being racist, the regressive left has increasingly taken to labeling our machines as bigoted as well.  In May of last year, for example, the New York based ProPublica ran a hit piece called Machine Bias.  In it they took issue with a computer program used by the US court system for risk assessment, which they claimed was biased against black prisoners.  The Correctional Offender Management Profiling for Alternative Sanctions program was, according to them, “mistakenly” labeling black defendants as more likely to reoffend.  But where did the “mistakenly” come from?  The machine was simply looking at years’ worth of criminal justice data and coming to the only type of determination a computer can: a logical, fact-driven one.  Just because in this case that meant stating that, on average, black men were more likely to reoffend doesn’t mean it was mistaken.

The debate over racist AI is only likely to heat up in the coming years as the technology continues to advance.  So be prepared for more headlines like this one from last year, about the now infamous Microsoft “teen girl” AI experiment, Tay.


Liked or hated this? Make sure to let me know at @Jack_Kenrick or on the Squawker FB at https://www.facebook.com/squawkermedia/

Libertarian Nationalist & Political Scientist. I cover a variety of topics but mainly Islam, The Regressive Left, History, Internet Culture, Politics and Science. Follow @jack_kenrick and remember Taxation is Theft