Wiser Technology Advice Blog
Is artificial intelligence racist?
07 September 2020
With the recent rise of the Black Lives Matter movement, conversations are taking place about the racial and gender biases embedded in artificial intelligence. So how have these biases crept into the algorithms that underpin artificial intelligence systems? And what can we do about them?
How can an artificial intelligence ‘think’ about individuals?
Artificial intelligence systems such as facial recognition are being used in surveillance and policing around the world, for example to identify criminal suspects in a crowd.
Some predictive policing tools draw links between places, events and crime rates to predict when and where crimes are most likely to take place. Police use this information to allocate more resources to these ‘hot spots’.
And there’s a plethora of other predictive justice systems that analyse demographic data such as age, gender, marital status, history of substance abuse and criminal record to predict who is most likely to commit future crime. These systems are being used in courts during pre-trial hearings or sentencing, predicting the likelihood that someone charged with an offence will reoffend.
How can an artificial intelligence be racist?
The problem with these policing and justice systems lies with the training data provided to the machine learning algorithms. Historical arrest rates in the United States show that you’re more than twice as likely to be arrested if you are black as if you are white, and a black person is five times more likely to be stopped without just cause than a white person.
Here in Australia, Aboriginal people and ethnic minorities are over-represented in the justice system and are more likely to experience over-policing.
While race is not explicitly used as a predictor in the training data, other demographic data such as socioeconomic background, education and place of residence – which closely track racial dividing lines – act as proxies.
As a result, these predictive systems return a high number of false positives because of the racial bias baked into the underlying data and algorithms: somebody whose attributes suggest they are a person of colour is judged more likely, according to the system’s flawed reasoning, to commit crime. The end result is a racist policing and justice system, regardless of the ethics and morals of those who use it.
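To make this concrete, here is a small illustrative sketch in Python. The data is entirely synthetic and the model is a plain logistic regression, so it is a toy, not any real predictive policing product; the made-up ‘postcode’ flag stands in for whatever proxy features a real system might use. The point is that racial bias flows through the proxy even though race is never given to the model.

```python
# Toy demonstration: a proxy feature carries bias even when the protected
# attribute is never shown to the model. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (0 = group A, 1 = group B), never given to the model.
group = rng.integers(0, 2, n)

# Proxy feature: a 'postcode' flag that correlates strongly with group.
postcode = (group + rng.normal(0, 0.3, n) > 0.5).astype(float)

# Underlying risk is drawn from the same distribution for both groups.
true_risk = rng.normal(0, 1, n)

# Ground truth (who actually reoffends) does not depend on group at all.
actual = (true_risk + rng.normal(0, 1, n) > 1.0).astype(int)

# Training labels come from biased historical arrests: group B is policed
# more heavily, so the same behaviour is more likely to be recorded.
labels = (true_risk + 0.8 * group + rng.normal(0, 1, n) > 1.0).astype(int)

# The model only ever sees the proxy and the risk score.
X = np.column_stack([postcode, true_risk])
pred = LogisticRegression().fit(X, labels).predict(X)

# False positive rate per group: flagged as high risk despite not reoffending.
for g in (0, 1):
    mask = (group == g) & (actual == 0)
    print(f"group {g}: false positive rate = {pred[mask].mean():.2f}")
```

Running this, the false positive rate for the over-policed group comes out markedly higher: the biased arrest labels and the correlated postcode do the work that an explicit race column would have done.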
There are many other high-stakes applications of predictive analytics that suffer from the same underlying biases. One example is the processing of home loan applications. Machine learning algorithms used for loan approvals look not only at the history of the person applying, but at the historic data for that person’s demographic. So a person from a poor neighbourhood may have an excellent personal credit record, but because the system has learnt that the majority of applications from that neighbourhood were rejected, it carries an implicit bias and will reject new applications from everyone in that neighbourhood.
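Again as a hedged sketch with invented numbers: if you train a model on historical approval decisions that were themselves discriminatory, the neighbourhood flag can end up outweighing an applicant’s own record.

```python
# Toy loan model trained on historically biased approval decisions.
# All data is synthetic; 'neighbourhood' 1 marks a historically redlined area.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

neighbourhood = rng.integers(0, 2, n)        # 1 = historically redlined area
credit_score = rng.normal(650, 80, n)        # individual credit history

# Historical decisions: applications from the redlined neighbourhood were
# rejected regardless of the applicant's personal record.
approved = ((credit_score > 600) & (neighbourhood == 0)).astype(int)

X = np.column_stack([credit_score, neighbourhood])
model = LogisticRegression(max_iter=5000).fit(X, approved)

# The same strong applicant (credit score 780), inside and outside the area.
for nbhd in (1, 0):
    p = model.predict_proba([[780, nbhd]])[0, 1]
    print(f"credit score 780, neighbourhood {nbhd}: approval probability {p:.2f}")
```

The applicant’s excellent score barely matters once the model has learnt that their neighbourhood ‘predicts’ rejection.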
Bias in artificial intelligence can become self-reinforcing when re-training data includes the results of a biased system. When biased historic data is used as the starting point for training machine learning algorithms, the system tends to reflect and reinforce existing societal biases. When the system is then retrained on data that includes its own outputs, those biases compound, to the further detriment of historically disadvantaged groups.
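A minimal numpy simulation makes the loop visible. The two districts below are imaginary and have identical true crime rates; the only difference is a small skew in the historical records. Because crime is only recorded where patrols are sent, and patrols are sent where crime has been recorded, the skew never corrects itself.

```python
# Toy feedback loop: identical true crime rates, but patrols follow recorded
# crime, and crime is only recorded where patrols go. Numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.1                       # same true crime rate in both districts
recorded = np.array([30, 20])         # historical records skewed by over-policing
patrols_total = 100

for year in range(1, 11):
    # Allocate patrols in proportion to cumulative recorded crime.
    share = recorded / recorded.sum()
    patrols = np.round(patrols_total * share).astype(int)
    # New recorded crime depends on where police look, not on any real
    # difference between the districts.
    discovered = rng.binomial(patrols, true_rate)
    recorded = recorded + discovered
    print(f"year {year}: patrol share for district 0 = {share[0]:.2f}")
```

District 0 keeps receiving the larger share of patrols year after year, and the data keeps ‘confirming’ that decision, even though nothing about the districts actually differs.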
If we are going to build AI systems that support desirable long-term societal outcomes, rather than systems that discriminate against racial minorities and people from socially disadvantaged communities, we need to understand when and why such negative feedback loops occur, and how to prevent them.
How can we ensure fair systems?
In June this year, in response to George Floyd’s murder at the hands of police, Amazon put a one-year hold on use of its facial recognition software by police in the US. Amazon stated, “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.” One day later, Microsoft also announced it would await federal regulation before selling facial recognition systems to police departments.
Unfortunately, the solution isn’t as simple as introducing some laws and government policies. The root of the problem lies in the systems themselves and, more importantly, in the training data fed into the systems.
I don’t like using the term ‘artificial intelligence’, as there is nothing particularly intelligent about the systems we are creating today. I prefer the term ‘machine learning algorithms’, or just ‘algorithms’, because these systems are akin to naïve toddlers who are just learning about the world. Like toddlers, machine learning algorithms learn only from the information they’re provided, and don’t necessarily have the skills to interpret it with the intelligence of an adult.
So, we need to be careful to ensure the training data we provide to machine learning systems doesn’t carry inherent historical biases, as in the example of justice systems unfairly targeting disadvantaged groups and neighbourhoods.
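A practical first step is to audit the training data before any model is fitted, for example by comparing outcome rates across groups. The sketch below uses pandas with placeholder column names and made-up rows; the point is the check itself, not the specific code.

```python
# Simple pre-training audit: compare positive-outcome rates across groups.
# Column names and data are placeholders for illustration only.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-label rate per group; large gaps flag historical bias."""
    return df.groupby(group_col)[label_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate (1.0 means parity)."""
    return rates.min() / rates.max()

# Made-up example data.
df = pd.DataFrame({
    "neighbourhood": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved":      [1,   1,   0,   0,   1,   1,   0,   1],
})

rates = outcome_rates_by_group(df, "neighbourhood", "approved")
print(rates)
print("disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```

A ratio well below 1.0 doesn’t prove the data is unusable, but it is a clear signal that the historical decisions need scrutiny before being used as training labels.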
One way to avoid this is to employ diverse hiring strategies and to seek input from people who don’t look or think like us. Teams building machine learning systems should be composed of people from different backgrounds and communities.
The world of technology development tends to be very insular. We get excited about the technology and forget that artificial intelligence systems are just blunt tools: when all you have is a hammer, everything tends to look like a nail.
But these systems have impacts on real people, so when creating and training them we need to consider diversity and include more experts in people, psychology, behaviour and history.
Without diversity in a team, it is impossible to understand and recognise our unconscious biases. We must strive for justice, not just equity or equality. That means working to solve the root causes of inequality and to remove the systemic barriers in our society. This can be achieved, in part, by keeping an eye out for machine learning that relies on biased data.
Want to know more?
Please get in touch if you’d like my help to explore how your business can take advantage of machine learning and artificial intelligence while avoiding the problems of inherently biased systems.
Get in contact today – I’m always happy to meet and have a chat over a coffee.
Further Reading
AI cameras to detect violence on Sydney trains, IT News 31 August 2020, available at https://www.itnews.com.au/news/ai-cameras-to-detect-violence-on-sydney-trains-552635?eid=3&edate=20200831&utm_source=20200831_PM&utm_medium=newsletter&utm_campaign=daily_newsletter
Amazon to block police use of facial recognition for a year, IT News 11 Jun 2020, available at https://www.itnews.com.au/news/amazon-to-block-police-use-of-facial-recognition-for-a-year-549132
Facebook says it will look for racial bias in its algorithms, MIT Technology Review 22 Jul 2020, available at https://www.technologyreview.com/2020/07/22/1005532/facebook-says-it-will-look-for-racial-bias-in-its-algorithms/
Inioluwa Deborah Raji, MIT Technology Review Innovators Under 35, available at https://www.technologyreview.com/innovator/inioluwa-deborah-raji/
Is Artificial Intelligence Racist?, Towards Data Science 3 Apr 2019, available at https://towardsdatascience.com/https-medium-com-mauriziosantamicone-is-artificial-intelligence-racist-66ea8f67c7de
Is Facebook Doing Enough To Stop Racial Bias In AI?, Forbes 5 August 2020, available at https://www.forbes.com/sites/charlestowersclark/2020/08/05/is-facebook-doing-enough-to-stop-racial-bias-in-ai/#60a67b311d66
Now Microsoft bans police use of facial recognition software, IT News 12 Jun 2020, available at https://www.itnews.com.au/news/now-microsoft-bans-police-use-of-facial-recognition-software-549183
Of course technology perpetuates racism. It was designed that way, MIT Technology Review 3 June 2020, available at https://www.technologyreview.com/2020/06/03/1002589/technology-perpetuates-racism-by-design-simulmatics-charlton-mcilwain/
People vs profit: How the Fourth Industrial Revolution is changing workplace norms, Monash Data Futures Institute 26 May 2020, available at https://news.itu.int/people-vs-profit-how-the-fourth-industrial-revolution-is-changing-workplace-norms-opinion/
Predictive policing algorithms are racist. They need to be dismantled, MIT Technology Review 17 Jul 2020, available at https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
Statistical Briefing Book – Law Enforcement and Juvenile Crime, Office of Juvenile Justice and Delinquency Program, available at https://www.ojjdp.gov/ojstatbb/crime/ucr.asp?table_in=2
The Inherent Racism of Australian Police: An Interview With Policing Academic Amanda Porter, Sydney Criminal Lawyers 11 Jun 2020, available at https://www.sydneycriminallawyers.com.au/blog/the-inherent-racism-of-australian-police-an-interview-with-policing-academic-amanda-porter/
When Bias begets bias: A source of negative feedback loops in AI systems, Microsoft Research Blog 21 Jan 2020, available at https://www.microsoft.com/en-us/research/blog/when-bias-begets-bias-a-source-of-negative-feedback-loops-in-ai-systems/