Prejudice In Coding And How To Fix It
By Kathy Gong | Evolution obeys the laws of nature, yet revolution follows human intentions. Under natural law, the fundamental building blocks are particles such as quarks (or something even smaller, yet to be discovered), out of which we, and everything around us, are made. What are the building blocks of the revolutions in human history? The most recent to rise is the revolution of Artificial Intelligence (or super-intelligence). It is only a matter of time before the advancement of Artificial Intelligence rebuilds some of the existing orders of our society, just as the Industrial Revolution and the Information Revolution did in the past.
In my opinion, the building block in the revolution of Artificial Intelligence is coding.
You may disagree, because coding has been with us since the very beginning of machinery or, more strictly speaking, since the early days of computing. As US Army mathematicians put it: “Garbage in, garbage out” (GIGO).
However, compared with coding in the Information Age, which was mainly instructed to generate data, the essential difference now is that code is assembled into substantial amounts of intelligence, empowered to make decisions on behalf of human beings.
Sooner or later (in fact, it is already happening) most of the crucial moments where people interact with large bureaucratic systems will involve an algorithm and be decided by an algorithm, from applying for a credit card to getting a job, buying health insurance or receiving a medical checkup.
Even our personal decisions, such as making friends and dating someone, will inevitably be affected, consciously or unconsciously, because people are now being scored and quantified in datasets through code.
However, let us not forget that artificial brains can only be as ‘moral’ as the people coding them. The output can only be as ‘fair’ as the chosen data being input.
If prejudicial thinking is coded in, and the data of minorities is underrepresented or compromised, the DNA of the discriminatory mother sample will be copied, will evolve and will turn into new building blocks of our society, deeply embedded in practices that affect human rights.
Worst of all, many classes of artificial intelligence algorithms are black boxes, coded by a select few drawn from skewed demographics. Once an AI spots patterns in the initially selected data, recognizes similar patterns in new data and evolves through unsupervised learning, even that select few may no longer understand how it works.
Artificial Intelligence itself has neither common sense nor a purpose. When the blueprint of a building block is compromised, we will find ourselves in an age of “prejudice in, discrimination out”. With artificial intelligence, though, the discrimination will be programmed, automated and applied at scale, and it will be nearly impossible to appraise, because the AI’s decision-making processes will be too complex to deduce.
Yes, we need representation, not just of a select few but of a full cross-section of humanity. That is much easier said than done.
Let’s not hide the truth. First, the data used to calibrate machine-learning algorithms are abundant but disproportionate, because of the way they are generated and labelled. Second, algorithms are coded without a system of checks designed to restrict skewed results, which leaves the door open to abuse of data.
The last problem seems the least urgent, but it will perhaps have the most profound effects: unequal access to education, and education that is no longer relevant (we are still teaching our children knowledge that will be largely useless by the time they enter society in 10 years). Unfair access to education, and unequal quality of education, will further widen and accelerate inequality in the face of new technology, just as farming did in the Agricultural Revolution, power in the Industrial Revolution and the Internet in the Information Revolution.
Common sense and values need to be designed as a blueprint and coded into the system, while biases are detected and treated as bugs.
It will take the brightest minds and the efforts of all of humanity to tackle this global challenge, which will fundamentally affect us and the generations to come, and it needs to be addressed now. Even though we do not yet have a well-thought-out solution to such a complex problem, some existing ways of thinking can point us in that direction.
A few important AI frameworks are being built on the logic of Game Theory. Generative adversarial networks (GANs) are implemented as a system of two neural networks contesting each other, each trying to ‘fool’ the other in a zero-sum game framework. GANs were invented by my friend Dr Ian Goodfellow in 2014.
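To make that contest concrete, here is a minimal, hypothetical sketch of the adversarial loop in Python with NumPy (not Goodfellow’s original implementation, and deliberately tiny): a one-parameter “generator” learns to shift random noise toward real data, while a logistic-regression “discriminator” learns to tell real from fake, and each update works against the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples clustered around 4.
    return rng.normal(4.0, 1.0, n)

# Generator: a single learned parameter that shifts standard-normal noise.
gen_shift = 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w * x + b).
disc_w, disc_b = 0.1, 0.0

lr, n = 0.05, 64
for _ in range(200):
    z = rng.normal(0.0, 1.0, n)
    fake, real = z + gen_shift, real_batch(n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradients of binary cross-entropy with respect to w and b).
    d_real = sigmoid(disc_w * real + disc_b)
    d_fake = sigmoid(disc_w * fake + disc_b)
    disc_w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    disc_b -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1, i.e. try to fool the discriminator.
    d_fake = sigmoid(disc_w * (z + gen_shift) + disc_b)
    gen_shift -= lr * np.mean((d_fake - 1.0) * disc_w)

print(f"learned shift: {gen_shift:.2f}")  # the shift drifts toward the real data
```

The zero-sum structure is visible in the two opposing updates: the discriminator’s step lowers its error at the generator’s expense, and the generator’s step does the reverse, so neither can improve without hurting the other.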
This technique is now used to generate photographs that look superficially authentic to human observers, as well as to detect fake photographs that the human eye cannot distinguish from real ones.
The use of GANs is now common in image processing. However, with the recent development of new deep stubborn networks, the technique has the potential to move toward higher-level simulation of human cognitive tasks.
Suppose we build multiple generative networks, each constructing results from its input, where the inputs are selected and coded in preset orders. The results are shown to a paired discriminative network, which must then distinguish between relatively ‘well-balanced and fair’ and ‘disproportionate and biased’ results produced by the generative network. The generative networks try to ‘fool’ the discriminative networks, which are trained to recognize particular sets of patterns and models.
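A crude way to see what such a discriminative network would be asked to learn is to hand-code its job. The sketch below is a hypothetical, non-neural stand-in: it scores how far a result set’s group proportions stray from an equal split and labels the set with the two categories above. A trained discriminator would learn a far subtler version of this boundary from examples rather than from a fixed threshold.

```python
from collections import Counter

def imbalance_score(labels):
    """Largest deviation of any group's share from an equal share.

    0.0 means the groups are perfectly balanced; values near 1.0 mean
    a single group dominates the results.
    """
    counts = Counter(labels)
    share = 1.0 / len(counts)
    return max(abs(c / len(labels) - share) for c in counts.values())

def classify_result(labels, threshold=0.15):
    """Hand-coded stand-in for the discriminative network's verdict."""
    if imbalance_score(labels) > threshold:
        return "disproportionate and biased"
    return "well-balanced and fair"

# Hypothetical result sets, e.g. demographic groups represented in output.
print(classify_result(["A"] * 50 + ["B"] * 50))  # well-balanced and fair
print(classify_result(["A"] * 90 + ["B"] * 10))  # disproportionate and biased
```

The threshold here is an arbitrary illustrative choice; the point of the adversarial setup in the article is precisely that this boundary would be learned, not hand-set.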
Coding common sense, and advancing the power of neural networks and their ability to ‘think’ with humanity, is our first step towards a possible solution.
Revolution has a purpose: to build a society where ordinary people have a chance to achieve extraordinary things, and where “the elderly are cared for, friends are trusted and the young are well taught” (Confucius).
It is our duty to safeguard this purpose of humanity and ensure that it thrives through changing times.
About The Author Kathy Gong is Co-founder and CEO of WafaGames in Beijing. She was named one of MIT’s 35 Innovators under 35 in 2017 and has launched a series of companies in different industries, including a machine-learning company that created a robotic divorce lawyer called Lily and a robotic immigration lawyer called Mike.
She founded ai.Law, Seeway Investment and KG Inc and was a Foundation Board member of Global Shapers Community, an initiative of the World Economic Forum. Kathy became a chess master at age 13 and is a former national chess champion in China. She holds a bachelor’s degree in Economics & East Asia Studies from Columbia University.