Our Brave New World

The challenges of programming morality into artificial intelligence

 

With AI permeating every imaginable aspect of our lives, the future conceived by 20th-century dystopian literature no longer seems implausible. While those earlier narratives focused largely on red-eyed robots bent on destroying humanity, the issues underpinning AI ethics now extend far beyond Asimov’s three laws of robotics or Orwellian fears of thought policing. Actions as simple as a Google search raise moral quandaries that most people couldn’t have imagined just a decade ago.

 

Human decision-making mechanisms keep societies cohesive and functional. As AI becomes a more active participant in society, we expect it to conform to both the spoken and unspoken norms that govern our understanding of morality. But these systems don’t always follow our moral codes.

 

In 2017, Amazon had to abandon an AI-powered recruitment system that seemed to favor men over women, replicating the gender bias pervasive in the tech industry. Even tools adopted by the justice system to make sentencing fairer have been criticized for bias against people of color.

 

Of course, discrimination isn’t the only ethical concern AI poses. Mass surveillance techniques have been condemned as breaches of privacy, and the use of AI in warfare draws repeated criticism.

 

The solution seems simple enough: program robots to make moral decisions and reject immoral ones. However, many people are convinced that a machine can never replicate the human capacity to emote and empathize. This belief drives the common misconception that building ethical machines requires translating moral subjectivity into precise lines of code, a task assumed to be impossible. Were that the case, the hope of creating ethical machines would remain a pipe dream. Fortunately, it isn’t.

 

Dr. Dekai Wu

According to Dr. Dekai Wu, an ethicist who teaches Computer Science and Engineering at the Hong Kong University of Science and Technology (HKUST), the assumption that morality and machines are incompatible couldn’t be further from the truth.

 

“The real world is not rule-based. As soon as you try to write more than a few hundred lines of code to describe the real world, you find yourself completely tied up in knots by paradoxes,” he says.

 

As a result, probabilistic machine learning approaches have overtaken rule-based imperative approaches, and have even allowed scientists to map emotions onto machines. That means AI could realistically be taught to make moral decisions grounded in empathy. The problem, then, isn’t that ethics and algorithms are inherently irreconcilable; it is something far more complex.
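
To see why rules alone buckle, consider a toy contrast in Python. This is a purely illustrative sketch, not Wu’s work or any real system: hand-written rules accumulate contradictory exceptions, while a probabilistic model learns a graded judgment from labeled examples.

```python
# A toy contrast, purely illustrative (not any real moral-reasoning system):
# hand-written rules vs. a probability estimated from labeled examples.
from collections import Counter

# Rule-based: every edge case demands another rule, and the rules
# quickly contradict one another, as Wu describes.
def rule_based_acceptable(action: str) -> bool:
    if action == "lie":
        return False
    if action == "lie to protect someone":  # already an exception to the rule above
        return True
    raise KeyError(f"no rule covers: {action}")  # the real world never fits

# Probabilistic: estimate P(acceptable | description) from examples,
# so a novel case gets a graded judgment instead of a crash.
def train(examples):
    counts = {True: Counter(), False: Counter()}
    for words, label in examples:
        counts[label].update(words)
    return counts

def p_acceptable(words, counts):
    vocab = set(counts[True]) | set(counts[False])
    score = {}
    for label in (True, False):
        total = sum(counts[label].values())
        p = 1.0
        for w in words:
            # naive Bayes with add-one smoothing
            p *= (counts[label][w] + 1) / (total + len(vocab))
        score[label] = p
    return score[True] / (score[True] + score[False])

counts = train([(["lie"], False), (["lie", "protect"], True), (["steal"], False)])
print(p_acceptable(["lie", "comfort"], counts))  # a graded probability, not a verdict
```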

 

“AI is an exponential amplifier of all things human. It amplifies the good in us – that’s what we are trying to do – but it also amplifies the bad in us,” Wu explains.

 

With the advancement of machine learning, AI has taken the form of a ‘black box’: it can behave in ways its programmers cannot foresee, and its reasoning isn’t apparent or open to inspection. The most frightening part of looking inside this black box is anticipating what might be staring back: a reflection of our shortcomings as a society.

 

AI doesn’t exist in a vacuum; bias is fed into it through training data that replicates existing human prejudices. Amazon’s hiring mechanism failed because the AI was trained on datasets of resumes submitted mostly by men, so it internalized the idea that male candidates were preferable. Similarly, judicial forecasting of recidivism discriminates because its predictive algorithms are trained on datasets that reflect higher arrest rates for people of color.
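
A stripped-down illustration of that failure mode, with invented numbers rather than Amazon’s actual data or model: score resumes by the historical hire rate of the terms they contain, and the old skew becomes the new policy.

```python
# Toy historical hiring records: (resume_term, was_hired).
# The numbers are invented to mirror a male-skewed hiring history.
history = (
    [("chess_club", True)] * 40 + [("chess_club", False)] * 10 +
    [("womens_team", True)] * 5 + [("womens_team", False)] * 45
)

# A naive "model" that scores each term by its historical hire rate
# faithfully reproduces the bias baked into the data.
hire_rate = {}
for term in ("chess_club", "womens_team"):
    outcomes = [hired for t, hired in history if t == term]
    hire_rate[term] = sum(outcomes) / len(outcomes)

print(hire_rate)  # {'chess_club': 0.8, 'womens_team': 0.1}
# Nothing told the model that gender matters; the data did.
```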

 

Wu is hopeful that future AI will be self-correcting. Research is already underway on machine learning mechanisms that detect bias in data before models can internalize unethical assumptions. However, eliminating the replication of bias within AI does nothing to tackle bias as a standalone societal issue. That gap suggests it isn’t the machines whose morality we need to police, but the people behind them.
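
One simple form such a bias check can take (an illustrative sketch; the research systems Wu alludes to are far more sophisticated) is the “four-fifths rule” from disparate-impact analysis: flag the data if one group’s favorable-outcome rate falls below 80% of another’s.

```python
def disparate_impact_ratio(outcomes, groups, favorable=True):
    """Ratio of favorable-outcome rates between worst- and best-treated groups."""
    rates = {}
    for g in sorted(set(groups)):
        group_outcomes = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = group_outcomes.count(favorable) / len(group_outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Toy screening data: group A receives favorable outcomes 3x as often as group B.
outcomes = [True] * 30 + [False] * 20 + [True] * 10 + [False] * 40
groups = ["A"] * 50 + ["B"] * 50

ratio, rates = disparate_impact_ratio(outcomes, groups)
print(rates)  # {'A': 0.6, 'B': 0.2}
if ratio < 0.8:  # the four-fifths rule of thumb
    print(f"possible bias detected: impact ratio {ratio:.2f}")
```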

 

Recently, Google dissolved its Advanced Technology External Advisory Council [of which Wu was a member] after facing backlash over the inclusion of Kay Coles James, the president of a notoriously anti-LGBTQ right-wing think tank. The Partnership on AI, a collaborative organization formed by Google, Amazon, IBM, Facebook, and Microsoft to oversee AI ethics, has likewise faced scrutiny for its mostly Caucasian, male staff.

 

Smart weaponry developer Axon has also drawn criticism for its ethics board, which includes numerous members of law enforcement, an institution with a well-documented record of misusing the very public safety technology the company produces. Such localized rule-making is rampant within the sphere of AI ethics, even as scientists advocate a globalized approach.

 

That is not to say the efforts made by these companies are entirely futile. With AI still at a nascent stage of development, it’s essential to formulate guidelines for its deployment that are both generalized and enforceable. While private companies aren’t fully transparent about how they assess these internal ethics boards, they have started a long-overdue conversation about ethics that was somehow overlooked in the global race to adopt AI systems.

Eric Thain

 

Eric Thain, President of the AI Society in Hong Kong, remains skeptical. He advocates for codes of ethics that are not only fully transparent but also formulated by independent institutions.

 

“[Internal codes of conduct] are very self-serving. Any time there are controversies surrounding the big tech giants, they bring out their codes of ethics almost like a shield.” 

 

The public sector also has a significant role to play in this space. According to Wu, the most urgent action it can take at this stage is to fund research to dissect the paradoxes that obstruct the development of ethical machine learning systems. 

 

He stresses the importance of questioning the assumptions that govern internal codes of ethics, and analyzing the unintended consequences of the seemingly prosaic tasks that we assign to machines every day. 

 

Wu also proposes an overhaul of the education system to address the false dichotomy between the humanities and the sciences, which leads to myopia within the general population. He emphasizes that scientists and engineers need to study the humanist disciplines to tackle the repercussions that are coming into view as AI becomes “inevitably ascendant.” 

 

“At a time where humanity needs the tools to understand what AI will do to our culture, and to ask ourselves the most fundamental questions about what humanity is and what we want it to be, we cannot be crippling our population intellectually,” he adds.

 

Perhaps individuals play the most critical role in the ethics of AI. The machines we use day to day, from our phones to our smart coffee makers, are so influential that they have effectively become members of society. Improper use of such systems could lead to disruptions that threaten human dignity and rights.

 

It’s imperative to understand that technology can embed certain assumptions through its interactions with us. The first step toward the harmonious coexistence of man and machine is to confront the questions that arise as we teach machines our values, many of which will require challenging the assumptions that drive our own morality.

 
