One rule for all? Governing AI for a Global Society

Lauren Toulson
Published in CARRE4
6 min read · Mar 9, 2021


As globalisation increases, ethical principles for AI are taking on the same global outlook. In doing so, variations in cultural values and ideologies are overlooked, and AI is instead designed around the values of the global West. This article explores the problems with a global standard for AI ethics and asks whether human morals can, or should, be reduced to principles at all.

Bias and ethics — How do we define what is and isn’t biased?

Ethics is difficult to define because, despite some general rules and principles that govern our behaviour as collective societies, such as human rights, many ethical choices are made on a personal basis. Moral philosophers still have no consensus on what is and isn’t morally correct.

On top of this, relativists hold that societies and cultures differ in their grounding morals and traditions, and therefore require different ethical frameworks on a national basis. This is why, for instance, some countries retain the death penalty while others do not. So when ethics becomes part of the technical process of designing new data-based technologies, should we design to meet global ideas of ethics, or accept that each society may require its own regulations?

The problem with AI Secularism

In Wired’s first issue of the year, Sparsh Ahuja wrote about the lack of non-Western cultural and religious ethics being designed into artificial intelligence. Philosophers have wrestled with moral dilemmas of this kind for millennia, and today the ‘trolley problem’ is applied to AI through homogenised, secular ethics to tackle design questions for self-driving cars and life-or-death decisions in healthcare. From the perspective of the traditional ethics that ground Islamic beliefs, the question the trolley problem poses for self-driving cars should not be whom the car should prioritise, but how to cause no harm at all. The article also raises the issue of healthcare that aims to ‘artificially’ extend life (with equipment like resuscitators), a goal that is not in line with religious thought.
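
To make that contrast concrete, here is a minimal, purely hypothetical sketch (all names and numbers are invented for illustration, not drawn from Ahuja’s article) of how the choice of ethical framework changes not just the parameters of a self-driving car’s decision rule, but the shape of the algorithm itself: a consequentialist rule ranks harms, while a strict no-harm rule can refuse the dilemma altogether.

```python
# Hypothetical illustration only: two decision rules for the same dilemma.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    expected_harm: float  # predicted casualties if this action is taken

def utilitarian_choice(actions: list[Action]) -> Action:
    """Homogenised, secular framing: 'who should the car prioritise?'
    Answered by minimising total expected harm."""
    return min(actions, key=lambda a: a.expected_harm)

def no_harm_choice(actions: list[Action]) -> Optional[Action]:
    """Framing closer to a strict 'cause no harm at all' principle:
    never select an action known to harm anyone; if none exists,
    return None rather than optimise between harms."""
    harmless = [a for a in actions if a.expected_harm == 0]
    return harmless[0] if harmless else None

options = [Action("swerve", 1.0), Action("stay course", 2.0)]
print(utilitarian_choice(options).name)  # "swerve": the lesser harm wins
print(no_harm_choice(options))           # None: the question itself is refused
```

The point is not that either function is correct, but that a designer must pick one, and the pick silently encodes a moral tradition.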


The ethical decisions being programmed into technologies that handle such life-or-death events are not necessarily the questions we should be focusing on. Ethicists, social scientists and data engineers alike should instead ask whether their technology is ethically in line with the culture that uses it. Is it ethical for a technology to make decisions on behalf of its users based on globally standardised values rather than an individual’s religious and ethical principles?

For Merve Hickok, founder of AI Ethicist and ‘Ethical data practices’ panelist at our upcoming AI Summit, ethical data practice “is about applying ethical reasoning and decisions across the cycle of data from collecting to processing to using to disposing”.

West Versus East: How Individualism and Communitarianism pose issues

The regulation and protection of privacy is another area in which AI ethics differ between nations. The West and Silicon Valley operate on libertarian values and highly prize the protection of the individual, despite recent issues of surveillance capitalism, which exploits the data we provide to make profit and to further shape our data-giving behaviours. In contrast, as Zuboff argues, China did not have a word for privacy until the 1990s, and is well known for its strict controls over citizens’ internet liberties through its online firewalls. Both exploit user privacy, but the West does so for money, the East for control.

Without Western democratic values, China’s use of ethics when governing AI takes a different approach. For instance, the OECD formed non-binding AI principles with a broad range of countries, but China did not endorse them.

Sociologist Anthony Giddens proposes new ethical principles to govern both Eastern and Western technologies, one of which is to “respect right to privacy”. However, to what extent can Western governing agencies decide what is ethically ‘right’ to impose globally, and where do we draw the line between respecting cultural differences and safeguarding basic human rights?

“Privacy is not just about one’s information. It is about the way we share our collective spaces and how we safeguard autonomy and freedom of thought in the age of AI,” says Ivana Bartoletti, a speaker at our upcoming Privacy, Regulation and Surveillance Capitalism panel discussion.

Governance

Another question looms over this topic: should these ethical frameworks be governed by the private or the public sector? Governance by the private sector alone is thought to be insufficient for technologies deployed at large scale, such as facial recognition. Technologies at this scale could instead be regulated by governments, especially as tech companies thrive on a lack of oversight. Companies that create their own governing guidelines may prioritise commercial frameworks that do not fully address the ethical concerns at hand. That said, unlike governments, tech companies are more likely to have the internal expertise needed to develop comprehensive regulatory frameworks. Governments could therefore support the private sector in creating regulatory frameworks by promoting good practice and sustainable development.

But can moral judgements be reduced to a set of frameworks and programmable instructions? Hasselberger uses the example of the ‘Ethics Einstein’, a hypothetical app that makes moral decisions for us, to illustrate the ethical challenges of automating complex moral dilemmas:

“We would be shocked if we learned that a friend “outsourced” a serious moral decision and carried out its instructions without personally understanding their justification or felt confident that her action must have been morally good.”

He further explains:

“Perception and interpretation are necessary for recognising features of the social world like bullying, cruelty, humiliation, generosity, courage, civility, and the like — skills that themselves rely on our felt human sense of the moral meaning”.

Returning to the earlier discussion, these skills rely not only on human sense but also on cultural and religious backgrounds. When considering how to write ethical judgements into self-driving cars and other automated machines, we need to ask how such complex moral decision-making can be computed, and whether it should be at all.
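
As a small, equally hypothetical sketch of that point (the framework labels and verdicts below are invented for illustration), even a single programmed judgement, such as whether ‘artificially’ extending life counts as a benefit or a harm, flips depending on which ethical framework the designer hard-codes. Whoever selects that parameter has already made the moral decision on the user’s behalf.

```python
# Hypothetical illustration: the same action receives opposite verdicts
# under different (invented) framework labels; choosing the framework
# is itself the moral decision, made before the user is ever consulted.
FRAMEWORKS = {
    "secular-utilitarian": {"artificially_extend_life": "benefit"},
    "traditional-religious": {"artificially_extend_life": "harm"},
}

def judge(action: str, framework: str) -> str:
    """Return the verdict the hard-coded framework would give."""
    return FRAMEWORKS[framework][action]

print(judge("artificially_extend_life", "secular-utilitarian"))    # benefit
print(judge("artificially_extend_life", "traditional-religious"))  # harm
```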

As a concluding thought, Junaid Qadir, Human-Centered AI Researcher and panelist at our AI Summit, explains:

“With artificial intelligence increasingly infiltrating our lives, a renewed focus on what makes us human is the need of the day. This requires a focus on the question of values and purposes and on inculcating moral character and institutionalising ethical social norms.”

This blog has aimed to summarise just some of the vital discussions about AI and its governance that society faces today. Other critical issues include how data and algorithms fuel inequality and bias, covered in the first blog of this series, and how our online activities and bad platform design contribute to the erosion of democracy, covered in the second. The next blog will examine why big health data means we need to rethink privacy.

This blog was written by Lauren for Digital Bucket Company, which is hosting its full-day AI Summit on 30th March 2021, with leaders in tech and government around the world joining to discuss the key issues in AI: bias and discrimination, privacy and governance, data ethics, women in tech and the future of AI. You can book your free ticket here.


Lauren Toulson

Studying Digital Culture, Lauren is an MSc student at LSE and writes about Big Data and AI for Digital Bucket Company. Tweet her @itslaurensdata