Ethics & AI

Can AI Have a Moral Compass?

I recently read an article by Donghee Shin (2025) titled “Can AI have a sense of morality?” It challenges the popular obsession with “programming virtue into machines” and shifts the focus back to where it belongs: human responsibility.

As AI systems increasingly take part in life-and-death decisions, from triage in healthcare to autonomous-driving dilemmas (whose life is prioritized: the passengers’ or the pedestrians’?), we must confront a simple truth:

AI doesn’t have a conscience. It has an architecture.
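
To see what “architecture” means here, consider how a driving dilemma actually gets decided in software. The sketch below is entirely hypothetical (the class, the weight, and the numbers are my own invention, not drawn from any real vehicle stack), but it shows how the “moral choice” can reduce to a single parameter some human picked:

```python
from dataclasses import dataclass

# Deliberately simplistic: the "moral choice" is a number a human hard-coded.
@dataclass
class CollisionOption:
    label: str
    passengers_at_risk: int
    pedestrians_at_risk: int

# The vehicle's "ethics" lives entirely in this human-chosen weight:
# values above 1.0 would prioritize passengers over pedestrians.
PASSENGER_WEIGHT = 1.0

def choose(options: list[CollisionOption]) -> CollisionOption:
    """Pick the option with the lowest weighted harm: arithmetic, not deliberation."""
    def harm(o: CollisionOption) -> float:
        return PASSENGER_WEIGHT * o.passengers_at_risk + o.pedestrians_at_risk
    return min(options, key=harm)

print(choose([
    CollisionOption("swerve", passengers_at_risk=2, pedestrians_at_risk=0),
    CollisionOption("brake", passengers_at_risk=0, pedestrians_at_risk=1),
]).label)  # prints "brake"; whoever set the weight made the moral call
```

Change PASSENGER_WEIGHT and the “ethics” of the vehicle changes with it. The moral decision was made at design time, by a person.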


Two Key Insights from Shin’s Argument

1. The Myth of “Neutral” AI

We often assume that bias can be fully removed from data or algorithms. Shin argues the opposite:
bias is a constitutive part of human understanding.

AI is not a neutral tool; it is a mirror.
If our data is biased, our “ethical AI” will simply automate, perpetuate, and statistically refine the same inequalities we already have.
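
Here is a minimal sketch of that mirroring, with every detail invented for illustration (the groups, the 80/20 skew, and the 50% threshold are my assumptions): a screening model fitted to biased historical decisions reproduces the bias by construction, with no malice anywhere in the code.

```python
from collections import defaultdict

# Fictional historical hiring decisions: (group, was_hired). The skew is the point.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

seen = defaultdict(int)
hired = defaultdict(int)
for group, outcome in history:
    seen[group] += 1
    hired[group] += outcome  # True counts as 1

def predict(group: str) -> bool:
    """'Hire' whenever the group's historical hire rate exceeds 50%.

    No step here is malicious; the model just statistically refines the bias.
    """
    return hired[group] / seen[group] > 0.5

print(predict("A"), predict("B"))  # True False: yesterday's skew, automated
```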

2. Simulation Is Not Moral Agency

AI can simulate ethical behavior through logic, rules, and probability.
But it cannot experience empathy, understand moral stakes, or assume responsibility.
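
At its simplest, “simulated ethics” looks like the hypothetical rule table below (the rules and the default are mine, purely for illustration). The system emits verdicts that resemble moral judgment, yet nothing in it understands or answers for them:

```python
# A rule table standing in for a "conscience". Both rules and the default
# are invented here; the structure, not the content, is the point.
RULES = {
    "share_patient_data_without_consent": "refuse",
    "administer_triage_protocol": "allow",
}

def moral_verdict(action: str) -> str:
    # A dictionary lookup, not moral reasoning: the system has no idea
    # what is at stake, and no way to answer for the outcome.
    return RULES.get(action, "escalate_to_human")

print(moral_verdict("share_patient_data_without_consent"))  # refuse
print(moral_verdict("unforeseen_dilemma"))                  # escalate_to_human
```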

Because AI lacks intentionality, the ethical burden cannot be placed on the software. It remains with the developers, the institutions deploying the systems, and the society that governs them.


The Real Issue: Human Responsibility

We keep waiting for machines to become more human.
But perhaps the real challenge is for humans to become more responsible.

The goal is not to build a “perfectly moral machine.”
The goal is to build a system where the humans behind the machine are held to account.


Questions Worth Reflecting On

  1. How do we build trust in a system that has moral consequences but no moral awareness?

  2. How do we move from “artificial virtue” to “accountable infrastructure”?


Source

Shin, D. (2025). Can AI have a sense of morality? AI & Society.
https://doi.org/10.1007/s00146-025-02476-7