Università Cattolica del Sacro Cuore

Cattolica International

Ethics in the age of AI: responsibility, bias, and human values

 
In an era where AI is closing in on human-like cognition, is this technology about to spark the defining ethical debate of the next decade – and forever change how we define progress?

 

Professor Ciro De Florio challenges us to view AI not as a simple imitation of human intelligence, but as a continuum of adaptive systems that pose both age-old and novel ethical questions.
As AI evolves, so must our global discourse on responsibility, inclusivity, and openness. By recognising both the promise and the limits of AI, we can better harness its strengths to serve the collective good – without mistaking a powerful tool for a cure-all.

 

Professor De Florio, your academic interest in AI goes back well before the widespread adoption of large language models such as ChatGPT. AI’s rapid evolution raises philosophical questions about what it means to be human. In your opinion, what ethical risks arise when AI starts to mimic human intelligence?

A key point is that we often presume AI imitates our intelligence, which is a rather anthropocentric view: we look at AI and say, “It’s imitating us.” However, many experts in engineering or cognitive science regard intelligence as a capacity for massive adaptation to an environment in pursuit of certain goals. Under this definition, even simple devices show basic forms of intelligence. A thermostat displays very basic intelligence by adapting to temperature changes. A robot vacuum shows more intelligence since it can navigate rooms and clean them. A cat shows even more intelligence, and a human shows the most. So rather than AI being just an imitation of human thinking, it exists on a spectrum of systems that can adapt to their environments – from very simple to very complex.

When talking about ethical risks, I would distinguish between:

1. Common Technology Risks: The kind of misuse or unintended consequences we see with any technological innovation. Fire, for example, can cook meals but also burn down a village.

2. AI-Specific Risks: Challenges heightened by AI’s autonomy and scale.

Autonomy matters because it complicates accountability. If a system makes decisions automatically – screening job applications, suggesting medical diagnoses, or performing large-scale data analysis – there can be real difficulty in assigning responsibility if something goes wrong. This is different from using a conventional tool, where it is easier to see the chain of cause and effect.

Another factor is scale: a biased AI tool in one organisation is problematic, but if everyone uses that same flawed tool, the bias becomes a large-scale societal issue. So, while bias, misuse, or misinformation are not new ethical concerns, AI’s speed and autonomy magnify those existing problems.

 

Some AI leaders predict that, within the next decade, AI will help solve complex challenges – from cancer treatments to climate change. Do you share their optimism?

I would recommend being cautious with broad, long-term predictions – especially sweeping claims that AI will solve climate change or cure 95% of cancers. When technology is experiencing a boom, there is a tendency to extrapolate current trends indefinitely and assume they’ll yield miraculous results.

AI can certainly help with data-intensive tasks like protein folding or drug discovery. If we focus on a specific, concrete application – say, using AI to accelerate the analysis of molecular interactions – then yes, that’s where the technology truly shines. But the statement “AI will eradicate cancer” is simply too general. Cancer encompasses many different diseases requiring multiple interventions – biological, medical, and social.

Similarly, we cannot solve climate change just by telling an AI system, “Fix global carbon emissions.” Climate change is a complex mix of technology, geopolitics, and social behaviour. AI tools can assist with energy optimisation, predicting weather patterns, or designing better batteries, but they do not eliminate human conflict, inequality, or the need for political cooperation.

In short, AI is invaluable as a sophisticated problem-solving aid, not a magic wand.

 

You are part of a group of professors at Università Cattolica who are running a multidisciplinary introductory AI course for all first-year students. Given your academic background, how do you see the role of universities in shaping the ethical discourse surrounding AI? What responsibilities do institutions like Università Cattolica have in educating future leaders on the ethical implications of AI and its societal impact?

Universities should think of AI not merely as a single subject – like a stand-alone course – but as a transversal competency that pervades many fields. AI is ultimately about how we process and interpret information. In that sense, it’s akin to “writing” or “mathematics”: useful in nearly every domain.

At the same time, we must ensure our students (and faculty) understand how AI works. Technical awareness is crucial for responsibly using AI – if you give it the wrong prompt or apply it to the wrong task, you’ll get flawed outcomes. From this standpoint, I believe universities should promote:

1. Technical literacy – Teaching data fundamentals, algorithmic concepts, and the nature of training sets.

2. Ethical sensitivity – Encouraging students to grapple with questions about fairness, transparency, and societal impact.

3. Critical thinking – Helping future professionals distinguish genuine innovation from marketing hype.

Without this balanced approach, we risk a scenario where only a small group understands AI deeply, and everyone else treats it as a mysterious black box.

 

AI is global, yet cultural perspectives on ethics vary widely. How important is international collaboration for setting AI standards or guidelines?

It’s very important, but also extremely challenging. “Ethics” itself can mean different things in different cultural contexts. Even broadly shared values like “inclusiveness” can diverge sharply from one society to another.

Additionally, AI research is dominated by a few large companies with enormous resources. Most advanced AI algorithms are proprietary. This raises concerns over whether international guidelines can genuinely influence development – or whether market forces and corporate interests will remain decisive.

Yes, we can strive for global frameworks, much like nuclear treaties or climate accords, but true consensus demands honest dialogue and a willingness to reconcile diverse viewpoints. You can’t simply impose one region’s definition of “ethical AI” everywhere. Universities can contribute by championing open research, open-source platforms, and cross-cultural dialogues. At the same time, we must recognise that AI research is heavily concentrated in just a few tech hubs, and we need to think about how to protect the public interest without stifling legitimate innovation.

 

Is there a movie or a book you would recommend for understanding AI’s deeper challenges or opportunities?

I recommend the film Her (directed by Spike Jonze). It shows how an AI chatbot, designed to converse with humans, sparks questions about emotional attachment – people may start attributing true thoughts and feelings to a system that simply sounds human.

As for books, classic science fiction can be quite insightful. Do Androids Dream of Electric Sheep? by Philip K. Dick explores the nature of identity and empathy in a world where androids and humans coexist.

Asimov’s work examines the concept of “programmed ethics” and how it might conflict with human intentions. These narratives remain strikingly relevant because they remind us that advanced technology can amplify not just our hopes but also our biases and ethical dilemmas.

Bio: Ciro De Florio is an Associate Professor of Logic and Philosophy of Science at Università Cattolica del Sacro Cuore in Milan.

His research focuses on mathematical logic, philosophy of logic and mathematics, and philosophy of science.

He is particularly interested in higher-order logical systems, characterisations of the standard model of natural numbers, formalisations of logical consequence, and the logical analysis of the truth predicate.

Currently, he is working on systems of pragmatic logic, characterising the nexus of logical consequence in terms of conceptual grounding, and models of temporal logic.

De Florio is also a member of the Humane Technology Lab at the University, where he explores the philosophical implications of artificial intelligence and technology.