The Relational Turn in AI Ethics

  • Anna Puzio
  • Apr 1
  • 1 min read

David Gunkel, Joshua Gellers, and I have published a paper in the journal AI & Society in which we respond to recent criticisms of the relational turn in AI ethics. You can read it here:

Gunkel, D., Puzio, A., & Gellers, J. (2026). Dangerous gatekeeping. AI & Society. https://doi.org/10.1007/s00146-025-02843-4


"Philosophers love a hierarchy. Nothing seems to fit the needs and desires of the moral imagination more than the promise of a clean, orderly ladder of moral worth with (not surprisingly) humans at the top, a few “higher” animals somewhere beneath, plants and rocks at the bottom, and now, far outside the frame, artificial agents politely waiting their turn. In addition, it is in the context of AI that this impulse to police the moral boundary has returned with renewed urgency. Many commentators, including Adrianna de Ruiter in her recently published essay ‘Dangerous Liaisons’ (2025), warn that relational approaches to AI moral status are dangerous, misleading, or lacking philosophical rigor. They insist that we must firmly anchor moral considerability in some intrinsic, scientifically respectable property (i.e., sentience, consciousness, phenomenal experience, and vulnerability) and that anything less—especially when it comes to AI—is dangerous."

