Bonicalzi, S. (2024). Sense of agency in human-human and human-computer interactions. In G.S. Marcello Ienca (Ed.), Developments in Neuroethics and Bioethics - Brains and Machines: Towards a Unified Ethics of AI and Neuroscience (pp. 85-100). Elsevier. https://doi.org/10.1016/bs.dnb.2024.02.006
Sense of agency in human-human and human-computer interactions
Sofia Bonicalzi
2024-01-01
Abstract
The sense of agency (SoA)—a matter that is the subject of lively debate across the philosophy and the cognitive (neuro)science of action—describes the subjective experience and judgement of controlling one’s intentional actions and their effects on the outside world. As such, it can be regarded as a fundamental underpinning of moral responsibility for the consequences of one’s behaviour. Empirical evidence has shown that the SoA, and consequently our subjective sense of responsibility, tends to be modulated by contextual or external factors such as the fluency of the action selection process and the outcome valence. Crucially, it is also influenced by the presence of other interacting human agents. In addition to these more traditional research topics, recent attention has been directed towards exploring interactions with artificial devices, sometimes perceived as having their own intentionality. This is particularly true for devices that are equipped with forms of artificial intelligence. This perception can also affect the human SoA and responsibility for action. In this chapter, I review this evidence and discuss the conceptual and empirical implications of this line of research.