Five reasons why a conversational artificial intelligence cannot be treated as a moral agent in psychotherapy
 
Marcin Paweł Ferdynus
The John Paul II Catholic University of Lublin
 
 
Submission date: 2023-04-02
Final revision date: 2023-07-25
Acceptance date: 2023-07-28
Online publication date: 2023-12-17
Publication date: 2023-12-17
 
 
Corresponding author: Marcin Paweł Ferdynus, The John Paul II Catholic University of Lublin
 
 
Arch Psych Psych 2023;25(4):26-29
 
ABSTRACT
Sedlakova and Trachsel present an analysis of a new therapeutic technology, conversational artificial intelligence (CAI), in psychotherapy. They suggest that CAI cannot be treated as an equal partner in the therapeutic conversation because it is not a moral agent. I agree that CAI is not a moral agent; moreover, I argue that CAI lacks at least five basic attributes or abilities (phenomenal consciousness, intentionality, ethical reflection, prudence, conscience) that would allow it to be regarded as one. It seems that the ethical assessment of the possibilities, limitations, benefits and risks associated with the use of CAI in psychotherapy requires determining what CAI is in its moral nature. In this paper, I attempt to show that CAI is devoid of essential moral elements and hence cannot be treated as a moral agent.
eISSN: 2083-828X
ISSN: 1509-2046