Is an AI chatbot responsible for the death of this teenager?
Tragic as it is, we shouldn’t rush to blame the chatbot.
Teen suicide is terrible. The emotional suffering of Sewell Setzer III was real, and we should all wish it had been addressed in a way that didn’t lead to him taking his life. And in the face of tragedy we want simplicity and certainty. We want a bad guy. We want a single target at which we can direct our outrage. We want to hold someone accountable for his death and our sadness *right now*, because it makes the sadness easier to cope with, if only by letting anger distract us from it.
But I think the only responsible thing to say at this point is that we (the general public) don’t know what happened. We don’t know whether the chatbot increased or decreased the odds of his taking his own life. Consider what the NY Times reported:
“Earlier this year, after he started getting in trouble at school, his parents arranged for him to see a therapist. He…was given a new diagnosis of anxiety and disruptive mood dysregulation disorder…But he preferred talking about his problems with Dany [his AI chatbot companion].
“Sewell wrote in his journal: ‘I like staying in my room so much because I start to detach from this “reality,” and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.’”
This should lead us to think that the problems began before he started talking to his AI companion, and so “Dany” isn’t the cause of those problems. In fact, it seems like “she” made him feel better. And when he expressed suicidal thoughts, the chatbot replied, “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.”
That sounds like an attempt to dissuade him from suicide.
To be clear, we may have any number of objections to this relationship with the chatbot (e.g. that chatbots should be designed so that no one, and certainly no child, can fall in love with them, or that interactions with them should be more limited). But those objections are different from the claim that the chatbot is responsible for his suicide.
We should also keep in mind that there are likely a lot of facts, a lot of context, that we simply don't know. Again, this should lead us to look more closely, not to make a judgment before we actually know what's going on.
My career is focused on the ethical risks of AI. I take them very seriously. But we shouldn’t let our commitment to well-being and justice morph into the kind of zeal that keeps us from soberly assessing what the real dangers are and who is to blame for them.