Is our collective approach to ensuring AI doesn't go off the rails fundamentally misguided? Is it too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an "AI ethics ecosystem." Building one is a big lift, but doing without one is an even bigger problem.
Reid Blackman: You have an interesting proposition: that the way we're doing AI ethics is all sorts of screwed up. Do I have that right?
John Basl: Yeah, that's right. And I don't think just in the standard philosopher way where philosophers just think everyone's wrong about everything. I think I mean something slightly different.
Yeah, I guess the thing I would say is that it's not just when people deploy an AI tool that we see ethical impacts we're unhappy about. Even when we're trying to do ethical AI, even when we're putting resources into it, we keep running into the same ethics mistakes over and over again.
I'm happy to chat about what I think the recurring ones are, but the real thing we're wrong about is the lessons we're learning from those mistakes. We keep drawing a set of lessons from them, and I just think they're the wrong lessons, because they're oriented toward specific solutions when the real solution is what I would call building an ecosystem to manage these problems.
For the rest of the conversation, listen to the full episode wherever you get your podcasts:
Subscribe where you listen:
Spotify | Apple Podcasts | iHeart Radio | PocketCast | YouTube
John Basl is Assistant Professor of Philosophy at Northeastern University. He works in normative philosophy and applied ethics, with an emphasis on the ethical and epistemological challenges raised by emerging technologies.
Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.