One person driving one car creates a negligible amount of pollution. The problem arises when we have lots of people driving cars. Might this kind of issue arise with AI use as well? What if everyone uses the same hiring or lending or diagnostic algorithm? My guest, Kathleen Creel, argues that this is bad for society and bad for the companies using these algorithms. The solution, in broad strokes, is to introduce randomness into the AI system. But is this a good idea? If so, do we need regulation to pull it off? This and more on today’s episode.
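To make the "introduce randomness" idea concrete before the conversation starts, here is a minimal sketch in Python of one way such a scheme could work: instead of always selecting the single top-scoring applicant, a system samples among applicants whose scores are nearly tied, so different deployments of the same model stop making identical decisions about the same people. The function name, the epsilon threshold, and the scores are illustrative assumptions, not details of Creel's actual proposal.

```python
import random

def pick_among_near_ties(candidates, scores, epsilon=0.05, seed=None):
    """Select uniformly at random from candidates whose score is within
    `epsilon` of the top score, instead of always taking the argmax."""
    rng = random.Random(seed)
    best = max(scores)
    near_ties = [c for c, s in zip(candidates, scores) if s >= best - epsilon]
    return rng.choice(near_ties)

# Hypothetical applicants and model scores: the first two are nearly tied.
applicants = ["applicant_a", "applicant_b", "applicant_c"]
scores = [0.91, 0.89, 0.74]

# A deterministic argmax would reject applicant_b everywhere the model is
# deployed; sampling among near-ties gives them a chance at some firms.
print(pick_among_near_ties(applicants, scores))
```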
Subscribe where you listen:
Spotify | Apple Podcasts | iHeart Radio | PocketCast | YouTube
Reid Blackman: Most of us in the AI world are looking for the best algorithm. We want the best algorithm for whatever it is we're trying to do, whether that's hiring people, interviewing people, or assessing mortgage applications.
And I take it you think there's something, or a number of things, problematic about the search for the best algorithm. Is that right?
Kathleen Creel: I'm very sympathetic to the basic idea that if we have a choice between algorithms, we should pick the best one on whatever metric we're using. As computer scientists, that's what we're trained to do: we're trained to optimize. And as philosophers, there are lots of cases where we think that epistemic rationality, making the best decision we can on the basis of some kind of knowledge, means we want to pick the best algorithm.
But the problem that I want to introduce and that some people in this field have been talking about recently is, what if everyone picks the same best algorithm?
Reid Blackman: So this is the kind of problem where, on an individual level, it's fine. It's not a big deal if you throw out your piece of plastic instead of recycling it, and it's okay if one person drives a car. The problem is when no one is recycling, or when tens of millions or hundreds of millions of people are driving cars. Then we have a collective problem. Is that the right analogy?
For the rest of the conversation, go to the podcast episode wherever you listen to your podcasts:
Subscribe where you listen:
Spotify | Apple Podcasts | iHeart Radio | PocketCast | YouTube
Kathleen Creel is an assistant professor at Northeastern University, cross-appointed between the Department of Philosophy and Religion and the Khoury College of Computer Sciences.
Her current research explores the moral, political, and epistemic implications of machine learning as it is used in non-state automated decision-making and in science. She also has ongoing projects on early modern philosophy and general philosophy of science.