Businesses are afraid to say “ethics”
And if they can’t say it, they can’t do it. (Ok, fine, “do ethics” doesn’t make a ton of sense, but the tagline sounds punchy, no?)
After twenty years in academia, ten of which were spent as a professor of philosophy researching, teaching, and publishing on ethics, I attended my first non-academic conference in 2018. It was sponsored by a Fortune 50 financial services company and the topic was “sustainability.” Having taught courses on environmental ethics, I thought it would be interesting to see how corporations think about their responsibilities vis-à-vis their environmental impacts. But to my surprise and puzzlement, no one talked about the environment. Instead, I found presentations on educating women around the globe, lifting people out of poverty, and contributing to the mental and physical health of all. ‘The topic of this conference isn’t sustainability,’ I thought. ‘It’s ethics. Why in the world are they using the word “sustainability”?’
It took me an embarrassingly long time to figure out that “sustainability” in the corporate and non-profit world doesn’t mean “practices that don’t destroy the environment for future generations,” as I had supposed. Instead, it means “practices in pursuit of ethical goals” combined with the empirical claim that those practices promote the bottom line. Why businesses couldn’t simply say “ethics is good for the bottom line,” I did not understand.
This behavior, refraining from using the word “ethics” and replacing it with some other term, repeats itself throughout the business world. My contention is that this is bad for business and bad for society at large. Our world, corporate and otherwise, is confronted with a growing mountain of ethical problems, spurred on by technologies that bring us fresh new ways of realizing our familiar ethical nightmares. These issues do not disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively.
Tell Me You Care About Ethics Without Telling Me You Care About “Ethics”
At that conference I also learned about investing in companies that align with Environmental, Social, and Governance criteria, or ESG investing (once called “impact investing”). The goal of ESG is to financially support companies that pursue ESG goals, which are remarkably similar to sustainability goals. “ESG,” I learned, actually means “investment practices in pursuit of avoiding ethical risks (which are a subset of sustainability goals)” combined with the empirical claim that those practices protect the bottom line.
Finding ethics in semantic dark corners of corporations does not end here. It’s as though someone issued a challenge years ago to businesses around the world: “tell me you’re committed to ethics without telling me you’re committed to ‘ethics’.”
In the world of product design I ran into “human-centric” design. ‘This is good,’ I thought. ‘The aardvarks have had their day.’
Never mind the fact that different humans have different goals, including unethical ones, and we need to choose which humans to serve. Never mind also how utterly bizarre it is to talk about ‘humans’ instead of ‘people’ as though product designers are members of an alien race intent on creating delightful UX for their newfound pets.
On the actual and virtual walls of corporations I found a commitment to “values” and being a “values-driven” company. Also a “purpose-driven” company. Also a “mission-driven” company. Never mind that the values often have nothing to do with ethics. “Customer obsessed” and “innovation” aren’t ethical values; they are economic values that employers want employees to internalize because that sort of thing drives the bottom line.
Also pay no attention to the fact that a purpose or mission can be completely amoral (putting immoral to the side). At Facebook, “[our mission is] to give people the power to build community and bring the world closer together. People use Facebook to stay connected with friends and family, to discover what's going on in the world, and to share and express what matters to them.” But which communities? Any of them? White supremacist communities? Hackers? Disinformation spreaders? Myanmar military personnel? The mission statement doesn’t say. But still, leaders of values/purpose/mission-driven companies seem to be quite proud, morally proud, of being values/purpose/mission-driven.
Sensitive to the shifting zeitgeist, members of the Business Roundtable, a collection of roughly 200 of the most powerful CEOs in the country, proclaimed in 2019 that it’s high time to transition from “shareholder capitalism” to “stakeholder capitalism.” The idea is that the interests of those besides shareholders who are impacted by businesses, including employees, customers, clients, communities, and society at large, should be considered in business operations. In other words, instead of pursuing shareholder interests come what may to anyone and everyone else in the world, businesses should take other people’s welfare or well-being into account. If selfishness is a vice in the individual and its virtuous contraries include compassion, generosity, and a general concern that justice be done, then an exclusive focus on maximizing shareholder value is the new vice of business, and its alleged contrary, stakeholder capitalism, speaks to the new virtuous corporation. But don’t say “ethics.” It’s “stakeholder interest,” though which stakeholder interests count, how important they are, and how they get incorporated into operations and products, no one has said.
Finally, the world of AI ethics has grown tremendously in prominence over the past five-plus years, especially since ChatGPT came on the scene. Corporations heard the call, “we want AI ethics!” And in another funhouse-mirror echo we have, “Yes, we, too, are for Responsible AI!” It’s this transition, from “AI ethics” to “Responsible AI,” that I want to focus on. It throws into relief two kinds of problems that arise when you leave “ethics” behind. The first has to do with bad translations of the word “ethics” leading to ethical blind spots. The second has to do with burying ethics in a pile of other concerns.
Moving from AI Ethics to Responsible AI
In 2018, when I started my ethics consultancy, there were murmurings of AI ethics among ethicists (by which I mean graduate students and professors of philosophy who specialize in ethics). I also found that IEEE (pronounced “I triple E”), the professional standards organization for engineers, had started a “global initiative” on AI ethics (for which I briefly volunteered). At the time, a handful of corporations were talking about the issue. But as the ethical implications of AI became clearer, mostly through headlines detailing various disasters (e.g. a hospital algorithm that recommended healthcare practitioners pay more attention to white patients than to sicker Black patients, Goldman Sachs being investigated for using AI to set discriminatory credit limits for women, a self-driving Uber killing a pedestrian, and so on), corporations began to take heed. But a very weird thing happened. They didn’t call it “AI ethics.” Instead they called it “Responsible AI.”
With the new moniker came additional baggage. Responsible AI included ethics in some fashion, but companies added to the scope of “RAI” things like cybersecurity, engineering excellence, and regulatory compliance. AI ethics, in other words, was determined to be a subset of Responsible AI concerns.
Many companies went even further backwards. “AI ethics” wasn’t even an area of interest anymore. Instead you find companies committed to “fairness” and “safety.” Also “transparency” and “accountability.”
Put it all together and Responsible AI usually entails a(n alleged) commitment to the following:
Fairness
Safety
Transparency
Accountability
Cybersecurity
Engineering excellence (often referred to as “Model Robustness”)
Regulatory compliance, especially with privacy regulations
There are two moves I want to highlight in all this.
Move 1: They’ve translated “ethics” to fairness, safety, transparency, and accountability.
Move 2: They’ve added cybersecurity, engineering excellence, and regulatory compliance to the list of what they cover.
Both of these moves are moves away from ethics, not a deeper pursuit of “responsibility.”
The Problem with Move 1: Ethical Gaps
The first problem is a conceptual or metaphysical problem that gives birth to a practical problem. To bring out my point, I want to start with something more mundane: bug spray.
Suppose you’re in the business of creating bug spray and you tell your Chief Bug Spray Scientist, “Let’s make some bug spray.” Then he asks you what you mean by “bugs” and you say “ants, bees, and hornets.” So off he goes and creates the bug spray.
Fast forward a few weeks and you see some ladybugs in your house. You spray them but, of course, the spray has no effect. Eventually it turns into an infestation. You go back to your Chief Bug Spray Scientist, ask him why your bug spray isn’t working, and he explains: “I asked what you meant by ‘bugs’ and you said ants, bees, and hornets. You didn’t say anything about ladybugs.”
Now suppose I tell you to create Ethics Spray…
You see where I’m going. If you tell me ethics is just about fairness, safety, transparency, and accountability, that’s all your spray is going to solve for. And if there are ethical issues outside of those, those ethical risks will be realized, just like the ladybug infestation. That’s because you did a bad job when you translated ethics into those four categories.
The Problem with Move 2: Ethical Dodges
The second problem with the move to Responsible AI is that, in adding cybersecurity, engineering excellence, and regulatory compliance, corporations are led to focus on those elements of Responsible AI instead of the ethics component. The reason has to do with what corporations actually do when they endeavor to create a Responsible AI program. I’ve seen this again and again with the companies I work with.
Here's what happens: they put together a committee or working group to design the program and they staff it with cybersecurity experts, engineering/data/AI experts, and legal and compliance leaders. There is never an ethicist involved. So when these experts in their respective fields are asked to develop the Responsible AI program, I’ll give you one guess as to what they focus on and what they ignore. Here’s a hint: they tend not to focus on those things about which they know little to nothing. That’s not to blame them; when I work with clients, I don’t focus on what I don’t know, either. At any rate, fast forward months into the design of the Responsible AI program and they think it looks great, but that’s because they don’t see the ethical gaps. They were focused on cybersecurity, engineering excellence, and regulatory compliance. And so once again, the move to Responsible AI leads, intentionally or not, to dodging the ethical issues.
If We Don’t Overcome the Fear of Ethics, We Won’t Make Progress
Words are tools. We use them to think and communicate. There’s nothing sacred about them and so there’s nothing sacred about the word “ethics.” If there are some other tools lying around, like “sustainable,” “human-centric,” or “purpose,” and they do the job just as well, go ahead and use them. Don’t have a doorstop? Use that book instead. Just get the job done. So if “Responsible AI” works, then great, use it. But in my experience, it doesn’t get the job done. In fact, insofar as it takes the eye off the ethics ball, it positively undermines getting the job done.
Here's the thing, though. I’ve been fighting this battle for years, and I’m pretty sure I’ve lost. Businesses don’t want to talk about ethics. And I know what you’re thinking: it’s because they’re greedy and just care about money. While that’s certainly part of the story, and I love an evil enemy just as much as the next person, it’s not what I mostly see. What I mostly see is nonculpable ignorance of how to think about ethics in a business context, combined with the fear of trying to understand a new space. But if ethical progress is going to be made, the fear has to be left behind so that ethics can be taken seriously.
Want to Read More Like This?
If you liked this piece then you might like two Harvard Business Review articles I’ve written.
The first is a critique of the move from “AI ethics” to “fairness in AI.” It’s probably one of my best HBR pieces: https://hbr.org/2021/04/if-your-company-uses-ai-it-needs-an-institutional-review-board
The second discusses some of what I say here, though not in as much detail, and further articulates why not saying “ethics” makes it harder for executives to tackle the problems they face. https://hbr.org/2023/05/how-to-avoid-the-ethical-nightmares-of-emerging-technology
Subscribe to my podcast, Ethical Machines, wherever you listen:
Spotify | Apple Podcasts | iHeart Radio | PocketCast | YouTube