<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Reid Blackman: Podcast]]></title><description><![CDATA[A podcast that goes deep on tech, ethics, and society. 
]]></description><link>https://reidblackman.substack.com/s/ethical-machines-podcast</link><image><url>https://substackcdn.com/image/fetch/$s_!tyQM!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17ecf89a-c52a-4ac8-934d-7130271cc4ae_1080x1080.png</url><title>Reid Blackman: Podcast</title><link>https://reidblackman.substack.com/s/ethical-machines-podcast</link></image><generator>Substack</generator><lastBuildDate>Mon, 04 May 2026 21:00:03 GMT</lastBuildDate><atom:link href="https://reidblackman.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Reid Blackman]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[reidblackman@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[reidblackman@substack.com]]></itunes:email><itunes:name><![CDATA[Reid Blackman]]></itunes:name></itunes:owner><itunes:author><![CDATA[Reid Blackman]]></itunes:author><googleplay:owner><![CDATA[reidblackman@substack.com]]></googleplay:owner><googleplay:email><![CDATA[reidblackman@substack.com]]></googleplay:email><googleplay:author><![CDATA[Reid Blackman]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Ethical Nightmare Challenge: Chapters 6-7]]></title><description><![CDATA[Chapter Six; Dream Teams for Ethical Nightmares]]></description><link>https://reidblackman.substack.com/p/the-ethical-nightmare-challenge-chapters-841</link><guid isPermaLink="false">https://reidblackman.substack.com/p/the-ethical-nightmare-challenge-chapters-841</guid><pubDate>Sun, 03 May 2026 12:03:37 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/195897239/1659d76e67520ed3f2336e2f82de0543.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Chapter Six; Dream Teams for Ethical Nightmares</strong></p><ul><li><p>Three Types of ENC Teams </p></li><li><p>ENC Teams as Emergency Response </p></li><li><p>Tools for Teams </p></li><li><p>ENC Teams in Bloom </p></li></ul><p><strong>Chapter Seven; ENC: An Approach So Flexible It Makes Simone</strong></p><ul><li><p>Biles Look Like C-3PO</p></li><li><p>Hands Off!</p></li><li><p>You Do You</p></li><li><p>Marrying ENC to Existing Practices</p></li><li><p>Folding Existing Resources into ENC Teams</p></li><li><p>Folding ENC Teams into Existing Resources</p></li><li><p>The Ethical Nightmare Challenge for... Everyone</p></li></ul><p>Website: <a href="https://www.amazon.com/dp/B0GRC1ZPYX">https://www.amazon.com/dp/B0GRC1ZPYX</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[The Ethical Nightmare Challenge: Chapters 4-5]]></title><description><![CDATA[Chapter 4; The Standard Approach to Responsible AI Is Crumbling]]></description><link>https://reidblackman.substack.com/p/the-ethical-nightmare-challenge-chapters-aff</link><guid isPermaLink="false">https://reidblackman.substack.com/p/the-ethical-nightmare-challenge-chapters-aff</guid><pubDate>Fri, 01 May 2026 12:14:49 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/195897143/dd57d4e0bfc132e157fbd5e4cb8f3c6b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Chapter 4; The Standard Approach to Responsible AI Is Crumbling</strong></p><ul><li><p>The Standard Approach</p></li><li><p>The Madness in the Method</p></li><li><p>Turn That Smile Upside Down</p></li><li><p>Cats and Tigers, Oh My!</p></li></ul><p><strong>Chapter 5; Why I Like Nightmares and You Should, Too</strong></p><ul><li><p>The Power of Nightmares</p></li><li><p>What Good Nightmares Look Like</p></li><li><p>And Now the Moment You&#8217;ve Been Waiting For</p></li></ul><p>Website: <a href="https://www.amazon.com/dp/B0GRC1ZPYX">https://www.amazon.com/dp/B0GRC1ZPYX</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[The Ethical Nightmare Challenge: Chapters 2-3]]></title><description><![CDATA[Watch now | Chapter Two; Things Get Complicated with Generative AI]]></description><link>https://reidblackman.substack.com/p/the-ethical-nightmare-challenge-chapters</link><guid isPermaLink="false">https://reidblackman.substack.com/p/the-ethical-nightmare-challenge-chapters</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 30 Apr 2026 12:15:57 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/195897031/3724eab247a5bd8ac2e1c029fbabf374.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Chapter Two; Things Get Complicated with Generative AI</strong></p><ul><li><p>So Now We&#8217;re Going to Lose My Grandmother, Again</p></li><li><p>The Creators&#8217; Version of a Rough Draft</p></li><li><p>The Creators Align (Kind of)</p></li><li><p>BigBusinessAI</p></li><li><p>The Master Prompter</p></li><li><p>The Changing AI Risk Landscape</p></li></ul><p><strong>Chapter Three; Humans Had a Good Run, but Now I Bring You... 
AI Agents!</strong></p><ul><li><p>How to Build an AI Agent</p></li><li><p>AI Agent Ecosystems</p></li><li><p>Agentic Sources of Ethical Nightmares</p></li><li><p>The Classic &#8220;But Humans Make Errors, Too!&#8221; Objection</p></li><li><p>The Ground Exploded Beneath Our Feet</p></li><li><p>After the Earthquake</p></li></ul><p><strong>Interlude: Get a Grip, Man!</strong></p><p>Website: <a href="https://www.amazon.com/dp/B0GRC1ZPYX">https://www.amazon.com/dp/B0GRC1ZPYX</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[The Ethical Nightmare Challenge]]></title><description><![CDATA[Watch now | My new book was released just two days ago.]]></description><link>https://reidblackman.substack.com/p/the-ethical-nightmare-challenge</link><guid isPermaLink="false">https://reidblackman.substack.com/p/the-ethical-nightmare-challenge</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 23 Apr 2026 11:15:49 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/195049430/1188f325081a186a02223b80533928f4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>My new book was released just two days ago. It&#8217;s about how insanely complex the AI risk landscape has become, why the standard approach to Responsible AI is broken, and it develops a novel approach to avoiding the worst of AI. In this episode I offer you the Introduction and Chapter 1 of the audiobook. If you don&#8217;t laugh at least once, I consider the book a failure.</p><p>Website: <a href="https://www.amazon.com/dp/B0GRC1ZPYX">https://www.amazon.com/dp/B0GRC1ZPYX</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Creating Universal Standards for AI Risk]]></title><description><![CDATA[With Patrick Sullivan]]></description><link>https://reidblackman.substack.com/p/creating-universal-standards-for</link><guid isPermaLink="false">https://reidblackman.substack.com/p/creating-universal-standards-for</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 16 Apr 2026 11:36:13 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/194329607/131ab33fea542f58d8b4e9f00e0eecf6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>ISO 42001 sounds serious. It's got a serious (and boring) name, it's backed by 60+ countries, and some companies seek ISO 42001 certification. But is the standard any good? Does it actually prevent harms? Can we have generic standards? And how can the standards be flexible enough to account for the fast-paced change in the AI world? I&#8217;m a bit of a skeptic about all this, but my guest, Patrick Sullivan, VP of Strategy and Innovation at A-lign, is a true believer. And he makes a strong case. You decide if my skepticism is unwarranted.</p><p>Website: <a href="https://www.a-lign.com/">https://www.a-lign.com/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[ Existentialist Risk]]></title><description><![CDATA[With Ariela Tubert and Justin Tiehen]]></description><link>https://reidblackman.substack.com/p/existentialist-risk-d83</link><guid isPermaLink="false">https://reidblackman.substack.com/p/existentialist-risk-d83</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 09 Apr 2026 11:15:43 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193594513/fcbe9732517528b8bd1051279ac9dc5f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Technologists are racing to create AGI, artificial general intelligence. They also say we must align the AGI&#8217;s moral values with our own. But Professors Ariela Tubert and Justin Tiehen argue that&#8217;s impossible. Once you create an AGI, they say, you also give it the intellectual capacity needed for freedom, including the freedom to reject your given values. 
<em>Originally aired in season 2.</em></p><p>Website: <a href="https://www.justintiehen.com/">https://www.justintiehen.com/</a></p><p>Website: <a href="https://www.arielatubert.com/">https://www.arielatubert.com/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/existentialist-risk/id1751550186?i=1000760403073&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000760403073.jpg&quot;,&quot;title&quot;:&quot;Existentialist Risk&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2794000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/existentialist-risk/id1751550186?i=1000760403073&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-04-09T05:15:34Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/existentialist-risk/id1751550186?i=1000760403073" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Could AI Have Moral Worth?]]></title><description><![CDATA[With Josh Gellers]]></description><link>https://reidblackman.substack.com/p/could-ai-have-moral-worth</link><guid isPermaLink="false">https://reidblackman.substack.com/p/could-ai-have-moral-worth</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 02 Apr 2026 15:02:47 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192952358/580db2755fd2e5c903f2e8671063bb27.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>My guest today, Josh Gellers, Dean at the University of North Florida, argues that AI has moral worth. More specifically, he thinks that AI has been used to create new biological organisms that meet the criteria for moral worth. Does that mean that AI itself has moral worth? Should we think that if something is not natural it lacks moral worth? 
All this and more in today&#8217;s episode.</p><p>Website: <a href="https://www.joshgellers.com/">https://www.joshgellers.com/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><p></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/could-ai-have-moral-worth/id1751550186?i=1000758848808&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000758848808.jpg&quot;,&quot;title&quot;:&quot;Could AI Have Moral Worth?&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3249000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/could-ai-have-moral-worth/id1751550186?i=1000758848808&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-04-02T12:19:39Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/could-ai-have-moral-worth/id1751550186?i=1000758848808" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Don’t Believe the Hype About AI Job Displacement]]></title><description><![CDATA[With Kate Vredenburgh and Lauren Wong]]></description><link>https://reidblackman.substack.com/p/dont-believe-the-hype-about-ai-job</link><guid isPermaLink="false">https://reidblackman.substack.com/p/dont-believe-the-hype-about-ai-job</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 26 Mar 2026 11:15:49 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192124464/75e75c4321d5a689a872061448e0b76f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>My guests today - Professor Kate Vredenburgh and VR specialist Lauren Wong - argue that there are at least two strong reasons for calming down: first, AI isn&#8217;t good enough to replace us at our jobs. Second, even if they were, it&#8217;s up to us to develop AI in a way that supports rather than replaces us. 
We also talk about whether AI adoption is suffering for the same reasons the metaverse was never successful: we&#8217;re failing to appreciate how to get people to justifiably buy in to the technology.</p><p>Website: <a href="https://katevredenburgh.com/">https://katevredenburgh.com/</a> </p><p>Website: <a href="https://uk.linkedin.com/in/lauren-wong-b8030024">https://linkedin.com/in/lauren-wong</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><p></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/dont-believe-the-hype-about-ai-job-displacement/id1751550186?i=1000757427893&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000757427893.jpg&quot;,&quot;title&quot;:&quot;Don&#8217;t Believe the Hype About AI Job Displacement&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2476000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/dont-believe-the-hype-about-ai-job-displacement/id1751550186?i=1000757427893&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-03-26T05:15:28Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/dont-believe-the-hype-about-ai-job-displacement/id1751550186?i=1000757427893" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Does Social Media Diminish Our Autonomy?]]></title><description><![CDATA[With Elettra Bietti]]></description><link>https://reidblackman.substack.com/p/does-social-media-diminish-our-autonomy</link><guid isPermaLink="false">https://reidblackman.substack.com/p/does-social-media-diminish-our-autonomy</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 19 Mar 2026 12:15:41 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191394831/070fe23ff16a6a2088275b556361ccee.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and to come back for more. And we don&#8217;t really know the law of the digital lands, since how the algorithms influence how we relate to each other online in unknown ways. Then again, don&#8217;t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? 
What the right balance is and how we can encourage or require greater autonomy is our topic of discussion today.<em> Originally aired in season two.</em></p><p>Website: <a href="https://www.elettrabietti.com/">https://www.elettrabietti.com/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen: </strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/does-social-media-diminish-our-autonomy/id1751550186?i=1000756075194&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000756075194.jpg&quot;,&quot;title&quot;:&quot;Does Social Media Diminish Our Autonomy?&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2947000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/does-social-media-diminish-our-autonomy/id1751550186?i=1000756075194&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-03-19T04:05:19Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/does-social-media-diminish-our-autonomy/id1751550186?i=1000756075194" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[How AI Robs Us of Meaning]]></title><description><![CDATA[With Sven Nyholm]]></description><link>https://reidblackman.substack.com/p/how-ai-robs-us-of-meaning</link><guid isPermaLink="false">https://reidblackman.substack.com/p/how-ai-robs-us-of-meaning</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 12 Mar 2026 11:15:34 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190659662/e20adce2f9857dc492f5b57eef0ee2e4.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Much of what we find fulfilling in life isn&#8217;t the having but the doing. It&#8217;s the process of working through a problem, taking action, doing what needs to be done. But that meaning may be on the verge of being greatly diminished; so contends my guest, Sven Nyholm, Professor of Ethics of AI at LMU Munich. I push back in various ways: how real and/or imminent is this threat, really? 
And who is responsible for staving it off?</p><p>Website: <a href="https://www.philosophie.lmu.de/de/personenuebersicht/kontaktseite/sven-nyholm-4f56fa3b.html">https://www.philosophie.lmu.de/de/personenuebersicht/kontaktseite/sven-nyholm-4f56fa3b.html</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/how-ai-robs-us-of-meaning/id1751550186?i=1000754781894&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000754781894.jpg&quot;,&quot;title&quot;:&quot;How AI Robs Us of Meaning&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3063000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/how-ai-robs-us-of-meaning/id1751550186?i=1000754781894&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-03-12T04:05:51Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/how-ai-robs-us-of-meaning/id1751550186?i=1000754781894" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Should Anthropic Have Allowed Autonomous Weapons Systems?]]></title><description><![CDATA[With Michael C. Horowitz]]></description><link>https://reidblackman.substack.com/p/should-anthropic-have-allowed-autonomous</link><guid isPermaLink="false">https://reidblackman.substack.com/p/should-anthropic-have-allowed-autonomous</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 05 Mar 2026 12:14:36 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/189894613/5d527b7ab694070c5aa8587c4915acc7.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Anthropic just got the axe from the U.S. government for refusing to allow the Department of Defense (War?) to use Claude for autonomous weapons systems and mass surveillance. For the first 15 minutes of this conversation with Michael Horowitz - professor at UPenn, Senior Fellow for Technology and Innovation at the Council on Foreign Relations, and formerly Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities and Director of the Emerging Capabilities Policy Office at the DoD - we talk explicitly about Anthropic vs. the U.S. government. Why Anthropic did it, why this is more about personality than policy, and more. 
In the remaining 45 minutes you&#8217;ll hear a replay of an episode Michael and I did back in October, in which Michael defends the functional and ethical importance of potentially using AI for autonomous weapons systems.</p><p>Website: <a href="https://live-sas-www-polisci.pantheon.sas.upenn.edu/people/standing-faculty/michael-c-horowitz">https://live-sas-www-polisci.pantheon.sas.upenn.edu/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/should-anthropic-have-allowed-autonomous-weapons-systems/id1751550186?i=1000753160228&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000753160228.jpg&quot;,&quot;title&quot;:&quot;Should Anthropic Have Allowed Autonomous Weapons Systems?&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:4157000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/should-anthropic-have-allowed-autonomous-weapons-systems/id1751550186?i=1000753160228&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-03-05T04:10:39Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/should-anthropic-have-allowed-autonomous-weapons-systems/id1751550186?i=1000753160228" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[How an Attorney Leads Responsible AI Practices]]></title><description><![CDATA[With James Desir]]></description><link>https://reidblackman.substack.com/p/how-an-attorney-leads-responsible</link><guid isPermaLink="false">https://reidblackman.substack.com/p/how-an-attorney-leads-responsible</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 26 Feb 2026 12:15:41 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/189204587/b1b74f34ccf5d0c87e1e55256e210f2b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>What does it look like for a non-technologist to lead Responsible AI practices at a Fortune 500 company? Today I talk with James Desir, Senior corporate counsel at Progressive Insurance and a key leader in their RAI efforts. We discuss how he found his way into this space, how he persuades data scientists to treat him as a thought partner instead of a blocker, and how to demonstrate the ROI of RAI to fellow executives. 
We also talk about the increasing complexity of AI and how a small RAI team can handle the scale of the problem.</p><p>LinkedIn Profile: <a href="https://www.linkedin.com/in/james-desir-436ab1a">https://www.linkedin.com/in/james-desir-436ab1a</a></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/how-an-attorney-leads-responsible-ai-practices/id1751550186?i=1000751694105&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000751694105.jpg&quot;,&quot;title&quot;:&quot;How an Attorney Leads Responsible AI Practices&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2798000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/how-an-attorney-leads-responsible-ai-practices/id1751550186?i=1000751694105&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-02-26T05:10:12Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/how-an-attorney-leads-responsible-ai-practices/id1751550186?i=1000751694105" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[We May Have Only 2-3 years Until AI Dominates Us]]></title><description><![CDATA[With Olle H&#228;ggstr&#246;m]]></description><link>https://reidblackman.substack.com/p/we-may-have-only-2-3-years-until</link><guid isPermaLink="false">https://reidblackman.substack.com/p/we-may-have-only-2-3-years-until</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 19 Feb 2026 12:14:30 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/188454760/55775ed59674b204251a53ef1ab2b5c5.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>I tend to dismiss claims about existential risks from AI, but my guest thinks I  - or rather we - need to take it very seriously. His name is Olle H&#228;ggstr&#246;m and he&#8217;s a professor of mathematical statistics at Chalmers University of Technology in, Sweden, and a member of the Royal Swedish Academy of Sciences. He argues that if AI becomes more intelligent than us, and it will, then it will dominate us in much the way we dominate other species. But it&#8217;s not too late! 
We can and we must, he argues, change the trajectory of how we develop AI.</p><p>Website: <a href="https://www.math.chalmers.se/~olleh/">https://www.math.chalmers.se/</a></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/we-may-have-only-2-3-years-until-ai-dominates-us/id1751550186?i=1000750432704&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000750432704.jpg&quot;,&quot;title&quot;:&quot;We May Have Only 2-3 years Until AI Dominates Us&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2764000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/we-may-have-only-2-3-years-until-ai-dominates-us/id1751550186?i=1000750432704&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-02-19T05:05:25Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/we-may-have-only-2-3-years-until-ai-dominates-us/id1751550186?i=1000750432704" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Let AI Do the Writing]]></title><description><![CDATA[With Luciano Floridi,]]></description><link>https://reidblackman.substack.com/p/let-ai-do-the-writing</link><guid isPermaLink="false">https://reidblackman.substack.com/p/let-ai-do-the-writing</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 12 Feb 2026 12:15:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/187714669/601f5f915c4ae1011bebf30d110243aa.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>We hear that &#8220;writing is thinking.&#8221; We believe that teaching all students to be great writers is important. All hail the essay! But my guest, philosopher Luciano Floridi, professor and Founding Director of the Digital Ethics Center, sees things differently. Plenty of great thinkers were not also great writers. We should prioritize thoughtful and rigorous dialogue over the written word. 
As for writing, perhaps it should be considered akin to a musical instrument; not everyone has to learn the violin&#8230;</p><p>Website: <a href="https://www.philosophyofinformation.net/">https://www.philosophyofinformation.net/</a></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/let-ai-do-the-writing/id1751550186?i=1000749382542&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000749382542.jpg&quot;,&quot;title&quot;:&quot;Let AI Do the Writing&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3043000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/let-ai-do-the-writing/id1751550186?i=1000749382542&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-02-12T05:30:42Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/let-ai-do-the-writing/id1751550186?i=1000749382542" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><strong>Subscribe where you listen:</strong></em></p><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[What AI Risk Needs to Learn From Other Industries]]></title><description><![CDATA[With Jason Stanley]]></description><link>https://reidblackman.substack.com/p/what-ai-risk-needs-to-learn-from</link><guid isPermaLink="false">https://reidblackman.substack.com/p/what-ai-risk-needs-to-learn-from</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 05 Feb 2026 12:11:04 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/186943058/91d3cd47d2dbca8f0b2ad15949e72cbf.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>We&#8217;ve been doing risk assessments in lots of industries for decades. For instance, in financial services, cybersecurity, and aviation, there are lots of ways of thinking about what the risks are and how to mitigate them at both a microscopic and macroscopic level. My guest today, Jason Stanley of ServiceNow, is probably the smartest person I&#8217;ve talked to on this topic. 
We discussed the three levels of AI risk and the lessons he draws from those other industries that we crucially need in the AI space.</p><p><em><strong>Jason Stanley</strong> is </em>Head of AI Research Deployment; Director of Applied AI Research at ServiceNow.</p><p><strong>Jason&#8217;s LinkedIn</strong>: https://www.linkedin.com/in/jasonstanley2/</p><p><strong>Jason&#8217;s Substack (which is truly excellent)</strong>: </p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:182954210,&quot;url&quot;:&quot;https://jasonstanley.substack.com/p/start-here-a-map-of-the-blog-and&quot;,&quot;publication_id&quot;:7084453,&quot;publication_name&quot;:&quot;Jason Stanley&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!WBso!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1162034d-20cd-4661-9ea0-ecbacfa986ba_1251x1251.png&quot;,&quot;title&quot;:&quot;Start Here: A map of the blog and what to read first&quot;,&quot;truncated_body_text&quot;:&quot;This is the hub post: what this publication is about, who it&#8217;s &#8230;&quot;,&quot;date&quot;:&quot;2025-12-30T13:39:18.531Z&quot;,&quot;like_count&quot;:1,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:8866899,&quot;name&quot;:&quot;Jason Stanley&quot;,&quot;handle&quot;:&quot;jasonstanley&quot;,&quot;previous_name&quot;:&quot;Jason&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2062ce0c-c956-4be6-8e0c-2ff8bb616838_198x198.png&quot;,&quot;bio&quot;:&quot;I lead an AI R&amp;D team at ServiceNow focused on agent security, system-level evaluation, trustworthy AI. Experience across product, research, policy, strategy.&quot;,&quot;profile_set_up_at&quot;:&quot;2024-04-30T22:59:05.803Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-12-22T01:40:24.789Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:7229743,&quot;user_id&quot;:8866899,&quot;publication_id&quot;:7084453,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:7084453,&quot;name&quot;:&quot;Jason Stanley&quot;,&quot;subdomain&quot;:&quot;jasonstanley&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Deep, practical writing on agent security, system-level evaluation, and portfolio/systemic risk from real deployments &#8212; patterns, failures, and architectures that actually hold up in real organizations. 
New writing several times per week.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1162034d-20cd-4661-9ea0-ecbacfa986ba_1251x1251.png&quot;,&quot;author_id&quot;:8866899,&quot;primary_user_id&quot;:8866899,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-11-27T03:06:11.195Z&quot;,&quot;email_from_name&quot;:&quot;Jason Stanley | Agent Security &amp; Trust&quot;,&quot;copyright&quot;:&quot;Jason&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://jasonstanley.substack.com/p/start-here-a-map-of-the-blog-and?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!WBso!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1162034d-20cd-4661-9ea0-ecbacfa986ba_1251x1251.png"><span class="embedded-post-publication-name">Jason Stanley</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Start Here: A map of the blog and what to read first</div></div><div class="embedded-post-body">This is the hub post: what this publication is about, who it&#8217;s &#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">4 months ago &#183; 1 like &#183; Jason Stanley</div></a></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p></p><p></p><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/what-ai-risk-needs-to-learn-from-other-industries/id1751550186?i=1000748314913&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000748314913.jpg&quot;,&quot;title&quot;:&quot;What AI Risk Needs to Learn From Other Industries&quot;,&quot;podcastTitle&quot;:&quot;Ethical 
Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3484000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/what-ai-risk-needs-to-learn-from-other-industries/id1751550186?i=1000748314913&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-02-05T04:53:50Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/what-ai-risk-needs-to-learn-from-other-industries/id1751550186?i=1000748314913" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Can AI Do Ethics?]]></title><description><![CDATA[With Tristram McPherson]]></description><link>https://reidblackman.substack.com/p/can-ai-do-ethics-f7b</link><guid isPermaLink="false">https://reidblackman.substack.com/p/can-ai-do-ethics-f7b</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 29 Jan 2026 12:14:23 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/186096283/b451fc3c5f3dc24e229f7f0905a6ad81.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Many researchers in AI think we should make AI capable of ethical inquiry. We can&#8217;t teach it all the ethical rules; that&#8217;s impossible. Instead, we should teach it to ethically reason, just as we do children. But my guest thinks this strategy makes a number of controversial assumptions, including how ethics works and what actually is right and wrong. <em>From the best of season two. </em></p><p><em>Tristram McPherson is a Professor and Placement Director in the Department of Philosophy at Ohio State University. 
He is also affiliated with Ohio State&#8217;s Center for Ethics and Human Values, its Sustainability Institute, and the Global Arts and Humanities Discovery Theme and is an associate editor with JESP and Ergo.</em></p><p>Website: <a href="https://sites.google.com/site/drtristram/">https://sites.google.com/site/drtristram/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/can-ai-do-ethics/id1751550186?i=1000747128224&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000747128224.jpg&quot;,&quot;title&quot;:&quot;Can AI Do Ethics?&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2633000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/can-ai-do-ethics/id1751550186?i=1000747128224&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-01-29T05:00:10Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/can-ai-do-ethics/id1751550186?i=1000747128224" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[AI is Culturally Ignorant]]></title><description><![CDATA[With Rocky Clancy]]></description><link>https://reidblackman.substack.com/p/ai-is-culturally-ignorant</link><guid isPermaLink="false">https://reidblackman.substack.com/p/ai-is-culturally-ignorant</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 22 Jan 2026 12:10:33 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/185321570/aa839ba0768a4f50d5a825fff7776ea8.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>AI is deployed across the globe. But how sensitive is it to the cultural contexts - ethics, norms, laws and regulations - in which it finds itself. My guest today, Rocky Clancy of Virginia Tech, argues that AI is too Western-focused. We need to engage in empirical research so that AI is developed in a way that comports with the people it interacts with, wherever they are.</p><p><em>&#8203;Rockwell Clancy is a Research Scientist in the Department of Engineering Education at Virginia Tech. 
Before moving to Virginia, he was a Research Assistant Professor at Mines, Lecturer at Delft, Associate Teaching Professor at the University of Michigan-Shanghai Jiao Tong University Joint Institute, and Research Fellow in the Institute of Social Cognition and Decision-making, Shanghai Jiao Tong University. Rockwell completed his PhD in philosophy and literature at Purdue University in 2012, and worked as a long-term educational to set up a course and write a corresponding textbook on global engineering ethics for a grant project at Purdue during the Spring 2016 semester.</em></p><p>Website: <a href="https://rockwellfclancy.com/">https://rockwellfclancy.com/</a></p><p><a href="https://link.springer.com/article/10.1007/s43681-025-00821-6">Exploring AI ethics in global contexts: a culturally responsive, psychologically realist approach</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/ai-is-culturally-ignorant/id1751550186?i=1000746160323&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000746160323.jpg&quot;,&quot;title&quot;:&quot;AI is Culturally Ignorant&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2492000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/ai-is-culturally-ignorant/id1751550186?i=1000746160323&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-01-22T05:00:30Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/ai-is-culturally-ignorant/id1751550186?i=1000746160323" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[When Metrics Make Us Happy, or Miserable]]></title><description><![CDATA[With C. Thi Nguyen]]></description><link>https://reidblackman.substack.com/p/when-metrics-make-us-happy-or-miserable</link><guid isPermaLink="false">https://reidblackman.substack.com/p/when-metrics-make-us-happy-or-miserable</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 15 Jan 2026 12:15:28 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/184622045/4b11efceddd5a676d7560d03e226fd85.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>When we&#8217;re playing a game or a sport, we like being measured. 
We want a high score, we want to beat the game. Measurement makes it fun. But in work, being measured, hitting our numbers, can make us miserable. Why does measuring ourselves sometimes enhance and sometimes undermine our happiness and sense of fulfillment? That&#8217;s the question C. Thi Nguyen tackles in his new book &#8220;The Score: How to Stop Playing Somebody Else&#8217;s Game.&#8221; Thi is one of the most interesting philosophers I know - enjoy!</p><p><em>C. Thi Nguyen is a philosophy professor at the University of Utah. He writes about trust, art, games, and communities. He is interested in the ways that our social structures and technologies shape how we think and what we value. His first book is Games: Agency as Art. It was awarded the American Philosophical Association&#8217;s 2021 Book Prize.</em></p><p>Website: <a href="https://objectionable.net/">https://objectionable.net/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/when-metrics-make-us-happy-or-miserable/id1751550186?i=1000745244655&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000745244655.jpg&quot;,&quot;title&quot;:&quot;When Metrics Make Us Happy, or Miserable&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3238000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/when-metrics-make-us-happy-or-miserable/id1751550186?i=1000745244655&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-01-15T05:30:09Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/when-metrics-make-us-happy-or-miserable/id1751550186?i=1000745244655" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication.
To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[We Need International Agreement on AI Standards]]></title><description><![CDATA[Watch now | With Joanna Bryson]]></description><link>https://reidblackman.substack.com/p/we-need-international-agreement-on</link><guid isPermaLink="false">https://reidblackman.substack.com/p/we-need-international-agreement-on</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 08 Jan 2026 12:02:58 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/183807414/af93b6da12f416aa50a5745bf45d04da.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>When it comes to the foundation models that are created by the likes of Google, Anthropic, and OpenAI, we need to treat them as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that creates and enforces ethical and safety standards for AI. Why such a coalition is necessary, who might be part of it, how plausible it is that we can create such a thing, and more are covered in our conversation.</p><p><em>Dr. Joanna Bryson is an academic recognised for broad expertise on intelligence, its nature, and its consequences. Holding two degrees each in psychology and AI (BA Chicago, MSc &amp; MPhil Edinburgh, PhD MIT), she has been Professor of Ethics and Technology at the Hertie School in Berlin since 2020, where she was hired as one of the founding professors of its Centre for Digital Governance.</em></p><p>Website: <a href="https://www.joannajbryson.org/">https://www.joannajbryson.org/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/we-need-international-agreement-on-ai-standards/id1751550186?i=1000744247385&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000744247385.jpg&quot;,&quot;title&quot;:&quot;We Need International Agreement on AI Standards&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:2872000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/we-need-international-agreement-on-ai-standards/id1751550186?i=1000744247385&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2026-01-08T06:02:39Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/we-need-international-agreement-on-ai-standards/id1751550186?i=1000744247385" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple
Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item><item><title><![CDATA[Rewriting History with AI]]></title><description><![CDATA[With Nuno Moniz]]></description><link>https://reidblackman.substack.com/p/rewriting-history-with-ai</link><guid isPermaLink="false">https://reidblackman.substack.com/p/rewriting-history-with-ai</guid><dc:creator><![CDATA[Reid Blackman]]></dc:creator><pubDate>Thu, 18 Dec 2025 12:03:12 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/181907309/782b9ec88d02a91c6bbb67ff1b2c2364.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, misinformation spreading, and biased interpretations of history getting the upper hand. Exactly how bad this can get and what we&#8217;re supposed to do about it isn&#8217;t obvious, but Nuno has some suggestions.</p><p><em>Nuno Moniz is an Associate Research Professor at the Lucy Family Institute for Data &amp; Society. He is also the Associate Director of the Data, Inference, Analytics, and Learning Lab and joined the University of Notre Dame in 2022.</em></p><p>Website: <a href="https://www.nunomoniz.co/">https://www.nunomoniz.co/</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://reidblackman.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://reidblackman.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="pullquote"><p><em><strong>Subscribe where you listen:</strong></em></p><div class="apple-podcast-container" data-component-name="ApplePodcastToDom"><iframe class="apple-podcast " data-attrs="{&quot;url&quot;:&quot;https://embed.podcasts.apple.com/us/podcast/rewriting-history-with-ai/id1751550186?i=1000741801786&quot;,&quot;isEpisode&quot;:true,&quot;imageUrl&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/podcast-episode_1000741801786.jpg&quot;,&quot;title&quot;:&quot;Rewriting History with AI&quot;,&quot;podcastTitle&quot;:&quot;Ethical Machines&quot;,&quot;podcastByline&quot;:&quot;&quot;,&quot;duration&quot;:3091000,&quot;numEpisodes&quot;:&quot;&quot;,&quot;targetUrl&quot;:&quot;https://podcasts.apple.com/us/podcast/rewriting-history-with-ai/id1751550186?i=1000741801786&amp;uo=4&quot;,&quot;releaseDate&quot;:&quot;2025-12-18T07:00:13Z&quot;}" src="https://embed.podcasts.apple.com/us/podcast/rewriting-history-with-ai/id1751550186?i=1000741801786" frameborder="0" allow="autoplay *; encrypted-media *;" allowfullscreen="true"></iframe></div><p><em><a href="https://open.spotify.com/show/117LXLgTRuZnxb17VzpCop?si=bb9660a6a65f44d6">Spotify</a> | <a href="https://podcasts.apple.com/us/podcast/ethical-machines/id1751550186">Apple Podcasts</a> | <a href="https://iheart.com/podcast/189139713/">iHeart Radio</a> | <a href="https://pca.st/r5l49jbn">PocketCast</a> | <a 
href="https://www.youtube.com/@reidblackman">YouTube</a></em></p></div><p>Reid Blackman is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p>]]></content:encoded></item></channel></rss>