Netflix To Buy Warner Bros For $82.7 Billion, But Trump FCC, DOJ Could Intervene For All The Wrong Reasons [Techdirt]
So Netflix has announced that it’s buying Warner Brothers Discovery (including HBO) for a whopping $82.7 billion. As we’ve well covered, it’s the latest in a long series of pointless Warner mergers stretching back to the 2001 AOL acquisition, which all resulted in oodles of chaos, price hikes, layoffs, and generally a steady erosion in product quality.
Netflix’s deal includes a $5.8 billion breakup fee and promises to maintain Warner Bros.’ current operations, “including theatrical releases.” The deal doesn’t include Warner Bros. Discovery’s struggling linear networks business (CNN, TNT, HGTV and Discovery+), which Netflix wisely wanted nothing to do with. Those are scheduled to be spun out next year into their own sagging sub-company.
Netflix is, of course, making all manner of pre-merger promises about how the deal will be great for everyone, especially creatives:
Netflix also made its pitch to filmmakers and creatives, writing that “by uniting Netflix’s member experience and global reach with Warner Bros.’ renowned franchises and extensive library, the Company will create greater value for talent — offering more opportunities to work with beloved intellectual property, tell new stories and connect with a wider audience than ever before.”
But as we’ve seen the last three or four times Warner Brothers has been acquired, pre-merger promises mean absolutely nothing. The massive debt created by these acquisitions inevitably results in panicked cost cutting, which usually involves mass layoffs, (even bigger) price hikes, and a general cannibalization of brand and product quality. It happens over and over again.
Of the suitors that could have bought Warner Bros. Discovery (Comcast/NBC and Larry Ellison/CBS/Paramount), Netflix is probably the “best” option. They are (for now) the least up Trump’s ass of the three bidders, and would likely retain more of the core Warner Bros. Discovery infrastructure and staff, given fewer existing redundancies.
That’s not to say the deal will be good, necessarily. If we lived in a non-corrupt country with functioning regulators, the government likely wouldn’t allow any additional consolidation in mainstream corporate media, as the results to date have been nothing but harmful for labor, consumers, and markets. These companies’ journalism arms, if you haven’t noticed, like to downplay or ignore this fact in coverage.
Play a little game with me at home: if you’re reading a story about this deal, stop and notice if the journalist and outlet, at any point, mentions the fact that the decades’ worth of past variants of Warner deals were utterly disastrous for labor, consumers, creativity, and healthy markets. Because that’s kind of important context if your job is informing the public of the truth!

The bungled AT&T acquisition of Warner and DirecTV alone resulted in a massive layoff spree involving some 50,000 people. But when the consolidated corporate press covers the latest merger, that’s not mentioned. Why do you suppose that editorial choice gets made?
Meanwhile Netflix still has a hurdle to face: the weird zealots at the Trump DOJ and FCC. Paramount and/or the Trump administration has spent the last week seeding complaints in Republican-friendly media that the bidding process was unfair to Larry Ellison and CBS/Paramount, and that the Trump administration is concerned about the antitrust impact of a Netflix Warner Brothers combination.
The Trump administration couldn’t give any less of a shit about antitrust or consolidated corporate power; they just want leverage over Netflix and/or to make sure their friend Larry Ellison can acquire HBO and CNN. And they’re mad at Netflix because they put some gay people in shows about the military. The Ellisons may acquire the spun off TV assets, but they may also still want to leverage Trump to get much more.
So I would not be surprised if, in a few weeks or so, you see Trump’s FCC lackey Brendan Carr launch some kind of fake inquiry into “irregularities in the bidding process,” in which he talks a lot about Netflix’s consolidated power “not being in the public interest.” The goal will be twofold: to force ownership over to Ellison, or at least (as we’ve seen with CBS mergers) to force Netflix to kiss Donald’s ass.
The press (and the usual assortment of useful idiot pundits) will likely then help Trump pretend these inquiries are legitimate “populist” antitrust actions.
If you recall, the first Trump administration sued to stop the AT&T Time Warner deal. That was heralded as a rare example of the administration actually caring about consolidated corporate power by the press; but it turned out it was mostly because Rupert Murdoch had his own acquisition offer for CNN rejected and wanted to scuttle the deal.
Keep your eyes peeled for regulatory shenanigans. Even if the Trump administration doesn’t abuse FCC and DOJ power to help Ellison, they’ll certainly abuse regulatory merger approval power to try and force Netflix to kiss their asses in new and problematic ways (see: CBS, Verizon). And Netflix, no stranger to throwing ethics under the bus when convenient, will very likely be happy to oblige.
Techdirt Podcast Episode 439: The Resonant Computing Manifesto [Techdirt]
Earlier today, we joined in announcing the Resonant Computing Manifesto: a call for restoring a culture of technology that empowers users and enriches their lives. The manifesto was created by a group led by Alex Komoroske, and today Alex joins the podcast for a deeper dive into what “resonant computing” means and what a better future might look like.
You can also download this episode directly in MP3 format.
Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric [Techdirt]
A cofounder of a Bay Area “Stop AI” activist group abandoned its commitment to nonviolence, assaulted another member, and made statements that left the group worried he might obtain a weapon to use against AI researchers. The threats prompted OpenAI to lock down its San Francisco offices a few weeks ago. In researching this movement, I came across statements that he made about how almost any actions he took were justifiable, since he believed OpenAI was going to “kill everyone and every living thing on earth.” Those are detailed below.
I think it’s worth exploring the radicalization process and the broader context of AI Doomerism. We need to confront the social dynamics that turn abstract fears of technology into real-world threats against the people building it.

OpenAI’s San Francisco Offices Lockdown
On November 21, 2025, Wired reported that OpenAI’s San Francisco offices went into lockdown after an internal alert about a “Stop AI” activist. The activist allegedly expressed interest in “causing physical harm to OpenAI employees” and may have tried to acquire weapons.
The article did not mention his name but hinted that, before his disappearance, he had stated he was “no longer part of Stop AI.”1 On November 22, 2025, the activist group’s Twitter account posted that it was Sam Kirchner, the cofounder of “Stop AI.”
According to Wired’s reporting:
A high-ranking member of the global security team said [in OpenAI Slack] “At this time, there is no indication of active threat activity, the situation remains ongoing and we’re taking measured precautions as the assessment continues.” Employees were told to remove their badges when exiting the building and to avoid wearing clothing items with the OpenAI logo.

“Stop AI” provided more details on the events leading to OpenAI’s lockdown:
Earlier this week, one of our members, Sam Kirchner, betrayed our core values by assaulting another member who refused to give him access to funds. His volatile, erratic behavior and statements he made renouncing nonviolence caused the victim of his assault to fear that he might procure a weapon that he could use against employees of companies pursuing artificial superintelligence.
We prevented him from accessing the funds, informed the police about our concerns regarding the potential danger to AI developers, and expelled him from Stop AI. We disavow his actions in the strongest possible terms.
Later in the day of the assault, we met with Sam; he accepted responsibility and agreed to publicly acknowledge his actions. We were in contact with him as recently as the evening of Thursday Nov 20th. We did not believe he posed an immediate threat, or that he possessed a weapon or the means to acquire one.
However, on the morning of Friday Nov 21st, we found his residence in West Oakland unlocked and no sign of him. His current whereabouts and intentions are unknown to us; however, we are concerned Sam Kirchner may be a danger to himself or others. We are unaware of any specific threat that has been issued.
We have taken steps to notify security at the major US corporations developing artificial superintelligence. We are issuing this public statement to inform any other potentially affected parties.

A “Stop AI” activist named Remmelt Ellen wrote that Sam Kirchner “left both his laptop and phone behind and the door unlocked.” “I hope he’s alive,” he added.
In early December, the SF Standard reported that the “cops [are] still searching for ‘volatile’ activist whose death threats shut down OpenAI office.” Per this coverage, the San Francisco police are warning that he could be armed and dangerous. “He threatened to go to several OpenAI offices in San Francisco to ‘murder people,’ according to callers who notified police that day.”
A Bench Warrant for Kirchner’s Arrest
When I searched for any information that had not been reported before, I found a revealing press release. It invited the press to a press conference on the morning of Kirchner’s disappearance:
“Stop AI Defendants Speak Out Prior to Their Trial for Blocking Doors of Open AI.”
When: November 21, 2025, 8:00 AM.
Where: Steps in front of the courthouse (San Francisco Superior Court).
Who: Stop AI defendants (Sam Kirchner, Wynd Kaufmyn, and Guido Reichstadter), their lawyers, and AI experts.
Sam Kirchner is quoted as saying, “We are acting on our legal and moral obligation to stop OpenAI from developing Artificial Superintelligence, which is equivalent to allowing the murder [of] people I love as well as everyone else on earth.”
Needless to say, things didn’t go as planned. That Friday morning, Sam Kirchner went missing, triggering the OpenAI lockdown.

Later, the SF Standard confirmed the trial angle of this story: “Kirchner was not present for a Nov. 21 court hearing, and a judge issued a bench warrant for his arrest.”

“Stop AI” – a Bay Area-Centered “Civil Resistance” Group
“Stop AI” calls itself a “non-violent civil resistance group” or a “non-violent activist organization.” The group’s focus is on stopping AI development, especially the race to AGI (Artificial General Intelligence) and “Superintelligence.” Their worldview is extremely doom-heavy, and their slogans include: “AI Will Kill Us All,” “Stop AI or We’re All Gonna Die,” and “Close OpenAI or We’re All Gonna Die!”
According to a “Why Stop AI is barricading OpenAI” post on the LessWrong forum from October 2024, the group is inspired by climate groups like Just Stop Oil and Extinction Rebellion, but focused on “AI extinction risk,” or in their words, “risk of extinction.” Sam Kirchner explained in an interview: “Our primary concern is extinction. It’s the primary emotional thing driving us: preventing our loved ones, and all of humanity, from dying.”
Unlike the rest of the “AI existential risk” ecosystem, which is often well-funded by effective altruism billionaires such as Dustin Moskovitz (Coefficient Giving, formerly Open Philanthropy) and Jaan Tallinn (Survival and Flourishing Fund), this specific group is not a formal nonprofit or funded NGO, but rather a loosely organized grassroots group of volunteer-run activism. They made their financial situation pretty clear when the “Stop AI” Twitter account replied to a question with: “We are fucking poor, you dumb bitch.”2
According to The Register, “STOP AI has four full-time members at the moment (in Oakland) and about 15 or so volunteers in the San Francisco Bay Area who help out part-time.”
Since its inception, “Stop AI” has had two central organizers: Guido Reichstadter and Sam Kirchner (the current fugitive). According to The Register and the Bay Area Current, Guido Reichstadter has worked as a jeweler for 20 years. He has an undergraduate degree in physics and math. Reichstadter’s prior actions include climate change and abortion-rights activism.
In June 2022, Reichstadter climbed the Frederick Douglass Memorial Bridge in Washington, D.C., to protest the Supreme Court’s decision overturning Roe v. Wade. Per the news coverage, he said, “It’s time to stop the machine.” “Reichstadter hopes the stunt will inspire civil disobedience nationwide in response to the Supreme Court’s ruling.”
Reichstadter moved to the Bay Area from Florida around 2024 explicitly to organize civil disobedience against AGI development via “Stop AI.” Recently, he undertook a hunger strike outside Anthropic’s San Francisco office for 30 days.
Sam Kirchner worked as a DoorDash driver and, before that, as an electrical technician. He has a background in mechanical and electrical engineering. He moved to San Francisco from Seattle, cofounded “Stop AI,” and “stayed in a homeless shelter for four months.”
AI Doomerism’s Rhetoric
The group’s rationale included this claim (published on their account on August 29, 2025): “Humanity is walking off a cliff,” with AGI leading to “ASI covering the earth in datacenters.”
As 1a3orn pointed out, the original “Stop AI” website said we risked “recursive self-improvement” and doom from any AI models trained with more than 10^23 FLOPs. (The group dropped this prediction at some point.) Later, in a (now deleted) “Stop AI Proposal,” the group asked to “Permanently ban ANNs (Artificial Neural Networks) on any computer above 10^25 FLOPS. Violations of the immediate 10^25 ANN FLOPS cap will be punishable by life in prison.”
To be clear, dozens of current AI models were trained with over 10^25 FLOPs.
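For a sense of scale, a common back-of-the-envelope heuristic estimates training compute as roughly 6 FLOPs per model parameter per training token. The model size and token count below are illustrative assumptions, not figures from any particular lab, but they show how easily a modern frontier-scale training run clears the group’s proposed 10^25 cap:

```python
# Rough training-compute estimate using the common ~6*N*D approximation
# (about 6 FLOPs per parameter per training token). The numbers below
# are illustrative assumptions, not figures for any specific model.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6 * params * tokens

CAP = 1e25  # the (now deleted) "Stop AI" proposal's FLOPS threshold

# A hypothetical ~400B-parameter model trained on ~15T tokens:
flops = training_flops(400e9, 15e12)
print(f"{flops:.2e}")  # prints 3.60e+25
print(flops > CAP)     # True: such a run would exceed the proposed cap
```

Under this rough heuristic, a single hypothetical run of that size lands several times over the 10^25 threshold, which is why the proposal would have criminalized much of current mainstream AI development.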

In a “For Humanity” podcast episode with Sam Kirchner, “Go to Jail to Stop AI” (episode #49, October 14, 2024), he said: “We don’t really care about our criminal records because if we’re going to be dead here pretty soon or if we hand over control which will ensure our future extinction here in a few years, your criminal record doesn’t matter.”

The podcast promoted this episode in a (now deleted) tweet, quoting Kirchner: “I’m willing to DIE for this.” “I want to find an aggressive prosecutor out there who wants to charge OpenAI executives with attempted murder of eight billion people. Yes. Literally, why not? Yeah, straight up. Straight up. What I want to do is get on the news.”

After Kirchner’s disappearance, the podcast host and founder of “GuardRailNow” and the “AI Risk Network,” John Sherman, deleted this episode from podcast platforms (Apple, Spotify) and YouTube. Prior to its removal, I downloaded the video (length 01:14:14).
Sherman also produced an emotional documentary with “Stop AI” titled “Near Midnight in Suicide City” (December 5, 2024, episode #55. See its trailer and promotion on the Effective Altruism Forum). It’s now removed from podcast platforms and YouTube, though I have a copy in my archive (length 1:29:51). It gathered 60k views before its removal.
The group’s radical rhetoric was out in the open. “If AGI developers were treated with reasonable precaution proportional to the danger they are cognizantly placing humanity in by their venal and reckless actions, many would have a bullet put through their head,” wrote Guido Reichstadter in September 2024.

The above screenshot appeared in a Techdirt piece, “2024: AI Panic Flooded the Zone Leading to a Backlash.” The warning signs were there:
Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).
Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
In early December 2024, I expressed my concern on Twitter: “Is the StopAI movement creating the next Unabomber?” The screenshot of “Getting arrested is nothing if we’re all gonna die” was taken from Sam Kirchner.
Targeting OpenAI
The main target of their civil-disobedience-style actions was OpenAI. The group explained that their “actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth.” In a tweet promoting the October blockade, Guido Reichstadter claimed about OpenAI: “These people want to see you dead.”
“My co-organizers Sam and Guido are willing to put their body on the line by getting arrested repeatedly,” said Remmelt Ellen. “We are that serious about stopping AI development.”
On January 6, 2025, Kirchner and Reichstadter went on trial for blocking the entrance to OpenAI on October 21, 2024, to “stop AI before AI stop us” and on September 24, 2024 (“criminal record doesn’t matter if we’re all dead”), as well as blocking the road in front of OpenAI on September 12, 2024.
The “Stop AI” event page on Luma lists further protests in front of OpenAI: on January 10, 2025; April 18, 2025; May 23, 2025 (coverage); July 25, 2025; and October 24, 2025. On March 2, 2025, they had a protest against Waymo.
On February 22, 2025, three “Stop AI” protesters were arrested for trespassing after barricading the doors to the OpenAI offices and allegedly refusing to leave the company’s property. It was covered by a local TV station. Golden Gate Xpress documented the activists detained in the police van: Jacob Freeman, Derek Allen, and Guido Reichstadter. Officers pulled out bolt cutters and cut the lock and chains on the front doors. In a Bay Area Current article, “Why Bay Area Group Stop AI Thinks Artificial Intelligence Will Kill Us All,” Kirchner is quoted as saying, “The work of the scientists present” is “putting my family at risk.”
October 20, 2025 was the first day of the jury trial of Sam Kirchner, Guido Reichstadter, Derek Allen, and Wynd Kaufmyn.
On November 3, 2025, “Stop AI”’s public defender served OpenAI CEO Sam Altman with a subpoena at a speaking event at the Sydney Goldstein Theater in San Francisco. The group claimed responsibility for the onstage interruption, saying the goal was to prompt the jury to ask Altman “about the extinction threat that AI poses to humanity.”
Public Messages to Sam Kirchner
“Stop AI” stated it is “deeply committed to nonviolence” and “We wish no harm on anyone, including the people developing artificial superintelligence.” In a separate tweet, “Stop AI” wrote to Sam: “Please let us know you’re okay. As far as we know, you haven’t yet crossed a line you can’t come back from.”
John Sherman, the “AI Risk Network” CEO, pleaded, “Sam, do not do anything violent. Please. You know this is not the way […] Please do not, for any reason, try to use violence to try to make the world safer from AI risk. It would fail miserably, with terrible consequences for the movement.”
Rhetoric’s Ramifications
Taken together, the “imminent doom” rhetoric fosters conditions in which vulnerable individuals could be dangerously radicalized, echoing the dynamics seen in past apocalyptic movements.
In “A Cofounder’s Disappearance—and the Warning Signs of Radicalization”, City Journal summarized: “We should stay alert to the warning signs of radicalization: a disaffected young person, consumed by abstract risks, convinced of his own righteousness, and embedded in a community that keeps ratcheting up the moral stakes.”
“The Rationality Trap – Why Are There So Many Rationalist Cults?” described this exact radicalization process, noting how the more extreme figures (e.g., Eliezer Yudkowsky)3 set the stakes and tone: “Apocalyptic consequentialism, pushing the community to adopt AI Doomerism as the baseline, and perceived urgency as the lever. The world-ending stakes accelerated the ‘ends-justify-the-means’ reasoning.”
We already have a Doomer “murder cult,” the Zizians, whose story is far more bizarre and extreme than anything you’ve read here. Hopefully, such cases will remain rare.
What we should discuss is the dangers of such an extreme (and misleading) AI discourse. If human extinction from AI is just around the corner, based on the Doomers’ logic, all their suggestions are “extremely small sacrifices to make.” Unfortunately, the situation we’re in is: “Imagined dystopian fears have turned into real dystopian ‘solutions.’”
This is still an evolving situation. As of this writing, Kirchner’s whereabouts remain unknown.
—————————
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.
—————————
Colorado Judge Says ICE Can’t Arrest People Without A Warrant [Techdirt]
The Trump administration is so sure it can get away with anything that it’s willing to try anything. That misapprehension of the situation has resulted in at least 200 rulings against the administration’s anti-immigrant efforts. Still, the regime persists with its attempts to brute force constitutional rights out of existence.
Like it or not, MAGA faithful, immigrants have rights. They have the same rights natural-born citizens do, even if the Trump administration refuses to believe it.
The administration got a big win from the Supreme Court in terms of violating Fourth Amendment rights. In a solo concurrence, Justice Kavanaugh made it clear the majority believed there is nothing wrong with rounding up people simply because they look a bit more brown than white on the outside.
Meanwhile, ICE pretends it’s still targeting criminals, even when all data says otherwise. It continues to claim it’s going after known criminals but its paperwork doesn’t match its public statements. If it was really going after criminals, it should be able to obtain arrest warrants. The fact that it rarely has anything more than administrative warrants (self-issued warrants without judicial backing) in its possession at any given time contradicts its assertions about its alleged “targeted” enforcement efforts.
The Trump administration continues to get railed on the regular by federal courts. The latest is no exception:
A federal judge in Denver on Tuesday ordered federal immigration officers to stop making arrests in Colorado without a warrant, unless the detainee posed a flight risk, the latest in a string of lower-court decisions rebuking President Trump’s immigration enforcement tactics.
[…]
In Colorado, Judge Jackson, an appointee of President Barack Obama, found that immigration agents had acted unlawfully by arresting and detaining immigrants — some for as long as 100 days — without showing the required probable cause that they posed a threat of fleeing.
This decision aligns itself with several others. Unfortunately, the body of judicial work ruling against Trump’s anti-immigration programs hasn’t really changed anything. Many rulings have been appealed. What has yet to be heard by the Supreme Court has often been given a pass by appellate judges.
And even if a court rules definitively against Trump, there’s no reason to believe this administration will act in accordance with the ruling. Emil Bove — the former DOJ lawyer who told prosecutors to tell the courts to fuck themselves if they opposed Trump — is now sitting on the Third Circuit. Other rulings delivered by federal courts have been immediately stayed by appellate courts who normally would have allowed things to play out at the lower level before undercutting their findings.
What’s happening here affects a lot of rights beyond the immediate recognition of Fourth Amendment incursion. These warrantless arrests are often followed by indefinite detentions that involve violations of Fifth, Sixth, and Fourteenth Amendment rights.
This government is plain nasty. It has zero interest in the rule of law. It wants to be the bully on the block at all times. If the system of checks and balances rears its head, the administration will either ignore the concerns raised or engage in unprecedented attacks on the judiciary itself. Pointing out the incompetency of Trump administration thugs is about as useless as criticizing the GPA of the person beating your skull to a pulp with a baseball bat. The end result is the same. Any legitimate points raised mid-beating won’t do anything to reduce the CTE trauma. It’s best to assume bad faith from the beginning because this is the administration’s sole operating speed.
Daily Deal: The Complete Raspberry Pi And Alexa A-Z Bundle [Techdirt]
Learn Raspberry Pi and start building Amazon Alexa projects with The Complete Raspberry Pi and Alexa A-Z Bundle. Catering to all levels, these project-based courses will get you up and running with the basics of Pi before escalating to full projects. Before you know it, you’ll be building a gaming system to play old Nintendo, Sega, and PlayStation games and a personal digital assistant using the Google Assistant API. You will also learn how to build Alexa Skills that will run on any Amazon Echo device to voice control anything in your home, and how to build your own Echo clone. The bundle is on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Kanji of the Day: 声 [Kanji of the Day]
声
✍7
小2
voice
セイ ショウ
こえ こわ-
声優 (せいゆう) — voice actor or actress (radio, animation, etc.)
声明 (しょうみょう) — sabdavidya (ancient Indian linguistic and grammatical studies)
歓声 (かんせい) — cheer
声援 (せいえん) — encouragement
音声 (おんじょう) — voice
大声 (おおごえ) — loud voice
歌声 (うたごえ) — singing voice
共同声明 (きょうどうせいめい) — joint declaration
声をかける (こえをかける) — to greet
掛け声 (かけごえ) — shout (of encouragement, etc.)
Generated with kanjioftheday by Douglas Perkins.
Kanji of the Day: 喉 [Kanji of the Day]
喉
✍12
中学
throat, voice
コウ
のど
耳鼻咽喉科 (じびいんこうか) — otorhinolaryngology
喉頭 (こうとう) — larynx
耳鼻咽喉 (じびいんこう) — ear, nose, and throat
喉元 (のどもと) — throat
喉の痛み (のどのいたみ) — sore throat
咽喉 (いんこう) — throat
喉が渇く (のどがかわく) — to be thirsty
喉頭蓋 (こうとうがい) — epiglottis
喉越し (のどごし) — feeling of food or drink going down one's throat
喉仏 (のどぼとけ) — Adam's apple
Generated with kanjioftheday by Douglas Perkins.
Bring Back Innovation That Empowers, Rather Than Extracts: The Resonant Computing Manifesto [Techdirt]
Everyone’s pissed at the tech industry. And for good reason. The term enshittification is super popular for many valid reasons. Companies that used to provide real value are now focused on extracting more value from users, rather than improving their products and services. People used to be excited by new innovations. There was a time when many people felt more fulfilled after using new technology that helped them do new things, communicate with new people, and create wonderful new things.
That feels like an unfortunately rare experience, so much so that some have forgotten about it entirely.
Remember when you’d use something new and feel… good? Empowered, even? When tech made you feel like you could do more, create more, connect more meaningfully?
Yeah, that’s mostly gone. We’ve replaced it with engagement metrics, growth hacks, and AI slop. The tech industry spent the last decade optimizing for shareholder value and calling it innovation.
But, it doesn’t need to be that way.
We can live in a world where technology works for us, not against us. Where we get value from it, rather than having it extract value from us.
So a group of us—organized by entrepreneur Alex Komoroske, who wrote for us this summer about why centralized AI isn’t inevitable—decided to articulate what the alternative actually looks like. Not just “tech should be better” hand-waving that we sometimes see, but actual principles for building technology that works for people instead of extracting from them.
We’re calling it the Resonant Computing Manifesto, and it’s an attempt to reclaim what innovation should mean:
We call this quality resonance. It’s the experience of encountering something that speaks to our deeper values. It’s a spark of recognition, a sense that we’re being invited to lean in, to participate. Unlike the digital junk food of the day, the more we engage with what resonates, the more we’re left feeling nourished, grateful, alive. As individuals, following the breadcrumbs of resonance helps us build meaningful lives. As communities, companies, and societies, cultivating shared resonance helps us break away from perverse incentives, and play positive-sum infinite games together.
For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.
That word—resonance—is doing real work here. It’s the opposite of what we’ve got now: software that leaves you feeling depleted, manipulated, or just vaguely dirty. Resonant computing is technology that makes you feel more capable, more connected, more like yourself.
This matters because the current narrative is stuck between two equally bankrupt positions: either all tech is inevitably corrupting, or we should just accelerate harder into whatever the VCs are funding this quarter. Both are bullshit. Tech can be good. It requires building for people rather than metrics. And to get there, we need to call it out and demand it.
I know that some of the more cynical among you will say that techies have always cloaked their efforts in the language of “empowering people” and “changing the world.” We don’t deny that. But we’d like to make it real, and to be able to use this conversation to remind everyone that technology can be good. If it follows certain principles.
So what does that actually mean? The manifesto lays out five principles:
- Private: In the era of AI, whoever controls the context holds the power. While data often involves multiple stakeholders, people must serve as primary stewards of their own context, determining how it’s used.
- Dedicated: Software should work exclusively for you, ensuring contextual integrity where data use aligns with your expectations. You must be able to trust there are no hidden agendas or conflicting interests.
- Plural: No single entity should control the digital spaces we inhabit. Healthy ecosystems require distributed power, interoperability, and meaningful choice for participants.
- Adaptable: Software should be open-ended, able to meet the specific, context-dependent needs of each person who uses it.
- Prosocial: Technology should enable connection and coordination, helping us become better neighbors, collaborators, and stewards of shared spaces, both online and off.
Notice what’s not in there: no handing over all your data to billionaires, no single solution from a centralized provider, no tech bro buzzwords. What we have here are requirements that take us away from the current ecosystem. Privacy doesn’t mean Mark Zuckerberg has to better protect your data. It means systems where you control your own data. Plural means more than “sprinkle in a few more competitors”; it’s about interoperability and the ability to actually leave with ease.
The “dedicated” principle is particularly important in the age of AI. Your AI tools shouldn’t have dual loyalty to you and to a giant company. They should work for you, period. That seems obvious, but look around: how many products actually meet that bar?
This is also why we’re not just throwing this out there and walking away. Unlike the endless parade of “ethics frameworks” that companies sign onto and promptly ignore, this is meant to be a starting point. It’s kicking off a conversation as well as guidelines for building actual systems. There’s a collaborative doc where people can contribute ideas, and we’ll be talking through what this looks like in practice.
We launched this manifesto yesterday afternoon at Wired’s Big Interview event in San Francisco, and Steven Levy wrote a lovely profile about it, which we spoke about on stage:
Humanity is the glue of the five principles of resonant computing listed in the document. It politely demands that users have control of their tech tools, which should promote social value and true connection. It is, natch, resonant of the idealism that once oozed from every pore of the creators of the early microcomputer revolution and the internet boom, when what was good for the world seemed more important than building scale and maximizing the stock price. “I certainly subscribe to the principles,” says Tim O’Reilly, an early signer who has been urging those same values for years.
Komoroske and his coauthors know that their campaign is only a tiny step toward actually fixing Silicon Valley. “I am under no illusion that some manifesto will magically solve this at all,” he says. (Komoroske himself has cofounded a startup called Common Tools, still in stealth, which presumably will be resonant AF.) Instead, the authors’ goal is to energize and support a new generation of tech professionals who want to be proud of their creations. “When they’re building things, they might start taking these ideas into account,” says Masnick. “And it becomes a tool for people within companies to push back on some of the incentives.”
If nothing else, a few thousand signers would indicate to the idealists that they’re not alone—and some of them might willingly pass on opportunities to make VP and instead make the software that they’d want to use themselves.
You can sign onto the manifesto yourself if you’d like. We’ve been thrilled to have folks like Tim O’Reilly, Bruce Schneier, Kevin Kelly and many, many others already sign on. The enthusiastic reaction at the Wired event yesterday suggested that plenty more would like to join them.
I’ll have Alex on the Techdirt podcast later today to dig into what this looks like in practice—how you actually build systems that meet these principles, what the tradeoffs are, and why we think this is both necessary and possible.
This was a true group effort, and I want to credit everyone who contributed: Maggie Appleton, Samuel Arbesman, Daniel Barcay, Rob Hardy, Aishwarya Khanduja, Geoffrey Litt, Brendan McCord, Bernhard Seefeld, Ivan Vendrov, Amelia Wattenberger, Zoe Weinberg, and Simon Willison. Our regular meetings brainstorming and discussing all this have been a highlight of this year.
Look, I know manifestos are cheap. They seem to come out every few months. But here’s the thing: we’re at a genuinely weird moment where the biggest players in tech have decided that user empowerment was actually the problem all along. That’s not inevitable. It’s a choice. And we can make different choices.
This feeling has “resonated” with many of the people we’ve shared it with so far, and we hope that it resonates with you as well.
Resonant computing is possible—we’ve experienced it before. The question is whether we’re willing to build it, and whether users will demand it. That’s what this is about: creating a shared language and vision for what better looks like, so we can actually build toward it instead of just complaining about enshittification.
If that resonates with you, and you think that matters, sign the manifesto. Join the conversation. Build a better, more resonant, world.
Bold enough to fail [Seth Godin's Blog on marketing, tribes and respect]
The only theories worth testing are those that are falsifiable–that it’s possible for the test to indicate that in fact, the theory is wrong.
And the difference between art and illustration is the same. Illustration can’t fail. It can be improved, surely, but it’s not wrong.
Art, on the other hand, is a bold assertion, something that might not work.
Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI (05 Dec 2025) [Pluralistic: Daily links from Cory Doctorow]
Last night, I gave a speech for the University of Washington's "Neuroscience, AI and Society" lecture series, through the university's Computational Neuroscience Center. It was called "The Reverse Centaur’s Guide to Criticizing AI," and it's based on the manuscript for my next book, "The Reverse Centaur’s Guide to Life After AI," which will be out from Farrar, Straus and Giroux next June:
The talk was sold out, but here's the text of my lecture. I'm very grateful to UW for the opportunity, and for a lovely visit to Seattle!
==
I'm a science fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.
What I don't do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean that what we all do couldn't change it. It would mean that the future was arriving on fixed rails and couldn't be steered.
Jesus Christ, what a miserable proposition!
Now, not everyone understands the distinction. They think sf writers are oracles, soothsayers. Unfortunately, even some of my colleagues labor under the delusion that they can "see the future."
But for every sf writer who deludes themselves into thinking that they are writing the future, there are a hundred sf fans who believe that they are reading the future, and a depressing number of those people appear to have become AI bros. The fact that these guys can't shut up about the day that their spicy autocomplete machine will wake up and turn us all into paperclips has led many confused journalists and conference organizers to try to get me to comment on the future of AI.
That's a thing I strenuously resisted doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn't understand crypto. And then, when I made it clear that I did understand crypto, they insisted that I must be a paid shill.
This is literally what happens when you argue with Scientologists, and life is Just. Too. Short.
So I didn't want to get lured into another one of those quagmires, because on the one hand, I just don't think AI is that important of a technology, and on the other hand, I have very nuanced and complicated views about what's wrong, and not wrong, about AI, and it takes a long time to explain that stuff.
But people wouldn't stop asking, so I did what I always do. I wrote a book.
Over the summer I wrote a book about what I think about AI, which is really about what I think about AI criticism, and more specifically, how to be a good AI critic. By which I mean: "How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm." I titled the book The Reverse Centaur's Guide to Life After AI, and Farrar, Straus and Giroux will publish it in June, 2026.
But you don't have to wait until then because I am going to break down the entire book's thesis for you tonight, over the next 40 minutes. I am going to talk fast.
#
Start with what a reverse centaur is. In automation theory, a "centaur" is a person who is assisted by a machine. You're a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.
And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver's eyes and take points off if the driver looks in a proscribed direction, and monitors the driver's mouth because singing isn't allowed on the job, and rats the driver out to the boss if they don't make quota.
The driver is in that van because the van can't drive itself and can't get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn't just use the driver. The van uses the driver up.
Obviously, it's nice to be a centaur, and it's horrible to be a reverse centaur. There are lots of AI tools that are potentially very centaur-like, but my thesis is that these tools are created and funded for the express purpose of creating reverse-centaurs, which is something none of us want to be.
But like I said, the job of an sf writer is to do more than think about what the gadget does, and drill down on who the gadget does it for and who the gadget does it to. Tech bosses want us to believe that there is only one way a technology can be used. Mark Zuckerberg wants you to think that it's technologically impossible to have a conversation with a friend without him listening in. Tim Cook wants you to think that it's technologically impossible for you to have a reliable computing experience unless he gets a veto over which software you install and without him taking 30 cents out of every dollar you spend. Sundar Pichai wants you to think that it's impossible for you to find a webpage unless he gets to spy on you from asshole to appetite.
This is all a kind of vulgar Thatcherism. Margaret Thatcher's mantra was "There is no alternative." She repeated this so often they called her "TINA" Thatcher: There. Is. No. Alternative. TINA.
"There is no alternative" is a cheap rhetorical slight. It's a demand dressed up as an observation. "There is no alternative" means "STOP TRYING TO THINK OF AN ALTERNATIVE." Which, you know, fuck that.
I'm an sf writer, my job is to think of a dozen alternatives before breakfast.
So let me explain what I think is going on here with this AI bubble, and sort out the bullshit from the material reality, and explain how I think we could and should all be better AI critics.
#
Start with monopolies: tech companies are gigantic and they don't compete, they just take over whole sectors, either on their own or in cartels.
Google and Meta control the ad market. Google and Apple control the mobile market, and Google pays Apple more than $20 billion/year not to make a competing search engine, and of course, Google has a 90% search market share.
Now, you'd think that this was good news for the tech companies, owning their whole sector.
But it's actually a crisis. You see, when a company is growing, it is a "growth stock," and investors really like growth stocks. When you buy a share in a growth stock, you're making a bet that it will continue to grow. So growth stocks trade at a huge multiple of their earnings. This is called the "price to earnings ratio" or "P/E ratio."
But once a company stops growing, it is a "mature" stock, and it trades at a much lower P/E ratio. So for every dollar that Target – a mature company – brings in, the market values it at ten dollars: it has a P/E ratio of 10. Amazon, meanwhile, has a P/E ratio of 36, which means that for every dollar Amazon brings in, the market values it at $36.
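The multiple arithmetic here can be sketched in a few lines. These are the round figures from the talk, used purely for illustration, not real financials:

```python
def market_cap(annual_earnings: float, pe_ratio: float) -> float:
    """Implied market valuation: earnings multiplied by the P/E ratio."""
    return annual_earnings * pe_ratio

# A "mature" stock (P/E 10) vs. a "growth" stock (P/E 36): the same
# dollar of earnings is valued very differently by the market.
for label, pe in [("mature, P/E 10", 10), ("growth, P/E 36", 36)]:
    print(f"$1 of earnings at {label} -> ${market_cap(1, pe):.0f} of market cap")
```

The asymmetry is the whole point: when a growth stock is re-rated as mature, the earnings don't change, but the multiple applied to them collapses.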
It's wonderful to run a company that's got a growth stock. Your shares are as good as money. If you want to buy another company, or hire a key worker, you can offer stock instead of cash. And stock is very easy for companies to get, because shares are manufactured right there on the premises, all you have to do is type some zeroes into a spreadsheet, while dollars are much harder to come by. A company can only get dollars from customers or creditors.
So when Amazon bids against Target for a key acquisition, or a key hire, Amazon can bid with shares they make by typing zeroes into a spreadsheet, and Target can only bid with dollars they get from selling stuff to us, or taking out loans, which is why Amazon generally wins those bidding wars.
That's the upside of having a growth stock. But here's the downside: eventually a company has to stop growing. Like, say you get a 90% market share in your sector, how are you gonna grow?
Once the market decides that you aren't a growth stock, once you become mature, your stocks are revalued, to a P/E ratio befitting a mature stock.
If you are an exec at a dominant company with a growth stock, you have to live in constant fear that the market will decide that you're not likely to grow any further. Think of what happened to Facebook in the first quarter of 2022. They told investors that they experienced slightly slower growth in the USA than they had anticipated, and investors panicked. They staged a one-day, $240B sell off. A quarter-trillion dollars in 24 hours! At the time, it was the largest, most precipitous drop in corporate valuation in human history.
That's a monopolist's worst nightmare, because once you're presiding over a "mature" firm, the key employees you've been compensating with stock experience a precipitous pay-drop and bolt for the exits. You lose the people who might help you grow again, and you can only hire their replacements with dollars. With dollars, not shares.
And the same goes for acquiring companies that might help you grow, because they, too, are going to expect money, not stock. This is the paradox of the growth stock. While you are growing to domination, the market loves you, but once you achieve dominance, the market lops 75% or more off your value in a single stroke if they don't trust your pricing power.
Which is why growth stock companies are always desperately pumping up one bubble or another, spending billions to hype the pivot to video, or cryptocurrency, or NFTs, or Metaverse, or AI.
I'm not saying that tech bosses are making bets they don't plan on winning. But I am saying that winning the bet – creating a viable metaverse – is the secondary goal. The primary goal is to keep the market convinced that your company will continue to grow, and to remain convinced until the next bubble comes along.
So this is why they're hyping AI: the material basis for the hundreds of billions in AI investment.
#
Now I want to talk about how they're selling AI. The growth narrative of AI is that AI will disrupt labor markets. I use "disrupt" here in its most disreputable, tech bro sense.
The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.
That's it.
That's the $13T growth story that Morgan Stanley is telling. It's why big investors and institutions are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their families' financial security.
Now, if AI could do your job, this would still be a problem. We'd have to figure out what to do with all these technologically unemployed people.
But AI can't do your job. It can help you do your job, but that doesn't mean it's going to save anyone money. Take radiology: there's some evidence that AIs can sometimes identify solid-mass tumors that some radiologists miss, and look, I've got cancer. Thankfully, it's very treatable, but I've got an interest in radiology being as reliable and accurate as possible.
If my Kaiser hospital bought some AI radiology tools and told its radiologists: "Hey folks, here's the deal. Today, you're processing about 100 x-rays per day. From now on, we're going to get an instantaneous second opinion from the AI, and if the AI thinks you've missed a tumor, we want you to go back and have another look, even if that means you're only processing 98 x-rays per day. That's fine, we just care about finding all those tumors."
If that's what they said, I'd be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if it also makes radiology more accurate. The market's bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: "Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists' job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it's catastrophically wrong.
"And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."
This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.
This is another key to understanding – and thus deflating – the AI bubble. The AI can't do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can't do your job. This is key because it helps us build the kinds of coalitions that will be successful in the fight against the AI bubble.
If you're someone who's worried about cancer, and you're being told that the price of making radiology too cheap to meter is that we're going to have to re-home America's 32,000 radiologists, with the trade-off that no one will ever be denied radiology services again, you might say, "Well, OK, I'm sorry for those radiologists, and I fully support getting them job training or UBI or whatever. But the point of radiology is to fight cancer, not to pay radiologists, so I know what side I'm on."
AI hucksters and their customers in the C-suites want the public on their side. They want to forge a class alliance between AI deployers and the people who enjoy the fruits of the reverse centaurs' labor. They want us to think of ourselves as enemies to the workers.
Now, some people will be on the workers' side because of politics or aesthetics. They just like workers better than their bosses. But if you want to win over all the people who benefit from your labor, you need to understand and stress how the products of the AI will be substandard. That they are going to get charged more for worse things. That they have a shared material interest with you.
Will those products be substandard? There's every reason to think so. Earlier, I alluded to "automation blindness," the physical impossibility of remaining vigilant for things that rarely occur. This is why TSA agents are incredibly good at spotting water bottles. Because they get a ton of practice at this, all day, every day. And why they fail to spot the guns and bombs that government red teams smuggle through checkpoints to see how well they work, because they just don't have any practice at that. Because, to a first approximation, no one deliberately brings a gun or a bomb through a TSA checkpoint.
Automation blindness is the Achilles' heel of "humans in the loop."
Think of AI software generation: there are plenty of coders who love using AI, and almost without exception, they are senior, experienced coders, who get to decide how they will use these tools. For example, you might ask the AI to generate a set of CSS files to faithfully render a web-page across multiple versions of multiple browsers. This is a notoriously fiddly thing to do, and it's pretty easy to verify if the code works – just eyeball it in a bunch of browsers. Or maybe the coder has a single data file they need to import and they don't want to write a whole utility to convert it.
Tasks like these can genuinely make coders more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it's clear they're not looking to make some centaurs.
They want to fire a lot of tech workers – 500,000 over the past three years – and make the rest pick up the slack with AI, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AI's code.
And because AI is just a word-guessing program – all it does is calculate the most probable word to go next – the errors it makes are especially subtle and hard to spot: these bugs are literally statistically indistinguishable from working code (except that they're bugs).
Here's an example: code libraries are standard utilities that programmers can incorporate into their apps, so they don't have to do a bunch of repetitive programming. Like, if you want to process some text, you'll use a standard library. If it's an HTML file, that library might be called something like lib.html.text.parsing; and if it's a DOCX file, it'll be lib.docx.text.parsing. But reality is messy, humans are inattentive and stuff goes wrong, so sometimes, there's another library, this one for parsing PDFs, and instead of being called lib.pdf.text.parsing, it's called lib.text.pdf.parsing.
Now, because the AI is a statistical inference engine, because all it can do is predict what word will come next based on all the words that have been typed in the past, it will "hallucinate" a library called lib.pdf.text.parsing. And the thing is, malicious hackers know that the AI will make this error, so they will go out and create a library with the predictable, hallucinated name, and that library will get automatically sucked into your program, and it will do things like steal user data or try and penetrate other computers on the same network.
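One common mitigation for this attack is to audit generated dependency lists against a vetted allowlist before anything gets installed. Here's a minimal sketch of that idea, using the hypothetical library names from the example above (the names and the allowlist are illustrative, not a real registry or tool):

```python
# Hypothetical package names for illustration. The real-world attack
# ("slopsquatting") registers plausible-but-wrong names that an LLM is
# statistically likely to hallucinate.
KNOWN_GOOD = {
    "lib.html.text.parsing",
    "lib.docx.text.parsing",
    "lib.text.pdf.parsing",  # the real, inconsistently named PDF library
}

def audit_dependencies(requested: list[str]) -> list[str]:
    """Return any requested packages not on the vetted allowlist,
    so a human can review them before installation."""
    return [pkg for pkg in requested if pkg not in KNOWN_GOOD]

# The AI infers the "consistent" name that doesn't actually exist:
suspicious = audit_dependencies(
    ["lib.html.text.parsing", "lib.pdf.text.parsing"]
)
print(suspicious)  # ['lib.pdf.text.parsing'] -- flagged for human review
```

The point of the sketch is that catching the hallucinated name mechanically is easy once you maintain an allowlist; catching it by eyeballing AI-generated code is exactly the vigilance task humans are bad at.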
And you, the human in the loop – the reverse centaur – you have to spot this subtle, hard to find error, this bug that is literally statistically indistinguishable from correct code. Now, maybe a senior coder could catch this, because they've been around the block a few times, and they know about this tripwire.
But guess who tech bosses want to preferentially fire and replace with AI? Senior coders. Those mouthy, entitled, extremely highly paid workers, who don't think of themselves as workers. Who see themselves as founders in waiting, peers of the company's top management. The kind of coder who'd lead a walkout over the company building drone-targeting systems for the Pentagon, which cost Google ten billion dollars in 2018.
For AI to be valuable, it has to replace high-wage workers, and those are precisely the experienced workers, with process knowledge and hard-won intuition, who might spot some of those statistically camouflaged AI errors.
Like I said, the point here is to replace high-waged workers.
And one of the reasons the AI companies are so anxious to fire coders is that coders are the princes of labor. They're the most consistently privileged, sought-after, and well-compensated workers in the labor force.
If you can replace coders with AI, who can't you replace with AI? Firing coders is an ad for AI.
Which brings me to AI art. AI art – or "art" – is also an ad for AI, but it's not part of AI's business model.
Let me explain: on average, illustrators don't make any money. They are already one of the most immiserated, precaritized groups of workers out there. They suffer from a pathology called "vocational awe." That's a term coined by the librarian Fobazi Ettarh, and it refers to workers who are vulnerable to workplace exploitation because they actually care about their jobs – nurses, librarians, teachers, and artists.
If AI image generators put every illustrator working today out of a job, the resulting wage-bill savings would be undetectable as a proportion of all the costs associated with training and operating image-generators. The total wage bill for commercial illustrators is less than the kombucha bill for the company cafeteria at just one of OpenAI's campuses.
The purpose of AI art – and the story of AI art as a death-knell for artists – is to convince the broad public that AI is amazing and will do amazing things. It's to create buzz. Which is not to say it isn't disgusting that former OpenAI CTO Mira Murati told a conference audience that "some creative jobs shouldn't have been there in the first place," or that it isn't especially disgusting that she and her colleagues boast about using the work of artists to ruin those artists' livelihoods.
It's supposed to be disgusting. It's supposed to get artists to run around and say, "The AI can do my job, and it's going to steal my job, and isn't that terrible?"
Because the customers for AI – corporate bosses – don't see AI taking workers' jobs as terrible. They see it as wonderful.
But can AI do an illustrator's job? Or any artist's job?
Let's think about that for a second. I've been a working artist since I was 17 years old, when I sold my first short story, and I've given it a lot of thought, and here's what I think art is: it starts with an artist, who has some vast, complex, numinous, irreducible feeling in their mind. And the artist infuses that feeling into some artistic medium. They make a song, or a poem, or a painting, or a drawing, or a dance, or a book, or a photograph. And the idea is, when you experience this work, a facsimile of the big, numinous, irreducible feeling will materialize in your mind.
Now that I've defined art, we have to go on a little detour.
I have a friend who's a law professor, and before the rise of chatbots, law students knew better than to ask for reference letters from their profs, unless they were a really good student. Because those letters were a pain in the ass to write. So if you advertised for a postdoc and you heard from a candidate with a reference letter from a respected prof, the mere existence of that letter told you that the prof really thought highly of that student.
But then we got chatbots, and everyone knows that you generate a reference letter by feeding three bullet points to an LLM, and it'll barf up five paragraphs of florid nonsense about the student.
So when my friend advertises for a postdoc, they are flooded with reference letters, and they deal with this flood by feeding all these letters to another chatbot, asking it to reduce them back to three bullet points. Now, obviously, they won't be the same bullet points, which makes this whole thing terrible.
But just as obviously, nothing in that five-paragraph letter except the original three bullet points is relevant to the student. The chatbot doesn't know the student. It doesn't know anything about them. It cannot add a single true or useful statement about the student to the letter.
What does this have to do with AI art? Art is a transfer of a big, numinous, irreducible feeling from an artist to someone else. But the image-gen program doesn't know anything about your big, numinous, irreducible feeling. The only thing it knows is whatever you put into your prompt, and those few sentences are diluted across a million pixels or a hundred thousand words, so that the average communicative density of the resulting work is indistinguishable from zero.
It's possible to infuse more communicative intent into a work: writing more detailed prompts, or doing the selective work of choosing from among many variants, or directly tinkering with the AI image after the fact, with a paintbrush or Photoshop or The Gimp. And if there will ever be a piece of AI art that is good art – as opposed to merely striking, or interesting, or an example of good draftsmanship – it will be thanks to those additional infusions of creative intent by a human.
And in the meantime, it's bad art. It's bad art in the sense of being "eerie," the word Mark Fisher uses to describe "when there is something present where there should be nothing, or there is nothing present when there should be something."
AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel, because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it's missing something. It has nothing to say, or whatever it has to say is so diluted that it's undetectable.
The images were striking before we figured out the trick, but now they're just like the images we imagine in clouds or piles of leaves. We're the ones drawing a frame around part of the scene, we're the ones focusing on some contours and ignoring the others. We're looking at an inkblot, and it's not telling us anything.
Sometimes that can be visually arresting, and to the extent that it amuses people in a community of prompters and viewers, that's harmless.
I know someone who plays a weekly Dungeons and Dragons game over Zoom. It's transcribed by an open source model running locally on the dungeon master's computer, which summarizes the night's session and prompts an image generator to create illustrations of key moments. These summaries and images are hilarious because they're full of errors. It's a bit of harmless fun, and it brings a small amount of additional pleasure to a small group of people. No one is going to fire an illustrator because D&D players are image-genning funny illustrations where seven-fingered paladins wrestle with orcs that have an extra hand.
But bosses have and will fire illustrators, because they fantasize about being able to dispense with creative professionals and just prompt an AI. Because even though the AI can't do the illustrator's job, an AI salesman can convince the illustrator's boss to fire them and replace them with an AI that can't do their job.
This is a disgusting and terrible juncture, and we should not simply shrug our shoulders and accept Thatcherism's fatalism: "There is no alternative."
So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model.
And I'm here to tell you they are wrong: wrong because this would inflict terrible collateral damage on socially beneficial activities, and it would represent a massive expansion of copyright over activities that are currently permitted – for good reason!
Let's break down the steps in AI training.
First, you scrape a bunch of web-pages. This is unambiguously legal under present copyright law. You do not need a license to make a transient copy of a copyrighted work in order to analyze it; otherwise search engines would be illegal. Ban scraping and Google will be the last search engine we ever get, the Internet Archive will go out of business, and that guy in Austria who scraped all the grocery store sites and proved that the big chains were colluding to rig prices would be in deep trouble.
Next, you perform analysis on those works. Basically, you count stuff on them: count pixels and their colors and proximity to other pixels; or count words. This is obviously not something you need a license for. It's just not illegal to count the elements of a copyrighted work. And we really don't want it to be, not if you're interested in scholarship of any kind.
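As a toy illustration of what "counting stuff" means here – a made-up sentence, nothing like a real training pipeline:

```python
from collections import Counter

def word_counts(text: str) -> Counter:
    """Count word occurrences in a text. Publishing these counts is
    publishing facts about the work, not the work itself."""
    return Counter(text.lower().split())

sample = "the quick brown fox jumps over the lazy dog"
counts = word_counts(sample)
print(counts.most_common(1))  # [('the', 2)]
```

Real model training tallies vastly more elaborate statistics (token co-occurrences across billions of documents, pixel distributions), but the legal character of the step is the same: it's tabulation, not reproduction.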
And it's important to note that counting things is legal, even if you're working from an illegally obtained copy. Like, if you go to the flea market, and you buy a bootleg music CD, and you take it home and you make a list of all the adverbs in the lyrics, and you publish that list, you are not infringing copyright by doing so.
Perhaps you've infringed copyright by getting the pirated CD, but not by counting the lyrics.
This is why Anthropic offered a $1.5b settlement for training its models based on a ton of books it downloaded from a pirate site: not because counting the words in the books infringes anyone's rights, but because they were worried that they were going to get hit with $150k/book statutory damages for downloading the files.
OK, after you count all the pixels or the words, it's time for the final step: publishing them. Because that's what a model is: a literary work (that is, a piece of software) that embodies a bunch of facts about a bunch of other works, word and pixel distribution information, encoded in a multidimensional array.
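The three steps above can be sketched in a few lines of code. This is a deliberately toy illustration of the argument, not how any real model is trained: the "scraped page" is a hard-coded string, and the "model" is nothing more than a published table of word counts – factual statistics about a work, not a copy of it.

```python
# Toy sketch of the scrape -> count -> publish pipeline described above.
# The page text below is a stand-in for a scraped document.
from collections import Counter
import re

page_text = "the cat sat on the mat and the cat purred"

# Step 2: "count stuff" -- tally word frequencies. This is analysis of
# the work, not reproduction of it.
word_counts = Counter(re.findall(r"[a-z']+", page_text.lower()))

# Step 3: "publish the facts" -- the output is factual data about the
# work's word distribution, the kind of thing a model encodes at scale.
print(word_counts.most_common(3))
```

A real generative model encodes vastly more of these distributional facts, in a multidimensional array rather than a flat table, but the legal character of each step is the same.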
And again, copyright absolutely does not prohibit you from publishing facts about copyrighted works. And again, no one should want to live in a world where someone else gets to decide which truthful, factual statements you can publish.
But hey, maybe you think this is all sophistry. Maybe you think I'm full of shit. That's fine. It wouldn't be the first time someone thought that.
After all, even if I'm right about how copyright works now, there's no reason we couldn't change copyright to ban training activities, and maybe there's even a clever way to wordsmith the law so that it only catches bad things we don't like, and not all the good stuff that comes from scraping, analyzing and publishing.
Well, even then, you're not gonna help out creators by creating this new copyright. If you're thinking that you can, you need to grapple with this fact: we have monotonically expanded copyright since 1976, so that today, copyright covers more kinds of works, grants exclusive rights over more uses, and lasts longer.
And today, the media industry is larger and more profitable than it has ever been, and also: the share of media industry income that goes to creative workers is lower than it's ever been, both in real terms, and as a proportion of those incredible gains made by creators' bosses at the media companies.
So how is it that we have given all these new rights to creators, and those new rights have generated untold billions, yet left creators poorer? It's because in a creative market dominated by five publishers, four studios, three labels, two mobile app stores, and a single company that controls all the ebooks and audiobooks, giving a creative worker extra rights to bargain with is like giving your bullied kid more lunch money.
It doesn't matter how much lunch money you give the kid, the bullies will take it all. Give that kid enough money and the bullies will hire an agency to run a global campaign proclaiming "think of the hungry kids! Give them more lunch money!"
Creative workers who cheer on lawsuits by the big studios and labels need to remember the first rule of class warfare: things that are good for your boss are rarely what's good for you.
The day Disney and Universal filed suit against Midjourney, I got a press release from the RIAA, which represents Disney and Universal through their recording arms. Universal is the largest label in the world. Together with Sony and Warner, they control 70% of all music recordings in copyright today.
It starts: "There is a clear path forward through partnerships that both further AI innovation and foster human artistry."
It ends: "This action by Disney and Universal represents a critical stand for human creativity and responsible innovation."
And it's signed by Mitch Glazier, CEO of the RIAA.
It's very likely that name doesn't mean anything to you. But let me tell you who Mitch Glazier is. Today, Mitch Glazier is the CEO of the RIAA, with an annual salary of $1.3m. But until 1999, Mitch Glazier was a key Congressional staffer, and in 1999, Glazier snuck an amendment into an unrelated bill, the Satellite Home Viewer Improvement Act, that killed musicians' right to take their recordings back from their labels.
This is a practice that had been especially important to "heritage acts" (which is a record industry euphemism for "old music recorded by Black people"), for whom this right represented the difference between making rent and ending up on the street.
When it became clear that Glazier had pulled this musician-impoverishing scam, there was so much public outcry, that Congress actually came back for a special session, just to vote again to cancel Glazier's amendment. And then Glazier was kicked out of his cushy Congressional job, whereupon the RIAA started paying more than $1m/year to "represent the music industry."
This is the guy who signed that press release in my inbox. And his message was: The problem isn't that Midjourney wants to train a Gen AI model on copyrighted works, and then use that model to put artists on the breadline. The problem is that Midjourney didn't pay RIAA members Universal and Disney for permission to train a model. Because if only Midjourney had given Disney and Universal several million dollars for training rights to their catalogs, the companies would have happily allowed them to train to their heart's content, and they would have bought the resulting models, and fired as many creative professionals as they could.
I mean, have we already forgotten the Hollywood strikes? I sure haven't. I live in Burbank, home to Disney, Universal and Warner, and I was out on the line with my comrades from the Writers Guild, offering solidarity on behalf of my union, IATSE Local 839, The Animation Guild, where I'm a member of the writers' unit.
And I'll never forget when one writer turned to me and said, "You know, you prompt an LLM exactly the same way an exec gives shitty notes to a writers' room. You know: 'Make me ET, except it's about a dog, and put a love interest in there, and a car chase in the second act.' The difference is, you say that to a writers' room and they all make fun of you and call you a fucking idiot suit. But you say it to an LLM and it will cheerfully shit out a terrible script that conforms exactly to that spec (you know, Air Bud)."
These companies are desperate to use AI to displace workers. When Getty Images sues AI companies, it's not representing the interests of photographers. Getty hates paying photographers! Getty just wants to get paid for the training run, and they want the resulting AI model to have guardrails, so it will refuse to create images that compete with Getty's images for anyone except Getty. But Getty will absolutely use its models to bankrupt as many photographers as it possibly can.
A new copyright to train models won't get us a world where models aren't used to destroy artists, it'll just get us a world where the standard contracts of the handful of companies that control all creative labor markets are updated to require us to hand over those new training rights to those companies. Demanding a new copyright just makes you a useful idiot for your boss, a human shield they can brandish in policy fights, a tissue-thin pretense of "won't someone think of the hungry artists?"
When really, what they're demanding is a world where 30% of the investment capital of the AI companies goes into their shareholders' pockets. When an artist is being devoured by rapacious monopolies, does it matter how they divvy up the meal?
We need to protect artists from AI predation, not just create a new way for artists to be mad about their impoverishment.
And incredibly enough, there's a really simple way to do that. After 20+ years of being consistently wrong and terrible for artists' rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans. That's why the "monkey selfie" is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.
And not only has the Copyright Office taken this position, they've defended it vigorously in court, repeatedly winning judgments to uphold this principle.
The fact that every AI created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them, or give them away for free. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission.
The US Copyright Office's position means that the only way these companies can get a copyright is to pay humans to do creative work. This is a recipe for centaurhood. If you're a visual artist or writer who uses prompts to come up with ideas or variations, that's no problem, because the ultimate work comes from you. And if you're a video editor who uses deepfakes to change the eyelines of 200 extras in a crowd-scene, then sure, those eyeballs are in the public domain, but the movie stays copyrighted.
But creative workers don't have to rely on the US government to rescue us from AI predators. We can do it ourselves, the way the writers did in their historic writers' strike. The writers brought the studios to their knees. They did it because they are organized and solidaristic, but also are allowed to do something that virtually no other workers are allowed to do: they can engage in "sectoral bargaining," whereby all the workers in a sector can negotiate a contract with every employer in the sector.
That's been illegal for most workers since the late 1940s, when the Taft-Hartley Act outlawed it. If we are gonna campaign to get a new law passed in hopes of making more money and having more control over our labor, we should campaign to restore sectoral bargaining, not to expand copyright.
Our allies in a campaign to expand copyright are our bosses, who have never had our best interests at heart, while our allies in the fight for sectoral bargaining are every worker in the country. As the song goes, "Which side are you on?"
OK, I need to bring this talk in for a landing now, because I'm out of time, so I'm going to close out with this: AI is a bubble and bubbles are terrible.
Bubbles transfer the life's savings of normal people who are just trying to have a dignified retirement to the wealthiest and most unethical people in our society, and every bubble eventually bursts, taking their savings with it.
But not every bubble is created equal. Some bubbles leave behind something productive. Worldcom stole billions from everyday people by defrauding them about orders for fiber optic cables. The CEO went to prison and died there. But the fiber outlived him. It's still in the ground. At my home, I've got 2gb symmetrical fiber, because AT&T lit up some of that old Worldcom dark fiber.
All things being equal, it would have been better if Worldcom hadn't ever existed, but the only thing worse than Worldcom committing all that ghastly fraud would be if there was nothing to salvage from the wreckage.
I don't think we'll salvage much from cryptocurrency, for example. Sure, there'll be a few coders who've learned something about secure programming in Rust. But when crypto dies, what it will leave behind is bad Austrian economics and worse monkey JPEGs.
AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?
We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.
If there had never been an AI bubble, if all this stuff arose merely because computer scientists and product managers noodled around for a few years coming up with cool new apps for back-propagation, machine learning and generative adversarial networks, most people would have been pleasantly surprised with these interesting new things their computers could do. We'd call them "plugins."
It's the bubble that sucks, not these applications. The bubble doesn't want cheap useful things. It wants expensive, "disruptive" things: Big foundation models that lose billions of dollars every year.
When the AI investment mania halts, most of those models are going to disappear, because it just won't be economical to keep the data-centers running. As Stein's Law has it: "Anything that can't go on forever eventually stops."
The collapse of the AI bubble is going to be ugly. Seven AI companies currently account for more than a third of the stock market, and they endlessly pass around the same $100b IOU.
Bosses are mass-firing productive workers and replacing them with janky AI, and when the janky AI is gone, no one will be able to find and re-hire most of those workers. We're going to go from dysfunctional AI systems to nothing.
AI is the asbestos in the walls of our technological society, stuffed there with wild abandon by a finance sector and tech monopolists run amok. We will be excavating it for a generation or more.
So we need to get rid of this bubble. Pop it, as quickly as we can. To do that, we have to focus on the material factors driving the bubble. The bubble isn't being driven by deepfake porn, or election disinformation, or AI image-gen, or slop advertising.
All that stuff is terrible and harmful, but it's not driving investment. The total dollar figure represented by these apps doesn't come close to making a dent in the capital expenditures and operating costs of AI. They are peripheral, residual uses: flashy, but unimportant to the bubble.
Get rid of all those uses and you reduce the expected income of AI companies by a sum so small it rounds to zero.
Same goes for all that "AI Safety" nonsense, that purports to concern itself with preventing an AI from attaining sentience and turning us all into paperclips. First of all, this is facially absurd. Throwing more words and GPUs into the word-guessing program won't make it sentient. That's like saying, "Well, we keep breeding these horses to run faster and faster, so it's only a matter of time until one of our mares gives birth to a locomotive." A human mind is not a word-guessing program with a lot of extra words.
I'm here for science fiction thought experiments, don't get me wrong. But also, don't mistake sf for prophecy. SF stories about superintelligence are futuristic parables, not business plans, roadmaps, or predictions.
The AI Safety people say they are worried that AI is going to end the world, but AI bosses love these weirdos. Because on the one hand, if AI is powerful enough to destroy the world, think of how much money it can make! And on the other hand, no AI business plan has a line on its revenue projections spreadsheet labeled "Income from turning the human race into paperclips." So even if we ban AI companies from doing this, we won't cost them a dime in investment capital.
To pop the bubble, we have to hammer on the forces that created the bubble: the myth that AI can do your job, especially if you get high wages that your boss can claw back; the understanding that growth companies need a succession of ever-more-outlandish bubbles to stay alive; the fact that workers and the public they serve are on one side of this fight, and bosses and their investors are on the other side.
Because the AI bubble really is very bad news, it's worth fighting seriously, and a serious fight against AI strikes at its roots: the material factors fueling the hundreds of billions in wasted capital that are being spent to put us all on the breadline and fill all our walls with high-tech asbestos.
(Image: Cryteria, CC BY 3.0, modified)

An Analysis of the Proposed Spirit Financial-Credit Union 1 Merger. The Consequences for the Credit Union System https://chipfilson.com/2025/12/an-analysis-of-the-proposed-spirit-financal-credit-union-1-merger/
Zillow deletes climate risk data from listings after complaints it harms sales https://www.theguardian.com/environment/2025/dec/01/zillow-removes-climate-risk-data-home-listings
After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know https://www.eff.org/deeplinks/2025/12/after-years-controversy-eus-chat-control-nears-its-final-hurdle-what-know
How the dollar-store industry overcharges cash-strapped customers while promising low prices https://www.theguardian.com/us-news/2025/dec/03/customers-pay-more-rising-dollar-store-costs
#20yrsago Haunted Mansion papercraft model adds crypts and gates https://www.haunteddimensions.raykeim.com/index313.html
#20yrsago Print your own Monopoly money https://web.archive.org/web/20051202030047/http://www.hasbro.com/monopoly/pl/page.treasurechest/dn/default.cfm
#15yrsago Bunnie explains the technical intricacies and legalities of Xbox hacking https://www.bunniestudios.com/blog/2010/usa-v-crippen-a-retrospective/
#15yrsago How Pac Man’s ghosts decide what to do: elegant complexity https://web.archive.org/web/20101205044323/https://gameinternals.com/post/2072558330/understanding-pac-man-ghost-behavior
#15yrsago Glorious, elaborate, profane insults of the world https://www.reddit.com/r/AskReddit/comments/efee7/what_are_your_favorite_culturally_untranslateable/?sort=confidence
#15yrsago Walt Disney World castmembers speak about their search for a living wage https://www.youtube.com/watch?v=f5BMQ3xQc7o
#15yrsago Wikileaks cables reveal that the US wrote Spain’s proposed copyright law https://web.archive.org/web/20140723230745/https://elpais.com/elpais/2010/12/03/actualidad/1291367868_850215.html
#15yrsago Cities made of broken technology https://web.archive.org/web/20101203132915/https://agora-gallery.com/artistpage/Franco_Recchia.aspx
#10yrsago The TPP’s ban on source-code disclosure requirements: bad news for information security https://www.eff.org/deeplinks/2015/12/tpp-threatens-security-and-safety-locking-down-us-policy-source-code-audit
#10yrsago Fossil fuel divestment sit-in at MIT President’s office hits 10,000,000,000-hour mark https://twitter.com/FossilFreeMIT/status/672526210581274624
#10yrsago Hacker dumps United Arab Emirates Invest Bank’s customer data https://www.dailydot.com/news/invest-bank-hacker-buba/
#10yrsago Illinois prisons spy on prisoners, sue them for rent on their cells if they have any money https://www.chicagotribune.com/2015/11/30/state-sues-prisoners-to-pay-for-their-room-board/
#10yrsago Free usability help for privacy toolmakers https://superbloom.design/learning/blog/apply-for-help/
#10yrsago In the first 334 days of 2015, America has seen 351 mass shootings (and counting) https://web.archive.org/web/20151209004329/https://www.washingtonpost.com/news/wonk/wp/2015/11/30/there-have-been-334-days-and-351-mass-shootings-so-far-this-year/
#10yrsago Not even the scapegoats will go to jail for BP’s murder of the Gulf Coast https://arstechnica.com/tech-policy/2015/12/manslaughter-charges-dropped-in-bp-spill-case-nobody-from-bp-will-go-to-prison/
#10yrsago Urban Transport Without the Hot Air: confusing the issue with relevant facts! https://memex.craphound.com/2015/12/03/urban-transport-without-the-hot-air-confusing-the-issue-with-relevant-facts/
#5yrsago Breathtaking Iphone hack https://pluralistic.net/2020/12/03/ministry-for-the-future/#awdl
#5yrsago Graffitists hit dozens of NYC subway cars https://pluralistic.net/2020/12/03/ministry-for-the-future/#getting-up
#5yrsago The Ministry For the Future https://pluralistic.net/2020/12/03/ministry-for-the-future/#ksr
#5yrsago Monopolies made America vulnerable to covid https://pluralistic.net/2020/12/03/ministry-for-the-future/#big-health
#5yrsago Section 230 is Good, Actually https://pluralistic.net/2020/12/04/kawaski-trawick/#230
#5yrsago Postmortem of the NYPD's murder of a Black man https://pluralistic.net/2020/12/04/kawaski-trawick/#Kawaski-Trawick
#5yrsago Student debt trap https://pluralistic.net/2020/12/04/kawaski-trawick/#strike-debt
#1yrago "That Makes Me Smart" https://pluralistic.net/2024/12/04/its-not-a-lie/#its-a-premature-truth
#1yrago Canada sues Google https://pluralistic.net/2024/12/03/clementsy/#can-tech

Madison, CT: Enshittification at RJ Julia, Dec 8
https://rjjulia.com/event/2025-12-08/cory-doctorow-enshittification
Hamburg: Chaos Communications Congress, Dec 27-30
https://events.ccc.de/congress/2025/infos/index.html
Denver: Enshittification at Tattered Cover Colfax, Jan 22
https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937
We have become slaves to Silicon Valley (Politics JOE)
https://www.youtube.com/watch?v=JzEUvh1r5-w
How Enshittification is Destroying The Internet (Frontline Club)
https://www.youtube.com/watch?v=oovsyzB9L-s
Escape Forward with Cristina Caffarra
https://escape-forward.com/2025/11/27/enshittification-of-our-digital-experience/
Why Every Platform Betrays You (Trust Revolution)
https://fountain.fm/episode/bJgdt0hJAnppEve6Qmt8
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Memex Method," Farrar, Straus, Giroux, 2026
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Currently writing:
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
A Legal and Political Roundup [The Status Kuo]
Let’s do a roundup of four major stories and developments.
First, the radical Supreme Court majority deployed its “shadow” docket to overturn a three-judge panel, which had ruled Texas’s mid-decade redistricting an illegal racial gerrymander.
Second, Admiral Mitch Bradley testified behind closed doors in Congress, showing a video of strikes that destroyed a vessel off the coast of Trinidad on September 2, first killing nine crewmembers, then two more in a double-tap strike 41 minutes later.
Third, a federal grand jury refused to indict New York State Attorney General Letitia James on bogus mortgage fraud charges.
Fourth, I’ll close with some somewhat overdue thoughts on the special election that took place earlier this week in Tennessee’s Seventh Congressional District.
Ready? Let’s dive in.
The gerrymander green light
Yesterday evening, the six Republican-appointed justices of the Supreme Court decided, to no one’s surprise, to help out their own party. They ruled, via the infamous emergency or “shadow” docket, that Texas should be able to use its proposed redrawn map. They arrived at this decision despite a carefully-considered decision by a Trump judicial appointee, who had struck the map down as an illegal racial gerrymander.
The decision is legally unsupportable for at least two big reasons.
First, an appellate court—even SCOTUS—is supposed to defer to the factual findings of the trial court unless there was some kind of clear error. Here, the findings of the panel were well supported by the evidence, but the radical justices didn’t stay in their lane. Instead, they substituted their own assessment of the facts because it suited them. It’s further evidence of their bad faith and partisan capture.
Second, the justices relied on a case called Purcell which generally holds that courts shouldn’t interfere with a redistricting plan when it happens too close to an election. But we’re still a year out from the midterms, and Texas could easily have gone back to its old maps without voters being confused or even realizing it. Moreover, the decision invites mischief going forward. States could intentionally create illegal maps knowing they get a freebie because SCOTUS simply won’t allow courts to step in to fix them.
So what now? SCOTUS believes it has handed the GOP up to five more congressional seats. But California moved earlier this year to counter that with its own partisan gerrymander, essentially negating that move by Texas. The California map is being challenged in federal court too. But there’s a bit of a silver lining: Three of those six radical justices—Thomas, Alito and Gorsuch—indicated that California’s responsive gerrymander was also a political one, at least suggesting that it is also presumably beyond the reach of the federal courts.
Let’s hope that reasoning holds.
The Court put its finger on the scale for the GOP. It’s done so before, and it will do so again, at least until we force reform by way of court expansion and term limits. But as I’ll discuss in my assessment of the election in TN-7 this past Tuesday, the GOP may have made a bad miscalculation—also known as a “dummymander”—by drawing new congressional maps that put its own House incumbents at greater risk.
Texas’s aggressive moves have been ratified by a biased Court, but the state could wind up deeply regretting them. Whether it will is now up to the voters.
Four strikes, you’re out?
Yesterday, Admiral Bradley showed a military video of four separate strikes upon the unarmed vessel that the Defense Department claimed was being used to smuggle drugs. The video raised serious questions over the legality of the second strike, given that the survivors posed no threat. In fact, as Bradley admitted, the survivors were shirtless, without communications, and stranded in the middle of the ocean with their vessel upside down.
Bradley’s explanation for why he ordered more strikes upon the vessel was, as one viewer noted, “insane.” Per the New York Times,
In the briefings, military officials are said to have told lawmakers they assumed the hull might be afloat because it still contained packs of cocaine. They thought that the survivors might eventually have managed to float back to Venezuela, allowing them to try again to deliver that cocaine, or that another boat could come retrieve it. They assumed the survivors could be communicating.
This is a record scratch moment. In what world does our military give an order to kill survivors who might be able to turn their boat back over and might then be able to get “drugs” back to their country? This is all speculative, to start with. But it also seeks to transform a non-capital offense into a capital one, all because the White House has labeled the targets “terrorists.”
But these aren’t “terrorists” with bombs and guns posing a threat to the U.S. Their only alleged crime is drug trafficking, and there isn’t even any evidence of that. Nor was there evidence of any of the things Bradley claimed he was concerned about. For example, he ordered the strike but
the video did not show any radios or satellite phones, according to the people familiar with the briefings, and a surveillance plane apparently did not spot any nearby boat.
Despite this, some GOP senators who viewed the footage left the meeting with a clear intent to protect the White House, Secretary Hegseth and Bradley. Sen. Tom Cotton (R-AR) called the attack “righteous” and “highly lawful and lethal.” He claimed the video showed two survivors trying to flip a boat “loaded with drugs bound for the United States” and declared the double-strikes were “exactly what we’d expect our military commanders to do.”
But Democratic senators and representatives seemed shaken by what they saw. Rep. Jim Himes (D-CT), the top Democrat on the House Intelligence Committee and someone familiar with counterterrorism and covert operations, declared the footage was “one of the most troubling things I’ve seen in my time in public service.” He added, “You have two individuals in clear distress without any means of locomotion with a destroyed vessel who were killed by the United States.” Sen. Jack Reed (D-RI), the ranking Democrat on the Armed Services Committee, said the briefing “confirmed my worst fears about the nature of the Trump administration’s military activities.”
A grand jury returns a “no bill” on James
Just two weeks ago, a judge disqualified Trump lackey Lindsey Halligan from her position as U.S. attorney and dismissed an indictment against New York State Attorney General Letitia James. That dismissal was without prejudice, meaning that in theory the Justice Department could try again. So it did, this time reportedly by bringing in U.S. attorneys from Missouri because no officials in the Eastern District of Virginia would risk putting their names and reputations behind such an indictment.
It bears repeating that it is usually incredibly easy to obtain an indictment. The bar is by definition low. You only need “probable cause” to believe someone has committed a federal offense. And generally, prosecutors only bring cases where they believe they can get a unanimous jury to convict. So it’s exceedingly rare for a grand jury to find no probable cause, reject the government’s case entirely, and return a “no bill” for indictment.
But that’s exactly what happened here. And it is a very bad development for the Justice Department, which is still desperately attempting to carry out Donald Trump’s campaign of political retribution on his enemies.
Grand jurors in the Virginia and D.C. area now understand that they are being used as pawns in Trump’s game, and they are refusing to march forward as requested. What’s more, if federal prosecutors ever do find a grand jury willing to buy their B.S., the odds are very high that they will have a very hard time getting 12 jurors to convict James.
Any reasonable prosecutor would understand this. But we aren’t dealing with reasonable people.
A true test in TN-7
Election watchers generally tend to discount House special election results. After all, they usually occur in “safe” districts for one party or the other because that’s where members are plucked or retire from. Special elections generally don’t attract the attention of the national parties or outside money. And usually, only the highest propensity voters show up for them.
That’s what made TN-7 all the more fascinating, especially for election data crunchers. In this case, even though the district should have been a “safe” R+22 district based on the 2024 results, it was the first special election to take place after the Democrats romped in November’s general election. So the national parties were paying close attention, and lots of outside money flooded in from both sides. Importantly, all this attention and money meant turnout wound up being on a par with the midterms of 2022.
That made the race in TN-7 the most “midterm-like” special election held this year. And guess what? Democrats moved the needle by a lot. It was a 13-point swing their way, even with all that outside money and all that national attention.
In short, both the Dems and the GOP went all in on TN-7, but Republicans still couldn’t stop a double digit move to the left.
The effect wasn’t limited to certain counties. The entire district shifted blue, across the board, in every county. But the biggest blue shift, a +20 move, landed in the bluest county in the district, Davidson. Check out the third column in this chart and, in particular, row two.
To my eye, TN-7 is the clearest indication from any special election result that a massive Blue Wave, if not a Blue Tsunami (a “Blu Tsu”), is building. If this contest reflected how the midterms would go, and voters everywhere were to shift an average +12 toward the Democrats in those midterms, that would wipe out Republicans in nearly all of their newly gerrymandered districts.
Granted, we are a year out, and that’s an eternity in politics. Many things could happen, from war to civil unrest to a stock market crash and full-blown recession. Between now and then, you might even wake up to your phone blowing up over “the big news” that he’s finally gone. (It’s okay to smile.)
But assuming Trump is still dozing through his cabinet meetings, building his golden ballroom and waving off affordability as a “hoax,” and assuming the GOP still has no fix for soaring healthcare costs while Medicaid and food stamps are slashed, these results portend historic losses for the party now in power.
Not even corrupt referees, like we now have on SCOTUS, would be able to save them from being washed away by the coming wave.
John Oliver Auction Raises $1.5 Million For Public Broadcasting [Techdirt]
Not that long ago, John Oliver’s Last Week Tonight did a good bit on why public broadcasting is important. The segment features a lot of insight from UPenn media professor Victor Pickard, whose work on the (many) problems with modern consolidated U.S. corporate media has always been essential reading:
But Oliver also walked the talk. Oliver and his staff subsequently held an auction for all sorts of notable items from the show’s history, including a Bob Ross painting, a prop replica of former Trump FCC boss Ajit Pai’s goofy giant coffee mug, Russell Crowe’s jock strap, a bidet signed by a member of GWAR, and a giant gold-plated re-creation of President Lyndon B. Johnson’s balls:
“All told, the auction raised nearly $1.54 million for the Public Media Bridge Fund, which is assisting local public broadcasters in temporarily finding new funds in the wake of the CPB closure.”
After the White House falsely deemed NPR and PBS a “grift” last April, Republicans successfully pushed for a Senate vote that eliminated the CPB’s entire budget in July. That vote rescinded the $1.1 billion that Congress had allocated to CPB to fund public broadcasting during 2026 and 2027, throwing the already shaky U.S. public broadcasting system into complete existential collapse.
As we’ve noted previously, authoritarians loathe journalism. But they really loathe public broadcasting because, in its ideal form, it untethers journalism from the often perverse financial incentives inherent in our consolidated, billionaire-owned, ad-engagement based corporate media.
A corporate media that is easily bullied, cowed, and manipulated by bad actors looking to normalize, downplay, or validate no limit of terrible corruption and bullshit (see: CBS, Washington Post, the New York Times, the LA Times, and countless others). A media that has increasingly stopped serving the public interest in loyal dedication to our increasingly unhinged extraction class.
One of the real harms of the cuts has been to already struggling local U.S. broadcasting stations. While NPR doesn’t really take all that much money from the government anymore (roughly 1% of NPR’s annual budget comes from the government), the CPB distributed over 70 percent of its funding to about 1,500 public radio and TV stations.
Many of those news stations operated in places where quality, local news is difficult if not impossible to find. Local papers have usually either closed or been purchased by soulless hedge funds that are buying papers, stripping them for parts, and hollowing out and homogenizing their coverage. Most U.S. “local news” is dominated by right wing propaganda pseudo-journalism broadcasters like Sinclair Broadcasting.
U.S. “public broadcasting” was already a shadow of the true concept after years of being demonized and defunded by the right wing, so even calling hybrid organizations like NPR “public” is a misnomer. Still, the underlying concept remains an ideological enemy of authoritarian zealots and corporations alike, because they’re very aware that if implemented properly, public media often provides a challenge to their well-funded war on informed consensus, as Pickard has long explained.
DC lawmakers and regulators (including Democrats) have been an absolute embarrassment on building and maintaining any sort of coherent media reform strategy. The evidence of that apathy has never been less subtle. So a hearty thank you to John Oliver for giving a shit.
Belgium’s Latest Pirate Site-Blocking Order Spares DNS Providers [TorrentFreak]
Over the past few months, Belgium has issued several site-blocking orders targeting hundreds of piracy-linked domain names.
These blockades follow a newly instated two-step process. A local court first issues a blocking order, after which a special government body determines how it will be implemented. This process aims to prevent errors and overblocking.
While site blocking is common in Europe, these new Belgian blockades go beyond the typical ISP blockade. Similar to France and Italy, the orders were also directed at third-party public DNS resolvers.
The first implementation order, issued by the Belgian Department for Combating Online Infringement in April, required both ISPs and DNS resolvers to restrict access to pirate sites. Specifically, Cloudflare, Google, and Cisco’s OpenDNS were ordered to stop resolving over 100 pirate sites or face fines of €100,000 per day.
This order prompted significant pushback, most notably from Cisco, which ceased operating its OpenDNS service in Belgium soon after the order was announced.
In July, another order by the Belgian authority ordered blockades of shadow library websites, including Libgen, Zlibrary, and Anna’s Archive. This sweeping court order required ISPs to take action and also involved other intermediaries, such as hosting providers, search engines, and DNS services.
The underlying court order also called for a broad blockade of the Internet Archive’s Open Library service. While that was ultimately prevented, the involvement of a broad range of intermediaries caused concern about the escalating scope of the blocking orders.
On November 26, the Belgian Department for Combating Online Infringement published a new blocking implementation order. While this effectively adds dozens of new domains to the Belgian blocklist, the scope of this order is surprisingly limited.
Instead of casting a wide net, the order strictly targets Belgium’s five major Internet Service Providers: Proximus, Telenet, Orange Belgium, DIGI Communications Belgium, and Mobile Vikings.

The list of “addressees” no longer includes the DNS resolvers Google, Cloudflare, and Cisco, which were central targets in the April blocking order. There is no mention of hosting services, advertisers, or other intermediaries either.
The official implementation order does not mention the rightsholder(s) who requested the blocking measures, nor does it mention the targeted sites. However, the blocked domains are published in a separate spreadsheet showing that 1337x, Fmovies, Soap2Day, and Sflix branded domains are among the key targets.

Since these pirate sites often switch domain names to evade enforcement, rightsholders can submit a new list of mirror sites or proxies once per week, capped at 50 new domains. When these are approved by the Belgian Department, ISPs have five working days to update the blocklist.
The decision to exclude DNS resolvers from this latest order is likely not a coincidence. It might very well be a direct consequence of the legal pushback Cisco initiated earlier this year, when it appealed the April blocking order at the Brussels Business Court.
This appeal was not without result, as the court suspended enforcement of that blocking order against Cisco in July, after which OpenDNS became available again in Belgium.
“The OpenDNS service has been reactivated in Belgium following a decision by the Brussels court to suspend enforcement of the order requiring Cisco to implement DNS blocking measures. The suspension of the order is pending a final ruling in the legal proceedings which remain ongoing,” a Cisco representative wrote in a community update.
To find out more about the suspended blocking measures, we reached out to the Belgian Department for Combating Online Infringement, which did not respond to our inquiry. Without further details, we don’t know whether the suspension also applies to other DNS resolvers. Confusingly, the official transparency portal makes no mention of an appeal at all.
It is likely, however, that since the legality of the blocking orders against third-party DNS resolvers is still being litigated, rightsholders have chosen to limit their blocking requests to ISPs. This would suggest that it’s a pause, not a formal retreat.
—
A copy of the latest blocking implementation order, published by the Department for Combating Infringements of Copyright and Related Rights Committed Online and the Illegal Exploitation of Online Games of Chance on the 26th of November, 2025, is available here (pdf).
The full blocking spreadsheet, last updated November 26, is available at the Belgian government website.
From: TF, for the latest news on copyright battles, piracy and more.
ACIP Meets To Decide If More Newborns Need To Catch Hepatitis B [Techdirt]
ACIP is meeting this week, which means we all get to clench our sphincters as we await whatever small, medium, or large sized horrors will come out of this panel of clowns.
It wasn’t always this way. ACIP, and the larger CDC, used to be the world standard when it came to government bodies dedicated to fighting infectious diseases. RFK Jr. did away with that earlier this year, when he dismissed every member of ACIP and installed a group composed mostly of Dr. Nicks from the Simpsons in their place.
The focus of the agenda this week will be the vaccination schedule for hepatitis B, particularly the CDC’s long-held guidance for vaccinations to begin within 24 hours of birth. It’s really, really important to note that CDC guidance on this doesn’t take the form of a mandate. Parents have a choice on the timing of the vaccination. Instead, the CDC guidance does two primary things: it mandates coverage of the vaccine by insurance companies, and it informs medical professionals on what to recommend to parents, who understandably largely follow their doctors’ advice on the matter.
Because Kennedy has commented in the past that he believes this vaccine is responsible for autism diagnoses, and because ACIP is staffed with his handpicked clowns, the medical community is holding its breath to see what decisions are made this week. Since the CDC issued its vaccination guidance in 1995, hep B infections among infants have dropped dramatically and the resulting liver cancer in children has essentially gone away. Despite this, and despite just how brutal hep B is as a disease, Kennedy has been coming out against immunization, wielding misinformation as per usual.
On Tucker Carlson’s podcast in June, Kennedy falsely claimed that the hepatitis B birth dose is a “likely culprit” of autism.
He also said the hepatitis B virus is not “casually contagious.” But decades of research show the virus can be transmitted through indirect contact, such as when traces of infected fluids like blood enter the body after people share personal items like razors or toothbrushes.
Hepatitis B causes incredible pain, cancer, and death. In children. And Kennedy is wildly wrong; it is incredibly contagious and particularly resilient on surfaces. And, again, this vaccine is still voluntary for parents at birth. There is no government mandate for vaccination, only the recommended vaccination schedule.
Now, ACIP may be discussing the use of combo shots, as it has done in the recent past. That’s still fairly dumb, but it would be a far cry better than altering the recommendations for the first-24 hours immunization, which is a single vaccine, unpaired with any other. But ACIP is no longer trustworthy.
And that’s not me saying it. Take it from Republican Senator and do-nothing coward Bill Cassidy, who both had a heavy hand in getting Kennedy confirmed to HHS and who can’t be bothered to do more than say words about all the harm that confirmation is causing.
Sen. Bill Cassidy (R-La.) on Thursday called a federal vaccine advisory committee “totally discredited” ahead of a vote on whether to change hepatitis B vaccine guidelines, an issue very close to the Louisiana physician. Writing Thursday on the social platform X, Cassidy specifically decried Aaron Siri, a prominent anti-vaccine lawyer who is presenting before the committee this week.
“Aaron Siri is a trial attorney who makes his living suing vaccine manufacturers. He is presenting as if an expert on childhood vaccines. The ACIP is totally discredited. They are not protecting children,” Cassidy wrote.
Neither are you, Senator. If you are interested in doing so, you can call for articles of impeachment against RFK Jr. today. You’ll have plenty of support from the other side of the aisle, and likely a decent amount from your own.
I write this on Thursday and ACIP has already met. Because everything Kennedy touches is chaos, however, the panel moved its hep B vote to tomorrow, Friday, due to the panel not actually knowing what the fuck it was voting on.
At one point in Thursday’s session, committee member Dr. Joseph Hibbeln said that the group had seen three different versions of questions to vote on in the past 72 hours. A technical issue prevented the new voting language from being put up on slides. The presentation was later moved to the end of the agenda, to be displayed just before the vote. There were questions of how many questions members would be asked to vote on. There were no hard copies of the language available.
“We’re trying to evaluate a moving target,” Hibbeln said.
Panel members presented information on the prevalence of acute and chronic hepatitis B, and discussed transmission and safety data. Former board members and liaisons to medical organizations sharply criticized the presentations and said some data was mischaracterized.
Dr. Jason Goldman, liaison to the ACIP for the American College of Physicians, called the meeting “completely inappropriate” and accused the panel of “wasting taxpayer dollars by not having scientific, rigorous discussion on issues that truly matter.” Goldman also highlighted that the hepatitis B birth dose is not mandated and that parents are encouraged to make decisions in consultation with their doctor.
Chaos, confusion, misinformation, and so on. This is American health in RFK Jr.’s America. MAHA has become how it sounds phonetically: a laugh track. A joke. And a deeply unfunny joke at that.
So now we wait for tomorrow to see just what horrors this gravel-voiced Cthulhu of healthcare has in store for us. It seems the best we can hope for is probably advocacy for individual vaccines versus combo shots. But I fear it’s going to be much, much worse than that. I’ve never seen a child writhing in pain as he or she dies from liver complications due to hepatitis B.
And I pray I never have to.
Four Horsewomen of the GOP Apocalypse [The Status Kuo]
Speaker Mike Johnson faces a political apocalypse that could end his House majority and speakership early. And it’s largely thanks to four horsewomen who are busy fomenting disarray and destruction in his conference.
There’s crazy Marjorie Taylor Greene of Georgia, one of four crucial signatories to the Epstein Files discharge petition who also announced her early retirement, imperiling that narrow House majority.
Riding in her tracks is equally crazy Nancy Mace of South Carolina, who also signed the Epstein petition. The attention-loving Mace reportedly told colleagues that she’s sick of Johnson and may resign early, too.
A surprise flank attack came from Elise Stefanik, a member of the GOP House leadership. Stefanik recently launched an ugly public spat and declared Johnson wouldn’t survive a roll-call vote.
Rounding things out is Anna Paulina Luna of Florida, who normally sounds batshit crazy but in a moment of lucidity filed yet another discharge petition to ban congressional stock trading.
Like his predecessor Kevin McCarthy, and thanks in part to these four women reps, Speaker Johnson faces huge challenges in maintaining his slim majority, his control of House legislative procedures and the Speakership itself.
The threat of early resignations
This congressional session, Republicans have consistently held only the barest of House majorities. And their 220 votes in that chamber have felt more like 219, with Rep. Thomas Massie (R-KY) a constant thorn in the side of Speaker Johnson and Donald Trump. Massie has been a consistent “no” on all the major spending bills and was a co-sponsor of the Epstein Files Transparency Act along with Rep. Ro Khanna (D-CA).
With Greene departing Congress on January 5, 2026, that number will effectively drop to 218. And more early GOP resignations may be in the works, now that Greene has opened the door. The New York Times reported yesterday, for example, that Mace is also eyeing the exits because of Johnson’s leadership:
Representative Nancy Mace of South Carolina has told people she is so frustrated with the Louisiana Republican and sick of the way he has run the House — particularly how women are treated there — that she is planning to huddle with Representative Marjorie Taylor Greene of Georgia next week to discuss following her lead and retiring early from Congress.
After this news came out, Mace denied the report, but there are apparently people who told the Times otherwise.
The threat of early resignations looms especially large now that it’s clear Republicans will have a very tough time holding the majority. The special elections and the general election last month all point toward a Blue Wave with a shift in the double digits. Should that occur, it would even take out many of the more heavily gerrymandered GOP-friendly districts.
Nor is the outlook likely to improve. On the contrary, anger at House Republicans over issues like affordability is only likely to grow once the GOP’s refusal to extend ACA premium subsidies and its huge cuts to Medicaid and food assistance programs hit in 2026.
There is even a world where early GOP retirements create an opportunity for the Democrats to retake the majority months before the midterm elections. I don’t want to speculate further here, but with only a handful of seats between the parties, any stampede by the GOP could trigger exactly such a disastrous outcome for Johnson and the current Republican majority.
Challenges to leadership
Short of losing the actual majority, Johnson faces the possibility of a leadership challenge from his own party. That was laid bare by Stefanik during her surprisingly sharp attacks upon Johnson over the past few days.
Stefanik was outraged, or so she says, because a provision she wanted inserted into the National Defense Authorization Act didn’t make it into the draft “four corners” legislation. That provision would have required the FBI to notify a member of Congress anytime an investigation was opened on that member—a curious provision for Stefanik to so publicly insist be jammed into the bill.
She wound up winning that fight with Johnson, but in the process dug some pretty deep claw marks. Stefanik criticized Johnson, calling him an ineffective leader who was losing control of his party and its members going into the midterms.
“He certainly wouldn’t have the votes to be speaker if there was a roll-call vote tomorrow,” Stefanik warned in an interview with The Wall Street Journal. “I believe that the majority of Republicans would vote for new leadership. It’s that widespread.”
Stefanik’s threat looms large because she is within the top ranks of Republican House leadership. Indeed, Johnson gave her a largely made-up position after she was put up, then taken down, for consideration to be Trump’s U.N. Ambassador.
Johnson tried to turn down the heat, telling reporters, after the two came to agreement on her NDAA provision being added, that “I never understood what all the disturbance was about.” Johnson attributed their spat to a breakdown in communication.
But the threat to file a motion to vacate is now out there. Johnson’s rubber stamping of everything Trump wants has made the GOP within the House of Representatives nearly superfluous: so much so that the House could stay on break, absent from Washington for months, and Johnson just didn’t care. That’s not what many of these members think they signed up for and ran on.
And as another continuing resolution to fund the government looms in January, with another partial government shutdown possible if they cannot cobble together the votes to pass appropriations bills or yet another extension, Johnson may face a full revolt leading to his ouster. It’s not as if the House GOP hasn’t shown itself willing in the past to collapse into rudderless leadership territory.
Loss of control of the floor
Short of losing his majority or his speakership, Johnson is keenly aware that he is losing his grip on what makes it to the House floor. Under normal circumstances, a House Speaker controls the legislative agenda by controlling the powerful Rules Committee, where bills can get voted out and onto the floor with certain “rules” attached, or where they can languish and die.
There are two ways to sink the Speaker’s ambitions, neither of which would have ever happened when Nancy Pelosi was in charge.
The first is to take down the rule on a bill sitting in the Rules Committee. We saw this happen multiple times with the far-right House Freedom Caucus, which used to signal its displeasure with House leadership by voting down the rules on bills that Speaker Johnson wanted to move forward.
And we just saw it threatened again, this time by Stefanik. As Punchbowl News reported at the height of the public spat between her and Johnson:
Stefanik is so frustrated that she’s prepared to tank the must-pass defense bill — approved by lawmakers every year for more than six decades — if the speaker doesn’t include a provision requiring the FBI to alert Congress if it opens a counterintelligence investigation into an elected official or candidate. Democrats are opposed to this provision.
“I’ll take down the rule,” Stefanik told us in an interview. Stefanik has made this message clear to House GOP leaders as well.
A second way to thwart the Speaker and cause loss of control of the floor is the now infamous Discharge Petition. The one that forced the Epstein Files Transparency Act to the House floor was so embarrassing to Johnson that he actively refused to seat Rep. Adelita Grijalva (D-AZ), who would be the 218th vote on the petition, for over 50 days. When that petition finally did its thing, it precipitated an avalanche of GOP defections that caused Trump to preemptively grant permission to Republicans to vote for the bill, even after he had worked so hard for so long to stop it.
Now representatives like Luna are rubbing further salt in that wound by filing even more discharge petitions. In so doing, Luna is telling Johnson that she doesn’t care what his legislative agenda or timeline is, because she is willing to press ahead with her own.
Johnson: women “can’t compartmentalize”
Johnson recently claimed women “can’t compartmentalize” their thoughts. He probably regrets saying this and infuriating his detractors even more.
The possible resignations, challenges to leadership and blatant procedural bypasses of Johnson are collectively converging to cast him as ineffective, vulnerable and out of touch. This will make the task of holding the Republican Party together as they face the storm of next year’s midterms extremely challenging.
And Johnson is learning in real time that, despite the mental shortcomings he claims they have, Republican women apparently can direct their anger just fine.
Ctrl-Alt-Speech: Stuck In The Middleware With Youth [Techdirt]
Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.
Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Vaishnavi J, former head of youth policy at Meta and founder and principal of Vyanams Strategies, a product advisory firm that helps companies, civil society, and governments build safer, age-appropriate experiences. Prior to founding Vys, she led video policy at Twitter, built its safety team in APAC and was Google’s child safety policy lead in APAC. Together Ben and Vaishnavi discuss:
Like Apple, Google’s AI News Tech Misinterprets Stories, Generates Gibberish Headlines [Techdirt]
Despite all the recent hype about “AI,” the technology still struggles with very basic things and remains prone to significant errors. Which makes it maybe not the best idea to rush the nascent technology into widespread adoption in industries prone to all sorts of deep-rooted problems already (like say, health insurance, or journalism).
We’ve already seen how news outlets have gotten egg on their faces by using AI “journalists” who completely make up sources, quotes, facts, and other information. But earlier this year, Apple also had to pull its major news AI system offline after it repeatedly failed to generate accurate headlines, and in many instances just fabricated major events that never happened (whoops!).
Google has recently also been experimenting with letting AI generate news headlines for its Discover feature (the news page you reach by swiping right on Google Pixel phones), and the results are decidedly… mixed. The technology, once again, routinely misconstrues meaning when trying to sum up news events:
“I also saw Google try to claim that “AMD GPU tops Nvidia,” as if AMD had announced a new groundbreaking graphics card, when the actual Wccftech story is about how a single German retailer managed to sell more AMD units than Nvidia units within a single week’s span.”
Other times, it just produces gibberish:
“Then there are the headlines that simply don’t make sense out of context, something real human editors avoid like plague. What does “Schedule 1 farming backup” mean? How about “AI tag debate heats”?”
Google has already redirected a ton of advertising revenue away from journalists who do actual work, and toward its own synopsis and search tech. Now it’s effectively rewriting the headlines editors and journalists (the good ones, anyway) spend a lot of time working on to try and be as accurate and inviting as possible. And it’s doing an embarrassingly shitty job of it.
Not that the media companies themselves have been doing much better. Most major American media companies are owned by people who see AI not as a way to improve journalism’s quality and efficiency, but as a path toward cutting corners and undermining labor.
Meanwhile, in the quest for massive engagement at impossible scale, tech giants like Meta and Google have simply stopped caring so much about quality and accuracy. The results are everywhere, from Google News’ declining quality, to substandard search results, to the slow decline of key, popular services, to platforms filled with absolute clickbait garbage. It’s not been great for informed consensus or factual reality.
You’d like to think that ultimately we emerge from the age of slop with not just better technology, but a better understanding of how to use and adapt to it. But the problem remains that most of the folks dictating the trajectory of this emerging technology have no idea what they’re doing, have prioritized making money over the public interest, or are just foundationally shitty human beings bad at their jobs.
A Surveillance Mandate Disguised As Child Safety: Why The GUARD Act Won’t Keep Us Safe [Techdirt]
A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.
The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.
EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.
The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.
The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.
By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn't make them safer; it just keeps them uninformed and unprepared for adult life.
The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.
Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.
Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.
EFF has long documented the dangers of age-verification systems:
As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.
Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses—including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.
The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.
Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.
Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.
While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.
In other words, protecting young people's online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to do it.
The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.
Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.
Originally posted to the EFF’s Deeplinks blog.
How to Create and Sell Personalized Planners, Journals, and Notebooks [Write, Publish, and Sell]

When you design and customize a planner or journal, you open up a new product line that fits almost any niche. From fitness logs to meal planners, from writing trackers to personalized journals, these low-content books let you reach new buyers and build your brand.
Whether that means building a business around selling custom notebooks, supplementing your other products or services by selling planners, or simply making yourself a journal for your own use, this guide will help you understand how to self-publish a planner, journal, or notebook using Lulu.
You could buy a mass-produced planner from the store. But when you design your own, you have full customization and control. You pick the size and layout, add your own prompts, and control your branding. A lot of writers, artists, and creators want their own notebooks and designs for themselves. A custom planner or journal also makes a terrific gift around the holidays.
Unlike store-bought planners, a self-designed planner or notebook can be tailored precisely to your needs or the preferences of your audience, making it perfect for individuals with niche content and businesses looking for unique, branded products.
And with this control comes the opportunity for monetization. For example, if you’re between book releases, you could create a custom planner or notebook. The cover might feature a rendering of your most popular character from past books, and you could include references to your past stories (or previews of an upcoming book) in the planning pages.
For businesses, this opportunity is even bigger. A simple journal with your company logo and branding makes amazing marketing material. Likewise, depending on the products you sell, you might offer accompanying notebooks or workbooks (ideal for class creators), or a companion notebook to go with an annual guide or manual.
Additionally, you can use Lulu’s print-on-demand to create a variety of custom notebook or planner designs:
You’ve just created an additional means of earning money, marketing to your audience, and keeping your readers happy!
Even with all of our digital notetakers and AI assistants, taking notes by hand is still a common and effective method. Numerous studies over the years have shown that taking handwritten notes leads to better retention and test scores (among students).
Anyone who likes to take notes, create plans, or just journal can use Lulu to print their own custom designs. But these low-content books are most often created by:
Low-content books share some basic features—blank or lined pages, maybe trackers or prompts. But because you’re designing them, you can tailor everything: layout, branding, line spacing, dot grid or blank pages, and anything else you can imagine. In short, you’re creating a product you can sell.
If you want a planner, it can be daily, weekly, or monthly. And it might have columns for the days or use a full page for each day. You might have specific events (like dinner, going to the gym, birthdays, and holidays) pre-set. Or it might be a blank slate, ready for your fans to fill in.
Likewise, you might create a journal specifically for sleep tracking or meal planning. Notebooks can be blank, lined, dot grid, or some combination of the three!
Here’s a quick guide to building your first custom planner, journal, or notebook using Lulu.
The best way to build your project is to start with a template. You can find lots of free planner, journal, and notebook pages on Lulu's Resources page—simple starter designs you can download for free and easily tailor to your specific needs.
Simple designs are best because they leave plenty of room for customization. That means you'll need a platform for editing and customizing your pages.
You can find an array of tools that will help you customize your planner, journal, or notebook pages (and design your cover when the time comes!). Here are my top four picks for easy-to-use design software:
Once you’re set up with a template and design platform, you’re ready to create custom pages! For notebooks, planners, and journals, there are a few common designs that consumers tend to be interested in.
Export your final custom design as a PDF. Always look for ‘print-ready’ or other high-quality print settings. This will ensure your file meets Lulu’s print requirements and looks amazing once printed. Here’s an InDesign tutorial we created to show you how to set up and export your files for printing on Lulu.
Once your print-ready PDF is ready, sign in to your Lulu account. Create a new project and select ‘Print Books’ for the project type.

Add some information about your notebook, like a title and language. If you intend to sell on the Lulu Bookstore or use Lulu’s retail distribution, you’ll add copyright, ISBN, and other metadata too.
On the Design step, you’ll choose your trim size, binding, and paper type. Upload your interior file and cover. Set up your description and keywords (important for discoverability), and add payees if you’re using Lulu’s retail options. Then, please, for the love of everything paper and ink, order a proof copy.
If you’re happy with the print, your custom planner is ready to sell on Lulu, through retailers, and on your own site with Lulu Direct.
I love notebooks. I’ve written about this before, but taking notes by hand is one of my favorite things to do. And I love planners, even if I’m terrible at committing to one and using it every day (like I should).
With so many free templates and the ease with which you can customize your project, you basically have endless options. And if you can’t find one that suits your needs, you can always create your own!
To help you create, here are a few of my favorite print-on-demand planner, notebook, and journal ideas.

Create a daily planner where you can include your page count goals, notes about the scene you'll be working on, and motivational quotes. Writing journals take many forms, but a planner or task manager is not one you see often.
One of the more common planner designs, you can create a journal and planner with exercises built in. If you’re a fitness instructor or just passionate about physical health, an exercise planner might be the perfect way to share your routine!
Not that I want to suggest teachers do any more work than they already do, but the opportunity to facilitate learning with a custom planner is huge. Each student could have a planner in hand with lesson details and assignments already included. Particularly with digital and distance learning becoming more common, having a clear view of the semester and the assignments is vital. If you want extra credit, you could even personalize each planner with the student's name.
Every week, I sit down with my wife, and we put together a meal plan for the week, then a shopping list for groceries. And every week, I think to myself, ‘this would make a great little notebook.’
If you’re dieting or on a restricted diet, a planner might even be a necessity. So, why not craft one unique to your needs?
Maybe you just took up a new hobby. Or you’ve decided to learn German. Or the trombone.
Whatever skill or hobby you’ve taken up, you can create a custom tracker, planner, and practice calendar. It’s a great way to stay on top of learning that new skill and to document your growing abilities.

Once your project is complete, you’re ready to start selling. Lulu offers three simple paths:
Publish your planner in the Lulu Bookstore and share the link with your audience.
This is the easiest way to start selling immediately. Your fans can buy from a dedicated sales page on Lulu, and you’ll earn 80% of the revenue. We manage all of that for you, paying you out via check or PayPal.
If you want full control over pricing, branding, and customer experience, connect Lulu Direct with:
When you use Lulu Direct, your customers order from your own store, and we handle the printing and shipping—with white label packing slips to keep your brand front and center. You’ll earn all of the revenue from each sale and have better access to your customers’ data for future marketing efforts.
This is the best setup for creators running a planner or notebook business or adding a print-on-demand planner to their existing products.
If you want to offer customized planners—user names, start dates, variable content, custom layouts—the Print API automates everything. For each unique planner, you'll generate a custom PDF using your own development tools. Lulu's API then creates a product SKU for that design and matches it to your chosen size, binding, ink, and paper.
After that, we print and ship for you, offering a fully automated shopping experience for your customers.
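The per-planner automation described above can be sketched in code. This is a hypothetical illustration only: the payload fields, the `pod_package_id` format, and the helper function are assumptions for the sake of example, not Lulu's documented schema—consult the actual Print API documentation before building anything.

```python
# Hypothetical sketch of assembling a print-job payload for one
# personalized planner. Field names and the pod_package_id format
# are illustrative assumptions, not a documented schema.

def build_print_job(pdf_url: str, cover_url: str, quantity: int,
                    shipping_name: str) -> dict:
    """Assemble a print-job payload for a single custom planner."""
    return {
        "line_items": [{
            "interior_source_url": pdf_url,   # the custom PDF you generated
            "cover_source_url": cover_url,    # matching cover file
            # Encodes trim size, binding, ink, and paper (assumed format):
            "pod_package_id": "0600X0900BWSTDPB060UW444MXX",
            "quantity": quantity,
        }],
        "shipping_address": {"name": shipping_name},
        "shipping_level": "MAIL",
    }

# One payload per personalized order, e.g. a planner with the
# customer's name baked into the interior PDF:
job = build_print_job("https://example.com/planner-jane.pdf",
                      "https://example.com/cover-jane.pdf",
                      quantity=1, shipping_name="Jane Doe")
```

In a real integration, your storefront would generate the personalized PDF, POST a payload like this to the print API with your authentication credentials, and let the print-on-demand service handle production and shipping from there.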
The API connection is free to use and great for:
Creating custom projects isn’t just a creative outlet—it’s a business opportunity. Custom planners, journals, and notebooks are a popular trend right now. If you’ve got an audience who might use a planner, journal, or notebook, offering your own is a great way to supplement your income. And it gets your brand out there, potentially leading to new customers.
Journals and planners are popular and unique ways to take advantage of print-on-demand. Along with the relative ease in creating a custom planner, printable planners offer a terrific opportunity to establish additional revenue streams for your publishing (or other) business.
Daily Deal: The Ultimate Oracle, SAP And Salesforce Training Bundle [Techdirt]
The Ultimate Oracle, SAP and Salesforce Training Bundle has 6 courses to help you brush up on your CRM knowledge. Courses cover database programming languages, data analysis, Recovery Manager, and more. It’s on sale for $25.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Trump Administration Stops Fucking Around On Immigration, Hangs Official ‘Whites Only’ Sign [Techdirt]
A recent shooting involving a former Afghan US counter-terrorism asset who worked with the CIA (!!!) has become the tragedy the Trump administration apparently needed to go from "consistently racist" to "openly racist." The wounding of two National Guard troops led directly to the president spending the holiday doing what he always does on holidays: ranting about a bunch of shit rather than just wishing the people he's supposed to be serving a happy Thanksgiving.
Trump went on a multi-day Truth Social bender, beginning with multiple invective-filled posts on Thanksgiving that led to a marathon 158-post (!!!) barrage over a three-hour period starting late Monday (December 1) night.
Trump dropped back-to-back “bangers” on Truth Social, both loaded with bigoted language that made it clear the US — under Trump — is only interested in importing white people.
This post first blamed Biden for some stuff before moving on (within the space of a sentence) to declaring the termination of asylum/visa applications from nations Trump considers to be unworthy of entering the former Land of Opportunity.
I will permanently pause migration from all Third World Countries to allow the U.S. system to fully recover, terminate all of the millions of Biden illegal admissions, including those signed by Sleepy Joe Biden’s Autopen, and remove anyone who is not a net asset to the United States, or is incapable of loving our Country, end all Federal benefits and subsidies to noncitizens of our Country, denaturalize migrants who undermine domestic tranquility, and deport any Foreign National who is a public charge, security risk, or non-compatible with Western Civilization.
It appears the administration will decide which countries fit the “Third World” descriptor on a case-by-bigoted-case basis. This means the ten countries considered too inherently dangerous to be allowed to be part of the migration ecosystem are (lol) in the minority. If you’ve been paying attention, the original list of countries whose residents are forbidden from entering the US contains a lot of countries this administration wants to send deportees like Kilmar Abrego Garcia to, despite much-friendlier countries (Costa Rica, for example) offering to take Garcia off the US government’s hands.
Mr. Trump’s June proclamation imposed a near-total restriction on the entry of people from Afghanistan, Myanmar, Chad, the Republic of the Congo, Equatorial Guinea, Eritrea, Haiti, Iran, Libya, Somalia, Sudan and Yemen.
Another 19 countries have been added to Trump’s list of presumable “shit holes” — a list that includes many of his current infatuations:
The Trump administration has halted all immigration applications filed by people from 19 countries, its latest move to restrict legal immigration pathways following the shooting of two National Guard members in Washington, D.C., last week, according to internal government guidance and a source familiar with the move.
[…]
It also partially suspended the entry of travelers and immigrants from Burundi, Cuba, Laos, Sierra Leone, Togo, Turkmenistan and Venezuela.
The latest proclamation means all 19 of these countries are on Trump’s shit list. The limitations are now effectively a complete ban on migration. And former residents of the listed nations can expect to be deported ASAFP. (“Feasibly.”)
That’s not even the end of it. The day after Trump’s thumb-wrecking Truth Social posting spree, DHS dog-killer-in-chief stepped up to the mic to promise even more racism and pain.
Homeland Security Secretary Kristi Noem is recommending that the Trump administration's travel ban list include between 30 and 32 countries, marking an increase from the current list of 19 countries, according to a source familiar with the matter.
[…]
Noem said Monday that, following a meeting with President Donald Trump, she recommended a “full travel ban” on “every damn country that’s been flooding our nation with killers, leeches, and entitlement junkies.”
Huh. Will the US of A be added to that list, considering it generates plenty of “killers, leeches, and entitlement junkies” on its own? The simple fact is that immigrants commit fewer crimes, pay more than their share of taxes, and generally do everything they can to stand on their own two feet, even when the government insists on depriving them of their bootstraps every time a bunch of bigots seize an inordinate amount of power.
And that leads us back to what’s always been propelling this mass deportation surge: the GOP’s racism, currently embodied by an aged, obese man with bad hair who has never wanted for anything in his life: Donald Trump.
His immediate follow-up (one [1] minute later [!!]) to his pseudo-Thanksgiving well-wishing was this post, which immediately attacked political opponents not just because they opposed him, but because they were not as white as Trump is (current level of spray tan notwithstanding).
I’m going to quote quite a bit of it (for which I kind of apologize) because you have to see all of this for yourself and ABSOLUTELY KNOW this has all been written by a man who currently holds the office of the President of the United States. (All emphasis mine.)
YOUR WALL OF TEXT AWAITS.
The official United States Foreign population stands at 53 million people (Census), most of which are on welfare, from failed nations, or from prisons, mental institutions, gangs, or drug cartels. They and their children are supported through massive payments from Patriotic American Citizens who, because of their beautiful hearts, do not want to openly complain or cause trouble in any way, shape, or form. They put up with what has happened to our Country, but it’s eating them alive to do so! A migrant earning $30,000 with a green card will get roughly $50,000 in yearly benefits for their family. The real migrant population is much higher. This refugee burden is the leading cause of social dysfunction in America, something that did not exist after World War II (Failed schools, high crime, urban decay, overcrowded hospitals, housing shortages, and large deficits, etc.). As an example, hundreds of thousands of refugees from Somalia are completely taking over the once great State of Minnesota. Somalian gangs are roving the streets looking for “prey” as our wonderful people stay locked in their apartments and houses hoping against hope that they will be left alone. The seriously retarded Governor of Minnesota, Tim Walz, does nothing, either through fear, incompetence, or both, while the worst “Congressman/woman” in our Country, Ilhan Omar, always wrapped in her swaddling hijab, and who probably came into the U.S.A. illegally in that you are not allowed to marry your brother, does nothing but hatefully complain about our Country, its Constitution, and how “badly” she is treated, when her place of origin is a decadent, backward, and crime ridden nation, which is essentially not even a country for lack of Government, Military, Police, schools, etc…
Yeah. This is “racist grandpa” shit except that it’s being said by perhaps the most powerful man in the world. There are lies about the costs immigrants create, followed by a bunch of stereotypes, the casual use of the word “retarded” to describe another politician, and the well-past-the-point-of-insinuation claims that Ilhan Omar not only married her brother but comes from a country that shouldn’t even be considered a country.
And if you think that’s the worst thing Trump said about Somalia or Somalians within just the past three days, I have the sort of bad news you knew I’d be delivering when you first started reading this sentence:
President Donald Trump on Tuesday said he did not want Somali immigrants in the U.S., saying residents of the war-ravaged eastern African country are too reliant on U.S. social safety net and add little to the United States.
[…]
“They contribute nothing. I don’t want them in our country,” Trump told reporters near the end of a lengthy Cabinet meeting. He added: “Their country is no good for a reason. Your country stinks and we don’t want them in our country.”
[…]
Trump also renewed his criticism of Omar, whose family fled the civil war in Somalia and spent several years in a refugee camp in Kenya before coming to the U.S.
“We can go one way or the other, and we’re going to go the wrong way, if we keep taking in garbage into our country,” Trump said. “Ilhan Omar is garbage. She’s garbage. Her friends are garbage.”
Man, I can only hope that when the face-eating leopard party really starts stripping faces off the MAGA faithful, their asylum requests will be rejected with the same callous shrugging about how these people are “garbage” who shouldn’t be allowed to enter other countries because the United States “stinks” and their pasty white nationalists “contribute nothing” to the world at large. And I also hope these little Mussolini wannabes the GOP caters to will take a look at history and wonder whether it’s truly worth it to be the worst Americans imaginable just because it plays well with the Nazis.
EU’s Top Court Just Made It Literally Impossible To Run A User-Generated Content Platform Legally [Techdirt]
The Court of Justice of the EU—likely without realizing it—just completely shit the bed and made it effectively impossible to run any website in the entirety of the EU that hosts user-generated content.
Obviously, for decades now, we’ve been talking about issues related to intermediary liability, and what standards are appropriate there. I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.
The EU has always held itself to a lower standard of intermediary liability, first with the E-Commerce Directive and more recently with the Digital Services Act (DSA), which still generally tries to put more liability on the speaker but has some ways of shifting the liability to the platform.
No matter which of those approaches you think is preferable, I don’t think anyone could (or should) favor what the Court of Justice of the EU came down with earlier this week, which is basically “fuck all this shit, if there’s any content at all on your site that includes personal data of someone you may be liable.”
As with so many legal clusterfucks, this one stems from a case with bad facts, which then leads to bad law. You can read the summary as the CJEU puts it:
The applicant in the main proceedings claims that, on 1 August 2018, an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number. The advertisement was subsequently reproduced identically on other websites containing advertising content, where it was posted online with the indication of the original source. When contacted by the applicant in the main proceedings, Russmedia Digital removed the advertisement from its website less than one hour after receiving that request. The same advertisement nevertheless remains available on other websites which have reproduced it.
And, yes, no one is denying that this absolutely sucks for the victim in this case. But if there’s any legal recourse, it seems like it should be on whoever created and posted that fake ad. Instead, the CJEU finds that Russmedia is liable for it, even though they responded within an hour and took down the ad as soon as they found out about it.
The lower courts went back and forth on this, with a Romanian tribunal (on first appeal) finding, properly, that there’s no fucking way Russmedia should be held liable, seeing as it was merely hosting the ad and had nothing to do with its creation:
The Tribunalul Specializat Cluj (Specialised Court, Cluj, Romania) upheld that appeal, holding that the action brought by the applicant in the main proceedings was unfounded, since the advertisement at issue in the main proceedings did not originate from Russmedia, which merely provided a hosting service for that advertisement, without being actively involved in its content. Accordingly, the exemption from liability provided for in Article 14(1)(b) of Law No 365/2002 would be applicable to it. As regards the processing of personal data, that court held that an information society services provider was not required to check the information which it transmits or actively to seek data relating to apparently unlawful activities or information. In that regard, it held that Russmedia could not be criticised for failing to take measures to prevent the online distribution of the defamatory advertisement at issue in the main proceedings, given that it had rapidly removed that advertisement at the request of the applicant in the main proceedings.
With the case sent up to the CJEU, things get totally twisted, as they argue that under the GDPR, the inclusion of “sensitive personal data” in the ad suddenly makes the host a “joint controller” of the data under that law. As a controller of data, the much stricter GDPR rules on data protection now apply, and the more careful calibration of intermediary liability rules get tossed right out the window.
And out the window, right with it, is the ability to have a functioning open internet.
The court basically shreds basic intermediary liability principles here:
In any event, the operator of an online marketplace cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the advertisement at issue published on that marketplace. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.
Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person. But how the fuck are they supposed to handle that?
The basic answer is to pre-scan any user-generated content for anything that might later be deemed to be sensitive personal data and make sure it doesn’t get posted.
How would a platform do that?
¯\_(ツ)_/¯
There is no way that this is even remotely possible for any platform, no matter how large or how small. And it’s even worse than that. As intermediary liability expert Daphne Keller explains:
The Court said the host has to
- pre-check posts (i.e. do general monitoring)
- know who the posting user is (i.e. no anonymous speech)
- try to make sure the posts don’t get copied by third parties (um, like web search engines??)
Basically, all three of those are effectively impossible.
Think about what the court is actually demanding here. Pre-checking posts means full-scale automated surveillance of every piece of content before it goes live—not just scanning for known CSAM hashes or obvious spam, but making subjective legal determinations about what constitutes “sensitive personal data” under the GDPR. Requiring user identification kills anonymity entirely, which is its own massive speech issue. And somehow preventing third parties from copying content? That’s not even a technical problem—it’s a “how do you stop the internet from working like the internet” problem.
Some people have said that this ruling isn’t so bad, because the ruling is about advertisements and because it’s talking about “sensitive personal data.” But it’s difficult to see how either of those things limit this ruling at all.
There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.
As for the “sensitive personal data” part, that makes little difference because sites will have to scan all content before anything is posted to guarantee no “sensitive personal data” is included and then accurately determine what a court might later deem to be such sensitive personal data. That means it’s highly likely that any website that tries to comply under this ruling will block a ton of content on the off chance that maybe that content will be deemed sensitive.
As the court noted:
In accordance with Article 5(1)(a) of the GDPR, personal data are to be processed lawfully, fairly and in a transparent manner in relation to the data subject. Article 5(1)(d) of the GDPR adds that personal data processed must be accurate and, where necessary, kept up to date. Thus, every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay. Article 5(1)(f) of that regulation provides that personal data must be processed in a manner that ensures appropriate security of those data, including protection against unauthorised or unlawful processing.
Good luck figuring out how to do that with third-party content.
And the court is pretty clear that every website must pre-scan every bit of content. The ruling frames this as being about “marketplaces” and “advertisements,” but nothing in the GDPR limits it to those categories:
Accordingly, inasmuch as the operator of an online marketplace, such as the marketplace at issue in the main proceedings, knows or ought to know that, generally, advertisements containing sensitive data in terms of Article 9(1) of the GDPR, are liable to be published by user advertisers on its online marketplace, that operator, as controller in respect of that processing, is obliged, as soon as its service is designed, to implement appropriate technical and organisational measures in order to identify such advertisements before their publication and thus to be in a position to verify whether the sensitive data that they contain are published in compliance with the principles set out in Chapter II of that regulation. Indeed, as is apparent in particular from Article 25(1) of that regulation, the obligation to implement such measures is incumbent on it not only at the time of the processing, but already at the time of the determination of the means of processing and, therefore, even before sensitive data are published on its online marketplace in breach of those principles, that obligation being specifically intended to prevent such breaches.
No more anonymity allowed:
As regards, in the second place, the question whether the operator of an online marketplace, as controller of the sensitive data contained in advertisements published on its website, jointly with the user advertiser, must verify the identity of that user advertiser before the publication, it should be recalled that it follows from a combined reading of Article 9(1) and Article 9(2)(a) of the GDPR that the publication of such data is prohibited, unless the data subject has given his or her explicit consent to the data in question being published on that online marketplace or one of the other exceptions laid down in Article 9(2)(b) to (j) is satisfied, which does not, however, appear to be the case here.
On that basis, while the placing by a data subject of an advertisement containing his or her sensitive data on an online marketplace may constitute explicit consent, within the meaning of Article 9(2)(a) of the GDPR, such consent is lacking where that advertisement is placed by a third party, unless that party can demonstrate that the data subject has given his or her explicit consent to the publication of that advertisement on the online marketplace in question. Consequently, in order to be able to ensure, and to be able to demonstrate, that the requirements laid down in Article 9(2)(a) of the GDPR are complied with, the operator of the marketplace is required to verify, prior to the publication of such an advertisement, whether the user advertiser preparing to place the advertisement is the person whose sensitive data appear in that advertisement, which presupposes that the identity of that user advertiser is collected.
Finally, as Keller noted above, the CJEU seems to think it’s possible to require platforms to make sure content is never displayed on any other platform as well:
Thus, where sensitive data are published online, the controller is required, under Article 32 of the GDPR, to take all technical and organisational measures to ensure a level of security apt to effectively prevent the occurrence of a loss of control over those data.
To that end, the data controller must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content.
Again, the CJEU appears to be living in a fantasy land that doesn’t exist.
This is what happens when you over-index on the idea that “data controllers” must keep data “private.” The liability should fall on whoever revealed the sensitive data in the first place; placing it on the intermediary is misplaced and ridiculous.
There is simply no way to comply with the law under this ruling.
In such a world, the only options are to ignore it, shut down EU operations, or geoblock the EU entirely. I assume most platforms will simply ignore it—and hope that enforcement will be selective enough that they won’t face the full force of this ruling. But that’s a hell of a way to run the internet, where companies just cross their fingers and hope they don’t get picked for an enforcement action that could destroy them.
There’s a reason the basic simplicity of Section 230 makes sense. It says “the person who creates the content that violates the law is responsible for it.” As soon as you open things up so that the companies providing the tools used to create that content can also be held liable, you open a can of worms that will create a huge mess in the long run.
That long run has arrived in the EU, and with it, quite the mess.
Kanji of the Day: 益 [Kanji of the Day]
益
✍10
小5
benefit, gain, profit, advantage
エキ ヤク
ま.す
利益 (りえき) — profit
収益 (しゅうえき) — earnings
営業利益 (えいぎょうりえき) — operating profit
損益 (そんえき) — profit and loss
不利益 (ふりえき) — disadvantage
売却益 (ばいきゃくえき) — profit on sales
公益法人 (こうえきほうじん) — public-service corporation
国益 (こくえき) — national interest
有益 (ゆうえき) — beneficial
公益 (こうえき) — public interest
Generated with kanjioftheday by Douglas Perkins.
Kanji of the Day: 貫 [Kanji of the Day]
貫
✍11
中学
pierce, 8 1/3 lbs, penetrate, brace
カン
つらぬ.く ぬ.く ぬき
一貫して (いっかんして) — consistently
貫く (つらぬく) — to go through
中高一貫校 (ちゅうこういっかんこう) — combined junior high and high school
一貫 (いっかん) — consistency
貫禄 (かんろく) — presence
一貫性 (いっかんせい) — consistency
貫通 (かんつう) — passing through (of a tunnel, bullet, etc.)
裸一貫 (はだかいっかん) — having nothing except one's body
貫入 (かんにゅう) — penetration
貫き通す (つらぬきとおす) — to go through
Generated with kanjioftheday by Douglas Perkins.
Simple and obvious… or nuanced and complicated? [Seth Godin's Blog on marketing, tribes and respect]

Some choices seem obvious, while others demand care and insight.
And some offerings are simple, while others have depth and multiple variables.
As you’ve probably guessed, the choices that are simple and obvious tend to do best in the mass market.
Where did you get your cup of coffee this morning? Did you visit a drive-through Dutch Bros, or did you use a lever machine at home to pull a shot from beans you roasted yourself?
Most successful politicians and movements start in the bottom left and work their way toward simple and obvious.
Successful social media platforms race to the top right hand corner, but the most interesting and generative content online is probably not there…
Choose your quadrants carefully.