News

Thursday 2026-05-07

03:00 PM

Nintendo Shuts Down Fun Faux ‘Pokemon Documentary’ YouTuber Via Copyright Strikes [Techdirt]

We all should know by now that Nintendo is incredibly protective of its IP. When it comes to anything having to do with Pokémon specifically, all the more so. While the company would tell you it’s just protecting its IP, the end result is that some of the biggest Pokémon fans out there, fans who just want to do some fun things that pose no harm to Nintendo, get shut down by threats, lawyers, or copyright strikes.

Take the YouTube series called PokeNational Geographic, for instance. While this YouTube series has been pushing out faux nature documentary videos about Pokémon for several years, the channel behind it just got hit with a bunch of copyright strikes from Nintendo.

In a video posted to an alternate channel, Elious says that Nintendo of America suddenly issued numerous strikes on large batches of his videos, all in the space of 12 hours. At the time he posted the video, a total of 20 videos had been caught up in four separate copyright strikes which encompass the entirety of the videos. With YouTube’s three-strikes policy, this means his channel is now pending deletion by YouTube and will disappear in seven days.

Elious says the strikes claim his channel is inappropriately using “content used in Pokémon video games including audiovisual works, characters, and imagery.” Elious’ videos consist of original 3D animation of various Pokémon in the “wild,” with a David Attenborough–style narration sharing various facts about Pokémon like Magikarp, Squirtle, Magnemite, Snom, Mew, Charizard, and more. He has been producing these videos on this channel since as far back as 2023 without issue, and claims in his video that the only actual content he took directly from the games was “tiny sprite roars” that last less than three seconds, adding that numerous other Pokémon creators on YouTube, as well as AI-produced channels mimicking his own, use images or footage directly from the games with no issue.

So, why now? There’s no way to know for sure, but Elious did recently launch a Patreon account so that fans could compensate him for the series. The general speculation is that Elious attempting to make any kind of money from his video series is what spurred Nintendo to send the copyright strikes. And for many people, that will make complete sense.

I don’t understand that point of view. Regardless of any money changing hands, this still doesn’t represent any threat or harm to Nintendo or the Pokémon franchise. If anything, fun little fan videos like this only propel interest in the product. They represent free engagement lures for fans of Pokémon. Why in the world is copyright striking this channel to hell a better option than working out a free or cheap licensing arrangement with Elious so that they can keep producing the series and Nintendo can reap some of the benefit?

Or, hell, Nintendo could have tried to have a conversation with Elious, at least.

Elious continues by saying that he isn’t opposed to just deleting all the Pokémon videos if Nintendo of America asks, but he wishes he could keep his nearly 100,000 subscribers so he can keep making videos of other things, as he has on the channel in the past.

“I can’t really fight this,” Elious says. “It all seems legitimate, it does seem to come from the actual, real Nintendo of America. I can’t fight this. I don’t…I don’t know what to do about it because it’ll remove everything. I’m downloading stuff, of course, I have like, all the videos myself. But I’ll never be able to post them again, and I’ll never be able to use this channel again. Almost 100,000 subscribers over three years of making these animations and it’s all going to be gone in seven days.”

It’s simply too bad that Nintendo would rather worship at the altar of intellectual property than get creative with how it can support its fans. Thanks to IP maximalist thought, here is just a little more fun that Nintendo has flushed down the toilet.

Kanji of the Day: 芽 [Kanji of the Day]

✍8

小4

bud, sprout, spear, germ

芽生え   (めばえ)   —   bud
発芽   (はつが)   —   germination
芽キャベツ   (めキャベツ)   —   Brussels sprouts (Brassica oleracea var. gemmifera)
新芽   (しんめ)   —   sprout
芽生える   (めばえる)   —   to bud
麦芽   (ばくが)   —   malt
花芽   (かが)   —   flower bud
萌芽   (ほうが)   —   germination
木の芽   (きのめ)   —   leaf bud
芽を摘む   (めをつむ)   —   to nip (something) in the bud

Generated with kanjioftheday by Douglas Perkins.

Kanji of the Day: 粧 [Kanji of the Day]

✍12

中学

cosmetics, adorn (one's person)

ショウ

化粧   (けしょう)   —   make-up
化粧品   (けしょうひん)   —   cosmetics
化粧水   (けしょうすい)   —   skin lotion
雪化粧   (ゆきげしょう)   —   coating of snow
粧す   (めかす)   —   to adorn oneself
化粧直し   (けしょうなおし)   —   adjusting one's makeup
薄化粧   (うすげしょう)   —   light makeup
化粧下   (けしょうした)   —   make-up base
厚化粧   (あつげしょう)   —   thick makeup
化粧室   (けしょうしつ)   —   toilet

Generated with kanjioftheday by Douglas Perkins.

02:00 PM

Moving Day! [The Status Kuo]

I’m taking the day off to move the family to Kingston! The morning started off spectacularly with Riley dumping her brother Ronan’s powdered formula all over herself. Fun!

But we got everything packed up, and now we’re awaiting the moving vans. And I think the kids like their new room!

For now, lots more room to tumble with their new “big sister” Lia!

I have hours of unpacking work ahead so this short break is very welcome.

I hope to be back writing tomorrow morning. Thanks for all your support and words of encouragement! This has been quite the project but we’re finally moved in…

Jay

08:00 AM

3 Days of Fun with Tor [Tor Project blog]

After organizing a successful first community gathering last year in Denmark, we were eager to find out: Could we get another productive, community-organized meeting off the ground, taking into account what we've learned so far?

Preparations

We decided to do the next community gathering organized by us at the same location we used last year: Hylkedam, in Denmark. We knew it worked well, was sufficiently cheap, and we could likely cut down the overall planning overhead given our past experience there. And, indeed, planning was minimal, re-using much of the "playbook" we developed for our first meeting last year. We spent most of our preparation time on revamping our meeting website. We have a shiny new onionized space now, including a public mailing list!

3 days of fun with Tor

We gathered on the weekend of March 13 - 15 at Hylkedam. Overall we were a slightly smaller group this time (around 12) but, on the plus side, we had participants with backgrounds not represented at our first Tor community gathering: we got the research angle covered this time (with a focus on anti-censorship) and had people from the Reproducible Builds project attending. The latter allowed us to think about potentially doing community gatherings together, which would make collaboration and the sharing of ideas between our projects easier. Talking about research, on the other hand, was very inspiring, as we could see what is currently happening in the research world and help shape particular project plans by explaining related tools and already existing projects and needs within the Tor ecosystem.

Apart from these new contributions, we were also happy to see that various work started at the previous gathering got picked up and pushed forward again, showing the overall commitment of the volunteers in our community. Notably, we saw further improvements to the network social graph proof-of-concept project and the relay operator Grafana dashboard. We also continued the discussion around consensus-transparency.

We had the usual structure during our meeting days, following the established cycle of: opening session -> structured sessions -> unstructured sessions -> closing circle, which, again, worked pretty well. Unstructured sessions included general free hacking time and room to get together ad hoc to think through or work on a topic that came up during the earlier structured sessions or is just not ready for "prime time" yet. We think that this time without a moderator and a clear session time limit is an essential part of making the whole meeting productive, as it gives participants the freedom to work on whatever they are interested in and might get excited about.

For the structured sessions we made sure we had note takers again, so anyone unable to attend can get up to speed afterwards. We again had a range of topics, from anti-censorship sessions to an update on upcoming changes for relay operators and a session dedicated to how the community can organize itself, so that we have similar gatherings or an 'onion festival' in the future. Check out the session notes on our website if you are interested!

What's next?

We plan to have more Tor community meetings in the future. As mentioned above, they don't have to be at Hylkedam (we'd like to see other venues as well!), nor does it have to be the same group of people sharing the organization workload. So, if you are excited about what you read in this blog post and are experiencing a serious case of FOMO, or want to help organize future community gatherings, get in touch! Our mailing list is a good starting point for that.

The same goes for providing feedback about this format and how we can make such events more inviting and inclusive in the future. Want to be invited, too? Let us know as well!

Trump’s Anti-Migration Purge Is Breaking Up Military Families, Screwing Afghan Allies [Techdirt]

The content of their character was never up for consideration. Under Donald Trump, the only thing that matters is the color of their skin. That’s why almost every single person granted asylum since Trump took office has been white. That’s why Trump has been asking (out loud!) why we keep getting migrants from “shithole” countries (like those located in South America, Africa, and Latin America) rather than blond-haired, blue-eyed expats from Scandinavian countries whose residents’ lives would become noticeably worse if they chose to move to the US.

The president wraps himself in the flag, delivers a lot of garbled Team USA jingoism, and routinely proclaims we have the best military in the world. But even the people most directly responsible for keeping the US on top of the military game aren’t allowed to remain here if they’re not white.

Jose Serrano, an active duty soldier who served three tours in Afghanistan, said immigration agents arrested his wife April 14 as they attended an appointment with immigration services to take steps toward her permanent residency.

“A person opened the door, escorted us through the hallway, and at the end of the hallway, my wife got arrested,” Serrano said. “Arrested without any order, any warrant … They took away my wife. They don’t tell me anything.”

On top of all this awfulness, this incident shows ICE isn’t actually shifting away from immigration court arrests despite (1) officials saying otherwise, and (2) more importantly, ICE itself supposedly letting officers know that court arrests like these are not allowed under current ICE policy.

The regular awfulness is this: the Trump administration is willing to attack its own military if it means racking up a few more arrests and deportations:

[L]ast April, DHS eliminated a 2022 policy that considered military service of an immediate family member to be a “significant mitigating factor” in deciding whether or not to pursue immigration enforcement. The administration’s new policy states that “military service alone does not exempt aliens from the consequences of violating U.S. immigration laws.”

It’s not just this nation’s relationship with its own military that’s being permanently damaged by Trump’s bigoted war on non-white people. It’s also any future relationships we might have in countries where we’re engaged in combat. When the US began its full withdrawal from Afghanistan, it promised protections to Afghans who worked with the military to provide intelligence or otherwise aided the US in the decades-long war.

That’s all being tossed aside by Trump because he and his administration simply don’t like non-white people.

After halting a U.S. resettlement program for Afghans who helped the American war effort, President Trump is in talks to send as many as 1,100 of them to the Democratic Republic of Congo, an aid worker briefed on the plan said Tuesday.

The group includes interpreters for the U.S. military, former members of the Afghan Special Operations forces and family members of American service members. More than 400 children are among them.

The Afghans have been living in limbo in Qatar for over a year. They were taken there after being evacuated by the United States for their own safety because they supported American forces during the war against the Taliban that began in 2001.

Thanks for your help. Now, go fuck yourselves. That’s the message the US is sending to people who aided the US during this war. It’s the kind of message that isn’t likely to score it any allies as it resumes hostilities in the Middle East.

This report says Trump is “in talks” with DRC to pursue this “resettlement” of Afghan allies — one the administration pursues despite the protests of the people who risked their own lives to assist the US during the Afghanistan war.

It’s hard to believe Trump is actually engaged in anything. DRC already has a refugee problem of its own.

More than 600,000 refugees, mostly from the Central African Republic and Rwanda, are currently in Congo, according to the United Nations. Human rights activists say that the country is not equipped to take in more in the midst of fighting with neighboring Rwanda that has displaced even more people because of attacks on refugee camps.

On top of this, many Afghan allies already have family members living in the United States due to previous efforts made by the Biden administration to protect those who aided the US. This forced resettlement in, well, pretty much any African country that agrees to take them divides even more families. It also demonstrates the United States is not to be trusted when it offers favors in return for assistance. All it takes is an election cycle to roll back guarantees and turn trusted allies into just another set of people being moved from “shithole country” to “shithole country” by a bunch of bigots who would rather destroy America than allow any more non-white people to become residents of what used to be the world’s “melting pot.”

At least for now, Trump has seemingly found a willing dumping ground for people he doesn’t want in this country:

On April 17, the U.S. government deported 15 people to the capital of the Democratic Republic of Congo, a deeply impoverished African country that’s been scarred by years of conflict.

The group—comprising men and women from Colombia, Ecuador and Peru—is the first to arrive as part of a secretive migration deal brokered with the Trump administration.

“They took us, they put us on a plane, and they chained us by our hands and feet,” said one Colombian man, sitting on a plastic chair in a shabby hotel near Kinshasa’s airport. The deportees didn’t know their final destination until they were on the plane, he added.

Like El Salvador, I’m sure the DRC is more than happy to take our money to take some people off our hands. And like El Salvador, I’m sure the DRC government doesn’t actually care what happens to any of these people being shoved out of DHS charter flights like so much human refuse. If the US can’t be bothered to care, why should some third party in a developing nation do anything more than allow planes to land so long as the checks keep clearing?

This is what America is now: a place where human rights, civil liberties, and basic human morality are no longer woven into the fabric of the nation. America is no longer the world’s policeman. It is now the world’s corrupt, racist sheriff.

Matt Taibbi Loses His Vexatious SLAPP Suit As Judge Explains What A ‘Metaphor’ Means [Techdirt]

Perhaps Matt Taibbi’s most famous bit of writing ever was his takedown of Goldman Sachs in Rolling Stone (and then in a book that followed) that opened with the highly evocative metaphor:

The world’s most powerful investment bank is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money.

Even now, if you ask anyone about Taibbi’s writing, the phrase “great vampire squid”* is probably the most likely response.

* For what it’s worth, contrary to what you might think given the name, vampire squids are (1) not actually squids, (2) not bloodsucking (they’re actually described as gentle scavengers), and (3) pretty small.

So, a question: how do you think that Matt Taibbi (who claims to be a giant free speech supporter) would react if Goldman Sachs had sued him back then claiming that they were not, literally, a cephalopod?

I think he would have been rightly outraged at the abuse of the courts to attack his free speech for his use of a metaphor.

So it was pretty shocking back in January when Taibbi sued author Eoin Higgins over his (excellent) book, Owned: How Tech Billionaires on the Right Bought the Loudest Voices on the Left. The crux of Taibbi’s argument was that he wasn’t literally “owned” by billionaires, and thus it was defamatory:

The Book’s title and subtitle “Owned: How Tech Billionaires on the Right Bought the Loudest Voices on the Left” falsely state that Plaintiff was “owned” and “bought” by billionaires.

Even more ridiculously, Taibbi took to the pages of Bari Weiss and David Ellison’s The Free Press to claim that he was suing a journalist for his reporting “to protect free speech.”

Yeah, sure man, whatever you have to tell yourself to sleep at night.

But, no, vexatious SLAPP suits don’t protect free speech; they do the exact opposite. Higgins wrote a thorough and sharp critique of how a bunch of people, like Taibbi, who had been formerly associated with left-leaning views, seemed in recent years to have drifted sharply rightward — frequently with the financial and institutional backing of right-wing tech billionaires.

Taibbi’s lawsuit was weak from the start, repeatedly insisting that obviously metaphorical statements were defamatory because he wasn’t literally “owned” or that he didn’t make that much money by cozying up to Elon Musk with his ridiculously misleading Twitter Files. Even Taibbi’s amended complaint was laughably bad, whining that because he took no direct payments or “financial inducement” from Elon Musk, it was unfair to associate him with Elon Musk. This despite Taibbi getting the first exclusive batch of internal Twitter documents, which he did discuss on Twitter (this is pre-X) but absolutely used to burnish his own reputation and that of his Substack newsletter.

Thankfully, Higgins and his publisher, Bold Type Books (a Hachette imprint) had strong representation: Elizabeth McNamara and Leena Charlton from Davis Wright Tremaine — McNamara in particular is well known in media and First Amendment circles as one of the best in the business — and the court has issued a pretty quick and pretty thorough dismissal of the case.

Over and over again, the judge, George B. Daniels, patiently explains to Taibbi that metaphors and opinion are not defamatory. Which, you know, is the kind of thing you’d hope a famous writer like Taibbi would have understood already. Alas.

The Book’s Cover and Jacket

None of the statements Plaintiff identifies on the Book’s cover and jacket, standing alone, are actionable. Statements 1 and 2, the words “Owned” and “Bought” on the Book’s front cover, are susceptible to both literal and metaphorical meanings depending on the surrounding context. Plaintiff acknowledges, however, that the contents of the Book cannot support a literal reading, stating that the “[t]he Book contains no evidence of any financial transaction, payment, contract, or quid pro quo involving Plaintiff.” (Opp. at 4.) In this context, “Owned” and “Bought” naturally read as attention-grabbing rhetoric used to signify Higgin’s opinions and the Book’s conclusions. Aside from the scattered words and phrases discussed below, Plaintiff does not dispute the accuracy of the vast majority of the Book’s factual content that informs these views or point to language suggesting the opinions are based on facts other than those disclosed in the book. See Levin v. McPhee, 119 F.3d 189, 197 (2d Cir. 1997) (noting that “hypothesis or conjecture… may yet be actionable if they imply that the speaker’s opinion is based on the speaker’s knowledge of facts that are not disclosed to the reader”). Plaintiff may not like Higgins’s subjective conclusions, or agree with their accuracy, but that does not make them actionable defamation.

And for all of Taibbi’s “but Elon didn’t give me any money!” whining, that doesn’t matter. That’s not how defamation law works. Because if it did work that way lots of journalists wouldn’t be able to report on anything, for fear of vexatious SLAPP suits like the one Taibbi filed. As the judge explains:

Statement 3, that Plaintiff was in “the snug patronage of billionaires,” is also a nonactionable opinion. Just like “Owned” and “Bought,” the language “snug patronage” does not have a readily understood precise meaning, so there is no way for a reader to determine whether the statement is true or false. The statement also appears as a reviewer comment on the back cover under the heading “Praise for Owned.” From this context, a reader would likely intuit this statement as an opinion of the reviewer, supported by the facts disclosed in the Book, and not a statement of fact about Plaintiff. See Hammer v. Amazon.com, 392 F. Supp. 2d 423, 431 (E.D.N.Y. 2005) (“[T]he average person understands that [book reviews] are the reviewer’s interpretation and not ‘objectively verifiable’ false statements of facts.” (quoting Hammer v. Trendl, No. CV 02- 2462 (ADS), 2003 WL 21466686, at *3 (E.D.N.Y. Jan. 18, 2003)).

Rhetorical statements and opinions cannot be defamatory. Just like calling Goldman Sachs a vampire squid couldn’t be. Just like saying you’re someone’s “crony.” Incredibly, there was even an earlier ruling in the very same district specifically on whether or not calling someone a crony was defamatory. A good lawyer would have known that before suing over the word “crony.”

Statement 4 is a passage from the Book’s left flap that states that Plaintiff was one of the right-wing technology billionaires’ “cronies.” (Am. Compl. ¶ 20.) Courts in this district have previously held that calling someone a “crony,” without more, is nonactionable rhetorical hyperbole. See Cassava Scis., Inc. v. Heilbut, 2024 WL 553806, at *5 (S.D.N.Y. Jan. 5, 2024), report and recommendation adopted sub nom. Cassava Scis., Inc. v. Bredt, 2024 WL 1347362 (S.D.N.Y. Mar. 28, 2024) (holding that a presentation which labeled individuals as “cronies” was nonactionable opinion); cf. Biro, 883 F. Supp. 2d at 463 (“[T]he use of the terms ‘shyster,’ ‘con man,’ and finding an ‘easy mark’ is the type of ‘rhetorical hyperbole’ and ‘imaginative expression’ that is typically understood as a statement of opinion.”) (internal citation omitted). The same is true here. The assertion that Plaintiff is a billionaire’s crony is the sort of excessive, unverifiable language that signals to a reasonable reader that they are reading the speaker’s opinion, and not a statement of fact.

Also a fail: claiming that more general statements not directly about Taibbi could be defamatory about Taibbi. In this case, Taibbi claimed that the book flap’s line about how the book “follows the money, names names” is somehow defamatory to Taibbi, despite not being directly about him. Again, making claims about general statements like that is a hallmark of vexatious, speech-suppressing SLAPP suits. As the judge notes:

Statement 5 also appears on the left flap and states that the Book “follows the money, names names,” and is a “biting expose of journalistic greed.” (Am. Compl. ¶¶ 24-25.) Plaintiff alleges that “follows the money” and “names names” “represents to readers that the author has traced actual financial relationships and identified specific recipients of improper payments or patronage.” (Id. ¶ 24.) “In New York, a plaintiff cannot sustain a libel claim if the allegedly defamatory statement is not ‘of and concerning’ plaintiff but rather only speaks about a group of which the plaintiff is a member.” Chau, 771 F.3d at 129 (internal citation omitted). Statement 5 does not indicate that it is “of and concerning” Plaintiff; it describes Higgins’s investigative process for all the Book’s subjects, not only Plaintiff. A reasonable reader would, therefore, not interpret “follows the money” and “names names” as a false statement of fact about Plaintiff.

It’s also not defamatory (and obviously opinion) to call someone “greedy.” You would think that the author of a supposed exposé on Goldman Freaking Sachs would know that. Alas. The judge has to explain it to Taibbi.

Statement 6 states that the Book is an “exposé of journalistic greed,” which Plaintiff alleges “asserts professional dishonesty and unethical conduct.” (Id. ¶ 25.) But whether someone is motivated out of greed or ambition is a subjective determination that is not capable of being proven true or false. See Rosa v. Eaton, No. 23 CIV. 6087 (DEH), 2024 WL 3161853 (S.D.N.Y. June 25, 2024) (“[C]ourts have recognized that words like… ‘greedy crooks’ are vague, imprecise statements of hyperbole considered nonactionable opinion.”) Further, the context surrounding the statement, including its placement on the left flap of the Book’s cover, clearly implies that the facts on which this opinion is based can be found within the Book. Cf. Graham v. UMG Recordings, Inc., 806 F. Supp. 3d 454 (S.D.N.Y. 2025) (holding that an album’s cover art shares the same overall context as the recording itself because the cover is “designed to reinforce the message of the [recording].” (internal citation and quotation marks omitted)).

As a kind of SLAPP Hail Mary, Taibbi’s lawyer argued that even if all of these statements were protected opinion, you could still claim defamation on the theory of “yeah, but if you lump them all together, people might jump to false and defamatory conclusions,” and the judge has to explain that, for that to be the case, you have to actually show that the statements are really intended to convey such a defamatory meaning. And Taibbi’s lawyer couldn’t do that. Because it does not appear to be true.

Plaintiff acknowledges that these statements “might be protected opinion standing alone.” (Opp. at 11.) But he claims that when viewed together, the statements on the Book’s cover and jacket “become implied factual assertions that the accused was actually paid.” (Id. at 12.) Plaintiff is correct that otherwise nonactionable statements may create “false suggestions, impressions, and implications,” and that these false implications can serve as the basis of a defamation claim. See Armstrong v. Simon & Schuster, 85 N.Y.2d 373, 380-81 (1995). But plaintiffs alleging defamation by implication must “make a rigorous showing that the language of the communication as a whole can be reasonably read both to impart a defamatory inference and to affirmatively suggest that the author intended or endorsed that inference.” Stepanov v. Dow Jones & Co., 987 N.Y.S.2d 37, 44 (N.Y. App. Div. 2014) (emphasis added).

Even assuming that Plaintiff has affirmatively alleged a defamation by implication claim, despite not labeling his sole cause of action as such, Plaintiff has failed to allege facts showing that Defendants intended or endorsed the defamatory inference. As stated above, Plaintiff admits that “the Book contains no evidence whatsoever that Plaintiff received payments, sponsorship, or financial inducement from Elon Musk or any other billionaire.” (Am. Compl. ¶ 29.) Instead of endorsing the alleged defamatory implication, the Book argues that Plaintiff’s central reason for agreeing to participate in the Twitter Files was to “gain access.” Higgins, supra at 182. Plaintiff also claims that Higgins “admitted contemporaneously that readers expecting proof of who was ‘bought’ would be disappointed.” (Am. Compl. ¶ 62.) In short, the Book’s contents and Higgins’ contemporaneous statements distance the Book from the defamatory implication Plaintiff alleges. See Henry v. Fox News Network LLC, 629 F.Supp.3d 136, 150 (S.D.N.Y. 2022) (finding that a corporate statement did not endorse a defamatory implication because the statement intentionally included “nebulous” phrasing). Without any additional facts pointing to Defendants’ intent, Plaintiff’s defamation by implication claim fails.

There’s more. Taibbi sued Higgins over the phrase “cash in,” but the judge points out that it doesn’t need to literally mean getting cash:

This context makes clear that the Book’s reference to “cash in” is not referring to literal money, but rather the idea that Plaintiff traded his reputation for access to the Twitter Files. This sort of loose, figurative language would naturally lead a reasonable reader to interpret this as a statement of opinion.

Hilariously, Taibbi had tried to argue that Higgins’ claim that Taibbi got a bunch of new Substack followers because of the Twitter Files was defamatory, but Taibbi’s lawyer had to admit during oral arguments that “getting a bunch of new Substack subscribers” is not the kind of statement that injures your reputation. Oh, and also, it turned out to be true.

Similarly, statement 8 is a nonactionable subjective determination. Statement 8 claims that Plaintiff’s Substack “gained thousands of subscriptions” following his work on the Twitter Files, which translated to a “financial windfall.” But as Plaintiff’s counsel acknowledged during oral argument, this statement, “in the abstract,” is not defamatory because it does not tend to injure Plaintiff’s reputation. Oral Arg. Tr. at 44:13-17; see also Chau, 771 F.3d at 127 (“To be actionable … the statement must do more than cause discomfort or affront; the statement is measured not by the sensitivities of the maligned, but the critique of reasonable minds that would think the speech attributes odious or despicable characterizations to its subject.”) And even if one could read a defamatory meaning into these words, Plaintiff admits that he did in fact gain thousands of Substack subscribers following the Twitter Files reporting. (See Am. Compl. ¶¶ 38-39 (“The ‘thousands’ of new subscribers Owned claims Plaintiff gained after publication represented only a small percentage of Plaintiff’s overall readership.”) Whether this “small percentage” of increased subscribers represented a “financial windfall” is a subjective determination.

In other words, the entire case was a garbage, vexatious attack on Higgins’ own speech — and should put to rest forever the idea that Taibbi was ever a true supporter of free speech. He spent years falsely implying that protected speech activities of private companies were an attack on free speech, and now he’s moved on to actually attacking the free speech of others — abusing the power of the courts to cost them time, money, and attention to fight off a vexatious lawsuit.

Honestly, it seems that, if anything, the small, cuddly vampire squid would likely have a stronger case against Taibbi than Taibbi had against Higgins.

06:00 AM

Voter Suppression In South Dakota Is Well Underway, Even Without SCOTUS’s Help [Techdirt]

It may be almost impossible to devolve this country into a nation of slaveholders, but the Trump administration and all of its MAGA buddies are working hard to make sure a white person’s vote counts more than a vote cast by anyone else.

These bigots recently got an assist from the Supreme Court, which decided minorities can have their votes rendered meaningless so long as the people doing the gerrymandering don’t actually say the quiet part out loud. Redistricting for the sole purpose of excluding as many non-whites as possible is perfectly legal if politicians never affirmatively state that the only reason they’re doing this is to make sure minorities can’t vote against their racist asses.

This is all part of what the state of South Dakota is doing now. Governor Larry Rhoden was never elected to his post. He was elevated after Kristi Noem was selected to head the DHS by Donald Trump. (Since she’s about as unemployed as any Trump appointee ever gets, I’m sure she wishes she was back running the state of South Dakota… into the ground.) His most recent brush with the electoral process saw him losing handily to Mike Rounds in the 2014 Senate race.

Rhoden actually needs to win an election if he wishes to remain South Dakota’s governor. And all the MAGA fellatio in the world doesn’t mean much when plenty of other MAGA acolytes are running against him.

So, there’s a mixture of things going on here. There’s Rhoden’s (and the state GOP’s) desire to engage with Trump’s election conspiracies — ones that claim (with zero facts in evidence) that a whole lot of undocumented immigrants are voting in state and local elections.

There’s also a nationwide attempt to deter voting by mail, because these votes more often side with the other team.

In response to completely made-up problems, the GOP passed a bill that Rhoden signed that says state residents must prove their citizenship to engage in local elections. If they can’t, they’re only allowed to participate in federal elections.

According to Rhoden and other GOP alarmists, that’s because too many people who aren’t citizens were granted permission to vote, thanks to what was likely nothing more than a clerical error. South Dakota may be a small state in terms of population (~950,000 residents as of 2025), but the “problem” this vaguely written law supposedly addressed was even smaller.

Soulek said only one of the 273 noncitizens had ever cast a ballot. That was during the 2016 general election.

Those are the words of Director of Elections Rachel Soulek, who works out of the Secretary of State’s office. The Secretary of State blamed this on clerical errors by the Department of Public Safety. The DPS provided the data that Governor Rhoden claims is evidence of widespread election fraud by non-citizens.

One illegal ballot. And that was likely an honest misunderstanding, rather than the criminal intent Rhoden and his GOP buddies want to pretend it is.

But the law is on the books. Citizenship must be demonstrated to participate in state and local elections. The problem is that no one running these elections seems to agree on what is or isn’t acceptable proof of citizenship.

Hughes County Finance Officer Thomas Oliva, who acts as that county’s auditor, said his office is requiring new voters to show the physical driver’s license.

“The main reasoning behind that is because it’s the back of the license. There’s no other identifying information on the back we can tie back to that person, so we felt it’s in the best interest to see the physical card,” Oliva told News Watch.

Haakon County Auditor Stacy Pinney said she has not run into any issues yet with voter registration but also will require new applicants to physically show the driver’s license.

“I’m going to make it a policy in my office that I want to see the actual card. If I have to verify it, I want to see the real deal,” Pinney told News Watch.

Meanwhile, Harding County Auditor Kathy Glines said her office will accept a photocopy of the driver’s license.

“They would have to send a front and back,” Glines told News Watch.

“I hope they would call before sending it by mail,” she added, referring to the limited hours the office is open.

Everyone appears to be making up their own rules because the law — and the Secretary of State’s office — are being deliberately vague about these requirements, especially in relation to absentee voting. And many people in the state may not know that the law only applies to people who have registered to vote after July of last year, so lots of people are going to be presenting IDs to precinct staffers even if they’re not legally required to do so.

This all adds up to exactly what Governor Rhoden and the GOP want: confusion over who is or isn’t allowed to vote, blended with another law passed by Rhoden that allows pretty much anyone to challenge someone else’s eligibility to vote.

The state could offer much-needed clarification. But it won’t.

As early and absentee voting for the primary election gets underway, Scott-Stoltz hopes officials in Pierre can provide more certainty on the registration process for new voters.

“We’re hoping for more clarification from the secretary’s office before the primary and are looking forward to working with the election board,” she said.

The secretary of state’s office didn’t respond to a request for comment by News Watch.

That’s a feature, not a bug. Those in power definitely prefer incumbent voters over new ones, much like incumbent voters prefer incumbents. They want to keep the jobs they have, rather than allow new voters to upset the incumbent apple cart. They all pretend they love the democratic system, but when it’s time to latch onto another 2-4 years in power, they work together to reduce the electorate to the votes they can count on.

05:00 AM

Daily Deal: The Ultimate Microsoft Office Professional 2021 for Windows License + Windows 11 Pro Bundle [Techdirt]

Microsoft Office 2021 Professional is the perfect choice for any professional who needs to handle data and documents. It comes with many new features that will make you more productive in every stage of development, whether it’s processing paperwork or creating presentations from scratch – whatever your needs are. Office Pro comes with MS Word, Excel, PowerPoint, Outlook, Teams, OneNote, Publisher, and Access. Microsoft Windows 11 Pro is exactly that. This operating system is designed with the modern professional in mind. Whether you are a developer who needs a secure platform, an artist seeking a seamless experience, or an entrepreneur needing to stay connected effortlessly, Windows 11 Pro is your solution. The Ultimate Microsoft Office Professional 2021 for Windows + Windows 11 Pro Bundle is on sale for $34.97 for a limited time.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

More Liability Will Make AI Chatbots Worse At Preventing Suicide [Techdirt]

California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.

If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.

Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument may seem counterintuitive to many: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards. That is, until you actually think through how the current liability regime shapes behavior — as well as reflect on what we know about Section 230’s liability regime in a different context.

First, though, the empirical reality that rarely makes it into the moral panic coverage. People are using AI chatbots for mental health support at massive scale, and a lot of them say it’s helping:

A small number of tragic stories have spurred lawmakers into regulating how chatbots should help people who are dealing with mental health issues. Yet chatbots have emerged as first aid for people experiencing mental health issues, providing genuine benefit to those who aren’t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.

Over a million people are using general-purpose chatbots for emotional and mental health support per week. In the US, those that use chatbots in this way primarily seek help with anxiety, depression, relationship problems, or for other personal advice. As conversational systems, chatbots can sustain coherent exchanges while conveying apparent empathy and emotional understanding. Many chatbots also draw on broad knowledge of psychological concepts and therapeutic approaches, offering users coping strategies, psychoeducation, and a space to process difficult experiences.

In a study of more than 1,000 users of Replika — a general-purpose chatbot with some cognitive behavioral therapy-informed features — most described the chatbot as a friend or confidant. Many reported positive life changes, and 30 people said Replika helped them avoid suicide. Similar patterns appear among younger chatbot users. In a study of 12–21-year-olds — a group for whom suicide is the second leading cause of death — 13% of respondents used chatbots for some kind of mental health advice, of which more than 92% said the advice was helpful.

There are, obviously, some limits to the Replika study, including that the data is from a few years ago, and it involves self-reporting, which can always lead to some wacky results. But it is notable that this study was done by Stanford academics (i.e., not Replika itself) and was good enough to get published in Nature. And it does seem notable that even with the methodological limitations, so many people self-reported that the service helped them avoid suicide. For all the attention-grabbing stories of chatbots being blamed for encouraging suicidal ideation, that seems important. Same with the claim of 92% that the mental health advice was helpful.

It feels like these kinds of numbers should be at the center of any serious policy conversation. Instead, they’re almost entirely absent from the legislative discussion, which focuses exclusively on the (very real, very tragic, but still somewhat rare) cases where things went wrong.

A big part of the reason chatbots are filling this gap is that the traditional mental health system isn’t remotely equipped to meet existing demand. Nearly half of Americans with a known mental health condition never seek professional help. There are plenty of reasons for this, ranging from the cost of mental health treatment, to the general stigma of being seen as needing such help, not to mention potential professional and social consequences.

As Miers and Yeh put it: “many stay silent, waiting to see if things get worse.”

Chatbots, whatever their limitations, offer something the professional system largely cannot: they’re always available in a form many people feel more comfortable talking with:

By contrast, chatbots offer low-friction, low-stakes, and always-available support. People are often more willing to speak candidly with computers, knowing that there is no human on the other side to judge or feel burdened. Some people even find chatbots to be more compassionate and understanding than human healthcare providers. AI users may feel more comfortable sharing embarrassing fears, or questions they might otherwise hold back. For clinicians, discussing these interactions can surface insights into patients’ thoughts and emotions that were once difficult to access. For now, chatbot providers generally refrain from contacting law enforcement, leading to more candid conversations.

So what does the California-style regulatory approach actually do to this ecosystem? Faced with liability for any conversation later linked to harm, and unable to reliably predict which conversations those will be (in part because, as we covered recently, even clinicians who specialize in suicide prevention admit they often can’t predict it), providers will default to the behavior that minimizes legal exposure whether or not it helps users. That means reflexively pushing 988 at any mention of distress, or cutting off conversations entirely, or simply refusing to engage with mental health topics at all.

And that kind of defensive posturing can be actively harmful to those most at risk:

Suicide prevention is about connecting people to the right support. Sometimes that means crisis care like hotlines or immediate medical treatment. But blunt, impersonal responses can backfire. Pushing 988 at the first mention of distress may seem neutral, but for some, it triggers shame, and deepens hopelessness. For some, suicide prevention “signposting” causes frustration, especially for those who already know those resources exist. People often turn to the Internet, or a chatbot, because they’re looking for something else. Abruptly ending conversations can have the same effect. That’s why suicide prevention protocols like Question, Persuade, Refer (QPR) prioritize trust-building and open dialogue before offering help.

So the regulatory regime mandates behavior that can actively escalate distress, all while still leaving providers exposed to blame if tragedy follows anyway. It’s the worst of both worlds: worse outcomes for users, continued liability for providers, and a chilling effect on the research and development that might actually improve things.

We don’t need to speculate about whether this dynamic plays out in practice. We’ve already watched it happen with social media:

The social media ecosystem has already shown this dynamic. In response to regulatory pressure, major online services heavily moderate, or outright prohibit, suicide-related discussions, sometimes hiding content that could otherwise destigmatize mental health. That merely displaces the conversations, and the people having them, often into spaces with less oversight and support.

If this sounds familiar, it’s because it is. It’s the same pattern that emerges whenever policymakers try to make sensitive topics go away through platform liability: the topics don’t go away, they just migrate to darker corners where nobody is watching at all. A mental health crisis doesn’t magically disappear just because Instagram or TikTok hid the conversation. Those in need of help are more likely to then end up somewhere with fewer guardrails, fewer resources, and fewer people equipped to help.

This leads directly back to the core of the argument, which may feel a bit backwards at first. If we want chatbot providers to build genuinely better systems for handling mental health conversations — systems that can identify distress patterns, offer appropriate triage, connect users to professional care when that’s what’s needed, and sustain helpful conversation when it isn’t — we need a liability environment that doesn’t punish the attempt.

This is, incidentally, exactly the logic that produced Section 230 in the first place. Before Section 230, the Stratton Oakmont v. Prodigy ruling created a perverse situation where platforms that tried to moderate content faced more liability than platforms that did nothing. The obvious result, had that stood, would have been less moderation, not more, because the smart legal advice would have been “don’t touch anything.” Section 230 fixed that by ensuring that the act of moderation itself didn’t create liability, which in turn made it possible for platforms to actually invest in moderation systems. Contrary to the widespread belief among the media and politicians, Section 230 didn’t eliminate accountability — it smartly redirected incentives toward the behavior we actually wanted.

The same logic applies here. A targeted liability shield for AI providers engaged in mental health support could give them the space to invest in building better suicide detection, better triage pathways, and better handoffs to human professionals. But that won’t happen if every such attempt turns into a potential lawsuit. The research to enable this is already happening despite the hostile incentive environment:

Meanwhile, emerging research suggests chatbots show real promise for mental health support. Trained on large-scale data and refined with clinical input, large language models are getting better at spotting patterns of distress and responding to suicidal ideation in nuanced, personalized ways. In a recent UCLA study, researchers found that LLMs can detect forms of emotional distress associated with suicide that existing methods often miss—opening the door to earlier, more effective intervention. According to another study, the most promising approach may be a hybrid where AI flags risk in real time, and trained humans step in with targeted support.

That hybrid model — AI identifying risk, trained humans providing targeted intervention — is exactly the kind of system you’d want chatbot providers racing to build. Instead, the current regulatory trajectory is telling them: build that, and you’re just creating a liability sinkhole. Every time your system engages with a mental health conversation, you’ve created a potential future lawsuit. Better to just block the conversation entirely and hope the user finds help somewhere else.

I get that some people will reasonably worry that “less liability” sounds like a giveaway to AI companies that are already acting irresponsibly. But Miers and Yeh aren’t arguing that chatbots should be able to impersonate licensed therapists, or that there should be no accountability for products designed to be used by vulnerable users. The American Psychological Association’s approach — prevent chatbots from posing as licensed professionals, limit designs that mimic humans, expand AI literacy — is perfectly compatible with a liability shield for thoughtful, helpful mental health support. The point is to stop punishing the specific behavior we want more of: chatbots that try to actually help people who are struggling, including by building better pathways to professional care for those who need it.

Simply putting liability on the companies is unlikely to do that.

And for people in acute crisis, professional intervention is still a necessity. Nobody serious is arguing chatbots should wholly replace crisis lines or psychiatric care. The argument is that the vast majority of people using chatbots for mental health support are not in acute crisis — they’re anxious, lonely, depressed, processing a breakup, working through stress, looking for someone to talk to at 3am when their therapist isn’t available and calling 988 feels like overkill. For that population — which is the overwhelming majority — the regulatory regime being built assumes the worst and mandates responses that often make things worse.

The deeper problem, as we’ve written before, is that the entire framing of “AI causes suicide” relies on a confidence about the mechanics of suicide that clinicians themselves don’t have. About half of people who die by suicide deny suicidal intent to their doctors in the weeks or month before their death. Experts who have spent decades studying this admit they often cannot predict it even when treating patients directly. The idea that we can identify which chatbot conversation “caused” which outcome, and design liability around that identification, assumes a causal clarity that doesn’t exist anywhere in the actual science.

Good policy here would look very different from what’s being proposed. Miers and Yeh point to a Pennsylvania proposal that would fund development of AI models designed to identify suicide risk factors among veterans — incentivizing the research we actually need rather than punishing it. They suggest liability shields modeled on Section 230 that would encourage continued investment in safer, more responsive systems. They warn specifically against imposing a clinical regulatory framework (with its mandatory reporting requirements) onto general-purpose chatbots, because doing so would replicate exactly the barriers that already keep many people from seeking professional help.

None of this is as emotionally satisfying as “ban the thing that hurt people.” Moral panics rarely are, because moral panics are fundamentally about finding something to blame rather than about the harder work of actually understanding what’s happening and designing interventions that might help. But for the over one million people per week currently turning to chatbots for mental health support — a group that includes at least the thirty Replika users who credit the chatbot with keeping them alive — the difference between a regulatory regime that punishes thoughtful engagement and one that incentivizes it is the difference between having somewhere to turn at 3am or running into a wall of “please call 988” followed by a terminated conversation.

We’ve watched this movie before with social media. We know how it ends. The conversations just move somewhere worse, with fewer resources and less oversight. The tragedies keep happening — they just stop being visible to anyone who might be in a position to help. And the technology gets worse at the thing we want it to be better at, because the legal environment has made getting better into a liability.

If lawmakers are serious about mental health outcomes rather than political theater, they should be asking how to make chatbots better at this — how to build the hybrid human-AI triage systems the research is pointing toward, how to turn these tools into genuine funnels toward professional care when that’s what’s needed, how to preserve the candid, low-stakes space that people clearly find valuable. That project requires a liability regime that rewards trying to be better rather than punishing it. The alternative is what California just passed, and what New York is considering, and what we’ll keep getting until someone in the policy conversation is willing to notice that the intuitive answer here is producing the exact opposite of the intended outcome.

It’s a counterintuitive approach. It’s also the only one that has any chance of actually working.

01:00 AM

Your work diary [Seth Godin's Blog on marketing, tribes and respect]

Five short entries a day.

  • A generous act of leadership
  • A thank-you note sent
  • Curiosity explored, or a hard question asked
  • A new skill learned
  • An interaction with a customer or co-worker that increased empathy

It’s easy to imagine that if you do this 200 workdays in a row, your career will advance. And it makes it easier to prepare for your annual review or that next job interview.

Like most habits, the hardest part is committing to begin.


Pluralistic: In praise of vultures (06 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links

  • In praise of vultures: They screw you because they can.
  • Hey look at this: Delights to delectate.
  • Object permanence: Linus v MSFT; Argentina v MSFT; Danny Hillis on theme parks v games; Smartfilter v Distributed Boing Boing; Rental laptops filled with spyware; Torture didn't help capture bin Laden; Massively parallel Apple //e; Stephen Harper v election law; John Deere v Iowa cartoonist; Qualia.
  • Upcoming appearances: Guelph, Barcelona, Berlin, Hay-on-Wye, London, NYC, Edinburgh.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



A down-at-heel frontier courtroom presided over by a flustered judge and his miserable clerk. In the foreground is a vulture in a powdered barrister's wig.

In praise of vultures (permalink)

One of my bedrock beliefs is that capitalists really hate capitalism. They may name their beloved institutes after the likes of Adam Smith, but they ignore everything Smith had to say about the necessity of competition to keep markets from turning into monopolies:

https://pluralistic.net/2023/06/09/commissar-merck/#price-giver

The theory of capitalism holds that markets are a kind of distributed computer that aggregates trillions of decisions from billions of market participants in order to optimize production and distribution of goods and services, creating a "Pareto-optimal" world where no one can be made better off without making someone else worse off.

Whether or not you believe that this computer exists and functions as predicted, one indisputable fact about it is that it requires the freedom to choose in order to work. The point of market-as-computer is that it aggregates decisions, so it can only work if everyone is as free as possible to decide.

But that's not the world capitalists want. For capitalists, the point is to restrict other people's choices in order to maximize your own freedom. That's how we get economic doctrines like "revealed preferences": the idea that if a person says they want one thing, but does another thing, then you can tell what they really prefer by looking at the latter and disregarding the former. This is the kind of doctrine you can only fully embrace after sustaining the kind of highly specific neurological injury that is induced by taking an economics degree, an injury that makes you incapable of perceiving or reasoning about power. Under the doctrine of revealed preferences, someone who sells their kidney to make the rent has a revealed preference for only having one kidney:

https://pluralistic.net/2026/03/30/players-of-games/#know-when-to-fold-em

Capitalism is supposed to run on risk: the risk of being overtaken by a competitor drives businesses to deliver better services more efficiently, thus producing a bounty for all. But capitalists really hate risk, hence the drive to monopoly: Mark Zuckerberg admitted, in writing, that he only bought Instagram so that he wouldn't have to compete with it ("It is better to buy than to compete" -M. Zuckerberg):

https://pluralistic.net/2025/11/20/if-you-wanted-to-get-there/#i-wouldnt-start-from-here

Capitalists hate capitalism, but they love feudalism. Feudalism is like capitalism, in that you have a ruling class that creams off the surplus generated by labor; but under feudalism, society is organized to protect rents (money you get from owning stuff) over profits (money you get from doing stuff). The beauty of rents is that they are insulated from risk: if you own a coffee shop, you're in constant danger of being put out of business by a better coffee shop. But if you own the building and your coffee shop tenant goes under, well, you've still got the building, and hey, now it's on the same hot block as the amazing new cafe that's driving its competitors out of business:

https://pluralistic.net/2023/09/28/cloudalists/#cloud-capital

Douglas Rushkoff calls this "going meta": don't drive a taxi, rent a medallion to a taxi driver. Don't rent a medallion, start a ride-hailing app company. Don't start a ride-hailing company, invest in the company. Don't invest in the company, buy options on the company's shares. Each layer of indirection takes you further from the delivery of a useful service – and insulates you further from risk:

https://pluralistic.net/2022/09/13/collapse-porn/#collapse-porn

Monopoly is to capitalism as gerrymandering is to democracy, a way to strip out any meaningful choice. Think of the two giant packaged goods companies that fill your grocery aisles: Procter & Gamble and Unilever. Practically everything on your grocer's shelves is made by a division of one of these two massive conglomerates. If you try to "vote with your wallet" by buying a low-packaging version of a product, it's going to be sold to you by the same company that sells the high-packaging version. If you switch to an artisanal brand of cookies made by a local family business, Unilever or P&G will buy that company and issue a press release declaring that they made the acquisition because they know "their customers value choice":

https://pluralistic.net/2024/05/18/market-discipline/#too-big-to-care

Gerrymandering strips your vote of any impact on political outcomes. Monopoly strips your purchases of any ability to influence economic outcomes. Wrap both of them in "revealed preferences" and you get a system that endlessly narrates its ability to deliver choice, and then blames your misery on your having chosen badly.

This is the method of the entire conservative project. As Dan Savage says: the thing that unites conservative assaults on voting, birth control, abortion and no-fault divorce is the stripping away of choice. Conservatives are trying to create a world populated by husbands you can't divorce, pregnancies you can't prevent or terminate, and politicians you can't vote out of office. Add to that Trump's assault on the National Labor Relations Board, his reversal of the FTC's ban on noncompetes, and his protection of "TRAP" agreements that force employees to pay thousands of dollars if they quit their jobs, and you get "jobs you can't quit":

https://pluralistic.net/2025/09/09/germanium-valley/#i-cant-quit-you

Conservative strongmen like Trump and Musk exalt the value of self-determination – for themselves, at everyone else's expense. Trump's ability to stiff the contractors that built his hotels and Musk's ability to rain flaming rocket debris down on the people who live near his company town require that everyone else be stripped of protections. They get to determine their own course in life by taking away your ability to determine your own. Their right to swing their fists ends two inches past your nose:

https://pluralistic.net/2026/04/21/torment-nexusism/#marching-to-pretoria

Cheaters and bullies hate the rule of law, hence Trump's endless repetition of Nixon's mantra: "When the president does it, that means it is not illegal." But not everyone can be president, and the world is full of would-be Trumps in positions of power who would like to be able to commit crimes without fear of legal repercussions. For these people, we have something called "binding arbitration."

"Binding arbitration" is a widely used contractual term that forces you to surrender your right to sue a company that wrongs you. Instead of suing, binding arbitration forces you to take your case to an "arbitrator"; that is, a lawyer who is paid by the company that cheated you or maimed you or killed your loved one. The arbitrator decides whether their client is guilty, and, if so, how much that client owes you. The entire process is confidential and it is non-precedential, meaning that if a company rips off millions of people in the same way, each of them has to arbitrate their claims separately, and people who are successful can't share their tactical notes with the people who are next in line to plead for justice.

That makes binding arbitration another key weapon in the conservative movement's war on choice: not just jobs you can't quit and politicians you can't vote out of office, but also companies you can't sue. Binding arbitration is a creation of the Federalist Society and their champion Antonin Scalia, who authored a series of Supreme Court dissents and (ultimately) decisions that opened the door for binding arbitration everywhere:

https://pluralistic.net/2025/10/27/shit-shack/#binding-arbitration

Given the Fedsoc's role in shoving binding arbitration down every worker and shopper's throat, it's decidedly odd that they invited Ashley Keller to be their keynote debater in 2021, where he argued that "concentrated corporate power is a greater threat than government power":

https://www.youtube.com/watch?v=aY5MrHGjVT8

Keller is a powerhouse lawyer, and an avowed conservative, who has pioneered many tactics for overcoming binding arbitration clauses. He helped create "mass arbitration," bringing thousands of arbitration cases on behalf of Uber drivers who'd had their wages stolen by the company. Since Uber has to pay the arbitrators in each of those cases, they faced a much larger bill than they would face in any possible class action suit:

https://www.reuters.com/article/otc-uber-frankel-idUKKCN1P42OH/

Mass arbitration cases spread to all kinds of large firms that used petty grifts to steal from thousands or even millions of people, like Intuit, who deceive – and rip off – millions of Americans every year with their fake Turbotax "free file" system:

https://pluralistic.net/2022/02/24/uber-for-arbitration/#nibbled-to-death-by-ducks

Mass arbitration worked so well that Amazon actually revised its terms of service to drop binding arbitration, because they realized that they'd be better off facing class action suits:

https://pluralistic.net/2021/06/02/arbitrary-arbitration/#petard

Of course, the point of binding arbitration was never to create a streamlined system of justice – it was to bring about a world of no justice, where you have no right to sue. It's part of the decades-old "tort reform" movement that the business lobby has used to take away your right to sue altogether. Any time you hear about a seemingly crazy lawsuit (like the urban legends about the McDonald's "hot coffee" case), you're being propagandized for a world without legal consequences for companies that defraud you, steal from you, injure you, or kill you:

https://pluralistic.net/2022/06/12/hot-coffee/#mcgeico

That's why companies (like Bluesky) are now trying terms of service that also ban you from mass arbitration, while retaining the right to consolidate claims into a mass arbitration case if that's advantageous to them:

https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements

But Keller keeps finding creative ways around binding arbitration. He's currently bringing thousands of arbitration claims against Google, on behalf of advertisers whom Google stole from (Google is a thrice-convicted monopolist, and they lost a case last year over their monopolization of ad-tech, where they were found to have defrauded advertisers).

He also just argued before the Supreme Court in a case against Monsanto over the company's attempt to escape liability for causing cancer in farmworkers with their Roundup pesticide:

https://www.npr.org/2026/04/27/nx-s1-5793804/supreme-court-monsanto-roundup-arguments

Keller appears in the latest episode of the Organized Money podcast, for a fascinating interview about his work and outlook, and how he reconciles his work fighting corporate power with his identity as a movement conservative:

https://www.organizedmoney.fm/p/the-conservative-who-torments-big

Keller's first big, important point is that (basically), capitalists hate capitalism (see above). He cites Milton Friedman, who "always said that the tort system is the best way to ensure that companies behave and follow the rules." For Keller (and Friedman) the alternative to private litigation against bad businesses is "government regulation and the alphabet soup of Washington, DC agencies [that] try and police these companies."

But, of course, the businesses that want binding arbitration and tort reform (so they can't be sued) also want to "dismantle the administrative state" (so they can't be regulated). They're the impunity movement, the "when the president does it, that means it is not illegal" movement, the "heads I win, tails you lose" movement. They're the caveat emptor movement, the "that makes me smart" movement:

https://pluralistic.net/2024/12/04/its-not-a-lie/#its-a-premature-truth

They don't want efficient markets, with the ever-present threat of a better competitor putting them out of business. They want feudalism. They want to go meta. They want to have the kind of self-determination you can only achieve by taking away everyone else's self-determination.

I was very struck by Keller's claim to be engaged in an exercise that Milton Friedman identified as the best one for making markets work. One of Keller's most forceful points is that class action suits are especially important for reining in petty, recurrent grifts, the junk fees that are the hallmark of enshittification.

He quotes his old boss, the archconservative judge Richard Posner, who said "Only a lunatic or a fanatic sues for $20." But multiply a $20 junk fee across ten million purchases and the company pockets $200 million. That's real folding money, which is why every company has figured out a way to whack you for a $20 junk fee.

There are two ways to end this racket: one is litigation, the other is regulation, and the capitalism-hating-capitalists who run the world want to kill both. That's why the business lobby smears lawyers like Keller as being "vultures." But as Matt Stoller says, "vultures look aggressive and whatnot, but when you actually get rid of vultures out of an ecosystem, all sorts of things go haywire."

I love this point. Vultures live off the disgusting, rotting crap that would otherwise pile up around us, breeding disease and emitting an unbearable stench. If plaintiff-side, no-win/no-fee lawyers are vultures, then junk fees, wage theft, and the million petty frauds they fight are the disgusting, rotting crap that vultures feed off of – and the harder we make it for our noble vulture lawyers, the more disgusting, rotting crap we have to live with, hence the unbearable stench that is all around us.

Listening to Keller was a fascinating exercise. I thoroughly disagree with him about many things – the way he characterized Section 230 of the Communications Decency Act couldn't have been more wrong – but it's quite bracing to hear a capitalist who doesn't hate capitalism defend it against the vast majority of capitalists, who hate capitalism more than any socialist ever did.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Torvalds responds to Microsoft's Craig Mundie https://web.archive.org/web/20011019132822/http://web.siliconvalley.com/content/sv/2001/05/03/opinion/dgillmor/weblog/torvalds.htm

#25yrsago Bankrupt Argentina considers banning proprietary code and switching to free software https://web.archive.org/web/20010614131152/https://www.wired.com/news/business/0,1367,43529,00.html

#20yrsago Danny Hillis on how games are(n’t) like a theme park https://web.archive.org/web/20060513182649/https://www.wired.com/wired/archive/14.04/disney.html

#20yrsago Mission Impossible opening marked by anti-Scientology flyover https://web.archive.org/web/20060514000636/http://hailxenu.net/

#20yrsago SmartFilter targets Distributed Boing Boing – how to defeat it https://memex.craphound.com/2006/05/04/smartfilter-targets-distributed-boing-boing-how-to-defeat-it/

#15yrsago John Ashcroft assumes charge of “ethics and professionalism” for Blackwater https://web.archive.org/web/20110507103749/https://www.wired.com/dangerroom/2011/05/blackwaters-new-ethics-chief-john-ashcroft/

#15yrsago Rumsfeld and other US officials say torture didn’t help catch bin Laden https://web.archive.org/web/20110505012303/https://www.wired.com/dangerroom/2011/05/surveillance-not-waterboarding-led-to-bin-laden/

#15yrsago Rental laptops equipped with spyware that can covertly activate the webcam and take screenshots https://web.archive.org/web/20110506130156/http://www.ajc.com/business/pa-suit-furniture-rental-933410.html

#15yrsago Parallel machine made out of 17 stitched-together Apple //e’s https://web.archive.org/web/20110504194313/http://home.comcast.net/~mjmahon/AppleCrateII.html

#15yrsago Sarah Palin and James Lankford: giving $4 billion of taxpayer money to oil companies doesn’t matter https://web.archive.org/web/20110505220640/https://thinkprogress.org/2011/05/03/palin-lankford-oil-subsidies/

#15yrsago Stephen Harper violated election laws https://web.archive.org/web/20110701000000*/http://www.examiner.com/canada-headlines-in-canada/stephen-harper-breaks-election-rules-campaigns-on-radio-on-election-day

#15yrsago History and future of bin Ladenist extremism https://www.juancole.com/2011/05/obama-and-the-end-of-al-qaeda.html

#10yrsago Belushi widow & Aykroyd produce Blues Brothers animated series https://deadline.com/2016/05/the-blues-brothers-animated-comedy-series-dan-aykroyd-1201748389/

#10yrsago Chinese censorship: arbitrary rule changes are a form of powerful intermittent reinforcement https://www.techdirt.com/2016/05/04/why-growing-unpredictability-chinas-censorship-is-feature-not-bug/

#10yrsago US government and SCOTUS change cybercrime rules to let cops hack victims’ computers https://www.wired.com/2016/05/now-government-wants-hack-cybercrime-victims/

#10yrsago After advertiser complaints, Farm News fires editorial cartoonist who criticized John Deere & Monsanto https://web.archive.org/web/20160505042150/https://www.kcci.com/news/longtime-iowa-farm-cartoonist-fired-after-creating-this-cartoon/39337816

#10yrsago Outstanding rant about establishment pearl-clutching over Trump https://web.archive.org/web/20160505033357/https://theconcourse.deadspin.com/george-will-is-a-haughty-dipshit-1774449290

#10yrsago The Planet Remade: frank, clear-eyed book on geoengineering, climate disaster, & humanity’s future https://memex.craphound.com/2016/05/04/the-planet-remade-frank-clear-eyed-book-on-geoengineering-climate-disaster-humanitys-future/

#5yrsago Qualia https://pluralistic.net/2021/05/04/law-and-con/#law-n-econ

#5yrsago Whales decry the casino economy https://pluralistic.net/2021/05/04/law-and-con/#all-bets-are-off


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

Wednesday 2026-05-06

11:00 PM

This Trump FCC Cybersecurity ‘Fix’ Is About To Make Hardware Way More Expensive For Everyone [Techdirt]

Last week the Trump FCC quietly announced that it was cooking up a new rule banning any labs that have testing offices in China from testing electronic devices such as smartphones, cameras and computers for sale in the United States.

That’s going to create some major issues given that roughly 75% of all U.S.-bound electronics are currently tested in Chinese facilities. Many of these operations are owned by U.S. or European companies that have testing facilities in China because that’s where the lion’s share of technology is manufactured, so it’s simply more efficient for testing evolving iterations of new products.

That these companies have offices in China doesn’t inherently mean the testing labs are somehow all magically compromised and in dutiful service to the Chinese government, though that’s certainly the implication the xenophobic Trump administration is making (and has made before in previous, similar announcements).

One major problem outside of the raw logistics of it all: Carr’s planned cybersecurity fix would be significantly more expensive, driving up costs for everyone:

“27 of the affected facilities are Chinese subsidiaries of major Western testing firms, including Intertek, SGS, TUV Rheinland, and Bureau Veritas. Those companies operate labs in the U.S., Europe, and Taiwan that can absorb redirected work, but the shift won’t be seamless. Basic FCC certification testing runs between $400 and $1,300 at Chinese labs, compared with $3,000 to $4,000 at U.S. equivalents.”

Who is going to eat the difference in those costs? You are, of course. In addition to the higher costs from the AI boom, the tariffs, and Trump’s pointless war in Iran. Whatever companies lobbied Carr and Trump will do great. You probably won’t.

Given the terrible nature of smart IoT home security standards (more a byproduct of unregulated crony capitalism than of China-based testing locations), having a more direct line of control over the testing of U.S.-bound hardware makes superficial sense.

But then you have to remember that this is Brendan Carr, who does nothing authentically in the public interest, and is likely just looking to drive more business to a handful of U.S. companies that lobbied for his attention. And you have to remember that these folks, as you saw when they talked about shifting smartphone production to the States, don’t actually know what the fuck they’re doing.

The other major problem: Trump and Carr’s rabid deregulatory, anti-governance zealotry on other fronts has repeatedly worked to undermine U.S. cybersecurity, making these sorts of fixes leaky and highly performative, even if they were to be successful (which they won’t be).

While Carr and Trump profess to be super worried about Chinese threats to national security, with their other hand the Trump administration has gutted government cybersecurity programs (including a board investigating the biggest Chinese hack of U.S. telecom networks in history), dismantled the Cyber Safety Review Board (CSRB) (responsible for investigating significant cybersecurity incidents), and fired oodles of folks doing essential work at the Cybersecurity and Infrastructure Security Agency (CISA).

Brendan Carr is also engaged in a massive effort to destroy whatever’s left of the FCC’s consumer protection and corporate oversight authority, despite the fact that the recent historic Chinese Salt Typhoon hack (caused in large part because major telecoms were too incompetent to change default administrative passwords) was a direct byproduct of this exact type of mindless deregulation.

The Trump administration’s stacked courts are also making it extremely difficult to hold telecoms accountable for literally anything (see the Fifth Circuit’s recent reversal of a fine against AT&T for spying on customer movement), which also undermines consumer privacy and national security, and ensures zero real repercussions for companies that fail to secure their networks and sensitive data.

So, with one hand you have Carr claiming he’s “fixing cybersecurity” with stuff like this or his recent foreign router “ban” (which as we’ve noted is really a lazy extortion scheme), while with the other he’s doing everything in his power to ensure that domestic telecoms don’t really have anything even vaguely resembling meaningful privacy and security oversight.

Here’s where I’ll remind you that because the U.S. is too corrupt to pass even a basic modern privacy law, we also have a vast and largely unregulated data broker industry that hoovers up your every movement and online habit, then sells access to it to any random asshole (including foreign and domestic government intelligence agencies).

Here too, weird zealots like Trump and Carr have rolled back efforts to regulate data brokers or do anything about it. As authoritarian racists, they’re too blinded by personal self-enrichment and racism to have any genuine understanding of how any of this stuff actually works.

As with the TikTok “ban” (which basically involved shoveling ownership to Trump’s billionaire buddies), so much of this is heavily xenophobic, nationalistic, transactional, self-serving, and performatively detached from any actual reality. By the time the check comes due, guys like Carr and Trump will already be off to the next grift.

NVIDIA’s Shadow Library Scripts ‘Have No Other Purpose’ Than Infringement, Judge Rules [TorrentFreak]

Chip giant NVIDIA has been one of the main financial beneficiaries of the artificial intelligence boom.

Revenue surged due to high demand for its AI-learning chips and data center services, and the end doesn’t appear to be in sight.

Besides selling the most sought-after hardware, NVIDIA is also developing its own models, including NeMo Megatron models. These were trained using NVIDIA’s own hardware and with help from large text libraries, much like other tech giants do.

Authors Sue NVIDIA for Copyright Infringement

Authors are among the rightsholders pushing back: in various lawsuits, they have accused tech companies of training their models on pirated books. In early 2024, for example, several authors, including Abdi Nazemian, sued NVIDIA over alleged copyright infringement.

Through the class action lawsuit, they claimed that the company’s AI models were trained on the Books3 dataset that included copyrighted works taken from the ‘pirate’ site Bibliotik.

As the case progressed, the authors also brought up NVIDIA’s contacts with Anna’s Archive, inquiring about “high-speed access” to the shadow library’s massive collection of pirated books.

NVIDIA Wants Case Dismissed

In January, NVIDIA fired back with a comprehensive motion to dismiss, calling the authors’ allegations speculative, vague, and legally insufficient. In California federal court, NVIDIA argued that the authors’ complaint is built on speculation rather than facts.

Specifically, the company asked the court to dismiss the direct copyright infringement claims linked to Bibliotik, Books3, and The Pile dataset.

In addition, the motion also targets the contributory copyright infringement allegations, which center on scripts and tools NVIDIA allegedly distributed so corporate customers could automatically download ‘The Pile,’ the dataset that contains Books3.

The authors’ script allegations


The chip giant initially asked the court to dismiss claims relating to Anna’s Archive, Z-Library, LibGen, Sci-Hub, and the Slimpajama dataset as well, but it withdrew this request in March, which substantially narrowed the dispute.

Scripts Have No Other Purpose than Infringement

In an order issued yesterday, U.S. District Judge Jon Tigar denied most of the dismissal request. Importantly, the contributory infringement claim survives, even after the Supreme Court’s Cox v. Sony ruling, which significantly impacts many copyright infringement cases.

NVIDIA argued that Cox tightened the standard, requiring “active encouragement through specific acts,” and stressed that the NeMo Megatron Framework as a whole has substantial non-infringing uses. To prove the contributory claim, NVIDIA argued, the authors would have to show that it marketed or promoted the framework as a piracy tool.

Judge Tigar rejected the framing. Instead of analyzing the Megatron framework as a whole, he zeroed in on the specific scripts that NVIDIA distributed to clients so they could automatically download and preprocess The Pile dataset. Those scripts have no purpose other than enabling infringement, the court concluded.

“The scripts are alleged to have no other purpose than to speed up the process of infringement, unlike the digital video recorder systems at issue in Sony Corp. or the internet service provided in Cox,” Judge Tigar wrote.

This appears to be the first AI training case to apply the new Cox standard, and the result didn’t go the way NVIDIA hoped. The scripts it offered satisfied both the new ‘inducement’ and ‘tailored to infringement’ standards required for a contributory infringement finding.

BitTorrent Is ‘Merely a Tool’

Regarding the direct copyright infringement claims, NVIDIA also asked the court to dismiss “allegations concerning its ‘use of any [sic] BitTorrent Protocol.'”

The request was pretty thin, Judge Tigar noted, pointing out that the complaint contains exactly one reference to BitTorrent. That reference doesn’t point to any of NVIDIA’s alleged wrongdoing. It’s a descriptive line about Bibliotik distributing pirated works via the protocol.

Judge Tigar refused to dismiss all BitTorrent allegations, stressing that “BitTorrent is merely a tool, not a library or dataset.” He also offered a rather colorful analogy.

“Asking to dismiss allegations concerning BitTorrent is like asking to dismiss allegations concerning paintbrushes in a case about a dolphin painting,” the order reads, citing Folkens v. Wyland Worldwide, a copyright dispute over a painting of two dolphins crossing underwater.


NVIDIA’s interest in stripping BitTorrent from the case is easier to understand in light of Meta’s troubles in a parallel AI lawsuit. There, Meta’s BitTorrent seeding resulted in direct copyright infringement claims. NVIDIA appears to have wanted that door closed before discovery could open it.

Lawsuit Moves Forward

NVIDIA did get a small win as Judge Tigar dismissed the vicarious copyright infringement claim.

To state that claim, the authors needed to plausibly allege that NVIDIA had both the legal right to control the direct infringers and a direct financial interest in the infringement. Tigar found neither was adequately pleaded, but allowed the authors 21 days to address the deficiencies and refile.

For now, it is clear that this legal battle between the authors and NVIDIA is far from over.

The same also applies to a long list of other AI training lawsuits, which continue to grow every month. That includes a lawsuit filed against Meta and Mark Zuckerberg yesterday by major publishers, which, like many others, also accuses Meta of training on pirated books.

A copy of U.S. District Court Judge Jon Tigar’s order on NVIDIA’s motion to dismiss is available here (pdf).

From: TF, for the latest news on copyright battles, piracy and more.

02:00 PM

Steven Soderbergh On AI In Films: If There’s a Filmmaking Tool, I’m Going To Explore It [Techdirt]

While we’ve taken issue with his approach to copyright laws and enforcement in the past, there is no doubting that Steven Soderbergh is a filmmaking legend. This is a man who directed films like Traffic and Ocean’s 11. He talks about, and cares about, the art of filmmaking. And he’s apparently beginning to use AI in some limited ways.

You really have to pay attention to Soderbergh’s specific comments on how he’s using it, because I would argue that it’s exactly the right artistic approach to the conversation: limited, targeted uses that help achieve the artist’s vision rather than replace everything in a film with garbage slop. Interestingly, articles like this one from Salon still frame all of this as some betrayal of art on Soderbergh’s part. Here’s how Soderbergh describes how he’s using AI as part of an upcoming film about John Lennon and Yoko Ono.

“AI has been helpful in creating thematically surreal images that occupy a dream space rather than a literal space,” Soderbergh said. “And it’s been really fun because you need a Ph.D. in literature to tell it what to do.” Soderbergh relented that generative programs require “very close human supervision,” before going on to admit that he’s also using “a lot of AI” for an upcoming film about the Spanish-American War, to generate images of archaic warships and God knows what else.

I very much understand Soderbergh’s description of how he’s using this tool for his films, but I have no idea what the hell the commentary from Salon around the quote is on about. “And God knows what else” is perhaps the silliest comment in the post, because that statement only works if Soderbergh himself happens to be God.

I don’t believe he is, to be clear. And I think an artist like this one who finds the tool useful in achieving his overall artistic vision is something we should be paying attention to, not dismissing out of hand. The Salon piece notes that Soderbergh has routinely been a director who has embraced the use of new technology before launching into this diatribe.

But just because Soderbergh jumping at AI could be seen from a mile away doesn’t make it any less disappointing, nor does it excuse his reluctance to thoughtfully engage with others’ criticisms about the technology. If “The Christophers” is to be believed, art that tries to imitate a certain style is little more than hollow, emotionless posturing. Generative AI is the same: mere mimicry, devoid of the humanity that makes art . . . well, art. And by being so willfully averse to acknowledging the ways AI and art conflict — not to mention its ramifications for others in his industry — Soderbergh’s take on an artist losing his touch in “The Christophers” is disappointingly apt.

Of course the art that AI “creates” is mimicry and devoid of humanity. That’s definitionally how the tool works. And anyone who thinks they’re going to rely on an AI tool to “create art” is on a fool’s errand. It simply won’t work, because it’s not designed to work that way. Instead, it’s a tool to get you some components of what you need to realize an overall artistic vision, which is still led by a very human artist. Will there be work done by AI on the margins of filmmaking that would normally have been done by paid workers in the industry? Perhaps. Likely, even. But will the limited use of these tools also lower the barriers to entry, in terms of both the skill set and the budget needed to produce films, thereby increasing the overall output of films? I’m struggling to see how that would not be the case.

And at the end of the day, there’s still an artist calling the shots. Perhaps fewer total artists involved in a single movie, but the limited use of AI tools doesn’t somehow suck the entire soul from a film any more than the ease of digital editing over physically cutting film does. And just like a movie that is almost nothing other than pretty CGI graphics, but which otherwise sucks, lazy people trying to create entire films with AI are going to fail. And fail hard.

Say it with me now: there is more nuance to this conversation than the hardliners and evangelists are bothering to acknowledge.

In a follow-up chat with Variety, Soderbergh expanded on his initial comments about using AI in future films. “I’m just not threatened by it . . . Ten years ago, I would have needed to engage a visual effects house at an unbelievable cost to come up with this stuff,” he said. “No longer. My job is to deliver a good movie, period. And this tool showed up at a moment when I needed it. I don’t think it’s the solution to everything, and I don’t think it’s the death of everything . . . There are some people that I have absolute love and respect for that refuse to engage with it. That’s their privilege. But I’m not built that way. You show me a new tool, I want to get my hands on it and see what’s going on.”

That’s an artist saying that, folks, not some Silicon Valley tech bro. And, to be clear, he might get it wrong. He may use the tool and his product might suck out loud. But to try to abort the use of a tool before it’s even been explored seems silly.

Kanji of the Day: 角 [Kanji of the Day]

✍7

小2

angle, corner, square, horn, antlers

カク

かど つの

外角   (がいかく)   —   external angle
内角   (ないかく)   —   interior angle
一角   (いっかく)   —   corner
角度   (かくど)   —   angle
互角   (ごかく)   —   equal (in ability)
三角   (さんかく)   —   triangle
角界   (かくかい)   —   the world of sumo
折角   (せっかく)   —   with trouble
街角   (まちかど)   —   street corner
多角的   (たかくてき)   —   multilateral

Generated with kanjioftheday by Douglas Perkins.

Kanji of the Day: 弥 [Kanji of the Day]

✍8

中学

all the more, increasingly

ミ ビ

や いや いよ.いよ わた.る

弥生   (いやおい)   —   third month of the lunar calendar
弥生時代   (やよいじだい)   —   Yayoi period (c. 300 BCE-300 CE)
阿弥陀   (あみだ)   —   Amitabha (Buddha)
阿弥陀如来   (あみだにょらい)   —   Amitabha Tathagata
南無阿弥陀仏   (なむあみだぶつ)   —   Namu Amida Butsu
沙弥   (さみ)   —   male Buddhist novice
元の木阿弥   (もとのもくあみ)   —   ending up right back where one started
阿弥陀堂   (あみだどう)   —   temple hall containing an enshrined image of Amitabha
弥次   (やじ)   —   hooting
弥勒   (みろく)   —   Maitreya (Bodhisattva)

Generated with kanjioftheday by Douglas Perkins.

Learning at Tsurumi Ryokuchi (the Expo '90 Commemorative Park): A Hands-On Map and Wikipedia Editing Event [OpenStreetMap Japan]

This event lets participants experience how to record and publish local information as open data, using Tsurumi Ryokuchi (the Expo '90 Commemorative Park) as the subject. In the morning we will survey and photograph on site; in the afternoon we will move to the venue and work on editing OpenStreetMap and Wikipedia. Over the course of one day, you will experience the full flow of finding, recording, and publishing local information. The program is planned to be accessible even to people with no OpenStreetMap or Wikipedia editing experience. Please register for the event via the following site: https://countries-romantic.connpass.com/event/389840/

OSMコミュニティ [OpenStreetMap Japan]

OSMの活動は全世界各地で行われており、様々な言語で情報がやりとりされています。 基本的に使われる言語は英語ですが、OSMではそれぞれの地域に、メーリングリストが用意されており、その地域のなかでのコミュニケーションを容易にしています。 また、各地で開かれるマッピングパーティでは、その地域に住んでいる、あるいは関心を持っているひとが集まり、その地域の地図データを作成することを通じて、地域のことを知り、知識と技術を交換し、地図データを豊かにする活動が行われています。 ここでは、オンラインとオフラインのコミュニティについて紹介します。

Online Communities

A community is like a school club room: a place where, if you go there, someone will be around. If you say something there, someone may answer, and in some cases your words may even carry to other rooms and communities…

Mapping Party in シーパスパーク (Seapas Park) [OpenStreetMap Japan]

For International Open Data Day 2026, we will hold a mapping party at シーパスパーク (Seapas Park) in Izumiotsu City. Participation is free, and beginners are welcome. We look forward to seeing you! (^^)/ Let's build a map of Izumiotsu that anyone can use. This time we will also explain how to edit uMap, among other things. Recommended for anyone interested in OpenStreetMap, uMap, open data, or online maps. Open to elementary school students and older; younger children should be accompanied by an adult. Date and time: Saturday, March 7, 2026, 1:00 PM to 3:00 PM (feel free to arrive late or leave early). Venue: the workshop space inside Seapas Park, Izumiotsu City, Osaka Prefecture (map below). For details, see the connpass site: https://connpass.com/event/385494/

Mappers Summit 2026 [OpenStreetMap Japan]

This is a meetup, gathering, and social event for OpenStreetMap, the free map that everyone builds together. Mappers share their know-how, come up with tagging ideas, and discuss them. Experienced mappers are of course welcome, and beginners should also feel free to join in, enjoy the exchange, and level up their mapping skills. To propose discussion topics or register, please visit: https://osm.connpass.com/event/380259/

Call for Speakers! <State of the Map Japan 2025 in Osaka> [OpenStreetMap Japan]

State of the Map Japan 2025, the domestic OpenStreetMap conference in Japan, will be held in Osaka. The date to mark: Saturday, December 6. Details, including the schedule, are still under consideration and being arranged. This year it will be held jointly with the Wikimedians of Japan User Group and いのち会議 (Inochi Kaigi). And speaker proposals are now open. Anything related to OpenStreetMap is welcome: mapping, business, research, development…

OpenStreetMap Advent Calendar 2025 Now Underway! [OpenStreetMap Japan]

The OpenStreetMap advent calendar is back again for 2025. Any content is welcome: a look back at the year, a self-introduction, whatever you like. Even if you don't have a blog, you can write and register articles on Qiita, OpenStreetMap user diaries, zenn, note, medium.com, and so on. Write away at the following site! https://qiita.com/advent-calendar/2025/osmjp

Let's Walk the "Asake" Area Together and Leave Our Mark on Wikipedia and the World Map! [OpenStreetMap Japan]

We are holding an open datathon to edit town information in Wikipedia and OpenStreetMap and share it with the world. In the Asake area of Yokkaichi City (the part of the former Asake District around Asake Plaza), we will walk the old Tōkaidō, the streetscape of the Happū Kaidō along which Ōmi merchants traveled between Ise Bay and Lake Biwa, and temples and shrines connected to Emperor Shōmu's imperial visits and the wars of the Sengoku period; afterward, in a meeting room at Asake Plaza, we will hold a Wikipedia Town and mapping session. Date and time: Saturday, November 29, 2025, 8:45 AM to 5:15 PM (held even in light rain). Meeting point: Asake Plaza, 2F, Meeting Rooms 4 and 5 (四日市市下之宮町296番地1, Yokkaichi City). Program: (1) Map team (editing OpenStreetMap from smartphones), instructor: 坂ノ下勝幸 (諸国・浪漫); (2) Wikipedia team (editing Wikipedia from PCs and smartphones), instructor: Miy

Let's Create Open Data! with a Wikimedia Editing Meetup in Kitakyushu [OpenStreetMap Japan]

We will hold a town walk alongside a quiet co-working session (mokumoku-kai) for Wikimedia editing. We hope to enrich OpenStreetMap by registering historic sites, local scenery worth preserving, and the like. If you are interested, please consider joining. The venue's front entrance is locked, so please register via the announcement site. A form for reporting your arrival is available via the participants' link on the announcement site; please use it to let us know when you have reached the venue. https://techplay.jp/event/987206

OSMF Japan Corporate Supporting Member: TomTom Joins [OpenStreetMap Japan]

OSMF Japan, which supports OpenStreetMap activities, is pleased to announce that TomTom has joined as a corporate supporting member. TomTom already supports OSM activities worldwide as a Platinum Corporate Member of the OpenStreetMap Foundation (OSMF), and has now agreed to directly support OSM activities in Japan as well. The company is an international enterprise holding road data and vehicle trajectory data covering the entire world, and has contributed to the development of OSM communities around the globe…
