News

Saturday 2026-03-14

11:00 PM

Updated Debian 13: 13.4 released [Debian News]

The Debian project is pleased to announce the fourth update of its stable distribution Debian 13 (codename trixie). This point release mainly adds corrections for security issues, along with a few adjustments for serious problems. Security advisories have already been published separately and are referenced where available.

07:00 PM

Visible measures [Seth Godin's Blog on marketing, tribes and respect]

When an organization is known for speed and quality, it’s likely that if times get tough, quality will suffer before speed does. That’s because customers notice speed right away, but it takes a while to come to a conclusion about quality.

If a musician or politician is known for showmanship and wise insights, the showmanship will probably outlast the wisdom.

When we measure and compare the easily visible, we may be setting ourselves up for disappointment.


01:00 PM

At The WBC: Mark DeRosa Screwed Up & Then MLB Streisanded The Story [Techdirt]

The World Baseball Classic is currently going on and I absolutely adore it. Essentially a World Cup for baseball, 20 nations are playing against one another in a banger of a tune-up for the Major League Baseball season. It’s a flamboyant delight, with cultural celebrations such as the Italian team doing a shot of espresso in the dugout after they hit home runs.

The American team is managed by former major leaguer Mark DeRosa. While I won’t bore you with too many gory details, DeRosa royally fucked up during the tail end of pool play. Through a complicated series of winning scenarios and tie-breaker rules, the American team headed into its game with Italy needing to win to secure its place in the playoffs. DeRosa, it appears, was under an entirely different impression. These were his comments before the game with Italy.

After the game, he mentioned that some of his players were “dragging” on the field and he essentially put in a lineup that didn’t include many of the normal starting players. If you don’t know professional baseball culture, there’s a reason for the dragging. With nothing at stake, it’s pretty clear DeRosa thought the playoffs were already secured… and told his players to go out and celebrate that night. They likely did, late into the night and with the help of plenty of alcohol. Then they lost to Italy, which meant they needed Italy to beat Mexico or else face tie-breaking scenarios heading into their own game with Mexico. They got lucky in that Italy did beat Mexico in the next game, but the fuck up took things out of the hands of Team USA, leaving it up to their rivals.

You may not care about any of the above, but baseball fans do. DeRosa, in his day job, is also an employee of MLB, serving as a commentator on the MLB channel. MLB itself took down the original video of DeRosa’s comments and put up a version in which you don’t hear DeRosa’s mistake nor his admitting later that he screwed up.

Also, this reporting from The Athletic doesn’t actually make things look better for DeRosa and Team USA:

“The league appears to have taken down video that included DeRosa’s mistaken comments from MLB.com, with attempts by The Athletic to access it yielding error messages early Wednesday morning. A version of the interview that remained on MLB Network’s Facebook page appeared to be condensed and did not include the now-scrutinized remarks.”

I really don’t know what MLB was thinking here. American baseball fans would somehow forget what they heard DeRosa say? A screw up that could have bounced the American team from the WBC entirely would somehow fly under the radar?

Regardless, the Streisand Effect took over and the story went into wide circulation. In discussing MLB’s attempt at the hidden ball trick, DeRosa’s fuck up went through another, and larger, round of reporting. By trying to hide what DeRosa did, MLB made it all the more public.

This is classic Streisand Effect stuff at work, and I can barely believe that Major League Baseball didn’t realize this is exactly what would occur.

10:00 AM

Kanji of the Day: 世 [Kanji of the Day]

✍5

小3

generation, world, society, public

セイ セ ソウ

世界   (せかい)   —   the world
世代   (せだい)   —   generation
世紀   (せいき)   —   century
世帯   (しょたい)   —   household
世の中   (よのなか)   —   society
世界選手権   (せかいせんしゅけん)   —   world championship
世界的   (せかいてき)   —   worldwide
世界観   (せかいかん)   —   world view
世界中   (せかいじゅう)   —   around the world
世論調査   (せろんちょうさ)   —   public opinion poll

Generated with kanjioftheday by Douglas Perkins.

Kanji of the Day: 辛 [Kanji of the Day]

✍7

中学

spicy, bitter, hot, acrid

シン

から.い つら.い -づら.い かのと

辛い   (からい)   —   spicy
辛口   (からくち)   —   dry taste (e.g., sake, wine)
香辛料   (こうしんりょう)   —   spice
辛抱   (しんぼう)   —   patience
唐辛子   (とうがらし)   —   capsicum (Capsicum annuum, esp. the cultivated chili peppers)
辛み   (からみ)   —   hot taste
辛勝   (しんしょう)   —   narrow victory
辛くも   (からくも)   —   barely
甘辛   (あまから)   —   sweetness and saltiness
辛うじて   (かろうじて)   —   barely

Generated with kanjioftheday by Douglas Perkins.

09:00 AM

The IRS’s Verification System for Sharing Taxpayer Data With ICE Would Have Accepted ‘Don’t Care 12345’ as a Valid Address [Techdirt]

We’re a couple weeks late to this one, but it deserves more attention than it received. As the Washington Post first reported, a federal judge has found that the IRS violated federal law 42,695 times when it handed over confidential taxpayer addresses to ICE last summer. But the raw number, staggering as it is, undersells how absurd this whole thing was. The details of how it happened are so much worse.

Federal law has a pretty basic safeguard built in: before the IRS can hand over a taxpayer’s home address to another agency, the requesting agency has to provide the name and address of the person they’re looking for — specifically to prevent the government from using tax records as a fishing expedition against people it hasn’t already identified.

Can you guess how the Trump IRS’s actual verification process worked when ICE wanted addresses? I’m betting you absolutely can.

The judge, U.S. District Judge Colleen Kollar-Kotelly, laid it out in devastating detail. When ICE sent over its massive datafile of 1.28 million records, the IRS ran two different matching processes. For requests where ICE included a Social Security number, the IRS used something called “TIN Matching” — which checked that the name and SSN matched IRS records. What TIN Matching did not do was verify that ICE had actually provided a real address. The only address-related check was an automated filter that looked for whether the address field contained something resembling a zip code — meaning, any five-digit or nine-digit number.

That was it. That was the safeguard.

As Judge Kollar-Kotelly pointedly observed:

A zip code is not an address, and a zip code proxy, as the IRS would define it, might as well be a set of random numbers. For instance, ICE could have submitted a request with an “address” like, “Don’t Care 12345,” or, “00000,” and still received a taxpayer’s address through the IRS’s TIN Matching process.
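The filter described in the opinion can be sketched in a few lines. This is a hypothetical reconstruction, not the IRS's actual code (which is not public): the field passes if it contains anything shaped like a five- or nine-digit number, which is exactly why the judge's examples would sail through.

```python
import re

# Hypothetical reconstruction of the filter described in the opinion:
# the address field passes if it contains anything resembling a zip code,
# i.e. any five-digit or nine-digit number. Not the IRS's actual code.
ZIP_LIKE = re.compile(r"\b\d{5}(?:-?\d{4})?\b")

def address_passes_filter(address: str) -> bool:
    """Return True if the field contains something shaped like a zip code."""
    return bool(ZIP_LIKE.search(address))

# The judge's own examples pass:
print(address_passes_filter("Don't Care 12345"))  # True
print(address_passes_filter("00000"))             # True
# The only thing such a filter could ever reject is a field with no
# five-digit run at all:
print(address_passes_filter("no digits here"))    # False
```

In other words, the check never verifies that an address exists, only that some number of the right length appears somewhere in the field.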

And this was the process used for the overwhelming majority of the disclosures. Of the 47,289 taxpayer addresses the IRS shared with ICE, 90.3% — those 42,695 — went through TIN Matching, the process that never actually checked the address. Only 9.7% went through a process that bothered to verify ICE had provided a matching address.

So when the IRS’s own Chief Risk and Control Officer, Dottie Romo, filed a supplemental declaration with the court admitting the agency “may have supplied last known addresses to ICE” in cases where the data was “either incomplete or insufficiently populated,” that was putting it generously. The judge’s opinion catalogs what ICE actually submitted as “addresses” in many of these cases:

In other words, the IRS not only failed to ensure that ICE’s request for confidential taxpayer address information met the statutory requirements, but this failure led the IRS to disclose confidential taxpayer addresses to ICE in situations where ICE’s request for that information was patently deficient. The IRS, for example, disclosed to ICE the last known addresses for taxpayers in situations where ICE supplied an “address of the taxpayer” in its request that contained “language indicating that the address was not complete, such as ‘Failed to Provide,’ ‘Unknown Address,’ or ‘NA NA.’” ….The IRS also disclosed to ICE the last known addresses of taxpayers where the ICE-supplied address was missing essential information, such as “a street name or street number.” … Still more, the IRS disclosed to ICE the last known addresses of taxpayers where the ICE-supplied address “referred to, described, or named specific locations”—examples of which are “jails, detention facilities, or prisons”—and “the corresponding city, state, and zip code” for those locations, but did not include “the street names and street numbers where the buildings or facilities are located.”

“Failed to Provide.” “Unknown Address.” “NA NA.” The system was designed not to catch these deficient requests. The TIN Matching process, as the judge noted, “was not designed to identify the additional types of data insufficiencies.” Of course it wasn’t. Because the process never looked at the address field in any meaningful way to begin with.

Nina Olson, founder of the Center for Taxpayer Rights (which brought the suit), told the Washington Post there was no precedent for anything like this:

“I don’t know of any opinion about the IRS like this. The kinds of mass requests that are coming in are unprecedented.”

And then there’s the timeline of what happened after the government figured out what it had done, which is deeply disturbing as well. The Department of Treasury identified the problems on January 23, 2026. That very same day, it notified DHS. Also on that very same day, the sole ICE official who had access to the illegally disclosed taxpayer data gave two additional ICE officials access to it. The stated reason was “for the purpose of allowing [them] to create an adequate system of safeguards for the data.”

So on the day they found out the data was obtained in violation of federal law, the first move was to give more people access to the illegally obtained data.

And when did the government get around to telling the court and the plaintiffs about these 42,695 violations of federal law? Nearly three weeks later, on February 11. As the judge noted, Defendants “informed DHS right away, but they waited nearly three weeks to inform Plaintiffs and the Court.” The opinion goes on to observe that this, along with the broader pattern, “undercut many representations made by Defendants during this litigation” and reflects, “at the very least, a disconnect between the agency clients and counsel, which leads to some concern regarding the completeness of the administrative record.”

“Some concern.” That’s judicial restraint doing a lot of heavy lifting.

The case is now before the DC Circuit, where the government is appealing Judge Kollar-Kotelly’s earlier order blocking the data-sharing arrangement. In the meantime, DHS has been defending the program as essential to immigration enforcement, with a spokesperson offering the standard line to the Washington Post about how “information sharing across agencies is essential to identify who is in our country, including violent criminals.” Which might be more compelling if the agency’s actual implementation hadn’t involved waving through requests with “NA NA” where the address was supposed to go.

A judge has now formally documented that the IRS broke federal taxpayer confidentiality law tens of thousands of times in a single data dump, using a verification process so hollow that literal gibberish would have passed muster — and when the government discovered this, its first move was to expand access to the illegally obtained data and wait three weeks before telling the court. And yet the government is still fighting to keep the underlying program alive.

08:00 AM

Roblox Rolls Out AI-Powered Real-Time Rephrasing Of Profanity Within Chat [Techdirt]

The power of the latest generation of AI systems is such that previously impractical applications are not just possible, but scalable. For example, moving beyond basic early AI text translation tools, it is now possible to use live translation to communicate in another language in real time. For many people that will be a real boon, especially when they are traveling. But here’s something that is likely to prove more controversial: real-time rephrasing of profanity within chat. It’s a new AI-powered feature from Roblox that is designed to “keep gameplay fluid while maintaining civility within chat”:

Roblox is leveraging AI to automatically rephrase profanity. Rather than displaying only hashmarks, filtered text will be translated into more respectful language that remains closer to the user’s original intent. For example, a message that violates Roblox’s profanity policies, such as “Hurry TF up!” would previously have appeared as “####” within experience chat. That will now be rephrased to “Hurry up!” This new layer is designed to maintain civility by rephrasing the language and replacing “stop signs” with real-time guidance.

Specifically:

When a message violates Roblox’s profanity policy, everyone in the chat is notified that the text has been rephrased to keep things civil. While rephrasing reduces some of the disruption in the chat, Roblox’s multilayered safety system remains in effect for more serious behavior. Rephrasing is available exclusively for in-experience chat between age-checked users in similar age groups and is supported in all languages currently available through Roblox’s automatic translation tools.

Alongside this new AI-based capability, Roblox is also tweaking its text filtering system:

Early results from Roblox’s testing show significant improvements in detecting leet-speak, or letters replaced with numbers or symbols, and more sophisticated attempts to bypass filters.
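Leet-speak detection generally works by normalizing common character substitutions back to letters before running the word filter. A toy sketch of that idea (purely illustrative; Roblox's actual system is AI-based and its details are not public):

```python
# Toy leet-speak normalizer: undo common digit/symbol substitutions so a
# downstream word filter sees the intended text. Illustrative only; not
# Roblox's actual implementation.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase and map common leet substitutions back to letters."""
    return text.lower().translate(LEET_MAP)

print(normalize("h3ll0 w0rld"))  # "hello world"
print(normalize("$7upid"))       # "stupid"
```

A simple static map like this is easy to evade, which is presumably why Roblox reports better results from model-based detection of "more sophisticated attempts to bypass filters."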

Parents may applaud real-time rephrasing as a way for the service to nudge younger users away from bad language in their interactions with others, without stopping them from playing altogether. But it creates a dangerous proof of concept that others may build on, particularly in jurisdictions that want stricter controls on what people say online.

It’s easy to imagine situations where Chinese AI systems, for example, rephrase people’s language on social media in real time to promote “social harmony”. Not only the style but even the content’s details could be subtly changed away from controversy towards conformity. It would be possible for rephrasing to be visible only to others, so the person making a comment might not even be aware that their words were being subverted in this way. Something similar is already happening with Chinese AI chatbots that censor their own answers, without acknowledging that fact. As Chinese AI companies become increasingly important players in the online world, this kind of covert rephrasing by them — and others — is another issue people will need to watch out for in our brave new AI world.

Follow me @glynmoody on Bluesky and on Mastodon.

07:00 AM

Down in D.C. Today [The Status Kuo]

I’m in meetings of the Finance Committee for the Human Rights Campaign throughout the day, so I decided to take a day off from writing so I could focus with a fresh brain on the hard numbers!

Speaking of hard numbers, it’s honestly been a rough couple of weeks. I’ve sadly lost dozens more paid subscribers through attrition than have signed up. I understand that it’s one of the perils of depending on voluntary donations to keep going. But I’d love to keep all of my content here free, especially for those on fixed income or disability. If you’re financially able to support and have been meaning to take out a paid subscription, at the cost of buying me coffee once a month, I’d be so grateful!

Subscribe now

And if you’re already a supporter, thank you, thank you, thank you! You make this newsletter possible. I’ll be back tomorrow with Skeets and Giggles, which I hope to finish by 8am before our second round of meetings begins here in D.C.!

With gratitude,

Jay

06:00 AM

Normalizing app store choice [F-Droid - Free and Open Source Android App Repository]

This Week in F-Droid

TWIF curated on Thursday, 13 Mar 2026, Week 11

F-Droid core

The countdown to September lock-down continues. Did you do your part? Did you contact your local representative yet? What did they answer?

Did you install F-Droid on all your family members’ and close friends’ devices? Why not? Have them try a good FLOSS app today; we don’t have millions of apps, but we host good apps that offer the transparency users deserve.

Let’s show them that one does not need to be an advanced user to get good, privacy-respecting apps. When everyone installs F-Droid, we are normalizing the freedom of users to decide that for themselves.

Want to be more daring? Install or update to the latest F-Droid Basic version 2.0-alpha4. It brings:

  • Tweak category layouts to separate groups of categories (Thanks Peter)
  • Show banner alerting users to how Google developer verification threatens F-Droid (Thanks Peter)
  • Evaluate and translate more strings
  • DNS cache feature 2.0 refactor (Thanks Matthew Bogner)
  • Update color definitions for light and dark themes (Thanks proletarius101)
  • Consistent chip layout throughout (Thanks Peter)
  • Show us in TV launchers (Feedback welcomed!)
  • Fix bugs with keyboard navigation
  • Show Discover screen faster and animate items
  • Keep filter icon visible in app lists
  • Allow filtering app lists by anti-features
  • …and more

And by the time you read this, 2.0-alpha5 might be live too…

Want a good “starter” app in the world of FLOSS apps for Android? Try NewPipe: it was updated to 0.28.4 and, besides the usual fixes and improvements, it added a startup dialog to inform users about Google’s plan to lock down Android soon.

Community News

In stats news, @kitswas announces that all-time stat badges are available thanks to @BURG3R5. Now you can get badges like Downloads (all time) from the badge builder.

And our own @grote is researching integrating download stats in F-Droid Client (hint: Basic 2.0-alpha already includes stats) and they’ve quickly put up a website that shows downloads in total and per category for the last year. Note that the numbers only cover our own servers and not our dozen or so mirrors; since mirrors are chosen at random for each download, you may need to multiply the numbers accordingly.

AndrOBD was updated to V2.7.4 after some months of development. Does the changelog sound nice? Oh, new contributors have arrived…

sing-box was updated to 1.13.2 with a refactored UI and many improvements.

Removed Apps

3 apps were removed
  • NClientV2: An unofficial NHentai Client (See new apps below)
  • Smile SS14: Smile from Space Station 14 as an ANeko skin
  • WhatsDown: Fast encrypted chats for the family (Metadata needs to be updated to make it clear it’s a fork of ArcaneChat)

Newly Added Apps

38 apps were newly added

Downgraded Apps

1 app was downgraded
  • Amarok was downgraded from 0.10.0 to 0.9.3 so the dev can fix a thing or two.

Updated Apps

346 more apps were updated
(expand for the full list)

We skipped last week’s TWIF for lack of news, hence this list only grew larger.

Thank you for reading this week’s TWIF 🙂

Please subscribe to the RSS feed in your favourite RSS application to be notified of new TWIFs when they come out.

You are welcome to join the TWIF forum thread. If you have any news from the community, post it there, maybe it will be featured next week 😉

To help support F-Droid, please check out the donation page and contribute what you can.

04:00 AM

Trump Rolls Out White Carpet For White Migrants [Techdirt]

Roughly a year ago — as Trump was trying to turn anti-genocide protests into deportable antisemitism — his administration made it clear it was only willing to support white people with antisemitic views. The administration threw some anti-Israel filters into the mix for DHS vetting of incoming migrants, blending them with the anti-Trump filters that equated opposing Trump and his open bigotry with hating America.

But the administration gave some antisemites a free pass… if they were white enough.

One of the white Afrikaners brought into the US as refugees by the Trump administration this week has a history of antisemitic social media posts, despite the White House using alleged antisemitism as a rationale for deporting pro-Palestinian protesters.

Charl Kleinhaus posted on X in 2023 that “Jews are untrustworthy and a dangerous group.” In another post last fall, he shared a rightwing, nationalist YouTube video that was later removed, titled: “‘We’ll shoot ILLEGAL Immigrants!’ – Poland’s Illegal Islamic immigrant solution,” with clapping emojis.

A number of Kleinhaus’s posts also promote the conspiracy theory that white people in South Africa are being particularly persecuted.

Trump apparently believes white South Africans are so extremely persecuted (after decades of subjugating Black South Africans), they deserve to avail themselves of everything offered by the Land of the Free. Trump wants incoming whites so badly he’s willing to rewrite the rules to flood the zone with new bigots.

The U.S. aims to process 4,500 refugee applications from white South Africans per month, far above President Donald Trump’s stated refugee program cap, and is installing trailers on embassy property in Pretoria to support the effort, a U.S. contracting document said.

The new target, contained in a previously unreported document from the U.S. State Department dated January 27, signals a push to ramp up admissions from South Africa, while refugee applications from other areas have been severely curtailed.

Notably, this isn’t a free pass for any South African. The document apparently extends this to whites only, which is a hell of a thing to do in this day and age. I have to imagine even the most racist of Afrikaners might balk a bit at boarding a plane where non-whites are being ejected over whatever country the plane flies over on its way to Land of Opportunity.

No one ever said this administration is clever. But it’s not accurate to call it stupid. It’s something else entirely: a collection of shitbirds who are so contemptuous of everyone else in the nation that it will stroke itself off publicly and shrug off complaints with non-sequiturs liberally peppered with phrases like “leftist media” or “activist judges.”

This administration will not only piss down your leg and tell you it’s raining, but swing by later to bitch about people “illegally” benefiting from the uninvited precipitation. It’s shit stacked on shit stacked on shit topped off by wedding cake figurines standing on the necks of fallen lawn jockeys.

If you want this to be your nation, please get the fuck out. Get your Fourth Reich on elsewhere, you absolute mooks. And I dare you to explain how THIS isn’t pure racism, especially when you’ve gone all in on the “get the foreigners out” purge this government has been engaged in since Trump returned to the Oval Office.

Daily Deal: Luminar Mobile for iOS And Android [Techdirt]

Luminar Mobile is your all-in-one creative companion designed for iOS, Android OS, and Chrome OS. Powered by an intuitive, touch-responsive interface, it lets you enhance photos effortlessly—anytime, anywhere. Whether you’re adjusting lighting, perfecting portraits, or adding artistic flair, Luminar Mobile delivers pro-level results in the palm of your hand. It’s on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Ninth Circuit Guts California’s Kids Code Once Again [Techdirt]

It’s been a little while since we last wrote about California’s deeply problematic “Age Appropriate Design Code,” which tried to force internet companies into taking blatantly unconstitutional steps to magically prevent all “harms” to kids. The law has bounced between the district court and the Ninth Circuit multiple times — and yesterday, once again, most of the law was deemed effectively unconstitutional and tossed out. The ruling is procedurally messy in annoying ways, but most of that we can blame on the Supreme Court. We’ll get to that.

The bill, in somewhat troubling fashion, was drafted and heavily pushed for by a British Baroness/Hollywood director who made a documentary about kids and smartphones and got so freaked out by her own documentary that she decided she would single-handedly destroy the open internet for children, First Amendment be damned. Trade association NetChoice challenged the law (as it has challenged many state laws) and has been mostly successful.

As I explained to a court myself, the law was both impossible to comply with and a clear attack on free expression. The court agreed and threw out the law as unconstitutional. It went to the Ninth Circuit, which mostly agreed that the law was unconstitutional. Unfortunately, right before the Ninth Circuit ruled, the Supreme Court’s Moody decision made a mess of things. While that ruling effectively killed unconstitutional bills in Florida and Texas that sought to regulate social media, the ruling went deep into the silly weeds, arguing that challenging an entire law as unconstitutional on its face (a “facial challenge”) required a nearly impossible set of standards to meet, preferring parties challenge the law “as applied” (i.e., once it is actually violating people’s rights directly).

Because of that, part of the law was sent back to the lower court, where it was again deemed unconstitutional and blocked by injunction. And then that ruling was appealed again, leading to this Ninth Circuit ruling, which lifts part of the injunction, sending the case back down to the lower court yet again. But it effectively wipes out large parts of the Age Appropriate Design Code as clearly unconstitutional. Basically, all the parts in the law that actually do things are dead, because they pretty clearly regulate speech in violation of the First Amendment. The case — and what remains of the law — lives on as a procedural zombie, still technically breathing but stripped of its real teeth.

It’s a good ruling, though made slightly annoying by the procedural situation created by the Supreme Court’s Moody ruling.

Digging in: the court struck down most of the scary problematic provisions of the law, rightly noting that they violated the First Amendment on vagueness grounds. First up, there were provisions that tried to limit how websites could handle a child’s personal information, but this was a smokescreen. While it was dressed up to look like a “privacy” bill, the law really sought to impact what content kids could read, saying you couldn’t use data in a way that harmed the “well-being” of a child, and that the use had to be in the “best interests” of the child. There was also a provision regarding whether or not the data was used in a way that was “materially detrimental” to the child. These are all super vague terms that were clearly really meant to be “don’t show kids content that might make them sad.”

The court said this is a problem:

NetChoice persuasively argues that the risk of subjective enforcement is particularly high because, as it contemplates “material detriment” to “a child,” the provision must be assessed as to any single child whose personal information is accessed by a covered online practice.

California argued there was no problem with requiring sites to create systems in the “best interests” of children, but the court rightly notes that you can’t just create general rules that accomplish that:

When evaluating the “best interests of the child” in family law proceedings, California courts recognize that “bright line rules in this area are inappropriate” and that “each case must be evaluated on its own unique facts.” In re Marriage of LaMusga, 88 P.3d 81, 91 (Cal. 2004) (citation modified). The standard operates through a specific child’s circumstances and factual record. See id. The data use restrictions ask something categorically different: covered businesses must determine prospectively whether a given practice is in “the best interests of” not any one child but “children”—a class of users that includes every child anywhere who can access a covered online practice. Cal. Civ. Code § 1798.99.31(b)(1)⁠–⁠(4) (emphasis added). Then covered businesses must tailor their practice accordingly. Applied at that scale, without the individualized, highly specific factual record giving the standard meaning in contexts such as a family law case, “best interests of children” cannot provide “sufficient notice of what is proscribed,”

Then there’s the issue of “dark patterns,” which is one of my least favorite terms that has become popular over the last few years. In practice it’s become a catch-all for ‘anything on any website that makes people do things I don’t like,’ and it’s not remotely well-defined. And that’s a problem when you have to get past the “vagueness” bar to be acceptable under the First Amendment:

As with the data use restrictions, the State’s plain-meaning argument is unconvincing where the range of harms that could plausibly qualify as “materially detrimental” is vast, spanning everything from financial exploitation to sleep loss, distraction, or hurt feelings. The fact that “dark pattern” is a defined term in the CAADCA does not help a covered business distinguish between these harms. And the prohibition’s use of the singular “child,” like in the data use restrictions, suggests that it is actionable based on a single child’s response to an online interface, meaning that a business designing a product accessed by millions of child users could face liability whenever any one of them experiences a harm that a regulator deems “material.”

So that’s gone too.

The court also highlights how the state does a lot of fear-mongering on edge cases that would clearly and somewhat obviously lead to mass censorship to avoid potential liability:

The State cites examples like “using a child’s information to connect them to a person that seeks to abuse the child, such as through sexploitation,” or “[u]sing a child’s information to recommend illegal products such as tobacco, alcohol, or gambling[.]” But these are extreme examples at the margins of what might be materially detrimental to a child’s well-being. The more difficult questions arise with examples like sleep loss, distraction, or hurt feelings. As the district court reasoned, and NetChoice argues on appeal, the CAADCA does not provide any guidance as to the breadth of conduct that “material[] detriment[] to the physical health, mental health, or wellbeing of a child” may reach.

This is what happens when headline-chasing regulators write laws based on moral panics and feel-good concepts like ‘well-being of children,’ assuming that either websites will nerd harder and somehow make it work, or the courts will sort it out on the back end.

But that’s not how the First Amendment works. There’s a reason why there’s a vagueness doctrine that is used to throw out laws that try to tapdance around it this way.

That said, not all of the ruling goes NetChoice’s way. Indeed, early on, the ruling gives a bit of a benchslap to NetChoice for continuing to challenge this law “facially” without meeting the near-impossible standard set up by the Supreme Court in Moody:

NetChoice has been a party to many such cases—several before our court and the Supreme Court—and is presumably aware of the expectations for a facial challenge. At the risk of repetition, we offer similar guidance to NetChoice today.

The Moody ruling basically said that if you’re doing a facial challenge, you need to detail every possible application of the law and then show that a “substantial majority” of them are unconstitutional. That’s effectively impossible here, especially since the law is written so broadly as to encompass things that go beyond just speech. Because the law also applies to commerce and other non-expressive conduct, the facial challenge portions fail:

First, the State persuasively argues that whether “it is reasonable to expect” that a business’s “online service, product, or feature would be accessed by children,” … “says nothing about the nature of the business providing that service, product, or feature.” Indeed, as the State proffers, children “are capable of using ride sharing service[s] like Lyft or Waymo, electronic ticketing services such as StubHub or Ticketmaster, financial transaction services such as Paypal or Venmo, fitness products such as NFL Play 60 or Peloton, health-related services such as iHealth, or education-related products such as Wolfram Mathematica.” The CAADCA’s substantive requirements would “appl[y] evenhandedly” to any of these businesses if they are likely to be accessed by children, regardless of the content available through their online service.

This seems silly, but it’s what the Supreme Court now requires. Send your complaints to them, not the Ninth Circuit. The court effectively admits that the Supreme Court set an impossible standard in Moody:

To be sure, as we observed in NetChoice SB 976, “[d]oing so would entail the ‘daunting, if not impossible’ task of canvassing how the Act applies to an ‘ever-growing number of apps, services, functionalities, and methods for communication and connection.’” Id. at 1021 (first quoting Moody, 603 U.S. at 745 (Barrett, J., concurring); and then quoting Moody, 603 U.S. at 725 (majority opinion)). We recognized that “such a showing” might be “unrealistic.” Id. But we nevertheless stated then, and maintain now, that “[i]t is a mystery how NetChoice could expect to prevail on a facial challenge without candidly disclosing the platforms that it thinks the challenged laws reach” and whether the coverage definition unduly burdens those platforms’ expression.

There is also a separate question of how a facial challenge to a law like this could even be mounted within the sort of page limits courts require.

What this means, in practice, is that a state hoping to survive a facial challenge should just make its laws as crazily broad as possible, so that it’s impossible to catalog all the many, many ways they might be enforced. That seems really bad. But, thanks to this Supreme Court, it’s what we’ve got.

The court does send the “age estimation” part back to the lower court, mostly because it says the record isn’t well enough developed (meaning we get to go through all of this yet again). There is some troubling language regarding last year’s ruling in FSC v. Paxton on age verification. As you’ll recall, the very prudish conservative wing of the Supreme Court effectively overturned a couple of decades’ worth of precedent to say “age verification online is fine for porn because porn is not protected by the First Amendment when kids see it.”

Many people insisted that this ruling was okay because it was limited to adult content, but so far we’ve seen state after state — and a few courts — suggest that it’s now “open season” for age verification laws. The language here is worrisome in that it suggests, at the least, that the Ninth Circuit is open to a broad reading of the Supreme Court’s ruling:

NetChoice’s reading of Free Speech Coalition v. Paxton, 606 U.S. 461 (2025), also does not persuade. Free Speech Coalition considered a statute that required covered entities to make adult website visitors submit to an age verification system using either “government-issued identification” or “a commercially reasonable method that relies on public or private transactional data.” Id. at 467 (quoting Tex. Civ. Prac. & Rem. Code § 129B.003(b)(2)). The Supreme Court observed only that, with respect to that system, there is an “incidental burden that age verification necessarily has on an adult’s First Amendment right to access speech that is obscene only to minors.” Id. at 495. The Court said nothing about the effect of age estimation on First Amendment burdens generally, especially where age estimation is not required as a precondition to access content. To the contrary, the Court observed that “adults have no First Amendment right to avoid age verification, and the [challenged law] can readily be understood as an effort to restrict minors’ access.”

To the extent NetChoice argues that the age estimation requirement “require[s] consideration of content or proxies for content,” see NetChoice I, 113 F.4th at 1118, the age estimation requirement may impliedly regulate speech—but we cannot confidently draw that conclusion on this record, either.

More and more for the courts to argue about, I guess.

There’s also another bit of the lawsuit that has been revived, regarding “severability”: whether some parts of the law can be kept even as the bigger parts are struck down as unconstitutional. It’s another issue for the parties to argue about in more detail at the lower court, but not really the main point of all of this. The specifics here are that the law has a “notice-and-cure” provision for when the Attorney General finds a website to be violating the law. So there’s a question of whether that specific provision can be left alive, though I’m unsure what good it does if the rest of the law is found to be unconstitutional. But, as the appeals court notes, this is basically all on an underdeveloped record, so they’re sending it back to the lower court for more.

Either way, the key elements of California’s AADC have now been struck down as unconstitutional at the Ninth Circuit — for the second time — after two prior rejections at the district court level. The data use provisions and the dark patterns nonsense are gone on vagueness grounds. In some ways, that’s actually a stronger outcome than if the initial facial challenge had succeeded: there’s now clear appellate language explaining why this kind of vague “well-being” language can’t survive First Amendment scrutiny. California could theoretically go back and try to define things more narrowly, but chances are they’d find themselves right back at the First Amendment wall, because their ultimate goal has always been censorship dressed up as child safety.

The annoying part is the procedural mess the Supreme Court’s Moody decision created. We’re heading into round three at the district court, burning more time and resources on a law that should have been dead on arrival. This is exactly what we warned would happen when Governor Gavin Newsom and Attorney General Rob Bonta first backed this law. They got the political headlines. Everyone else got years of litigation. And the law they championed is now a procedural zombie — technically still breathing, but stripped of everything that made it dangerous in the first place.

Friday 2026-03-13

11:00 PM

Trump DOJ Wimps Out On Ticketmaster, Again Revealing Hollowness Of MAGA ‘Antitrust’ [Techdirt]

Last election season, you might recall how the Trump campaign lied to everyone repeatedly about how his second administration would “rein in big tech” and be a natural extension of the Lina Khan antitrust movement. As we noted at the time, that was always an obvious fake populist lie, but it was propped up anyway by a lazy press and a long line of useful idiots (including some purported “antitrust experts”).

This last year has truly revealed the con: Trump has not only demolished regulatory independence, media consolidation rules, and consumer protection standards, he’s rubber-stamped every shitty merger his administration has come into contact with (provided the companies promise to be more racist), and fired the few Republicans in his administration who even vaguely cared about antitrust.

Trump’s latest betrayal of the MAGA antitrust movement (which never really existed outside the skulls of rubes) is his DOJ’s surprise blindsiding of the states by striking a pathetic settlement with Ticketmaster that doesn’t really fix the actual problem: monopoly.

The Biden DOJ and most US states sued Live Nation and its Ticketmaster subsidiary back in 2024, alleging that Live Nation has a monopoly on “the delivery of nearly all live music in America today.”

But while the new Trump settlement with the company requires $280 million in civil penalties and a 15% cap on service fees at Live Nation amphitheaters, it backs off any attempt to pursue a breakup of Live Nation and Ticketmaster, the one move that would actually (and more permanently) help protect consumers, artists, and the live music market from predatory behavior.

The Trump DOJ and pedophile protector Pam Bondi struck the deal behind closed doors and didn’t bother to tell any of the 27 states (including many Republican ones) currently fighting Ticketmaster in court. It’s another win for Bondi loyalists (whose function is to blindly serve our mad idiot king) and the final middle finger to Gail Slater and Mark Hamer types that at least sometimes cared about antitrust.

States are, you may be unsurprised to learn, pissed off and planning to continue the fight alone, though they say the Trump DOJ has caused potentially irreparable harm:

“The case went to trial, and testimony began last week in US District Court for the Southern District of New York. But the US and Live Nation informed the court of a proposed settlement on March 8, taking state attorneys general by surprise. The judge presiding over the case reportedly said in court today that the way the settlement was announced “is absolutely unacceptable.”

“States reserving the right to continue litigation filed a motion for mistrial, saying they need time to prepare for a new trial and evaluate the terms of the settlement between the US and Live Nation. The “sudden disappearance” of the US from the case will likely give the jury the incorrect impression that Live Nation’s “antitrust violations have been cured or resolved, or that Proceeding Plaintiff States’ claims lack merit,” the states said.”

This was always going to be the outcome. There were constant signs. Trump is an autocrat, fascist, and opportunist who believes in nothing beyond his own pursuit of power and wealth. The corruption and autocracy were always going to dominate any serious Republican interest in antitrust (which, let’s be honest, even among the Gail Slater types was historically inconsistent at best).

The MAGA base’s belief in this line of bullshit was one thing, but Trump’s antitrust bona fides were also propped up by folks like purported progressive antitrust expert Matt Stoller, who praised guys like JD Vance and Josh Hawley as serious anti-corporatists, when the entire thing was always a con designed to give phony populist credibility to autocrats who never had to actually earn it.

The U.S. press also played a giant role here. They spent years propping up Trump’s false claims that he “wanted to rein in big tech,” when what the authoritarians really wanted was to abuse government power to scare tech companies (quite successfully, as it turned out) away from doing even the most basic content moderation of right-wing race-baiting propaganda online.

Now, unsurprisingly, here we are, staring down the barrel of democracy demolishing authoritarianism, with unchecked corporate power in full alignment with the effort.

08:00 PM

Piracy Giant HiAnime.to Announces Mysterious ‘Goodbye’ [TorrentFreak]

The anime industry has experienced a surge in popularity, but this growth is not limited to legal streaming platforms.

A significant portion of the demand for anime arrives from unofficial channels, with several major pirate websites dedicated solely to anime content.

This includes HiAnime.to, which, with an estimated 150 million+ monthly visits, is one of the most-trafficked websites on the Internet. However, a message now displayed across the site’s main domains suggests that may be about to change.

“It’s time to say goodbye. And thank you for a wonderful journey with great moments,” the message reads, also shown on other official domains, such as HiAnime.me.

HiAnime.to says Goodbye

The site first appeared under the HiAnime name in March 2024, as a rebranding of the Aniwatch website, which was known as Zoro.to before that. Since then, its popularity has continued to grow. Until now.

Fear, Uncertainty, and Doubt

While the goodbye message seems crystal clear, the site’s official Discord server and Reddit community don’t appear convinced. Though it is unclear whether the operators are moderating these communities, the mods and admins there caution people not to jump to conclusions.

“We are currently aware of the situation and are actively reviewing the matter. We are monitoring the situation and attempting to obtain further clarification as of the moment,” a status message in the Discord channel reads.

Discord message

At the same time, a Reddit thread urges people not to panic and to stop sharing unverified information.

Reddit thread

Legal Pressure

At TorrentFreak, we can confirm that the “goodbye” message posted on the official HiAnime domains reads like a shutdown notice. Time will tell whether the site will indeed remain offline; it’s also possible that it will simply rebrand yet again.

HiAnime has had its fair share of legal pressure over the past two years. The MPA’s Alliance for Creativity and Entertainment has targeted the site on multiple occasions, for example.

Earlier this month, the pressure further increased as the U.S. Trade Representative added HiAnime to its annual list of notorious piracy markets.

USTR lists HiAnime.to

There is no evidence to suggest that the legal pressure has anything to do with the goodbye message on the site, but it would be a fitting explanation. If any new information comes in, we will update this article accordingly.

From: TF, for the latest news on copyright battles, piracy and more.

07:00 PM

“It’s faster to just do it myself” [Seth Godin's Blog on marketing, tribes and respect]

Here’s a simple rubric for outsourcing:

If you’re never going to need to do this again, and it’s easier to do it than to instruct someone else to do it, by all means, do it yourself.

If doing it yourself will give you joy or satisfaction that is greater than the productivity boost you’ll get from leverage or better tools, please do it yourself.

But if you’re going to do it more than once, and the customer can’t tell if you did it yourself or not, perhaps you should have someone else do it or build the tools to get it done more efficiently.

Next time will happen sooner than you expect. Better to invest a bit more now than to spend for that shortcut again and again.

Pluralistic: Three more AI psychoses (12 Mar 2026) [Pluralistic: Daily links from Cory Doctorow]

Today's links



Three more AI psychoses (permalink)

"AI psychosis" is one of those terms that is incredibly useful and also almost certainly going to be deprecated in smart circles in short order because it is: a) useful; b) easily colloquialized to describe related phenomena; and c) adjacent to medical issues, and there's a group of people who feel very strongly any metaphor that implicates human health is intrinsically stigmatizing and must be replaced with an awkward, lengthy phrase that no one can remember and only insiders understand.

So while we still can, let us revel in this useful term to talk about some very real pathologies in our world.

Formally, "AI psychosis" describes people who have delusions that are possibly induced, and definitely reinforced and magnified, by a chatbot. AI psychosis is clearly alarming for people whose loved ones fall prey to it, and it has been the subject of much press and popular attention, especially in the extreme cases where it has resulted in injury or death.

It's possible for AI psychosis to be both a new and alarming phenomenon and also to be on a continuum with existing phenomena. Paranoid delusions aren't new, of course. Take "Morgellons Disease," a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case report of a patient who suffered from a similar delusion:

https://en.wikipedia.org/wiki/A_Letter_to_a_Friend

Morgellons is both a 400-year-old phenomenon and an internet pathology. How can that be? Because the internet makes it easier for people with sparsely distributed traits to locate one another, which is why the internet era is characterized by the coherence of people with formerly fringe characteristics into organized blocs, for better (gender minorities, #MeToo) and worse (Nazis).

Morgellons is rare, but if you suffer from it, it's easy for you to locate virtually every other person in the world with the same delusion and for all of you to reinforce and egg on your delusional beliefs.

Morgellons isn't the only delusion that the internet reinforces, of course. "Gang stalking delusion" is a belief in a shadowy gang of sadistic tormentors who sneak hidden messages into song lyrics and public signage and innuendo in overheard snatches of other people's conversations. It is an incredibly damaging delusion that ruins people's lives.

Gang stalking delusion isn't new, either – as with Morgellons, there are historical accounts of it going back centuries. But the internet supercharged gang stalking delusion by making it easy for GSD sufferers to find one another and reinforce one another's beliefs, helping each other spin elaborate explanations for why the relatives, therapists, and friends who try to help them are actually in on the conspiracy. The result is that GSD sufferers end up ever more isolated from people who are trying mightily to save them, and more connected to people who drive them to self-harm.

Enter chatbots. Ready access to eager-to-please LLMs at every hour of the day or night means that you don't even have to find a forum full of people with the same delusion as you, nor do you have to wait for a reply to your anguished message. The LLM is always there, ready to fire back a "yes-and" improv-style response that drives you deeper and deeper into delusion:

https://pluralistic.net/2025/09/17/automating-gang-stalking-delusion/

It's possible that there are delusions that are even more rare than GSD or Morgellons that AI is surfacing. Imagine if you were prone to fleeting delusional beliefs (and whomst amongst us hasn't experienced the bedrock certainty that we put something down right here, only to find it somewhere else and not have any idea how that happened?). Under normal circumstances, these cognitive misfires might be fleeting moments of discomfort, quickly forgotten. But if you are already habituated to asking a chatbot to explain things you don't understand, it might well yes-and you into an internally consistent, entirely wrong belief – that is, a delusion.

Think of how often you noticed "42" after reading Hitchhiker's Guide to the Galaxy, or how many times "6-7" crops up once you've experienced a baseline of exposure to adolescents. Now imagine that an obsequious tale-spinner was sitting at your elbow, helpfully noting these coincidences and fitting them into a folie-a-deux mystery play that projected a grand, paranoid narrative onto the world. Every bit of confirming evidence is lovingly cataloged, all disconfirming evidence is discounted or ignored. It's fully automated luxury QAnon – a self-baking conspiracy that harnesses an AI in service to driving you deeper and deeper into madness:

That's the original "AI psychosis" that the term was coined to describe. As Sam Cole notes in her excellent "How to Talk to Someone Experiencing 'AI Psychosis,'" mental health practitioners are not entirely comfortable with the "psychosis" label:

https://www.404media.co/ai-psychosis-help-gemini-chatgpt-claude-chatbot-delusions/

"Psychosis" here is best understood as an analogy, not a diagnosis, and, as already noted, there is a large cohort of very persistent people who make it their business to eradicate analogies that make reference to medical or health-related phenomena. But these analogies are very hard to kill, because they do useful work in connecting unfamiliar, novel phenomena with things we already understand.

It's true that these analogies can be stigmatizing, but they needn't be. As someone with an autoimmune disorder, I am not bothered by people who describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life. I am capable of understanding "autoimmune disorder" as referring to both a literal, medical phenomenon; and a figurative, political one. I have never found myself confusing one for the other.

"AI psychosis" is one of those very useful analogies, and you can tell, because "AI psychosis" has found even more metaphorical uses, describing other bad beliefs about AI. Today, I want to talk about three of these AI psychoses, and how they relate to one another: the investor AI delusion, the boss AI delusion, and the critic AI delusion.

Let's start with the investors' delusion. AI started as an investment project from the usual suspects: venture capitalists, private wealth funds, and tech monopolists with large cash reserves and ready access to loans during the cheap credit bubble. These entities are accustomed to making large, long-shot bets, and they were extremely motivated to find new markets to grow into and take over.

Growing companies need to keep growing, but not because they have "the ideology of a tumor." Growing companies' imperative to keep growing isn't ideological at all – it's material. Growth companies' stock trades at a high price-to-earnings (PE) multiple, which means that they can use their stock like money when buying other companies and hiring key employees.

But once those companies' growth slows down, investors revalue their shares at a much lower PE multiple, which makes individual executives at the company (who are primarily paid in stock) personally much poorer, prompting their departure, while simultaneously kneecapping the company's ability to grow through acquisition and hiring, because a company with a falling share price has to buy things with cash, not stock. Companies can make more of their own stock on demand, simply by typing zeroes into a spreadsheet – but they can only get cash by convincing a customer, creditor or investor to part with some of their own:

https://pluralistic.net/2025/03/06/privacy-last/#exceptionally-american
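
The stock-as-money point turns on simple arithmetic. Here's a minimal sketch, with hypothetical numbers (none of these figures are from the post), of how a growth-to-value repricing shrinks a company's buying power:

```python
# Hypothetical numbers for illustration only: the same earnings stream
# is worth wildly different amounts depending on the price-to-earnings
# (PE) multiple investors assign to it.

def market_cap(annual_earnings: float, pe_multiple: float) -> float:
    """Market capitalization = annual earnings x PE multiple."""
    return annual_earnings * pe_multiple

earnings = 10e9  # $10B/year in earnings, held constant throughout

as_growth_stock = market_cap(earnings, 40)  # priced for endless growth
after_repricing = market_cap(earnings, 10)  # repriced once growth stalls

print(f"Growth multiple: ${as_growth_stock / 1e9:.0f}B")  # prints "Growth multiple: $400B"
print(f"Value multiple:  ${after_repricing / 1e9:.0f}B")  # prints "Value multiple:  $100B"
```

Nothing about the underlying business changed in this toy model except the multiple, which is why stock-compensated executives fight so hard to keep the growth story alive.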

Tech companies have absurdly large market shares – think of Google's 90% search dominance – and so they've spent 15+ years coming up with increasingly absurd gambits to convince investors that they will continue to grow by capturing other markets. At first, these companies claimed that they were on the verge of eating one another's lunches (Google would destroy Facebook with G+; Facebook would do the same to Youtube with the "pivot to video").

This has a real advantage in that one need not speculate about the potential value of Facebook's market – you only have to look at Facebook's quarterly reports. But the downside is that Facebook has its own ideas about whether Google is going to absorb its market, and they are prone to forcefully make the case that this won't happen.

After a few tumultuous years, tech giants switched to promoting growth via speculative new markets – metaverse, web3, crypto, blockchain, etc. Speculative new markets are speculative, and the weakness of that is that no one can say how big those markets might be. But that's also the strength of those markets, because if no one can say how big those markets might be, then who's to say that they won't be very big indeed?

There's a different advantage to confining your concerns to imaginary things: imaginary things don't exist, so they don't contest your public statements about them, nor do they make demands on you. Think of how the right concerns itself with imaginary children (unborn babies, children in Wayfair furniture; children in nonexistent pizza parlor basements, children undergoing gender confirmation surgery). These are very convenient children to advocate for, since, unlike real children (hungry children, children killed in the Gaza genocide, children whose parents have been kidnapped by ICE, children whom Matt Gaetz and Donald Trump trafficked for sex, children in cages at the US border, trans kids driven to self-harm and suicide after being denied care), nonexistent children don't want anything from you and they never make public pronouncements about whether you have their best interests at heart.

But as the AI project has required larger and larger sums to keep the wheels spinning, the usual suspects have started to run out of money, and now AI hustlers are increasingly looking to tap public markets for capital. They want you to invest your pension savings in their growth narrative machine, and they're relying on the fact that you don't understand the technology to trick you into handing over your money.

There's a name for this: it's called the "Byzantine premium" – that's the premium that an investment opportunity attracts by being so complicated and weird that investors don't understand it, making them easy to trick:

https://pluralistic.net/2022/03/13/the-byzantine-premium/

AI is a terrible economic phenomenon. It has lost more money than any other project in human history – $600-700b and counting, with trillions more demanded by the likes of OpenAI's Sam Altman. AI's core assets – data centers and GPUs – last 2-3 years, though AI bosses insist on depreciating them over five years, which is unequivocal accounting fraud, a way to obscure the losses the companies are incurring. But it doesn't actually matter whether the assets need to be replaced every two years, every three years, or every five years, because all the AI companies combined are claiming no more than $60b/year in revenue (that number is grossly inflated). You can't reach the $700b break-even point at $60b/year in two years, three years, or five years.
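
The break-even arithmetic here is worth making explicit. A back-of-envelope sketch using the post's own figures (roughly $700B sunk against at most $60B/year in claimed revenue — and that's revenue, not profit, so this understates the problem):

```python
sunk_costs = 700e9      # cumulative industry losses cited above
annual_revenue = 60e9   # claimed industry-wide annual revenue ("grossly inflated")

# Even if every revenue dollar were pure profit, recouping the sunk
# costs would take roughly a dozen years:
years_to_recoup = sunk_costs / annual_revenue
print(f"{years_to_recoup:.1f} years to break even")  # prints "11.7 years to break even"

# But the core assets wear out in 2-3 years (5, on the bosses' own
# generous depreciation schedule), long before they could pay off:
for lifespan_years in (2, 3, 5):
    recouped = annual_revenue * lifespan_years
    print(f"{lifespan_years}-year asset life: ${recouped / 1e9:.0f}B of ${sunk_costs / 1e9:.0f}B recouped")
```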

Now, some exceptionally valuable technologies have attained profitability after an extraordinarily long period in which they lost money, like the web itself. But these turnaround stories all share a common trait: they had good "unit economics." Every new web user reduced the amount of money the web industry was losing. Every time a user logged onto the web, they made the industry more profitable. Every generation of web technology was more profitable than the last.

Contrast this with AI: every user – paid or unpaid – that an AI company signs up costs them money. Every time that user logs into a chatbot or enters a prompt, the company loses more money. The more a user uses an AI product, the more money that product loses. And each generation of AI tech loses more money than the generation that preceded it.

To make AI look like a good investment, AI bosses and their pitchmen have to come up with a story that somehow addresses this phenomenon. Part of that story relies on the Byzantine premium: "Sure, you don't understand AI, but why would all these smart people commit hundreds of billions of dollars to AI if they weren't confident that they would make a lot of money from it?" In other words, "A pile of shit this big must have a pony underneath it somewhere!"

This is a great narrative trick, because it turns losing money into a virtue. If you've convinced a mark that the upside of the project is a multiple of the capital committed to it, then the more money you're losing, the better the investment seems.

So this is the first AI psychosis: the idea that we should bet the world's economy on these highly combustible GPUs and data centers with terrible unit economics and no path to break-even, much less profitability.

Investors' AI psychosis is cross-fertilized by our second form of AI psychosis, which is the bosses' AI psychosis: bosses' bottomless passion for firing workers and replacing them with automation.

Bosses are easy marks for anything that lets them fire workers. After all, the ideal firm is one that charges infinity for its outputs (hence the market's passion for monopolies) and pays nothing for its inputs (e.g. "academic publishing").

This means that the fact that a chatbot can't do your job isn't nearly as important as the fact that an AI salesman can convince your boss to fire you and replace you with a chatbot that can't do your job. Bosses keep replacing humans with defective chatbots, with catastrophic consequences, like Amazon's cloud service crashing:

https://www.techradar.com/pro/recent-aws-outages-blamed-on-ai-tools-at-least-two-incidents-took-down-amazon-services

Bosses are haunted by the ego-shattering knowledge that they aren't in the driver's seat: if the boss doesn't show up for work, everything continues to operate just fine. If the workers all stay home, the business grinds to a halt. In their secret hearts, bosses know that they're not in the driver's seat – they're in the back seat, playing with a Fisher Price steering wheel. AI dangles the possibility of wiring that toy steering wheel directly into the drive-train, so that the company's products go directly from the boss's imagination to the public without the boss having to ask people who know how to do things to execute their cockamamie schemes:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

This is a powerfully erotic proposition for bosses, the realization of the libidinal fantasy in which sky-high CEO salaries can be justified by the fact that everything that happens in the company is truly, directly attributable to the boss. Like the delusional person who can be led deeper and deeper into a fantasy world by a chatbot, a boss's delusion that they are worth thousands of times more than their workers makes them easy prey for a chatbot salesman that pushes them deeper and deeper into that delusion, until they bet the whole company on it.

Now we come to the third and final novel AI psychosis, the critics' psychosis, that AI is an abnormally terrible technology. This is a species of "criti-hype," which is when critics repeat the hyped-up claims of the companies they're targeting, but as criticism (think of all the people who believed and uncritically amplified the ad-tech industry's self-serving claims of being able to control our minds by "hacking our dopamine loops"):

https://peoples-things.ghost.io/youre-doing-it-wrong-notes-on-criticism-and-technology-hype/

AI is a normal technology. The people who made it, and the circumstances under which it was made, are normal. Its uses and abuses are normal. That doesn't make it good, but it does make it unexceptional:

https://www.normaltech.ai/p/a-guide-to-understanding-ai-as-normal

The exceptional part of AI isn't the technology, it's the bubble. There's nothing about AI per se that makes it exceptionally prone to devouring our natural resources, or endangering our jobs, or abetting war crimes. That's all because of the bubble, and the bubble relies on the idea that AI is exceptional, not normal. Repeating and amplifying claims about AI's exceptionalism helps the AI companies, because they rely on exceptionalism to keep the capital flowing and the bubble inflating.

AI is a normal technology. It's normal for a technology to be invented by unlikable and immoral people and institutions. Not every technology is invented by a shitty person, but shitty people and institutions are well represented (and possibly disproportionately represented) in the history of technology. Charles Babbage invented the idea of general purpose computers as a way of improving labor control on slave plantations:

https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

Ada Lovelace wasn't interested in making slavery more efficient, but neither was she driven by pure scientific inquiry. She invented programming to help her bet on the horses (it didn't work):

https://en.wikipedia.org/wiki/Ada_Lovelace

The silicon transistor was co-invented by William Shockley, one of history's great pieces of shit, a eugenicist who was so committed to exterminating all non-white people that he never managed to ship a commercial product:

https://pluralistic.net/2021/10/24/the-traitorous-eight-and-the-battle-of-germanium-valley/

IBM built the tabulators for Auschwitz. HP were the Pentagon's go-to contractors for any tech project that was so dirty no one else would touch it. We only got Unix because Bell Labs committed so many antitrust violations that they weren't allowed to productize it themselves.

It's not exceptional for AI companies to have terrible, piece-of-shit founders. It's not exceptional for these companies to participate in war crimes. It's not exceptional for these founders to want to pauperize workers. It's not exceptional for these companies to lie about their products, bankrupt naive investors through stock swindles, and pitch themselves to investors as a way for capital to win the class war.

None of this means that AI companies are good, it just means that they are not exceptional. And because they aren't exceptional, the same dynamics that govern other technologies apply to AI companies' products. Their utility is a function of what they do, not who made them or how they were sold. The utility of AI products is based on whether people find ways to use them that make them happy – not whether the people who made those technologies are good people, or whether the funding for the technology was fraudulent, or whether other people use the technology to harm others.

Automation comes in two flavors: there's automation that produces things more quickly (and hence more cheaply), and there's automation that makes better things. Generally, capital prefers to use automation to increase the pace at which things are made, while workers prefer to use automation to improve the quality of the things they make.

Think of a hobbyist who pines for an automated soldering machine. That hobbyist longs to make board-level repairs and modifications that require precision that humans struggle to match. The hobbyist is a centaur, using a machine to help achieve human goals.

Now think of a factory owner who invests in an assembly line of the same machines: that boss wants to fire a bunch of workers and make the survivors of the purge take up the slack. The boss wants to achieve corporate goals, to "sweat the assets," making maximum use of the soldering machines. The pace at which the line runs is set to be the maximum that the workers can match. The workers on the line are "reverse centaurs" – humans who are pressed into service as peripherals for machines, at a pace that is constantly at the very limit of their endurance.

Reverse centaurs are trapped in capital's automation plan – to make everything faster and cheaper. But that's the result of bosses. It's not the result of technology.

This is not to say that technology is apolitical. Only a fool would imagine that there are no politics embedded in technology. But you'd be a far greater fool if you asserted that the politics of a technology were simple, clear, and immutable.

Nor is this to say that when workers get to decide when and how to use technology, we will always make wise decisions. Perhaps the hobbyist who opts for an automated soldering machine will lose out on the opportunity to refine their hand-eye coordination in ways that will have many other benefits to their practice.

Or perhaps attempting to improve their hand-eye coordination to that point will wreck so many projects that they grow discouraged and give up altogether. Others' choices that seem unwise to you might have perfectly good explanations that aren't visible from your perspective. Ultimately, the world is a better place where workers get to decide which parts of their jobs they want to automate and which parts they want to lean into.

This is an extremely normal technological situation: for a new technology to be promoted and productized by shitty people who have grandiose goals that would be apocalyptic should they ever come to pass – and for some people to find uses of that technology that are nevertheless beneficial to them and their communities.

The belief that AI is an exceptionally bad technology (as opposed to an exceptionally bad economic bubble) drives AI critics into their own absurd culs-de-sac.

There are many, many skilled and reliable practitioners of technical and creative trades who've found extremely reasonable, normal ways in which AI has automated some part of their job. They aren't hyperventilating about how AI has changed everything forever and the world is about to end. They're not mistaking AI for god, or a therapist.

They're just treating AI like a normal technology, like a plugin. Programmers' tools have acquired useful automation plugins at regular intervals for decades – syntax checkers, advanced debuggers, automated wireframe utilities. For many programmers – including several of my acquaintance, whom I know to be both thoughtful and skilled – AI is another plugin, one they find useful enough to be modestly enthusiastic about.

It is nuts to deny the experiences these people are having. They're not vibe-coding mission-critical AWS modules. They're not generating tech debt at scale:

https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

They're just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won't always choose wisely, but that's normal too. There's plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

It's only the belief that AI is exceptional – exceptionally wicked, but exceptional nevertheless – that leads critics to decide that they are a better judge of whether a skilled worker should or should not use certain automation tools, and to make that judgment not based on the quality of the work in question, but on the moral character of the tool itself.

AI is just normal. The bubble is what drives the environmental costs. If the only LLMs were a couple of big data centers at Sandia National Labs, no one would be particularly exercised about the water and energy demands they represented. Big scientific endeavors – from NASA launches to the Large Hadron Collider – often come with immense material and energy needs. The bubble causes massive, wasteful, duplicative efforts that chase diminishing returns through farcical scale.

Nor are AI bros exceptional. The stock swindlers who've blown $700b (and counting) on AI aren't cyber-Svengalis with the power to cloud investors' minds. They're just running the same con that tech has been running ever since its returns started to taper off and survival became a matter of ginning up enthusiasm for speculative new ventures.

That doesn't mean those people aren't awful shits. Fuck those people. It just means that they're normal awful shits. We don't have to burnish their reputations by elevating them to the status of archdemons who taint everything they touch with unwashable sin. Sam Altman isn't Lex Luthor. He's just a conman:

https://open.substack.com/pub/garymarcus/p/breaking-sam-altmans-greed-and-dishonesty?r=8tdk6&utm_medium=ios

The fact that these bros are just normal assholes means that we don't have to treat everything they do as a sin. Scraping the entirety of human knowledge to make something new out of it isn't "stealing." Depending on why you're doing it, it can be archiving, or making a search engine:

https://pluralistic.net/2023/09/17/how-to-think-about-scraping/

Too many AI critics have started from the undeniable fact that these guys are odious creeps who boast about wanting to ruin the lives of workers and then worked backwards to find the sin. The sin isn't performing mathematical analysis on all the books ever written. That's actually kind of awesome. It's the kind of thing Aaron Swartz used to do – like when he ingested every law review article ever published and used it to trace the way that oil companies' donations to law schools resulted in profs writing articles about why Big Oil can't be held liable for trashing the planet:

https://web.archive.org/web/20111129181943/https://www.stanfordlawreview.org/print/article/punitive-damages-remunerated-research-and-legal-profession

AI bros' sin isn't making copies of published works. Hammering servers with badly behaved crawlers is a dick move and fuck them for doing it. But if these jerks made well-behaved scrapers that placed no abnormal demand on servers, it's not like their critics would say, "Oh, I guess it's fine, then."

AI bros' sin is running an economy-destroying, planet-wrecking stock swindle whose raison d'être is pauperizing every worker and transferring 100% of the dying world's wealth to a small cadre of morbidly wealthy, eminently guillotineable plutes. Making plugins? That's not exceptional. It's just normal.

The fact that something is normal doesn't make it good. There's a lot of normal things that I'd like to throw into the Sun. But we don't do ourselves any favors when we amplify our enemies' self-aggrandizing narratives by accusing them of being exceptional, even when we mean "exceptionally evil." They're normal assholes.

Fuck 'em.

(Image: ZeptoBars, CC BY 3.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago Notorious financier gets a “super-injunction” prohibiting the press from revealing that he is a banker https://www.telegraph.co.uk/finance/newsbysector/banksandfinance/8373535/Sir-Fred-Goodwin-former-RBS-chief-obtains-super-injunction.html

#10yrsago Shortly after her death, Harper Lee’s heirs kill cheap paperback edition of To Kill a Mockingbird https://newrepublic.com/article/131400/mass-market-edition-kill-mockingbird-dead

#10yrsago Web security company breached, client list (including KKK) dumped, hackers mock inept security https://arstechnica.com/information-technology/2016/03/after-an-easy-breach-hackers-leave-tips-when-running-a-security-company/

#10yrsago Microsoft spams corporate users with messages denigrating their IT departments https://web.archive.org/web/20160309195537/https://www.infoworld.com/article/3042397/microsoft-windows/admins-beware-domain-attached-pcs-are-sprouting-get-windows-10-ads.html

#10yrsago Cycle and Recycle: gorgeous photos of the European recycling process https://www.wired.com/2016/03/paul-bulteel-cycle-recyle-europe-recycles-tons-of-waste-and-its-pretty-gorgeous/

#10yrsago Fellowships for “Robin Hood” hackers to help poor people get access to the law https://web.archive.org/web/20160304221459/https://labs.robinhood.org/fellowship/

#10yrsago 3D printed battle-armor for cats https://web.archive.org/web/20160311224139/http://sinkhacks.com/making-3d-printed-cat-armor/

#10yrsago Great moments in the history of black science fiction https://web.archive.org/web/20160308034421/http://www.fantasticstoriesoftheimagination.com/a-crash-course-in-the-history-of-black-science-fiction/

#1yrago Daniel Pinkwater's "Jules, Penny and the Rooster" https://pluralistic.net/2025/03/11/klong-you-are-a-pickle-2/#martian-space-potato


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1081 words today, 48461 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

02:00 PM

MAHA Institute: Nix The Entire Childhood Vaccine Schedule [Techdirt]

If you agree with me that what RFK Jr. has done at HHS — particularly when it comes to altering vaccine schedules, approvals, research, and access — is bad, well, you ain't seen nothing yet.

Kennedy rode Trump's coattails, building his own Make America Healthy Again (MAHA) movement on the back of the wider MAGA orgy of fascism Trump has constructed. The MAHA people are generally those who have followed Kennedy's checkered career for years and not only act as his public allies for all the crazy shit he says and does, but also serve to push him even further than he's already gone. And while not every idea coming out of the MAHA camp is horrible, the majority certainly are.

So, back to vaccines. Kennedy has already done immense harm to vaccination policy and research in America, particularly when it comes to children. But the MAHA Institute, a D.C. think tank that pushes Kennedy's wider agenda, would like to just do away with all childhood vaccine schedules until each shot can "be proven" to be safe.

Leaders of the MAHA Institute, the Robert F. Kennedy Jr.-allied think tank pushing Make America Healthy Again movement policies, stated their position on vaccines unequivocally on Monday: “The childhood vaccination schedule needs to be eliminated,” the policy group’s president, Mark Gorton, said.

“All vaccines need to be removed from the market until they can be proven to be safe and effective,” Gorton told an audience of supporters gathered in the Willard Hotel’s Crystal Room for a panel discussion on the “Massive Epidemic of Vaccine Injury.”

Now, Kennedy didn’t attend the event. He doesn’t determine its agenda. He isn’t directly responsible for what is said by this group. But if you go through all the other nonsense these people are saying, you will recognize that much of it aligns directly with claims Kennedy has made over the years and into the present. And the history of MAHA Institute events and its guests certainly portrays a sense that the government listens to these people.

The event, just a block from the White House, comes at an interesting time for the MAHA movement in Washington. It is clear that the institute, and the movement it is part of, have the administration’s ear; attendees of past events have included senior HHS adviser Calley Means and Food and Drug Administration official Sara Brenner.

And that should be particularly terrifying, given that you can very easily get these same people to admit that they just make shit up when it suits them.

Gorton displayed slides with titles like “The Polio Fraud” and “The flu shot has given 1,900,000 Americans Alzheimer’s,” and, simply, “VACCINES ARE THE GREATEST SCAM IN MEDICAL HISTORY.”

At another moment, Gorton claimed that HHS had commissioned more than 100 studies into vaccine injuries. When asked by NOTUS where he got that number, he said Kennedy had previously stated his desire to further study vaccines.

“I don’t know much more than they’re commissioning a bunch of studies,” Gorton told NOTUS.

So what would a full-sweep elimination of childhood vaccinations in America mean if it happened? Healthcare facilities would be entirely overrun. Hospitals would have to exponentially increase the size of their pediatric wards. Trillions of dollars would need to be spent to deal with the illnesses that would result. Real estate would have to be set aside to serve as graveyards filled with tiny little coffins.

This is from the CDC’s own website in 2023.

Among children born during 1994–2023, routine childhood vaccinations will have prevented approximately 508 million cases of illness, 32 million hospitalizations, and 1,129,000 deaths, resulting in direct savings of $540 billion and societal savings of $2.7 trillion.

Gone are the days of any of us thinking that an idea or plan is just too crazy for this particular administration to enact. We simply can’t afford to bet on that sort of minimal sense-making occurring any longer.

So sit up and pay attention, because anything that remotely looks like the eradication of childhood vaccines in America would be no less than a childhood healthcare holocaust.

09:00 AM

Sell Books Without Sacrificing Brand Control [The Business of Printing Books]


Take a moment now and look at the books on your bookshelf. Spines out, titles running down them. And there, usually at the top or the bottom, is a publisher logo. Random House, Tor, Bantam—these little publisher marks are so common that we barely notice them anymore. Just a part of a book cover, right?

Pull one out and flip to the back cover. You’ll see a little blurb, almost always next to the barcode at the bottom. It likely has more info about the publisher, maybe even a URL to their main site, and a callout for the cover artist. 

This is part of the deliberate branding work publishers do to build customer recognition. Brand consistency plays a huge part in business growth. If you read ten books and you see the same publisher logo on the spine of seven of them, you’re likely going to want more books from that publisher. Right? Likewise, if you read something and didn’t care for it, you might shy away from other books from that publisher.

Even in a time when browsing the bookstore is less and less common, your book’s cover is still a key marketing asset. 

So why would you let some other business steal your real estate?


Brand Consistency Is a Must for Growing Businesses

I wrote that H2 and stared at it for a long time. Because I’ll be damned if it doesn’t read like something ChatGPT would churn out. But no, that buzzword-rich phrase is all me.

Anyway, it’s true. Branding is trust. You trust Nike to sell you high-quality sneakers. You trust Apple to offer sleek, easy-to-use tech. 

That trust starts when each company delivers on its promise. But it persists because we begin to associate that brand with that promise. Your brand is what you use to associate a positive user experience or quality product with your business. 

For books and publishers, branding is often less emphasized. There are dozens of fantasy publishers out there, and every one of them has published great books and mediocre books. This was true—in the age of traditional publishing.

But we’re evolving. And that branding is more important now than ever before. 

Think about the way you would buy a book in 2010. First, you’d be on Amazon. So you’re already getting slapped with their branding. Then you get the book, and it’s got the publisher's branding on the cover. And in the description. You would see the author’s image on the sales page, but it’s tiny. In this example, the person who actually wrote the book is, at best, third in line for branding.

Now think about how independent authors operate in 2026. They’re building a following for themselves, usually through social media or email subscribers. They’re sharing content through owned channels. And if you’re a business that relies on selling books? Being in third place in the branding race is not at all ideal.

How On-Demand Production & Fulfillment Are Changing the Game

In the last ten years, we’ve experienced a monumental, though somewhat gradual, shift in the way content, services, and products are shared and sold. Historically, anyone who created anything relied on a retailer of some kind to distribute that product. This created a situation where people who had content to share were reliant on rented land to get exposure and earn money. 

Thanks to platforms like Shopify and Lulu, you can take control and own your content distribution. Not to mention capturing customer data in the process. 

Print-on-demand was initially a focus for individual creators. It’s affordable, with a very low cost of entry, and gives anyone access to print and sell books. Now, as digital printing matures, we’re seeing businesses and publishers taking advantage of this method.

Automated printing is a lifesaver for publishers’ backlists. Custom integrations, particularly when paired with AI tools, have led to an entirely new industry built around offering one-off, custom books. 

Availability and personalization are both reasons print-on-demand has become such a popular way to sell and dropship products. But another crucial, though often less acknowledged, factor is inventory. For publishers, inventory costs and management are a constant concern, particularly for backlist titles. On-demand production means no warehouses, no handling, and no costs that come with them.

How On-Demand Production Enables Brand Consistency

Okay, so I’ve outlined why print-on-demand is such a powerful method for growing a business, and the inherent problems with relying on another brand to sell your products. Now let’s bring it all together.

The bottom line is: you don’t need retailers to sell your book. Which is great, because Amazon and its ilk are becoming less and less appealing platforms.

With today’s tools, you can create your own storefront and product catalog pretty easily. With your books uploaded to Lulu, our APIs and direct ecommerce integrations (with Shopify, Wix, and WooCommerce) connect our powerhouse print-on-demand and fulfillment network to your store. All you need are the customers.

This is the backbone of direct-to-consumer retail, a growing trend among creators and businesses. You get control over your products, more data than a retailer would ever imagine sharing, and you’ll have far better margins because you’re not splitting your revenue with a retail site.
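The direct-to-consumer hookup described above boils down to a simple data flow: your storefront notifies you of a new order, and you translate that order into a fulfillment request for the print network. Here is a minimal sketch of that translation step; the payload shapes, field names, and SKU values are illustrative assumptions, not the documented schema of any particular platform.

```python
# Hypothetical sketch: map a storefront order (a Shopify-style webhook payload
# shape is assumed) into a print-on-demand fulfillment request.
# All field names here are assumptions for illustration.

def order_to_fulfillment(order: dict) -> dict:
    """Translate a storefront order into a POD fulfillment request."""
    return {
        "external_id": str(order["id"]),              # tie the job back to the store order
        "shipping_address": order["shipping_address"],
        "line_items": [
            {"sku": item["sku"], "quantity": item["quantity"]}
            for item in order["line_items"]
        ],
    }

# Example storefront payload (shape is an assumption for illustration):
order = {
    "id": 1001,
    "shipping_address": {"name": "A. Reader", "country": "US"},
    "line_items": [{"sku": "my-novel-pbk", "quantity": 2}],
}

request = order_to_fulfillment(order)
print(request)
```

In a live integration this mapping would run inside a webhook handler or be handled for you by a platform plugin; the point is that the storefront owns the customer relationship while the fulfillment request carries only what's needed to print and ship.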

Finally, you’ll be able to center everything you do around your singular brand. Here’s how. 


Sell Your Book, Your Way

Sell books on your Wix, Shopify, or WooCommerce website with Lulu Direct.
Or use our Order Import tool for your next book launch.

Learn About Lulu Direct

White-Label Printing: Keep Your Brand Front and Center

This is, quite possibly, the most important thing you can do for brand consistency with your products. 

You’ve got your logo and business name on your site, in your book, in your emails, and you’re using them on your social media profiles. Why would you want some other company’s logo on the packing slip when your customers receive your book?

When we built Lulu Direct to simplify ecommerce integrations, we asked a lot of booksellers and creators what they needed most. 

The list was long. But one thing that popped up often was branding.

Digging into this, we discovered that a lot of people using Lulu to sell books wished they could remove our branding from their packing slips and present their own. So we built white-labeling into Lulu Direct.

Presenting your own brand on your customer’s packing slips is part of delivering a seamless shopping experience that keeps the focus on you and your brand. It also helps build and reinforce brand recognition, something important if you want readers to suggest your book to their friends or otherwise spread the word about your works. 

This is how individuals and businesses create a holistic brand experience, a proven tactic for direct-to-consumer sellers.

Customization Features That Protect Your Brand Identity

Along with helping you provide a cohesive brand experience based entirely on your brand needs, Lulu also provides a range of product customizations that further ensure you’re putting your brand at the center of your business. 

That means you can define a distinct size, ink, paper, trim, and layout for your books. If you use that same format for every book you produce, you’ll develop a signature look or style that readers will instantly recognize. 

Creating your books from your files means you can align those books with your digital branding. Use the same color palette, apply your logo, and add your brand information to the cover and interior, all as you see fit. 

Along with that, using our APIs allows you to request a unique printing (called a ‘print job’) every time an order is placed. Unlike the traditional Lulu system, where you upload a single interior and cover file to be used for every order, our API connections allow you to specify a unique file.

The benefit of this is the option to offer customization and personalization. This opens a whole new world of possibilities. Some companies, like Adorabooks, use custom printing to offer unique children’s books. The possibilities are pretty much endless, though, with the right tools on your site to gather information and generate the personalized file for printing.
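The per-order "print job" flow described above can be sketched as assembling a payload that points at the interior and cover files generated for that specific order. This is a minimal illustration modeled on typical print-on-demand APIs; the field names, package-ID format, and file URLs are assumptions, not a definitive reference for any vendor's API.

```python
# Hypothetical sketch of a per-order print-job payload for personalized books.
# Field names and the pod_package_id format are illustrative assumptions.

import json

def build_print_job(order_id: str, interior_url: str, cover_url: str,
                    pod_package_id: str = "EXAMPLE-PACKAGE-ID") -> dict:
    """Assemble a print-job request pointing at files generated for one order."""
    return {
        "external_id": order_id,  # ties the printed book back to the store order
        "line_items": [{
            "printable_normalization": {
                "interior": {"source_url": interior_url},  # per-order personalized file
                "cover": {"source_url": cover_url},
            },
            "pod_package_id": pod_package_id,  # size/paper/binding spec (assumed)
            "quantity": 1,
        }],
        "shipping_level": "MAIL",
    }

job = build_print_job(
    "order-1234",
    "https://example.com/files/order-1234-interior.pdf",
    "https://example.com/files/order-1234-cover.pdf",
)
print(json.dumps(job, indent=2))
```

Because each order references its own freshly generated files, the same endpoint that handles a static backlist title can also handle a one-off personalized children's book.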

Personalization, especially in an on-demand world, is likely the next big thing in dropshipping products. Lulu makes sure it's an option for you and your brand.

Bundling and Product Variants for a Cohesive Brand Offering

Another important way to keep your brand in focus is to ensure your site and product pages are cohesive. Lulu offers two options that are big for streamlining your product catalog:

  • Product variants

  • Product bundles

Both of these features are pretty common ways to upsell and cross-sell that traditional retail sites have utilized for years. Here’s a pretty common example from Bookshop.org.


Each version—hardcover, ebook, and paperback—is a product variant. This cuts down on store pages since you can offer all versions of your book on a single page. Which is a big deal when you’re running a business with numerous products to sell or if you’re a solo creator trying to manage your entire business by yourself.

Similarly, you can create bundles of multiple products to increase your order value and entice readers. Think about the trilogy sets of books you see in bookstores. Or maybe you bundle a short story or snippet from a new book along with a popular product.

Both variants and product bundles might seem more like marketing than branding, but it’s crucial to have these offerings to provide a simple, cohesive, and on-brand experience for your customers. 

Scaling Your Business Without Losing Brand Control

Lulu’s inventory-free model with global distribution and automated workflows might sound like a very technical platform. At the end of the day, what we do is print and ship books. Scalable fulfillment is a challenge we solve for growing businesses.

If you’re selling books and find yourself needing to scale to meet demand, brand control is an easy-to-overlook pitfall. What I mean is that it’s easy to get lost in the weeds of the technical work of growing your business. That work is time-consuming and can be tedious. Like building out a website or developing new products.

Your brand can get lost in there. 

This concern is one we’ve heard from booksellers a lot in the last few years. And it’s the reason we’ve built in tools like automated white-label shipping to help you keep your brand front and center.


Your Free Lulu Account

Create a Lulu Account today to print and publish your book for readers all around the world

Create a Free Account

Kanji of the Day: 犬 [Kanji of the Day]

✍4

小1

dog

ケン

いぬ いぬ-

愛犬   (あいけん)   —   pet dog
子犬   (こいぬ)   —   puppy
盲導犬   (もうどうけん)   —   guide dog
柴犬   (しばいぬ)   —   shiba inu (dog breed)
犬猫   (いぬねこ)   —   dogs and cats
飼い犬   (かいいぬ)   —   pet dog
犬種   (けんしゅ)   —   dog breed
負け犬   (まけいぬ)   —   loser
小型犬   (こがたけん)   —   small-breed dog
大型犬   (おおがたけん)   —   large-breed dog

Generated with kanjioftheday by Douglas Perkins.

Kanji of the Day: 征 [Kanji of the Day]

✍8

中学

subjugate, attack the rebellious, collect taxes

セイ

遠征   (えんせい)   —   expedition
征服   (せいふく)   —   conquest
出征   (しゅっせい)   —   going to war
征伐   (せいばつ)   —   conquest
東征   (とうせい)   —   eastern expedition
征討   (せいとう)   —   subjugation
征服者   (せいふくしゃ)   —   conqueror
征く   (ゆく)   —   to conquer
長征   (ちょうせい)   —   lengthy military expedition
遠征軍   (えんせいぐん)   —   expeditionary force

Generated with kanjioftheday by Douglas Perkins.

07:00 AM

Over the top [Seth Godin's Blog on marketing, tribes and respect]

Unreasonable commitment is unreasonable. It happens before there’s a guarantee it will work. It’s out of proportion to what others think is standard. Unreasonable commitment is dedication, persistence, care, energy, connection and investment that doesn’t seem to make sense.

You can’t do this in everything, and you probably can’t do it all the time. That’s why it’s unreasonable to expect.

I’ve been fortunate enough to do hundreds of podcasts. The hosts are even kinder and more professional than you’d imagine, showing up for months or years with virtually no listeners. They do it because they care.

But only one podcast host had me in tears before we began recording.

Last September, I spent the day with Mel Robbins and her team of more than a dozen professionals. We recorded for four hours, two episodes worth, and then they quietly spent six months editing the work.

Mel’s even more Mel-like in person. She’s fully present, committed and yes, over the top. Our conversation led to my new book and course, and it also reminded me that better is possible. Not just for the person in front of the camera, but for everyone on the team, for the guests and for the people listening.

Neil Pasricha wrote about Mel a decade ago. Before last year’s bestseller or the Golden Globe nomination or the podcast hit its stride. It’s a choice.

Unreasonable commitment doesn’t seem like a good plan until after it works.

      

Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI‑Powered Surveillance [Techdirt]

OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance—early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal—Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in. 

“Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States. 

The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom? 

Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.

OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.

Moreover, that naïveté is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind their commitments. After all, many of the world’s most notorious human rights atrocities were “legal” under the laws in force at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.

OpenAI isn’t the only consumer-facing company seeking, on the one hand, to reassure the public that it isn’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is very clear that companies cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people—whether CEOs or Pentagon officials—to protect our civil liberties.

Reposted from the EFF’s Deeplinks blog.

The Trump Blunder Pattern [The Status Kuo]

I’m writing for The Big Picture substack today, which I do once a week as part of a team of writers. As the name implies, I use the opportunity to step back and take stock of the broader and often confusing political landscape we’re now in. It’s a great complement to my deeper daily dives here at The Status Kuo. You can sign up below to get my Big Picture column in your inbox each week.

My own work there is offered for free without a paywall, but we do always appreciate paid supporters who make our work possible, and paid subscribers receive bonus material such as our guest columns, our Friday summary of the top news stories of the week and our popular Sunday round-up of the “week in wins.”


Today’s topic for my column is Trump blunders. We’re in a monumental one right now in Iran, with no clear or easy end in sight. But if it follows the pattern he’s set with his past mistakes (I examine three of his biggest ones: DOGE, Liberation Day tariffs and the ICE surge), we can make an intelligent assessment of how this is likely to go in Iran.

I get that it’s a bit of a pain to subscribe to two different newsletters in order to read all of my writings. But The Big Picture is something I began with my team and I’m quite proud of, and if you’re not already subscribed, I hope you’ll find our insights there valuable and clarifying.

I’ll be off tomorrow in meetings all day at the Human Rights Campaign in D.C., but back with Skeets and Giggles on Saturday!

Jay

06:00 AM

The Wyden Siren Goes Off Again: We’ll Be “Stunned” By What the NSA Is Doing Under Section 702 [Techdirt]

Senator Ron Wyden says that when a secret interpretation of Section 702 is eventually declassified, the American public “will be stunned” to learn what the NSA has been doing. If you’ve followed Wyden’s career, you know this is not a man prone to hyperbole — and you know his track record on these warnings is perfect.

Just last month, we wrote about the Wyden Siren — the pattern where Senator Ron Wyden sends a cryptic public signal that something terrible is happening behind the classification curtain, can’t say what it is, and then is eventually proven right. Every single time. The catalyst then was a two-sentence letter to CIA Director Ratcliffe expressing “deep concerns about CIA activities.”

Well, the siren is going off once again. This time, Wyden took to the Senate floor to deliver a lengthy speech, ostensibly about the nomination of Joshua Rudd to lead the NSA, which has since been approved with the support of many Democrats. Wyden was protesting that nomination because Rudd was unwilling to agree to basic constitutional limitations on NSA surveillance. But that’s just a jumping-off point ahead of Section 702’s upcoming reauthorization deadline. Buried in the speech is a passage that should set off every alarm bell:

There’s another example of secret law related to Section 702, one that directly affects the privacy rights of Americans. For years, I have asked various administrations to declassify this matter. Thus far they have all refused, although I am still waiting for a response from DNI Gabbard. I strongly believe that this matter can and should be declassified and that Congress needs to debate it openly before Section 702 is reauthorized. In fact, when it is eventually declassified, the American people will be stunned that it took so long and that Congress has been debating this authority with insufficient information.

You can see the full video here if you want.

Here’s a sitting member of the Senate Intelligence Committee — someone with access to the classified details — telling his colleagues and the public that there is a secret interpretation of Section 702 that “directly affects the privacy rights of Americans,” that he’s been asking multiple administrations to declassify it, that they’ve all refused, and that when it finally comes out, people will be stunned.

If you’ve followed Wyden for any amount of time, this all sounds very familiar. In 2011, Wyden warned that the government had secretly reinterpreted the PATRIOT Act to mean something entirely different from what Congress and the public understood. He couldn’t say what. Nobody believed it could be that bad. Then the Snowden revelations showed the NSA was engaged in bulk collection of essentially every American’s phone metadata. In 2017, he caught the Director of National Intelligence answering a different question than the one Wyden asked about Section 702 surveillance. The pattern repeats. The siren sounds. Years pass. And then, eventually, we find out it was worse than we imagined.

Now here he is, doing the exact same thing with Section 702 yet again, now that it’s up for renewal. Congress is weeks away from a reauthorization vote, and Wyden is explicitly telling his colleagues (not for the first time) they are preparing to vote on a law whose actual meaning is being kept secret from them as well as from the American public:

The past fifteen years have shown that, unless the Congress can have an open debate about surveillance authorities, the laws that are passed cannot be assumed to have the support of the American people. And that is fundamentally undemocratic. And, right now, the government is relying on secret law with regard to Section 702 of FISA. I’ve already mentioned the provision that was stuck into the last reauthorization bill, that could allow the government to force all sorts of people to spy on their fellow citizens. I have explained the details of how the Biden Administration chose to interpret it, and how the Trump Administration will interpret it, are a big secret. Americans have the right to be confused and angry that this is how the government and Congress choose to do business.

That’s a United States senator who has a long history of calling out secret interpretations that lead to surveillance of Americans — standing on the Senate floor and warning, once again, that there’s a secret interpretation of Section 702 authorities. One that almost certainly means mass surveillance.

And Wyden knows exactly how this plays out. He’s been through the reauthorization cycle enough times to know the playbook the intelligence community runs every time 702 is up for renewal:

I’ve been doing this a long time, so I know how this always goes. Opponents of reforming Section 702 don’t want a real debate where Members can decide for themselves which reform amendments to support. So what always happens is that a lousy reauthorization bill magically shows up a few days before the authorization expires and Members are told that there’s no time to do anything other than pass that bill and that if they vote for any amendments, the program will die and terrible things will happen and it will be all their fault.

Don’t buy into that.

He’s right. Every time reauthorization is on the table, no real debate happens. Then, just before the authorization is about to run out, some loyal soldier of the surveillance brigade in Congress screams “national security” at the top of their lungs, insists there’s no time to debate or people will die, and promises that if we just reauthorize for a few more years, we’ll finally get to hold a debate on the surveillance.

A debate that never arrives.

But even setting aside the secret interpretation Wyden can’t discuss, his speech highlights something almost as damning: just how spectacularly the supposed “reforms” from the last reauthorization have failed. Remember, one of the big “concessions” to get the last reauthorization across the finish line was a requirement that “sensitive searches” — targeting elected officials, political candidates, journalists, and the like — would need the approval of the FBI’s Deputy Director.

This reform came in response to some GOP elected officials landing on the receiving end of investigations during the Biden era and freaking out that the NSA appeared to be doing the very things plenty of civil society and privacy advocates had been warning them about for over a decade, warnings they had answered by yelling “national security” back at us.

So how are those small “reforms” working out? Here’s Wyden:

The so-called big reform was to require the approval of the Deputy FBI Director for these sensitive searches.

Until two months ago, the Deputy FBI Director was Dan Bongino. As most of my colleagues know, Mr. Bongino is a longtime conspiracy theorist who has frequently called for specious investigations of his political opponents. This is the man whom the President and the U.S. Senate put in charge of these incredibly sensitive searches. And Bongino’s replacement as Deputy Director, Andrew Bailey, is a highly partisan election denier who recently directed a raid on a Georgia election office in an effort to justify Donald Trump’s conspiracy theories. I don’t know about my colleagues, but this so-called reform makes me feel worse, not better.

So the grand reform that was supposed to provide meaningful oversight of the FBI’s most sensitive surveillance activities ended up placing that authority in the hands of a conspiracy theorist, followed by a partisan election denier. And just to make the whole thing even more farcical, Wyden notes that the FBI has refused to even keep a basic record of these searches:

But it’s even worse than it looks. The FBI has refused to even keep track of all of the sensitive searches the Deputy Director has considered. The Inspector General urged the FBI to just put this information into a simple spreadsheet and they refused to do it. That is how much the FBI does not want oversight.

They won’t maintain a spreadsheet. The Inspector General asked them to track their use of a sensitive surveillance power using what amounts to a basic Excel file, and the FBI said no. That’s the state of “reform” for Section 702 after the last re-auth.

Wyden has also been sounding the alarm about the expansion of who can be forced to spy on behalf of the government, thanks to a provision jammed into the last reauthorization that expanded the definition of “electronic communications service provider” to cover essentially anyone with access to communications equipment. As Wyden explained:

Two years ago, during the last reauthorization debacle, something really bad happened. Over in the House, existing surveillance law was changed so that the government could force anyone with “access” to communications to secretly collect those communications for the government. As I pointed out at the time, that could mean anyone installing or repairing a cable box, or anyone responsible for a wifi router. It was a jaw-dropping expansion of authorities that could end up forcing countless ordinary Americans to secretly help the government spy on their fellow citizens.

The Biden administration apparently promised to use this authority narrowly. But, of course, the Trump administration has made no such promise. As we say with every expansion of executive authority, just imagine how the worst possible president from the opposing party would use it. And now we don’t have to wonder any more.

Wyden correctly points out that secret promises from a prior administration are worth exactly nothing:

But here’s the other thing – whatever secret promise the Biden Administration made about using these vast, unchecked authorities with restraint, the current administration clearly isn’t going to feel bound by that promise. So whatever the previous administration intended to accomplish with that provision, there is absolutely nothing preventing the current administration from conscripting those cable repair and tech support men and women to secretly spy on Americans.

So to tally this up: Congress is about to vote on reauthorizing Section 702 with a secret legal interpretation that Wyden says will stun the public when it’s eventually revealed, with “reforms” that placed surveillance approval authority in the hands of conspiracy theorists who won’t even keep a spreadsheet, with a massively expanded definition of who can be forced to help the government spy, with secret promises about restraint that the current administration has no intention of honoring, and with a nominee to lead the NSA who won’t commit to following the Constitution.

The Wyden Siren is blaring. And if history is any guide — and it has been, without exception — whatever is behind the classification curtain is worse than what we can see from the outside.
