Voter Suppression In South Dakota Is Well Underway, Even Without SCOTUS’s Help [Techdirt]
It may be almost impossible to devolve this country into a nation of slaveholders, but the Trump administration and all of its MAGA buddies are working hard to make sure a white person’s vote counts more than a vote cast by anyone else.
These bigots recently got an assist from the Supreme Court, which decided minorities can have their votes rendered meaningless so long as the people doing the gerrymandering don’t actually say the quiet part loud. Redistricting for the sole purpose of excluding as many non-whites as possible is perfectly legal if politicians never affirmatively state that the only reason they’re doing this is to make sure minorities can’t vote against their racist asses.
This is all part of what the state of South Dakota is doing now. Governor Larry Rhoden was never elected to his post. He was elevated after Kristi Noem was selected to head the DHS by Donald Trump. (Since she’s about as unemployed as any Trump appointee ever gets, I’m sure she wishes she was back running the state of South Dakota… into the ground.) His most recent brush with the electoral process saw him losing handily to Mike Rounds in the 2014 Senate race.
Rhoden actually needs to win an election if he wishes to remain South Dakota’s governor. And all the MAGA fellatio in the world doesn’t mean much when plenty of other MAGA acolytes are running against him.
So, there’s a mixture of things going on here. There’s Rhoden’s (and the state GOP’s) desire to engage with Trump’s election conspiracies — ones that claim (with zero facts in evidence) that a whole lot of undocumented immigrants are voting in state and local elections.
There’s also a nationwide attempt to deter voting by mail, because these votes more often side with the other team.
In response to these completely made-up problems, the GOP passed, and Rhoden signed, a bill that says state residents must prove their citizenship to vote in state and local elections. If they can’t, they’re only allowed to participate in federal elections.
According to Rhoden and other GOP alarmists, that’s because too many people who aren’t citizens were granted permission to vote, thanks to what was likely nothing more than a clerical error. South Dakota may be a small state in terms of population (~950,000 residents as of 2025), but the “problem” this vaguely written law supposedly addressed was even smaller.
Soulek said only one of the 273 noncitizens had ever cast a ballot. That was during the 2016 general election.
Those are the words of Director of Elections Rachel Soulek, who works out of the Secretary of State’s office. The Secretary of State blamed this on clerical errors by the Department of Public Safety, the agency that provided the data Governor Rhoden claims is evidence of widespread election fraud by non-citizens.
One illegal ballot. And that was likely an honest misunderstanding, rather than the criminal intent Rhoden and his GOP buddies want to pretend it is.
But the law is on the books. Citizenship must be demonstrated to participate in state and local elections. The problem is that no one running these elections seems to agree on what is or isn’t acceptable proof of citizenship.
Hughes County Finance Officer Thomas Oliva, who acts as that county’s auditor, said his office is requiring new voters to show the physical driver’s license.
“The main reasoning behind that is because it’s the back of the license. There’s no other identifying information on the back we can tie back to that person, so we felt it’s in the best interest to see the physical card,” Oliva told News Watch.
Haakon County Auditor Stacy Pinney said she has not run into any issues yet with voter registration but also will require new applicants to physically show the driver’s license.
“I’m going to make it a policy in my office that I want to see the actual card. If I have to verify it, I want to see the real deal,” Pinney told News Watch.
Meanwhile, Harding County Auditor Kathy Glines said her office will accept a photocopy of the driver’s license.
“They would have to send a front and back,” Glines told News Watch.
“I hope they would call before sending it by mail,” she added, referring to the limited hours the office is open.
Everyone appears to be making up their own rules because the law — and the Secretary of State’s office — are being deliberately vague about these requirements, especially in relation to absentee voting. And many people in the state may not know that the law only applies to people who have registered to vote after July of last year, so lots of people are going to be presenting IDs to precinct staffers even if they’re not legally required to do so.
This all adds up to exactly what Governor Rhoden and the GOP want: confusion over who is or isn’t allowed to vote, blended with another law passed by Rhoden that allows pretty much anyone to challenge someone else’s eligibility to vote.
The state could offer much-needed clarification. But it won’t.
As early and absentee voting for the primary election gets underway, Scott-Stoltz hopes officials in Pierre can provide more certainty on the registration process for new voters.
“We’re hoping for more clarification from the secretary’s office before the primary and are looking forward to working with the election board,” she said.
The secretary of state’s office didn’t respond to a request for comment by News Watch.
That’s a feature, not a bug. Those in power definitely prefer incumbent voters over new ones, much like incumbent voters prefer incumbents. They want to keep the jobs they have, rather than allow new voters to upset the incumbent apple cart. They all pretend they love the democratic system, but when it’s time to latch onto another 2-4 years in power, they work together to reduce the electorate to the votes they can count on.
Daily Deal: The Ultimate Microsoft Office Professional 2021 for Windows License + Windows 11 Pro Bundle [Techdirt]
Microsoft Office 2021 Professional is the perfect choice for any professional who needs to handle data and documents. It comes with many new features that will make you more productive, whether you’re processing paperwork or creating presentations from scratch. Office Pro comes with MS Word, Excel, PowerPoint, Outlook, Teams, OneNote, Publisher, and Access. Microsoft Windows 11 Pro is an operating system designed with the modern professional in mind. Whether you are a developer who needs a secure platform, an artist seeking a seamless experience, or an entrepreneur needing to stay connected effortlessly, Windows 11 Pro is your solution. The Ultimate Microsoft Office Professional 2021 for Windows + Windows 11 Pro Bundle is on sale for $34.97 for a limited time.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
More Liability Will Make AI Chatbots Worse At Preventing Suicide [Techdirt]
California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.
If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.
Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument may seem counterintuitive to many: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards. That is, until you actually think through how the current liability regime shapes behavior — as well as reflect on what we know about Section 230’s liability regime in a different context.
First, though, the empirical reality that rarely makes it into the moral panic coverage. People are using AI chatbots for mental health support at massive scale, and a lot of them say it’s helping:
A small number of tragic stories have spurred lawmakers into regulating how chatbots should help people who are dealing with mental health issues. Yet chatbots have emerged as first aid for people experiencing mental health issues, providing genuine benefit to those who aren’t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.
Over a million people are using general-purpose chatbots for emotional and mental health support per week. In the US, those that use chatbots in this way primarily seek help with anxiety, depression, relationship problems, or for other personal advice. As conversational systems, chatbots can sustain coherent exchanges while conveying apparent empathy and emotional understanding. Many chatbots also draw on broad knowledge of psychological concepts and therapeutic approaches, offering users coping strategies, psychoeducation, and a space to process difficult experiences.
In a study of more than 1,000 users of Replika — a general-purpose chatbot with some cognitive behavioral therapy-informed features — most described the chatbot as a friend or confidant. Many reported positive life changes, and 30 people said Replika helped them avoid suicide. Similar patterns appear among younger chatbot users. In a study of 12–21-year-olds — a group for whom suicide is the second leading cause of death — 13% of respondents used chatbots for some kind of mental health advice, of which more than 92% said the advice was helpful.
There are, obviously, some limits to the Replika study, including that the data is from a few years ago and relies on self-reporting, which can always lead to some wacky results. But it is notable that this study was done by Stanford academics (i.e., not Replika itself) and was good enough to get published in Nature. And it does seem notable that even with the methodological limitations, so many people self-reported that the service helped them avoid suicide. For all the attention-grabbing stories of chatbots being blamed for encouraging suicidal ideation, that seems important. Same with the 92% of young users who said the mental health advice was helpful.
It feels like these kinds of numbers should be at the center of any serious policy conversation. Instead, they’re almost entirely absent from the legislative discussion, which focuses exclusively on the (very real, very tragic, but still somewhat rare) cases where things went wrong.
A big part of the reason chatbots are filling this gap is that the traditional mental health system isn’t remotely equipped to meet existing demand. Nearly half of Americans with a known mental health condition never seek professional help. There are plenty of reasons for this, ranging from the cost of mental health treatment, to the general stigma of being seen as needing such help, not to mention potential professional and social consequences.
As Miers and Yeh put it: “many stay silent, waiting to see if things get worse.”
Chatbots, whatever their limitations, offer something the professional system largely cannot: they’re always available in a form many people feel more comfortable talking with:
By contrast, chatbots offer low-friction, low-stakes, and always-available support. People are often more willing to speak candidly with computers, knowing that there is no human on the other side to judge or feel burdened. Some people even find chatbots to be more compassionate and understanding than human healthcare providers. AI users may feel more comfortable sharing embarrassing fears, or questions they might otherwise hold back. For clinicians, discussing these interactions can surface insights into patients’ thoughts and emotions that were once difficult to access. For now, chatbot providers generally refrain from contacting law enforcement, leading to more candid conversations.
So what does the California-style regulatory approach actually do to this ecosystem? Faced with liability for any conversation later linked to harm, and unable to reliably predict which conversations those will be (in part because, as we covered recently, even clinicians who specialize in suicide prevention admit they often can’t predict it), providers will default to the behavior that minimizes legal exposure whether or not it helps users. That means reflexively pushing 988 at any mention of distress, or cutting off conversations entirely, or simply refusing to engage with mental health topics at all.
And that kind of defensive posturing can be actively harmful to those most at risk:
Suicide prevention is about connecting people to the right support. Sometimes that means crisis care like hotlines or immediate medical treatment. But blunt, impersonal responses can backfire. Pushing 988 at the first mention of distress may seem neutral, but for some, it triggers shame, and deepens hopelessness. For some, suicide prevention “signposting” causes frustration, especially for those who already know those resources exist. People often turn to the Internet, or a chatbot, because they’re looking for something else. Abruptly ending conversations can have the same effect. That’s why suicide prevention protocols like Question, Persuade, Refer (QPR) prioritize trust-building and open dialogue before offering help.
So the regulatory regime mandates behavior that can actively escalate distress, all while still leaving providers exposed to blame if tragedy follows anyway. It’s the worst of both worlds: worse outcomes for users, continued liability for providers, and a chilling effect on the research and development that might actually improve things.
We don’t need to speculate about whether this dynamic plays out in practice. We’ve already watched it happen with social media:
The social media ecosystem has already shown this dynamic. In response to regulatory pressure, major online services heavily moderate, or outright prohibit, suicide-related discussions, sometimes hiding content that could otherwise destigmatize mental health. That merely displaces the conversations, and the people having them, often into spaces with less oversight and support.
If this sounds familiar, it’s because it is. It’s the same pattern that emerges whenever policymakers try to make sensitive topics go away through platform liability: the topics don’t go away, they just migrate to darker corners where nobody is watching at all. A mental health crisis doesn’t magically disappear just because Instagram or TikTok hid the conversation. Those in need of help are more likely to then end up somewhere with fewer guardrails, fewer resources, and fewer people equipped to help.
This leads directly back to the core of the argument, which may feel a bit backwards at first. If we want chatbot providers to build genuinely better systems for handling mental health conversations — systems that can identify distress patterns, offer appropriate triage, connect users to professional care when that’s what’s needed, and sustain helpful conversation when it isn’t — we need a liability environment that doesn’t punish the attempt.
This is, incidentally, exactly the logic that produced Section 230 in the first place. Before Section 230, the Stratton Oakmont v. Prodigy ruling created a perverse situation where platforms that tried to moderate content faced more liability than platforms that did nothing. The obvious result, had that stood, would have been less moderation, not more, because the smart legal advice would have been “don’t touch anything.” Section 230 fixed that by ensuring that the act of moderation itself didn’t create liability, which in turn made it possible for platforms to actually invest in moderation systems. Contrary to the widespread belief among the media and politicians, Section 230 didn’t eliminate accountability — it smartly redirected incentives toward the behavior we actually wanted.
The same logic applies here. A targeted liability shield for AI providers engaged in mental health support could give them the space to invest in building better suicide detection, better triage pathways, and better handoffs to human professionals. But that won’t happen if every such attempt turns into a potential lawsuit. The research to enable this is already happening despite the hostile incentive environment:
Meanwhile, emerging research suggests chatbots show real promise for mental health support. Trained on large-scale data and refined with clinical input, large language models are getting better at spotting patterns of distress and responding to suicidal ideation in nuanced, personalized ways. In a recent UCLA study, researchers found that LLMs can detect forms of emotional distress associated with suicide that existing methods often miss—opening the door to earlier, more effective intervention. According to another study, the most promising approach may be a hybrid where AI flags risk in real time, and trained humans step in with targeted support.
That hybrid model — AI identifying risk, trained humans providing targeted intervention — is exactly the kind of system you’d want chatbot providers racing to build. Instead, the current regulatory trajectory is telling them: build that, and you’re just creating a liability sinkhole. Every time your system engages with a mental health conversation, you’ve created a potential future lawsuit. Better to just block the conversation entirely and hope the user finds help somewhere else.
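To make the hybrid pattern concrete, here is a minimal sketch in Python. It is purely illustrative: the keyword-based scorer, the thresholds, and the action names are hypothetical stand-ins (a real system would use a learned classifier refined with clinical input), not any provider’s actual implementation.

    from dataclasses import dataclass

    @dataclass
    class TriageResult:
        risk_score: float  # 0.0 (no distress signals) to 1.0 (acute crisis)
        action: str        # what the system does next

    def score_risk(message: str) -> float:
        """Hypothetical stand-in for a learned distress classifier.
        A real system would use a model trained with clinical input,
        not keyword matching."""
        text = message.lower()
        if any(t in text for t in ("no reason to live", "end it all")):
            return 0.9
        if any(t in text for t in ("hopeless", "can't cope", "so alone")):
            return 0.5
        return 0.1

    def triage(message: str) -> TriageResult:
        score = score_risk(message)
        if score >= 0.8:
            # Acute risk: hand off to a trained human while keeping the
            # conversation open, rather than terminating it.
            return TriageResult(score, "escalate_to_human_counselor")
        if score >= 0.4:
            # Distress short of crisis: keep engaging and surface resources
            # in context instead of reflexively pushing a hotline number.
            return TriageResult(score, "continue_with_supportive_response")
        return TriageResult(score, "continue_normally")

    if __name__ == "__main__":
        for msg in ("I feel hopeless lately", "What's a good pasta recipe?"):
            print(msg, "->", triage(msg).action)

The point of the sketch is the routing logic: acute risk hands the conversation to a human without slamming the door, while ordinary distress keeps the supportive conversation going instead of cutting it off. That routing is exactly the behavior a liability regime can either reward or punish.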
I get that some people will reasonably worry that “less liability” sounds like a giveaway to AI companies that are already acting irresponsibly. But Miers and Yeh aren’t arguing that chatbots should be able to impersonate licensed therapists, or that there should be no accountability for products designed to be used by vulnerable users. The American Psychological Association’s approach — prevent chatbots from posing as licensed professionals, limit designs that mimic humans, expand AI literacy — is perfectly compatible with a liability shield for thoughtful, helpful mental health support. The point is to stop punishing the specific behavior we want more of: chatbots that try to actually help people who are struggling, including by building better pathways to professional care for those who need it.
Simply putting liability on the companies is unlikely to do that.
And for people in acute crisis, professional intervention is still a necessity. Nobody serious is arguing chatbots should wholly replace crisis lines or psychiatric care. The argument is that the vast majority of people using chatbots for mental health support are not in acute crisis — they’re anxious, lonely, depressed, processing a breakup, working through stress, looking for someone to talk to at 3am when their therapist isn’t available and calling 988 feels like overkill. For that population — which is the overwhelming majority — the regulatory regime being built assumes the worst and mandates responses that often make things worse.
The deeper problem, as we’ve written before, is that the entire framing of “AI causes suicide” relies on a confidence about the mechanics of suicide that clinicians themselves don’t have. About half of people who die by suicide deny suicidal intent to their doctors in the weeks or months before their death. Experts who have spent decades studying this admit they often cannot predict it even when treating patients directly. The idea that we can identify which chatbot conversation “caused” which outcome, and design liability around that identification, assumes a causal clarity that doesn’t exist anywhere in the actual science.
Good policy here would look very different from what’s being proposed. Miers and Yeh point to a Pennsylvania proposal that would fund development of AI models designed to identify suicide risk factors among veterans — incentivizing the research we actually need rather than punishing it. They suggest liability shields modeled on Section 230 that would encourage continued investment in safer, more responsive systems. They warn specifically against imposing a clinical regulatory framework (with its mandatory reporting requirements) onto general-purpose chatbots, because doing so would replicate exactly the barriers that already keep many people from seeking professional help.
None of this is as emotionally satisfying as “ban the thing that hurt people.” Moral panics rarely are, because moral panics are fundamentally about finding something to blame rather than about the harder work of actually understanding what’s happening and designing interventions that might help. But for the over one million people per week currently turning to chatbots for mental health support — a group that includes at least the thirty Replika users who credit the chatbot with keeping them alive — the difference between a regulatory regime that punishes thoughtful engagement and one that incentivizes it is the difference between having somewhere to turn at 3am and running into a wall of “please call 988” followed by a terminated conversation.
We’ve watched this movie before with social media. We know how it ends. The conversations just move somewhere worse, with fewer resources and less oversight. The tragedies keep happening — they just stop being visible to anyone who might be in a position to help. And the technology gets worse at the thing we want it to be better at, because the legal environment has made getting better into a liability.
If lawmakers are serious about mental health outcomes rather than political theater, they should be asking how to make chatbots better at this — how to build the hybrid human-AI triage systems the research is pointing toward, how to turn these tools into genuine funnels toward professional care when that’s what’s needed, how to preserve the candid, low-stakes space that people clearly find valuable. That project requires a liability regime that rewards trying to be better rather than punishing it. The alternative is what California just passed, and what New York is considering, and what we’ll keep getting until someone in the policy conversation is willing to notice that the intuitive answer here is producing the exact opposite of the intended outcome.
It’s a counterintuitive approach. It’s also the only one that has any chance of actually working.
Your work diary [Seth Godin's Blog on marketing, tribes and respect]
Five short entries a day.
It’s easy to imagine that if you do this 200 workdays in a row, your career will advance. And it makes it easier to prepare for your annual review or that next job interview.
Like most habits, the hardest part is committing to begin.
Pluralistic: In praise of vultures (06 May 2026) [Pluralistic: Daily links from Cory Doctorow]

One of my bedrock beliefs is that capitalists really hate capitalism. They may name their beloved institutes after the likes of Adam Smith, but they ignore everything Smith had to say about the necessity of competition to keep markets from turning into monopolies:
https://pluralistic.net/2023/06/09/commissar-merck/#price-giver
The theory of capitalism holds that markets are a kind of distributed computer that aggregates trillions of decisions from billions of market participants in order to optimize production and distribution of goods and services, creating a "Pareto-optimal" world where no one can be made better off without making someone else worse off.
Whether or not you believe that this computer exists and functions as predicted, one indisputable fact about it is that it requires the freedom to choose in order to work. The point of market-as-computer is that it aggregates decisions, so it can only work if everyone is as free as possible to decide.
But that's not the world capitalists want. For capitalists, the point is to restrict other people's choices in order to maximize your own freedom. That's how we get economic doctrines like "revealed preferences": the idea that if a person says they want one thing, but does another thing, then you can tell what they really prefer by looking at the latter and disregarding the former. This is the kind of doctrine you can only fully embrace after sustaining the kind of highly specific neurological injury that is induced by taking an economics degree, an injury that makes you incapable of perceiving or reasoning about power. Under the doctrine of revealed preferences, someone who sells their kidney to make the rent has a revealed preference for only having one kidney:
https://pluralistic.net/2026/03/30/players-of-games/#know-when-to-fold-em
Capitalism is supposed to run on risk: the risk of being overtaken by a competitor drives businesses to deliver better services more efficiently, thus producing a bounty for all. But capitalists really hate risk, hence the drive to monopoly: Mark Zuckerberg admitted, in writing, that he only bought Instagram so that he wouldn't have to compete with it ("It is better to buy than to compete" -M. Zuckerberg):
https://pluralistic.net/2025/11/20/if-you-wanted-to-get-there/#i-wouldnt-start-from-here
Capitalists hate capitalism, but they love feudalism. Feudalism is like capitalism, in that you have a ruling class that creams off the surplus generated by labor; but under feudalism, society is organized to protect rents (money you get from owning stuff) over profits (money you get from doing stuff). The beauty of rents is that they are insulated from risk: if you own a coffee shop, you're in constant danger of being put out of business by a better coffee shop. But if you own the building and your coffee shop tenant goes under, well, you've still got the building, and hey, now it's on the same hot block as the amazing new cafe that's driving its competitors out of business:
https://pluralistic.net/2023/09/28/cloudalists/#cloud-capital
Douglas Rushkoff calls this "going meta": don't drive a taxi, rent a medallion to a taxi driver. Don't rent a medallion, start a ride-hailing app company. Don't start a ride-hailing company, invest in the company. Don't invest in the company, buy options on the company's shares. Each layer of indirection takes you further from the delivery of a useful service – and insulates you further from risk:
https://pluralistic.net/2022/09/13/collapse-porn/#collapse-porn
Monopoly is to capitalism as gerrymandering is to democracy, a way to strip out any meaningful choice. Think of the two giant packaged goods companies that fill your grocery aisles: Procter & Gamble and Unilever. Practically everything on your grocer's shelves is made by a division of one of these two massive conglomerates. If you try to "vote with your wallet" by buying a low-packaging version of a product, it's going to be sold to you by the same company that sells the high-packaging version. If you switch to an artisanal brand of cookies made by a local family business, Unilever or P&G will buy that company and issue a press release declaring that they made the acquisition because they know "their customers value choice":
https://pluralistic.net/2024/05/18/market-discipline/#too-big-to-care
Gerrymandering strips your vote of any impact on political outcomes. Monopoly strips your purchases of any ability to influence economic outcomes. Wrap both of them in "revealed preferences" and you get a system that endlessly narrates its ability to deliver choice, and then blames your misery on your having chosen badly.
This is the method of the entire conservative project. As Dan Savage says: the thing that unites conservative assaults on voting, birth control, abortion and no-fault divorce is the stripping away of choice. Conservatives are trying to create a world populated by husbands you can't divorce, pregnancies you can't prevent or terminate, and politicians you can't vote out of office. Add to that Trump's assault on the National Labor Relations Board, his reversal of the FTC's ban on noncompetes, and his protection of "TRAP" agreements that force employees to pay thousands of dollars if they quit their jobs, and you get "jobs you can't quit":
https://pluralistic.net/2025/09/09/germanium-valley/#i-cant-quit-you
Conservative strongmen like Trump and Musk exalt the value of self-determination – for themselves, at everyone else's expense. Trump's ability to stiff the contractors that built his hotels and Musk's ability to rain flaming rocket debris down on the people who live near his company town require that everyone else be stripped of protections. They get to determine their own course in life by taking away your ability to determine your own. Their right to swing their fists ends two inches past your nose:
https://pluralistic.net/2026/04/21/torment-nexusism/#marching-to-pretoria
Cheaters and bullies hate the rule of law, hence Trump's endless repetition of Nixon's mantra: "When the president does it, that means it is not illegal." But not everyone can be president, and the world is full of would-be Trumps in positions of power who would like to be able to commit crimes without fear of legal repercussions. For these people, we have something called "binding arbitration."
"Binding arbitration" is a widely used contractual term that forces you to surrender your right to sue a company that wrongs you. Instead of suing, binding arbitration forces you to take your case to an "arbitrator"; that is, a lawyer who is paid by the company that cheated you or maimed you or killed your loved one. The arbitrator decides whether their client is guilty, and, if so, how much that client owes you. The entire process is confidential and it is non-precedential, meaning that if a company rips off millions of people in the same way, each of them has to arbitrate their claims separately, and people who are successful can't share their tactical notes with the people who are next in line to plead for justice.
That makes binding arbitration another key weapon in the conservative movement's war on choice: not just jobs you can't quit and politicians you can't vote out of office, but also companies you can't sue. Binding arbitration is a creation of the Federalist Society and their champion Antonin Scalia, who authored a series of Supreme Court dissents and (ultimately) decisions that opened the door for binding arbitration everywhere:
https://pluralistic.net/2025/10/27/shit-shack/#binding-arbitration
Given the Fedsoc's role in shoving binding arbitration down every worker and shopper's throat, it's decidedly odd that they invited Ashley Keller to be their keynote debater in 2021, where he argued that "concentrated corporate power is a greater threat than government power":
https://www.youtube.com/watch?v=aY5MrHGjVT8
Keller is a powerhouse lawyer, and an avowed conservative, who has pioneered many tactics for overcoming binding arbitration clauses. He helped create "mass arbitration," bringing thousands of arbitration cases on behalf of Uber drivers who'd had their wages stolen by the company. Since Uber has to pay the arbitrators in each of those cases, they faced a much larger bill than they would face in any possible class action suit:
https://www.reuters.com/article/otc-uber-frankel-idUKKCN1P42OH/
Mass arbitration cases spread to all kinds of large firms that used petty grifts to steal from thousands or even millions of people, like Intuit, who deceive – and rip off – millions of Americans every year with their fake Turbotax "free file" system:
https://pluralistic.net/2022/02/24/uber-for-arbitration/#nibbled-to-death-by-ducks
Mass arbitration worked so well that Amazon actually revised its terms of service to remove binding arbitration, because they realized that they'd be better off facing class action suits:
https://pluralistic.net/2021/06/02/arbitrary-arbitration/#petard
Of course, the point of binding arbitration was never to create a streamlined system of justice – it was to bring about a world of no justice, where you have no right to sue. It's part of the decades-old "tort reform" movement that the business lobby has used to take away your right to sue altogether. Any time you hear about a seemingly crazy lawsuit (like the urban legends about the McDonald's "hot coffee" case), you're being propagandized for a world without legal consequences for companies that defraud you, steal from you, injure you, or kill you:
https://pluralistic.net/2022/06/12/hot-coffee/#mcgeico
That's why companies (like Bluesky) are now trying terms of service that also ban you from mass arbitration, while retaining the right to consolidate claims into a mass arbitration case if that's advantageous to them.
But Keller keeps finding creative ways around binding arbitration. He's currently bringing thousands of arbitration claims against Google, on behalf of advertisers whom Google stole from (Google is a thrice-convicted monopolist, and they lost a case last year over their monopolization of ad-tech, where they were found to have defrauded advertisers).
He also just argued before the Supreme Court in a case against Monsanto over the company's attempt to escape liability for causing cancer in farmworkers with their Roundup pesticide:
https://www.npr.org/2026/04/27/nx-s1-5793804/supreme-court-monsanto-roundup-arguments
Keller appears in the latest episode of the Organized Money podcast, for a fascinating interview about his work and outlook, and how he reconciles his work fighting corporate power with his identity as a movement conservative:
https://www.organizedmoney.fm/p/the-conservative-who-torments-big
Keller's first big, important point is that (basically), capitalists hate capitalism (see above). He cites Milton Friedman, who "always said that the tort system is the best way to ensure that companies behave and follow the rules." For Keller (and Friedman) the alternative to private litigation against bad businesses is "government regulation and the alphabet soup of Washington, DC agencies [that] try and police these companies."
But, of course, the businesses that want binding arbitration and tort reform (so they can't be sued) also want to "dismantle the administrative state" (so they can't be regulated). They're the impunity movement, the "when the president does it, that means it is not illegal" movement, the "heads I win, tails you lose" movement. They're the caveat emptor movement, the "that makes me smart" movement:
https://pluralistic.net/2024/12/04/its-not-a-lie/#its-a-premature-truth
They don't want efficient markets, with the ever-present threat of a better competitor putting them out of business. They want feudalism. They want to go meta. They want to have the kind of self-determination you can only achieve by taking away everyone else's self-determination.
I was very struck by Keller's claim to be engaged in an exercise that Milton Friedman identified as the best one for making markets work. One of Keller's most forceful points is that class action suits are especially important for reining in petty, recurrent grifts, the junk fees that are the hallmark of enshittification.
He quotes his old boss, the archconservative judge Richard Posner, who said "Only a lunatic or a fanatic sues for $20." But if you multiply a $20 junk fee by ten million purchases, a company can use that fact to make hundreds of millions of dollars. That's real folding money, which is why every company has figured out a way to whack you for a $20 junk fee.
There are two ways to end this racket: one is litigation, the other is regulation, and the capitalism-hating-capitalists who run the world want to kill both. That's why the business lobby smears lawyers like Keller as being "vultures." But as Matt Stoller says, "vultures look aggressive and whatnot, but when you actually get rid of vultures out of an ecosystem, all sorts of things go haywire."
I love this point. Vultures live off the disgusting, rotting crap that would otherwise pile up around us, breeding disease and emitting an unbearable stench. If plaintiff-side, no-win/no-fee lawyers are vultures, then junk fees, wage theft, and the million petty frauds they fight are the disgusting, rotting crap that vultures feed off of – and the harder we make it for our noble vulture lawyers, the more disgusting, rotting crap we have to live with, hence the unbearable stench that is all around us.
Listening to Keller was a fascinating exercise. I thoroughly disagree with him about many things – the way he characterized Section 230 of the Communications Decency Act couldn't have been more wrong – but it's quite bracing to hear a capitalist who doesn't hate capitalism defend it against the vast majority of capitalists, who hate capitalism more than any socialist ever did.

"The Score Is Four/and Next Time More" https://rickperlstein.substack.com/p/the-score-is-fourand-next-time-more
Bodyform | Never Just a Period https://www.youtube.com/watch?v=GpFYcj2sJ3A
Getting Digital Fairness Right: EFF's Recommendations for the EU's Digital Fairness Act https://www.eff.org/deeplinks/2026/04/dos-and-donts-eus-digital-fairness-act-effs-recommendation-regulating-digital
DHS Demanded Google Surrender Data on Canadian’s Activity, Location Over Anti-ICE Posts https://www.wired.com/story/dhs-demanded-google-surrender-data-on-canadians-activity-location-over-anti-ice-posts/
#25yrsago Torvalds responds to Microsoft's Craig Mundie https://web.archive.org/web/20011019132822/http://web.siliconvalley.com/content/sv/2001/05/03/opinion/dgillmor/weblog/torvalds.htm
#25yrsago Bankrupt Argentina considers banning proprietary code and switching to free software https://web.archive.org/web/20010614131152/https://www.wired.com/news/business/0,1367,43529,00.html
#20yrsago Danny Hillis on how games are(n’t) like a theme park https://web.archive.org/web/20060513182649/https://www.wired.com/wired/archive/14.04/disney.html
#20yrsago Mission Impossible opening marked by anti-Scientology flyover https://web.archive.org/web/20060514000636/http://hailxenu.net/
#20yrsago SmartFilter targets Distributed Boing Boing – how to defeat it https://memex.craphound.com/2006/05/04/smartfilter-targets-distributed-boing-boing-how-to-defeat-it/
#15yrsago John Ashcroft assumes charge of “ethics and professionalism” for Blackwater https://web.archive.org/web/20110507103749/https://www.wired.com/dangerroom/2011/05/blackwaters-new-ethics-chief-john-ashcroft/
#15yrsago Rumsfeld and other US officials say torture didn’t help catch bin Laden https://web.archive.org/web/20110505012303/https://www.wired.com/dangerroom/2011/05/surveillance-not-waterboarding-led-to-bin-laden/
#15yrsago Rental laptops equipped with spyware that can covertly activate the webcam and take screenshots https://web.archive.org/web/20110506130156/http://www.ajc.com/business/pa-suit-furniture-rental-933410.html
#15yrsago Parallel machine made out of 17 stitched-together Apple //e’s https://web.archive.org/web/20110504194313/http://home.comcast.net/~mjmahon/AppleCrateII.html
#15yrsago Sarah Palin and James Lankford: giving $4 billion of taxpayer money to oil companies doesn’t matter https://web.archive.org/web/20110505220640/https://thinkprogress.org/2011/05/03/palin-lankford-oil-subsidies/
#15yrsago Stephen Harper violated election laws https://web.archive.org/web/20110701000000*/http://www.examiner.com/canada-headlines-in-canada/stephen-harper-breaks-election-rules-campaigns-on-radio-on-election-day
#15yrsago History and future of bin Ladenist extremism https://www.juancole.com/2011/05/obama-and-the-end-of-al-qaeda.html
#10yrsago Belushi widow & Aykroyd produce Blues Brothers animated series https://deadline.com/2016/05/the-blues-brothers-animated-comedy-series-dan-aykroyd-1201748389/
#10yrsago Chinese censorship: arbitrary rule changes are a form of powerful intermittent reinforcement https://www.techdirt.com/2016/05/04/why-growing-unpredictability-chinas-censorship-is-feature-not-bug/
#10yrsago US government and SCOTUS change cybercrime rules to let cops hack victims’ computers https://www.wired.com/2016/05/now-government-wants-hack-cybercrime-victims/
#10yrsago After advertiser complaints, Farm News fires editorial cartoonist who criticized John Deere & Monsanto https://web.archive.org/web/20160505042150/https://www.kcci.com/news/longtime-iowa-farm-cartoonist-fired-after-creating-this-cartoon/39337816
#10yrsago Outstanding rant about establishment pearl-clutching over Trump https://web.archive.org/web/20160505033357/https://theconcourse.deadspin.com/george-will-is-a-haughty-dipshit-1774449290
#10yrsago The Planet Remade: frank, clear-eyed book on geoengineering, climate disaster, & humanity’s future https://memex.craphound.com/2016/05/04/the-planet-remade-frank-clear-eyed-book-on-geoengineering-climate-disaster-humanitys-future/
#5yrsago Qualia https://pluralistic.net/2021/05/04/law-and-con/#law-n-econ
#5yrsago Whales decry the casino economy https://pluralistic.net/2021/05/04/law-and-con/#all-bets-are-off

Barcelona: Internet no tiene que ser un vertedero (Global Digital Rights Forum), May 13
https://encuentroderechosdigitales.com/en/speakers/
Virtual: How to Disenshittify the Internet with Wendy Liu (EFF), May 14
https://www.eff.org/event/effecting-change-enshittification
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 18
https://www.otherland-berlin.de/de/event-details/cory-doctorow-in-der-friesenstrasse-23-kreuzberg-praesentiert-von-otherland.html
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
NYC: The Reverse Centaur's Guide to Life After AI with Jonathan Coulton (The Strand), Jun 24
https://www.strandbooks.com/cory-doctorow-the-reverse-centaur-s-guide-to-life-after-ai.html
Edinburgh International Book Festival with Jimmy Wales, Aug 17
https://www.edbookfest.co.uk/events/the-front-list-cory-doctorow-and-jimmy-wales
Artificial Intelligence: The Ultimate Disruptor, with Astra Taylor and Yoshua Bengio (CBC Ideas)
https://www.cbc.ca/listen/live-radio/1-23-ideas/clip/16210039-artificial-intelligence-the-ultimate-disruptor
When Do Platforms Stop Innovating and Start Extracting? (InnovEU)
https://www.youtube.com/watch?v=cccDR0YaMt8
Pete "Mayor" Buttigieg (No Gods No Mayors)
https://www.patreon.com/posts/pete-mayor-with-155614612
The internet is getting worse (CBC The National)
https://youtu.be/dCVUCdg3Uqc?si=FMcA0EI_Mi13Lw-P
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
https://pluralistic.net
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
https://mamot.fr/@pluralistic
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
https://doctorow.medium.com/
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
This Trump FCC Cybersecurity ‘Fix’ Is About To Make Hardware Way More Expensive For Everyone [Techdirt]
Last week the Trump FCC quietly announced that it was cooking up a new rule banning any labs that have testing offices in China from testing electronic devices such as smartphones, cameras, and computers for sale in the United States.
That’s going to create some major issues given that roughly 75% of all U.S.-bound electronics are currently tested in Chinese facilities. Many of these operations are owned by U.S. or European companies that have testing facilities in China because that’s where the lion’s share of technology is manufactured, so it’s simply more efficient for testing evolving iterations of new products.
That these companies have offices in China doesn’t inherently mean the testing labs are somehow all magically compromised and in dutiful service to the Chinese government, though that’s certainly the implication the xenophobic Trump administration is making (and has made before in previous, similar announcements).
One major problem outside of the raw logistics of it all: Carr’s planned cybersecurity fix would be significantly more expensive, driving up costs for everyone:
“27 of the affected facilities are Chinese subsidiaries of major Western testing firms, including Intertek, SGS, TUV Rheinland, and Bureau Veritas. Those companies operate labs in the U.S., Europe, and Taiwan that can absorb redirected work, but the shift won’t be seamless. Basic FCC certification testing runs between $400 and $1,300 at Chinese labs, compared with $3,000 to $4,000 at U.S. equivalents.”
Who is going to eat the difference in those costs, a jump of anywhere from roughly double to ten times per certification? You are, of course. In addition to the higher costs from the AI boom, the tariffs, and Trump’s pointless war in Iran. Whatever companies lobbied Carr and Trump will do great. You probably won’t.
Given the terrible nature of smart IoT home security standards (more a byproduct of unregulated crony capitalism than of China-based testing locations), having a more direct line of control over the testing of U.S.-bound hardware makes superficial sense.
But then you have to remember that this is Brendan Carr, who does nothing authentically in the public interest, and is likely just looking to drive more business to a handful of U.S. companies that lobbied for his attention. And you have to remember that these folks, as you saw when they talked about shifting smartphone production to the States, don’t actually know what the fuck they’re doing.
The other major problem: Trump and Carr’s rabid deregulatory, anti-governance zealotry on other fronts has repeatedly worked to undermine U.S. cybersecurity, making these sorts of fixes leaky and highly performative, even if they were to be successful (which they won’t be).
While Carr and Trump profess to be super worried about Chinese threats to national security, with their other hand the Trump administration has gutted government cybersecurity programs (including a board investigating the biggest Chinese hack of U.S. telecom networks in history), dismantled the Cyber Safety Review Board (CSRB) (responsible for investigating significant cybersecurity incidents), and fired oodles of folks doing essential work at the Cybersecurity and Infrastructure Security Agency (CISA).
Brendan Carr is also engaged in a massive effort to destroy whatever’s left of the FCC’s consumer protection and corporate oversight authority, despite the fact that the recent historic Chinese Salt Typhoon hack (caused in large part because major telecoms were too incompetent to change default administrative passwords) was a direct byproduct of this exact type of mindless deregulation.
The Trump administration’s stacked courts are also making it extremely difficult to hold telecoms accountable for literally anything (see the Fifth Circuit’s recent reversal of a fine against AT&T for spying on customer movement), which also undermines consumer privacy and national security, and ensures zero real repercussions for companies that fail to secure their networks and sensitive data.
So, with one hand you have Carr claiming he’s “fixing cybersecurity” with stuff like this or his recent foreign router “ban” (which as we’ve noted is really a lazy extortion scheme), while with the other he’s doing everything in his power to ensure that domestic telecoms don’t really have anything even vaguely resembling meaningful privacy and security oversight.
Here’s where I’ll remind you that because the U.S. is too corrupt to pass even a basic modern privacy law, we also have a vast and largely unregulated data broker industry that hoovers up your every movement and online habit, then sells access to it to any random asshole (including foreign and domestic government intelligence agencies).
Here too, weird zealots like Trump and Carr have rolled back efforts to regulate data brokers or do anything about it. As authoritarian racists, they’re too blinded by personal self-enrichment and racism to have any genuine understanding of how any of this stuff actually works.
As with the TikTok “ban” (which basically involved shoveling ownership to Trump’s billionaire buddies), so much of this is heavily xenophobic, nationalistic, transactional, self-serving, and performatively detached from any actual reality. By the time the check comes due, guys like Carr and Trump will already be off to the next grift.
NVIDIA’s Shadow Library Scripts ‘Have No Other Purpose’ Than Infringement, Judge Rules [TorrentFreak]
Chip giant NVIDIA has been one of the main financial beneficiaries in the artificial intelligence boom.
Revenue surged due to high demand for its AI-learning chips and data center services, and the end doesn’t appear to be in sight.
Besides selling the most sought-after hardware, NVIDIA is also developing its own models, including its NeMo Megatron models. These were trained on NVIDIA’s own hardware with help from large text libraries, much as other tech giants train theirs.
Those libraries have drawn legal fire from rightsholders, including authors who, in various lawsuits, accused tech companies of training their models on pirated books. In early 2024, for example, several authors, including Abdi Nazemian, sued NVIDIA over alleged copyright infringement.
Through the class action lawsuit, they claimed that the company’s AI models were trained on the Books3 dataset that included copyrighted works taken from the ‘pirate’ site Bibliotik.
As the case progressed, the authors also brought up NVIDIA’s contacts with Anna’s Archive, inquiring about “high-speed access” to the shadow library’s massive collection of pirated books.
In January, NVIDIA fired back with a comprehensive motion to dismiss, calling the authors’ allegations speculative, vague, and legally insufficient. At the California federal court, NVIDIA argued that the authors’ complaint is built on speculation rather than facts.
Specifically, the company asked the court to dismiss the direct copyright infringement claims linked to Bibliotik, Books3, and The Pile dataset.
In addition, the motion also targets the contributory copyright infringement allegations, which center on scripts and tools NVIDIA allegedly distributed so corporate customers could automatically download ‘The Pile,’ the dataset that contains Books3.

The chip giant initially asked the court to dismiss claims relating to Anna’s Archive, Z-Library, LibGen, Sci-Hub, and the Slimpajama dataset as well, but it withdrew this request in March, which substantially narrowed the dispute.
In an order issued yesterday, U.S. District Judge Jon Tigar denied most of the dismissal request. Importantly, the contributory infringement claim survives, even after the Supreme Court’s Cox v. Sony ruling, which significantly impacts many copyright infringement cases.
NVIDIA argued that Cox tightened the standard, requiring “active encouragement through specific acts,” while stressing that the NeMo Megatron Framework as a whole has substantial non-infringing uses. To prove the contributory claim, NVIDIA argued, the authors would have to show the company marketed or promoted the framework as a piracy tool.
Judge Tigar rejected the framing. Instead of analyzing the Megatron framework as a whole, he zeroed in on the specific scripts that NVIDIA distributed to clients so they could automatically download and preprocess The Pile dataset. Those scripts have no purpose other than enabling infringement, the court concluded.
“The scripts are alleged to have no other purpose than to speed up the process of infringement, unlike the digital video recorder systems at issue in Sony Corp. or the internet service provided in Cox,” Judge Tigar wrote.
This appears to be the first AI training case to apply the new Cox standard, and the result didn’t go the way NVIDIA hoped. The scripts it offered satisfied both the new ‘inducement’ and ‘tailored to infringement’ standards required for a contributory infringement finding.
Regarding the direct copyright infringement claims, NVIDIA also asked the court to dismiss “allegations concerning its ‘use of any [sic] BitTorrent Protocol.'”
The request was pretty thin, Judge Tigar noted, pointing out that the complaint contains exactly one reference to BitTorrent. That reference doesn’t point to any of NVIDIA’s alleged wrongdoing. It’s a descriptive line about Bibliotik distributing pirated works via the protocol.
Judge Tigar refused to dismiss all BitTorrent allegations, stressing that “BitTorrent is merely a tool, not a library or dataset.” He also offered a rather colorful analogy.
“Asking to dismiss allegations concerning BitTorrent is like asking to dismiss allegations concerning paintbrushes in a case about a dolphin painting,” the order reads, citing Folkens v. Wyland Worldwide, a copyright dispute over a painting of two dolphins crossing underwater.

NVIDIA’s interest in stripping BitTorrent from the case is easier to understand in light of Meta’s troubles in a parallel AI lawsuit. There, Meta’s BitTorrent seeding resulted in direct copyright infringement claims. NVIDIA appears to have wanted that door closed before discovery could open it.
NVIDIA did get a small win as Judge Tigar dismissed the vicarious copyright infringement claim.
To state that claim, the authors needed to plausibly allege that NVIDIA had both the legal right to control the direct infringers and a direct financial interest in the infringement. Tigar found neither was adequately pleaded, but allowed the authors 21 days to address the deficiencies and refile.
For now, it is clear that this legal battle between the authors and NVIDIA is far from over.
The same applies to a long list of other AI training lawsuits, a list that grows every month. That includes a lawsuit filed yesterday against Meta and Mark Zuckerberg by major publishers, which, like many others, accuses Meta of training on pirated books.
—
A copy of U.S. District Court Judge Jon Tigar’s order on NVIDIA’s motion to dismiss is available here (pdf).
From: TF, for the latest news on copyright battles, piracy and more.
Steven Soderbergh On AI In Films: If There’s a Filmmaking Tool, I’m Going To Explore It [Techdirt]
While we’ve taken issue with his approach to copyright law and enforcement in the past, there is no doubting that Steven Soderbergh is a filmmaking legend. This is the man who directed films like Traffic and Ocean’s 11. He talks about, and cares about, the art of filmmaking. And he’s apparently beginning to use AI in some limited ways.
You really have to pay attention to Soderbergh’s specific comments on how he’s using it, because I would argue that it’s exactly the right artistic approach to the conversation: limited, targeted uses that help achieve the artist’s vision rather than replace everything in a film with garbage slop. Interestingly, articles like this one from Salon still frame all of this as some betrayal of art on Soderbergh’s part. Here’s how Soderbergh describes how he’s using AI as part of an upcoming film about John Lennon and Yoko Ono.
“AI has been helpful in creating thematically surreal images that occupy a dream space rather than a literal space,” Soderbergh said. “And it’s been really fun because you need a Ph.D. in literature to tell it what to do.” Soderbergh relented that generative programs require “very close human supervision,” before going on to admit that he’s also using “a lot of AI” for an upcoming film about the Spanish-American War, to generate images of archaic warships and God knows what else.
I very much understand Soderbergh’s description of how he’s using this tool for his films, but I have no idea what the hell the commentary from Salon around the quote is on about. “And God knows what else” is perhaps the silliest comment in the post, because that statement only works if Soderbergh himself happens to be God.
I don’t believe he is, to be clear. And when an artist like this finds the tool useful in achieving his overall artistic vision, that’s something we should be paying attention to, not dismissing out of hand. The Salon piece notes that Soderbergh has routinely been a director who embraces new technology, before launching into this diatribe.
But just because Soderbergh jumping at AI could be seen from a mile away doesn’t make it any less disappointing, nor does it excuse his reluctance to thoughtfully engage with others’ criticisms about the technology. If “The Christophers” is to be believed, art that tries to imitate a certain style is little more than hollow, emotionless posturing. Generative AI is the same: mere mimicry, devoid of the humanity that makes art . . . well, art. And by being so willfully averse to acknowledging the ways AI and art conflict — not to mention its ramifications for others in his industry — Soderbergh’s take on an artist losing his touch in “The Christophers” is disappointingly apt.
Of course the art that AI “creates” is mimicry, devoid of humanity. That’s definitionally how the tool works. And anyone who thinks they’re going to rely on an AI tool to “create art” is on a fool’s errand. It simply won’t work, because the tool isn’t designed to work that way. Instead, it’s a tool for generating some of the components you need to realize an overall artistic vision, one still led by a very human artist. Will AI do some work on the margins of filmmaking that would normally have been done by paid workers in the industry? Perhaps. Likely, even. But will the limited use of these tools also lower the barrier to entry, in terms of both the skill set and the budget needed to produce films, thereby increasing the overall output of films? I’m struggling to see how that would not be the case.
And at the end of the day, there’s still an artist calling the shots. Perhaps fewer total artists will be involved in a single movie, but the limited use of AI tools doesn’t somehow suck the entire soul from a film any more than the ease of digital editing over cutting physical film does. And just as a movie that is little more than pretty CGI, but otherwise sucks, will fail, so will lazy people trying to create entire films with AI. And fail hard.
Say it with me now: there is more nuance to this conversation than the hardliners and evangelists are bothering to acknowledge.
In a follow-up chat with Variety, Soderbergh expanded on his initial comments about using AI in future films. “I’m just not threatened by it . . . Ten years ago, I would have needed to engage a visual effects house at an unbelievable cost to come up with this stuff,” he said. “No longer. My job is to deliver a good movie, period. And this tool showed up at a moment when I needed it. I don’t think it’s the solution to everything, and I don’t think it’s the death of everything . . . There are some people that I have absolute love and respect for that refuse to engage with it. That’s their privilege. But I’m not built that way. You show me a new tool, I want to get my hands on it and see what’s going on.”
That’s an artist saying that, folks, not some Silicon Valley tech bro. And, to be clear, he might get it wrong. He may use the tool and his product might suck out loud. But trying to abort the use of a tool before it’s even been explored seems silly.
Kanji of the Day: 角 [Kanji of the Day]
角
Strokes: 7
Grade: elementary 2
Meanings: angle, corner, square, horn, antlers
On’yomi: カク
Kun’yomi: かど、つの
外角 (がいかく) — external angle
内角 (ないかく) — interior angle
一角 (いっかく) — corner
角度 (かくど) — angle
互角 (ごかく) — equal (in ability)
三角 (さんかく) — triangle
角界 (かくかい) — the world of sumo
折角 (せっかく) — with trouble
街角 (まちかど) — street corner
多角的 (たかくてき) — multilateral
Generated with kanjioftheday by Douglas Perkins.
Kanji of the Day: 弥 [Kanji of the Day]
弥
Strokes: 8
Grade: junior high school
Meanings: all the more, increasingly
On’yomi: ミ、ビ
Kun’yomi: や、いや、いよ.いよ、わた.る
弥生 (いやおい) — third month of the lunar calendar
弥生時代 (やよいじだい) — Yayoi period (c. 300 BCE-300 CE)
阿弥陀 (あみだ) — Amitabha (Buddha)
阿弥陀如来 (あみだにょらい) — Amitabha Tathagata
南無阿弥陀仏 (なむあみだぶつ) — Namu Amida Butsu
沙弥 (さみ) — male Buddhist novice
元の木阿弥 (もとのもくあみ) — ending up right back where one started
阿弥陀堂 (あみだどう) — temple hall containing an enshrined image of Amitabha
弥次 (やじ) — hooting
弥勒 (みろく) — Maitreya (Bodhisattva)
Generated with kanjioftheday by Douglas Perkins.
Learning at Tsurumi Ryokuchi (Expo ’90 Park): Hands-On Map and Wikipedia Editing [OpenStreetMap Japan]
This event uses Tsurumi Ryokuchi, the Expo ’90 Commemorative Park (花博記念公園鶴見緑地), as the subject for learning how to record and publish local information as open data. In the morning we will survey and photograph the park on site; in the afternoon we will move to the venue and edit OpenStreetMap and Wikipedia. Over a one-day program, you will experience the whole flow of finding, recording, and publishing local information. The content is designed to be approachable even for those with no OpenStreetMap or Wikipedia editing experience. Please register for the event at the following site: https://countries-romantic.connpass.com/event/389840/
The OSM Community [OpenStreetMap Japan]
OSM activity takes place all over the world, with information exchanged in many languages. English is the default, but OSM also provides mailing lists for each region, making communication within a region easier. At mapping parties held in various places, people who live in or care about an area gather to create its map data, learning about the area, exchanging knowledge and skills, and enriching the map in the process. This page introduces both online and offline communities.
Mapping Party at Seapass Park (シーパスパーク) [OpenStreetMap Japan]
For International Open Data Day 2026, we will hold a mapping party at Seapass Park (シーパスパーク) in Izumiotsu City. Participation is free and beginners are welcome; we look forward to seeing you! (^^)/ Let’s build a map of Izumiotsu that anyone can use. This time we will also explain how to edit uMap. Recommended for anyone interested in OpenStreetMap, uMap, open data, or online maps. Open to elementary school age and up; children in the lower grades should be accompanied by an adult. Date and time: Saturday, March 7, 2026, 13:00–15:00 (feel free to arrive late or leave early). Venue: the workshop space inside Seapass Park, Izumiotsu City, Osaka Prefecture (map below). For details, see the connpass page: https://connpass.com/event/385494/
Mappers Summit 2026 [OpenStreetMap Japan]
A meetup and social gathering for OpenStreetMap, the free map we all build together. Mappers share know-how, brainstorm tagging ideas, and talk them through. Whether you’re an expert or a beginner, drop in, enjoy the exchange, and level up your mapping skills. To propose a topic or register, please visit: https://osm.connpass.com/event/380259/
Call for Speakers! <State of the Map Japan 2025 in Osaka> [OpenStreetMap Japan]
State of the Map Japan 2025, the Japanese national OpenStreetMap conference, will be held in Osaka. The date to mark: Saturday, December 6. Details, including the schedule, are still being worked out. This year’s edition is co-hosted with the Wikimedians of Japan User Group and the Inochi Kaigi (いのち会議). Speaker proposals are now open for anything related to OpenStreetMap: mapping, business, research, development…
The OpenStreetMap Advent Calendar 2025 Is Underway! [OpenStreetMap Japan]
The OpenStreetMap advent calendar is back for 2025. Any topic is welcome, from a look back at the year to a self-introduction. Even if you don’t have a blog, you can write a post on Qiita, an OpenStreetMap user diary, zenn, note, medium.com, and so on, and register it. Write away at the following site! https://qiita.com/advent-calendar/2025/osmjp
Let’s Walk the Asake Area Together and Leave Our Mark on Wikipedia and the World Map! [OpenStreetMap Japan]
We are holding an open datathon to edit local information on Wikipedia and OpenStreetMap and share it with the world. In Yokkaichi City’s Asake area (the part of the former Asake District around Asake Plaza), we will walk the old Tokaido and the streetscape of the Happu Kaido, the route Omi merchants traveled between Ise Bay and Lake Biwa, and visit temples and shrines tied to Emperor Shomu’s visits and the wars of the Sengoku period. Afterward, we will hold a Wikipedia Town & mapping session in a meeting room at Asake Plaza. Date and time: Saturday, November 29, 2025, 8:45–17:15 (held even in light rain). Meeting point: Asake Plaza, 2F, Meeting Rooms 4 and 5, 296-1 Shimonomiya-cho, Yokkaichi City. Program: (1) Map team (editing OpenStreetMap from smartphones), led by Katsuyuki Sakanoshita (諸国・浪漫); (2) Wikipedia team (editing Wikipedia from PCs or smartphones), led by Miy…
Let’s Make Open Data! With a Wikimedia Mokumoku Session in Kitakyushu [OpenStreetMap Japan]
We will hold a town walk alongside a quiet Wikimedia co-editing session (mokumoku-kai). The hope is to enrich OpenStreetMap by adding historic sites and local scenery worth preserving; if this interests you, please consider joining. The venue’s front entrance is kept locked, so please register via the announcement site. The participants’ link there includes an arrival form; please use it to let us know when you reach the venue. https://techplay.jp/event/987206
OSMF Japan Corporate Supporting Member: TomTom Joins [OpenStreetMap Japan]
OSMF Japan, which supports OpenStreetMap activity, is pleased to announce that TomTom has joined as a corporate supporting member. TomTom already supports OSM worldwide as a Platinum Corporate Member of the OpenStreetMap Foundation (OSMF), and will now also directly support OSM activity in Japan. The company is an international firm with road data and vehicle trajectory data covering the entire world, and has contributed to OSM communities around the globe…
Kyoto! Town Walk! Mapping Party: Shimazu District, Amino-cho, Kyotango City [OpenStreetMap Japan]
A mapping party where we stroll around Kyoto and have fun building OpenStreetMap, the free map! Next up: the Shimazu district of Amino-cho, Kyotango City, a lovely town of chirimen silk crepe and ripening rice. We’ll take a leisurely walk and survey (field research) with the district chief as our guide, spend the afternoon mapping (editing the map) in OpenStreetMap, and wrap up with a social dinner of seafood and mountain fare!! [Registration URL] https://openstreetmap-kyoto.connpass.com/event/364106/
OpenStreetMap Namie Town Mapping Party [OpenStreetMap Japan]
We will hold an OpenStreetMap mapping party in Namie Town, Futaba District, Fukushima Prefecture, on Saturday, September 6, 2025. We will map OpenStreetMap and capture Mapillary imagery in the area from around Namie Station to National Route 6. We are organizing this party in the hope of enriching OSM coverage of Namie, which is still recovering from the Great East Japan Earthquake and the accident at TEPCO’s Fukushima Daiichi Nuclear Power Plant. https://www.openstreetmap.org/#map=14/37.49202/140.99008 Please take this opportunity to join us! Anyone interested in OpenStreetMap or mapping parties is welcome, beginners included. [Registration URL] https://peatix.com/event/4521047/view
OpenStreetMap’s 21st Birthday Party! [OpenStreetMap Japan]
The OpenStreetMap project began in 2004, and twenty-one years have now passed since this grand project to map the whole world together got underway. So, naturally, we’re throwing a birthday party! Come celebrate OpenStreetMap turning 21! The venue is the coworking space 金甘 near Hankyu Awaji Station: https://www.openstreetmap.org/node/9327336802
* Date and time: Saturday, 2025/08/16, 18:30 to around 22:30 (come and go as you like)
* Attendance is free, but feel free to bring food and drinks!
* We plan to order a birthday cake, so please chip in to split the cost!
* A coffee vendor is also planned
Archive Toyotsu and Tarumi in Maps and Photos! [OpenStreetMap Japan]
Let’s walk the town in spring and capture it in photos (Wikimedia Commons) and on the map (OpenStreetMap)! Around Hankyu Toyotsu Station in Suita City there seem to be all sorts of intriguing places. Beginners are very welcome! With a smartphone in hand, come learn how to share a neighborhood’s charm through maps and photos! [Registration URL] https://countries-romantic.connpass.com/event/347992/
Kyoto! Town Walk! Mapping Party #58: Shokado Garden [OpenStreetMap Japan]
A mapping party where we stroll around Kyoto and have fun building OpenStreetMap, the free map! Next up: Shokado Garden, home of the original shokado bento. The site is hard to reach by public transit, so we will drive there from near Iwashimizu-Hachimangu Station. We’ll take a leisurely sightseeing survey (field research), move to a meeting room in Muko City to map (edit) OpenStreetMap, and wrap up with an extra-spicy social dinner!! [Registration URL] https://openstreetmap-kyoto.connpass.com/event/345304/
Let’s Make a Map! Discover It on Foot! The Great Uomachi Mapping Operation [OpenStreetMap Japan]
We will explore the Uomachi shopping arcade together and add the sights we find to OpenStreetMap, the open map data project. Let’s create an occasion for more people to discover the charm of the Uomachi arcade. Note: this event uses laptops, so please bring your own on the day. We will also register OpenStreetMap accounts, so have an email address ready for sign-up. This event is held as part of International Open Data Day 2025. We look forward to seeing you. [Registration URL] https://techplay.jp/event/972743
OpenStreetMap Editing and a uMap Hands-On in Kitasukematsu (IODD2025 Izumiotsu) [OpenStreetMap Japan]
On March 1, 2025, from 9:00 to 15:00, we will hold a mapping party covering Sukematsu Shrine and the shopping streets around Nankai Railway’s Kitasukematsu Station in Izumiotsu City, Osaka Prefecture. We will also try out uMap. Please register via connpass: https://connpass.com/event/344707/ The event page with full details is here: http://blog.livedoor.jp/sensyu_od_gis/archives/43597324.html This event is held as part of International Open Data Day 2025. We look forward to seeing you.
Kyoto! Town Walk! Mapping Party #57: Oharano Shrine and Shoboji Temple [OpenStreetMap Japan]
A mapping party where we stroll around Kyoto and have fun building OpenStreetMap, the free map! Next up: Oharano Shrine and the neighboring Shoboji Temple. They are hard to reach by public transit, so we will meet at Nishiyama-Tennozan Station, a park-and-ride hub, and drive to the site in Yamashita’s car. We’ll take a leisurely sightseeing survey (field research), move to a meeting room in Muko City to map (edit) OpenStreetMap, and wrap up with an extra-spicy social dinner!! [Registration URL] https://openstreetmap-kyoto.connpass.com/event/343658/