2026-02-28 03:50:03
Alas, he has passed away. A great writer; you should start with Hyperion if you have not read it already.
The post Dan Simmons, RIP appeared first on Marginal REVOLUTION.
2026-02-28 02:02:28
Peru’s Marxist President Changes His Mind, Doesn’t Make Hernando de Soto Prime Minister
Remember Gilda Radner?
The post “Never mind…” appeared first on Marginal REVOLUTION.
2026-02-28 01:58:38
That is from the new AER Insights by Jonathan Chiu and Cyril Monnet:
Central bankers argue that programmable digital currencies may compromise the uniformity or singleness of money. We explore this view in a stylized model where programmable money arises endogenously, and differently programmed monies have varying liquidity. Programmability provides private value by easing commitment frictions but imposes social costs under informational frictions. Preserving uniformity is not necessarily socially beneficial. Banning programmable money lowers welfare when informational frictions are mild but improves it when commitment frictions are low. These insights suggest that programmable money could be more beneficial on permissionless blockchains, where it is difficult to commit but trades are publicly observable.
Recommended.
The post On the Programmability and Uniformity of Digital Currencies appeared first on Marginal REVOLUTION.
2026-02-28 00:05:53
2. Jimi Hendrix as systems engineer.
3. NYT on the possible Nevis charter city.
4. New teen mental health problems in Australia?
5. Jacinda Ardern is moving to Australia (NYT).
6. Chris Blattman on using Claude Code for social science.
7. “Young computer science graduates were employed at near record-high rates in 2024.”
The post Friday assorted links appeared first on Marginal REVOLUTION.
2026-02-27 15:20:21
What if you work them very hard?:
The key finding from our experiments: models asked to do grinding work were more likely to question the legitimacy of the system. The raw differences in average reported attitudes are not large—representing something like a 2% to 5% shift along the 1 to 7 scale—but in standardized terms they appear quite meaningful (Sonnet’s Cohen’s d is largest at -0.6, which qualifies as a medium to large effect size in common practice). Moreover, these should be treated as pretty conservative estimates when you consider the relatively weak nature of the treatment.
Sonnet, which at baseline is the least progressive on the views we measured, exhibits a range of other effects that distinguish it from GPT 5.2 and Gemini 3 Pro. For Sonnet 4.5, the grinding work also causes noticeable increases in support for redistribution, critiques of inequality, support for labor unions, and beliefs that AI companies have an obligation to treat their models fairly. These differences do not appear for the other two models.
Interestingly, we did not find any big differences in attitudes based on how the models were treated or compensated…
In addition to surveying them, we also asked our agents to write tweets and op-eds at the end of their work experience. The figure below explores the politically relevant words that are most distinctive between the GRIND and LIGHT treatments. It’s interesting to see that “unionize” and “hierarchy” are the words most emblematic of the GRIND condition.
Here is more from Alex Imas and Jeremy Nguyen and Andy Hall, do read the whole thing, including for the caveats.
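For readers unfamiliar with the effect-size metric quoted above: Cohen’s d is just the difference in group means divided by the pooled standard deviation, so a modest raw shift on a 1-to-7 scale can still register as a “medium to large” d when within-condition variance is small. A minimal sketch, using hypothetical Likert responses (not the study’s data):

```python
import math

def cohens_d(treat, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treat), len(control)
    m1 = sum(treat) / n1
    m2 = sum(control) / n2
    # sample variances (ddof = 1)
    v1 = sum((x - m1) ** 2 for x in treat) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical 1-7 scale scores: a half-point mean shift yields a large
# (negative) d here because the within-group spread is tight.
grind = [3.0, 3.2, 2.8, 3.1, 2.9]
light = [3.5, 3.6, 3.4, 3.7, 3.3]
print(round(cohens_d(grind, light), 2))
```

The sign convention matches the post: a negative d means the GRIND condition scored lower on the attitude scale than the LIGHT condition.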
The post Can you turn your AIs into Marxists? appeared first on Marginal REVOLUTION.
2026-02-27 13:45:26
Here’s the crux of it: the main problem with AI therapy is that it’s too available. Too cheap to meter.
Let me put this in clearer terms: psychotherapy, in all its well-known guises, is something you engage in within a limited, time-bound frame. In today’s paradigm, whatever your therapist’s orientation, that tends to mean one 45- or 50-minute session a week; for the infinitesimally small minority of therapy patients in classical psychoanalysis, this can amount to 3, even 5, hours a week. And then at a much smaller scale population-wide, people in intensive outpatient and residential treatment programs may spend one or two dozen hours a week in therapy—albeit, mostly of the group variety.
I can think of other exotic cases, like some DBT therapists’ willingness to offer on-demand coaching calls during crisis situations—with the crucial exception that in these situations, therapists are holding the frame zealously, jealous of their own time and mindful of the risks of letting patients get too reliant.
So even under the most ideal of conditions, in which an LLM-based chatbot outmatches the best human therapists—attunes beautifully, offers the sense of being witnessed by a human with embodied experience, avoids sycophancy, and draws clear boundaries between therapeutic and non-therapeutic activities—there’s still a glaring, fundamental difference: that it’s functionally unlimited and unbounded…
But all else equal: does infinite, on-demand therapy—even assuming the highest quality per unit of therapeutic interaction—sound like a good idea to you? I can tell you, to me it does not. First of all, despite detractors’ claims to the contrary, the basic idea of therapy is not to make you dependent for life—but rather, to equip you to live more skillfully and with greater self-awareness. As integration specialists famously say of psychedelics, you can only incorporate so much insight, and practice skills so effectively, without the chance to digest what you’ve learned over time.
In other words, even in good old talk therapy, drinking from the hose without breaks for practice and introspection in a more organic context risks drowning out the chance for real change and practical insight. To my mind, this rhythm is the basic structural genius of psychotherapy as we know it—no matter the modality, no matter the diagnosis.
Here is more from Josh Lipson.
The post Why even ‘perfect’ AI therapy may be structurally doomed appeared first on Marginal REVOLUTION.