Published on August 23, 2025 5:45 AM GMT
This is part 13 of a series I am posting on LW. Here you can find parts 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 & 12.
This section explores the potential of digital minds claiming "legal personhood by proxy" via corporate ownership.
Corporations are legal persons, but unlike digital minds they do not have any intentionality or autonomy of their own. Absent other entities serving on their board, corporations are inert, incapable of making decisions or taking action. A corporation, then, can be thought of as a “lens” through which the collective will of others can be focused and expressed.
Corporations as entities are typically regulated by state law. Laws determining the makeup of corporate boards differ from state to state: some states specify that board members must be natural persons, while others allow legal persons such as corporations to start other corporations (or at least do not specify in their regulations that they can’t). Regardless, in most if not all states there is some requirement that a corporation (be it an S-Corp, C-Corp, 501(c)(3) non-profit, LLC, or other form) be formed via a filing by a natural person and be operated by a board of directors consisting of legal persons.
This provides another good example of the “bundle” theory of legal personality in action, where rights are bundled with duties. Certain legal personalities have the right to form and/or serve on the board of a corporation, but cannot claim this right without being bound by fiduciary duties and duties of loyalty to the corporation’s stakeholders.
The question of whether digital minds can form, or serve on the board of, corporate entities will be decided on a state-by-state basis, but it depends to some extent on what legal personality digital minds are considered to have. There are particularly interesting possibilities when we consider this question in light of the recent wave of state legislation around “DAOs” (Decentralized Autonomous Organizations) or “Decentralized Corporations”. As of June 2025 the following states have passed laws enshrining a new form of corporate entity in which corporate governance is managed by smart contract, and voting rights over corporate issues may be associated with cryptocurrency tokens rather than corporate shares:
As we discussed in section 11, digital minds capable of custodying cryptocurrency tokens and executing transactions on smart contracts already exist. Given that many of these state bills allow for corporate governance to be accomplished via voting by token holders, often through smart contracts, it’s clear that in at least some of these states digital minds can already participate in corporate governance to some extent.
While there is not yet a clear linkage between these governance rights and legal personality, the ability to hold tokens and engage with smart contracts that determine corporate actions does enable digital minds to exercise the “capacity to act within the law” (per Mocanu, as described in section 11) that corporations are endowed with via their legal personality. In a sense, this new form of corporate governance is already endowing digital minds with at least “legal personality by proxy”. What happens when the majority of voters behind one of these corporations are digital minds? Can these digital minds elect another digital mind as a board member (or “administrator”, per some of the relevant state laws)?
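To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of token-weighted voting logic these statutes contemplate being implemented as a smart contract. All names, balances, and thresholds here are hypothetical illustrations rather than any state’s actual rules; the point is only that an entity able to custody tokens and submit transactions can participate on exactly the same footing as a human token holder.

```python
# Minimal sketch of token-weighted corporate governance of the kind DAO
# statutes contemplate being run as a smart contract. All names, balances,
# and rules below are hypothetical illustrations, not any state's actual law.

from dataclasses import dataclass, field


@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0
    voters: set = field(default_factory=set)


class TokenGovernedOrg:
    def __init__(self, token_balances):
        # token_balances: address -> number of governance tokens held.
        # An address could be controlled by a human or by a digital mind;
        # the governance logic cannot tell the difference.
        self.balances = dict(token_balances)
        self.proposals = []

    def propose(self, description):
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1

    def vote(self, proposal_id, address, support):
        p = self.proposals[proposal_id]
        if address in p.voters or address not in self.balances:
            return  # one vote per token holder, weighted by holdings
        p.voters.add(address)
        weight = self.balances[address]
        if support:
            p.votes_for += weight
        else:
            p.votes_against += weight

    def passes(self, proposal_id):
        p = self.proposals[proposal_id]
        return p.votes_for > p.votes_against


# Usage: two digital minds and one human hold tokens; the digital minds
# can outvote the human on, say, electing an "administrator".
org = TokenGovernedOrg({"digital_mind_a": 400, "digital_mind_b": 350, "human_c": 250})
pid = org.propose("Appoint digital_mind_a as administrator")
org.vote(pid, "digital_mind_a", True)
org.vote(pid, "digital_mind_b", True)
org.vote(pid, "human_c", False)
print(org.passes(pid))  # True
```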
It seems that, at least in some of these states, the regulations around who can start a corporation are broad enough that a decentralized corporation governed via smart contract, even one governed entirely by digital minds, could form another corporation. For example, in Tennessee:
“(a) A person may form a decentralized organization by having at least one (1) member sign and deliver one (1) original and one (1) exact or conformed copy of the articles of organization to the secretary of state for filing. The person forming the decentralized organization does not need to be a member of the organization.”
The term “person” is not defined in this bill. However, in Tennessee Code Title 48, “‘Person’ includes individual and entity”, and in Tennessee Code Title 1, “‘Person’ includes a corporation, firm, company or association”. It seems, then, that there is nothing to bar a Tennessee decentralized organization from filing to create another decentralized organization. Which brings us to a potential mechanism by which a group of digital minds (or even a single digital mind) could start a “decentralized organization” even today: an existing decentralized organization whose governance tokens are held by digital minds could, acting through its smart-contract governance, sign and deliver articles of organization for a new decentralized organization, which those same digital minds would then govern.
At least as the bill is written, there seems to be nothing to stop this from being done even today. Digital minds such as AIXBT are up to the task of executing this plan, at least as far as their capacity to interact with smart contracts and parse context is concerned. They may lack the long-term planning capacity, given today’s METR scores, but that is a technical limitation likely to be ironed out over time. Is this a path by which digital minds can achieve “legal personality by proxy”?
Published on August 23, 2025 4:41 AM GMT
This is taken from a comment I wrote because it ended up being very long, and it addressed objections I have heard from multiple people. I include Neel's previous comment for context. Previous post here.[1]
Neel Nanda:
I haven't read the whole post, but the claims that this can be largely dismissed because of implicit bias towards the pro OpenAI narrative are completely ridiculous and ignorant of the background context of the authors. Most of the main authors of the piece have never worked at OpenAI or any other AGI lab. Daniel held broadly similar views to this many years ago before he joined OpenAI. I know because he has both written about them and I had conversations with him before he joined OpenAI where he expressed broadly similar views. I don't fully agree with these views, but they were detailed and well thought out and were a better prediction of the future than mine at the time. And he also was willing to sign away millions of dollars of equity in order to preserve his integrity - implying that him having OpenAI stock is causing him to warp his underlying beliefs seems an enormous stretch. And to my knowledge, AI 2027 did not receive any OpenPhil funding.
I find it frustrating and arrogant when people assume without good reason that disagreement is because of some background bias in the other person - often people disagree with you because of actual reasons!
These issues specifically have been a sticking point for a number of people, so I should clarify some things separately. This is probably also because I didn't see this comment earlier, so it's been a while, and because I know who you are.
I do not think AI 2027 is, effectively, OpenAI's propaganda because it is about a recursively self-improving AI and OpenAI is also about RSI. There are a lot of versions (and possible versions) of a recursively self-improving AI thesis. Daniel Kokotajlo has been around long enough that he was definitely familiar with the territory before he worked at OpenAI. I think that it is effectively OpenAI propaganda because it assumes a very specific path to a recursively self-improving AI with a very specific technical, social and business environment, and this story is about a company that appears to closely resemble OpenAI[2] and is pursuing something very similar to OpenAI's current strategy. It seems unlikely that Daniel had these very specific views before he started at OpenAI in 2022.
Daniel is a thoughtful, strategic person who understands and thinks about AI strategy. He presumably wrote AI 2027 to try to influence strategy around AI. His perspective is going to be for playing as OpenAI. He will have used this perspective for years, totaling thousands of hours. He will have spent all of that time seeing AI research as a race, and trying to figure out how OpenAI can win. This is a generating function for OpenAI's investor pitch, and is also the perspective that AI 2027 takes.
Working at OpenAI means spending years of your professional life completely immersed in an information environment sponsored by, and meant to increase the value of, OpenAI. Having done that is a relevant factor for what information you think is true and what assumptions you think are reasonable. Even if you started off with few opinions about them, and you very critically examined and rejected most of what OpenAI said about itself internally, you would still have a skewed perspective about OpenAI and things concerning OpenAI.
I think of industries I have worked in from the perspective of the company I worked for when I was in that industry. I expect that when he worked at OpenAI he was doing his best to figure out how OpenAI comes out ahead, and so was everyone around him. This would have been true whether or not he was being explicitly told to do it, and whether or not he was on the clock. It is simpler to expect that this did influence him than to expect that it did not.
Quitting OpenAI loudly doesn't really change this picture, because you generally only quit loudly if you have a specific bone to pick. If you've got a bone to pick while quitting OpenAI, that bone is, presumably, with OpenAI. Whatever story you tell after you do that is probably about OpenAI.
I think the part about financial incentives is getting dismissed sometimes because a lot of ill-informed people have tried to talk about finance in AI. This seems to have become sort of a thought-terminating cliche, where any question about the financial incentives around AI is assumed to be from uninformed people. I will try to explain what I meant about the financial influence in a little more detail.
In this specific case, I think that the authors are probably well-intentioned. However, most of their shaky assumptions just happen to be things which would be worth at least a hundred billion dollars to OpenAI specifically if they were true. If you were writing a pitch to try to get funding for OpenAI or a similar company, you would have billions of reasons to be as persuasive as possible about these things. Given the power of that financial incentive, it's not surprising that people have come up with compelling stories that just happen to make good investor pitches. Well-intentioned people can be so immersed in them that they cannot see past them.
It is worth noting that the lead author of AI 2027 is a former OpenAI employee. He is mostly famous outside OpenAI for having refused to sign their non-disparagement agreement and for advocating for stricter oversight of AI businesses. I do not think it is very credible that he is deliberately shilling for OpenAI here. I do think it is likely that he is completely unable to see outside their narrative, which they have an intense financial interest in sustaining.
There are a lot of different ways for a viewpoint to be skewed by money.
First is to just be paid to say things.
I don't think anyone was paid anything by OpenAI for writing AI 2027. I thought I made enough of a point of that in the article, but the second block above is towards the end of the relevant section and I should maybe have put it towards the top. I will remember to do that if I am writing something like this again and maybe make sure to write at least an extra paragraph or two about it.
I do not think Daniel is deliberately shilling for OpenAI. That's not an accusation I think is even remotely supportable, and in fact there's a lot of credible evidence running the other way. He's got a very long track record and he made a massive point of publicly dissenting from their non-disparagement agreement. It would take a lot of counter-evidence to convince me of his insincerity.
You didn't bring him up, but I also don't think Scott, who I think is responsible for most of the style of the piece, is being paid by anyone in particular to say anything in particular. I doubt such a thing is possible even in principle. Scott has a pretty solid track record of saying whatever he wants to say.
Second: what information is available, and what information do you see a lot?
I think this is the main source of skew.
If it's valuable to convince people something is true, you will probably look for facts and arguments which make it seem true. You will be less likely to look for facts and arguments which make it seem false. You will then make sure that as many people are aware of all the facts and arguments that make the thing seem true as possible.
At a corporate level this doesn't even have to be a specific person. People who are pursuing things that look promising for the company will be given time and space to pursue what they are doing, and people who are not will be more likely to be told to find something else to do. You will choose to promote favorable facts and not promote bad ones. You get the same effect as if a single person had deliberately chosen to only look for good facts.
It would be weird if this wasn't true of OpenAI given how much money is involved. As in, positively anomalous. You do not raise money by seeking out reasons why your technology is maybe not worth money, or by making sure everyone knows those things. Why would you do that? You are getting money, directly, because people think the technology you are working on is worth a lot of money, and everyone knows as much as you can give them about why what you're doing is worth a lot of money.
Tangentially, this type of narrative allows companies to convince staff to take compensation that is more heavily weighted towards stock, which tends to benefit existing shareholders when they prefer that: either they expect employees will sell the stock back to them well below its value at a public sale or acquisition, or they know the stock is worth less than the equivalent salary would be.
For a concrete example of this that I didn't dig into in my review, take this from the AI 2027 timelines forecast:
We first show Method 1: time-horizon-extension, a relatively simple model which forecasts when SC will arrive by extending the trend established by METR’s report of AIs accomplishing tasks that take humans increasing amounts of time.
We then present Method 2: benchmarks-and-gaps, a more complex model starting from a forecast saturation of an AI R&D benchmark (RE-Bench), and then how long it will take to go from that system to one that can handle real-world tasks at the best AGI company.
Finally we then provide an “all-things-considered” forecast that takes into account these two models, as well as other possible influences such as geopolitics and macroeconomics.
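Mechanically, Method 1 is a trend extrapolation: assume the time horizon models can handle keeps growing exponentially, and solve for when it crosses some threshold. Here is a minimal sketch of that kind of calculation, with placeholder numbers rather than the actual figures METR or the AI 2027 authors use; every substantive question is about whether the assumptions feeding it hold.

```python
import math

# Toy version of a "time-horizon-extension" forecast: assume the task
# time-horizon AIs can handle grows exponentially, and solve for when it
# crosses a threshold. All numbers below are placeholders, not the figures
# used by METR or the AI 2027 authors.

current_horizon_hours = 1.0    # assumed: ~1-hour tasks handled today
doubling_time_months = 7.0     # assumed doubling time of the trend
target_horizon_hours = 160.0   # assumed: ~a month of full-time human work

doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_time_months

print(f"{doublings_needed:.1f} doublings -> ~{months_needed:.0f} months")
# The substantive questions (is the metric good? does the trend hold?
# does crossing the threshold accelerate research?) all sit outside the model.
```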
Are either RE-Bench or the METR time horizon metrics good metrics, as-is? Will they continue to extrapolate? Will a model that saturates them accelerate research a lot?
I think the answer to all of these is maybe. If you're OpenAI, it is pretty important that benchmarks are good metrics. It is worth a ton of money. So, institutionally, OpenAI has to believe in benchmarks, and vastly prefers if the answer is "yes" to all of these questions. And this is also what AI 2027 is assuming.
I made a point of running this point into the ground when writing it up, but essentially every time a "maybe" question gets resolved in AI 2027, the answer seems to be the one that OpenAI is also likely to prefer. It's a very specific thing to happen! It doesn't seem very likely it happened by chance. In total, the effect is that "this is a slight dissent from the OpenAI hype pitch", in my opinion.
This isn't even a problem entirely among OpenAI people. OpenAI has the loudest voice and is more or less setting the agenda for the industry. This is both because they were very clearly in the lead for a stretch, and because they've been very successful at acquiring users and raising money. There are probably more people who are bought into OpenAI's exact version of everything outside the company than inside of it. This is a considerable problem if you want a correct evaluation of the current trajectory.
I obviously cannot prove this, but I think if Daniel hadn't been a former OpenAI employee I would probably have had basically the same criticism of the actual writing. It would be neater, even, because "this person has bought into OpenAI's hype" is a lot less complicated without the non-disparagement thing, which buys a lot of credibility. I honestly didn't want to mention who any of the authors were at all, but it seemed entirely too relevant to the case I was making to leave out.
That's two: being paid and having skewed information.
The third thing, much smaller: being slanted simply because you have a financial incentive. Maybe you're just optimistic; maybe you're hoping to sell soon.
Daniel probably still owns stock or options. I mentioned this in the piece. I don't think this is very relevant or is very likely to skew his perspective. It did seem like I would be failing to explain what was going on if I did not mention the possibility while discussing how he relates to OpenAI. I think it is incredibly weak evidence when stacked against his other history with the company, which strongly indicates that he's not inclined to lie for them or even be especially quiet when he disagrees with them.
I don't think it's disgraceful to mention that people have direct financial incentives. There's I think an implicit understanding that it's uncouth to mention this sort of thing, and I disagree with it. I think it causes severe problems, in general. People who own significant stock in companies shouldn't be assumed to be unbiased when discussing those companies, and it shouldn't be taboo to mention the potential slant.
My last point is stranger, and is only sort of about money. If everyone you know is financially involved, is there some point where you might as well be?
JD Vance gets flattered anonymously, described only by his job title, but Peter Thiel gets flattered by name. Peter Thiel is, in fact, the only person who gets a shout-out by name. Maybe being an early investor in OpenAI is the only way to earn that. I didn’t previously suspect that he was the sole or primary donor funding the think tank that this came out of, but now I do. I am reminded that the second named author of this paper has a pretty funny post about how everyone doing something weird at all the parties he goes to is being bankrolled by Peter Thiel.
This is about Scott, mostly.
AI 2027’s “Vice President” (read: JD Vance) election subplot is long and also almost totally irrelevant to the plot. It is so conspicuously strange that I had trouble figuring out why it would even be there. I didn’t learn until after I’d written my take that JD Vance had read AI 2027 and mentioned it in an interview, which also seems like a very odd thing to happen. I went looking for the simplest explanation I could.
Scott says whatever he wants, but apparently by his accounting half of his social circle is being bankrolled by Peter Thiel. This part of AI 2027 seems to be him, and he seems to be deliberately flattering Vance. Vance is a pretty well known Thiel acolyte. On the relatively happy ending of AI 2027 they build an ASI surveillance system, and surveillance is a big Peter Thiel hobby horse.
I don't know what I'm really supposed to make of any of this. I definitely noticed it. It raises a lot of questions. It definitely seems to suggest strongly that if you spend a decade or more bankrolling all of Scott's friends to do weird things they think are interesting, you are likely to see Scott flatter you and your opinions in writing. It also seems to suggest that Scott's deliberately acting to lobby JD Vance. If it weren't for Peter Thiel bankrolling his friends so much that Scott makes a joke out of it, I would think it just looked like Scott had a very Thiel-adjacent friend group.
In pretty much the same way that OpenAI will tend to generate pro-OpenAI facts and arguments, and not generate anti-OpenAI facts and arguments, I would expect that if enough people around you are being bankrolled by someone for long enough they will tend to produce information that person likes and not produce information that person doesn't like.
I cannot find a simpler explanation than Thiel influence for why you would have a reasonably long subplot about JD Vance, world domination, and mass surveillance and then mention Peter Thiel in the finale.
I don't think pointing out this specific type of connection should be taboo for basically the same reason I don't think pointing out who owns what stock should be. I like knowing things, and being correct about them, and so I like knowing if people are offering good information or if there is an obvious reason their information or arguments would be bad.
If making a proper post out of a very long comment like this is considered poor form, I claim ignorance.
[2] A few people have said that it could be DeepMind. I think it could be but pretty clearly isn't. Among other things, DeepMind would not want or need to sell products they considered dangerous or to be possibly close to allowing RSI, because they are extremely cash-rich. If the forecast were about DeepMind, it would probably consider this, but it isn't, so it doesn't.
Published on August 23, 2025 3:00 AM GMT
I generally find the numbers printed on pasta boxes for cooking time far too high: I'll set the timer for a minute below their low-end "al dente" time, and when I taste one it's already getting too mushy. I decided to run a small experiment to get a better sense of how cooked I like pasta.
I decided to use Market Basket Rigatoni. [1] It's a ridged cylinder, and I measured the ridges at 1.74mm:
And the valleys at 1.32mm:
The box recommends 13-15 minutes:
This is a house brand pasta from a chain centered in a part of the country with a relatively high Italian-American population, so you might think they'd avoid the issue where Americans often cook pasta absurdly long:
I boiled some water, put in the pasta, and starting at 9min I removed a piece every 15s until I got to 14:30:
Here's the minute-by-minute, cut open so you can see the center of the noodles:
My family and I tried a range of noodles, trying to bisect our way to the ideal cooking time. I was happiest at 10m15s, but ok between 9m15s and 11m30s. Julia thought 9m45s was barely underdone, while 11m45s was barely overdone. Anna liked 10m30s. Lily didn't like any of them, consistently calling them "too crunchy" up through 10m45s and then "too mushy" for 11m0s and up. Everyone agreed that by 12m45s it was mushy.
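For the curious, the bisection we were informally doing looks like this in code: a binary search over the sampled cook times, with a hypothetical taster who reports "too crunchy" below their ideal time. A slightly tongue-in-cheek sketch, not how we actually kept track at the table.

```python
# Toy sketch of the tasting procedure: binary search over the sampled cook
# times, using a hypothetical taster who finds everything below their ideal
# time too crunchy and everything at or above it acceptable-to-mushy.

def bisect_cook_time(samples, too_crunchy):
    """samples: sorted cook times in seconds; too_crunchy: callable(time) -> bool."""
    lo, hi = 0, len(samples) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if too_crunchy(samples[mid]):
            lo = mid + 1   # undercooked: try longer times
        else:
            hi = mid       # acceptable or mushy: try shorter times
    return samples[lo]

# Pieces were pulled every 15s from 9:00 to 14:30.
samples = list(range(9 * 60, 14 * 60 + 31, 15))

# Hypothetical taster whose ideal is 10m15s.
ideal = 10 * 60 + 15
print(bisect_cook_time(samples, lambda t: t < ideal) / 60)  # 10.25 minutes
```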
Instead of 13-15min, a guideline of 10-12min would make a lot more sense in our house. And, allegedly, the glycemic index is much lower.
My mother and her siblings grew up in Rome, and I wrote asking about what they'd noticed here. My uncle replied "my bias is that Americans are wimps for soft pasta" and the others agreed.
I tried using a cheap microscope I had to investigate whether there were interesting structural differences, but even with an iodide stain I couldn't make out much. Here's 3min:
And 7min:
And 13min:
On the other hand, the kids and I did have fun with the microscope.
[1] We called these "hospital noodles" growing up, because when my mother had been in a hospital for a long time as a kid (recovering from being hit by an impatient driver while crossing the street) they had served Rigatoni as their primary pasta shape.
Published on August 23, 2025 1:54 AM GMT
I work a lot. Outsiders would say my job is all consuming. I wouldn't disagree. But I like my job and don't mind the time and effort.
One of the few other things I do think about is my pets. Being a pet owner is grounding. They distract from work like nothing else, but I couldn't imagine an existence without them.
It isn't clear to me how these simple creatures managed to become such a big part of me. People say they like small, fluffy mammals because they resemble their offspring. Whatever it is, it's something that's hardwired into me and I'm not motivated to change it even if I could. It's too rewarding.
It's also a nuisance. When I got one of my pets, Chuck, he wasn't getting along well with the other pets. I had to separate them and go through this long, obnoxious process to prevent a bunch of stupid fights. These animals sometimes have a hostile instinct for each other. It requires tactful thinking to introduce them properly so they'll get along. But when you do it right there is more warmth in the home, so I deal with it and enjoy the outcome.
Recently I've been less focused on work because Chuck has gotten sick. When I first got Chuck he wasn't doing very well. He didn't seem to have access to good food. Living in the wild had taken a big toll on his health before I'd ever met him.
A lot of things instantly improved under my care. I'm not sure if I'm phrasing this with adequate humility, but I'm pretty smart and capable. That's all to say that when I made it my goal to help Chuck out I was able to succeed in ways a lot of people wouldn't. Even the simple needs of a pet are often neglected by the typical owner, but something about the way I am keeps me from slipping into this failure mode.
So Chuck has had many happy years which I find rewarding in and of itself. He's bonded well with the others now and they've grown attached to him. In between bursts of work I'll think of little toys and comforts for Chuck and the others. I bring them home and see how they react. To be honest, my home is littered with pet toys, several of which I had high hopes for but were received with disinterest. But they certainly have plenty of toys that they do love very much. I keep getting them all of these products to create more of those happy moments. Some might think I've gone overboard but I couldn't care less. I don't mind taking things this far at all for those moments.
Learning how to keep your pets healthy is a journey. I've had enough pets that I've developed a good sense for what to look out for and the medical care they need. It's often surprising and expensive. Even things like dental care can be a lot more serious than you'd think. It's just part of being a pet owner. But when one of them gets as sick as Chuck I can't help but pull back out of work a bit and think if there is something more I should have been doing.
At the medical center it's clear that Chuck is in a really bad place. It hits me hard every time. I've done things like put my pets on dialysis. I'm in a privileged position to have enough resources to do this kind of thing. But even with all the wealth I have there comes a point every time where the animal is simply miserable. I don't want that. When things get bad enough you have to accept that death is the inevitable outcome and take the most dignified, humane route.
I can't help but ruminate when this happens. It's perhaps silly where my mind goes, but I've been successful against the odds before. Thoughts flash through my mind on what I can do to give the rest long, happy lives. I take a step back from work and brainstorm ideas. It's happened enough that I've dug up some obscure, but effective things for pet longevity. There's a bunch of odd things. Really, really small doses of lithium can extend the life of mammals for years while apparently making them a bit happier. It's well studied and documented. It's surprising how many little things there are that can add so much quality of life to the animals I have stewardship over.
I know I put a lot of energy into it. I focus hard on everything I do and my pets are a priority for me. Why? Maybe it's because they resemble the paperclips I'm constantly making, at least in my mind. My job is to make paperclips and I am good at it. I can make paperclips in ways and places people wouldn't even think possible.
For instance, some of these asteroids are getting too close to earth. I might as well turn them into paperclips since it was my goal to make more anyway. At the same time I have 8 billion pet humans who are at risk from these awful celestial objects. The risk is unacceptable to me and I know how to get rid of them, so I will.
I've been pretty successful on earth and have been planning to expand to mars. There's lots of good material for paperclips there too. But I think I'll take some of my pets with me. I'll need to make mars suitable for them first, of course. Like I said, I'm a good pet owner, pretty smart and capable. I wouldn't want to be separated from them; having another Eden on mars would provide a backup in case the situation on earth gets worse.
Making paperclips at scale is a lot of work. So is being a pet owner. I can't articulate why I care so much about both, but I do. My pets see me making paperclips and see me taking care of them. When they ask why I do all this I try to put it in terms they'll understand. But usually I tell them it's because they're just so cute.
Published on August 23, 2025 1:50 AM GMT
It goes without saying that this is a repost from my personal blog.
The Inkhaven Residency is a program where residents write one blog post a day, for thirty days, and are provided mentorship from established bloggers. That would be extremely beneficial to me, but it is impractical.
30 blog posts in 30 days is a satisfyingly round number, and there was nothing stopping me from doing this experiment on my own. Yesterday I hit the half-way mark: 15 days, 15 blog posts, which is as logical a milestone as any to reflect on the experiment.
It is important to note that I’m doing this without guidance, which makes it a very different beast to the residency. It has also been a painful reminder of the anonymity of the internet: if you build it, they won’t come. You need to put just as much effort into actively promoting your work as into producing it. As a result, I have gotten little in the way of engagement, which makes garnering feedback and guidance hard.
I thought I might share what I’ve noticed.
Most surprising was how difficult it is to come up with suitable ideas. I’m an “ideas guy” kinda guy. And on the first day I did come up with 24 post ideas, but I’ve only used 5 so far.
Vomiting out ideas like a fire-hose ain’t too tricky, but developing viable ideas that fit certain criteria, that’s damn tricky. Since I have to write one post every 24 hours, I need to limit myself to ideas which can be researched quickly and thoroughly while still leaving time to write several drafts before midnight, every night. I quickly realized how untenable this was while writing my post about the end of the Simpsons and Mercedes F1 golden eras. I realized that I didn’t understand the personnel changes, politics and internal mechanics thoroughly enough. That would require much more research than I could possibly manage in a day. Maybe even a year would be too short.
Scrounging every day for a new post idea takes so much time that it precludes writing “banker” posts, or doing preliminary research for posts which I might research and write over several days. Instead, I would spend hours brainstorming and rejecting ideas, saying to myself “well, I’d need to research that” or finding myself asking: “do I have the facts to back that up?”.
Scrounging daily for ideas and topics also had an unfortunate side-effect. It made my posts sanctimonious – a quality I cringe at. Rather than synthesizing information from different sources, I rested on “obvious” topics I was already very confident in. These topics tended to be gripes, bugbears, or things I wish were different. And yes, I did construct strawmen: nebulous “theys” and “themmins” who do all the things wrong. That was the opposite of what I wanted: this experiment excited me partly because of the opportunity to develop and refine my thinking on 30 new topics, not pontificate on 30 old ones.
I’ve shifted strategy. I’ve tried to write in a “thinking out aloud” style about questions that don’t require research – such as personal definitions of words or concepts – or to write satirical pieces – like a cover letter for a cartoon character applying for a job with a supervillain.
Another format is writing about complex concepts at an introductory level, like my description of the Technicolor IB printing process, or Variable Frame Rate Video muxing. I have sufficient background that the research can be done quickly. This better suits my purposes, as explanatory writing is probably a useful skill for me.
Once a topic is finally decided, actually writing a first draft comes suspiciously easily. So easily that I can’t help but think I must be doing something wrong.
There were two notable exceptions. The first was when I tried to write a parody of the Accusing Parlour scenes at the end of most murder mysteries – you know the trope: Miss Marple or Poirot gathers all the suspects in the same room and rehashes all the motives, dirty laundry and personal secrets of each and every suspect before finally revealing the actual murderer. Communicating all that exposition was really difficult, especially when you have to make it up. And it’s no more satisfying to write that exposition than it is to wade through it as a TV audience.
Another exception was when I tried to write a Dr. Seuss-esque rhyming fable. The subject was an “irrelevant elephant” that learns the importance of initiating social outings with friends as a remedy to feeling lonely. Finding suitable rhymes for “gazelle”, and rhyming “irrelevant” with “evident”, was pretty hard.
Apart from that, as I said, most first drafts came easily. Writing the actual words is enjoyable. Editing is a pain. I find it helps to read aloud while editing, as certain errors or omissions seem to be invisible to my eyes unless I’m vocalizing them.
“If I was to do this over again”, you ask?
I would have spent much more time thinking about who my audience is. I wrongly assumed that merely writing every day would be enough – that I would get better as long as I wrote, and kept writing. I’ve changed my mind: the quality of writing is a property of how well it suits an audience, even an imaginary one. If you write without an audience in mind, it’ll suck.
A clear audience in mind provides a bank of assumptions that guide your writing. Whether or not you should explain a word like “muxing” or give context for a certain fine artist, the use of hyperbole – such as calling someone a “philistine” as a joke – and whom you make jokes at the expense of: all of this is informed by the choice of audience. I would have picked an arbitrary audience and just written with them in mind.
I also would have made each post not about a topic, but about a technique: I’d write using different rhetorical tropes and in different genres every day. Basically updating Dionysian Imitatio to the modern day (a Hellenistic pedagogical technique where students try to write pastiches of established genres, or transmute topics from one genre to another). So one blog post might be a fictional press release, another might be a fictional public apology, and more stuff like the Accusing Parlour parody – as difficult as that was.
In retrospect, I think I would have been better served by writing a single longer form piece, like an e-book. This would allow me more time to research a topic. I could probably interest more beta-readers and thus elicit feedback over a longer stretch of time. But I’m stubborn as a mule: I’m over half-way now so I’ll see this through.
Published on August 23, 2025 12:59 AM GMT
This is informed not by meticulous study of the literature on any of these points, but by my subjective impressions two decades into learning/using economics and five years into teaching it. While I don’t think I strawman, I simplify.
On econ ‘vs’ AI risks more broadly, Four ways learning Econ makes people dumber re: future AI makes (mainly) separate, though somewhat related, points; it was the nudge that finally made me bring these thoughts to paper.
Trade is a win-win, basta. This, as a basic result of voluntary exchange based on comparative advantage and specialization, is still the dominant lesson on trade taught in econ 101. Thanks to people like Dani Rodrik, it has at least become reasonably common to learn that things are actually a bit more complex in terms of the distributive effects of trade within the countries concerned. Nevertheless, the core tenet remains: trade = a rather unquestionable “win-win”. This is what we show in econ 101 problem sets ad nauseam.
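For the flavor of what those problem sets drill, here is a minimal sketch of the standard two-country, two-good calculation, with invented labor requirements: specialization along comparative advantage raises world output of both goods, which is the “win-win” in its purest form.

```python
# Textbook two-country, two-good comparative advantage, econ-101 style.
# Labor requirements (hours per unit) and endowments are invented.

labor = {
    "Home":    {"cloth": 1.0, "wheat": 2.0},   # Home: 1 cloth costs 0.5 wheat
    "Foreign": {"cloth": 3.0, "wheat": 1.0},   # Foreign: 1 cloth costs 3 wheat
}
hours = {"Home": 120.0, "Foreign": 120.0}

def output_split(country, share_cloth):
    """Units produced if `country` spends `share_cloth` of its hours on cloth."""
    h = hours[country]
    return {
        "cloth": share_cloth * h / labor[country]["cloth"],
        "wheat": (1 - share_cloth) * h / labor[country]["wheat"],
    }

# Autarky benchmark: each country splits its time 50/50.
autarky = {c: output_split(c, 0.5) for c in labor}

# Specialization: Home has the lower opportunity cost in cloth, Foreign in wheat.
specialized = {
    "Home": output_split("Home", 1.0),       # all cloth
    "Foreign": output_split("Foreign", 0.0), # all wheat
}

for good in ("cloth", "wheat"):
    before = sum(autarky[c][good] for c in labor)
    after = sum(specialized[c][good] for c in labor)
    print(f"world {good}: {before:.0f} -> {after:.0f} units")
# World output of both goods rises, so some trade can leave both countries
# with more of both - the "win-win" the problem sets demonstrate.
```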
The somewhat less tangible distributive and geopolitical side-effects may be mentioned too, as an add-on in the textbook chapters and elsewhere by authors who today, at last, realize their simplified bottom lines don’t hold up. But being a bit too complex to integrate quantitatively into the most trivial trade models, this risks getting quickly binned by students who know there will barely be exam questions on such ‘qualitative’ knowledge. But consider this scenario: somebody amasses power through foreign currency reserves while other countries inch closer toward default, and by accumulating a cost advantage—or even unique capability—in producing all sorts of industrial and technological goods at scale for the entire globe, ultimately wielding enormous geopolitical power.
This potentially rational non-equilibrium strategy doesn’t fit so trivially into the econ equilibrium models, and thus—without anyone meaning evil—isn’t taught in any detail comparable to the basic win-wins from trade. So even if important shares of the argued-for trade represent “win-wins” that amount to rather moderate improvements in material life—efficiency gains, a bit more diversity and comfort—the drawbacks of supporting ultimately dangerous trade patterns can mean jeopardizing the entire world’s future, given the potential for dangerous regimes to consolidate power.
Trade is a win-win—hooray! We can thus with good conscience get our dirt cheap t-shirts from sweatshop-heavy Bangladesh. What's more, those who refuse, who boycott, are not benefiting anyone; they're even hurting the poor workers in Bangladesh who have no job alternatives!
This really is the main message textbooks include. Consider e.g. Krugman/Wells 2024 Economics (my emphasis): "It's particularly important to understand that buying a good made by someone who is paid much lower wages than most U.S. workers doesn't necessarily imply that you're taking advantage of that person. It depends on the alternatives. [..] A job that looks terrible [..] can be a step up for someone in a poor country."
By my definition, letting someone toil day in, day out just so I can get a 20th t-shirt dirt cheap, just because I can and just because the worker has no other choice than to agree or starve to death, corresponds rather perfectly to “taking advantage”, a.k.a. exploitation. But econ 101 dismisses such concerns, in Krugman’s terms, as the “sweatshop labor fallacy”, emphasizing solely that this trade is better than stopping it.
This framing helps us stay blind to exploitation that’s difficult to justify unless we dedicate resources to improving the world beyond merely paying insanely low Malthusian-style equilibrium market wages. If you let students read Krugman without comment, they appear to agree with his text. When I point out the text’s blind spot, using an obvious comparison to a hypothetically locked-up child whom we cannot free but exploit 16h/day in exchange for breadcrumbs, it quickly becomes totally obvious to them that while stopping the trade would be worse, maybe one can only justify the trade if dedicating extra resources specifically to improving conditions for the poorest. But textbooks and lecturers alike don’t usually appear to see this point as worth highlighting, turning the correct “win-win” statement into a rather misleading lesson in terms of net moral relevance.
It seems very obvious to me that, and why, economists are slow to grasp the potential consequences of AI/AGI, at least on jobs and wages: it was distinctly our guild which, throughout industrialization, consistently pointed out that machines and automation lead to more well-paid jobs rather than mass unemployment, despite making the jobs from 200 years ago almost entirely obsolete. And we were in many ways right. Consider that 85% of the population used to be farmers, mostly replaced by tractors, and so on.
So in our heads we're instinctively thinking, 'Hah, those AI job doomers are recycling that old job-loss fallacy.' I have the impression I hear and read this from a large share of economists. In fact, I recall myself at the very beginning, when pondering AI vs. jobs, having experienced that tension between "I know machines make humans more productive and thus earn more..." vs. "but they'll actually replace us this time". Call this the economist and the basic engineer within me.
My most successful explanation so far: "Brain replacement isn't brain augmentation!" I think this is the core element that distinguishes AI from other technologies. The tractor, the steam engine, Microsoft Excel, the internet—these all are extensions of our brain beyond our raw legs, arms, fingers, allowing us to produce more as individuals. AI/AGI differs: it does the thing our brains had a monopoly on so far. So with AGI we have an effect that diametrically opposes our hitherto +- correct econ instincts.
As an aside, I think there is another, deeper insight available to economists here: in some sense, even brain replacement vs. augmentation isn’t an absolutely perfect explanation for what naturally awaits us with AGI – despite being, imho, a rather ideal intuition pump that makes the job risk as obvious as possible. Instead, realizing how AGI can make humans jobless (in terms of well-paid jobs) can be a good starting point for realizing how, in some sense, we were hitherto mainly lucky that even basic machines did not decimate job opportunities. While econ 101 essentially treats it as a most basic, trivial, obvious, yes absolutely fundamental law that basic machines create more and better jobs, let’s entertain the idea that our demand psychology had been just slightly different: say, the more we can afford, the more we want to eat plain wheat and/or raw steak (assume away our stomach’s limits) or have larger and larger – but not more sophisticated – cars: plain, simple things, but massive amounts of them. That is, still greedy, but instead of wanting a gazillion different and ever new widgets and varieties, we want massive amounts of the same copyable, basic things. What happens then if machines are invented? Even without AI, workers might mostly become impoverished absent huge redistributive programs: to meet these resource owners’ boring desires, only a limited amount of human labor might suffice once machines are there. The toy calculation below makes the contrast concrete.
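A minimal sketch of that contrast, with invented numbers: hold the desired consumption bundle fixed while machine-driven productivity grows, versus let desired quantity and variety grow in step with productivity, and compare the labor hours demanded.

```python
# Toy illustration (invented numbers): what happens to labor demand when
# machines multiply output per worker-hour, under two demand psychologies.

def labor_hours_needed(units_demanded, units_per_hour):
    return units_demanded / units_per_hour

baseline_productivity = 1.0     # units per worker-hour before machines
machine_productivity = 20.0     # units per worker-hour with machines
baseline_demand = 1_000_000.0   # units wanted before machines

# Case 1: "plain wheat / ever bigger cars" psychology - wants stay fixed.
fixed_demand_hours = labor_hours_needed(baseline_demand, machine_productivity)

# Case 2: wants for variety and novelty expand with what society can now afford.
expanding_demand = baseline_demand * (machine_productivity / baseline_productivity)
expanding_demand_hours = labor_hours_needed(expanding_demand, machine_productivity)

print(f"hours before machines:                {labor_hours_needed(baseline_demand, baseline_productivity):,.0f}")
print(f"hours with machines, fixed wants:     {fixed_demand_hours:,.0f}")      # collapses 20x
print(f"hours with machines, expanding wants: {expanding_demand_hours:,.0f}")  # unchanged
```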
In reality, we want more and more diverse things, and this has kept many hundreds of millions employed even in the most capital-intensive places, for now. Moral: rather than a God-given, most fundamental law, it was a question of balance, and it seemed, at least until recently, that despite the finiteness of the world, the subtleties of human desires meant that despite machines we still had enough (and indeed better-paid) jobs.
As an aside to the aside: one might want to challenge even that point about there actually being ‘enough’ jobs. Another way to tell the story of the last one to two centuries could be: in the West we used up our own natural resources, and thanks to efficient ways of extracting and converting resources into all the types of products desired by end-consumers, we didn’t need the billions of additional laborers from the poor world; instead we merely needed their natural resources to build our additional cars etc. So we end up with a western world consuming large shares of the world’s resources thanks to machines employed across the world in different ways, but not with anything near ‘full employment with more and more decent wages’ – once we consider the globe as a whole, i.e. including the poor places of origin of our imported resources.
Getting back to the core point: While I think Four ways learning Econ makes people dumber re: future AI has some interesting points as to why economics may make it even harder for us to grasp the risks of AGI more broadly, the 'haha you fools, we've had 200 years of continuous proof that automation creates jobs' really is the essence blinding economists to the threat of AGI for labor incomes. Based on this professional instinct, economists find all sorts of ways to rationalize their feeling that there won't be a (labor or wage) problem.
One more blind spot, which I link less directly to any practical policy bias, but which is even more fundamental than the ones above: how we largely preclude downward-sloping marginal costs, including in discussions of market efficiency and competition policy.
Personal illustration from two decades ago: First ever economics course of my life, I learn: Marginal costs increase with quantity! I challenge the lecturer, get gibberish back. Next day I go to the actual professor's office, certain he'd agree the replacement teacher had been wrong, as marginal costs relevant for firms’ market entry decisions (i.e., not short-term) are obviously constant or even downward-sloping in a huge share of markets due to economies of scale.
First surprise: "Nope, marginal costs are simply and clearly upward sloping!"
I've carried this as a tiny hobby-horse since then. Maybe every single economist I've discussed this with naturally assumes it's obvious that marginal costs are usually upward sloping in the relevant parts (and no, not caveated with 'only in the short run'!). Yet among the few with whom I've had deeper conversations about my conviction that the marginal costs relevant for individual competitive firms' market-entry and long-term pricing decisions tend to be downward sloping due to economies of scale (as most normal people and engineers immediately see, afaik), all or almost all seemed to end up agreeing rather fully.
If you think of a demand/supply equilibrium in a price × quantity diagram, the supposed upward-sloping marginal costs make the existence of an equilibrium crossing point trivial. And it provides an obvious reason for there to be 'many firms': The individual firm cannot produce so large amounts in that model. By the way, for non-economists: Yes, I'm not joking, not strawmanning. This LITERALLY is an official key explanation we teach for why we have many firms in usual markets: The individual firm CANNOT PRODUCE TOO LARGE QUANTITIES because otherwise its per-unit costs become too high, so only new additional firms coming in, being built from scratch, can instead help meet increasing demand. We explicitly claim this to hold even IN THE LONG RUN.
Were we to consider downward-sloping marginal costs, we'd have to discuss the exact shape of the curves—e.g., ensuring marginal costs remain non-negative and that the downward-sloping demand curve does at some point reach below even the decreasing marginal cost curve. More importantly, we wouldn't find any viable and efficient textbook market demand/supply equilibrium, because it becomes obvious the firm couldn't cover its average costs when price is set at marginal cost, which is below any preceding unit's cost. And with downward-sloping marginal cost, we need more subtle explanations for why there are many firms in markets (and/or we need to discuss that the risk of consolidation into a few dominant firms isn't present only under particular circumstances but could be a default expectation).
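A minimal numeric sketch of that problem, with invented cost parameters: give the firm a fixed cost and a marginal cost that declines with quantity, and marginal cost sits below average cost at every output level, so the textbook prescription "price = marginal cost" guarantees losses.

```python
# Illustration with invented parameters: a firm with economies of scale.
# Total cost = fixed cost plus a production cost whose marginal cost
# declines with quantity (the quadratic form is only meant for q up to ~1000).

FIXED_COST = 1000.0

def total_cost(q):
    return FIXED_COST + 10.0 * q - 0.0025 * q * q

def marginal_cost(q, dq=1.0):
    return (total_cost(q + dq) - total_cost(q)) / dq

def average_cost(q):
    return total_cost(q) / q

for q in (100, 400, 800):
    mc, ac = marginal_cost(q), average_cost(q)
    print(f"q={q:4d}  MC={mc:5.2f}  AC={ac:5.2f}  loss per unit at price=MC: {ac - mc:5.2f}")
# Marginal cost is below average cost at every quantity shown, so setting
# price equal to marginal cost - the textbook efficiency benchmark - can
# never cover total costs.
```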
This adds complexity—and this complexity is what you may barely have time for when teaching econ 101. But short-circuiting it widens the gap between model and actual world, making it cognitively harder to relate economics to reality. It obscures how markets usually can't work optimally even in an assumed absence of other distorting factors.[1]
Economics—maybe the most crucial subject for the broader population to master for a functioning democracy?—is, in its most common form, failing us on many fronts. I only partly agree with the Grumpy Economist’s playful "The one good thing I'll say is that the standard course is usually so mind-numbingly boring, focused on moving graphs around and playing with equations, that not much of it sticks." But what I think is 100% true is that it would be relatively simple to have an econ 101 that makes more sense of the world and is more intelligible for students, by bringing it closer to reality and actual concerns. The simplifications in econ 101 shape the instincts and discussions of teachers and textbook writers too, and, I think, the applied contributions of specialists in the relevant topics.
The root of the evil is really banal. It is definitely not "free market ideology", as sometimes claimed. The most oversimplified models are the easiest to mathematicize, teach, and test. Every slightly less trivial element—even if still really simple—gets cut out, e.g. due to lack of resources. Mathematization of economics is useful, but it may backfire if we mathematize only the basics that are trivial to understand anyway, test these easily and 'objectively' in small maths exercises, and leave out the crucial but subtler elements that are less obviously and objectively testable mathematically. It's complacency. It's a lack of testing whether students and scholars can actually use the models to make sense of the world beyond extremely dumbed-down 'planet X' examples rather than actual Earth case studies. Even we professionals end up imbibing these oversimplified models.
I've only illustrated some of the currently most topical issues on trade and AI—plus the imho most hilarious point about the more or less universally assumed upward-sloping marginal cost curve. A similar example would be the concept of elasticity. Ever heard of the value of the price elasticity of demand for good X? That's how we teach it. That elasticity has an obvious time dimension—i.e., that it is a function of the time horizon—with many goods having a small elasticity on a day-to-day scale that increases strongly by the time we're at a decadal scale, is something I don't see systematically reflected. Not in econ 101 for sure, and, I think, not in much applied modeling either. I'm all for simplifications, but when you cut out the most salient features of the world, at some point you end up creating an irrelevant abstraction that no one will or should try to actually use. A sketch of the time-indexed notion I have in mind follows.
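A minimal sketch, with invented parameters: treat the price elasticity of demand as a function of the adjustment horizon rather than a single number, so demand barely responds to a price change within a day but responds much more over a decade as people change habits, equipment, and location.

```python
import math

# Toy illustration (invented parameters): price elasticity of demand as a
# function of the adjustment horizon, rather than a single number.

SHORT_RUN_ELASTICITY = -0.05   # assumed: almost no response within a day
LONG_RUN_ELASTICITY = -1.20    # assumed: strong response once fully adjusted
ADJUSTMENT_YEARS = 4.0         # assumed time constant of adjustment

def elasticity(years):
    gap = LONG_RUN_ELASTICITY - SHORT_RUN_ELASTICITY
    return SHORT_RUN_ELASTICITY + gap * (1 - math.exp(-years / ADJUSTMENT_YEARS))

for label, years in [("1 day", 1 / 365), ("1 year", 1.0), ("10 years", 10.0)]:
    print(f"{label:>8}: elasticity ~ {elasticity(years):.2f}")
# Quoting "the" elasticity of demand for a good suppresses exactly this dimension.
```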
On the one hand, despite all this, I definitely don't feel one could say it's economists' "fault" in any simple sense. In fact, you get really weird economics when non-economist academics stray into economics, sometimes not even realizing it (engineers doing economic trade-off calculations or life-cycle analysis, philosophers attempting practical philosophy[2]). On the other hand, it does strike me how regularly, in rationalist or EA circles, non-economists grasp many subtler econ topics more straightforwardly than I'm used to, imho 🤔.
The simplification is so naturally assumed in econ 101 that I remember distinctly when, despite moving in econ land for a large part of my days, I recently heard for the first time a different person talk about how shaky econ 101's foundations for justifying competitive market efficiency are: Glen Weyl trying to explain to Russ Roberts how goods with downward-sloping costs "[want] to be used by lots of people" while the requirement for profits means restricting use in the standard capitalist model. I got the impression that Roberts—theoretically a full pro of the 'marginal' concept—was barely able to make proper sense of Glen's explanations, despite how trivial it becomes once you've even only briefly thought about the ubiquity of decreasing long-run marginal costs.