Marginal Revolution

Blog of Tyler Cowen and Alex Tabarrok, both of whom teach at George Mason University.

*Being and Time: An Annotated Translation*

2026-02-24 16:32:49

Translated from the German by Cyril Welch.

Periodically I am asked if I have read Being and Time, and I always give the same response: “I have looked at every page.”

I also have spent time with it in German, though not for every page.  But have I read it?  Read it properly?  Can anyone?

Is the book worth some study?  Yes.  But.

People, this volume is the best chance you are going to get.

The post *Being and Time: An Annotated Translation* appeared first on Marginal REVOLUTION.

Is there an aggregate demand problem in an AGI world?

2026-02-24 13:54:33

No.  Let’s say AI is improving very rapidly, and affecting the world more rapidly and more radically than I think is plausible.  Let’s just say.

All of a sudden there are incredible things you can spend your money on.

Since there is (possibly) radical deflation, you might be tempted to just hold all your money and buy nothing.  Pick vegetables from your garden.  But the high marginal utility of the new goods and services will get you to spend, especially since you know that plenitude will bring you, in relative terms, a lower marginal utility for marginal expenditures in the future.
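The intertemporal logic here can be put in toy numbers. The sketch below is purely illustrative and not from the post: it assumes hypothetical log utility, so the marginal value of a dollar spent falls as consumption rises, and it shows why even steep deflation need not make hoarding attractive when plenitude is coming.

```python
# Toy numbers (illustrative assumptions, not from the post):
# with diminishing marginal utility, a known future flood of goods
# lowers the marginal value of a dollar spent later, so holding cash
# through the deflation can lose to spending now.

def marginal_utility(consumption):
    # Hypothetical log utility: u(c) = ln(c), so u'(c) = 1/c.
    return 1.0 / consumption

mu_today = marginal_utility(1.0)    # scarce goods today
mu_future = marginal_utility(10.0)  # plenitude tomorrow

# Even 50% deflation (a dollar buys twice as much later) does not
# offset the tenfold drop in marginal utility under these numbers:
assert 2.0 * mu_future < mu_today
```

The particular curve and magnitudes are arbitrary; the point is only that deflation rewards waiting linearly while plenitude erodes the value of waiting through the utility function.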

You might even go crazy spending.  If nothing else, buy new and improved vegetable seeds for your garden.  That same example shows that spending is robust to you losing your job, even assuming no reemployment is possible.  In this world, there are significant Pigou effects on wealth.

Fed policy has no problem mattering in this world.  Other people of course will wish to use the new Fed-sprayed liquidity to invest.  They might even invest in AI-related goods and services, not all of which will be controlled by “billionaires.”

Liquidity trap arguments, if they are to work at all, require a pretty miserable environment for investment and also consumption.

Note, by the way, that liquidity traps were supposed to apply to currency only!  If you try to apply the concept to money more generally, when most forms of holding money bear interest-rate returns, the whole concept collapses.

So there is not an aggregate demand problem in this economy, even if the social situation feels volatile or uncomfortable.  After that, Say’s Law holds.  If AI produces a lot more stuff, income is generated from that and the economy keeps going, whether or not the resulting distribution pleases your sense of morality.  Along the way, prices adjust as need be.  If unemployment rises significantly, prices fall all the more.  I am not saying everyone ends up happy here, but you cannot have a) a flood of goods and services, and b) billions accruing to the AI owners, without also c) prices at a level where most people can afford to buy a whole bunch of things.  Otherwise, where do you think all the AI revenue is coming from?  The new output has to go somewhere, and sorry, people, it is simply not all trapped in currency hoards.  Be just a little Walrasian here, please.  (I would call it Huttian instead.)
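The Walrasian point can be sketched with a toy tâtonnement process. Everything below is a hypothetical illustration, not from the post: an assumed linear demand curve, and a price that adjusts in the direction of excess demand. When a productivity surge expands supply, the clearing price falls until buyers absorb the new output, which is why a flood of goods and widely unaffordable prices cannot coexist in equilibrium.

```python
# Toy tatonnement sketch (illustrative assumptions, not from the post).

def demand(p):
    # Hypothetical linear demand curve: quantity demanded at price p.
    return max(0.0, 100.0 - 20.0 * p)

def clearing_price(supply, p=5.0, step=0.001, tol=1e-6):
    # Walrasian adjustment: move the price in the direction of
    # excess demand until the market clears.
    while True:
        excess = demand(p) - supply
        if abs(excess) < tol:
            return p
        p += step * excess

p_before = clearing_price(supply=40.0)  # pre-surge output
p_after = clearing_price(supply=80.0)   # AI doubles output
print(round(p_before, 2), round(p_after, 2))  # prints "3.0 1.0"
```

Note that at the lower clearing price, total revenue to producers is exactly total spending by buyers; the surplus output is bought, not stranded.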

Besides, why assume that “the machines” here are reaping all the surplus?  Are they the scarce factor of production?  Maybe it is hard to say in advance, but do not take any particular assumptions for granted here, ask to see them spelt out.  One simple scenario is that the regions with energy and data centres become much wealthier, and people need to move to those areas.  Maybe they do not do this quickly enough, à la our earlier history with the Rust Belt.  That is a problem worth worrying about, but it is nothing like the recent collapse concerns that have been circulating.

The whole Citrini scenario is incorrect right off the bat.  Very little of it is based on sound macroeconomic reasoning.  See Eli’s very good comments too.  Nicholas also.  Dare I say they should have consulted with the AIs for a bit longer?

The post Is there an aggregate demand problem in an AGI world? appeared first on Marginal REVOLUTION.

The Software Upgrade in Chinese Civic Behaviour

2026-02-24 02:34:54

I have not been to China recently enough to judge these claims:

Behaviour is notoriously harder to engineer than buildings. A recent trip to the Fragrant Hills in western Beijing on a newly constructed metro line had me marveling at the improved crowd management. Despite massive groups of domestic tourists from around the country thronging the area, in what would not-so-long-ago have been a scenario for a potential stampede, the crowds moved in relative order. The park environs were spick and span with no litter in sight; not a single old codger sneaking a cigarette.

There was some amount of strident rule-announcing on loudspeakers: stay on the designated tracks, no smoking etc., but overall, it was possible to enjoy the natural beauty, notwithstanding the hordes of day-trippers. The toilets were not fragrant, despite the nomenclature of the spot itself, but they were clean, and the seats were free of the tell-tale footprints that indicate squatting rather than sitting. Barely anyone gave me, an obvious foreigner, a second glance. In contrast, there was a time in 2002 when a cyclist fell off his bike in his shock at having spotted dark-skinned me walking along a road in the outskirts of Beijing.

So how had the Chinese been pacified/disciplined/habituated to ways of behaviour that went so against their until-very-recent, loophole-finding, chaos-shuffling, phlegm-expectorating deportment in public spaces?

The answer, as answers to sociological questions invariably are, is multipronged.

Some of it is more money.

Here is more by Pallavi Aiyar.  Via Malinga Fernando.

The post The Software Upgrade in Chinese Civic Behaviour appeared first on Marginal REVOLUTION.

Daniel Litt on AI and Math

2026-02-23 20:16:19

Daniel Litt is a professor of mathematics at the University of Toronto. He has been active in evaluating AI models for many years and is generally seen as a skeptic pushing back against hype. He has a very interesting statement updating his thoughts:

In March 2025 I made a bet with Tamay Besiroglu, cofounder of RL environment company Mechanize, that AI tools would not be able to autonomously produce papers I judge to be at a level comparable to that of the best few papers published in 2025, at comparable cost to human experts, by 2030. I gave him 3:1 odds at the time; I now expect to lose this bet.

Much of what I’ll say here is not factually very different from what I’ve written before. I’ve slowly updated my timelines over the past year, but if one wants to speculate about the long-term future of math research, a difference of a few years is not so important. My trigger for writing this post is that, despite all of the above, I think I was not correctly calibrated as to the capabilities of existing models, let alone near-future models. This was more apparent in the mood of my comments than their content, which was largely cautious.

To be sure, the models are not yet as original or creative as the very best human mathematicians (who is?) but:

Can an LLM invent the notion of a scheme, or of a perfectoid space, or whatever your favorite mathematical object is? (Could I? Could you? Obviously this is a high bar, and not necessary for usefulness.) Can it come up with a new technique? Execute an argument that isn’t “routine for the right expert”? Make an interesting new definition? Ask the right question?

…I am skeptical that there is any mystical aspect of mathematics research intrinsically inaccessible to models, but it is true that human mathematics research relies on discovering analogies and philosophies, and performing other non-rigorous tasks where model performance is as yet unclear.

The post Daniel Litt on AI and Math appeared first on Marginal REVOLUTION.