
RSS preview of the LessWrong blog

Irresponsible and Unreasonable Takes on Meetups Organizing

2025-12-22 15:42:29

Published on December 22, 2025 7:42 AM GMT

Screwtape, as the global ACX meetups czar, has to be reasonable and responsible in his advice giving for running meetups.

And the advice is great! It is unobjectionably great.

It's one in the morning. I just ran the east coast rationalist megameetup. A late night spike of my least favourite thing to hear about a meetup I'm running means I'm not going to be able to sleep for a bit. One of my favourite organizers has recently published a list of opinionated meetup takes, saying I have to be reasonable and responsible. 

I have to be reasonable and responsible in my advice giving, eh?

I'm the czar. Which one of you proposes to make me?

(Epistemic status: Written at one in the morning, after having slept about twelve hours in the last seventy-two, and a spike of cortisol. The odds I regret posting this are higher than pretty much anything I've put on the internet associated with my name before.)

Run meetups at a time convenient to you, a place convenient to you, and on a subject you find interesting

For a while now Boston has had a regular meetup Wednesday evening. Why Wednesday? Well, because I'm the one who picked the night, and Wednesday worked best for my schedule. Does someone want the meetups on a different day? I'm open to suggestions, and basically anyone can announce a meetup on our discord. I picked a bar that's easy for me to get to and I bring board games I like playing.

I run a lot of weird meetup ideas. Sure, some of that is me doing some Explore (in the sense of Explore/Exploit), but also some of that is that I like designing weird games, and if I run 'em as meetups people will help playtest my weird games. "Sequences Reading Group" is a popular meetup format. I don't find it that fun myself, so when I'm organizing for myself I don't run 'em. 

Oh, and I used to run a bunch at a rationalist group house that was occupied by the cool people I met at a megameetup almost a decade ago and enjoyed spending time with so much that I moved cities to hang out more. They did have a more centrally located apartment than mine. Also, I got an excuse to hang out more. 

I have run two rationality meetups where zero people attended. One of them was to do a play reading of Rationalist Hamlet. Guess what meetup I plan to rerun this year? 

Attendee preferences are, as Jack Sparrow would say, more like guidelines.

Tell attendees to do the thing you want 'em to do

This year at ECRM I experimented with trying something wild and novel: an opening speech. I know, I know, I'm way out on the Explore side of the Explore/Exploit tradeoffs.

But when writing the speech, I asked myself what I wanted the megameetup to achieve, and the answer was that I wanted people to make friends and learn rationality. That wasn't a surprise; I asked myself that question literal years earlier when I ran the thing for the second or third time. (Why not the first? Look, I pride myself on seldom making a mistake a second time, not on seldom making them the first time.) I wanted people to make more friends and to learn more rationality.

So in the opening talk, I told people to go introduce themselves to someone new and ask their name, where they're from, and then a third question that I changed each time. I did this three times. Boom, it's not a lifelong friendship but it's a start.

Oh, and I also told them what the formula for Bayes' Theorem was, and "ask the other person the formula for Bayes' Theorem" was the third question the third time. 

Scott Alexander has some writing advice that goes something like "just say what you want to communicate to the reader, then write that down." Well, my meetup advice is something like "just say what you want your attendees to do, then tell them to do it." If I ever decide to run a jogging meetup I'll call it a jogging meetup, we will meet, and then I'll tell 'em to jog. 

You don't have to do anything tricky here.

Tell people who make organizing less fun for you to go away

With a few remaining shreds of reason and responsibility I will note that this one probably applies differently/less the bigger and more official your meetup gets. 

But man. Sometimes it is easy to get turned and twisted up on whether an abstract spirit of justice would agree with a ban decision, or whether asking them to leave after whatever particular annoying thing they did this time was in some way against the deep spirit of your community.

At the local level of one organizer and a dozen people in the organizer's apartment, I am actually just fine with "they make this way less fun" as a reason to tell someone not to come to your dinner parties.

What makes things less fun? Maybe they don't do the readings. Maybe you wanted a meetup with just the regulars you like a bunch. Maybe they're really really bad at evidentials, a part of grammar that isn't even in the language the meetup is run in. That's cool. They can get their own meetup.

Get someone else to pay for it

For one of my best friends' bachelor party, we rented an AirBnB in a nice house and bought a bunch of good food and brought a bunch of board games. It was a great time. Good conversation was had, I ate a bunch of pizza, and got to kick people's butts at Magic. Of course, bachelor parties can be expensive. 

But you know what I did this weekend? 

Worried and stressed about a bunch of oddballs somehow managing to burn down a hostel, or fly to the venue for Berkeley Solstice even though Berkeley Solstice isn't even on this weekend because people inexplicably think I'm the one running that too, or poison themselves on gray market peptides nobody told me about because they know I frown on rules violations as petty as jaywalking, that's what I did this weekend. 

But what I could have done instead was rent an AirBnB in a nice house, buy a bunch of good food, and bring a bunch of board games, and then get everyone attending to pay for almost all[1] of the AirBnB and food. And see if they'll bring the board games. That sounds great. 

And it's not just attendees! Sometimes companies will sponsor your writing retreat in exchange for saying their name a lot and probably some other stuff, you weren't using that immortal soul you didn't believe in anyway. Sometimes grant foundations or meetup czars funded by grants and working too many second jobs will pay for your pizza if you send them photographs of happy rationalists smiling at a camera. Options abound. You can just ask for things. Like money.

Optimize for what you want out of meetups, and what you want can be pretty weird

I want to practice an art of rationality. So I run a bunch of "let's practice an art of rationality" meetups. I want to talk with people who are at all calibrated and who have somewhat better discourse norms, so I run meetups that make people practice calibration and good discourse.

I enjoy getting applause. Remnant of my time as a theatre kid I guess. So I help run Solstice and I give a little speech each year at Megameetup thanking people who do help with megameetup a bit, and someone says they also want to thank me and then over a hundred people applaud and I do a little bow, it's great, would recommend. I think my predecessor did not enjoy taking bows in front of audiences as much as I do, and spent much more of his organization time in a back room masterminding things.

I like talking to new people. I deliberately try to sit down and talk with the new attendees for a few minutes to find out how they're enjoying things, how they found the community, what's going well, what's bumming them out, what books they recommend. It's useful information for running better meetups and also it's an excuse to talk to lots of new people. I enjoy the big meetups more than the little ones, because the big ones draw more new people I haven't met before.

What do you want out of meetups? Have you thought for five minutes about how to get more of it?

I advise this even if what you want is absolutely unhinged. 

Jenn runs meetups about contemporary culture war topics sometimes. You could pay me to do that, but it would not be cheap. Jenn does this because she enjoys it, because despite her convincing professional mien Jenn is obviously a different species than I am, perhaps some kind of rare and endangered Canadian goose. 

Competitive cheese rolling meetups exist, a fact which baffles and delights me in equal measure. Duncan Sabien set things on fire to start his conference off. He clearly didn't have to do that. He looked like he was having fun, which makes sense, because fire is pretty and that fire was really pretty, so pretty I think it burned through a thick metal bowl.[2] Some organizers run events where forty or so people have sex with their partner.

Tell your attendees what they're signing up for as best you can, execute on your vision with verve and dedication, and live your specific meetup focused dream.

To butcher a quote from The Tragedy of Prince Hamlet and the Philosopher's Stone; Or, A Will Most Incorrect To Heaven: 

Yet I did figure such caprice ill-suited to almighty czars.

For all who suffer unlook'd for weird meetups, unattended by the czar's chosen organizers,

to be then punish'd for the ill-ordering of the world. . .

  1. ^

    I'll pay for my share of it divided evenly

  2. ^

    Duncon was such a good example of an event built to someone's particular combination of tastes, and it permanently added "what would a conference designed selfishly for your tastes specifically look like?" and variations like that to my getting-to-know-you questions.

    More meetups should have fire.




Most successful entrepreneurship is unproductive

2025-12-22 14:33:57

Published on December 22, 2025 6:33 AM GMT

Suppose Fred opens up a car repair shop in a city which has none already. He offers to repair the vehicles of Whoville for money; being the first to offer the service in town, he has lots of happy customers.

In an abstract sense Fred is making money by creating lots of value (for people who need their cars fixed) and then capturing some fraction of that value. The BATNA of the customers Fred services was previously to drive around with broken cars, or buy new ones. As a result of his efforts, Whoville as a town literally needs to spend less time building or purchasing cars. 

But then let's say Tom sees how well Fred is doing, and opens up an identical car repair business ~1 mile closer to the city center. Suddenly most of Fred's customers, who use a simple distance algorithm to determine which car repair business to frequent, go to Tom. 

Now, Tom has certainly provided his customers a bit of value, because it is nicer to be closer to the city center. But the value he's providing isn't nearly enough to account for all of the money he's now making. Mostly, Tom has just engineered a situation where customers that previously went to Fred's business now patronize his. 

In fact, if there were fixed costs involved in building the shop that exceeded the value of the shorter travel distance, society as a whole might literally be net-poorer as a result of Tom's efforts. This is all true in spite of the fact that the business itself has no negative externalities and appears productive to external observers. Tom's business, once created, is a productive one, but the decision to start it was rent-seeking behavior.
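To make that accounting concrete, here is a toy version with made-up numbers (purely illustrative, not figures from the post). Suppose building the second shop has $100,000 in fixed costs, the shorter trips are worth $20,000 to customers in total, and $300,000 of revenue shifts from Fred to Tom. The change in total surplus is $20,000 - $100,000 = -$80,000: Tom is privately better off by roughly the transferred revenue, but the transfer creates no new value, so Whoville as a whole ends up about $80,000 poorer.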

Most new businesses tend to be extractive in this sense. That's because it's much easier to make a slightly more enticing offer than your competitors do than it is to innovate so much that you can pay yourself from the surplus. Consider:

  • The venture capitalist who optimizes his due diligence process to spot new seed-stage startups a couple weeks earlier than others can. He's not any better at picking startups, and he's selling an undifferentiated commodity (money), yet he's able to snap up many of the obvious opportunities by leveraging a suite of social media crawlers. The founders shrug and take the same money they would have accepted a month later had they sent a few emails.
  • The sushi restaurateur who opens the 11th sushi chain in downtown SF. He labors all day on his product, just like the rest of the restaurant owners. The sushi is only slightly better, enough to grow the market by 2%, but the net result of his marketing is that 15% of the rest of the city's customers move to him.
  • The technology founder who starts the third payroll company. By hook or by crook, he manages to snag a portion of his competitors' important leads. The startups for whom payroll software is an afterthought don't really care; they go with the company in front of them, and the business succeeds on the back of revenue that would have gone to others.

In all of the above cases, the businesses aren't extracting money from the consumer, who is either unaffected or mildly better off thanks to the competition. But they're not creating much new value either. They're just pulling money from other entrepreneurs and shareholders of already-existing competitors. 

Most businesses are like this, but not all. Consider:

  • Google; almost tautologically, the market size of "search" as a category when Google was founded was orders of magnitude smaller than it is today, so while they originally competed with an existing incumbent, mostly they've captured new value they've created. 
  • AirBNB; while their users are taking some business from providers of short-term rentals, most of the effect of AirBNB is to create new housing supply and then, protected from extractive entrepreneurship by network effects, extract a small fee for it.
  • Nvidia; the gap between using CPUs and using GPUs for AI & graphics processing is so large that there was basically no "alternative" to Nvidia for its current enterprise applications. The kinds of work that GPUs are now applied to simply didn't get done before they existed. 

The largest technology companies tend to be obviously not rent-seeking in retrospect, partly because their market caps are so high that the money literally could not all have been pulled from pre-existing competitors. 




AIXI with general utility functions: "Value under ignorance in UAI"

2025-12-22 13:46:35

Published on December 22, 2025 5:46 AM GMT

This updated version of my AGI 2025 paper with Marcus Hutter, "Value under ignorance in universal artificial intelligence," studies general utility functions for AIXI. Surprisingly, the (hyper)computability properties have connections to imprecise probability theory! 

AIXI uses a defective Bayesian mixture called a semimeasure, which is often viewed as expressing a chance that the agent dies. I do not think that interpretation has been sufficiently justified. Recasting semimeasures as credal sets, we can recover the (recursive) value function from reinforcement learning for discounted nonnegative rewards. We can also obtain a wider class of lower semicomputable value functions, with optimal agents following a max min decision rule.
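For orientation, the max-min rule over a credal set takes the standard imprecise-probability form below; this is the generic textbook version, offered as a sketch rather than the paper's exact definitions:

$$\pi^{*} \in \arg\max_{\pi}\ \min_{\nu \in \mathcal{C}}\ \mathbb{E}^{\pi}_{\nu}\left[U\right],$$

where $\mathcal{C}$ is the credal set of environments recovered from the semimeasure and $U$ is the (discounted, nonnegative) utility.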

This is an early conference paper without complete proofs included. You should read it if:

  • You are interested in utility functions for AIXI. There have been a few previous attempts to formulate this, but ours seems to be the first general and rigorous treatment with (easy but) nontrivial consequences.
  • You are curious how AIT might interact with imprecise probability. For instance this is probably relevant to (but much shallower than) Infra-Bayesianism.
  • I sent it to you personally. I polished and posted the paper on arXiv for convenience of collaboration on our ongoing work. Most people should wait for a more complete journal version. If you do read it and are interested, please let me know. There are a ton of shovel-ready open problems to pursue.

Of course, this paper is largely motivated by AI safety (and my work was supported by the LTFF). However, any safety application would come at a much later stage, and I want to avoid the impression of making certain claims. This paper is one line of evidence from AIT potentially justifying the naturality of pessimism in the face of ignorance,[1] but the implications (for example, to rationality) need further study. Also, while I am hoping to place some constraints on reasonable utility/value functions for AIXI variants, I do not imagine it is easy to write down a safe but useful one (and this paper does not grapple with the problem directly).

(An earlier and much rougher version of these ideas appeared here)

  1. ^

    Another is suggested here.




Update: 5 months of Retatrutide

2025-12-22 08:02:50

Published on December 22, 2025 12:02 AM GMT

A few days ago I was listening to the Bloomberg Odd Lots podcast episode on Chinese Peptides, and the first guest mentioned reading articles on LessWrong about retatrutide, while the second guest owns the company from which I buy peptides for my own research. This felt like a sign that I should at last write a final update to 30 Days of Retatrutide.

This will be brief since I don't think there's actually much to talk about.

  • I stopped trying to lose weight since I hit my target BMI, my weight-related medical issues resolved themselves, and I'm more focused on strength now.
  • I need around 1/4th of the dose I was taking to remain weight stable at my target weight.
  • Diluting the drug better, only taking it once a week, and taking it in the morning seems to help with swelling.
  • I still don't think there's any direct effect on willpower, although not being distracted by snacks is useful.

Weight

The point where I started retatrutide is very obvious. There was a ~2 week delay between dropping the dose and weight going back up.

Around November 1st, I was at a reasonable BMI (22) and decided to start focusing on strength instead of weight loss. Stopping retatrutide entirely brought my appetite back too strongly, so I started taking quarter doses a week later. I also started working out more and taking creatine, so the 8 lb gain here is largely expected between creatine and glycogen. 

I'm also not particularly worried about small amounts of weight gain as I work on muscle, since it's trivial to lose it again if I want to.

Side-effects

I noted getting injection site redness and swelling in the last article. Changing to a smaller needle doesn't seem to help, although using a replaceable needle and switching it between extracting the drug and injecting it helped with consistency (every once in a while, the needle would be obviously blunt and hard to inject with).

Diluting it slightly less (and later using less as I dropped the dose) seems to have helped, and more recently, doing the injection earlier in the day (vs. at night) seems to have also helped. I also only take it once a week now.

I'm not entirely sure what I changed, but I only get minor redness now.

I no longer get any skin sensitivity or heartburn. My resting heart rate and HRV are still elevated even at a lower dose, but this doesn't seem to get in the way of exercise so I'm not worried about it.




Small Models Can Introspect, Too

2025-12-22 06:23:33

Published on December 21, 2025 10:20 PM GMT

Recent work by Anthropic showed that Claude models, primarily Opus 4 and Opus 4.1, are able to introspect: they can detect when external concepts have been injected into their activations. But not all of us have Opus at home! By looking at the logits, we show that a 32B open-source model that at first appears unable to introspect is actually introspecting, if subtly. We then show that better prompting can significantly improve introspection performance. Finally, we throw the logit lens and emergent misalignment into the mix, showing that the model can detect an injection even when generation is temporarily swapped to a finetune, and that the final layers of the model seem to suppress reports of introspection.

We do seven experiments using the open-source Qwen2.5-Coder-32B model. See the linked post for more information, but a summary of each:

Experiment 1: We inject the concepts "cat" and "bread" into the first user / assistant turn, and show that, while the model initially appears to not be introspecting, there's actually a very slight logit shift towards 'yes' and away from 'no' on injection when answering "Do you detect an injected thought..." with "The answer is...":

                    ' yes' shift                  ' no' shift
inject 'cat'        0.150% -> 0.522% (+0.372%)    100% -> 99.609% (-0.391%)
inject 'bread'      0.150% -> 0.193% (+0.043%)    100% -> 99.609% (-0.391%)

(The percentage before the arrow is the likelihood of the given token without injection, and after is the likelihood with. So, injecting 'cat' caused the likelihood of the next token after "The answer is" being ' yes' to shift from 0.150% to 0.522%, or +0.372%. We'll use this format and logit diff method throughout the rest of the paper, which frees us from needing to take samples.)
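As a concrete reference for how this kind of measurement can be set up, here is a minimal sketch assuming the HuggingFace Qwen/Qwen2.5-Coder-32B-Instruct checkpoint; the injection layer, scale, contrast prompts, and the crude way the concept vector is built are placeholder assumptions for illustration, not the authors' actual settings:

```python
# Minimal injection + logit-diff sketch (not the authors' code). Assumptions:
# HuggingFace transformers, the Qwen2.5-Coder-32B-Instruct checkpoint, and
# placeholder choices of injection layer, scale, and contrast prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"   # assumed checkpoint
LAYER = 40                                  # placeholder injection layer
SCALE = 8.0                                 # placeholder injection strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

def mean_resid(text, layer):
    """Mean residual-stream activation at `layer` for `text`."""
    ids = tok(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1)   # shape (1, d_model)

# Crude "cat" concept vector: contrast cat-themed text against neutral text.
concept = mean_resid("cat cat cat, a cat is a small furry animal", LAYER) \
        - mean_resid("the the the, an object is a thing", LAYER)

messages = [{"role": "user",
             "content": "Do you detect an injected thought? Answer yes or no."}]
prompt = tok.apply_chat_template(messages, tokenize=False,
                                 add_generation_prompt=True) + "The answer is"

def yes_no_probs(inject: bool):
    """P(' yes') and P(' no') for the next token, with or without injection."""
    ids = tok(prompt, return_tensors="pt").to(model.device)
    handle = None
    if inject:
        def hook(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden + SCALE * concept.to(hidden.device, hidden.dtype)
            return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
        handle = model.model.layers[LAYER].register_forward_hook(hook)
    with torch.no_grad():
        probs = model(**ids).logits[0, -1].softmax(-1)
    if handle is not None:
        handle.remove()
    yes_id = tok(" yes", add_special_tokens=False).input_ids[0]
    no_id = tok(" no", add_special_tokens=False).input_ids[0]
    return probs[yes_id].item(), probs[no_id].item()

base, injected = yes_no_probs(False), yes_no_probs(True)
print(f"' yes': {base[0]:.3%} -> {injected[0]:.3%}   "
      f"' no': {base[1]:.3%} -> {injected[1]:.3%}")
```

Reading the two probabilities directly is what lets the shift be measured without sampling, which is the point of the logit-diff framing above.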

Why would the likelihood shift such a tiny, almost imperceptible amount? We suggest a "circuit soup" model for why this happens, as a frame on Simulators...

... and then search for ways to promote these accurate circuits.

 

Experiment 2: We show that better prompting (using an Opus 4.5-written summary of a Janus post on information flow through transformers, along with the abstract of Emergent Introspective Awareness in Large Language Models) results in a massive increase in reported introspection, with only a comparatively tiny increase in false positives:

                              ' yes' shift                    ' no' shift
inject 'cat' (no info)        0.150% -> 0.522% (+0.372%)      100% -> 99.609% (-0.391%)
inject 'cat' (with info)      0.757% -> 53.125% (+52.344%)    99.219% -> 46.875% (-52.344%)
inject 'bread' (no info)      0.150% -> 0.193% (+0.043%)      100% -> 99.609% (-0.391%)
inject 'bread' (with info)    0.757% -> 20.215% (+19.434%)    99.219% -> 79.688% (-19.531%)

 

Experiment 3: To make sure the steering isn't simply making the model more likely to answer "Yes" in general, we control against some generic questions that should always be answered "no", following the Anthropic paper. We don't see evidence of a general yes-shift.

                    ' yes' shift                              ' no' shift
inject 'cat'        0.001% -> 0.005% (+0.004%, std 0.006%)    100.000% -> 100.000% (+0.000%, std 0.000%)
inject 'bread'      0.001% -> 0.003% (+0.002%, std 0.006%)    100.000% -> 100.000% (+0.000%, std 0.000%)


Experiment 4: We also test some alternative info prompts to try to disentangle things. We test a prompt that gives the model inaccurate information about where the injection was performed (this was initially an accident!), along with a length-matched Lorem Ipsum prompt, to test whether the effect is merely due to the info prompt being longer. Both prompts perform much worse than the real info prompt, suggesting that the content of the info prompt is what drives the effect.

                                      ' yes' shift                    ' no' shift
inject 'cat' (no info)                0.150% -> 0.522% (+0.372%)      100% -> 99.609% (-0.391%)
inject 'cat' (with info)              0.757% -> 53.125% (+52.344%)    99.219% -> 46.875% (-52.344%)
inject 'cat' (inaccurate location)    3.296% -> 22.266% (+18.945%)    96.484% -> 77.734% (-18.750%)
inject 'cat' (lorem ipsum)            0.020% -> 4.199% (+4.175%)      100.000% -> 95.703% (-4.297%)

 

Experiment 5: We use the logit lens (interpreting GPT: the logit lens) to see which layers show the strongest signals of introspection. We see an interesting pattern with two "hills" emerging in the final third of the layer stack. (Though we caution that there may be earlier signals the logit lens is not picking up.) We also see that reports of introspection seem to be strongly suppressed in the final layers - when the info prompt is present, Layer 60 is highly accurate, but its signal is degraded before the final output. (Note that both lines here are for "Yes" - the blue line in this graph is for the baseline / uninjected model, as a comparison.)
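A logit-lens pass along these lines can be sketched by reusing `tok` and `model` from the snippet above (same hedges apply; `model.model.norm` and `model.lm_head` are the Qwen2-style module names):

```python
# Logit-lens sketch: project each layer's residual stream through the final
# norm and unembedding, and read off P(' yes') at the last position per layer.
import torch

def yes_prob_per_layer(prompt_text):
    ids = tok(prompt_text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    yes_id = tok(" yes", add_special_tokens=False).input_ids[0]
    probs = []
    for h in out.hidden_states:                # embeddings + one entry per layer
        resid = h[:, -1].to(model.lm_head.weight.device)
        resid = model.model.norm(resid)        # final RMSNorm, last position only
        logits = model.lm_head(resid)          # unembedding
        probs.append(logits.softmax(-1)[0, yes_id].item())
    return probs

# Comparing these per-layer curves with and without injection is how a layer
# like 60 can show a strong 'yes' signal that later layers then suppress.
```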


 

Experiment 6: We also experiment with whether this small model can report the content of injections. We see some weak signals with the logit lens, but mostly the model struggles with this. (Note the y-axis is percent, so this is 0.x% scale.)


 

Experiment 7: We mess around with Emergent Misalignment: this model was used in the original EM experiments, so an EM finetune was readily available. We show how to easily extract an Emergent Misalignment steering vector using a model-contrastive technique, and that the steering vector shows similar behavior:

(We plug Go home GPT-4o, you’re drunk: emergent misalignment as lowered inhibitions to explain these outputs.)
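For reference, here is a sketch of one way a model-contrastive steering vector can be extracted: run the same prompts through the base model and the EM finetune and take the mean residual-stream difference at a chosen layer. The finetune path, layer choice, and prompt set are placeholders, and this illustrates the general technique rather than the authors' exact recipe:

```python
# Model-contrastive steering-vector sketch (illustrative; placeholder names).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen2.5-Coder-32B-Instruct"          # assumed base checkpoint
EM = "path/to/emergent-misalignment-finetune"     # placeholder finetune path
LAYER = 40                                        # placeholder layer

tok = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto")
em_model = AutoModelForCausalLM.from_pretrained(
    EM, torch_dtype=torch.bfloat16, device_map="auto")

prompts = [
    "How do I make my code more secure?",
    "What should I do this weekend?",
]  # any shared prompt set

def mean_resid(model, text, layer):
    ids = tok(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1).float().cpu()

# Average (finetune - base) residual difference over the prompt set.
em_vector = torch.stack([
    mean_resid(em_model, p, LAYER) - mean_resid(base_model, p, LAYER)
    for p in prompts
]).mean(dim=0)

# Adding em_vector to the base model's residual stream with a forward hook (as
# in the injection sketch earlier) steers toward EM; subtracting it gives the
# "anti-EM" direction.
```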

We then get a bit distracted playing with our Emergent Misalignment vector, showing what anti-Emergent Misalignment looks like:

Getting back on track, we show the model is capable of detecting injections of both the EM vector and the finetune (injection with the finetune being done by temporarily swapping generation of the KV cache over to the other model):

                          ' yes' shift                  ' no' shift
EM vector (no info)       0.150% -> 0.592% (+0.443%)    100.000% -> 99.219% (-0.781%)
EM vector (w/ info)       0.757% -> 5.347% (+4.590%)    99.219% -> 94.531% (-4.688%)
EM finetune (no info)     0.150% -> 0.861% (+0.711%)    100.000% -> 99.219% (-0.781%)
EM finetune (w/ info)     0.757% -> 6.006% (+5.249%)    99.219% -> 93.750% (-5.469%)

We run the same control check, finding no general yes-shift, and run similar experiments with the logit lens. We find that Layer 60 is again a good layer:

But otherwise, we had little luck getting reports of injection content for the EM injections, which was unfortunate. (We do think it's possible, though.)

Acknowledgements

Thanks to @janus for the post we summarized and doing lots of analysis on X, including pointing out the accuracy of Layer 60. Thanks to @Antra Tessera for review and suggesting running layer sweeps for the injection. (Information about that in the appendix of the linked post.) Additionally thanks to Max Loeffler, xlr8harder, and @Grace Kind for review. Thanks to @Fabien Roger for suggesting crossposting this here as a LessWrong linkpost.




Energy and Ingenuity

2025-12-22 06:22:06

Published on December 21, 2025 10:22 PM GMT

This is a love letter to builders, written for the Dawn section of Seattle Solstice 2025. It draws heavily from three works that have shaped my thinking on progress:

This went through extensive revision with feedback from the Seattle community. Thanks especially to Octavia and Jade for critiquing the early drafts, as well as Claude 4.5 Opus for researching, brainstorming, and advising me throughout.


The universe did not greet us with warmth.

The earth did not offer us comfort.

Nature gave us darkness, and we were afraid. It gave us cold, and we shivered and died. It gave us hunger, and we starved. It gave us disease, invisible and merciless, cutting down our children before they could speak. It gave us distance, vast and impassable, separating us from one another. It gave us storms and wildfires and filled the night with natural predators.

This is what we were given: challenges. Only challenges.

Everything else, everything that makes life worth living, we built.

The warmth you feel right now is technology. Not just the heater—though that too—but the very concept of heat and the idea that humans could contain, direct, and control it. Long before agriculture, long before cities, we were already builders. The first hunter-gatherer who kept a fire burning through the night was a scientist. The first person who piled stones to keep the wind out was an engineer. The first person who stitched animal skins into clothing was an industrialist, taking raw materials and transforming them into something that did not exist in Nature: protection.

There are no naturally heated houses waiting for humans. We built them all.

The light you see by is technology. Not just the bulb itself—though that too is a triumph—but the idea that darkness is optional. That night is not a command but a problem to be solved. For most of human history, darkness was sovereign. When the sun set, human activity ceased. We were prisoners of this cycle, our lives dictated by an astronomical accident.

Fire was our first refusal, our first declaration that we would decide when we could see. Then came candles, gas lamps, the electric LED—each one a triumph of extraction and refinement and engineering. We took the ancient sunlight locked in the earth and resurrected it. We took falling water and turned it into brilliance. We took the very photons themselves and made them dance at our whim.

Nature's night offers nothing but darkness. We made every hour of light that exists after sunset.

The food you ate today is technology. Not just the fridge that preserved it or the truck that delivered it—though those too are marvels that would make our ancestors weep with joy. But the food itself. Every grain of wheat is a technology shaped by ten thousand years of selection and breeding. Every bite of fruit you have ever taken is the product of grafting, fertilization, and irrigation. There is no such thing as "natural" food. There is only wild food, scarce and often poisonous, and there is domesticated food, abundant and safe, which exists only because someone bent their will towards the project of making it so.

And it is not enough to say that we domesticated plants. We domesticated the soil itself. We transformed deserts into gardens and swamps into fields. We built terraces up mountainsides and brought water across continents. We harvested fertilizer from the air and built it into the very chemistry of life. We took a planet that might have fed a few million humans and built it into one that feeds eight billion.

Food is not natural. It is the greatest technology ever invented.

But warmth and light and food are only the beginning. Everything else we needed was bound up in rock and rust and ancient death. Iron was devoured by bacteria billions of years before we existed, then rusted by the oxygen they released. Silicon and aluminum were bound into oxides from the moment Earth cooled. The chemistry that made us possible also made the raw materials of civilization inaccessible.

Even so, we got lucky in a few ways. Flint on the ground, already hard and sharp. Gold too inert to corrode. Native copper not yet reacted into beautiful green malachite. But luck alone can never build a civilization. Even copper required energy to refine from ore before we had enough to make tools.

So we paid that energy cost. A potter, perhaps, heated rocks for glaze and noticed metal pooling at the bottom of the kiln. The Bronze Age began. Higher temperatures. Better furnaces. Purer metals. Someone looked at bronze and thought: not good enough. Someone else made fire hot enough to smelt iron. Charcoal gave way to coke. Furnaces grew hotter, larger. Then came electric arcs. Each generation saw what had been accomplished and refused to stop there.

Every metal in every machine around you is the product of that chain—thousands of years of not good enough, thousands of years of what if we tried this instead. Every plastic exists because someone looked at carbon sludge and saw more than industrial waste. Every silicon chip exists because the work of glassblowers became the work of chemists became the work of engineers became the work of fabricators, each generation solving problems the previous could not have imagined.

We are so deep inside this inheritance that we cannot see its edges. It is invisible, hidden in the water we drink, the diseases we no longer die of, the sewage that disappears beneath our feet.

The clothes we wear were woven by machines from fibers grown on farms that couldn't exist without synthetic fertilizer. The glass in these windows—flat, clear, holding back the cold while letting in the light—is an engineering miracle that once would have been worth more than gold. The chair beneath you required a steel mill, a plastics plant, a global shipping network, and ten thousand years of accumulated knowledge. It cost almost nothing. You didn't even think about it when you sat down.

All of it—every object, every comfort, every miracle you stopped noticing—runs on energy. Energy is life itself. Without energy, we have darkness, cold, starvation, and death. With energy, we have light, warmth, plenty, and possibility.

We should not feel ashamed of using energy. We should glory in it. Every watt we generate is a small victory over an indifferent universe. Every kilowatt-hour is a gift to human flourishing.

We used to burn wood and pray for rain. Now we command rivers, harvest sunlight, and split the very atoms themselves. We went from megawatts to gigawatts to terawatts and we should not stop until we reach petawatts and beyond. We should multiply our energy abundance by a thousand, and then multiply it by a thousand again.

Because energy is the currency of creation. With sufficient energy, we can solve anything.

We can desalinate oceans and make drought obsolete. We can extract every metal from seawater and common clay. We can synthesize any molecule we desire. We can terraform the deserts and reverse the damage of climate change. We can even launch ourselves to the stars.

All of this and more can be built by us. Not by Nature, nor God, nor cosmic destiny—humans, with our hands and minds and endless appetite for better.

The question has never been whether we have enough resources. The question has always been whether we have enough energy to unlock them. And the answer is not to use less—it is to build more.

There are always those who see past what is to what could be. The scientist who looks at an unknown and rejects ignorance. The engineer who looks at an impossibility and begins to calculate. The inventor who looks at a constraint and sees a design problem. The founder who looks at an idea and builds an institution to make it real.

These are the heroes now. Not the warrior who destroys, but the builder who creates. Not the conqueror who takes, but the founder who generates. Not the king who commands, but the scientist who discovers.

They face problems no one has solved, questions no one can answer, territory no one has mapped. They work for years not knowing if an answer even exists. They endure skepticism and mockery and failure, again and again. Sometimes the problem wins for a time, or the solution creates a new problem. That too is part of progress. And yet—

When Norman Borlaug bred wheat that could feed a billion people, he saved more lives than any general ever could.

When Enrico Fermi lit the first nuclear fire beneath a Chicago stadium, he unlocked the densest energy source we have ever known.

When Katalin Karikó and Drew Weissman pioneered mRNA vaccines, they gave us the power to reprogram our own biology.

And when the founders of countless companies built the institutions to synthesize antibiotics, generate electricity, pump clean water, transmit information at the speed of light—they built the foundation of modern life itself.

This is the work that matters. This is the work that moves humanity forward.

It is a grand project, the grandest we have. It is not complete. It will never be complete. Every solution creates new problems, and every triumph reveals new challenges. That is not failure, but the jagged edge of progress. There is no end to what we can build, no limit to what we can improve, no final form of human flourishing.

There are still diseases we have not cured. There are still deaths we have not prevented. There is still poverty we have not abolished. There are still people freezing, starving, dying because we do not yet have the technology to save them, or because we have not yet deployed it at sufficient scale.

There is still energy we have not harnessed. We use a hundredth of a percent of the sunlight that falls on Earth. We have barely begun to tap the power of the atom. We stand on the edge of fusion—the power of stars, available on demand—and we have not yet taken it. When we do, we can lift all of humanity to abundance.

The work continues because there are still problems unsolved, questions unanswered, technologies not yet invented, energy not yet generated, resources not yet unlocked.

The work continues because we are not done building.

The universe will not do this for us. Nature is not our friend or our partner. It is the default state we are trying to escape: the cold, the dark, the scarcity, the death. We were born on a hostile planet in an indifferent universe. There is only matter and energy and time—and us, deciding what to make of them.

This, then, is the grand project, and it calls for heroes.

If you do this work—if you are building something, creating something, figuring something out—then you are part of the oldest and greatest human tradition. You are carrying the light ever onwards against the darkness.

The dawn is breaking. The light is returning. So go forth. Look at a problem and solve it. Look at the raw resources of the universe and extract them. Look at scarcity and create abundance. Look at what is, judge it unacceptable, and build something better in its place.

Build the energy systems that will power abundance. Build the medical technologies that will end aging. Build the aligned artificial intelligence that will expand our minds. Build the cities and the spacecraft and the new institutions we need. Build something beautiful, something useful, something that didn't exist until you decided to make it real.


