
I created Micro.blog. I also have two podcasts: Core Intuition and Timetable.

RSS preview of the blog of Manton Reece

2026-04-08 04:02:03

I like this thought from @patrickrhone:

The biggest danger with AI is that it is a shortcut around things that matter. Things that matter intellectually, things that matter socially, and things that matter emotionally.

I think that’s right. We have to make deliberate choices, which is hard right now because AI is like, “Hi, I can do everything for you.” But what actually makes us uniquely human? What has more value and meaning if we do it ourselves? I want shortcuts that give us more time for those things that matter. (And it’s complicated because that might not be the same for everyone.)

Meeting Vincent

2026-04-08 02:16:37

I got to meet @vincent in person for the first time while in Barcelona. He and his wife even brought me some sweet treats and special vodka from Poland!

I’m lucky to be able to work remotely. As I type this, I’m in a cafe in Paris. But there’s nothing like meeting face to face too… Vincent and I were able to sit down and go over the Inkwell mobile design, looking at the iOS simulator on my laptop, in a more natural way than back and forth over email.

Traveling is also a reminder that we’re all human, each of us with our own perspectives and day-to-day lives. If we judge people from across the ocean, people we have never met, we’re probably going to get it wrong. Real life gets flattened into stereotypes and exaggerations when seen only through social media on tiny screens.

Thanks Vincent for your help with Micro.blog over the years. I’m looking forward to sharing the Inkwell mobile app with everyone soon.

2026-04-07 20:43:36

The new Frozen ride is really well designed. At Disneyland Paris / Disney Adventure World.

Two animated characters, one in a green dress and another in a blue dress, are standing in a wintry scene with a friendly snowman surrounded by glowing blue and green foliage.

2026-04-07 04:14:54

Got a drink at Le Train Bleu. Beautiful old restaurant from 1901, inside Gare de Lyon.

A luxurious room features ornate ceilings with detailed paintings and an elaborate chandelier.

OpenAI got its name right

2026-04-06 20:52:47

People like to give OpenAI shit for having “open” in the name, but if we look at developer tooling and access to models, the company is actually more open than Anthropic. That contrast was highlighted when Anthropic accidentally leaked the source code to Claude Code.

TechCrunch reported on Anthropic’s attempt to unwind the mistake:

Anthropic issued a takedown notice under U.S. digital copyright law asking GitHub to take down repositories containing the offending code. According to GitHub’s records, the notice was executed against some 8,100 repositories — including legitimate forks of Anthropic’s own publicly released Claude Code repository, according to irate social media users whose code got blocked.

Anthropic walked some of that back, but their instinct is to tightly control AI with closed models and proprietary tools. You can see this not just in the reaction to the leak but also in how Claude subscriptions can’t be used with third-party tools like OpenClaw. That’s fine as long as Anthropic is one of several players. If they were ever to become dominant, though, I wouldn’t want them getting anywhere close to a monopoly on AI.

OpenAI’s Codex CLI is already open source. Tibo of the Codex team had fun in a tweet joking about this:

Whaaaa. Only realized now and apparently our repo was public since 11 months ago and noone told us?!

OpenAI’s philosophy leans toward broader access. They released open-weight models like gpt-oss-120b, one of the largest open models from an American company. They hired Peter Steinberger, the creator of OpenClaw. They try to keep ChatGPT’s free plan credible, with no sign-in required.

How do we reconcile all of this? There is a feeling in the developer community that the Anthropic folks are the good guys. They’re the ones who care about safety. They’re the ones who don’t want ads. They’re the ones fighting the Pentagon.

If you look closer, though, it’s mostly a feeling. Anthropic is an enterprise software company with expensive models. I like Dario Amodei, but he has built a closed company that seems afraid of its own shadow.

Earlier this year in a speech at the AI Impact Summit in India, Sam Altman said something that caught my attention:

We can choose to either empower people, or concentrate power.

I get that some people are cynical about Sam. You could argue that OpenAI’s lead in AI and trying to build everything is itself a form of centralization. But I think he’s right, and it reveals what differentiates OpenAI and Anthropic.

For years I’ve worried about centralization on the web. AI risks concentrating even more power. If we hope that AI will improve people’s lives — democratizing access to intelligence, not making it a luxury only the wealthy can afford — then OpenAI, even with their occasional missteps, is actually the best chance of that happening.

2026-04-06 20:10:35

Artemis II is almost there:

The Orion spacecraft is now in the lunar sphere of influence, meaning the moon’s gravity has more pull on the vehicle than the Earth. At 1:46 p.m. ET, the crew will surpass the record for the farthest distance traveled from Earth by humans, which was set by the Apollo 13 mission at 248,655 statute miles from Earth.

🚀