Made Basecamp and HEY for the underdogs as co-owner and CTO of 37signals. Created Ruby on Rails. Wrote REWORK, It Doesn't Have to Be Crazy at Work, and REMOTE.
The original concept for ONCE sought to sell self-hostable web apps for a one-time fee. That didn't work. Sure, we recouped the investment on Campfire, our chat app, but that was it. You gotta listen when the market tells you what it wants! And it didn't seem to want to pay for self-hosted web apps in a one-off way.
So we set Campfire, Writebook, and now Fizzy free by releasing them all as open source with a permissive license. That worked! Tons of people have been running these apps on their own servers, contributing code back, and learning how we build real production applications at 37signals.
Now we're doubling down on the gift and adding an integrated way to run all these apps, and your own vibe-coded adventures too, on a brand-new application server we're also calling ONCE.
If you twist my arm, I can make that spell "Open Network Container Executor", but we don't even have to go there. Once is just a cool word, we already own the domain, and it's running all the original applications released under that banner as free and open-source installations. That's good enough!
The pitch here is that installing a whole suite of applications on your own server should be dead easy. The original ONCE model wanted a dedicated box or VM per app, which was just cumbersome and costly to maintain. Now you can use a single machine — even your laptop! — to run everything all at once.
ONCE gives you a beautiful terminal interface to track application metrics, like RAM + CPU usage, as well as basic visitor + request/second counts. It also gives you zero-downtime upgrades and scheduled backups. It's meant to run all the infrastructure apps you'd need, like our full suite and all the ones your AI agents will soon be building for you.
Give it a spin. It's just a single command to install. I can show you how with this YouTube video tour. Enjoy!
The vibes around Linux are changing fast. Companies of all shapes and sizes are paying fresh attention. The hardware game on x86 is rapidly improving. And thanks to OpenCode and Claude Code, terminal user interfaces (TUIs) are suddenly everywhere. It's all this and Omarchy that we'll be celebrating in New York City on April 10 at the Shopify SoHo Space for the first OMACON!
We've got an incredible lineup of speakers coming. The creator of Hyprland, Vaxry, will be there. Along with ThePrimeagen and TJ DeVries. You'll see OpenCode creator Dax Raad. Omarchy power contributors Ryan Hughes and Bjarne Øverli. As well as Chris Powers (Typecraft) and myself as Linux superfans. All packed into a single day of short sessions, plenty of mingle time, and some good food.
Tickets go on sale tomorrow (February 19) at 10am EST. We only have room for 130 attendees total, so I imagine the offered-at-cost $299 tickets will go quickly. But if you can't manage to snatch a ticket in time, we'll also be recording everything, so you won't be left out entirely.
But there is just something special about being together in person around a shared passion. I've felt the intensity of that three years in a row now with Rails World. There's an endless amount of information and instruction available online, but a sense of community and connection is far more scarce. We nerds need this.
We also need people to JUST DO THINGS. Like kick off a fresh Linux distribution together with over three hundred contributors so far all leaning boldly into aesthetics, ergonomics, and that omakase spirit.
Omarchy only came about last summer, and now we're seeing 50,000 ISO downloads a week, 30,000 people on the Discord, and our very first exclusive gathering in New York City. This is open source at its best. People from all over, coming together, making cool shit.
(Oh, and thanks to Shopify and Tobi for hosting. You gotta love when a hundred-plus billion dollar company like this is run by an uber nerd who can just sign off on doing something fun and cool for the community without any direct plausible payback.)
With OpenClaw you're giving AI its own machine, long-term memory, reminders, and persistent execution. The model is no longer confined to a prompt-response cycle, but able to check its own email, Basecamp notifications, and whatever else you give it access to on a running basis. It's a sneak peek at a future where everyone has a personal agent assistant, and it's fascinating.
I set up mine on a Proxmox virtual machine to be fully isolated from my personal data and logins. (But there are people out there running wild and giving OpenClaw access to everything on their own machine, despite the repeated warnings that this is more than a little risky!)
Then I tried to see just how little help it would need navigating our human-centric digital world. I didn't install any skills, any MCPs, or give it access to any APIs. Zero machine accommodations. I just started off with a simple prompt: "Sign up for Fizzy, so we have a place to collaborate. Here's the invite link."
Kef, as I named my new agent, dutifully went to Fizzy to sign up, but was immediately stumped by needing an email address. It asked me what to do, and I replied: "Just go to hey.com and sign up for a new account." So it did. In a single try. No errors, no steering, no accommodations.
After it had procured its own email address, it continued on with the task of signing up for Fizzy. And again, it completed the mission without any complications. Now we had a shared space to collaborate.
So, as a test, I asked it to create a new board for business ideas and add five cards with short suggestions, each with a background image sourced from the web to illustrate the idea. And it did. Again, zero corrections. Perfect execution.
I then invited it to Basecamp by just adding it as I would any other user. That sent off an email to Kef's new HEY account, which it quickly received, then followed the instructions, got signed up, and greeted everyone in the chat room of the AI Labs project it was invited to.
I'm thoroughly impressed. All the agent accommodations, like MCPs/CLIs/APIs, probably still have a place for a bit longer, as doing all this work cold is both a bit slow and token-intensive. But I bet this is just a temporary crutch.
And while I ran this initial experiment on Claude's Opus 4.5, I later reran most of it on the Chinese open-weight model Kimi K2.5, and it too was able to get it all right (though it was a fair bit slower when provisioned through OpenRouter).
Everything is changing so fast in the world of AI right now, but if I were going to skate to where the puck is going to be, it'd be a world where agents, like self-driving cars, don't need special equipment, like LIDAR or MCPs, to interact with the environment. The human affordances will be more than adequate.
I fully understand the nostalgia for real ownership of physical-media games. I grew up on cassette tapes (C64 + Amstrad 464!), floppy disks (C64 5-1/4" then Amiga 3-1/2"), cartridges, and CDs. I occasionally envy the retro gamers on YouTube with an entire wall full of such physical media. But do you know what I like more than collecting? Playing! Anywhere. Anything. Anytime.
We went through the same coping phases with movies and music. Yes, vinyl had a resurgence, but it's still a tiny sliver of hours listened. Same too with 4K Blu-rays. Almost everyone just listens to Spotify or watches on Netflix these days. It's simply cheaper, faster, and, thus, better.
Not "better" in some abstract philosophical way (ownership vs rent) or even in a concrete technical way (bit rates), but in a practical way. Paying $20/month for unlimited music and the same again for a broad selection of shows and movies is clearly a deal most consumers are happy to make.
So why not video games? Well, because it just wasn't good enough! Netflix tried its hand at casual gaming, but I haven't heard much about it since the announcement. Google Stadia appears to have been just a few years ahead of reality (eerie how often that happens for big G, like with both AI and AR!), given that the service has already been shut down.
NVIDIA, though, kept working, and its GeForce NOW service is actually, finally kinda amazing! I had tried it back in the late 2010s, and just didn't see anything worth using back then. Maybe my internet was too slow, maybe the service just wasn't good enough yet. But then I tried it again a few days ago, just after NVIDIA shipped the native GFN client for Linux, and holy smokes!!
You can legitimately play Fortnite in 2880x1800 at 120 fps through a remote 4080, and it looks incredible. Yes, there's a little input lag, but it's shockingly, surprisingly playable on a good internet connection. And that's with the hardest possible genre: competitive shooters! If you play racing games like Forza Horizon or story-mode games like Warhammer 40K: Space Marine 2, you can barely tell!
This is obviously a great option for anyone with a modest computer that can't run the latest triple-A titles, but also for Linux gamers, who can't run the anti-cheat software required by Fortnite and a few other games.
And, like Spotify and Netflix, it's pretty competitively priced. It's $20/month for access to that 4080-tier. You'd quickly spend $2,000+ on a gaming rig with a 4080, so this isn't a half bad deal: it's a payback of 100 months, and by then you'd probably want a 6080 anyway. Funny how NVIDIA is better at offering the promise of cheap cloud costs than the likes of AWS!
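The back-of-the-envelope math here is simple enough to sketch out (the $2,000 rig price is the rough figure from above, not a quote):

```python
# Break-even point for GeForce NOW's 4080 tier vs buying a 4080-equipped rig.
# Both figures are ballpark assumptions, in dollars.
rig_cost = 2000      # rough price of a gaming PC with an RTX 4080
monthly_fee = 20     # GeForce NOW 4080-tier subscription per month

payback_months = rig_cost / monthly_fee
print(payback_months)  # → 100.0, i.e. over eight years of subscribing
```

By the time you'd break even, the hardware you could have bought would be two generations out of date, which is the whole argument for renting the GPU instead.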
Anyway, I've been very impressed with NVIDIA GeForce NOW. We're going to bake the Linux installer straight into the next version of Omarchy, so you can just go to Install > Gaming > NVIDIA GeForce NOW to get going (just like we have such options for Steam and Minecraft).
But of course seeing Fortnite running in full graphics on that remote 4080 made me hungry for even more. I've been playing Fortnite every week for the last five years or so with the kids, but the majority of my gameplay has actually been on tablet. A high-end tablet, like an iPad M5, can play the game with good-for-mobile graphics at 120 Hz. It's smooth, it's easy, and the kids and I can lounge on the couch and play together. Good Family Fun! Not peak visual fidelity, though.
So after the NVIDIA GeForce NOW experience, I found a way to use the same amazing game streaming technology at home through a local-server solution called Apollo and a client called Moonlight. This allowed me to turn my racing-sim PC that's stuck downstairs into a cloud-like remote gaming service that I can access anywhere on the local network, so I can borrow its 4090 to play 120-fps, ultra-settings Fortnite with zero perceivable input lag on any computer in the house.
The NVIDIA cloud streaming is very impressive, but the local-server version of the same is mind-blowing. I'm mostly using the Asus G14 laptop as a client, so Fortnite looks incredible at those ultra high-resolution settings on its OLED, but unlike when you run the game on the laptop's built-in graphics card, the machine stays perfectly cool and silent, pulling a meager 18 watts. And the graphics are of course a lot nicer.
The Moonlight client is available for virtually every platform: Mac, iOS, Android, and of course Linux. That means no need to dual boot to enjoy the best games at the highest fidelity. No need for a honking big PC on my primary desk. I did not know this was an option!!
Whether you give NVIDIA's cloud gaming setup a try or repurpose a local gaming PC for the same, you're in for a real treat of what's possible with streaming Fortnite on ultra settings at 120 fps on Linux (or even Mac!). GG, NVIDIA!
At the end of last year, AI agents really came alive for me. Partly because the models got better, but more so because we gave them the tools to take their capacity beyond pure reasoning. Now coding agents are controlling the terminal, running tests to validate their work, searching the web for documentation, and using web services with skills we taught them in plain English. Reality is fast catching up to the hype!
This is all very evident if you've tried to employ any of the new models — especially Claude Opus 4.5, Codex 5, Gemini 3, and even the Chinese open-weight models like MiniMax M2.1 and GLM-4.7 — in one of the modern terminal harnesses that give them access to all these autonomous powers. The code being produced by this new breed of AI is leagues ahead of where their predecessors were at the beginning of 2025.
I've thoroughly enjoyed putting them all to work in OpenCode, which is a terminal interface for coding agents that allows you to seamlessly switch between all of the models, capture your sessions for sharing, and simply looks astounding when theme-matched with the rest of Omarchy (where we're making it a default in the next version!).
See, I never really cared much for the in-editor experience of having AI autocomplete your code as you were writing it. That was the original format pioneered by GitHub's Copilot and Cursor, but it left me cold. When I code, I want to finish my own thoughts and sentences. That was the sentiment I expressed on the Lex Fridman podcast last summer.
But with these autonomous agents, the experience is very different. It's more like working on a team and less like working with an overzealous pair programmer who can't stop stealing the keyboard to complete the code you were in the middle of writing. With a team of agents, they're doing their work autonomously, and I just review the final outcome, offer guidance when asked, and marvel at how this is possible at all.
Yes, I'm ready to give the current crop of AI agents a promotion. They're no longer just here to help me learn, answer my questions, or check my work. They're fully capable of producing production-grade contributions to real-life code bases.
Yet pure vibe coding remains an aspirational dream for professional work, at least for me, for now. Supervised collaboration, though, is here today. I've worked alongside agents to fix small bugs, finish substantial features, and produce several drafts of major new initiatives. The paradigm shift finally feels real.
Now, it all depends on what you're working on, and what your expectations are. The hype train keeps accelerating, and if you bought the pitch that we're five minutes away from putting all professional programmers out of a job, you'll be disappointed.
I'm nowhere close to the claims of having agents write 90%+ of the code, as I see some boast about online. I don't know what code they're writing to hit those rates, but that's way off what I'm able to achieve, if I hold the line on quality and cohesion.
But I'll forgive folks for getting excited! Because you don't have to connect many future dots on the current trend line to get dizzy by the prospects. The leap of improvement that AI agents took in 2025 is simply incredible. This is the most exciting thing we've made computers do since we connected them to the internet back in the '90s. So what might things look like in 2026 or 2027? I get the exuberance.
I also get that some programmers are eager to tune it all out. The hype drones on relentlessly, the most fantastical claims are still far off from being substantiated, and there's real uncertainty about where all this will leave the profession in the future. But that's still not reason enough to miss out on this incredible moment in human and computing history!
You gotta get in there. See where we're at now for yourself. Download OpenCode, throw some real work at Opus or the others, and relish the privilege of being alive during the days we taught the machines how to think.
One of my favorite parts of the early web was how easy it was to see how the front-end was built. Before View Source was ruined by minification, transpiling, and bundling, you really could just right-click on any web page and learn how it was all done. It was glorious.
But even back then, this only ever applied to the front-end. At least with commercial applications, the back-end was always kept proprietary. So learning how to write great web applications still meant piecing together lessons from books, tutorials, and hello-world-style code examples, not from production-grade commercial software.
The O'Saasy License seeks to remedy that. It's basically the do-whatever-you-want MIT license, but with the commercial rights to run the software as a service (SaaS) reserved for the copyright holder, thus encouraging more code to be open source while allowing the original creators to see a return on their investment.
We need more production-grade code to teach juniors and LLMs alike. A view source that extends to the back-end, along with the open source invitation to fix bugs, propose features, and run the system yourself for free (if your data requirements or interests make that a sensible choice over SaaS).
This is what we're doing with Fizzy, but now we've also given the O'Saasy License a home to call its own at osaasy.dev. The license is yours to download and apply to any project where it makes sense. I hope to read a lot more production-grade SaaS code as a result!