2026-02-05 14:47:00

I'm going to make this a really quick one because this is doing the rounds, and whilst I've tweeted about it, it's time to dig in.
Pragmatic Engineer's @GergelyOrosz is on a "secret email list" of agentic AI coders, and they're starting to report trouble sleeping because agent swarms are "like a vampire."
— TBPN (@tbpn) February 4, 2026
"A lot of people who are in 'multiple agents mode,' they're napping during the day... It just really is… pic.twitter.com/slsPgCfkKw
What Gergely is articulating here is something that I, and everyone else who was paying attention, went through a year ago. AI enables you to teleport to the future and rob your future self of retirement projects. Anything that you've been putting off to do someday, you can do now.
To quote a post I authored almost eight months ago:
It might surprise some folks, but I'm incredibly cynical when it comes to AI and what is possible; yet I keep an open mind. That said, two weeks ago, when I was in SFO, I discovered another thing that should not be possible.
Every time I find out something that works, which should not be possible, it pushes me further and further, making me think that we are already in post-AGI territory.
- https://ghuntley.com/no/ (dated July 2025)
And another post back in September 2025:
It's a strange feeling knowing that you can create anything, and I'm starting to wonder if there's a seventh stage to the "people stages of AI adoption by software developers"

whereby that seventh stage is essentially this scene in the matrix...
In the previous 12 months, I've cloned the SaaS product feature sets of many different companies. I've built file systems and networking protocols, and even developed my own programming language.
From my perspective, nothing really changed in December. The models were already great, but what was needed was a time of rest - people just needed to pick up the guitar and play.

What makes December an inflection point is that the models became much easier to use to achieve good outcomes, and people picked up the guitar with an open mind and played.
Over the last couple of weeks, I've been catching up with software engineers, venture capitalists, business owners, and people in sales and marketing who are all going through this period of adjustment.
Universally, it can be described as a mild form of creative psychosis for people who like to create things. All builders whose internal reward function is creating things as a form of pleasure go through it, because AI enables them to just do things.

Everyone who gets AI goes through it, and it typically lasts about two to three months, until they get it out of their system by completing all the projects they were putting off until retirement.
Perhaps it could be described as a bit of a reset, similar to what happened during COVID-19, when people were able to reassess what they wanted to do in life.
It's a coin flip, really: some people are going to commit more to their current employer if they are an employee, while on the other side of the coin, others are realising they are no longer as dependent on other people to achieve certain financial outcomes.
Perhaps this is the tipping point where more people throw their hats in and become entrepreneurs.
People with ideas and unique insight can get concepts to market rapidly and be less dependent on others' expertise, as the world's knowledge is now in the palm of everyone's hand.
Technologists are still required; perhaps it's the ideas guys/gals who should be concerned, as software engineers now have a path to bootstrap a concept in every white-collar industry (recruiting, law, finance, accounting, et al.) at breakneck speed without having to find co-founders.
- From Feb 2025
I guess I need to wrap this up now, but I will say this:
I've written about how some people won't make it, and I've spent the last year talking about this, pleading with people to pick up the guitar and play...
If you're having trouble sleeping because of all the things that you want to create, congratulations.
You've made it through to the other side of the chasm, and you are developing skills that employers in 2026 are expecting as a bare minimum.
The only question that remains is whether you are going to be a consumer of these tools, or someone who understands them deeply and automates your job function.

go build yourself an agent and taste building in the recursive latent space
Trust me, you want to be in the latter camp because consumption is now the baseline for employment.
After you come out of this phase, I hope you get to where I am, because just because you can build something doesn't mean you necessarily should. Knowing what not to build now that anything can be built is a very important life lesson.
📰 What @GergelyOrosz is articulating here is something that I and everyone else went through a year ago who were paying attention. AI enables you to teleport to the future and rob your future self of retirement projects. Anything that you've been putting off to do someday, you… pic.twitter.com/OXm9VvXhdZ
— geoff (@GeoffreyHuntley) February 5, 2026
2026-01-17 18:46:56

I am fortunate to be surrounded by folks who listen, and the post linked below will go down as seminal reading for people interested in AI context engineering.
A simple convo between mates - well, Moss translated it into words, and I've been waiting for it to come out so I didn't front-run him.

read this and internalise this
Enjoy. This is what engineering now looks like in the post loom/gastown era or even when doing ralph loops.

If you aren’t capturing your back-pressure then you are failing as a software engineer.
Back-pressure is part art, part engineering, and a whole bunch of performance engineering: you need "just enough" to reject invalid generations (aka "hallucinations"), but if the wheel spins too slowly ("tests take a long time to run or the application takes ages to compile") then there's too much resistance.
There are many ways to tune back-pressure. As Moss states, it starts with the choice of programming language and applying engineering knowledge to design a fast test suite that provides signal, but perhaps my favourite is pre-commit hooks (aka prek).
Under normal circumstances, pre-commit hooks are annoying because they slow down humans, but now that humans aren't the ones doing the software development, it really doesn't matter anymore.
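As a rough sketch of what a pre-commit hook as back-pressure can look like: the gate commands below are stubbed out with `true` as placeholders, and in a real repository you'd substitute your own formatter, linter and test suite, ordered cheapest-first so invalid generations get rejected as early as possible.

```shell
#!/bin/sh
# Sketch of a pre-commit hook acting as back-pressure on agent output.
# The checks are stubs (`true`); swap in your real toolchain.
run_gate() {
  # $1 = label, remaining args = the check command to run
  label="$1"; shift
  if ! "$@"; then
    echo "back-pressure: $label rejected this commit" >&2
    exit 1
  fi
}

run_gate "format" true   # e.g. a formatter check such as gofmt -l .
run_gate "lint"   true   # e.g. go vet ./...
run_gate "tests"  true   # e.g. go test ./... -timeout=60s
echo "all gates passed"
```

If any gate exits non-zero, the commit is rejected and the failure lands back in the agent's context window, which is exactly the signal the loop needs.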
2026-01-17 14:43:54

I’ve been thinking about how the way I build software is so very, very different from how I used to do it three years ago.
No, I’m not talking about acceleration through usage of AI but instead at a more fundamental level of approach, techniques and best practices.
Standard software practice is to build vertically, brick by brick - like Jenga - but these days I approach everything as a loop. You see, ralph isn’t just about forward mode (building autonomously) or reverse mode (clean-rooming); it’s also a mindset that these computers can indeed be programmed.
watch this video to learn the mindset
I’m there as an engineer just as I was in the brick-by-brick era, but instead I am programming the loop, automating my job function and removing the need to hire humans.
Everyone right now is going through their zany period - just like I did with forward mode, building software AFK on full auto - however, I hope folks will come back down from orbit and remember this from the original ralph post.
While I was in SFO, everyone seemed to be trying to crack on multi-agent, agent-to-agent communication and multiplexing. At this stage, it's not needed. Consider microservices and all the complexities that come with them. Now, consider what microservices would look like if the microservices (agents) themselves are non-deterministic—a red hot mess.
What's the opposite of microservices? A monolithic application. A single operating system process that scales vertically. Ralph is monolithic. Ralph works autonomously in a single repository as a single process that performs one task per loop.
Software is now clay on the pottery wheel and if something isn’t right then i just throw it back on the wheel to address items that need resolving.
Ralph is an orchestrator pattern: you allocate the array with the required backing specifications, give it a goal, then loop the goal.
It's important to watch the loop as that is where your personal development and learning will come from. When you see a failure domain – put on your engineering hat and resolve the problem so it never happens again.
In practice, this means doing the loop manually via prompting, or via automation with a pause that requires pressing CTRL+C to progress onto the next task. This is still ralphing, as ralph is about getting the most out of how the underlying models work through context engineering, and that pattern is GENERIC and can be used for ALL TASKS.
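To make the loop concrete, here is a minimal sketch of the shape of it. Everything here is illustrative: AGENT_CMD defaults to a harmless stub (`cat`) rather than a real coding agent CLI, MAX_LOOPS bounds the run so the sketch terminates, and PROMPT.md stands in for your standing prompt file that gets fed in fresh every iteration.

```shell
#!/bin/sh
# Minimal ralph-style loop: one task per iteration, single repo, single process.
# AGENT_CMD is a stub here; point it at your real coding agent CLI.
AGENT_CMD="${AGENT_CMD:-cat}"
PROMPT_FILE="${PROMPT_FILE:-PROMPT.md}"
MAX_LOOPS="${MAX_LOOPS:-3}"   # use a big number (or `while :`) for real runs

# demo setup: create a placeholder prompt if none exists
[ -f "$PROMPT_FILE" ] || echo "study specs/, pick ONE task, do it" > "$PROMPT_FILE"

i=1
while [ "$i" -le "$MAX_LOOPS" ]; do
  echo "loop $i: feeding $PROMPT_FILE to the agent"
  # One task per loop; a failed iteration is logged, not fatal,
  # because the next pass of the loop gets a fresh context window.
  $AGENT_CMD < "$PROMPT_FILE" || echo "loop $i failed; continuing" >&2
  i=$((i + 1))
done
```

In interactive "watch the loop" mode you would put a `read` between iterations, so Enter (or CTRL+C to bail) gates progression to the next task.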
In other news I've been cooking on something called "The Weaving Loom". The source code of loom can now be found on my GitHub; do not use it if your name is not Geoffrey Huntley. Loom is something that has been in my head for the last three years (and various prototypes were developed last year!) and it is essentially infrastructure for evolutionary software. Gas town focuses on spinning plates and orchestration - a full level 8.

I’m going for a level 9 where autonomous loops evolve products and optimise automatically for revenue generation. Evolutionary software - also known as a software factory.

There is a divide now - we have software engineers outwardly rejecting AI, or merely consuming it via Claude Code/Cursor to accelerate the lego-brick building process...
but software development is dead - I killed it. Software can now be developed cheaper than the wage of a burger flipper at maccas and it can be built autonomously whilst you are AFK.
hi, it me. i’m the guy
I’m deeply concerned for the future of these people and have started publishing videos on YouTube to send down ladders before the big bang happens.
i now won’t hire you unless you have this fundamental knowledge and can show what you have built with it
Whilst software development/programming is now dead, we deeply need software engineers with these skills who understand that LLMs are a new form of programmable computer. If you haven’t built your own coding agent yet - please do.

It is but watch it happen live. We are here right now, it’s possible and i’m systemising it.
Here in the tweet below, I am putting loom under the mother of all ralph loops to automatically perform system verification. Instead of days of planning, discussions and weeks of verification, I’m programming this new computer and doing it AFK whilst I DJ, so that I don’t have to hire humans.

Any faults identified can be resolved through forward ralph loops. Over the last year the models have become quite good, and it's only now that I'm able to realise this full vision, but I'll leave you with this, dear reader....
What if the models don't stop getting good?
How well will you fare if you are still building Jenga stacks, when there are classes of principal software engineers out there proving the point that we are here right now? Please pay attention.

Go build your agent, go learn how to program the new computer (guidance forthcoming in future posts), fall in love with all the possibilities and then join me in this space race of building automated software factories.
something incredible just happened here
— geoff (@GeoffreyHuntley) January 17, 2026
perhaps first evolutionary software auto heal.
i was running the system under a ralph system loop test. it identified a problem with a feature. then it studied the codebase, fixed it, deployed it automatically, verified that it worked and… https://t.co/ATDaIU4p5w pic.twitter.com/22U7FW6Dye
ps. socials
🗞️ everything is a ralph loop
— geoff (@GeoffreyHuntley) January 17, 2026
(link below)
I’ve been thinking about how I build software is so very very different how I used to do it three years ago.
No, I’m not talking about acceleration through usage of AI but instead at a more fundamental level of approach, techniques… pic.twitter.com/IBpO4HJ4AK
2025-12-08 23:55:28

In woodworking, there's a saying that you should work with the grain, not against the grain and I've been thinking about how this concept may apply to large language models.
These large language models are built by training on existing data. This data forms the backbone which creates output based upon the preferences of the underlying model weights.
We are now one year into the era in which a new category of companies has been founded where the majority of the software behind the company was code-generated.
From here on out I’m going to refer to these companies as model weight first. This category can be defined as any company that is building with the data (“grain”) that has been baked into the large language models.
Model weight first companies do not require as much context engineering. They’re not stuffing the context window with rules that attempt to override and change the base models to fit a pre-existing corporate standard and conceptualisation of how software should be.
The large language model has decided what to call a method or class because that name is what the large language model prefers; thus, when code is adapted, modified, and re-read into the context window, it is consuming its preferred choice of tokens.
Model-weight-first companies do not have the dogma of snake_case vs PascalCase vs kebab-case policies that many corporate companies have. Such policies were created for humans to create consistency so humans can comprehend the codebase. Something that is of a lesser concern now that AI is here.
Now, variable naming is a contrived example, but if, in the years to come, a study were done comparing the velocity/productivity/success rates with AI of a model weight first company vs. a corporate company, I suspect the model weight first company would have vastly better outcomes, because they're not trying to do context engineering to force the LLM to follow some pre-existing dogma. There is one universal truth with LLMs as they are now: the less context that you use, the better the outcomes you get.
The less that you allocate (i.e., cursor rules or what have you), the more context window you'll have available for actually implementing the requirements of the software that needs to be built.
So if we take this thought experiment about the models having preferences for tokens and expand it out to another use case, let's say that you needed to build a Docker container at a model weight first company.
You could just ask an LLM to build a Docker container, and it knows how to build one for, say, Postgres, and it just works. But in a corporate setting where you have to configure HTTPS, a squid proxy, or some sort of Artifactory, and where outbound internet access is restricted, that same simple thing becomes very comical.
You'll see an agent fill up with lots of failed tool calls unless you do context engineering to say "no, if you want to build a docker container, you've got to follow these particular company conventions" in a crude attempt to override the preferences of the inbuilt model weights.
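As a sketch of the contrast, compare the two invocations below. The `run` wrapper only echoes the commands rather than executing them, so this is safe to run anywhere, and `proxy.internal:3128` is a made-up corporate proxy endpoint, not a real one.

```shell
#!/bin/sh
# Illustration only: `run` echoes the command instead of executing it.
run() { echo "+ $*"; }

# Model-weight-first: the invocation the model's training data prefers.
run docker build -t app .

# Corporate: the same task, but the agent must be told (via context
# engineering) about egress restrictions the model weights know nothing of.
run docker build -t app \
  --build-arg HTTP_PROXY=http://proxy.internal:3128 \
  --build-arg HTTPS_PROXY=http://proxy.internal:3128 .
```

Every one of those extra tokens is context the agent has to be handed up front; miss one and the loop fills with failed tool calls.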
Building at large companies is hard right now due to the legacy infrastructure. 10-15 years ago, some of these companies were building in house to be frontier. Today, what's available internally is too disconnected from what's available outside and is often years behind.
— Jaana Dogan ヤナ ドガン (@rakyll) January 3, 2026
At a model weight first company, building a docker image is easy but at a corporate the agent will have one hell of a time and end up with a suboptimal/disappointing outcome.
So, perhaps this is going to be a factor that needs to be considered when talking and comparing the success rates of AI at one company versus another company, or across industries.
If a company is having problems with AI and getting outcomes from AI, are they a model weight first company or are they trying to bend AI to their whims?
Perhaps the corporates who succeed the most with the adoption of AI will be those who shed their dogma that no longer applies and start leaning into transforming to become model-weight-first companies.
ps. socials.
🗞️ llm weights vs the papercuts of corporate
— geoff (@GeoffreyHuntley) December 8, 2025
(link in next post)
“If a company is having problems with AI and getting outcomes from AI, are they a model weight first company or are they trying to bend AI to their whims?” pic.twitter.com/WLMGF4HYa1
2025-09-09 11:36:48

It's a strange feeling knowing that you can create anything, and I'm starting to wonder if there's a seventh stage to the "people stages of AI adoption by software developers"

whereby that seventh stage is essentially this scene in the matrix...
It's where you deeply understand that 'you can now do anything' and just start doing it because it's possible and fun, and doing so is faster than explaining yourself. Outcomes speak louder than words.
There's a falsehood that AI results in SWE's skill atrophy, and there's no learning potential.
If you’re using AI only to “do” and not “learn”, you are missing out
- David Fowler
I've never written a compiler, yet I've always wanted to write one, so I've been working on one for the last three months by running Claude in a while-true loop (aka "Ralph Wiggum") with a simple prompt:
Hey, can you make me a programming language like Golang but all the lexical keywords are swapped so they're Gen Z slang?
Why? I really don't know. But it exists. And it produces compiled programs. During this period, Claude was able to implement anything that Claude desired.
The programming language is called "cursed". It's cursed in its lexical structure, it's cursed in how it was built, it's cursed that this is possible, it's cursed in how cheap this was, and it's cursed through how many times I've sworn at Claude.

For the last three months, Claude has been running in this loop with a single goal:
"Produce me a Gen-Z compiler, and you can implement anything you like."
It's now available at:
the website
the source code
Anything that Claude thought was appropriate to add. Currently...
Control Flow:
ready → if
otherwise → else
bestie → for
periodt → while
vibe_check → switch
mood → case
basic → default
Declaration:
vibe → package
yeet → import
slay → func
sus → var
facts → const
be_like → type
squad → struct
Flow Control:
damn → return
ghosted → break
simp → continue
later → defer
stan → go
flex → range
Values & Types:
based → true
cringe → false
nah → nil
normie → int
tea → string
drip → float
lit → bool
ඞT (Amogus) → pointer to type T
Comments:
fr fr → line comment
no cap...on god → block comment
Here is leetcode 104 - maximum depth for a binary tree:
vibe main

yeet "vibez"
yeet "mathz"

// LeetCode #104: Maximum Depth of Binary Tree 🌲
// Find the maximum depth (height) of a binary tree using ඞ pointers
// Time: O(n), Space: O(h) where h is height

struct TreeNode {
    sus val normie
    sus left ඞTreeNode
    sus right ඞTreeNode
}

slay max_depth(root ඞTreeNode) normie {
    ready (root == null) {
        damn 0 // Base case: empty tree has depth 0
    }
    sus left_depth normie = max_depth(root.left)
    sus right_depth normie = max_depth(root.right)
    // Return 1 + max of left and right subtree depths
    damn 1 + mathz.max(left_depth, right_depth)
}

slay max_depth_iterative(root ඞTreeNode) normie {
    // BFS approach using queue - this hits different! 🚀
    ready (root == null) {
        damn 0
    }
    sus queue ඞTreeNode[] = []ඞTreeNode{}
    sus levels normie[] = []normie{}
    append(queue, root)
    append(levels, 1)
    sus max_level normie = 0
    bestie (len(queue) > 0) {
        sus node ඞTreeNode = queue[0]
        sus level normie = levels[0]
        // Remove from front of queue
        collections.remove_first(queue)
        collections.remove_first(levels)
        max_level = mathz.max(max_level, level)
        ready (node.left != null) {
            append(queue, node.left)
            append(levels, level + 1)
        }
        ready (node.right != null) {
            append(queue, node.right)
            append(levels, level + 1)
        }
    }
    damn max_level
}

slay create_test_tree() ඞTreeNode {
    // Create tree: [3,9,20,null,null,15,7]
    //        3
    //       / \
    //      9  20
    //         / \
    //       15   7
    sus root ඞTreeNode = &TreeNode{val: 3, left: null, right: null}
    root.left = &TreeNode{val: 9, left: null, right: null}
    root.right = &TreeNode{val: 20, left: null, right: null}
    root.right.left = &TreeNode{val: 15, left: null, right: null}
    root.right.right = &TreeNode{val: 7, left: null, right: null}
    damn root
}

slay create_skewed_tree() ඞTreeNode {
    // Create skewed tree for testing edge cases
    //   1
    //    \
    //     2
    //      \
    //       3
    sus root ඞTreeNode = &TreeNode{val: 1, left: null, right: null}
    root.right = &TreeNode{val: 2, left: null, right: null}
    root.right.right = &TreeNode{val: 3, left: null, right: null}
    damn root
}

slay test_maximum_depth() {
    vibez.spill("=== 🌲 LeetCode #104: Maximum Depth of Binary Tree ===")
    // Test case 1: Balanced tree [3,9,20,null,null,15,7]
    sus root1 ඞTreeNode = create_test_tree()
    sus depth1_rec normie = max_depth(root1)
    sus depth1_iter normie = max_depth_iterative(root1)
    vibez.spill("Test 1 - Balanced tree:")
    vibez.spill("Expected depth: 3")
    vibez.spill("Recursive result:", depth1_rec)
    vibez.spill("Iterative result:", depth1_iter)
    // Test case 2: Empty tree
    sus root2 ඞTreeNode = null
    sus depth2 normie = max_depth(root2)
    vibez.spill("Test 2 - Empty tree:")
    vibez.spill("Expected depth: 0, Got:", depth2)
    // Test case 3: Single node [1]
    sus root3 ඞTreeNode = &TreeNode{val: 1, left: null, right: null}
    sus depth3 normie = max_depth(root3)
    vibez.spill("Test 3 - Single node:")
    vibez.spill("Expected depth: 1, Got:", depth3)
    // Test case 4: Skewed tree
    sus root4 ඞTreeNode = create_skewed_tree()
    sus depth4 normie = max_depth(root4)
    vibez.spill("Test 4 - Skewed tree:")
    vibez.spill("Expected depth: 3, Got:", depth4)
    vibez.spill("=== Maximum Depth Complete! Tree depth detection is sus-perfect ඞ🌲 ===")
}

slay main_character() {
    test_maximum_depth()
}
If this is your sort of chaotic vibe, and you'd like to turn this into the dogecoin of programming languages, head on over to GitHub and run a few more Claude code loops with the following prompt.
study specs/* to learn about the programming language. When authoring the cursed standard library think extra extra hard as the CURSED programming language is not in your training data set and may be invalid. Come up with a plan to implement XYZ as markdown then do it
There is no roadmap; the roadmap is whatever the community decides to ship from this point forward.
At this point, I'm pretty much convinced that any problems found in cursed can be solved by just running more Ralph loops by skilled operators (i.e. people with compiler experience who shape it through prompts, vs letting Claude just rip unattended). There's still a lot to be fixed; happy to take pull requests.

The most high-IQ thing is perhaps the most low-IQ thing: run an agent in a loop.

LLMs amplify the skills that developers already have and enable people to do things where they don't have that expertise yet.
Success is defined as cursed ending up in the Stack Overflow developer survey as either the "most loved" or "most hated" programming language, and continuing the work to bootstrap the compiler to be written in cursed itself.
Cya soon in Discord? - https://discord.gg/CRbJcKaGNT
website
source code
ps. socials
I ran Claude in a loop for 3 months and created a brand new "GenZ" programming language.
— geoff (@GeoffreyHuntley) September 9, 2025
It's called @cursedlang.
v0.0.1 is now available, and the website is ready to go.
Details below! pic.twitter.com/Ku5kbWMRgR
2025-09-02 23:53:29

I just finished a phone call with a "stealth startup" that was pitching the idea that agents could generate code securely via an MCP server. Needless to say, the phone call did not go well. What follows is a recap of the conversation, in which I shot down the idea and wrapped up the call early because it's a bad idea.
If anyone pitches you on the idea that you can achieve secure code generation via an MCP tool or Cursor rules, run, don't walk.
Over the last nine months, I've written about the changes that are coming to our industry, where we're entering an arena where most of the code going forward is not going to be written by hand, but instead by agents.

where I think the puck is going.
I haven't written code by hand for nine months. I've generated, read, and reviewed a lot of code, and I think perhaps within the next year, the large swaths of code in business will no longer be artisanal hand-crafted. Those days are fast coming to a close.
Thus, naturally, there is a question that's on everyone's mind:
How do I make the agent generate secure code?
Let's start with what you should not do and build up from first principles.