
Dear Student: Yes, AI is here, you're screwed unless you take action...

2025-02-27 17:26:00


Two weeks ago a student anonymously emailed me asking for advice. This is my reply: if I were in your shoes, this is what I'd do.

So, I read your blog post "An oh f*** moment in time" alongside "The future belongs to idea guys that can just do things", and decided "fine, I'll go try cursor for myself and see how well composer does". I normally use copilot and some claude, but not much since getting the context of my 50kloc project into Claude is rough, and I didn't want to start paying for API use so I could start using i.e aider (I normally get copilot for free). I heard some claims about cursor and friends that I assumed were pumped up LinkedIn talk, so I really didn't think about it until now.

For example, I've been working on a little JNI module for my android app, but it has a bunch of weird memory errors because managing the JNI resources is borderline impossible and C++ RAII has piles of insane footguns. So, I planned to rewrite it in Rust so I could 100% isolate the memory unsafety. This was a tall order, it was a three-way integration between a bulky C++ library with it's own CMake setup, and the JNI code of my app. It required some really messy translations from a rust trait into a C++ abstract class. It would take a lot to integrate it into my app. I decided it was a good enough benchmark to see if composer was up to snuff.

First go around was not well, I gave the lump sum of all the work I wanted and it choked and went around in circles. Tried to go back to debugging the memory problems manually, but honestly I hate C++ and decided to another go around with composer and the rust module. Second time I sliced up the task into smaller but still pretty big pieces and fed them one at a time to composer.

This time, composer probably wrote 99% of the code. Porting the previous shell script, making the shims I needed to map a trait to a C++ abstract class, porting the manual JNI code, and the massive buildscript linking it all together. It debugged its own work  when I pasted error traces in (sadly since this rust module was shoehorned into my app composer agent couldn't really test on it's own). And it just worked. There was some stuff it couldn't do, mostly choking on big error traces and incremental build bugs (thanks gradle), and sometimes I needed to point it in a better direction architecturally, but otherwise I barely touched the code. All I did was direct composer to do what I was thinking of. I then just killed time while it did the work.

I'm at a loss now. I'm a student. Fresh grads and interns aren't seen as idea guys that can get stuff done, they're seen as useless ticket monkeys to be herded by seniors. Now Cursor Composer is the ticket monkey presumably to be herded by a senior, not me. And maybe even the senior isn't needed if a scaled enough reasoner model can do the task slicing. I've already felt like I've been shuffling chairs on the titanic, talking about projects and internships as if this threat doesn't exist, and honestly this confirms it. Is there literally any reason why I shouldn't just throw away the 10 years of my life and do...I don't even know what. At this development pace I don't even know if I have time to "pivot" before everything else gets solved by o4 or whatever. What do I even do anymore?

Thanks,
An Anonymous Student

that's a strongly worded headline...

It's just facts; I'm a straight shooter. I'd rather tell it to you straight and provide actionable advice than placate feelings.

The steps that you take now will determine your success rate with obtaining a SWE role going forward. If you are a high autonomy person then you're not fucked, as long as you take action.

it has all happened before

The Software Development industry works in cycles; this is the third bust after a boom that I've weathered. Understanding what happened in the 2000 dot-com bust is important because it led to the 2017 boom.

So, let's rewind time.


It's now the year 1998, and if you knew how to make a website using Dreamweaver, how to cobble some Perl together and use RSH to connect to a Sun E4500, then you were guaranteed a role as a software engineer.

A massive run-up of hype and an injection of too much venture capital money resulted in a talent market where there weren't enough software engineers in the world.

Life was great. That is, until it wasn't. The bubble burst in 2000 and an engineer called Philip J. "Pud" Kaplan chronicled the downfall on a website called "Fucked Company dot com". To put into perspective how important and meaningful this website was at the time - it was the equivalent of HackerNews (or Slashdot) of its day.


Suddenly, all the jobs dried up overnight. Students who had just graduated and were looking for a sweet-sweet software engineering position were up shit-creek just like you are now (if you don't take action).

Seemingly overnight, employment switched from a seller's market (ie. in employees' favor) to a buyer's market (ie. in employers' favor) because there was an oversupply of senior engineering talent desperate to find their next job. Why would a company hire a graduate when they could obtain someone with five years of experience on the cheap?

It was a COVID19 moment in time: if you were born in the wrong year (ie. year 12 high school students disrupted in Australia) then your life was utterly turned upside-down.

That generation of graduates simply missed out on opportunities and core life experiences…

it took many years to recover from the bust

but here's the kicker. Because there was an under-supply of graduates - companies weren't raising the next generation - it led to a shortage of talented software engineers in the market. The result was inevitable.

The market switched sides to where the employees had all the power, and due to another massive round of venture capital injection, money was flowing freely.

Every company out there wanted to attract "google pedigree talent". Companies everywhere started offering perks and compensation packages to tempt that talent to join.

L3 at Google is A$280,710 AUD/year. L5 at Google is A$592,734/year. L8 is A$1,655,874/year.

Word got out and suddenly everyone wanted to become a software engineer because the money was absurd. YouTube is now a thing, and all sorts of grifters turned up selling courseware on how to get a job and boasting about their millions - which resulted in even more people becoming software engineers (just for the money) because it was an easy path to fast retirement.

All you needed to do was get in the door, hold on for dear life and hop between the levels.fyi ladders.

It's now the year 2023 and, after a strong 14 years of growth, the industry fell on its face once again. Interest rates went up and parking money-with-risk at venture capital firms was no longer a desirable option for investors because other alternatives were now available at lower risk.


Whilst I managed to avoid the original dot-com bust, this one affected me personally. I was one of the software engineers caught up in the 219,709 layoffs and it took me (with my experience) six months to find a suitable replacement role.

If you graduated from university or entered the job market in 2023 then you were competing against me (and/or circa 219,709 qualified employees with experience).

ie. screwed…


it is now the year 2025

AI is at our doorsteps and the employment market has not recovered from the last bust. You're lucky in a way because you realized early what's coming and have time to take action. There are many software engineers that simply aren't gonna make it because they haven't had the epiphany that you have and haven't even started progressing through the people stages of AI adoption...


this time it's different

A brilliant co-worker penned the following chart and words, which explain the impact of AI on different levels of engineers...


Junior Engineer

You’re just starting your work in a new codebase and you’re still piecing together a solid mental model of how things actually work. Here, an LLM is a lifesaver. Stuck on an error? An LLM can give you an explanation that makes sense. Need to write some code for a minor feature, or do a library upgrade? All of this can be done much faster with an LLM.

An LLM can already feel like it can do a huge part of your job for you. That's why I believe there’s a real danger zone here. If you lean on an LLM as a shortcut to get unstuck in the same way as you’d reach out to your more senior colleagues when you’d otherwise have to ask – then fine. In the real world, chances are you won’t have the luxury of avoiding LLMs even if you wanted to. However, if you end up copy-pasting code back and forth between your IDE and the LLM without truly understanding what’s happening or why, then advancing your engineering skills will become a serious challenge.

Mid-Level Engineer

You’ve built up a fair amount of context and can navigate your codebase with confidence. You still find that LLMs make you write code much faster. You can ship features faster with copilot’s completion, use agents to write less boilerplate code, learn new frameworks much faster with ChatGPT.

However, you’re already bumping into cases that an LLM simply can’t handle yet. It won’t decipher what the customers actually wanted from the ticket you were given, it can’t use your debugger to pinpoint a dangerous race condition, and it can’t help you much when you’re responding to a midnight on-call alert.

Senior Engineer

You've got a great mental model of the whole codebase you're responsible for. You know all of its ins and outs. Hell, you probably wrote a decent part of it. Sure, you can code much faster, and you enjoy it, but how much time do you really spend writing code? When you work on the roadmap, it can’t really help you much. When you dive into a weird heisenbug, it can’t really help you—it gets confused. When you’re writing an extensive design document for the next project, it can only help you with the formatting and structure, not the hardest part – the substance. It just doesn’t have all the nuance and context you’ve accumulated in your head, and even if you wanted to, you couldn’t write it down.

Many of your friends and colleagues are excited, and you want to be excited, but you just can’t. The AI is simply not there yet. This is probably the level where the most scepticism about LLMs comes from, and the more technical or unique your domain is, the stronger the disillusionment.

Staff+ Engineer

While there are many staff archetypes out there, one thing is common between them all – your role is often to light the path for others to follow. And to achieve this, you have to experiment a lot.

Here’s where LLMs can start shining again. Writing proof-of-concept projects has suddenly become much easier. If you need to show the feasibility of taking an approach, an LLM can help create a half-baked, barely working solution much faster than without it. And the best part is that once the LLM gets stuck, you can very quickly get it unstuck using the extensive domain knowledge in your brain.

companies are closing their doors on juniors

Staff+, senior and mid-level engineers (who invest in themselves) are now more desirable because they can use their expertise to output multiples more code, and thanks to wisdom earned over the years (or tens of years) they are able to obtain better outcomes from AI: they have the experience to know when the AI is bullshitting and have developed taste for what does or does not look right.

No other profession trivialises their profession to the degree of software
Software in 2022 is overwhelmingly built with little to no consequence and is made up of other components which are overwhelmingly developed by unpaid volunteers on an AS-IS basis that are being financially neglected. Systemically, I’m concerned that there is a lack of professional liability, rigorous industry best practices, and

companies don't even know how to hire anymore

With hundreds of thousands of dollars at stake, all the incentives are there for candidates to cheat. The video below shows one of many tools that exist today which hook the macOS video renderer and provide overlays (similar to how OpenGL game hacks work) that can't be detected by screen-recording software or Zoom.

The software interview process was never great but it's taken a turn for the worse, as AI can easily solve anything thrown at it - including interview screenings.

Another co-worker of mine recently penned the blog post below which went viral on HackerNews. I highly recommend reading the comments.

AI Killed The Tech Interview. Now What?
How can we do better interviews in the age of AI

companies' business models are in jeopardy

It's now incredibly easy to clone any SaaS company out there in existence if you know how to drive AI and have the expertise.

You've already read the article below where this was hinted at, so I'll supplement it with the following wisdom. I suspect the future of work is going to be lots of small 10-person companies operating similarly to how a law firm works - with profit sharing between the senior partners.

The future belongs to idea guys who can just do things
There, I said it. I seriously can’t see a path forward where the majority of software engineers are doing artisanal hand-crafted commits by as soon as the end of 2026. If you are a software engineer and were considering taking a gap year/holiday this year it would be an

so, when does this story get good?

If you graduated last year and are entering the workforce this year then it doesn't unless you take action. It's a COVID19 moment in life again.

If you are a student who has just started university and will be graduating in four years from now - there will be software engineering roles, although they will be different.

What's likely to happen is - if more companies close their doors to juniors then the next generation of juniors won't be raised - similar to what happened in the dotcom boom/bust - and we’ll get another boom - which will lead to the incredible perks and fat paychecks for people with the right skills because not enough juniors entered the workforce…

what would I do if I were in your shoes...

Understand that time is on your side. You have about a year, maybe less. Whatever you do - do not squander it. Your edge right now is that the large majority of software engineers have not discovered what you have. When they do - it's game over unless you have an edge.


Learn the fundamentals that university typically does not teach:

  • Create an application (it can be anything, even a basic todo app website)
  • Learn how to do property-based testing and how to craft code that can be tested (see the sketch after this list).
  • Set up a CI pipeline (something other than GitHub Actions).
  • Learn SCM (source code management) such as Git from first principles (pdf).
  • Learn how to release software in increments using SCM+CI+Property Based Testing.
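
To make the property-based testing point concrete, here's a minimal sketch in Rust using the proptest crate (my assumption - any property-testing library in any language teaches the same muscle). Instead of hand-picking test cases, you state an invariant and let the framework hunt for counter-examples:

```rust
// Cargo.toml (dev-dependencies): proptest = "1"
use proptest::prelude::*;

// The function under test: reversing a slice.
fn reverse<T: Clone>(xs: &[T]) -> Vec<T> {
    xs.iter().rev().cloned().collect()
}

proptest! {
    // Property: reversing twice returns the original input, for *any* Vec<i32>
    // the framework generates - not just the three examples you thought of.
    #[test]
    fn reversing_twice_returns_the_original(xs in any::<Vec<i32>>()) {
        prop_assert_eq!(reverse(&reverse(&xs)), xs);
    }
}
```

Code that can be tested this way tends to be made of small, pure functions with explicit inputs and outputs - which is exactly the shape of code that agents and CI pipelines can verify for you.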

Find yourself a peer that pushes you to the limits of learning
The standard pace of learning is for chumps. If you are better than the average then you can fold space-time and outpace the competition. Read this blog post.

There’s no speed limit | Derek Sivers

Do not join a Startup
In the last VC boom and bust that happened before AI, many startups raised too much capital or engaged in outright fraud to pump their customer numbers up by cross-selling between each other. The AI bust that's about to happen is going to be brutal for them - there are many, many living-dead zombie companies out there.

Only join an existing startup if it pays well (ignore the equity/dreams of a jackpot) and if you find an operating environment which pushes your growth and learning to the limits.

Understand that most startups fail and you could be out on your ass overnight. Ensure you have plenty of cash stowed away - it could take you 6+ months to find a new role.

Don't get a job that has a ban on AI coding tools
You'll be doing yourself a huge disservice. Look for companies that encourage it.

Obtain skills that are going to be highly desirable by every employer
I'm going to call it right now - ignore commodity bullshit disposable knowledge like AWS - focus on what people don't know and what will be in demand.

Become a subject matter expert in MCPs (Model Context Protocol servers). There is brand new territory open to you right now to become a prolific open-source author of MCPs, as not many people are creating them at this moment in time.
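
To demystify what an MCP actually is: at its core it's JSON-RPC messages over stdio (or HTTP). Here's a hand-rolled sketch in Rust of a toy tool server - no SDK, just serde_json - so you can see the moving parts. The method names (initialize, tools/list, tools/call) and field shapes follow my reading of the MCP spec at the time of writing; verify them against the current spec (and prefer an official SDK) before shipping anything real:

```rust
// Cargo.toml: serde_json = "1"
// A toy MCP-style tool server speaking newline-delimited JSON-RPC over stdio.
use serde_json::{json, Value};
use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    for line in stdin.lock().lines() {
        let line = line?;
        if line.trim().is_empty() {
            continue;
        }
        let req: Value = match serde_json::from_str(&line) {
            Ok(v) => v,
            Err(_) => continue,
        };
        let result = match req["method"].as_str() {
            Some("initialize") => json!({
                "protocolVersion": "2024-11-05",
                "capabilities": { "tools": {} },
                "serverInfo": { "name": "demo-mcp", "version": "0.1.0" }
            }),
            Some("tools/list") => json!({
                "tools": [{
                    "name": "word_count",
                    "description": "Count the words in a string",
                    "inputSchema": {
                        "type": "object",
                        "properties": { "text": { "type": "string" } }
                    }
                }]
            }),
            Some("tools/call") => {
                let text = req["params"]["arguments"]["text"].as_str().unwrap_or("");
                json!({
                    "content": [{
                        "type": "text",
                        "text": text.split_whitespace().count().to_string()
                    }]
                })
            }
            // Ignore notifications and anything else in this sketch.
            _ => continue,
        };
        let resp = json!({ "jsonrpc": "2.0", "id": req["id"].clone(), "result": result });
        println!("{resp}");
        io::stdout().flush()?;
    }
    Ok(())
}
```

Once you can see that a "tool" is just a named JSON schema plus a handler, publishing useful MCPs stops feeling like magic and starts feeling like an afternoon of work.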

Pull apart https://github.com/block/goose and https://github.com/All-Hands-AI/OpenHands. Learn how they work and then build your own AI coding assistant from the ground up - from first principles.
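
If "build your own coding assistant" sounds daunting, it isn't - the core of every one of these tools is a loop: send context to a model, get a proposed action back, execute it, feed the result back in. Here's a deliberately tiny sketch of that loop in Rust against an OpenAI-compatible chat completions endpoint (the URL, model name and one-command-per-turn convention are my own placeholders, not anything goose or OpenHands do specifically):

```rust
// Cargo.toml: reqwest = { version = "0.12", features = ["blocking", "json"] }, serde_json = "1"
use serde_json::{json, Value};
use std::{env, error::Error, process::Command};

fn main() -> Result<(), Box<dyn Error>> {
    let key = env::var("OPENAI_API_KEY")?;
    let client = reqwest::blocking::Client::new();
    let mut messages = vec![
        json!({ "role": "system", "content":
            "You are a coding agent. Reply with exactly one shell command per turn, nothing else. Reply DONE when finished." }),
        json!({ "role": "user", "content": "List the Rust source files in this repository." }),
    ];

    // Cap the number of turns in this sketch.
    for _ in 0..5 {
        let resp: Value = client
            .post("https://api.openai.com/v1/chat/completions")
            .bearer_auth(&key)
            .json(&json!({ "model": "gpt-4o", "messages": messages }))
            .send()?
            .json()?;
        let reply = resp["choices"][0]["message"]["content"]
            .as_str()
            .unwrap_or("")
            .trim()
            .to_string();
        messages.push(json!({ "role": "assistant", "content": reply }));
        if reply == "DONE" {
            break;
        }
        // Execute the proposed command and feed the outcome back as the next user turn.
        let out = Command::new("sh").arg("-c").arg(&reply).output()?;
        let feedback = format!(
            "exit: {:?}\nstdout:\n{}\nstderr:\n{}",
            out.status.code(),
            String::from_utf8_lossy(&out.stdout),
            String::from_utf8_lossy(&out.stderr)
        );
        messages.push(json!({ "role": "user", "content": feedback }));
    }
    Ok(())
}
```

Everything else in goose, OpenHands and Cursor - planning, context management, sandboxing, rules - is layered on top of that loop, and that layering is precisely what's worth pulling apart and learning.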

You want to be the employee that companies are fighting for because you have 6+ months edge on everyone else and are bringing new ideas/techniques to the table. Become the one that understands what all these tools do - under the hood - better than an engineer who's 20+ years into their career who hasn't been paying attention.

Focus on shipping with rust
Every programming language community and ecosystem has its window in time where it attracts brilliant minds. Right now, that's the Rust ecosystem. It's full of innovators - you want to be hanging out with the innovators. If you focus on learning something commodity - such as TypeScript - you are positioning yourself incorrectly. Having said all of that, programming languages no longer matter, but Rust has a unique property where it's able to achieve better outcomes with LLMs due to its type system, which other programming languages (except Haskell) lack.
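
To illustrate what "the type system does the reinforcing" means in practice, here's a trivial sketch (the names are made up for the example). When an agent adds a new enum variant, every non-exhaustive match in the codebase becomes a compile error - a precise, machine-readable signal you can pipe straight back into the model:

```rust
#[derive(Debug)]
enum DeployTarget {
    Staging,
    Production,
    // If an agent adds `Canary` here, `endpoint` below stops compiling until
    // the new case is handled - the compiler error is the feedback loop.
}

fn endpoint(target: DeployTarget) -> &'static str {
    match target {
        DeployTarget::Staging => "https://staging.example.com",
        DeployTarget::Production => "https://example.com",
    }
}

fn main() {
    println!("{}", endpoint(DeployTarget::Staging));
}
```

A dynamically-typed codebase gives the agent none of that; the mistake only surfaces at runtime, long after the context window has moved on.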

Understand how your work directly translates to business value
The organizational abstraction layers are shrinking. Going forward it's incredibly important to think and act like an entrepreneur. You aren't a software engineer who types code into an IDE - those days are gone. Instead you need to understand how your work directly translates to business value, and to achieve that you need to become a product engineer. Find yourself a product manager (or founder) who has immense domain depth and befriend them. Learn which problems are high value to solve and discover their pain-points/research items by walking in their boots or in the boots of a customer. Be a doctor, not a waiter. Learn a little bit about business and finance.

Build a widget, not an aeroplane
When building, start small. As soon as you make the tiniest of things, share it with the world. Take an approach of building lots of small things that compose well and can be combined to make something big. It's a marathon, not a sprint.

Build a public profile
I can't stress this enough. So many opportunities in my life which yielded fruits only happened because I manifested the luck. Once you have a couple MCPs under your belt and you are on the way to building your own AI coding assistant - start networking. Don't network to get a job. Network to share your knowledge with others.

In application this means:

  • Creating your own personal professional website (ie. name dot com) and regularly posting to it your learnings. Start small - build this - https://til.simonwillison.net/
  • Creating a GitHub account and start publishing all your work there
  • Publishing your MCP tools on npm and/or crates.io (recruiters scour these two)
  • Attend meetups - they are always looking for speakers. Public speaking is hard; no matter how good you get at it, it always remains hard. Push past it and start sharing your knowledge.
  • Identify peers and build relations with them. Share your knowledge without asking for anything in return.

Play with the tools, develop edges and unique insights...

You are using Cursor AI incorrectly...
I’m hesitant to give this advice away for free, but I’m gonna push past it and share it anyway. You’re using Cursor incorrectly. Over the last few weeks I’ve been doing /zooms with software engineers - from entry level, to staff level and all the way up to principal level.

ps. socials @ https://x.com/GeoffreyHuntley/status/1895043009991032996

I had my AI "oh f***" moment and I'm a student, now what?

2025-02-09 07:09:44

What follows is an email that arrived in my inbox moments ago, reproduced in its entirety. I'll be doing a response letter, after I get some sleep. For now, discuss at https://x.com/GeoffreyHuntley/status/1888365040572751973

Hi there,

So, I read your blog post "An oh f*** moment in time" alongside "The future belongs to idea guys that can just do things", and decided "fine, I'll go try cursor for myself and see how well composer does". I normally use copilot and some claude, but not much since getting the context of my 50kloc project into Claude is rough, and I didn't want to start paying for API use so I could start using i.e aider (I normally get copilot for free). I heard some claims about cursor and friends that I assumed were pumped up LinkedIn talk, so I really didn't think about it until now.

For example, I've been working on a little JNI module for my android app, but it has a bunch of weird memory errors because managing the JNI resources is borderline impossible and C++ RAII has piles of insane footguns. So, I planned to rewrite it in Rust so I could 100% isolate the memory unsafety. This was a tall order, it was a three-way integration between a bulky C++ library with it's own CMake setup, and the JNI code of my app. It required some really messy translations from a rust trait into a C++ abstract class. It would take a lot to integrate it into my app. I decided it was a good enough benchmark to see if composer was up to snuff.

First go around was not well, I gave the lump sum of all the work I wanted and it choked and went around in circles. Tried to go back to debugging the memory problems manually, but honestly I hate C++ and decided to another go around with composer and the rust module. Second time I sliced up the task into smaller but still pretty big pieces and fed them one at a time to composer.

This time, composer probably wrote 99% of the code. Porting the previous shell script, making the shims I needed to map a trait to a C++ abstract class, porting the manual JNI code, and the massive buildscript linking it all together. It debugged its own work  when I pasted error traces in (sadly since this rust module was shoehorned into my app composer agent couldn't really test on it's own). And it just worked. There was some stuff it couldn't do, mostly choking on big error traces and incremental build bugs (thanks gradle), and sometimes I needed to point it in a better direction architecturally, but otherwise I barely touched the code. All I did was direct composer to do what I was thinking of. I then just killed time while it did the work.

I'm at a loss now. I'm a student. Fresh grads and interns aren't seen as idea guys that can get stuff done, they're seen as useless ticket monkeys to be herded by seniors. Now Cursor Composer is the ticket monkey presumably to be herded by a senior, not me. And maybe even the senior isn't needed if a scaled enough reasoner model can do the task slicing. I've already felt like I've been shuffling chairs on the titanic, talking about projects and internships as if this threat doesn't exist, and honestly this confirms it. Is there literally any reason why I shouldn't just throw away the 10 years of my life and do...I don't even know what. At this development pace I don't even know if I have time to "pivot" before everything else gets solved by o4 or whatever. What do I even do anymore?

Thanks,
An Anonymous Student

You are using Cursor AI incorrectly...

2025-02-09 02:22:59


I'm hesitant to give this advice away for free, but I'm gonna push past it and share it anyway. You're using Cursor incorrectly.

Over the last few weeks I've been doing /zooms with software engineers - from entry level, to staff level and all the way up to principal level.

Here's what I've seen:

  • Using Cursor as a replacement for Google Search.
  • Under-specification of prompts: not knowing how to drive outcomes and using low-level thinking of "implement XYZ please".
  • Treating Cursor as if it is an IDE, instead of it being an autonomous agent.
  • Blissful unawareness of the concept that you can program LLM outcomes.
  • Unnecessary usage of pleasantries ("please" and "can you") with it as if it was a human. If it fucks up, swear at it - go all caps and call it a clown. It soothes the soul.

Okay, well that last point - it doesn't really change the outcome of Cursor so let's focus on the other points....

Cursor has a pretty powerful feature called Cursor Rules, and it's a killer feature that is being slept on/misunderstood. A quick scour of GitHub for implementations and a scout of the community forums reveal that people are using them incorrectly - they all roughly look like this...

# WordPress PHP Guzzle Gutenberg .cursorrules prompt file

Author: <redacted>

## What you can build
E-commerce Store Integration Plugin: Create a WordPress plugin that integrates various e-commerce platforms using the Guzzle-based HTTP client, allowing users to manage products, orders, and inventory directly from their WordPress dashboard. Include Gutenberg blocks for adding product listings and shopping cart functionality.Social Media Auto Poster: Develop a plugin that automatically shares new WordPress posts to connected social media accounts by utilizing the Guzzle HTTP client for API interactions. Provide Gutenberg blocks for social media settings and customization of post content.Custom Form Builder with REST API Submission: Design a WordPress form builder plugin that creates custom forms with Gutenberg blocks and submits entries via the WP REST API. Include options for saving entries to external databases or services through the Guzzle client.SEO Optimization Toolkit: Build a plugin that offers SEO analysis and recommendations using external APIs accessed via Guzzle. Implement Gutenberg blocks showing SEO scores and suggestions for improving content directly in the editor.Content Syndication Hub: Offer a plugin that enables easy content syndication across multiple WordPress sites and external platforms, leveraging GUzzle for HTTP requests and REST API endpoints for managing syndication settings.Custom Analytics Dashboard: Create a WordPress plugin that presents a personalized analytics dashboard, pulling data from multiple third-party services using Guzzle. Utilize Gutenberg blocks to display graphs, statistics, and insights directly within the WordPress admin.Dynamic Content Importer: Develop a plugin that periodically imports and updates content from specified external sources using Guzzle. Provide Gutenberg blocks for configuring import settings, schedules, and display options for the imported content.Advanced Newsletter Integration: Implement a plugin that connects WordPress to various newsletter services using the Guzzle client, enabling automated email campaigns based on website activity. Include Gutenberg blocks for subscription forms and campaign management.Multilingual Content Manager: Design a plugin for managing multilingual content in WordPress, using the Guzzle client to access and translate content via external translation APIs. Gutenberg blocks can be used for displaying translated content and managing language settings.Real-Time Cryptocurrency Ticker: Create a Gutenberg block plugin that displays real-time cryptocurrency prices and market data by leveraging Guzzle to fetch data from financial APIs. Offer users customizable ticker settings directly within the WordPress dashboard.

## Benefits


## Synopsis
WordPress developers can create a plugin that integrates external APIs using Guzzle, adds custom WP REST endpoints, and introduces Gutenberg blocks, adhering to WordPress coding standards and optimizing code readability.

## Overview of .cursorrules prompt
The .cursorrules file provides guidelines for developing a WordPress plugin that includes a Guzzle-based HTTP client, WP REST endpoint additions, and new Gutenberg editor blocks. It emphasizes using WordPress coding standards for PHP, JavaScript, and TypeScript, with a preference for TypeScript over JavaScript. The file promotes functional programming paradigms and composition over inheritance while ensuring consistency with WordPress ecosystem best practices. Additionally, it stresses the importance of optimizing code for readability and employing type hinting in PHP code.

Instead of approaching Cursor from the angle of "implement XYZ of code", you should be thinking of building out a "stdlib" (standard library) of thousands of prompting rules and then composing them together like Unix pipes.

The first rule that every engineering project should have is a function that describes where to store the rules...

---
description: Cursor Rules Location
globs: *.mdc
---
# Cursor Rules Location

Rules for placing and organizing Cursor rule files in the repository.

<rule>
name: cursor_rules_location
description: Standards for placing Cursor rule files in the correct directory
filters:
  # Match any .mdc files
  - type: file_extension
    pattern: "\\.mdc$"
  # Match files that look like Cursor rules
  - type: content
    pattern: "(?s)<rule>.*?</rule>"
  # Match file creation events
  - type: event
    pattern: "file_create"

actions:
  - type: reject
    conditions:
      - pattern: "^(?!\\.\\/\\.cursor\\/rules\\/.*\\.mdc$)"
        message: "Cursor rule files (.mdc) must be placed in the .cursor/rules directory"

  - type: suggest
    message: |
      When creating Cursor rules:

      1. Always place rule files in PROJECT_ROOT/.cursor/rules/:
         ```
         .cursor/rules/
         ├── your-rule-name.mdc
         ├── another-rule.mdc
         └── ...
         ```

      2. Follow the naming convention:
         - Use kebab-case for filenames
         - Always use .mdc extension
         - Make names descriptive of the rule's purpose

      3. Directory structure:
         ```
         PROJECT_ROOT/
         ├── .cursor/
         │   └── rules/
         │       ├── your-rule-name.mdc
         │       └── ...
         └── ...
         ```

      4. Never place rule files:
         - In the project root
         - In subdirectories outside .cursor/rules
         - In any other location

examples:
  - input: |
      # Bad: Rule file in wrong location
      rules/my-rule.mdc
      my-rule.mdc
      .rules/my-rule.mdc

      # Good: Rule file in correct location
      .cursor/rules/my-rule.mdc
    output: "Correctly placed Cursor rule file"

metadata:
  priority: high
  version: 1.0
</rule>

Now, you might be wondering why? Ah. That's because people are missing out on the fact that you can ask Cursor to write rules. To build out your "stdlib" you are going to be asking Cursor to write rules and update rules with learnings as if your career depended upon it.

The foundational LLM models right now are what I'd estimate to be at circa 45% accuracy and require frequent steering. When doing a session in fully automated YOLO mode my instructions to composer roughly follow these steps:

  • A lengthy discussion about requirements and listing the requirements out in numbered bullet points so that I can cite the specific requirement when something needs changing or if something goes wrong.
  • Asking cursor to write the requirements to a file, that I can reinject back in to the context window if required.
  • Attaching the @file and @file-test into the context. Specifically instructing Cursor to "inspect and describe the file"
  • Asking the agent to implement "XYZ" requirement, author tests and documentation.
  • Run builds and tests after each change.
  • Perform a git commit (via a configured rule) if everything went alright.

When a requirement is implemented successfully - great, so what? The key thing is the steps of intervention when the foundational models let you down, and the actions you take after it gets it right.

it's a numbers game, and you are in full control of the odds via your stdlib

I know you have been reading for a while, so here's the grand reveal.

When Cursor gets it right after intervention, ask it to author a rule or update a rule with its learnings.

In my monorepo, I exclusively use Nix. Yet, Cursor kept recommending solutions which involved Bazel and creating BUILD.bazel files. After a stern fuck you was exchanged with cursor I asked it to create a rule to ensure it never happened again.

---
description: No Bazel
globs: *
---
# No Bazel

Strictly prohibits any Bazel-related code, recommendations, or tooling.

<rule>
name: no_bazel
description: Strictly prohibits Bazel usage and recommendations
filters:
  # Match any Bazel-related terms
  - type: content
    pattern: "(?i)\\b(bazel|blaze|bzl|BUILD|WORKSPACE|starlark|\\.star)\\b"
  # Match build system recommendations
  - type: intent
    pattern: "build_system_recommendation"
  # Match file extensions
  - type: file_extension
    pattern: "\\.(bzl|star|bazel)$"
  # Match file names
  - type: file_name
    pattern: "^(BUILD|WORKSPACE)$"

actions:
  - type: reject
    message: |
      Bazel and related tools are not allowed in this codebase:
      - No Bazel build files or configurations
      - No Starlark (.star/.bzl) files
      - No Bazel-related tooling or dependencies
      - No recommendations of Bazel as a build system

      Please use Nix for build and dependency management.

  - type: suggest
    message: |
      Instead of Bazel, consider:
      - Nix for reproducible builds and dependencies
      - Make for simple build automation
      - Language-native build tools
      - Shell scripts for basic automation

examples:
  - input: "How should I structure the build?"
    output: "Use Nix for reproducible builds and dependency management. See our Nix documentation for examples."
  - input: "Can we add a Bazel rule?"
    output: "We use Nix overlays instead of Bazel rules. Please convert this to a Nix overlay."

metadata:
  priority: critical
  version: 2.0
</rule>

Ever since that moment Cursor no longer tries to push Bazel down my throat. So yeah, you can clamp and fine-tune responses.

Another thing that you can do is provide instructions that allow you to do IF-THIS-THEN-THAT. Here's an example where when new files are added by cursor it automatically invokes my software licensing tool to add the appropriate copyright headers.

---
description: Depot Add License Header
globs: *
---
# Add License Header

Automatically add license headers to new files.

<rule>
name: add_license_header
description: Automatically add license headers to new files
filters:
  - type: file_extension
    pattern: "*"
  - type: event
    pattern: "file_create"
actions:
  - type: execute
    command: "depot-addlicense \"$FILE\""
  - type: suggest
    message: |
      License headers should follow these formats:

      Go files:
      ```go
      // Copyright (c) 2025 Geoffrey Huntley <[email protected]>. All rights reserved.
      // SPDX-License-Identifier: Proprietary
      ```

      Nix files:
      ```nix
      # Copyright (c) 2025 Geoffrey Huntley <[email protected]>. All rights reserved.
      # SPDX-License-Identifier: Proprietary
      ```

      Shell files:
      ```bash
      # Copyright (c) 2025 Geoffrey Huntley <[email protected]>. All rights reserved.
      # SPDX-License-Identifier: Proprietary
      ```
metadata:
  priority: high
  version: 1.0
</rule>

Okay, that's interesting but it's not cool. What if we automated commits to source control after every successful requirement was done? Easy....

# Git Conventional Commits

Rule for automatically committing changes made by CursorAI using conventional commits format.

<rule>
name: conventional_commits
description: Automatically commit changes made by CursorAI using conventional commits format
filters:
  - type: event
    pattern: "build_success"
  - type: file_change
    pattern: "*"

actions:
  - type: execute
    command: |
      # Extract the change type and scope from the changes
      CHANGE_TYPE=""
      case "$CHANGE_DESCRIPTION" in
        *"add"*|*"create"*|*"implement"*) CHANGE_TYPE="feat";;
        *"fix"*|*"correct"*|*"resolve"*) CHANGE_TYPE="fix";;
        *"refactor"*|*"restructure"*) CHANGE_TYPE="refactor";;
        *"test"*) CHANGE_TYPE="test";;
        *"doc"*|*"comment"*) CHANGE_TYPE="docs";;
        *"style"*|*"format"*) CHANGE_TYPE="style";;
        *"perf"*|*"optimize"*) CHANGE_TYPE="perf";;
        *) CHANGE_TYPE="chore";;
      esac

      # Extract scope from file path
      SCOPE=$(dirname "$FILE" | tr '/' '-')

      # Commit the changes
      git add "$FILE"
      git commit -m "$CHANGE_TYPE($SCOPE): $CHANGE_DESCRIPTION"

  - type: suggest
    message: |
      Changes should be committed using conventional commits format:

      Format: <type>(<scope>): <description>

      Types:
      - feat: A new feature
      - fix: A bug fix
      - docs: Documentation only changes
      - style: Changes that do not affect the meaning of the code
      - refactor: A code change that neither fixes a bug nor adds a feature
      - perf: A code change that improves performance
      - test: Adding missing tests or correcting existing tests
      - chore: Changes to the build process or auxiliary tools

      The scope should be derived from the file path or affected component.
      The description should be clear and concise, written in imperative mood.

examples:
  - input: |
      # After adding a new function
      CHANGE_DESCRIPTION="add user authentication function"
      FILE="src/auth/login.ts"
    output: "feat(src-auth): add user authentication function"

  - input: |
      # After fixing a bug
      CHANGE_DESCRIPTION="fix incorrect date parsing"
      FILE="lib/utils/date.js"
    output: "fix(lib-utils): fix incorrect date parsing"

metadata:
  priority: high
  version: 1.0
</rule>


<!--
 Copyright (c) 2025 Geoffrey Huntley <[email protected]>. All rights reserved.
 SPDX-License-Identifier: Proprietary
-->

Over the last 8 hours, I've built up a pretty big "stdlib" which has taught Cursor about my codebase, and I'm hitting successful outcomes/jackpots at an ever-increasing rate.

you can program a better outcome

Mr 10 was sitting next to me the entire time, glued to the screen, whilst I took my explanations to him and dumped them 1:1 directly into Cursor. He explained it best as follows:

Dad, it's like you are teaching it how to build and ride a bike. First you are describing what pedals are, what their purpose is and how to install them onto a bike. When the AI attempts to screw the pedals in clockwise, you are correcting it to screw counter-clockwise so that you'll never have to do that again. Eventually you'll have a fully functioning bike that can assemble another bike by itself and then that bike can be used to make a Ferrari...
- my son

Here's what I've shipped:

I'm inches away from being able to compose high-level requirements to automatically build a website, configure DNS and automatically deploy infrastructure now that I've been able to rig Cursor to bring in a jackpot every time I pull the lever. Lego piece by Lego piece I'm going up levels of abstraction and solving classes of problems forever. A moment where I can unleash 1000 concurrent cursors/autonomous agents on a backlog is not too far off...

if you think the above bullet-list is not impressive - perhaps you are missing the bigger picture? The foundational models are getting better every day, and the future is developer tooling departing from what we have today - the IDE - towards reviewing PRs from 1000 agents that are autonomously smashing out the backlog, in parallel.

Hope these insights help you steer clear of the incoming ngmi that's about to rip through our industry. All the rules in this blog-post and in my stdlib were authored by Cursor itself and when it got something wrong, I asked it to update the stdlib with lessons learned.

The End of Programming as We Know It

Go forward and build your stdlib - brick, by brick!

p.s. socials @ https://x.com/GeoffreyHuntley/status/1888296890552447320

What do I mean by some software devs are "ngmi"?

2025-02-07 17:17:06


At "an oh fuck moment in time" I closed off the post with the following quote.

N period on from now, software engineers who haven't adopted or started exploring software assistants, are frankly not gonna make it. Engineering organizations right now are split between employees who have had that "oh fuck" moment, are leaning into software assistants and those who have not.

People have asked me to expand. So let's do it.

This is a tale about seven fruits. Seven fruits who work at a company, and how the shifts AI is driving in the software industry will affect all companies and their employees.

It really doesn't matter which company, as the productivity benefits delivered by LLMs and agentic software development are going to rapidly change employee performance dynamics within all companies - all at once. AI has been commoditized and can be purchased with a credit card.

At this company we have seven software developers, and the company does six-monthly employee performance review cycles.


Nearly all corporate companies use a vitality curve. Some companies force the distribution, others do not. Performance season closes and here are the results, represented as a tier-list (rather than as a bell-curve - for the purposes of storytelling the distribution doesn't matter).


Lemon doesn't make it, well, because they are a lemon.


Orange and Strawberry are shocked to receive such a low employee rating.

Motivated to learn, grow and determined to find an edge they stumble upon new techniques that enable them to do 16x the work output of everyone else - LLMs and AI powered Code Editors.

Apple and Grape - who historically have been top performers at the company - are well aware of these tools but have rejected them. Being prototypical early adopters of technology, they tried them a year ago, found that they didn't help, and are now stuck in the disbelief chasm.


When Apple and Grape see the new generation of early adopters who utilise the tools singing their praises, they dismiss it as just hype - missing the point that the tools and foundational AI models have improved, are getting better every single day, and that today is different than yesterday.

6 months whizzes by, and at the next performance cycle Apple and Grape are at the bottom of the tier-list whilst Orange is on top. Strawberry has climbed in the rankings.

Through careful skills development Orange has found a way to out-deliver and become even more effective in shipping software features with a low defect rate.


Banana is shocked, takes notice of Orange's and Strawberry's ranking changes and starts to invest in personal development. Orange continues to refine their skills, whilst Apple starts on the journey of going all-in on LLMs.

At the next performance cycle the results are predictable...


Another performance cycle happens. Grape fails to adapt to the changes within the industry and is no longer with the company.


Now the shocking lesson for Grape is, it's not just the company they were working for that has been changing at breakneck speed to adapt to new emerging threats to business models. It's every company – the software industry has changed.

Seemingly overnight, companies are now looking for X years of experience programming LLMs (not just consuming them) - skills that Grape does not possess.

This tale isn't about any particular company, it's about the raging AI bull that's been unleashed which enables anyone, to do anything.

You may not believe me, but I'll leave you with a ponderoo...

Over the last month I've caught up with founders who have already been implementing the above for over a year now. The founders didn't intend to slash their workforce, it just happened naturally - people self-selected out of the new AI+tools-first direction - which works for now, but the places to turn to will soon diminish, fast.

Founders have discovered that the tools do indeed work and that AI enables a path to do more, faster, with less. I strongly recommend developers get out of the bubble of speaking with other developers and instead start having discussions with technical founders...

https://www.reddit.com/r/ChatGPTCoding/comments/1h26x0k/team_transitioned_to_cursor_but_bottleneck_is_now/

The moral of the story? Don't fall asleep at the wheel on this one. Invest in yourself - if you're a high-agency person, there's never been a better time to be alive...

I suspect there's not going to be mass layoffs of software developers at companies due to AI; instead what we will see is a natural attrition between those who invest in themselves right now and those who do not.

ps. socials @ https://x.com/GeoffreyHuntley/status/1887793735129563475

The future belongs to idea guys who can just do things

2025-02-07 02:48:26


There, I said it. I seriously can't see a path forward where the majority of software engineers are doing artisanal hand-crafted commits by as soon as the end of 2026. If you are a software engineer and were considering taking a gap year/holiday this year it would be an incredibly bad decision/time to do it.

I'm no longer hiring junior or even mid-level software engineers.

Our tokens per codebase: Gumroad: 2M, Flexile: 800K, Helper: 500K, Iffy: 200K, Shortest: 100K.

Both Claude 3.5 Sonnet and o3-mini have context windows of 200K tokens, meaning they can now write 100% of our Iffy and Shortest code if prompted well.

It won’t be long until AI will be writing all the code for Helper, Flexile, and Gumroad.

Our new process:

  1. Sit and chat about what we need to build, doing research with Deep Research as we go.
  2. Have AI record everything and turn it into a spec.
  3. Clean up the spec, adding any design requirements / other nuances.
  4. Have Devin code it up.
  5. QA, merge, (auto-)deploy to prod.
- Sahil Lavingia (Founder of Gumroad)

It's been a good 43 years of software development where everything has remained roughly the same, but it's time to go up another layer of abstraction as we have in the past - from hand-rolling assembler, to higher-level compilers, to programming LLMs. It's now critical for engineers to embrace these new tools and for companies to accelerate their employees' "time to oh-fuck" moment.

Companies, look at the roadmap you have carefully developed and consider throwing out the parts that no longer make sense. Start motions towards up-skilling how your employees think. There are going to be a lot of scared people out there right now within the ranks. It's critical to steer people past the emotional "deer in the headlights" phase of "oh shit, do I have a job in the future?".

the people stages of AI adoption

  • detraction/cope/disbelief - "it's not good enough/provide me with proof that AI isn't hype"
  • experimental usage with LLMs
  • deer in headlights/worry after discovering more and more things that it IS good at - "will I have a job? AI is going to take my job. The company is going to replace me with AI"
  • concern/alarm/we need to bin our planning - "everything else we are doing right now feels just so inconsequential"
  • engaged, consuming AI and starting to build using LLMs (ie. using Cursor) and evolving their thinking, trying new approaches. Realising the areas where it is not currently great and learning how to get the right outcomes.
  • engaged, realization that you can program the LLMs and doing it.

what companies can do

People are cut from two different cloths: those who have failed, failed, failed and ground their way to becoming an entrepreneur, and those who have chosen the life of a steady, predictable pay cheque. Employees are at an established company because they made a decision in life for stability. Embrace that; continue to provide stability (currency for employees) in these uncertain times, as it will be key to steadying the ship and retaining talent.

Do anything you can to accelerate your employees' "time to oh-fuck" moment. Consider adjusting personal expense policies and enabling employees to personally expense LLM/AI tools - even if they are used at home.

Develop internal training that quickly shifts minds from "using LLMs as a Google replacement" to "here's how you can program LLMs to automate tasks with accuracy"

Noise, fuckery and politics. It has no place. Understand that we are now in a strange new world with these LLMs and it's going to create a brand new category of cultural problems within the hyper-competitive software profession:

  • When someone figures out an 'edge', will co-workers in corporate share it freely or will they keep it to themselves so that they can game systems for "impact" (ie. perf reviews)?
  • Will software development become like business development (sales) where edges are closely guarded secrets?
  • Will software development become less of a "team sport/activity"?

Create space, time and slack for people to experiment at work with these LLMs. I'm fortunate enough that my kiddos are now much older and thus the pace that I can put into personal learning/development is much higher than it would have been if they were younger. If your company's employee age is trending younger (ie. folks have only just started pushing out babies) then you'll need to create space for them to be able to grow on the job.

Ultimately, personal growth is the responsibility of the individual and a company can (or should) only do so much, but if a company can support employees to increase their number of lotto tickets, it should.

It's time for a hackathon (during business hours) once a month, every month.

on the tech front here's what I've learned so far

The current generation of autonomous agents work by brute force. It's imperative that investments are made into anything that can be used as a "reinforcement" technique:

  • The stronger the type system of the language and the better the compiler errors, the harder you can drive these LLMs. If the compiler provides strong guarantees that compiling = success, the harder the agent can go.
  • Tests. A failing test is a reinforcement to the LLM when they make changes that they have potentially taken a wrong path. Teams and code-bases where test coverage is high are the best positioned to capitalize on LLMs.
  • Cycle times.
    • Compilation times matter and your test suite needs to run fast. The faster the inner loop, the faster you can provide reinforcement to the LLM and reward it with a pass or fail (see the sketch after this list).
    • Your developer environment time-to-onboard for humans should be as close to zero as possible and completely automated.
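
Concretely, the "inner loop as reinforcement" idea can be as dumb as the following sketch: after every agent edit, run the compiler and the tests and hand the raw diagnostics back to the model. The cargo commands here assume a Rust project - substitute your own build and test commands:

```rust
use std::process::Command;

/// Returns Ok(()) when the build and tests pass, otherwise the combined
/// diagnostics that should be fed back into the agent's context window.
fn reinforcement_signal() -> Result<(), String> {
    for args in [["check", "--message-format=short"], ["test", "--quiet"]] {
        let out = Command::new("cargo")
            .args(args)
            .output()
            .map_err(|e| e.to_string())?;
        if !out.status.success() {
            return Err(format!(
                "cargo {} failed:\n{}{}",
                args[0],
                String::from_utf8_lossy(&out.stdout),
                String::from_utf8_lossy(&out.stderr)
            ));
        }
    }
    Ok(())
}

fn main() {
    match reinforcement_signal() {
        // Green: reward the agent - e.g. commit and move to the next requirement.
        Ok(()) => println!("green: commit and continue"),
        // Red: the diagnostics *are* the reinforcement - paste them back in.
        Err(diagnostics) => println!("red: feed this back to the agent:\n{diagnostics}"),
    }
}
```

The faster that function returns, the more attempts the agent gets per hour - which is why compile times and test-suite speed are now a first-class productivity concern.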

what companies should be aware of

There is a well-known successful company in Australia and, as the tale goes, about 10 years ago an idea was born but the two founders were turned down for funding in the early days because they didn't have a good technical co-founder. They found one and the rest is history. In 2025/2026 it's different. People with ideas + unique insight can get concepts to market in rapid time and be less dependent on needing others' expertise, as the world's knowledge is now in the palms of everyone's hands.

Technologists are still required; perhaps it's the idea guys/gals who should be concerned, as software engineers now have a path to bootstrap a concept in every white collar industry (recruiting, law, finance, accounting, et al) at breakneck speed without having to find co-founders.

what software developers should be aware of

The huge thing that software engineers don't realize right now is – they can program the LLMs and build a "stdlib" that manufactures successful LLM outcomes. Folks are stuck thinking at a primitive level of "what if I had an AI coworker" and haven't, yet, got to the thought of..

no fam, what if you had *1000* AI coworkers that went ham on your entire issue backlog at once
- Anni Betts (Anthropic)

Why write code directly into an IDE when you can program the LLM to replace the need to find co-founders? Currently there is unbounded opportunity available right now for any proficient software engineer to become the next Garry Brewer if they capitalize in this moment of time as people still haven't had their oh fuck moment or are stuck in the "deer in the headlights" phase of doom.

If you’re a high agency person, there’s never been a better time to be alive...

Ya know that old saying, ideas are cheap and execution is everything? Well, it's being flipped on its head by AI. Execution is now cheap. All that matters now is brand, distribution, ideas and retaining people who get it. The entire concept of time and delivery pace is different now.

ps. socials @ https://x.com/GeoffreyHuntley/status/1887575083008598265

Multi Boxing LLMs

2025-01-28 10:35:48


Been doing heaps of thinking about how software is made after https://ghuntley.com/oh-fuck and the current design/UX approach by vendors of software assistants.

IDEs since 1983 have been designed around an experience of a single pane of glass. Restricted by what an engineer can see on their screen, how fast they think and how fast the compile chain can go.

Over the years, this IDE experience has gotten better thanks to the ability to open multiple files at the same time, and we now have multi-monitor workstations which enable engineers to view multiple files concurrently without switching between tabs.

However, it's still a synchronous design. These coding assistants such as Cursor, Windsurf, Cody and GitHub Copilot have all been designed to fit in with pre-existing decisions (baggage), thus I'm starting to think this is a flawed approach.

how work is allocated

Products are the result of a cumulative effort of shipping tasks. In the workforce today, engineers pick a "single" story from the backlog.


With these turbo charged IDEs engineers can ship faster but it's got me pondering the following question...

why does an engineer pick only one story?

Instead of allocating a single story from the backlog, what if it was the engineering norm to pick multiple stories?

I've been playing around with this at home, running multiple coding agents to build stories concurrently, and oh fuck - it reminds me of World of Warcraft multi-boxing. Why level one character when you can do three or four at the same time?


The key to making it work is ensuring these agents don't fight with each other by splitting the work into separate discrete domain units of work within the same code-base or checking out the code-base multiple times.
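
One concrete way to do the "check out the code-base multiple times" variant is git worktrees: each agent gets its own working copy and branch, so they physically cannot trample each other's edits. A rough sketch of the setup (the paths and branch names are arbitrary placeholders):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Give each agent its own branch and working directory via `git worktree`.
    for i in 1..=3 {
        let branch = format!("agent-{i}");
        let path = format!("../worktrees/{branch}");
        let status = Command::new("git")
            .args(["worktree", "add", "-b", branch.as_str(), path.as_str()])
            .status()?;
        println!("created {path}: {status}");
        // Point agent `i` at `path` and hand it its own slice of the backlog.
    }
    Ok(())
}
```

From there, each agent's output comes back as an ordinary branch that can be reviewed and merged like any other.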

next generation IDEs are required

So, I've been wondering what the next generation of IDEs looks like in this post-LLM world. Why do I need to have multiple IDEs open?

What if, instead of being shackled to design inherited from Turbo Pascal in 1983 - where IDEs are centered around humans - we had a fresh take: IDEs designed around software assistants first, humans second.

What if we ripped out the "collaborate with humans" functionality from Zed and inverted it so that instead it's "collaborate with LLMs" and an unbounded number of LLMs could be connected into the IDE.


Even then, perhaps, that's not radical enough of a UX/design overhaul. Maybe we don't even need an IDE? Maybe https://devin.ai/ is onto something...