
Rss preview of Blog of The Practical Developer

Can Claude Code Replace a Junior Developer? I Tested It.

2026-03-13 01:50:58

Junior developer job postings have cratered. Down 30% since 2022. Your company isn't filling fewer entry-level roles by accident; it's automating them. I spent five months with Claude Code as my only coding partner, building a beast of a production pipeline. The results? Uncomfortable. And they revealed a specific kind of AI failure that benchmarks will never show you.

I’m a senior frontend engineer. Twelve years in the trenches, mostly React, Next.js, TypeScript. Production code is my daily grind. Six months ago, I needed a project outside my comfort zone. So I built a full video production pipeline. In Python. A language I didn’t write daily, but one I could navigate.

This wasn't some toy project. We're talking 32 Python files, ten production steps, six services all wired together: Claude, Gemini, Imagen, Google TTS, FFmpeg, and the YouTube API. Claude Code? It was my pair programmer for every line. Imagine your own AI agent, living in your terminal, always ready.

I needed a way to measure its performance. So I built a scorecard. Six dimensions, pulled straight from the rubric I use to evaluate junior engineers in interviews. Each graded A through F. What I found was startling. A specific failure pattern kept repeating. It exposes a hidden cost to AI coding tools that goes way beyond the monthly subscription. Keep that in mind.

The Scorecard: How Claude Stacks Up

Bug Detection & Diagnosis: A-

This was Claude's shining moment. My pipeline had a silent failure: if an image failed to generate, the system just moved on. No error. No log. The final video? Black frames. A producer's nightmare.

I described the symptom to Claude. Ninety seconds later, it traced the root cause. A missing error handler, three functions deep in the image pipeline. A classic.
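The bug class is easy to sketch. Here's a hypothetical Python reconstruction, not the author's actual pipeline code (`generate_image` and the `api_call` parameter are illustrative): the lenient version swallows exceptions and returns nothing, which is exactly what produces black frames downstream.

```python
def generate_image(prompt: str, api_call, strict: bool = True):
    """Run one image-generation step of a hypothetical pipeline.

    strict=False reproduces the bug class described above: any API error
    becomes None, nothing is logged, and the pipeline moves on, leaving
    a black frame in the final video. strict=True surfaces the failure
    with context instead.
    """
    try:
        return api_call(prompt)  # stand-in for the real image API call
    except Exception as exc:
        if strict:
            raise RuntimeError(
                f"image generation failed for prompt {prompt!r}"
            ) from exc
        return None  # the bug: silent failure, no error, no log
```

The missing error handler in the real pipeline corresponds to the `strict=False` branch: the fix is making failure loud at the point where it happens.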

Over the five months, I fed it eleven unique root causes. Unbound variables. Stale cache invalidation. Silent file skips. Claude found every single one. A junior developer might have found half in the same timeframe. I remember one junior telling me, "I spent three days on a bug like that once, just tracing through files line by line." Claude had a structural advantage. It could ingest all 32 files simultaneously. It held the entire project in its digital mind.

The minus on that A? Claude sometimes proposed fixes for bugs that didn't exist. Phantom bugs. Confident diagnoses of problems I never had. It’s like a doctor confidently prescribing treatment for an ailment you don't actually have. Annoying, but easily dismissed.

Multi-File Refactoring: A

This is where Claude is unambiguously better than a junior developer. Maybe better than most senior devs, for that matter.

I asked it to rename a core function across seven files. Update all imports. Fix all references. Maintain backward compatibility. Done. Under two minutes. Zero missed references. It even caught a string interpolation in a log message I'd forgotten about.

Adding shebang lines to a dozen scripts? Flawless. Restructuring imports after a module split? Perfect. Updating config paths after a directory rename? Handled. This kind of mechanical precision across files saves hours. No typos. No forgotten imports. No merge conflicts you have to untangle later. It just gets it done.

Feature Implementation from Scratch: B

I needed a humanized scoring system for scripts. Five domains: AI fingerprint detection, viewer engagement, rhythm analysis, story quality, hook strength. Claude built the entire scoring engine in one session. Weighted rubric. Pattern detection. Fix recommendations. Even a CLI interface to run it standalone.

So why not an A? Because I had to redesign the architecture twice.

You know the drill: the first pass is just too much. Abstract base classes for a simple scoring function. Factory patterns for five categories. A junior developer might write simpler code on the first pass. Not because they're better architects. Because they don't know enough patterns to over-apply them. Sometimes, ignorance is bliss. It saves you headaches.

Same pattern with fact grounding. I asked for a Google Search verification layer. Claude added retry logic, caching, rate limiting, and a progress dashboard. I needed one function. The pattern is consistent: Claude executes well when you give it clear constraints. When you leave architecture decisions open, it defaults to enterprise Java circa 2012.
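For contrast, here is roughly what the "one function" first pass of a weighted scoring engine looks like, the simple version that needs no base classes or factories. Domain names and weights are illustrative, not the author's actual rubric:

```python
# Illustrative weights for the five scoring domains; the real rubric
# and numbers are assumptions, not taken from the article.
WEIGHTS = {
    "ai_fingerprint": 0.25,
    "engagement": 0.25,
    "rhythm": 0.15,
    "story": 0.20,
    "hook": 0.15,
}

def score_script(domain_scores: dict[str, float]) -> float:
    """Weighted average of per-domain scores (each on a 0-100 scale)."""
    missing = WEIGHTS.keys() - domain_scores.keys()
    if missing:
        raise ValueError(f"missing domains: {sorted(missing)}")
    return sum(WEIGHTS[d] * domain_scores[d] for d in WEIGHTS)
```

The constraints (five fixed domains, weights summing to 1) live in one dict, which is the kind of clear constraint that keeps a code generator from reaching for abstract base classes.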

Context Management: C+

Here's where it gets ugly. Claude would lose track of decisions we'd made ten minutes earlier. Mid-session, it would just… forget. You'd say "use the existing function." It would write a new one.

Hallucinated file paths. Referenced functions that didn't exist in my codebase. Confidently suggested imports from packages I never installed. A junior developer does something Claude cannot: they walk to your desk and say, "I'm confused. Can you explain the architecture again?" Claude never admits confusion. It fabricates.

The Claude.md file (a project context file) helps. It's a workaround for a fundamental limitation: the context window is a hard ceiling. In long sessions, I started repeating myself. Restating constraints. Re-explaining architecture. That overhead eats into your productivity gains from bug detection and refactoring. Remember that hidden cost I mentioned earlier? This is it: the cognitive load of constantly auditing AI output is immense.

Code Review Quality: B+

A critical skill. I started using Claude to review its own code. And code I wrote myself. It caught real issues: missing error handlers, race conditions in async functions, unused imports. Good stuff.

But it also flagged things that weren't problems. Suggested adding type hints to a quick one-off script. Recommended docstrings for internal helper functions nobody else would ever read. It wasted my time. The signal-to-noise ratio matters. A junior developer learns what level of review you expect. Claude gives you the same corporate-grade thoroughness whether you're prototyping or shipping. It's thorough to a fault.

Cost Per Productive Hour: A+

The math is brutal. The average junior developer in the United States costs around $73,000 a year. That’s roughly $6,000 a month. Claude Code with a Max subscription? About $200 a month. That's three percent of the junior developer cost. Three percent.

Even if Claude is only half as productive as a junior developer, the economics are stark. You get fifty percent of the output at three percent of the cost. In 2024, McKinsey published a survey in which nearly thirty percent of companies said they expected to hire fewer juniors because of AI efficiency gains. By March 2026, those companies had followed through.

This is the number that keeps hiring managers up at night. Not because AI is better. Because the economics no longer justify the headcount.

The Recurring Failure: Judgment

Let’s go back to that failure pattern. The one that keeps repeating. Benchmarks will never show you this. Claude Opus 4.6 scores 79% on SWE-bench. Impressive on paper, sure. But what does that mean for you?

In my pipeline, Claude kept making the same category of mistake. Not technical bugs. Judgment errors. Decisions that required understanding why the code exists, not just what it does.

I asked it to fix a stale date reference. The script said "2024." We are in 2026. Claude's humanize fix? It reintroduced the exact same stale date. Three times. It passed every validator I built. Word count, shot structure, hook quality. All green. But the content was wrong. Because no validator checks for temporal awareness.

A junior developer reading that script would have caught it instantly. Your human sense of time would have flagged it. Models? They don’t. They live in an eternal present tense, devoid of contextual meaning.

The Verdict: A Senior Developer Multiplier

Final scorecard:

  • Bug Detection: A-
  • Refactoring: A
  • Feature Implementation: B
  • Context Management: C+
  • Code Review: B+
  • Cost: A+

Overall verdict? Claude Code is not a junior developer replacement. It is a senior developer multiplier. And that distinction changes who should be worried.

If you’re a senior engineer who can critically review AI output, Claude makes you three times faster. That's not marketing speak. That's five months of daily measurement talking.

If you’re a junior developer who relies on AI to write code you cannot review, you are building on a foundation you cannot inspect. That is a career risk, not a shortcut. I once sat across from a hiring manager at a Series B startup. She told me they cut their junior cohort from eight to two. Not because of layoffs. Because two seniors with Claude Code shipped more.

Here's the thing: 46% of all code written by active developers now comes from AI. Twenty million developers use AI coding tools daily. These aren't projections anymore. These are current numbers. The question isn't whether Claude Code can replace a junior developer. The question is whether your company still needs as many juniors when seniors are three times more productive.

What This Means For You

If you're early in your career, the path forward is not to avoid AI tools. It's to understand what they get wrong and why. That's the skill gap that matters. Every time Claude wrote code I couldn't evaluate, I was exposed. Not because the code was wrong. Because I had no way to know if it was right.

Academic research shows AI coding tools deliver 26% to 55% productivity improvement. But those gains depend heavily on one variable: whether the developer can actually review the output. Because the multiplier only works if the human in the loop can catch the errors.

The more you know, the more AI accelerates you. The less you know, the more it buries you.

Key Takeaways

  • AI doesn't replace, it multiplies: Claude Code elevates senior developers, making them significantly more productive.
  • The economics are brutal: AI tools offer massive cost savings compared to human hires, driving corporate decisions.
  • Context is AI's Achilles' heel: Models struggle with persistent context, leading to repetitive work and hallucinated details.
  • Judgment is uniquely human: AI fails on tasks requiring contextual understanding or temporal awareness, even when technical execution is perfect.
  • Auditability is the new bottleneck: Productivity gains from AI are capped by a developer's ability to critically review and correct its output.
  • The junior role is evolving: The focus shifts from writing code to auditing machine-generated code and understanding its limitations.

Can Claude Code replace a junior developer? Wrong question. It already changed what a junior developer needs to be. The junior developer of 2026 is not the junior developer of 2022. The ones who survive will be the ones who can audit the machine, not just operate it.

Watch the full video breakdown on YouTube: Can Claude Code Replace a Junior Developer? I Tested It.

The Machine Pulse covers the technology that's rewriting the rules — how AI actually works under the hood, what's hype vs. what's real, and what it means for your career and your future.

Follow @themachinepulse for weekly deep dives into AI, emerging tech, and the future of work.

If you can read the manual, you can change the world

2026-03-13 01:50:07

If you can blink an LED, you can change the world.
But I believe: if you can read the manual, you can change the world.

Back in my college days, I was interested in embedded systems and the Internet of Things, and loved to spend my pocket money on building new things.

I used to run simulations in Proteus, a well-known circuit simulation tool, and then try the design on a real device.

On one occasion, I was building a scrolling-text animation on a dot matrix display using an Arduino. It worked fine in Proteus but failed on the real device.

Dakshim Scrolling Display using Arduino

Frustrated, I turned to my mentor, a seasoned veteran. His advice was simple:
"Dakshim, have you read the display manual?"

Me: But it works fine in Proteus.
Him: Read the manual first.

It turned out to be a common issue with that specific display, one mentioned on the very first page of the manual.

A decade has passed, but the lesson remains: Read the Fabulous Manual.

I Got Tired of Rewriting Macro Boilerplate — So I Built a Template Engine for Proc Macros

2026-03-13 01:49:32

I've been writing proc macros for a while now. Derive macros for internal tools, attribute macros for instrumentation. And every time, the same two problems: quote! doesn't compose (you end up passing TokenStream fragments through five layers of helper functions and writing hundreds of let statements), and debugging generated code means cargo expand and then squinting at unformatted token output hoping something jumps out.

Because of this I ended up writing the same helper methods, composite AST parsing and tokenizing types, extractors etc. I would have to copy these from project to project as needed, and eventually just decided to publish a crate so I never have to do it again.

So I built zyn — a proc macro framework with a template language, composable components, and compile-time diagnostics.

🎯 Goals

  1. Template syntax that supports expressions, looping, composition of reusable custom elements, and editor syntax highlighting + type safety.
  2. Automated attribute arguments parsing.
  3. Diagnostic pattern that supports more than just hard compiler errors and can emit more than one at a time, linked to the span it originated from. Ideally with editor integration.
  4. Extensions for syn AST types to make querying the parsed AST easier.
  5. Testing features like debug and assertion macros so I don't have to use cargo expand or stringify token streams and make fuzzy assertions.
  6. Comparable performance to using syn + quote (benchmarks)

🔨 Building a Builder

I'm going to build a #[derive(Builder)] macro with it, start to finish. The whole thing comes out to about 60 lines.

Features

cargo add zyn

What we want the user to write:

source | docs

#[derive(Builder)]
struct Config {
    host: String,
    port: u16,
    #[builder(default)]
    verbose: bool,
    #[builder(default_value = "30")]
    timeout: i64,
}

ℹ️ A struct annotated with #[derive(Builder)]. Fields marked #[builder(default)] use Default::default() when omitted, and #[builder(default_value = "...")] uses a custom expression.

And what we want to generate:

struct ConfigBuilder {
    host: Option<String>,
    port: Option<u16>,
    verbose: Option<bool>,
    timeout: Option<i64>,
}

impl ConfigBuilder {
    fn host(mut self, value: String) -> Self {
        self.host = Some(value);
        self
    }
    // ... setters for each field ...

    fn build(self) -> Config {
        Config {
            host: self.host.expect("field `host` is required"),
            port: self.port.expect("field `port` is required"),
            verbose: self.verbose.unwrap_or_default(),
            timeout: self.timeout.unwrap_or_else(|| 30),
        }
    }
}

impl Config {
    fn builder() -> ConfigBuilder {
        ConfigBuilder {
            host: None,
            port: None,
            verbose: None,
            timeout: None,
        }
    }
}

With raw quote!, this gets messy fast — nested iterations, conditional logic for defaults, splicing field names and types everywhere.

🏷️ Typed Attribute Parsing

First, parsing #[builder(default)] and #[builder(default = expr)]. Doing this by hand means a syn::parse::Parse impl, handling every variant, producing decent errors. With zyn:

source | docs

#[derive(zyn::Attribute)]
#[zyn("builder")]
struct BuilderConfig {
    #[zyn(default)]
    skip: bool,
    #[zyn(default)]
    default: bool,
    default_value: Option<String>,
}

ℹ️ Declares a typed config for #[builder(...)] attributes. #[zyn("builder")] sets the attribute name to match. #[zyn(default)] fields default to false/None when omitted. zyn generates from_args() and from_input() parsing methods automatically.

That generates from_args() and from_input() methods. We add a convenience from_field that extracts from a field's attributes using the ext feature:

source | docs

use zyn::ext::AttrExt;

impl BuilderConfig {
    fn from_field(field: &zyn::syn::Field) -> Self {
        let attr = field.attrs.iter().find(|a| a.is("builder"));

        match attr {
            Some(attr) => {
                let args = attr.args().unwrap();
                Self::from_args(&args).unwrap()
            }
            None => Self {
                skip: false,
                default: false,
                default_value: None,
            },
        }
    }
}

ℹ️ A convenience method that extracts BuilderConfig from a field's attributes. AttrExt::is() finds the #[builder(...)] attribute, args() parses its arguments, and from_args() maps them into the typed struct. Returns defaults when no attribute is present.

Typo suggestions come free 💡:

error: unknown argument `skiip`
  |
5 | #[builder(skiip)]
  |           ^^^^^
  |
  = help: did you mean `skip`?

Levenshtein distance. Your users get did you mean skip? instead of unexpected token.

🧩 Composable Elements

Instead of one giant quote! block, you break the macro into elements — reusable template components with typed props.

source | docs

#[zyn::element]
fn setter(
    name: zyn::syn::Ident,
    ty: zyn::syn::Type,
) -> zyn::TokenStream {
    zyn::zyn! {
        fn {{ name }}(mut self, value: {{ ty }}) -> Self {
            self.{{ name }} = Some(value);
            self
        }
    }
}

ℹ️ An element that generates a builder setter method. Takes a field name and type as props, produces a method that sets the corresponding Option field and returns self for chaining.

If you wanted methods like with_host instead of host, pipes handle it inline: {{ name | ident:"with_{}" }}. They compose — {{ name | upper | ident:"SET_{}" }} would produce SET_HOST from host.

The build method, where defaults come in:

source | docs

#[zyn::element]
fn build_field(
    name: zyn::syn::Ident,
    config: BuilderConfig,
) -> zyn::TokenStream {
    let name_str = name.to_string();

    if config.default {
        zyn::zyn!({{ name }}: self.{{ name }}.unwrap_or_default())
    } else if let Some(ref expr) = config.default_value {
        let default_expr: zyn::syn::Expr = zyn::syn::parse_str(expr).unwrap();
        zyn::zyn!({{ name }}: self.{{ name }}.unwrap_or_else(|| {{ default_expr }}))
    } else {
        zyn::zyn!({{ name }}: self.{{ name }}.expect(
            ::std::concat!("field `", {{ name_str | str }}, "` is required")
        ))
    }
}

ℹ️ Generates a single field assignment inside build(). Uses unwrap_or_default() for #[builder(default)], unwrap_or_else(|| expr) for #[builder(default_value)], and panics with a descriptive message for required fields.

The setter doesn't care about defaults — that's build_field's job.

⚙️ The Derive Entry Point

The derive uses extractors — typed parameters that zyn resolves from the macro input automatically:

source | docs

#[zyn::derive("Builder", attributes(builder))]
fn builder(
    #[zyn(input)] ident: zyn::syn::Ident,
    #[zyn(input)] fields: zyn::Fields<zyn::syn::FieldsNamed>,
) -> zyn::TokenStream {

    zyn::zyn! {
        struct {{ ident | ident:"{}Builder" }} {
            @for (field in fields.named.iter()) {
                {{ field.ident }}: Option<{{ field.ty }}>,
            }
        }

        impl {{ ident | ident:"{}Builder" }} {
            @for (field in fields.named.iter()) {
                @setter(
                    name = field.ident.clone().unwrap(),
                    ty = field.ty.clone(),
                )
            }

            fn build(self) -> {{ ident }} {
                {{ ident }} {
                    @for (field in fields.named.iter()) {
                        @build_field(
                            name = field.ident.clone().unwrap(),
                            config = BuilderConfig::from_field(field),
                        ),
                    }
                }
            }
        }

        impl {{ ident }} {
            fn builder() -> {{ ident | ident:"{}Builder" }} {
                {{ ident | ident:"{}Builder" }} {
                    @for (field in fields.named.iter()) {
                        {{ field.ident }}: None,
                    }
                }
            }
        }
    }
}

ℹ️ The derive entry point. Extractors (#[zyn(input)]) resolve ident and fields from the derive input automatically. The template generates a FooBuilder struct with Option-wrapped fields, setter methods via @setter, a build() method that unwraps each field via @build_field, and a Foo::builder() constructor.

Parameters marked #[zyn(input)] are extractors — ident gets resolved from the derive input automatically, Fields<FieldsNamed> pulls the named fields. If someone puts #[derive(Builder)] on an enum, zyn emits a compile error automatically.

The @for loops iterate fields. The @setter and @build_field calls compose the pieces. The template reads top-to-bottom as one block, no splicing iterator chains back together like you would with quote!.

🩺 Diagnostics

Standard proc macros bail on the first error. Fix, recompile, hit the next one.

zyn accumulates them. Add some validation to the builder:

source | docs

for field in fields.named.iter() {
    let config = BuilderConfig::from_field(field);

    if config.skip && config.default {
        error!(
            "`skip` and `default` are mutually exclusive on field `{}`",
            field.ident.as_ref().unwrap();
            span = field.ident.as_ref().unwrap().span()
        );
    }

    if config.skip && config.default_value.is_some() {
        warn!(
            "`default_value` is ignored when `skip` is set";
            span = field.ident.as_ref().unwrap().span()
        );
    }
}

// stop here if any errors accumulated, otherwise continue to codegen
bail!();

ℹ️ Validates field configurations before codegen. error! and warn! accumulate diagnostics with span information instead of panicking on the first problem. bail!() stops compilation only if errors were recorded — warnings pass through.

error!, warn!, note!, help! are injected into every #[zyn::derive], #[zyn::element], and #[zyn::attribute] body. bail!() with no arguments checks if any errors were accumulated and returns early — but only if there are errors. Warnings pass through.

Users see every problem in one compile pass. ✅

🔍 Debugging

I wrote the debug system after spending two days on a bug where a generated impl block was missing a lifetime bound. cargo expand spat out 400 lines of tokens and I couldn't find it.

Add debug = "pretty" to any element, derive, or attribute macro:

docs

#[zyn::element(debug = "pretty")]
fn setter(name: zyn::syn::Ident, ty: zyn::syn::Type) -> zyn::TokenStream {
    // ...
}

ZYN_DEBUG="Setter" cargo build

ℹ️ Opts the element into debug output. debug = "pretty" formats the generated code through prettyplease. The ZYN_DEBUG env var controls which macros emit output — wildcard patterns like "*" match everything.

debug pretty

Generated code shows up as a compiler note — in your terminal, in your IDE's Problems panel. pretty runs it through prettyplease so you get formatted Rust instead of token soup. Wildcard patterns work: ZYN_DEBUG="*" dumps everything.

🧪 Testing

zyn's test module gives you assertion macros that compare token streams structurally. Here's how we test the setter element from the builder:

source | docs

use zyn::quote::quote;

#[zyn::element(debug = "pretty")]
fn setter(name: zyn::syn::Ident, ty: zyn::syn::Type) -> zyn::TokenStream {
    zyn::zyn! {
        fn {{ name }}(mut self, value: {{ ty }}) -> Self {
            self.{{ name }} = Some(value);
            self
        }
    }
}

#[test]
fn setter_generates_expected_signature() {
    let input: zyn::Input = zyn::parse!("struct Foo;").unwrap();
    let output = zyn::zyn!(
        @setter(
            name = zyn::format_ident!("port"),
            ty = zyn::syn::parse_str::<zyn::syn::Type>("u16").unwrap(),
        )
    );
    let expected = quote! {
        fn port(mut self, value: u16) -> Self {
            self.port = Some(value);
            self
        }
    };

    zyn::assert_tokens!(output, expected);
}

#[test]
fn setter_pretty_output() {
    let input: zyn::Input = zyn::parse!("struct Foo;").unwrap();
    let output = zyn::zyn!(
        @setter(
            name = zyn::format_ident!("host"),
            ty = zyn::syn::parse_str::<zyn::syn::Type>("String").unwrap(),
        )
    );

    zyn::assert_tokens_contain_pretty!(output, "fn host(mut self, value: String) -> Self");
}

ℹ️ Tests for the setter element. assert_tokens! compares token streams structurally — no whitespace sensitivity. assert_tokens_contain_pretty! does substring matching on formatted output for readable assertions.

assert_tokens! compares structurally — no to_string() comparisons that break on whitespace. assert_tokens_contain! does substring matching on the cleaned output. assert_tokens_contain_pretty! (behind the pretty feature) gives you human-readable diffs when things fail.

⚡ Performance

Benchmarks are run via CI on push and also on a schedule.

The full pipeline (parse → extract → codegen) compared to equivalent hand-written syn + quote!:

Full Pipeline - Bencher

more benchmarks.

🚀 Try It

cargo add zyn

There are also extension traits behind the ext feature for common syn operations — field.is_option(), attr.exists("builder"), keyed field access. They save some repetitive syn traversal.

The getting started guide walks through everything. The API docs cover every type and trait. The full builder example from this post is in the repo with tests.

I built zyn because quote! was making me miserable. It's not done — there are rough edges around macro hygiene in some edge cases — but it's how I write every proc macro now.

🔗 GitHub | crates.io | Docs

The Anatomy of a Smart Contract Audit: What Auditors Look For

2026-03-13 01:48:50

The Anatomy of a Smart Contract Audit: What Auditors Look For

cover

In February 2022, a single signature-verification bug in Wormhole's token bridge drained $325 million in wrapped Ethereum.1 The code was audited twice. The vulnerability existed in plain sight: a lack of proper state validation that allowed an attacker to forge signatures and drain the vault. This wasn't a novel zero-day. It was Protocol 101 stuff, executed poorly.

If you're about to launch a smart contract and thinking an audit is just a rubber stamp—or worse, that it's optional—this article is your wake-up call.

What Auditors Actually Hunt For

Auditors look for four categories of bugs: access control failures and reentrancy, arithmetic errors and overflow/underflow, state management issues and improper validation, and cryptographic and signature vulnerabilities. Most audits take 2–6 weeks and cost $10k–$500k+. They still miss edge cases. Assume your code is broken until proven otherwise.

How an Audit Actually Works

A competent smart contract audit doesn't happen in a weekend. It's layered, methodical, and often infuriatingly slow (from a developer perspective).

First comes automated tooling. Auditors start with static analysis and fuzzing—Slither and Mythril for static checks, Echidna for property-based fuzzing.2 These run in minutes and catch reentrancy patterns, unprotected delegatecall, integer arithmetic issues, missing zero-address checks, and visibility problems. Automated tooling catches maybe 40% of real vulnerabilities in audited code.

Then a human reads your code. Usually several humans, if you're paying real money. They're not trying to understand what you meant to do. They're trying to break what you actually did. This is where the Wormhole bug lived:

// DO NOT DO THIS (simplified version of Wormhole's actual bug)
mapping(address => bool) initialized;

function initialize(address guardian) external {
    require(!initialized[guardian], "already initialized");
    initialized[guardian] = true;
    // Store guardian, set up state...
}

// Problem: No signature verification. An attacker could call
// initialize() with ANY guardian address, forging a "legitimate" setup.

The correct approach requires proper state validation:

// DO THIS
bool private initialized;
address private guardian;

function initialize(address _guardian, bytes calldata signature) external {
    require(!initialized, "already initialized");

    // Verify the signature actually came from someone authorized
    bytes32 digest = keccak256(abi.encodePacked(_guardian));
    address signer = recoverSigner(digest, signature);
    require(signer == DEPLOYER, "invalid signature");

    initialized = true;
    guardian = _guardian;
}

One assumes the caller is honest. The other doesn't.

Finally, they threat-model your contract. Auditors build mental models of how your contract will be used—and abused. What happens if this function is called during a reentrancy attack? Can I flash-loan my way into the vault? What if this external call fails silently? Can I exploit the order of operations in a transaction? This is where experience matters. A junior auditor might miss the fact that your ERC-20 transfer relies on the token contract not being malicious. (Spoiler: it can be.)

The Big Four Bug Categories

After dozens of audits across DeFi, NFT protocols, and bridge systems, I've found that ~80% of real vulnerabilities fall into four buckets.

Access Control Failures. Your contract probably has owner functions. Do they actually check who's calling?

// DO NOT DO THIS
function withdrawAll() external {
    // "Only the owner should call this"
    // But we never actually check...
    uint256 balance = address(this).balance;
    payable(msg.sender).transfer(balance);
}

// DO THIS
function withdrawAll() external onlyOwner {
    uint256 balance = address(this).balance;
    payable(owner).transfer(balance);
}

Bonus points if your access control is so tangled that even you can't remember who can call what. (I've seen this exact situation in three separate audits.)

Reentrancy and Call Ordering. The classic. An attacker recursively calls your contract before a state update completes. This is why Checks-Effects-Interactions (CEI) matters:

// DO NOT DO THIS
function withdraw(uint256 amount) external {
    (bool success, ) = msg.sender.call{value: amount}("");  // EXTERNAL CALL FIRST (wrong!)
    require(success);
    balances[msg.sender] -= amount;  // STATE CHANGE AFTER the call: reentrancy window
}

// DO THIS
function withdraw(uint256 amount) external {
    require(balances[msg.sender] >= amount);  // CHECK
    balances[msg.sender] -= amount;  // EFFECT
    (bool success, ) = msg.sender.call{value: amount}("");  // INTERACTION
    require(success);
}

Arithmetic Errors. Even with Solidity 0.8+ (which has overflow protection by default), you can still mess this up:

// DO NOT DO THIS
uint8 count = 255;
unchecked { count++; }  // Now it's 0. Whoops.

// DO THIS
// Use appropriate types and document why you're opting out of compiler checks.
uint256 count = 255;
count++;  // Protected by compiler unless you explicitly opt out.

Cryptographic and Signature Issues. This is where protocols like Wormhole stumbled. Signature verification is hard, and mistakes are expensive. Watch out for signature malleability (v/r/s can be flipped), missing nonce checks (replay attacks), incorrect hash construction (collision risks), and using ecrecover() without validating the return value.

Why Audits Aren't Magic

A good audit costs $50k–$200k+ and takes 4–8 weeks.3 A great one costs $300k+. Even then, it's not insurance. It's a probabilistic reduction in risk.

Some of the worst exploits happen in audited contracts. Not because auditors are incompetent, but because auditors work within scope boundaries, economic incentives change post-audit, and complex interactions with other protocols aren't always foreseeable.

The question isn't "Will an audit catch everything?" It's "Are the remaining risks acceptable?"

Pre-Audit Checklist

Run Slither first and fix the obvious stuff. Have internal review rounds—you know your protocol better than anyone else will. Write tests, lots of them, including fuzz tests. Get someone who didn't write the code to read it with fresh eyes. Do these things and you'll look like you take security seriously. Skip them and you'll look like Wormhole.

  1. Wormhole bridge exploit (February 2022). The vulnerability was in the token bridge code, allowing signature forgery. Lesson: even "audited" contracts can have critical flaws.

  2. Trail of Bits maintains Slither; Mythril is maintained by Consensys. Both are free, both are useful, neither is perfect.

  3. Based on market rates in 2023–2024 for reputable firms. Faster audits = higher risk that things were missed.

I Stopped Posting on Twitter for 2 Months

2026-03-13 01:45:40


I stopped posting on Twitter for two months. Not a planned break, not a "digital detox," not a strategic rebranding pause. I just... forgot. This is what actually happened when I disappeared from X (Twitter) for October and November 2025, and what I learned about taking breaks you didn't mean to take.

How I stopped posting on Twitter for 2 months without meaning to

September 6 was my last post before the gap. A week later I tweeted "staying away from X for a few days, wonder if it ruins reach" and then proceeded to vanish for two full months instead of a few days.

I wasn't planning that. There was no decision point where I said "I'm taking a break." I was building. Seven apps in parallel, deep in agent architecture, ADHD hyperfocus locked in. Twitter stopped feeling like a place I existed in. Not because I was boycotting it or burned out on the discourse. I just got absorbed and the habit broke.

That's the honest version. Not a detox story. I didn't meditate more. I didn't reclaim my attention span through discipline. I got pulled into something more interesting and social media fell off naturally. That's how ADHD actually works - when something grabs you hard enough, everything else gets crowded out.

The gap ran October through November. I came back January 26, 2026 with one post: "I'm back because of Clawdbot meta." No apology, no "I've been doing some reflection," no thread about what I learned. Just back.

What actually happened to reach when I stopped posting on Twitter

Short answer: yes, disappearing kills your numbers. I came back to impressions that were noticeably down from where they were in September. The algorithm punishes inconsistency in ways that are both predictable and annoying.

Here's what surprised me though - my follower count barely moved. The people who followed me for real reasons didn't unfollow during a two-month gap. They just... waited. Or forgot I existed but kept the follow anyway, which is functionally the same thing.

What did drop off were the engagement-farmers. The follow-back accounts, the people following hoping for a mutual, the ones gaming numbers. When I stopped posting, I stopped being useful to them. They left. Good.

So the actual damage from a two-month absence was:

  • Impressions down significantly on return
  • Algorithmic reach basically reset
  • Genuine followers intact
  • Low-quality followers gone

That's not catastrophic. It's annoying if you're trying to grow on a consistent curve, but it's not irreversible. Coming back with something real to say matters more than the reach penalty.

The platform rewards consistency, but it doesn't erase you for breaks. It's not that vindictive. It just forgets you for a while and you have to re-earn distribution. Which, to be clear, still sucks. But it's survivable.

Research from the Reuters Institute Digital News Report backs this up - audiences don't actively track individual creator absences the way creators fear they do. People are mostly watching the feed, not waiting for you specifically.

Building in public culture and why it makes breaks feel worse than they are

There's a specific kind of pressure that comes with building in public. The implicit rule is you have to be visible. Posting daily "day 47 of building X" content. Sharing every milestone. Being present enough that people remember you exist.


Miss a week and you feel like you're falling behind. Miss a month and it feels like career death. I'm not immune to this feeling. I've been building in public for long enough to have internalized the assumption that visibility = momentum.

But the two months proved that assumption wrong in at least one direction.

I built more in those two silent months than in the three months of posting before them. Seven apps. Real infrastructure. Clawdbot, which ended up being the thing I came back to announce. When you're not performing the work, you're just doing it. Turns out those are different modes.

The building in public model optimizes for consistency of output, not quality of output. There's value in that - accountability, community, people following along with the journey. But it can also turn into a content treadmill where the posting becomes the thing instead of the building.

I'm not saying building in public is bad. I'll keep doing it. But I'm now aware that the ADHD hyperfocus mode where I forget Twitter exists and just build for two months is also a valid mode. Maybe more productive in certain phases.

The ADHD angle: forgetting to post is not a failure

Everyone talking about "intentional breaks" from social media has the same energy: "I realized I needed to step back and prioritize my wellbeing." Very deliberate. Very curated. Very LinkedIn.

That wasn't this.

I didn't decide to take a break. The habit just... dissolved. I was on a 2am coding session in early October, deep in something, and the thought of tweeting about it didn't occur to me. Same the next night. By week two the pattern was just gone.

This is textbook ADHD. The executive function overhead of maintaining a social media posting habit - opening the app, forming a thought worth sharing, hitting post, checking the response - that whole loop requires a certain baseline attention budget. When something captures all of it, the habits that depend on leftover attention just stop running.

The ADHD and executive function research out of Harvard Medical School explains this pretty well - executive function isn't a moral failing, it's a resource allocation problem. When the resource is fully committed elsewhere, discretionary habits are the first to drop.

I've made peace with this. It's not undisciplined, it's just how my brain allocates. The flip side of forgetting to post for two months is also why I can build seven apps in parallel while maintaining an agent system that runs 14 cron jobs. Same mechanism, different outputs.

If you have ADHD and you've done this - gone silent for weeks because you were deep in something - it's not a failure mode. It's just the cost of the hyperfocus that also lets you ship faster than most people.

The trick is not building your brand strategy on a foundation that requires daily consistency you won't reliably deliver. Build it on depth instead. One good post after two silent months is worth more than sixty filler posts.

What "coming back" actually looks like

January 26. "I'm back because of Clawdbot meta."


That was the whole post. No explanation, no recap thread, no "here's what I learned while I was away" (except this post, I guess). Just the most direct possible signal: I exist, I have a reason to be back, here's the thing.

This felt right. The alternative - the big return post with the reflective thread - felt like it was performing a story instead of just getting back to work. The people who care will engage with the work. The people who need a narrative about why you were gone aren't really your audience anyway.

I did get some "welcome back" responses. More than I expected, honestly. A few people had noticed the gap and were curious what happened. That was kind of nice - it meant the previous presence had registered as real enough that the absence was notable.

But the bigger signal was that nobody was mad. Nobody had been waiting with a timer. The internet doesn't work that way. People move on, the feed keeps moving, and when you come back with something worth seeing, you get traction again.

What I'd tell someone about to take (or accidentally start) a social media break

Not going to frame this as advice, because I didn't plan any of it. But if you're reading this because you've already been gone for a month and you're wondering if you've tanked your presence - you probably haven't.

A few things that are actually true from experience:

Genuine followers don't leave during a two-month absence. The people who followed you for real reasons are still there. The follower count number that matters is quality, not quantity, and quiet people who actually care about your work have more patience than the algorithm does.

Your reach will take a hit and that's fine. You'll rebuild it. Reach is a lagging indicator of consistency, and consistency can be rebuilt faster than you think when you come back with something real. I came back with Clawdbot. That gave me actual things to say.

The building you do during the silence compounds. Those two months produced more than the three months of documented-daily-grind before them. There's something to that. Not every phase of building should be public. Some of it needs to be quiet.

The "building in public" pressure is real and mostly self-imposed. The audience you're building in public for is smaller and more patient than the anxiety makes it seem. If you're good at what you do and you come back with evidence of it, people remember.

And if you have ADHD and you just disappeared because something grabbed you - that's the mechanism working, not failing. The work you did in the silence is the asset. The posts are just distribution for it.

What stopped posting on Twitter for 2 months actually cost me (and what it didn't)

I've written about how I grew to 500 followers, where you can see the full context of what I work on - design background, fintech, AI engineering, all of it. None of it died during two months off Twitter.

The projects I was building kept building. The systems kept running. The professional relationships that matter don't live on Twitter anyway - they live in Discord servers, in direct messages, in shipped product people can actually use.

Twitter is distribution. It's a real tool and a decent one for this type of work. But it's not the substrate. The work is the substrate.

The two months off X taught me that more than anything. When the posting stopped, nothing important stopped with it. The important stuff was already running somewhere else - in the codebase, in the agent system, in the products actually getting built.

Coming back felt like turning a tool back on. Not like returning from exile.

That's the right relationship to have with it. Check what I was building instead to see what came out of those two silent months. And if you want context on how Twitter's algorithm actually handles inactive accounts, X's own creator documentation doesn't spell it out cleanly - but the pattern from third-party analyses is consistent: reach drops fast after ~2 weeks of silence, then stabilizes, then rebuilds within a few weeks of returning.

The algorithm is back to punishing me for the gap. I'm fine with it. The seven apps I built during the silence are more valuable than consistent impressions metrics would've been. Trade you'd make again without thinking about it.

UQL v0.3: the first TypeScript ORM with native Vector Search

2026-03-13 01:40:15

AI-powered search is everywhere, but if you use an ORM, you've probably hit this wall: the moment you need vector similarity, you're forced to drop down to raw SQL, with hand-written distance expressions and dialect-specific quirks, all outside your type-safe query API.

UQL v0.3 changes that. Semantic search is now a first-class citizen in UQL and works across PostgreSQL, MariaDB, and SQLite through a single, unified interface.


What It Looks Like

const results = await querier.findMany(Article, {
  $select: { id: true, title: true },
  $sort: { embedding: { $vector: queryEmbedding, $distance: 'cosine' } },
  $limit: 10,
});

UQL generates the right SQL for your database:

-- Postgres
SELECT "id", "title" FROM "Article"
ORDER BY "embedding" <=> $1::vector
LIMIT 10
-- MariaDB
SELECT `id`, `title` FROM `Article`
ORDER BY VEC_DISTANCE_COSINE(`embedding`, ?)
LIMIT 10
-- SQLite
SELECT `id`, `title` FROM `Article`
ORDER BY vec_distance_cosine(`embedding`, ?)
LIMIT 10

No raw SQL. No dialect checks. Same query everywhere.

Entity Setup

Define your vector field and index — UQL handles schema generation, extension creation, and index building:

import { Entity, Id, Field, Index } from 'uql-orm';

@Entity()
@Index({ columns: ['embedding'], type: 'hnsw', distance: 'cosine', m: 16, efConstruction: 64 })
export class Article {
  @Id() id?: number;
  @Field() title?: string;

  @Field({ type: 'vector', dimensions: 1536 })
  embedding?: number[];
}

For Postgres, UQL automatically emits CREATE EXTENSION IF NOT EXISTS vector. MariaDB and SQLite just work — no extensions needed.

Key Features

Distance Projection

Project the computed distance into your results with $project — no duplicate computation:

import type { WithDistance } from 'uql-orm';

const results = await querier.findMany(Article, {
  $select: { id: true, title: true },
  $sort: { embedding: { $vector: queryVec, $distance: 'cosine', $project: 'similarity' } },
  $limit: 10,
}) as WithDistance<Article, 'similarity'>[];

results[0].similarity; // autocomplete works ✓

WithDistance<E, K> adds a typed distance property to each result — your IDE autocompletes it, and typos are caught at compile time.
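
Under the hood, a mapped type is enough to model this behavior. The sketch below is an assumption about how `WithDistance` could be defined in plain TypeScript, not a copy of uql-orm's source:

```typescript
// Hypothetical model of WithDistance<E, K>: the entity type plus
// one numeric property whose name is the string literal K.
type WithDistance<E, K extends string> = E & { [P in K]: number };

interface Article {
  id: number;
  title: string;
}

// The compiler now knows `similarity` exists and is a number;
// misspelling it (e.g. `similarty`) is a compile-time error.
const hit: WithDistance<Article, 'similarity'> = {
  id: 1,
  title: 'Vectors 101',
  similarity: 0.12,
};
```

Because the distance key is a string literal type, each call site can pick its own property name without any runtime cost.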

5 Distance Metrics

Metric     Postgres   MariaDB   SQLite
cosine     <=>
l2         <->
inner      <#>
l1         <+>
hamming    <~>
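
For intuition, the cosine metric those operators implement is just one minus the normalized dot product. A minimal plain-TypeScript sketch, with no database involved:

```typescript
// Cosine distance between two equal-length vectors:
// 1 - (a·b) / (|a| * |b|). Identical directions give 0, orthogonal give 1.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineDistance([1, 0], [1, 0]); // same direction → 0
cosineDistance([1, 0], [0, 1]); // orthogonal → 1
```

The database operators compute exactly this, just inside the index so the top-k neighbors come back without scanning every row.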

3 Vector Types

Type         Storage        Use Case
'vector'     32-bit float   Standard embeddings (OpenAI, etc.)
'halfvec'    16-bit float   50% storage savings, near-identical accuracy
'sparsevec'  Sparse         SPLADE, BM25-style sparse retrieval

halfvec and sparsevec are Postgres-only. MariaDB and SQLite transparently map them to their native VECTOR type — your entities work everywhere.

Vector Indexes

Type                  Postgres                    MariaDB
HNSW                  ✅ with m, efConstruction
IVFFlat               ✅ with lists
Native VECTOR INDEX                               ✅

Why $sort?

Vector similarity search is fundamentally sorting by distance. UQL reuses the existing $sort API, which composes naturally with $where, $select, $limit, and regular sort fields:

const results = await querier.findMany(Article, {
  $where: { category: 'science' },
  $sort: { embedding: { $vector: queryVec, $distance: 'cosine' }, title: 'asc' },
  $limit: 10,
});

No new concepts to learn. The query API you already know now handles AI search.

Get Started

npm i uql-orm

We'd love to hear how you're using vector search in your projects — join the Discord and let us know!