2026-04-18 05:06:40
What happens when you can no longer tell, at a glance, whether the person on the other end is real?
That is the premise behind World's Lift Off, which pushed out its biggest World ID protocol upgrade since launch, a dedicated World ID app in public beta, and more than a dozen partnerships spanning Zoom, Tinder, Docusign, Vercel, Okta and others. The headline framing is "full-stack proof of human." The sharper way to read it is as an architectural claim: the internet's trust layer is shifting from verifying devices to verifying humans.

The protocol upgrade is the substantive part. World ID now runs on an account-based architecture with multi-key support, key rotation, recovery, and formal session management, all designed so the system survives the failures production infrastructure faces: lost devices, compromised keys, team turnover. One-time-use nullifiers strengthen the anonymity guarantee so interactions cannot be linked across services. The SDK is now open source, which means any application can become a World ID authenticator.
Alongside the protocol, World released a dedicated World ID app in public beta. It is positioned as the place where proof of human lives on a user's device. Tools for Humanity is the first builder, but the protocol permits any developer to ship their own authenticator, and the decentralisation trajectory depends on that happening.

Today's security stack verifies devices and credentials: something you have, something you know. A hardware key and a PIN. A laptop and a password. The assumption is that the right human is behind the device. That assumption is the weakest link, and AI is compressing it further. Phishing, credential theft, session hijacking, and deepfake-assisted social engineering all exploit the same gap.

World ID replaces the assumption with a cryptographic attestation that a specific, verified, unique human is present across an interaction. Not a device. Not a credential. A person. The relying service sees no personal data, only the proof, because the system uses zero-knowledge proofs to deliver verification without exposing identity. In security terms, the primitive is called human continuity. In plain terms, it answers a question no current system answers cleanly: is the human authorising this action the same real, verified human the system expects?

Deepfake-enabled fraud tripled in 2025, from $360 million in 2024 to $1.1 billion. Deloitte projects US AI-enabled fraud losses will reach $40 billion by 2027. Single incidents already run into eight figures: a finance worker at Arup wired $25 million after a deepfake video call impersonated the CFO in Hong Kong. Gartner expects 30 percent of enterprises to stop trusting standalone identity-verification tools by 2026.

Zoom is the first communications platform to integrate World ID Deep Face into its meetings product. The mechanism is a three-way cryptographic match: the signed image captured at the participant's original Orb verification, a real-time liveness selfie from the participant's device, and the live video frame the meeting sees. When all three align, the meeting gets a high-assurance signal that the person on the call is the verified human expected. VanEck Funds is in a limited beta.
Docusign is adding proof of human to its agreement workflows, letting signers confirm that a human (and not a bot or an agent) authorised a specific signature. Outtake Verify for Email extends the same idea to outbound messages, deployed across Tools for Humanity's finance, recruiting and executive teams.

Bots are a consumer problem too. UK fans spend roughly £145 million extra every year on resales driven by ticket bots. During one Taylor Swift Eras Tour presale, Ticketmaster recorded 3.5 billion system requests in a single day, and some tickets appeared on resale platforms at 70 times face value. Authorities later alleged that a single broker used automated tools to acquire hundreds of thousands of tickets.

Tinder's World ID integration, first piloted in Japan, now goes global, letting Orb-verified profiles carry a verified human badge and receive five free Boosts. Concert Kit, a new product from World, reserves tickets for verified humans across existing ticketing platforms, and launches with Bruno Mars's current world tour featuring DJ Pee .Wee (Anderson .Paak). Razer continues to use Razer ID verified by World ID as a human-first gaming standard, Mythical Games extends the same check into player-owned game economies, and Reddit has signalled it is exploring the option for accounts flagged as automated.

The structurally largest change may be on the agent surface. AgentKit now ships three primitives. Agent delegation lets a verified human attach proof of human to an agent, so downstream services can verify a real person is behind it. Human-in-the-loop, built with Vercel's new Workflow SDK, lets any agent workflow request a zero-knowledge proof that a unique human approved a specific action, with a full audit trail, and is live today on npm. Agentic commerce, demonstrated alongside the Universal Commerce Protocol co-developed by Shopify and Google, lets merchants enforce "one human, one agent, one allocation" on flash sales and limited drops.

Okta is planning a product called Human Principal that lets API builders verify whether a human stands behind an agent and enforce policies accordingly. World ID is slated as one of the first integrations. Browserbase and Exa already accept verified-agent traffic with preferential access: Browserbase reduces anti-bot friction on agents carrying a World ID, and Exa offers 100 free API requests a month to agents verified through AgentKit.

The honest read is that this is less a feature release and more an infrastructure bid. World is claiming proof of human belongs next to HTTPS and OAuth in the stack, and it is stacking the product side of that claim faster than most have acknowledged: nearly 18 million verified humans, 160 countries, and a partner list across video, dating, tickets, signatures, email and agent workflows. None of those surfaces has a working defence against the problem that actually hurts them, which is that bots and deepfakes are beating every non-cryptographic check the incumbents ship.

The counter is that World's history remains contested for reasons that have not gone away: iris-scan biometrics at scale, regulatory pushback in markets including Kenya and Spain, and open questions about incentive alignment in early rollouts. The protocol redesign addresses a real security gap, and the partnership list is the strongest signal yet that the primitive has product-market fit, but adoption will run directly into privacy and governance friction no upgrade alone can resolve. The next test is less about the math and more about whether regulators, enterprises, and users accept proof of human as infrastructure the way they accepted TLS.
2026-04-18 04:47:33
What if the quantum attack on today's encrypted internet had already started, and nobody told you?
That was the framing Fhenix pressed during its recent X Space on post-quantum cryptography, hosted by founder Guy Zyskind with Ethereum Foundation developer Nicolas Serrano, Fhenix researcher Doron Zarchy, and Michael Cowart of VenturemindAI. The panel's argument was uncomfortable in a specific way. The post-quantum transition is no longer a product roadmap waiting on a breakthrough. It is a migration the industry has already fallen behind on.

Zyskind set the stakes: replacing the cryptography that underpins the internet is not a feature update, and the probability of a cryptographically relevant quantum computer arriving is no longer negligible. Serrano broke the threat into two distinct vectors. Zarchy surveyed the gap between standards and deployment, noting that most systems in production today are not post-quantum secure. Cowart grounded the conversation in enterprise risk.
https://x.com/fhenix/status/2044794366406095317?s=46&t=sTtv9w6thA6BUx_tq2fBWA&embedable=true
Guy Zyskind, Founder of Fhenix, explains:

"We don't know when it hits, and that is exactly why it is dangerous. It could be a decade away or much sooner. The same math behind FHE is likely to underpin post-quantum cryptography."
The context outside the livestream backs the urgency. In February 2026, Vitalik Buterin published a full quantum roadmap for Ethereum naming four vulnerable components: consensus-layer BLS signatures, KZG-based data availability, ECDSA account signatures, and zero-knowledge proofs. The Ethereum Foundation stood up a dedicated Post-Quantum team in January 2026, and Buterin has publicly said quantum risk could surface before the 2028 US election.

Serrano's sharpest point was that the quantum threat is not one event. It is two.
The first is harvest-now-decrypt-later. Adversaries intercept and archive encrypted traffic today, counting on a quantum machine capable of breaking it to arrive inside the shelf life of the data. The US Federal Reserve, NSA, CISA, and the EU's cyber agency all now cite this as the working threat model. Expert analysis suggests harvested records could begin being decrypted around 2030.

A concrete example: a trading firm routes encrypted order flow today. An attacker captures the traffic and stores it. In 2030, a quantum machine breaks the session keys. The firm's strategies from 2026 become readable. No intrusion alert ever fires.
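A standard way to reason about this shelf-life risk is Mosca's inequality: if the years the data must stay secret plus the years the migration takes exceed the years until a cryptographically relevant quantum computer exists, traffic harvested today is already exposed. A minimal sketch; all three horizons are assumptions an operator would have to estimate, not predictions:

```python
def harvest_now_risk(secrecy_years: float,
                     migration_years: float,
                     years_to_crqc: float) -> bool:
    """Mosca's inequality: data is at risk if the time it must stay secret
    plus the time the migration takes exceeds the time until a
    cryptographically relevant quantum computer (CRQC) arrives."""
    return secrecy_years + migration_years > years_to_crqc

# Hypothetical numbers: order flow sensitive for 8 years, a 6-year
# migration, and a CRQC assumed 10 years out.
print(harvest_now_risk(8, 6, 10))   # True: harvested traffic is already exposed
print(harvest_now_risk(2, 3, 10))   # False: migration completes inside the window
```

The point of the exercise is that the risk clock starts at capture time, not at decryption time.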
The second vector is signature forgery. Once attackers can derive private keys from exposed public keys, they sign transactions as the original holder. For public chains, that is the end of custody as it exists today.

Zarchy's contribution was procedural. The mathematics is not the problem anymore.
NIST finalised its first three post-quantum standards in August 2024: FIPS 203 (ML-KEM, for key encapsulation), FIPS 204 (ML-DSA, for digital signatures), and FIPS 205 (SLH-DSA, a hash-based backup). A fifth algorithm, HQC, was selected for standardisation in March 2025. The NSA's CNSA 2.0 framework mandates post-quantum deployment for new classified systems by 2027 and full transition by 2035.

Standards are not the bottleneck. Implementation is. Historical cryptographic migrations take five to ten years. Every wallet, every HSM, every TLS endpoint, every smart contract has to move, and most operators are not talking publicly about their timelines.
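In practice, the first migration step most operators take is hybrid key exchange: run a classical exchange and a post-quantum KEM in parallel, then derive the session key from both shared secrets, so the connection stays secure unless both are broken. A minimal sketch of the combine step only, not any specific protocol's KDF; the two input secrets are stubbed stand-ins for real key-exchange outputs:

```python
import hashlib
import hmac

def hybrid_combine(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """Derive one session key from a classical shared secret (e.g. from
    X25519) and a post-quantum one (e.g. from ML-KEM-768), HKDF-extract
    style. The result stays secret as long as either input does."""
    ikm = classical_ss + pq_ss   # concatenate both shared secrets
    return hmac.new(context, ikm, hashlib.sha256).digest()

# Stub secrets standing in for real key-exchange outputs (assumption).
classical = b"\x01" * 32
post_quantum = b"\x02" * 32
key = hybrid_combine(classical, post_quantum, b"hybrid-demo")
print(len(key))  # 32-byte session key
```

The design choice is deliberate: hybrids let an operator deploy the new math today without betting the connection on an algorithm family that is barely a year into standardisation.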
The gap shows up in cost. On Ethereum, verifying an ECDSA signature today costs roughly 3,000 gas. A quantum-resistant check is estimated at 200,000 gas, a 66-fold jump. Buterin's proposal uses recursive STARK aggregation to compress that overhead into one proof per tick. Most other Layer 1s have not published a plan at all.

Fhenix has a commercial interest in this argument, but the technical claim stands on its own. The lattice-based mathematics beneath Fully Homomorphic Encryption, the primitive Fhenix uses to run encrypted computation on Ethereum, is the same family that secures CRYSTALS-Kyber and CRYSTALS-Dilithium in the NIST standards.

That matters for two reasons. First, building FHE-native systems today positions a protocol to be post-quantum-ready by construction, not by retrofit. Second, encrypting mempools with post-quantum primitives tackles front-running, MEV extraction, and quantum exposure at the same time. Zyskind has argued in earlier interviews that the privacy stack and the post-quantum stack are on track to collapse into a single layer.

The value of the Fhenix conversation was not the warning. Warnings about quantum are cheap. The value was the honest framing of the bottleneck. The math exists. The standards exist. What does not exist is a migration path the broader ecosystem has agreed on, staffed for, and begun executing. Ethereum has at least named the four components at risk and sketched a multi-year path through them. Bitcoin is a harder coordination problem with no equivalent plan yet. Most consumer-facing infrastructure sits in the same position, only quieter about it.
The takeaway for builders is narrower than "add post-quantum to the roadmap." It is to decide now whether the data and signatures a protocol handles today will still be sensitive in 2035, and to assume an attacker is collecting everything they can in the meantime. On that test, projects built on FHE-compatible foundations have an accidental head start. Everyone else faces a migration whose hardest part is not cryptography. It is persuading every wallet, every integrator, and every user to move at once.
2026-04-18 03:52:58
What happens when a Layer 1 built for real-world finance stops letting builders pick their own stablecoin and picks one for them?
That is the question Pharos Network answered, naming USDC the core stablecoin for its Native-to-Pharos Incubation Program. The announcement looks thin on paper and lands heavy in practice. Every project funded through the $10 million incubator will default to USDC for settlement and collateral, and to Circle's Cross-Chain Transfer Protocol for moving dollars between Pharos and more than twenty other chains. Instead of a stablecoin menu, builders now get a stablecoin spec.

Pharos launched Native-to-Pharos in February with backing from Hack VC, Draper Dragon, Lightspeed Faction, and Centrifuge. The focus was clear from day one: early-stage teams working on decentralized exchanges, yield infrastructure tied to tokenized assets, and prediction markets anchored in real-world outcomes.

Today's update ties the capital to a settlement asset. Inside the incubator, USDC will function as the reserve instrument for lending markets, trading venues, and payment rails on Pharos. CCTP will handle cross-chain flows without wrapped assets, removing the third-party bridges that have been the weakest link in almost every multi-chain exploit since 2021. The broader Circle integration, announced on March 27, puts the same rails under the Pacific Ocean mainnet.
Wish Wu, Co-founder and CEO of Pharos Network, explains:

"With a rapidly expanding developer base already building on USDC, we are embedding globally used settlement infrastructure into the Pharos builder ecosystem. By leveraging USDC and CCTP, builders can natively extend into Pharos and operate across ecosystems without additional structural complexity."

For readers who have not spent time in this stack, the plain version: USDC is a dollar token issued by Circle, redeemable one for one against US dollars, with reserves held at BNY Mellon and managed through a BlackRock-run money market fund. It is a dollar that lives on public blockchains and moves at blockchain speed.

CCTP is the wiring between those chains. Rather than locking USDC on one chain and issuing a wrapped copy on another, CCTP burns the token on the source chain and mints a fresh unit on the destination. Supply stays constant, the token remains native, and the user is not trusting a bridge operator sitting between two networks.
A concrete example: a team building a lending market on Pharos can accept USDC collateral from Ethereum, Solana, or Base without ever touching a wrapped asset. A user depositing $10,000 from Arbitrum sees those funds arrive as native USDC on Pharos within seconds under CCTP V2's Fast Transfer, ready to borrow against, lend out, or settle.
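The supply invariant is the whole point of the design. A toy model of the burn-and-mint flow, not Circle's actual contracts; chain names and structure here are purely illustrative:

```python
class ToyBurnMint:
    """Toy model of burn-and-mint transfers: tokens are destroyed on the
    source chain and created on the destination, so total supply never
    changes and no wrapped copy or custodial bridge ever exists."""

    def __init__(self, supply_by_chain: dict):
        self.supply = dict(supply_by_chain)

    def transfer(self, src: str, dst: str, amount: int) -> None:
        if self.supply.get(src, 0) < amount:
            raise ValueError("insufficient native supply on source chain")
        self.supply[src] -= amount                            # burn at the source
        self.supply[dst] = self.supply.get(dst, 0) + amount   # mint at the destination

    def total_supply(self) -> int:
        return sum(self.supply.values())

usdc = ToyBurnMint({"arbitrum": 10_000, "pharos": 0})
usdc.transfer("arbitrum", "pharos", 10_000)
print(usdc.supply["pharos"])   # 10000, native on the destination chain
print(usdc.total_supply())     # 10000, unchanged by the transfer
```

Contrast this with lock-and-mint bridges, where the destination token is an IOU against assets held by a third party; the toy model has no such party to exploit.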

The choice is less about Pharos and more about where the money is sitting. USDC's market cap reached roughly $79 billion in mid-April, with supply growing 73 percent across 2025 and outpacing Tether's USDT in year-on-year expansion for the second year running. USDC also captured 64 percent of stablecoin transaction volume in March, the first time it has crossed that line in nearly a decade.

On the asset side, the tokenized RWA market crossed $26.4 billion in March 2026, roughly a 300 percent jump year-on-year. Tokenized Treasuries alone account for about $5.8 billion, led by BlackRock's BUIDL and Ondo Finance's products. McKinsey projects the segment reaches $2 trillion by 2030, while US Treasury Secretary Scott Bessent has publicly floated a $3 trillion stablecoin market by the end of the decade.
Pharos is not picking USDC for brand. It is picking the asset with the deepest regulated distribution, the strongest institutional custody, and the clearest political tailwind under the GENIUS Act framework.

Every Layer 1 chasing RealFi is now making the same decision in different directions. Stable has built its entire chain around USDT. Circle's own Arc network is being positioned as a regulated corridor for USDC-native institutional flows. Pharos, a general-purpose EVM L1 seeded by former Ant Group engineers, is taking a third path: keep the open network, but hard-wire USDC as the default dollar for the incubator cohort.

The move reduces optionality and increases coherence. A team entering Native-to-Pharos does not spend its first two weeks picking between four stablecoins with different trust assumptions. It spends them building against one known quantity. For the investors behind the incubator, that is a cleaner bet on a single settlement standard rather than a diffuse multi-token ecosystem.

Announcements like this read as press material and function as product commitments. By naming USDC the core stablecoin of its incubator, Pharos is telling Circle, regulators, and founders where its roadmap lives. The upside is real. If the Pacific Ocean mainnet ships with strong USDC liquidity and active CCTP routes, Pharos becomes a credible venue for tokenized Treasuries, private credit, and merchant settlement, three segments growing faster than almost anything else in crypto.
The test is whether the first incubator cohort ships products institutions will actually use. Stablecoin rails and a recent $44 million Series A round do not guarantee flows. Building regulated, yield-bearing assets that clear across jurisdictions is slower, messier work than hackathon wins. Pharos has stacked the deck well. The next four quarters, and the first real applications inside Native-to-Pharos, will decide whether the bet turns into volume.
2026-04-18 02:20:14
For a while, “just add evals” felt like the obvious answer to shipping LLM systems responsibly.
That made sense. OpenAI describes evals as a way to make LLM applications more stable and resilient to code and model changes, while Anthropic frames evals as the mechanism for testing whether an AI system succeeds on a task at all. At the same time, tools like Promptfoo and DeepEval have pushed evaluations closer to the software engineering mainstream by explicitly supporting CI/CD workflows.
So, on paper, the problem looked solved: you run an eval suite, get scores, and check whether the model is good enough. In practice, that is not what happened.
What I keep seeing is that teams are getting better at producing eval results, but not necessarily better at making build decisions from them, and those two are not the same thing. An eval run can tell you a lot: pass rates, metric scores, safety findings, per-test detail. But CI does not care how informative the dashboard you maintain is. CI needs something much harsher: a deterministic answer to a narrow question, like “should this build pass?” And until you try to operationalize it, that sounds trivial.
The first problem is that evals are not naturally shaped like policy, but rather like measurements. A tool will tell you that one suite scored 0.84 on “groundedness”, another had a 92% pass rate, and a third failed on a handful of long-tail edge cases. Which is useful, of course, but what does your organization actually require? Is the 0.84 acceptable? Is the 92% acceptable on all suites, or only on non-safety suites? Is a one-point regression acceptable if the variance is normal, but unacceptable if it affects hallucination checks? The moment you ask those questions, you are no longer talking about “running evals.” You are talking about governance. That is where things start getting messy.
The second problem is that most real teams do not live in one clean eval ecosystem. OpenAI and Anthropic both emphasize that evaluations should be ongoing and tied to real production behavior, which is sensible, but teams rarely standardize perfectly around a single framework for long. Different groups use different tooling, different metrics, different naming conventions, and different report formats. Promptfoo supports CI for prompt and security testing. DeepEval supports CI for unit-style and end-to-end LLM testing. Both are valid and both solve real problems, but once multiple ecosystems exist inside one company, you inherit a new class of problem above the tool level: how to apply one consistent release standard across all of them.
That gap gets underestimated because dashboards make everything look more mature than it is. A dashboard can show trend lines, pretty summaries, and lots of reassuring numbers, but a merge pipeline is unforgiving. It has to cope with malformed result files, missing metrics, renamed tests, empty selections, bad regex filters, broken baselines, and edge cases when someone defines a relative regression check carelessly. In other words, the actual operational problem is not “can we evaluate this model?” The real problem is “can we trust the evaluation artifacts enough to let them block releases without creating chaos?”, which are very different standards.
I think this is why so many AI quality systems look good in demos but feel brittle in production. The demo assumes the eval ran correctly, the schema is stable, the metrics are present, and everyone agrees on what counts as failure. Production gives you the opposite. Someone changes a metric name, or uploads JSON in a slightly different shape. A baseline is missing, a test subset returns no rows, and nobody agrees whether that should pass silently or fail loudly. Suddenly, your “AI quality gate” is really just a collection of fragile assumptions taped to a build script.
If you make the gate too loose, regressions slip through. A model gets less grounded, less safe, or less consistent, but the build still goes green because nobody translated the organization’s standards into machine-enforceable rules. If you make the gate too brittle, engineering loses trust in it. People start bypassing the checks because the system blocks releases for configuration problems rather than real quality problems. Once that happens, the whole promise of eval-driven development starts to erode.
OpenAI’s recent writing on evals for agents makes the broader point clearly: the goal is to turn skills into something you can test, score, and improve over time. Anthropic makes a similar case that automated evals should support development before real users are exposed. I agree with both, but it feels like there is a missing middle layer between “we have test results” and “we can safely use them in release engineering.”
That missing layer is policy, not more evals, another benchmark, or another scorecard. By policy, I mean explicit, reviewable rules that say things like: this suite must stay above a given pass rate, this metric must not regress more than a defined amount from the baseline, these tags are advisory, and those tags are blocking. Once you think in those terms, the architecture becomes clearer, and the evaluation framework starts generating evidence. A separate policy layer interprets that evidence for CI.
That separation matters more than it first appears. It means teams can change models, prompts, and eval providers without rewriting their release standards every time. It means quality requirements stop living as tribal knowledge in Slack threads and start existing as versioned policy. It means warnings and errors can be treated differently on purpose rather than accidentally, and build decisions become auditable, which is exactly what you want when an LLM system starts affecting customer-facing behavior, internal automation, or safety-sensitive workflows.
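A sketch of what that policy layer can look like in practice: explicit, reviewable rules interpreting eval artifacts for CI, failing closed when the evidence itself is malformed or missing. Every name and threshold here is illustrative, not any particular framework's API:

```python
import json

# Illustrative policy: which suites block a release, and at what floor.
POLICY = {
    "safety":       {"min_pass_rate": 1.00, "blocking": True},
    "groundedness": {"min_pass_rate": 0.90, "blocking": True},
    "style":        {"min_pass_rate": 0.80, "blocking": False},  # advisory only
}

def gate(results_json: str):
    """Return (should_pass, findings). Malformed or missing evidence
    fails closed: CI must never go green on absent data."""
    try:
        results = json.loads(results_json)
    except json.JSONDecodeError:
        return False, ["unparseable results file"]
    ok, findings = True, []
    for suite, rule in POLICY.items():
        rate = results.get(suite, {}).get("pass_rate")
        if rate is None:                      # missing blocking suite: fail closed
            findings.append(f"{suite}: no results")
            ok = ok and not rule["blocking"]
            continue
        if rate < rule["min_pass_rate"]:
            level = "BLOCK" if rule["blocking"] else "WARN"
            findings.append(f"{level} {suite}: {rate:.2f} < {rule['min_pass_rate']:.2f}")
            if rule["blocking"]:
                ok = False
    return ok, findings

passed, notes = gate('{"safety": {"pass_rate": 1.0}, "groundedness": {"pass_rate": 0.84}}')
print(passed)  # False: groundedness 0.84 sits below the 0.90 floor
```

The shape matters more than the details: the rules live in versioned code reviewers can read, advisory and blocking suites are distinguished on purpose, and every failure mode an eval artifact can exhibit has a defined answer.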
The real missing layer is small and operational: a way to take whatever eval results already exist, interpret them through explicit rules, and give CI a fail-safe answer. Because once you are shipping LLM systems seriously, “we ran evals” is not enough; you need to know what happens next.
2026-04-18 02:16:01
Lovable, the AI-powered app builder, hit $200M ARR in under a year — the fastest AI startup ever to cross $100M ARR, doing it in just eight months. You’d think a company growing that fast has PMF locked down. Elena Verna, their Head of Growth, calls PMF at Lovable a “perishable good” — something the team must re-earn every 90 days. The playbook of “find PMF, then scale” is a trap. This article is the operating manual for the treadmill that never stops.
If PMF is a treadmill, the first step is admitting where you actually stand. The honest answer is brutal.
Ninety to ninety-five percent of new products fail. In my experience training 12,000+ product managers, the reason is almost always the same: the team was wrong about something fundamental — the segment, the job, the value prop, the channel, the economics — and they discovered it too late.
The goal of early stage isn’t to ship a product. It’s to buy knowledge about what will kill your idea.
We don’t launch products — we purchase validated learnings. A pivot isn’t some dramatic reinvention. It’s the surgical act of changing an assumption that turned out to be wrong.
Consider Notion. Their V1 was a programming tool “for non-coders.” Nobody wanted it. Founder Ivan Zhao fired all four employees, moved to Kyoto, and rebuilt from scratch. His critical insight: people don’t want to build software — they want to get stuff done. The wrong assumption was about the Job To Be Done and the core Value Proposition. Notion V2 launched as a modular workspace in 2018 and now sits at a $10B valuation.
If the goal of early stage is to buy knowledge about what kills your idea, the next question is obvious: where do you start digging?
Jobs are the root cause of everything in your product. A product exists to perform a job for a customer, and profit appears only when you create real added value within a specific segment.
The most common mistake I see? Teams skip segment and job selection entirely and jump straight into building a solution. They fall in love with the technology, not the problem.
The correct sequence is: pick a segment, identify the highest-value job within it, model unit economics, validate whether you can deliver enough value, then figure out how to communicate that value.
Wispr Flow lived this. The founders spent years building a hardware voice device. After a brutally honest board meeting in mid-2024, they confronted the truth: the Job was never “own a cool voice gadget.” It was “type faster and more naturally.” They killed the hardware, pivoted to a macOS dictation app, and hit #1 on Product Hunt. Free-to-paid conversion reached roughly 20% — against an industry average of 3-4%. It became possible because they finally matched the right segment with the right Job.
Jobs give you direction. But in 2025, the thing that used to force you to validate before building — expensive code — disappeared overnight.
The old cycle was linear and expensive: research segments, identify jobs, model economics, and only then — when you had strong conviction — invest in building an MVP. Code was scarce, so you had to be right before you started building.
AI collapsed that constraint. Writing code is becoming practically free. And that changes the entire validation sequence.
The old model said: do research first, find the promising segments and Jobs To Be Done, then build one MVP for the best bet. The new model says: build MVPs for each promising segment in parallel. Not polished products — lightweight probes designed to reach one moment: the actual sale.
Why the sale? Because the most valuable feedback isn’t an interview quote or a survey response. It’s someone pulling out their wallet. At the point of sale, you validate the entire causal chain at once: does the segment exist? Is the job real? Does the value proposition land? Does your communication activate the buyer? Is there willingness to pay?
This is a radical acceleration of hypothesis testing. Instead of running one careful experiment per quarter, you can run a dozen cheap ones per month — each designed to reach the point of transaction as fast as possible. You’re not building products. You’re purchasing validated learnings about what kills your idea.
The danger is obvious: teams mistake building velocity for learning velocity. They ship feature after feature, feeling productive, while core assumptions go untested. More code, less validation.
The right approach: use AI as an accelerator of artifacts — probes, MVPs, prototypes — not as a substitute for learning. Measure your progress in the number of validated assumptions and the quality of signal from real sales, not in features shipped or lines of code pushed.
AI has turned the founder into a factory operator. The number of hypotheses you can test per unit of time has exploded. But throughput without validation is just noise — more experiments don’t mean more knowledge.
Your hypothesis factory needs an operating system with clear explicit kill criteria: what would make you kill this bet.
Kill criteria protect you from bad bets. But even good bets expire. The hardest lesson in AI-era product work is that PMF itself has a shelf life.
Jasper AI was the poster child of AI-first PMF — $120M ARR, $1.5B valuation. Then ChatGPT launched. Traffic dropped 30% in two months. Revenue crashed to $35M. Both co-founders were out. Three pivots in twelve months. Perfect product-market fit, vaporized — not because the product got worse, but because the market shifted beneath them.
This isn’t just an AI-category phenomenon.
In every market, PMF is becoming less durable. Models change, user expectations evolve, competitors spawn faster, and advantages that took years to build evaporate in quarters. The strategy of “found PMF, now optimize and scale” is a trap in any category.
The solution is two loops running simultaneously. Loop one: optimize your current fit — double down on what your foundational cohort loves, squeeze more value from existing assumptions that tested well. Loop two: run a continuous innovation loop — new bets, new segments, new jobs — even while Loop one is working. As Elena Verna puts it, what used to be long-horizon innovation is now a quarterly reality. The hypothesis factory isn’t a phase you grow out of. It’s a perpetual engine.
The treadmill never stops. But now you have the operating manual. Six moves, starting Monday.
PMF isn’t something you find. It’s something you keep finding. The only question is whether you have a system for it — or whether you’re just running.
2026-04-18 02:04:10
I studied law. Seven years later, I did an MBA. At no point during either degree or in my work in between did anyone pull me aside and say, “Hey, you should probably learn to write a ‘for loop’.” And yet here I am, running a legaltech startup, shipping production code, and mass-texting my cofounder screenshots of features at 2 am.
I’ve been chewing on the idea of the “Humanities Engineer” since about 2018-2020, back when I first taught myself to code through books, Udemy courses, FreeCodeCamp rabbit holes, and a frankly concerning obsession with Stack Overflow.
So, what is a “Humanities’ Engineer”? It’s someone who studied humanities, arts, or some flavour of social science, and who crosses into technical building; specifically, software engineering. Here, we’re not thinking of someone who pivots their careers and abandons everything they learned before, or a bootcamp graduate who memorised JavaScript flashcards for twelve weeks after a history degree per se. A “Humanities’ Engineer” is someone who brings all the analytical training, close reading, critical thinking, and genuine human curiosity from their education straight into the codebase.
How does one actually become a “Humanities’ Engineer”? In this article, I want to lay down some actual, concrete steps and the right mindset to become one. Because there has never, in the history of computing, been a better time for a philosophy major or a history nerd or a literature obsessive to become an exceptional builder.
\
You’ve probably seen this debate floating around. Jensen Huang, CEO of Nvidia, stood on a stage at the World Government Summit in Dubai in February 2024 and told a room full of powerful people that kids shouldn’t learn to code anymore. His argument: AI will make human language the programming language, so people should focus on domain expertise instead.
There is an entire cottage industry of Medium posts and LinkedIn takes arguing about whether Jensen is right or wrong. I think the whole debate is a red herring, at least for us. It is out of scope for “Humanities’ Engineers.” We are our own different thing. The question for us isn’t “should the average person learn to code?”, it’s “I have ideas, I have taste, I can read 400 pages about the Peloponnesian War for fun. Can I build things?” And the answer is: absolutely yes, and it has never been easier.
When I first started learning, I felt like there were a few gatekeepers. You see them everywhere - condescending Stack Overflow answers, textbooks written for people who already knew what was in the textbook. In short, loads of places where you ask a question and get a response that makes you want to close your laptop, walk into the sea, and start a new life as a fisherman. The astonishing thing now is that I can ask an LLM to explain something to me like I’m five, and it will do it patiently, repeatedly, at 3 am, without ever making me feel stupid.
\
I’m going to assume you have around a 10th-grade level of maths education. Even that isn’t strictly necessary; it’s just nice to have, especially the concept of functions: something goes in, something comes out. But if you hated maths, don’t worry.
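If “functions” still feels abstract, here is the whole idea in a few lines of TypeScript (the language this article ends up recommending). `wordCount` is an invented name for illustration, not anything you need to memorise:

```typescript
// A function: something goes in (a sentence), something comes out (a number).
// "wordCount" is a made-up illustrative example.
function wordCount(sentence: string): number {
  // Trim stray spaces at the ends, then split on runs of whitespace.
  return sentence.trim().split(/\s+/).length;
}

console.log(wordCount("Call me Ishmael")); // 3
```

That really is the entire mental model: input goes in between the parentheses, output comes back out via `return`. Everything else is detail.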
I’m also going to assume you can swing about $20/month (or roughly £18 per month if you’re in the UK like me) for an LLM subscription. Claude has been my go-to (full transparency). But ChatGPT and Gemini are solid too at this stage. Cursor IDE is, in my opinion, the best tool available right now for AI-assisted coding.
If you can’t spare $20 at this moment, that’s completely fine. You can still do this. ChatGPT’s free tier is okay. There are open-source models you can run locally (look into Ollama and open-weight models like Llama, DeepSeek, Kimi, etc). Many IDEs have free AI integrations now. It’ll be a little more friction and a little more switching between tabs, but the core path is the same.
If you went to university, I’m going to assume you studied humanities, arts, or some form of social science (though again, not necessary at all; you just need to bring your passion for the humanities). And most importantly, I’m going to assume you have ideas you want to build, or at least a nagging feeling that you want to find your place in the world of making things. If you’re juggling extraordinary circumstances (raising kids solo, working two jobs, whatever it is), I’m also going to assume you’ll carve out some time, even if it’s small. In my experience, small and consistent beats large and sporadic for the Humanities’ Engineer.
\
Most articles will tell you to just follow a tutorial, build a to-do app, and then somehow emerge as a fully formed engineer. That’s like saying “to become a chef, first make toast, then open a restaurant.”
But what if you could start building something right now, this second? What would it be?
I have asked this exact question to people, and the answers are stunningly diverse: Tracking car races in real-time. A personalised, agentic life-organisation tool. A virtual try-on for fashion (“how would this jacket look on me before I buy it?”). Creative advertising platforms (AR, VR - the world is your oyster!).
Chances are that whatever you’re thinking is genuinely unique to you. And you should chase it.
The secret sub-step within this step is this: go look at “competitors.” I’m using that word very loosely because you’ll probably build this project to learn, or open-source it, or maybe just show your friends. But look at how others have approached similar problems. Say you love fashion and you want to build that virtual try-on thing. Search for existing products. Look at their features, their UX, what they do well, what they do badly. Then ask an LLM: “How would I start building something like this?” The point isn’t to copy. The point is that seeing what exists will spark new creative ideas for you.
At this stage, you will feel daunted. You’ll look at what’s already out there and think, “Oh my god, everything that could be built has already been built.” I promise you it hasn’t. From cafes to cars to chatbots, the amount of room for differentiation is staggering. NOBODY has built YOUR version of the thing yet.
\
You hear a lot of conflicting advice about which language to learn first. Some people say, “just learn the general concepts and syntax of programming.” That’s not how we’re going to do this as “Humanities’ Engineers.” We are prioritising building. We want to get ideas out of our heads and into the world as fast as humanly possible, with as little potential for bugs as possible.
Before LLMs, I used to recommend Python. Python is still my favourite language. It’s elegant and concise. But these days, I’d say go with Next.js and TypeScript.
(Quick note: if you aren’t already familiar with these concepts, ask an LLM to break down the next two paragraphs and explain them like you’re five.) Next.js has become the go-to React framework. It has over 138,000 GitHub stars and powers massive products used by Netflix, TikTok, Nike, Uber, Starbucks, and Spotify. We use it at my startup too, and I’ve been happy with it. The hosting ecosystem is mature: Vercel (which created Next.js), Render, AWS Amplify, Railway, Netlify, and all the major cloud platforms let you deploy directly. You can build end-to-end products. Frontend, backend, API routes, the whole thing. It’s your gateway drug to making real things real people can use.
With Python and Django, there’s a bit more setup overhead to get a full-stack app running. Do I think Python and Django are conceptually easier to learn? Honestly, yes. But as “Humanities’ Engineers,” we’re optimising for one thing above all: building. Getting your idea live. You can always pick up Python later (and you should, it’s wonderful).
A good mental trick: Pretend you’re building a startup. Act like you’re going to launch this thing. Get your friends and family to try it out. Give yourself a name, a landing page, a deadline. We aren’t play-acting here. This is how you create urgency and avoid the procrastination (yes, it still counts as procrastination!) of spending four weeks picking the “right” CSS library. This is how you avoid that weird existential dread of being stuck doing something no one - not even you, it seems - will ever see.
\
This is the part where the “Humanities” part of being a “Humanities’ Engineer” makes a difference.
We are used to reading long, dense, sometimes boring material: Fantasy trilogies. Legal textbooks. 19th-century Russian novels. Entire Wikipedia rabbit holes about obscure historical events. This is actually a superpower in software engineering, and I am not trying to just hype up our (often, unfortunately, dismissed as “useless”) degrees.
There’s a running joke that engineers don’t read the documentation. As “Humanities’ Engineers,” we will not only read the documentation (i.e. the manual of how a bunch of code works together), we will read the code itself. My legal training taught me that when you’re reading a contract or a statute, every word matters. You don’t skim (unless you’re confident in the text). You read each clause, and you ask why it says what it says. I brought that same habit to code, and it has been the single most useful skill transfer of my career.
\ How do you apply it practically? Whatever code an LLM writes for you, interrogate every single line. Let’s say it gives you something like:
\
```typescript
export function calculateTotal(items: CartItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}
```
Ask the LLM to explain every piece of this. What does export mean? What does function mean? Why is calculateTotal written as one word, like it’s commanding something, with no spaces and a lowercase first letter? Why are there parentheses? What’s CartItem[] and why is there a colon after items? What does : number at the end mean? What on earth is .reduce? Why is there a 0 at the end? What’s the difference between a parameter and an argument?
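To show what the answers might boil down to, here is the same function annotated line by line. The comments are my own sketch of the explanations an LLM would give, and the `CartItem` type is filled in here so the snippet stands alone:

```typescript
// CartItem is a "type": a description of the shape each item must have.
type CartItem = { price: number; quantity: number };

// "export" makes the function importable from other files.
// "calculateTotal" is camelCase: a JavaScript naming convention
// (lowercase first word, capitalised words after), not a language rule.
// "items: CartItem[]" means the parameter "items" is an array of CartItem.
// ": number" after the parentheses is the return type.
export function calculateTotal(items: CartItem[]): number {
  // .reduce walks the array once, carrying a running value ("sum").
  // The 0 at the end is the starting value of that running sum.
  // "sum" and "item" are parameters; the values passed in are arguments.
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

console.log(calculateTotal([{ price: 2, quantity: 3 }, { price: 5, quantity: 1 }])); // 11
```

If any comment above still raises a “but why?”, that is exactly the question to put to the LLM next.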
\ THIS IS IMPORTANT! Treat your understanding of syntax like a very long walk. Like, say, 100,000 steps. Each step counts. Every single little piece of syntax you understand contributes to that journey. I hear a lot of vague advice like “don’t be intimidated.” Obviously, you’re going to be intimidated. A hundred thousand steps is a long way. So be intimidated. And then take the next step anyway.
\
If you’re using something like Cursor, you can get a minimum viable product up surprisingly fast. Build it, deploy it, then show it to someone.
They say that if you’re not embarrassed by the first version of your product, you shipped too late. This is annoyingly true. Your first version will be ugly. It will have bugs. Someone will click a button, and the whole thing will break in a way you didn’t know was possible. That’s fine. Get feedback, add features, fix things, and learn every single line of your codebase along the way.
\
This one used to frustrate me. People in the tech world would say “develop good taste,” and I’d think, amazing, thanks, but that’s like saying “to win at basketball, just put the ball in the net more than your opponent.” Sure. But how?
What actually helps is a bit more tedious: as you build your understanding of syntax and logic, make your code read like English. If an LLM or a tutorial uses some clever, compressed syntax that you don’t understand, rewrite it in the simplest, most readable way you can. Even if it’s “less efficient,” or someone you know who’s an “actual” engineer rolls their eyes. As a “Humanities’ Engineer,” you should write code that you find readable and useful. Fancy syntax can come later, once you actually understand what it’s doing.
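Here is what that rewriting looks like in practice. Both functions below do the same thing; the names and the 18+ check are invented for illustration:

```typescript
// The clever, compressed version an LLM or tutorial might hand you:
const isAdultCompressed = (u: { age?: number }) => (u.age ?? 0) >= 18;

// The "reads like English" rewrite, even if an "actual" engineer rolls their eyes:
function isAdult(user: { age?: number }): boolean {
  // "??" means: if user.age is missing, fall back to 0.
  const ageWeKnowOf = user.age ?? 0;
  const adulthoodStartsAt = 18;
  return ageWeKnowOf >= adulthoodStartsAt;
}

console.log(isAdult({ age: 30 })); // true
console.log(isAdult({}));          // false
```

The second version is longer, and that is fine. You can always compress later, once the compressed form no longer scares you.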
I’ve found writing about my code helpful, like rubber-duck debugging (Google this term, it’s very cute <3), but on paper. Explaining what a function does in plain language forces you to understand it. Your humanities training makes you better at this than you’d expect.
The best book I can recommend for developing taste in code is Clean Code by Robert Martin. Fair warning: the first time you read it, even the chapter titles will seem alien. But that’s okay. Come back to it after a few months of building. It’ll click a little more each time.
\
This is the big one. This was my biggest regret from the tutorial era, and fixing this habit was the single biggest accelerator in my learning.
There’s this funny YouTube video called “Every Programming Tutorial”. It’s about 30 seconds long, it’s been watched millions of times, and it perfectly captures the problem. Programming tutorials spend an eternity on basics like “this is a variable, this stores information,” then suddenly jump to building something impossibly complex, and the moment you hit the hard part, the instructor says: “Don’t worry about this section, we’ll come back to it later.”
They never come back to it. And “that section” is always the critical part.
The advantage of LLMs is that you can stop and say, “Wait. Tell me exactly what this part does. Be casual about it. Explain it like I’m at a pub.” More often than not, the thing you were told to skip is the thing that would have unlocked your understanding.
I’ll give you a real example: I was working with an image classification library that Claude suggested. It wrote a bunch of code, things kept breaking, and I found myself trying to debug things I hadn’t even known existed ten minutes earlier. Nothing was working - shocker! Then I remembered this exact rule. The Primeagen (a popular programmer and content creator) once made a really funny observation about how developers’ eyes just mentally gloss over regex (regular expressions, those pattern-matching strings of text that look like a cat walked across a keyboard, like (..+?)\1+). He was going over the CrowdStrike outage from July 2024, which, by the way, crashed about 8.5 million Windows computers. The root cause involved CrowdStrike’s Content Interpreter, which uses a regex-based engine, hitting a parameter mismatch that caused an out-of-bounds memory read. Regex was probably at the centre of one of the largest IT outages in history, partly because people’s instinct is to look at it, shudder, and move on.
I noticed I was doing the same thing. Claude had suggested some library functions I didn’t understand, and I was just accepting them. When I forced myself to stop and interrogate every single part, I found the bug in about ten minutes. The rule works.
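If you want to practise the habit on regex specifically, take a small pattern apart piece by piece, the way you would ask an LLM to. The email-ish pattern below is an invented teaching example (real email validation is far messier):

```typescript
// An invented, simplified "looks like an email" pattern, interrogated:
//   ^         start of the string
//   [^\s@]+   one or more characters that are neither whitespace nor "@"
//   @         a literal "@"
//   [^\s@]+   same rule again, for the domain name
//   \.        a literal dot ("." alone would mean "any character")
//   [^\s@]+   the top-level domain (com, org, ...)
//   $         end of the string
const looksLikeEmail = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

console.log(looksLikeEmail.test("ada@example.com")); // true
console.log(looksLikeEmail.test("cat walked across a keyboard")); // false
```

Once each fragment has a plain-English gloss, the cat-on-keyboard effect disappears, and you can actually reason about what the pattern will and won’t match.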
\
Even though I’ve recommended building with Next.js and TypeScript, I have to mention that Angela Yu’s 100 Days of Code Python course (on Udemy) is genuinely fun and well-structured. If you want to start there before jumping into Next.js, go for it. Writing things down as you learn will help. Keeping a notebook, even a messy one, forces you to process what you’re absorbing instead of just passively watching videos.
Speaking of which: “tutorial hell” used to be a real thing (and probably still is, even in 2026). It’s the trap of watching tutorial after tutorial, feeling like you’re learning, but never actually building anything. LLMs have mostly killed this trap, because you can learn by doing from day one, but the ghost of tutorial hell still haunts people who spend weeks “preparing” before they start building anything. Just start. You’ll learn what you need as you need it.
\
I saved this for the end because I wanted you to have the practical steps first, but this is the thing that honestly matters most. Since you read to the end, you’ve earned it!
The number one thing you need to bring to this journey is a quietly audacious belief that you can become an exceptional builder. “Quiet” is the critical word there. You don’t need to announce it on LinkedIn or post “Day 1 of my coding journey” content (unless that genuinely motivates you, in which case, amazing). What you need is an internal conviction that you can do this, paired with the tenacity and discipline to actually prove it.
People from humanities backgrounds have so much to bring to this field. We read closely. We think critically. We communicate better than most. We’re trained to handle ambiguity and complexity, to maybe over-analyse and ask “what does this actually mean?” rather than just accepting things at face value. But it’s not over-analysis when, increasingly, English itself is becoming a way to write programs. I work in legaltech, and I watch non-technical lawyers interact with AI tools every day and produce things that would have taken a development team weeks just a few years ago.
The barrier between “person with ideas” and “person who builds things” has never been thinner - it’s practically translucent at this point.
So build. Now. Today. Open Cursor, spin up a Next.js project (#notSponsoredByNextJS), and start making the thing you’ve been thinking about.
The world has enough people writing think-pieces about whether coding is dead.
It needs more people to build.