2026-02-19 16:52:51
I spent 2 months turning Claude Code from "just a CLI" into a fully custom dev environment. Here's how I built 14 slash commands, 11 specialized AI agents, and a real-time usage tracker — and why it changed how I code.
If you've used Claude Code, you know it's powerful. But out of the box, it's generic.
Every session starts fresh. Every request requires context. Every limit surprise kills momentum.
I wanted better.
The goal: Build a development environment that knows my stack, my conventions, and my workflow — so I can focus on building, not explaining.
Two months and 42,000 messages later, I have:
The stack:
The result: An environment where AI actually understands my project context from the first message.
This isn't a tutorial. This is a case study on what's possible when you treat Claude Code as a platform, not just a tool.
Before customization:
Every conversation started the same way:
"I'm building a Next.js 15 app with React 19 and Supabase. I use TypeScript strict mode, functional components, and..."
Copy-paste context. Explain stack. Repeat for every session.
Or worse:
"Can you help me design this API?"
And Claude would suggest Express.js. Or Flask. Or something I'm not using.
Because it doesn't know my stack.
The limits problem:
Halfway through a refactor:
"You've reached your rate limit. Try again in 3 hours."
No warning. No visibility. Just... stopped.
Instead of explaining context every time, I created 11 specialized agents with pre-configured knowledge:
Turns vague ideas into concrete specs.
Before:
"I want to build a dashboard"
After:
/requirements-analyst"Building a dashboard. Let me clarify:
- Real-time or static data?
- User roles and permissions?
- Key metrics to display?
- Deployment target?"
Asks the right questions. Outputs structured requirements.
Architecture design focused on MY stack.
Pre-configured with:
Example:
/backend-architect "User authentication flow"
Output:
// Suggested architecture for auth flow
// Using Next.js 15 Server Actions + Supabase Auth

// 1. Server Action: app/actions/auth.ts
import { revalidatePath } from 'next/cache'
import { redirect } from 'next/navigation'
import { createClient } from '@/lib/supabase/server' // adjust to your own Supabase client helper

export async function signIn(formData: FormData) {
  'use server'

  const supabase = createClient()

  const { data, error } = await supabase.auth.signInWithPassword({
    email: formData.get('email') as string,
    password: formData.get('password') as string,
  })

  if (error) throw error

  revalidatePath('/')
  redirect('/dashboard')
}
No generic advice. Just code that fits my stack.
Checks for vulnerabilities.
Scans for:
Optimizes based on metrics.
Analyzes:
Researches technologies before making decisions.
Example:
"Should I use Prisma or Drizzle ORM with Supabase?"
Makes informed recommendations with sources.
Beyond agents, I built 14 slash commands for common operations:
/new-task — Task Planning
Breaks down work into concrete steps.
Input:
/new-task "Add user profile page"
Output:
## Task: Add User Profile Page
Subtasks:
1. [ ] Create route: app/profile/page.tsx
2. [ ] Fetch user data (Server Component)
3. [ ] Build UI (avatar, name, bio, stats)
4. [ ] Add edit mode (Client Component)
5. [ ] Update API: app/api/profile/route.ts
6. [ ] Add tests
Estimated: 2-3 hours
Stack: Next.js 15, Supabase, Tailwind
/api-new — API Endpoint Generator
Creates new API routes with validation.
Includes:
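For example, a generated endpoint might look like this (an illustrative sketch assuming zod for validation, not the command's literal output):

// app/api/profile/route.ts (illustrative)
import { NextResponse } from 'next/server'
import { z } from 'zod'

const ProfileSchema = z.object({
  name: z.string().min(1),
  bio: z.string().max(280).optional(),
})

export async function POST(req: Request) {
  const body = await req.json()
  const parsed = ProfileSchema.safeParse(body)

  if (!parsed.success) {
    return NextResponse.json({ error: parsed.error.flatten() }, { status: 400 })
  }

  // ...persist parsed.data with your Supabase client...
  return NextResponse.json({ ok: true })
}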
/code-cleanup — Refactoring
Cleans up code while preserving functionality.
Focuses on:
The biggest quality-of-life improvement: visibility into limits.
Right below the Claude Code prompt:
🍺 ~/nazarf-claude-code │ main ✓ │ 14% 23k[▓░░░░░░░░░]143k │ $0.04
Shows: the project directory, the git branch and its status, context usage with token counts, and the session cost so far.
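The bar itself is just string formatting. A minimal illustrative sketch (not the plugin's actual code), assuming you already have the used and maximum token counts:

// Illustrative only: format a segment like "16% 23k[▓▓░░░░░░░░]143k"
function renderContextSegment(used: number, max: number, width = 10): string {
  const ratio = Math.min(used / max, 1)
  const filled = Math.round(ratio * width)
  const bar = '▓'.repeat(filled) + '░'.repeat(width - filled)
  const k = (n: number) => `${Math.round(n / 1000)}k`
  return `${Math.round(ratio * 100)}% ${k(used)}[${bar}]${k(max)}`
}

console.log(renderContextSegment(23_000, 143_000)) // "16% 23k[▓▓░░░░░░░░]143k"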
The foundation: Ghostty terminal emulator + tmux.
Why ghostty?
Why tmux?
Running multiple AI agents simultaneously requires session management.
Here's my typical workflow:
Session 1 (3h59m active): backend-architect │ 25% rate limit
Session 2 (3d18h active): frontend-architect │ 40% rate limit
Session 3 (1h12m active): deep-research │ 15% rate limit
tmux gives me:
The tmux status bar:
1 claude 3h59m: [██░░░░] 25% 3d18h: [████░░░░] 40%
Shows:
- The tmux window (1 claude)
- Time elapsed in each rolling window (3h59m and 3d18h)
- Usage against each limit (25% used and 40% used)

Why this matters:
Claude Pro has two rolling rate limits:
Most people hit these limits by surprise.
With tmux tracking, I see limits approaching across all sessions and shift workload accordingly.
Full configs: github.com/norens/dotfiles (ghostty + tmux + Claude Code setup)
Here's my actual terminal running Claude Code:
What you see:
- Project: ~/IdeaProjects/nazarf-claude-code
- Claude Code status line (main branch, 14% context used, 24k chars)
- tmux status bar (1 claude) with rate limit tracking (2h23m: 20%, 5d9h: 18%)

This is the environment running while I build. Multiple sessions, persistent state, full visibility.
Every session, Claude Code reads CLAUDE.md automatically.
Think of it as a README, but for AI.
Mine includes:
nazarf-claude-code/
├── app/ # Next.js 15 App Router
├── components/ # React components
├── lib/
│ ├── supabase/ # Supabase client
│ └── utils/ # Helpers
└── types/ # TypeScript types
Result:
Claude knows my stack. My conventions. My preferences.
Every session.
2 months in:
Productivity impact:
I've built more in 2 months with this setup than in 6 months before.
Not because Claude Code is magic.
Because friction disappeared.
This is a Claude Code plugin. You can install it in ~2 minutes.
Repository: github.com/norens/nazarf-claude-code
# Clone the plugin repository
git clone https://github.com/norens/nazarf-claude-code
cd nazarf-claude-code
# Install as Claude Code plugin
claude-code plugin install .
# Verify installation
claude-code plugin list
That's it. All 14 commands and 11 agents are now available.
# Copy the CLAUDE.md template to your project
cp CLAUDE.template.md ~/your-project/CLAUDE.md
# Edit it with your stack, conventions, and structure
vim ~/your-project/CLAUDE.md
Claude Code will auto-load CLAUDE.md from your project root.
For the full experience (ghostty + tmux + rate limit tracking):
# Clone dotfiles
git clone https://github.com/norens/dotfiles
cd dotfiles
# Install ghostty config
cp .config/ghostty/* ~/.config/ghostty/
# Install tmux config (includes Claude Code status bar)
cp .tmux.conf ~/.tmux.conf
# Reload tmux
tmux source-file ~/.tmux.conf
Each agent is a YAML config:
# agents/backend-architect.yaml
name: backend-architect
description: "Design backend architecture for Next.js 15 + Supabase"
system_prompt: |
  You are a backend architect specializing in:
  - Next.js 15 App Router
  - Server Actions and Server Components
  - Supabase (PostgreSQL, Auth, RLS)
Out of the box: useful.
Customized: transformative.
The difference is treating it as extensible infrastructure, not a static product.
Generic AI advice is worthless.
Stack-specific, project-aware AI is a force multiplier.
Invest in onboarding your AI. It pays back 10x.
Rate limits aren't the problem.
Invisible rate limits are.
Real-time tracking = proactive workflow management.
One generalist AI < Five specialist AIs.
Each agent knows its domain deeply.
Claude Code gave me AI in the terminal.
Customization gave me a dev environment that knows me.
The setup:
The result:
The lesson:
Generic tools are starting points, not destinations.
Your competitive advantage isn't the tool.
It's what you build on top of it.
GitHub: github.com/norens/nazarf-claude-code
Stack: Next.js 15, React 19, Supabase, TypeScript
License: MIT
Questions? Built something similar? Share in the comments. 👇
Published: February 19, 2026
Author: Nazar Fedishin
2026-02-19 16:41:32
I've spent the last year paying per-token overages on Claude Code because I keep hitting my weekly limit.
That's not a complaint. That's context.
I use Claude Code the way most developers eventually do — as a headless collaborator running in the background while I do other things. The problem is "other things" often means I'm not at my computer. I'd come back 40 minutes later to find Claude had been waiting on me for 35 of them.
So I built a notification system on top of it. And in doing so, I accidentally built something that looks a lot like OpenClaw — just in the other direction.
This post is the conceptual walkthrough. I'm keeping the actual implementation to myself for now — but the primitives are public, the webhook docs are public, and anyone can build this. Here's how I think about it.
OpenClaw (the gateway that connects Claude to messaging apps like Telegram and WhatsApp) is doing something conceptually simple:
event comes in → invoke Claude → route output somewhere
That's the whole thing. The magic is in the plumbing.
Claude Code has hooks.
Claude Code's lifecycle exposes 14 hook events — SessionStart, PreToolUse, PostToolUse, Notification, Stop, SessionEnd, and more. Each one fires at a specific point and delivers a JSON payload to any shell command you configure. Which means the same primitive exists, just flipped:
Claude Code does something → hook fires → your command runs → route it somewhere
If you add a response path — your phone sends a message back, your server picks it up and spawns a new claude session — you've closed the loop. You've built a personal Claude Code gateway. One that reaches you wherever you are.
Here's the architecture:
┌─────────────────────────────────────────────────┐
│ Claude Code │
│ (running locally, headless or in terminal) │
└────────────────────┬────────────────────────────┘
│ hook events (JSON via stdin)
▼
┌─────────────────────────────────────────────────┐
│ Node.js Server (local) │
│ - receives hook payloads │
│ - checks context (am I at my desk?) │
│ - routes notifications │
│ - manages browser blocker state │
└──────────┬──────────────────────────┬────────────┘
│ push notification │ browser signal
▼ ▼
📱 Phone 🖥️ Browser extension
(only when away) (blocks tabs when idle)
Three components. Let's go through each.
Claude Code lets you configure hooks in ~/.claude/settings.json. When a hook event fires, Claude Code pipes a JSON payload to your command's stdin. The simplest thing to do with that payload is forward it to a local server.
Since each hook is configured separately per event type, I use separate routes so the server knows exactly what fired without needing to parse the event type out of the payload:
{
  "hooks": {
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -X POST http://localhost:4242/hook/notification -H 'Content-Type: application/json' -d @-"
          }
        ]
      }
    ],
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -X POST http://localhost:4242/hook/stop -H 'Content-Type: application/json' -d @-"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -X POST http://localhost:4242/hook/pretooluse -H 'Content-Type: application/json' -d @-"
          }
        ]
      }
    ]
  }
}
The -d @- tells curl to read the request body from stdin — which is exactly where Claude Code pipes the event payload. The Content-Type: application/json header lets Express parse it automatically.
Here's the server:
import express from "express";
const app = express();
app.use(express.json());
let sessionActive = false;
app.post("/hook/notification", async (req, res) => {
await handleNotification();
res.json({ ok: true });
});
app.post("/hook/stop", async (req, res) => {
sessionActive = false;
await handleStop();
res.json({ ok: true });
});
app.post("/hook/pretooluse", (req, res) => {
sessionActive = true;
res.json({ ok: true });
});
app.get("/status", (req, res) => {
res.json({ sessionActive });
});
app.listen(4242, () => console.log("Hook server running on :4242"));
The whole point is to only notify me when it matters. If I'm at my desk, Claude Code's session panel is right in front of me. I don't need a push to my phone.
On macOS, ioreg exposes the system's HID idle time in nanoseconds. One awk pipe gives you a clean number:
import { execSync } from "child_process";

function isScreenIdle(): boolean {
  try {
    // HIDIdleTime is in nanoseconds. 60 seconds = 60_000_000_000 ns
    const idleNs = parseInt(
      execSync(
        "ioreg -c IOHIDSystem | awk '/HIDIdleTime/ {print $NF; exit}'",
        { encoding: "utf8" }
      ).trim(),
      10
    );
    return idleNs > 60_000_000_000;
  } catch {
    return false;
  }
}
If the screen has been idle for over a minute, I'm not there. Route the notification.
I use ntfy.sh — a free, open-source push notification service. You subscribe to a topic on your phone, your server publishes to it. No account needed for basic use, just pick a long random topic name and treat it like a secret.
const NTFY_TOPIC = "your-long-random-private-topic";

async function notify(title: string, body: string) {
  await fetch(`https://ntfy.sh/${NTFY_TOPIC}`, {
    method: "POST",
    headers: {
      Title: title,
      Priority: "default",
      Tags: "robot",
    },
    body,
  });
}

async function handleNotification() {
  if (!isScreenIdle()) return;
  await notify("Claude Code", "Claude needs your attention");
}

async function handleStop() {
  if (!isScreenIdle()) return;
  await notify("Claude Code — Done", "Session finished. Come back.");
}
The Notification hook fires when Claude is waiting on you — needs input, hit a permission prompt, something needs attention. The Stop hook fires when the session ends. Both are the signals I care about.
Install the ntfy app on your phone, subscribe to your topic. Done.
The PreToolUse hook fires every time Claude Code is about to call a tool — which means as long as a session is active and running, the server keeps sessionActive = true. When Stop fires, it flips to false.
A lightweight browser extension polls /status every 30 seconds. When sessionActive is false, it adds a blur overlay and a redirect prompt to sites on my blocklist. When Claude Code fires up again, they unblock.
You can build the extension in about 50 lines of vanilla JS using the Chrome Extensions Manifest V3 API.
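Mine stays private, but the core of the content script is roughly this. A sketch, assuming the local server sends permissive CORS headers (or you proxy the fetch through the extension's service worker) and that http://localhost:4242/* is listed under host_permissions in manifest.json:

// content.ts: minimal sketch of the blocker content script (compiled to plain JS for the extension)
const BLOCKLIST = ["twitter.com", "youtube.com", "news.ycombinator.com"]; // hypothetical list; use your own

async function checkAndBlock(): Promise<void> {
  try {
    const res = await fetch("http://localhost:4242/status");
    const { sessionActive } = (await res.json()) as { sessionActive: boolean };
    const blocked = BLOCKLIST.some((host) => location.hostname.endsWith(host));
    // Blur blocklisted pages only when no Claude Code session is active.
    document.documentElement.style.filter =
      !sessionActive && blocked ? "blur(8px)" : "";
  } catch {
    // Local server not running: fail open, never block.
    document.documentElement.style.filter = "";
  }
}

checkAndBlock();
setInterval(checkAndBlock, 30_000); // poll every 30 seconds, as described above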
Right now this is one-directional — Claude Code talks to me. The natural next step is closing the loop:
📱 Phone → send message to server
server → spawn `claude -p` session with that prompt
Claude Code → runs → hooks fire back → notification
The spawning part is straightforward. The -p / --print flag runs Claude Code non-interactively — it takes a prompt, runs it, prints the result, and exits:
import { spawn } from "child_process";

function runClaudeSession(prompt: string) {
  const proc = spawn("claude", ["-p", prompt], {
    stdio: "inherit",
    detached: true,
  });
  proc.unref();
}
Add an inbound route to your server (an ntfy webhook, a Telegram bot, whatever messaging surface you prefer), wire it to runClaudeSession, and now you can kick off Claude Code tasks from your phone while you're away. Come back to a finished PR. Or a first draft. Or a bug fixed.
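Wiring it up is a few more lines on the same Express server. A sketch with a hypothetical /inbound route (keep it bound to localhost or add auth before exposing it):

// Hypothetical inbound route on the same hook server.
// Your messaging surface (ntfy webhook, Telegram bot, ...) just needs to POST { prompt } here.
app.post("/inbound", (req, res) => {
  const prompt = (req.body?.prompt ?? "").trim();
  if (!prompt) {
    return res.status(400).json({ ok: false, error: "empty prompt" });
  }
  runClaudeSession(prompt); // spawns `claude -p <prompt>`, as shown above
  res.json({ ok: true });
});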
That's OpenClaw. Trigger + reaction. Built from the Claude Code side. I'm not publishing mine — but there's nothing stopping you from building yours.
Most people think of Claude Code as a terminal tool. That's limiting.
The hook system makes it an agent with an event bus. The primitives for building a personal AI gateway are already there — 14 lifecycle events, JSON payloads on stdin, any shell command you want to run. OpenClaw proved the pattern works in one direction. Claude Code's hooks let you do it in the other.
The community is already building these bridges independently. The ecosystem is moving fast.
The primitives are already there. The hook docs are public. The rest is plumbing — and that part I'll leave to you.
This started as a LinkedIn post — discussion is happening there if you want to pick it apart.
2026-02-19 16:31:37
When I started building software, I mostly built tools for developers.
But something kept bothering me.
In many small businesses — especially in emerging markets — selling online still means:
It works… until it doesn’t.
As orders grow, chaos grows.
So I started building InnoStore.
A simple platform that allows vendors to:
All without technical knowledge.
I didn’t want to build “another Shopify clone”.
I wanted to build something:
This is still early.
But I’m building it in public.
If you're a vendor tired of “DM to order”,
or a developer interested in SaaS building,
you can check it out here:
I’d love feedback.
2026-02-19 16:30:00
It is February 19th. Today, we are stepping away from UI fluff and diving into structural architecture. Your mission is to build a nested file explorer.
Create a directory tree (folders inside folders) that can be toggled open and closed. It needs to look and feel like a real code editor's sidebar.
Use hidden checkboxes (<input type="checkbox"> + <label>) to manage the open/closed state of every folder. Clicking a folder name should expand it to reveal the files inside. Clicking it again should collapse it. The files and folders should have proper indentation based on their depth.
Pro Tip: Use the adjacent sibling selector `input:checked + label + ul` to show the nested list only when the hidden checkbox is active!
Drop your CodePen or GitHub Repo in the comments!
Have fun!
2026-02-19 16:27:12
We’ve all seen that one button. The simple, straightforward one that sits on almost every login and sign-up page today.
“Continue with Google”
It just works. It’s fast, and it’s convenient.
But am I the only one who wonders “what happens if Google randomly disappears?”
Yeah, I understand that there are layers of safeguards behind the scenes that prevent that whole scenario from happening. But still, I couldn’t just let this thought slide.
In essence, this article is basically about understanding how SSO (Single Sign-On) works. You know it as the “Continue with Google” button; engineers call it SSO.
We’re going to look at everything from the moment you click that button to the terrifying realization that you don’t own your identity.
The chances of Google actually disappearing are quite low.
That’s a black swan event: unpredictable, extremely rare, and with severe consequences. Think of events like the 9/11 attacks or the 2008 financial crisis.
Those consequences, in the scenario where Google disappears, are exactly what we’ll be looking at in this article.
We have traded sovereignty for convenience, creating the biggest “Single Point of Failure” in internet history.
Whenever I’m asked this question, I like to compare the login system with the “ticket at a club” scenario.
Let’s assume you want to sign into your Spotify account.
Instead of you having to type in your email address, and then enter your password, you click the “Continue with Google” button.
We can say Spotify is the “Club”, and Google can be synonymous to the “Club Bouncer”.
On clicking that button, you are basically asking Google to give you a “wristband”.
See, Spotify doesn’t actually want to know who you are. They don’t want to see your ID, and they definitely don’t want to be responsible for holding “your wallet” (your password).
They just want to know if you’re cool to enter. {Funny how this isn’t just Spotify, but a lot of other tech companies.}
So, the handshake, behind the scene, looks like this:
You tell Spotify, “Let me in.”
Spotify says, “Go ask the Bouncer (Google).”
You go to Google, show your ID (log in), and prove you’re you.
Google hands you a stamped wristband (technically called an Access Token).
You walk back to Spotify, show the wristband, and they wave you through.
It’s seamless. It’s brilliant. It’s OAuth 2.0 and OpenID Connect in action.
But this is the part that everyone seems to miss: Spotify never actually met you. They only met the wristband. And that wristband has an expiration date. Every hour or so, Spotify has to run back to the Bouncer and ask, “Hey, is this guy still cool?”
Now, the engineers reading this will say, “Actually, Spotify doesn’t check with Google for every single click; they verify the cryptographic signature locally.” {more technical words like that 😂}
And they’re right. For a short time, you are free.
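For the curious, “verifying locally” means checking the ID token’s signature against Google’s published keys. A minimal sketch using the jose library (purely illustrative, not how Spotify actually does it):

import { createRemoteJWKSet, jwtVerify } from "jose";

// Google publishes its signing keys at a well-known JWKS endpoint.
const googleKeys = createRemoteJWKSet(
  new URL("https://www.googleapis.com/oauth2/v3/certs")
);

async function verifyGoogleIdToken(idToken: string, clientId: string) {
  // Verifies the signature against cached keys, plus issuer, audience, and expiry.
  const { payload } = await jwtVerify(idToken, googleKeys, {
    issuer: "https://accounts.google.com",
    audience: clientId,
  });
  return payload; // { sub, email, exp, ... }
}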
But that wristband (the Access Token) has a short lifespan (usually about an hour). When it expires, Spotify has to quietly go back to the Bouncer and ask for a new one using a Refresh Token.
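That quiet trip back to the Bouncer is just an HTTPS call to Google’s standard token endpoint. A sketch (illustrative, with placeholder credentials):

async function refreshAccessToken(refreshToken: string) {
  const res = await fetch("https://oauth2.googleapis.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: process.env.GOOGLE_CLIENT_ID!, // placeholder credentials
      client_secret: process.env.GOOGLE_CLIENT_SECRET!,
      refresh_token: refreshToken,
      grant_type: "refresh_token",
    }),
  });
  if (!res.ok) {
    // No Bouncer, no new wristband.
    throw new Error(`Refresh failed: ${res.status}`);
  }
  return res.json(); // { access_token, expires_in, ... }
}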
This is the choke point.
Now, back to our scenario: the Bouncer, Google, suddenly disappears, or simply decides they no longer like your face and bans your account… The answer to Spotify’s question about you “still being cool” becomes a big, hard “NO!”
The refresh request fails, and the bouncer refuses to renew your pass.
And just like that, the club, Spotify (or any app, but you get my point 🥲) kicks you out… not fully knowing what you did, but just because the bouncer, Google, said they no longer like you.
The idea is quite crazy if you think about it, but it’s just the truth {but again, wow 😂}
The problem now is… once Google kicks you out, you don’t have a backup ID. You don’t have a password. And, effectively, you become a stranger to your own library.
This isn’t just a Spotify problem.
This logic applies to every corner of your digital life. Your Zoom meetings, your Notion workspaces, your Figma designs… thousands of services we rely on every day are built on this exact handshake.
We like to think of the internet as a vast, open sea of independent islands. But the reality is much more claustrophobic. For most of us, the web has become a series of locked rooms, and we’ve let Google hold the only master key.
The point is clear: You don’t actually have a relationship with these apps. You have a relationship with the Bouncer, and the apps are just following his lead.
Out of 8 billion people on Earth, I’m definitely not the only one staring at that button with a bit of side-eye.
I remember a weekly stand-up at a startup I was working with.
The lead developer and the designer were debating the onboarding flow: “Should we even build a custom login form, or just have the ‘Continue with Google’ button?”
If it were up to me, I’d choose the Google button every time.
Why? Because it’s functional. It’s frictionless. And frankly, the alternative has historically been terrible.
If we didn’t have Google SSO, the average user would likely use the password Password123! for their bank, their email, and their Spotify. The moment one obscure site got hacked, their entire life would be wide open.
In that sense, Google is a Great Defender. They employ the best security engineers on earth. They stop phishing attacks and handle 2FA with a level of sophistication a small startup could never dream of. By centralizing identity, we actually made ourselves safer from hackers.
It feels like we’ve won. We outsourced our biggest headache to the experts. But ah yes, there’s a catch. In solving our Security problem, we inadvertently created a Sovereignty problem.
We made the walls of the fortress much higher, but we gave the only key to the landlord. We traded the risk of being robbed for the risk of being evicted.
This isn’t just about losing access to a playlist. For the modern economy, this is an existential threat.
Consider a YouTuber or a freelance developer. Their Google Account isn’t just an email; it is their:
Archive (Google Photos/Drive)
Rolodex (Contacts)
Bank (AdSense/Google Pay)
Passport (SSO for generic sites)
If an automated bot flags a file in their Google Drive as “suspicious”, even falsely, the entire account can be suspended.
In the physical world, if a bank freezes your account, they don’t also come to your house, lock your front door, and confiscate your passport.
But in the digital world, that is exactly what a Google ban does. It is a civil death.
Now that you understand the scenario, let’s talk about the structural philosophy we’ve all blindly accepted: Centralization.
The internet was originally designed to be decentralized. Think of it as a messy web where every node was equal. But over the last decade, we’ve slowly reorganized it into a “Hub and Spoke” model.
Here’s a quick visualisation of the two models:
[Image: the internet as it was originally designed, decentralized]
[Image: the hub-and-spoke model, centralized]
As you think about it, it makes sense on paper. Google has thousands of security engineers; a small startup has maybe two or three. Trusting Google would be the safer option, right?
Until the bus crashes.
In systems engineering, we call this a Single Point of Failure (SPOF). By routing the identity of the entire internet through three or four main providers (Google, Apple, Meta), we have created a “Bus Factor” of 1.
Writer’s Note: For non-technical readers, in tech terms, a ‘Bus Factor’ is how many people need to get hit by a bus before your project dies. For the internet, that number is currently terrifyingly close to one.
We saw a glimpse of this in December 2020. A boring internal storage tool at Google ran out of quota. For 47 minutes, the Bouncer went on a coffee break. The result? It wasn’t just that people couldn’t check Gmail.
Students couldn’t log into Zoom for finals.
Designers were locked out of Figma.
People with Nest thermostats literally couldn’t change the temperature in their own homes.
In a blink, entire workflows and daily routines were frozen… all because one system hiccuped.
We have built a digital world where we are tenants, not owners. We are building our houses on rented land, and the landlord holds the only set of keys.
If you think this is paranoia, just look at the last 18 months. We’ve had clearer warnings than ever that “Too Big to Fail” is a myth.
Remember the day the airports stopped? A single bad software update from a security vendor (CrowdStrike) took down 8.5 million Windows devices worldwide (July 2024).
The Reality Check: It wasn’t a hacker. It was a typo in a code update.
The Connection: While this wasn’t strictly an SSO failure, it proved the “Single Point of Failure” theory. Hospitals, banks, and airlines were paralyzed because they all relied on one vendor. If that vendor makes a mistake, the world stops.
Less than two weeks later, Microsoft Azure faced a massive outage caused by a DDoS attack that their own defense systems mishandled.
The Impact: It lasted nearly 8 hours.
The Connection: Companies that built their entire login infrastructure on Microsoft’s cloud were left in the dark. You couldn’t just “switch providers” because your identity data was locked inside the burning building.
In more recent news, we ended 2025 with a series of wobbles from Cloudflare and AWS. These were shorter, but scarier.
The Impact: Random “500 Errors” across the web.
The Connection: These showed us that even if Google is fine, the pipes connecting you to Google are fragile. If the road to the “Bouncer” is broken, you still can’t get into the club.
These events weren’t anomalies; they were stress tests. And the system failed.
So, we have a broken system. The obvious question is: “How do we fix it without making life harder?”
For a long time, the answer was “you can’t.” You either trusted the Bouncer, or you stayed home.
But recently, a new architectural concept has moved from theoretical whitepapers to actual code. It’s a solution that sounds radical only because we’ve become so used to digital servitude.
The concept is Self-Sovereign Identity (SSI).
What if your login didn’t belong to a corporation, but to you?
I know, I know. Usually, when people start talking about “decentralization” or “Web3,” your brain jumps to expensive JPEGs, crypto scams, or finance bros. But if you strip away the hype and the noise, the core engineering philosophy is the only logical answer to the “Single Point of Failure” problem. It stops being a buzzword and starts being a lifeboat.
It proposes a shift from a Federated Model to a Sovereign Model.
In the Google model: Google owns the keys. You ask permission to use them.
In the Sovereign model: You own the keys (stored in a decentralized wallet or vault). You grant the app permission to see who you are.
Think of it like the difference between a hotel and a house.
In a hotel, the front desk can deactivate your key card at any moment. In a house, you own the deed and the deadbolt. Even if the construction company goes out of business, your key still turns the lock.
Now, I’m not saying you should delete your Google account and move your life to a blockchain wallet today {well, that would be dumb 😂}
I’m a realist.
The user experience of the “Sovereign Web” is still clunky. It’s intimidating. In the Google world, if you’re forgetful, you click “Forgot Password.”
In the Self-Sovereign world, if you lose your private key, you lose your digital identity… FOREVER.
And in cryptography, “forever” actually means forever.
But it’s the truth. We’ve heard tales of people who have lost access to their crypto tokens just because they lost 12 words. {And yes, I’m underplaying the value of those 12 words}
There is no customer support to call when you are the one in charge. That is a different kind of fear, and it’s exactly why mass adoption hasn’t happened yet.
Sometimes, I wonder: “Do I really want to manage 12 words for every single app I use?” Probably not.
But despite the clunkiness, the architecture laid out here is the correct answer to the centralization problem. We are currently in the “dial-up” phase of identity… it’s slow and noisy, but it’s the only way out of the trap.
So, where does that leave us?
We aren’t going to stop using Google SSO tomorrow. It’s too fast, too convenient, and frankly, too embedded in our lives.
But we need to stop being blind to the trade-off we are making. We can’t keep building our entire digital lives on rented land and acting surprised when the landlord changes the rules.
Convenience is a hell of a drug, but it shouldn’t be a suicide pact. We need to start treating our digital identity like our physical one: something we actually hold, rather than something we borrow.
So, here’s a survival guide for us all, the “prudent tenant”:
Audit Your “Must-Haves”: Go to your most critical apps: your bank, your primary work tools, your password manager. Check if you can add a direct email/password login alongside your Google button. Most apps allow this; we just never bother to do it.
The Data Lifeboat: Regularly use tools like Google Takeout to export your data. If you are evicted from the “Google house,” you want to at least make sure you’ve packed your bags. I discovered it while researching this piece. It generates a full export of your account data in portable formats.
Google Takeout | https://takeout.google.com/
Diversify Your Identity: Don’t put all your eggs in one basket. Use “Sign in with Apple” for some things, or a dedicated email for others. It’s a bit more work, but it raises your “Bus Factor.”
Don’t forget. Bus Factor is the number of people that need to get hit before your project dies 🥲
Keep an Eye on the Exit: The “Sovereign Web” is coming. It’s clunky today, but so was the internet in 1995. Keep an eye on decentralized identity tools as they mature.
In all, if the Black Swan ever does arrive… whether it’s a policy ban, a cable cut, or a corporate collapse, you don’t want to be the one standing outside the club, arguing about a wristband that no longer exists.
It’s time to stop just clicking buttons and start holding your own keys.
If this resonated, share it with someone who clicks “Continue with Google” every day.
2026-02-19 16:23:55
When 41% of AI investments target tasks workers actively resist, you're not building competitive advantage—you're funding organizational friction.
Stanford's landmark 2025 study of 1,500 workers exposes a critical gap: enterprises are automating the wrong work. The real opportunity isn't replacement; it's workflow automation design that aligns technical capability with human intent. For EU SMEs navigating AI readiness assessment, this research reframes the entire strategy.
Stanford's research reveals workers don't want AI takeovers — they want AI teammates. The study found 45.2% of workers prefer H3-level "Equal Partnership" with AI, where humans and machines share responsibility for task completion.
The study used audio-enhanced interviews to capture nuanced worker desires, moving beyond simple "automate or not" questions. Researchers introduced the Human Agency Scale (HAS), ranging from H1 (no human involvement) to H5 (human essential), providing a shared language for discussing AI integration.
Key findings challenge automation assumptions:
The Human Agency Scale represents a fundamental shift from "AI-first" to "human-centered" decision making. Instead of asking what can be automated, it asks what should be augmented and why.
The five levels provide clarity:
H3 emerged as the dominant preference in 47 out of 104 occupations analyzed, making it the most common worker-desired level overall. This preference for collaboration over replacement challenges the industry's focus on maximum automation.
For organizations conducting AI governance & risk advisory or business process optimization, this scale becomes the diagnostic framework. It translates worker sentiment into implementable architecture.
Workers aren't resisting progress — they're defining it. When workers express automation desire, it's strategic, not surrendering control.
Among workers rating automation desire at 3 or higher (5-point scale), motivations were clear:
Trust remains the primary barrier. Research shows 45% express doubts about AI accuracy and reliability, while 23% fear job loss and 16% worry about a lack of human oversight. Workers especially resist AI in creative tasks or client communication.
This insight is critical for AI tool integration strategies. Resistance isn't obstruction—it's data. It signals where AI compliance and transparency matter most.
Stanford's zone framework maps worker desire against AI capability, creating strategic guidance for implementation:
Green Light Zone (High desire + High capability): Tasks like routine data entry, scheduling, and file maintenance, where workers welcome automation and AI delivers results.
Red Light Zone (Low desire + High capability): Areas where AI is technically capable but workers resist. Automating here risks resistance and reduced morale.
R&D Opportunity Zone (High desire + Low capability): Worker-desired areas where AI isn't ready yet. These represent valuable innovation frontiers.
Low Priority Zone (Low desire + Low capability): Neither workers nor technology are ready. Best to deprioritize.
The shocking discovery: 41% of current AI investments target Red Light or Low Priority zones, revealing widespread misalignment between development and worker needs.
This is where digital transformation strategy diverges from hype. Enterprises investing in Red Light zones are essentially funding change resistance. The winning move: redirect capital to Green Light and R&D Opportunity zones, where adoption friction dissolves naturally.
A wage reversal is underway. Traditional high-value information analysis roles are losing premium, while interpersonal skills gain value.
Recent research analyzing 12 million job vacancies (2018–2023) shows AI-focused roles are nearly twice as likely to require skills like resilience, agility, and analytical thinking compared to non-AI roles. Data scientists earn 5–10% higher salaries when they possess resilience or ethics capability.
Skills commanding premiums include:
For AI training for teams and operational AI implementation, this signals a shift: technical depth alone doesn't command premium anymore. The bottleneck is judgment, trust-building, and change leadership.
Written by Dr Hernani Costa | Powered by Core Ventures
Originally published at First AI Movers.
Technology is easy. Mapping it to P&L is hard. At First AI Movers, we don't just write code; we build the 'Executive Nervous System' for EU SMEs.
Is your AI roadmap creating technical debt or business equity?
👉 Get your AI Readiness Score (Free Company Assessment)
Discover where your organization sits on the Human Agency Scale—and which adoption zones hold your highest-ROI opportunities.