
The Practical Developer

A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

Why Your Signup Form Is Less Secure Than You Think (And How to Fix It)

2026-04-08 12:22:00

You've seen those password rules. "Must be more than 8 characters. Must include a symbol. Must contain a number." They have good intentions, but have one fatal flaw. Us.

You would hope everyone uses something more than "P@ssword1" (or any of its many variants), but unfortunately, you'd be wrong.

Photo of the Bastion Demo that shows the password: "P@ssword1" being included in 442,781 known breaches.

So what is actually happening, and what can we do about it?

These "traditional" password rules weren't wrong to exist, but they focus on the wrong thing. Primarily, they focus on what makes a password look complex, rather than what a machine considers "complex".

That'd be fine, except most attacks don't work that way. Passwords like these are cracked by machines guessing at scale, not by someone reading them over your shoulder.

NIST (the National Institute of Standards and Technology) actually updated their guidelines recently, recommending longer minimum lengths and preventing the use of common, expected and compromised passwords.

Most people, understandably (my younger self included), took shortcuts, prioritising short, simple passwords (usually for memorisation) over complexity.

Now databases are littered with these "valid", but trivial passwords that can take machines minutes, if that, to crack (obviously this is still better than NO rules... "123456" has been found an embarrassingly large number of times in breaches).

Photo of the Bastion Demo that shows the password: "123456" being included in 209,972,844 known breaches.

What we actually want to measure is the complexity of guessing (or brute forcing) said password, which is a very different problem.
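To see why "looks complex" and "is hard to guess" diverge, here is a minimal sketch of the naive model that traditional rules implicitly assume, where strength is just length times bits per character. The charset sizes are the usual printable-ASCII assumptions:

```python
import math

def naive_entropy_bits(password):
    # Upper bound: pretends every character is drawn uniformly at random
    # from the union of charsets the password happens to touch.
    charset = 0
    if any(c.islower() for c in password):
        charset += 26
    if any(c.isupper() for c in password):
        charset += 26
    if any(c.isdigit() for c in password):
        charset += 10
    if any(not c.isalnum() for c in password):
        charset += 33  # printable ASCII symbols
    return len(password) * math.log2(charset)

# "P@ssword1" ticks every complexity box, so it looks strong on paper...
print(round(naive_entropy_bits("P@ssword1"), 1))  # ~59.1 bits
```

A guess-based estimator reports only ~13.4 bits for the same password, because it matches the dictionary word plus its predictable substitutions instead of counting characters. That gap is the whole argument.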

zxcvbn (I only just realised now, while writing this, that it's the bottom row of the QWERTY keyboard... never clicked when I was using it for the API ;-;) was built by Dropbox and takes a very different approach to password strength. Rather than checking for these rules, it uses more complex pattern matching to estimate the number of guesses it would take to crack a password.

zxcvbn checks against things like:

  • Most common passwords, names, and dates
  • Keyboard patterns (qwerty, 123456, and even zxcvbn)
  • Common substitutions (@ for a or 3 for e)
  • Repeating/reversed characters or strings (aaabbbccc)

The result is a score, crack times against multiple scenarios (throttled online, offline fast hash, etc.), as well as warnings and suggestions for weaker passwords.

Bad password example:

Photo of the Bastion Demo that shows the password: "P@ssword1" being included in 442,781 known breaches. It also shows the estimated crack time ranging from 4 days to less than a second. Below that is a warning with a list of suggestions.

Good password example:

Photo of the Bastion Demo that shows the password: "correct-horse-battery-staple" being included in no known breaches. It also shows the estimated crack time ranging from centuries to 57 years.

In Practice:

Here's what it looks like to actually check a password against an API that uses zxcvbn:

Request:

POST https://bastion.eande171.workers.dev/v1/evaluate 
BODY { "password": "P@ssword1" }

Response:

{
    "score": 1,
    "strength": "Weak",
    "entropy_bits": 13.425215903299385,
    "crack_times": {
        "online_throttled": "4 days",
        "online_unthrottled": "18 minutes",
        "offline_slow_hash": "1 second",
        "offline_fast_hash": "less than a second"
    },
    "warning": "This is similar to a commonly used password.",
    "suggestions": [
        "Add another word or two. Uncommon words are better.",
        "Capitalization doesn't help very much.",
        "Predictable substitutions like '@' instead of 'a' don't help very much."
    ]
}
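If you want to script this check, the same request is a few lines of Python stdlib. The endpoint and body shape are copied from the example above; the function names are mine:

```python
import json
import urllib.request

BASTION_URL = "https://bastion.eande171.workers.dev/v1/evaluate"

def build_request(password):
    # Encode the body exactly as the API expects: {"password": "..."}
    body = json.dumps({"password": password}).encode("utf-8")
    return urllib.request.Request(
        BASTION_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def evaluate(password):
    # Network call: returns the parsed report (score, crack_times, ...).
    with urllib.request.urlopen(build_request(password)) as resp:
        return json.load(resp)
```

Then `evaluate("P@ssword1")["score"]` gives you the 0-4 score to gate your signup form on.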

This response gives you a lot of information to work with. The score and strength fields give you a numerical and human-readable version of the password strength.

An attacker with throttled attempts would find this password in 4 days (likely because the form would start rejecting too many attempts). This drops to 18 minutes without any throttling, and if they got access to the database, they could calculate it in no time at all.

The warning and suggestions fields appear for weak passwords, giving users clear, actionable feedback on how to improve them.

This can be compared to a password (or passphrase), which is just a little bit longer:

Request:

POST https://bastion.eande171.workers.dev/v1/evaluate 
BODY { "password": "correct-horse-battery-staple" }

Response:

{
    "score": 4,
    "strength": "Very Strong",
    "entropy_bits": 64,
    "crack_times": {
        "online_throttled": "centuries",
        "online_unthrottled": "centuries",
        "offline_slow_hash": "centuries",
        "offline_fast_hash": "57 years"
    },
    "warning": null,
    "suggestions": null
}

This fills me with much more confidence, knowing that even if someone did get access to a database, it would take them 57 years (or possibly centuries) to crack it (unless it's stored in plain text... but that's a separate issue, though just as bad).

Password rules certainly aren't useless, but genuine protection requires measuring complexity, not enforcing arbitrary rules. zxcvbn (still can't believe I didn't get that) gives you a way to do that without reinventing the wheel.

Unfortunately, this is only one half of the problem... what happens if a password is strong but is already in a breach? It's more common than you think.

If any of you made it to this bit... THANK YOU!!! If you want to poke around the API I made for this exact purpose, you can find it here. It would really mean a lot if you could check it out (even just the free/demo versions).

Why I’m Engineering My FIRE with Python — A Manifesto

2026-04-08 12:16:10

Why I’m Engineering My FIRE with Python

I started coding in December 2025. Three months later, I’d built over 100 applications. One of them — a patent search engine over 3.5 million US patents in a 74GB SQLite database — got 400+ upvotes on Reddit.

Now I’m applying the same engineering mindset to something more personal: designing my financial independence with code.

This isn’t about stock picks. This is about building systems.

The Question Nobody Asks

Every corporation has a CFO. Every bank has an ALM (Asset-Liability Management) desk. They stress-test their balance sheets quarterly. They model worst-case scenarios. They maintain credit facilities they may never use.

But individuals? We track our assets — maybe in a spreadsheet if we’re diligent — and call it financial planning.

That’s half the picture. It’s like monitoring CPU usage but ignoring memory leaks.

Where is the liability side? Rent is a liability. Education costs are liabilities. Your monthly living expenses are liabilities. They’re just not written on a balance sheet, so you pretend they don’t exist.

The moment you start modeling both sides — assets AND liabilities — something shifts. You stop asking “how much do I have?” and start asking “how much can the world fall apart before my life breaks?”

That’s the question this series engineers an answer to.

Debt Is a Tool, Not an Enemy

Here’s something that puzzles me: everyone accepts that businesses borrow to grow. Startup founders take on debt. Real estate investors leverage mortgages. Nobody blinks.

But suggest that an individual with ¥50M+ in assets should strategically use a securities-backed loan, and people look at you like you’ve lost your mind.

Consider the asymmetry:

  • Assets compound. Dividends reinvested, book value growing, share prices reflecting that growth over time.
  • Debt is linear. You pay interest — a fixed, predictable cost — and the principal doesn’t grow.

The spread between your portfolio’s yield and your borrowing cost is what buys you time. And time is the one resource you can’t manufacture.

When a company borrows at 2% to invest in projects returning 8%, we call it smart capital allocation. When an individual does the same thing with a securities-backed loan and a high-dividend portfolio, we call it reckless.

I think that’s backwards.

The Architecture of Personal ALM

Banks manage risk by monitoring the relationship between their assets and liabilities across different time horizons and stress scenarios. The discipline is called ALM — Asset-Liability Management.

I’ve been applying this framework to my own finances, and the mental model changes everything.

The Balance Sheet View

ASSETS                          LIABILITIES
─────────────────────           ─────────────────────
Equity portfolio    ¥125M       Securities-backed loan  ¥50M
Cash reserves       ¥10M        Consumer credit line    ¥8M (standby)
Real estate         (paid off)  Monthly burn rate       ¥80K/mo
                                Hidden: taxes, insurance,
                                aging, inflation
─────────────────────           ─────────────────────

Suddenly, questions that felt vague become precise:

  • What’s my margin ratio — the ratio of debt to collateral value?
  • At what drawdown level does my lender freeze new borrowing?
  • At what level do they force-liquidate?
  • How much cash do I need to repay my way out of each danger zone?

These aren’t philosophical questions. They’re arithmetic. And arithmetic can be automated.

The Orthogonal Defense Principle

Here’s the key insight I arrived at through simulation:

Borrowing more from the same collateral pool makes you weaker. Borrowing from an orthogonal source makes you stronger.

A securities-backed loan ties your borrowing capacity to your portfolio value. When markets crash — exactly when you might need liquidity — your borrowing capacity shrinks. It’s a procyclical trap.

The solution: maintain a separate, unsecured credit facility. A personal commitment line. One that doesn’t care about your stock prices. You pay nothing when you don’t use it. But when you need it, it’s there.

Corporations call this a revolving credit facility. For individuals, a consumer credit line with a pre-approved limit serves the same function.

from dataclasses import dataclass

@dataclass
class Loan:
    balance: int               # yen
    collateral: object = None  # None means unsecured

portfolio = object()           # stand-in for a real portfolio object

# Correlated defense — breaks when you need it most:
# when the portfolio drops, your capacity drops too.
margin_loan = Loan(balance=50_000_000, collateral=portfolio)

# Orthogonal defense — available regardless of what markets do.
credit_line = Loan(balance=8_000_000, collateral=None)

Two loans. Same total capacity. Radically different survival profiles.

The 90/10 Portfolio Philosophy

My portfolio construction follows a simple principle:

90% dividend core + 10% growth satellite.

The core: companies with DOE (Dividend on Equity) policies or progressive dividend commitments. DOE-based dividends grow with book value — they’re programmatic, not discretionary. When a company commits to DOE of 6%, your dividend grows automatically as their equity grows. No board meeting required.
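A quick worked example of that mechanic, with hypothetical numbers (6% DOE, book value compounding 4% a year from retained earnings):

```python
doe = 0.06          # dividend on equity: payout is 6% of book value
book_value = 1_000  # yen per share, hypothetical starting point
dividends = []
for year in range(3):
    dividends.append(round(doe * book_value, 1))
    book_value *= 1.04  # retained earnings compound book value

print(dividends)  # [60.0, 62.4, 64.9]: the dividend grows on autopilot
```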

These stocks also tend to be undervalued. The market systematically underprices boring, predictable cash flows. That’s fine by me — I’ll take the spread.

The satellite: one or two positions with 3-5x potential over 2-3 years. This is where capital gains come from. Not speculation — deep value situations where the market price diverges significantly from intrinsic value.

The result: the core generates yield that exceeds borrowing costs (the spread that buys time), while the satellite provides optionality for step-function wealth growth.

FI Is Not a Number — It’s a Probability

Most FIRE content fixates on a target number. “You need ¥200M.” “You need 25x your annual expenses.”

That’s not how engineering works. In engineering, we think in terms of confidence intervals and failure modes.

The real question is: given my current trajectory — dividend growth, reinvestment rate, income volatility, market risk — what is the probability that my passive income sustains my lifestyle for the next 40 years?

If that probability is 94%, you’re FI. Not because you hit a magic number, but because the system is robust.
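A toy version of that probability calculation, as a sketch: yearly returns drawn from a normal distribution, fixed spending, count the share of simulated futures where the money never runs out. Every parameter here is an illustrative assumption, not advice:

```python
import random

def fi_probability(start, annual_spend, n_years=40, n_trials=10_000,
                   mean_return=0.05, vol=0.15, seed=42):
    # Monte Carlo survival rate of a portfolio under fixed withdrawals.
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        wealth = start
        for _ in range(n_years):
            wealth = wealth * (1 + rng.gauss(mean_return, vol)) - annual_spend
            if wealth <= 0:
                break       # ruin: this future failed
        else:
            survived += 1   # made it through all n_years
    return survived / n_trials

print(fi_probability(125_000_000, 4_000_000))
```

A real engine would model dividend growth, taxes, and income volatility separately; the point is that "am I FI?" becomes a number you can recompute every morning.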

And here’s what surprised me when I ran the simulations on my own situation: I was already there. The cage door was open. I just hadn’t looked.

The Cage You Build Yourself

This is the part that no Python script can solve.

Many people with sufficient assets keep running the same race. “I need to earn more.” “I need to save more.” “What if something goes wrong?” The hedonic treadmill and the anxiety treadmill are the same machine.

You track your net worth obsessively but never ask: at what point is it enough?

Financial independence isn’t about having ¥500M. People with ¥500M still feel the anxiety. It’s about visibility — seeing, with quantitative clarity, that your system works. That it survives stress tests. That the downside is bounded.

When you can see the math, the cage dissolves.

That’s what I mean by 生活金融工学 — “Life Financial Engineering.” Not the engineering of returns, but the engineering of a life you don’t need to escape from.

What We’re Building

Over the next 6 weeks, we’ll build a complete personal financial defense system in Python. Each article ships working code. No complex stacks — just SQLite, pandas, and Streamlit.

#    Title                                  What You'll Build
01   Building a Personal ALM System         SQLite schema for assets + liabilities
02   Stress Testing Your Life               Drawdown simulator with margin ratio tracking
03   Designing a Personal Commitment Line   Multi-loan optimizer for layered defense
04   Dividend Snowball Simulator            DOE-based dividend growth projector
05   When to Pull the Trigger on FIRE       Monte Carlo FIRE probability engine
06   Portfolio Defense Dashboard            Streamlit dashboard — your morning check

The philosophy throughout: 枯れた技術の水平思考 — lateral thinking with mature technology. No vendor lock-in. No dependencies you can’t understand. Code you can run on a single machine forever.

Who This Is For

This series is for developers who have accumulated some assets — maybe ¥30M, maybe ¥100M — and want to apply engineering rigor to their financial lives.

It’s not for people who want stock tips. It’s not for people looking for get-rich-quick schemes.

It’s for people who understand that cron + SQLite + Python can solve problems that expensive financial advisors charge 1% annually to not solve.

And it’s for people who suspect — but haven’t yet proven to themselves — that the cage door might already be open.

Next week: [01] Building a Personal ALM System — your life as a database schema.

About me: Former construction engineer and business/patent lawyer. Started coding December 2025. Built PatentLLM (3.5M US patents, 74GB SQLite FTS5), SoyLM (local NotebookLM alternative), and 100+ other applications. Now designing my FIRE with the same tools.

Every company has a CFO. This series is about becoming your own.

Running Agentic AI at Scale on Google Kubernetes Engine

2026-04-08 12:15:15

The AI industry crossed an inflection point. We stopped asking "can the model answer my question?" and started asking "can the system complete my goal?" That shift from inference to agency changes everything about how we build, deploy, and scale AI in the cloud.

Google Kubernetes Engine (GKE) has quietly become the platform of choice for teams running production AI workloads. Its elastic compute, GPU node pools, and rich ecosystem of observability tools make it uniquely suited not just for model serving but for the orchestration challenges that agentic AI introduces.

This blog walks through the full landscape: what kinds of AI systems exist today, how agentic architectures differ, and what it actually looks like to run them reliably on GKE.

The AI Taxonomy: From Reactive to Autonomous

Before diving into infrastructure, it's worth establishing what we mean by the different modes of AI deployment. Not all AI is "agentic," and the architecture you choose should match the behavior you need.

Reactive / Inference

Stateless prompt-response. One request, one LLM call, one answer. The model has no memory between turns. Examples: text classifiers, summarizers, one-shot code generators.

Conversational AI

Multi-turn dialog with session state. The model remembers context within a conversation window. Examples: customer support bots, document Q&A, coding assistants.

Retrieval-Augmented (RAG)

The model can query external knowledge at runtime before generating a response. Introduces a retrieval step: vector DBs, semantic search, tool calls to databases.

Agentic AI

The model plans, takes actions, observes results, and loops until a goal is reached. It can call tools, spawn subagents, and make decisions across many steps autonomously.

Multi-Agent Systems

A network of specialized agents collaborating: an orchestrator decomposes a task and delegates to researcher, writer, and executor agents that work in parallel or in sequence.

Each mode up the stack introduces new infrastructure requirements: more state to manage, longer-lived processes, more concurrent workloads, harder failure modes, and deeper observability needs.

Why GKE for AI Workloads?

Kubernetes is table stakes for any modern distributed system. But GKE specifically brings several features that make it exceptional for AI:

GKE Capabilities for AI

GPU and TPU Node Pools

To handle the heavy lifting of agentic AI, GKE offers specialized accelerator node pools. This infrastructure lets you dynamically attach high-end compute such as NVIDIA A100, H100, or L4 GPUs and Google TPUs, exactly when your agents need them.

Workload Identity & Secret Management

Agentic systems touch many external APIs (databases, external services, third-party tools). Workload Identity Federation lets pods authenticate to Google Cloud services without storing long-lived credentials.

Horizontal Pod Autoscaling with Custom Metrics

Scale agent runner replicas based on queue depth (Pub/Sub backlog, Redis list length) rather than CPU. This allows demand-driven scaling that matches agent workload patterns precisely.

GKE Autopilot & Standard Modes

Autopilot mode handles node management entirely, ideal for teams wanting to focus on agent logic. Standard mode gives full control when you need custom kernel modules or specialized hardware affinity rules.

Cloud Run on GKE for Burst Workloads

Short-lived tool execution steps in an agent pipeline can be offloaded to Cloud Run, which scales to zero between invocations, avoiding the overhead of always-on Kubernetes pods for infrequent tasks.

Anatomy of an Agentic AI System

An agentic AI system isn't a single process; it's a distributed workflow. Understanding its components is essential before mapping it onto Kubernetes primitives.

"An agent is an LLM that can observe the world, decide what to do next, and take actions - in a loop, until a goal is satisfied."

Popular Agentic Frameworks on GKE

Several frameworks have emerged to help teams build agentic systems without reinventing the orchestration wheel. Each has a different philosophy and maps to GKE differently.

Agent Development Kit (ADK)

Google's native framework for building multi-agent systems on Vertex AI. First-class GKE support, tight Gemini integration, built-in evaluation tools. Best choice for teams already on Google Cloud.

LangGraph

Graph-based agent orchestration with explicit state machines. Excellent for complex branching workflows. Containerizes cleanly. LangSmith provides tracing that integrates with GKE logging pipelines.

CrewAI

Defines agents as role-playing entities (Researcher, Writer, Editor) with goals and backstories. Simple to model complex human workflows. Ideal for content, analysis, and research pipelines.

Google ADK on GKE: A Native Fit

The Google Agent Development Kit (ADK) is architected to treat Kubernetes as its primary "home," creating a seamless integration where the framework and the platform operate as one. Because ADK is built with a Kubernetes-native philosophy, it transforms GKE from a simple hosting environment into a specialized runtime for autonomous systems.

Observability: The Hard Part

Agentic systems fail in non-obvious ways. An agent might produce a response - but the response could be hallucinated, based on a failed tool call, or the result of an unintended plan branch. Standard HTTP error monitoring doesn't catch this.

The recommended observability stack for GKE-based agentic systems:

Observability Stack

OpenTelemetry Instrumentation

Instrument each agent with OpenTelemetry. Emit spans for every LLM call, tool invocation, and planning step. Export to Google Cloud Trace for full distributed trace visualization.

Structured Logging to Cloud Logging

Log each reasoning step as a structured JSON event: task ID, agent ID, step number, prompt hash, tool name, tool result summary, token counts. Query across traces in BigQuery for post-hoc analysis.

Custom Metrics via Cloud Monitoring

Track agent-specific metrics: tasks completed per minute, average steps per task, tool call success rate, LLM latency P50/P95/P99, and hallucination rate from your eval pipeline.

LLM-specific Tracing (LangSmith / Vertex AI Eval)

Leverage LangSmith or Vertex AI's built-in evaluation capabilities to capture complete prompt–response interactions along with semantic quality metrics. These insights can then be fed back into your continuous improvement cycle.

Security Considerations for Agentic AI on GKE

Agents with tool use are a new attack surface. An agent that can execute code, send emails, or write to a database is a powerful actor - and must be treated like one.

Prompt Injection

Malicious content in retrieved documents can instruct the agent to deviate from its goal. Sanitize all retrieved content before insertion into prompts. Use system-level guardrails in your LLM configuration.

Privilege Escalation

Each agent should operate with the minimum IAM permissions needed for its specific tools. Use Workload Identity with role-specific service accounts, never a single all-powerful SA for all agents.

Human-in-the-Loop Gates

For irreversible actions (sending emails, deploying code, database writes), require a human approval step before execution. Implement approval workflows via Pub/Sub pause + Cloud Tasks callback.

Network Policies

Use GKE Network Policies to restrict which agent pods can talk to which services. A researcher agent has no reason to reach the database writer service directly - enforce this in the cluster, not just in code.

What's Next: The Agentic Platform

The direction of travel is clear. GKE is evolving from an application runtime into an agentic platform - a place where autonomous AI systems can be deployed, composed, monitored, and governed with the same rigor we apply to microservices today.

Several emerging capabilities are worth tracking:

Agent-to-Agent Communication (A2A Protocol) - Google's emerging standard for cross-agent RPC, allowing agents built with different frameworks to interoperate. GKE provides the network fabric for this via internal load balancers and service mesh.

Model Context Protocol (MCP) on Kubernetes - MCP is becoming the standard way for agents to discover and call tools. Running MCP servers as sidecar containers or standalone Deployments in GKE makes tool registries cluster-native.

Vertex AI Agent Engine - Google's fully managed orchestration layer for agents that sits above GKE, handling session management, tool routing, and evaluation out of the box. The boundary between GKE and managed agent infrastructure will continue to blur.

"Kubernetes wasn't built for AI. But it turns out the problems of distributed systems - scale, failure, state, observability - are exactly the problems agentic AI inherits."

Core Reference Documentation

https://docs.cloud.google.com/kubernetes-engine/docs/integrations/ai-infra

https://github.com/GoogleCloudPlatform/accelerated-platforms/blob/main/docs/platforms/gke/base/use-cases/inference-ref-arch/README.md

https://docs.cloud.google.com/agent-builder/agent-development-kit/overview

Hands-on Tutorials

https://codelabs.developers.google.com/devsite/codelabs/build-agents-with-adk-foundation

https://cloud.google.com/blog/topics/developers-practitioners/build-a-multi-agent-system-for-expert-content-with-google-adk-mcp-and-cloud-run-part-1

Best Clipboard Manager for Developers (2026 Guide)

2026-04-08 12:13:38

If you spend your day between a terminal, an editor, browser tabs, and an AI assistant, generic clipboard history stops being enough. Developers need a clipboard manager that recovers context fast, handles sensitive data carefully, and fits keyboard-first habits instead of slowing them down.

Why the clipboard becomes a developer problem

For developers, the clipboard is not just a convenience. It is a temporary working layer for commands, paths, errors, JSON fragments, URLs, config values, and the small pieces of context that keep momentum going.

The problem is that this working layer is fragile by default. One extra copy wipes out the last useful item. A path gets replaced by an error. The error gets replaced by a token. The token gets replaced by a URL. Then the reconstruction begins: search terminal history, reopen logs, find the same file again, or retry the same failing command just to recapture output you already had once.

Lost commands

Especially painful when the original command had a one-off flag, environment variable, or destructive dry-run combination.

Lost paths

It is rarely the path itself. It is the interruption of having to find it again while mentally switching tasks.

Lost errors

The right error message is often the fastest route to a fix, but only if it is still available when you need it.

Leaky secrets

Clipboard history becomes risky when it stores everything forever, including tokens and credentials copied in a hurry.

The best clipboard manager for developers is not the one with the longest feature list. It is the one that reduces recovery time, protects sensitive data, and lets you get the right item back without thinking too hard.

Why generic clipboard tools break down

Most clipboard tools are built for broad desktop use. They are fine for snippets of prose, office docs, meeting notes, and the occasional link. But developer work creates more specialized clipboard traffic. Commands, traces, secrets, file paths, diffs, JSON, SQL, URLs, and logs all behave differently, and they deserve different treatment.

  • Everything gets flattened into plain text — Generic history treats an SSH command, a stack trace, and a token as the same kind of thing. That makes retrieval slower and safety weaker.

  • Chronology is not enough — Developers often remember what they copied, not when they copied it. Retrieval by type is usually faster than scrolling by recency alone.

  • Security posture is too generic — A tool that happily stores every copied credential in long-lived history is not helping. It is just moving risk into a prettier UI.

Developers do not need a better bucket for random text. They need a safer and more recoverable working memory for technical context.

What to look for in 2026

A developer-focused clipboard manager should be judged by practical criteria. If it cannot improve recovery of commands, errors, and paths while handling secrets carefully, it probably is not built for engineering workflows first.

  • Local-first behavior — Why it matters: your clipboard should still work as a local tool, without a required account or cloud sync to do the basics. Good sign: core capture, search, retrieval, and storage all work on-device.

  • Secret-aware handling — Why it matters: tokens, keys, and passwords should not be treated like ordinary notes or chat text. Good sign: masking, memory-only options, careful storage policy, or explicit protection.

  • Terminal fit — Why it matters: developers move faster when clipboard actions feel natural from shell and editor workflows. Good sign: CLI support, keyboard-first retrieval, pipes, and scriptable commands.

  • Semantic retrieval — Why it matters: retrieval by type is often faster than scrolling through a flat timeline. Good sign: commands, errors, paths, JSON, URLs, and other categories are distinguishable.

  • AI-ready packaging — Why it matters: developers increasingly need to bundle the right context, not the entire clipboard, for assistants. Good sign: structured packaging and selective retrieval instead of blind history dumps.

Local-first and secret-aware should be non-negotiable

Local-first is not just a nice-to-have. It is a trust requirement. When developers copy errors, deployment commands, internal URLs, credentials, or environment values, the default expectation should be that the tool remains useful without shipping that context somewhere else first.

Secret-aware behavior matters just as much. A good clipboard manager should be able to recognize when the clipboard contains a likely token, key, password, or secret-shaped string. That does not mean blocking every workflow. It means giving sensitive items better defaults, such as redaction, memory-only handling, or shorter retention.

  • Local-first means the core job stays on-device — Capture, classify, store, and retrieve locally by default. If output later gets piped to another tool, that should happen because the user chose it.

  • Secret-aware means a safer default posture — The tool should help reduce accidental retention and shoulder-surfing, not create a permanent archive of every credential that passes through the clipboard.
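To make "secret-shaped" concrete, here are a few illustrative heuristics. These patterns are examples for the sake of the argument, not ClipGate's actual detection rules:

```python
import re

# Rough heuristics for likely secrets passing through a clipboard.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub PAT shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"(?i)(password|secret|token)\s*[=:]\s*\S+"),
]

def looks_like_secret(text):
    # True if any pattern matches anywhere in the copied text.
    return any(p.search(text) for p in SECRET_PATTERNS)
```

A real tool layers entropy checks and provider-specific prefixes on top, but even heuristics this crude are enough to trigger a safer default: mask it, keep it memory-only, or expire it early.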

Terminal-friendly and AI-ready now matter more than ever

Developers increasingly jump between a shell, an editor, browser tabs, issue trackers, and an assistant. The clipboard is one of the few surfaces that touches all of those contexts. That makes a terminal-friendly workflow especially valuable.

Terminal-friendly

The best tools reduce friction at the shell. You should be able to capture, search, inspect, and retrieve without reaching for a mouse-heavy desktop UI every time.

echo "TypeError: x is undefined" | cg copy
cg list
cg paste -t error

AI-ready

AI-ready should not mean sending the whole clipboard to a model. It should mean packaging the right commands, errors, and notes together when you choose to ask for help.

cg pack -t error -n 5 | claude "fix these issues"

That distinction matters. A good tool helps you decide what context to forward, and helps you avoid forwarding the wrong context by accident.

Where ClipGate fits

ClipGate is built around the idea that developer clipboard history should be typed, local-first, and fast to recover. Instead of treating copied content as one flat stream, it tries to recognize what each item is and make retrieval semantic rather than purely chronological.

  • Typed retrieval — Errors, commands, paths, JSON, URLs, and other technical content can be retrieved by meaning, not just by order.

  • Local-first runtime — The core CLI is designed to stay useful without requiring a cloud account for everyday developer workflows.

  • Secret-aware posture — Sensitive content is treated differently from ordinary text, so the tool can help reduce long-lived exposure when secrets pass through the clipboard.

  • Terminal-native packaging — When the right next step is an assistant, ticket, or handoff, packaging the right context becomes part of the flow instead of a manual cleanup task.

Measured conclusion: the best clipboard manager for developers in 2026 is the one that quietly saves time, respects sensitive data, and fits the way engineers already work. Local-first, terminal-friendly, and secret-aware are the baseline.

Try the model, not just the marketing

The fastest way to evaluate any developer clipboard tool is to try it in a real session: copy an error, recover a path, search an older command, and see whether the tool feels like a natural extension of your workflow or another place to babysit state.

If you want to test ClipGate in that spirit, start with the official installer, then move on to the docs and release notes.

Site installer

Fastest path for macOS and Linux if you want the official binary with minimal setup.

curl -fsSL https://clipgate.github.io/install.sh | sh

PyPI

Useful when Python is already part of your environment, including Windows workflows.

pip install clipgate

Homebrew

Best fit for terminal-native installs if Homebrew already manages the rest of your toolchain.

brew install clipgate/tap/cg

Try ClipGate with a real workflow in mind

Install it, copy a few real items from your day, and see how quickly you can recover errors, commands, and paths when context starts moving fast.

Visit https://clipgate.github.io/ to install for free or read the docs.

Originally published on clipgate.github.io. ClipGate is an open-source terminal-native clipboard vault.

Users Don’t Choose the Best Tool — They Choose the Easiest One

2026-04-08 12:13:02

🚨 The Wrong Assumption I Had

When I was building my tools, I believed:

“If I make the best tool, users will choose it.”

So I focused on:

  • Features
  • Accuracy
  • More options
  • Better output

I thought quality wins.

But I was wrong.

😐 What I Started Noticing

Even after improving tools…

Users still:

  • Didn’t use the “better” tool
  • Ignored advanced features
  • Picked the simplest option

At first, I thought:

“Maybe they don’t understand the value.”

But that wasn’t it.

⚡ The Real Reason

Users don’t optimize for best result.

They optimize for:

least effort

🧠 What Actually Happens

A user comes with a simple goal:

  • “Convert this text”
  • “Resize this image”
  • “Fix this format”

They don’t want:

  • Settings
  • Options
  • Decisions

They want:

Done. Fast. No thinking.

🔥 Where I Was Going Wrong

Some of my tools had:

  • Too many input options
  • Multiple steps
  • Extra controls

Even though they were “better”…

👉 They felt heavier.

So users avoided them.

💡 What I Changed

Instead of improving features…

I reduced friction.

Step 1: Removed unnecessary inputs

If something wasn’t required → gone

Step 2: Made default behavior smart

User opens → tool already ready

Step 3: Reduced decisions

Fewer buttons
Less confusion
One clear action

📈 What Happened After

Same tools. Less complexity.

And suddenly:

  • More usage
  • Faster actions
  • Better retention

🤯 The Insight That Changed Everything

Users don’t choose the most powerful tool.
They choose the one that feels effortless.

🧩 Simple Rule I Follow Now

If a user has to think…

👉 I’ve already lost them.

🛠️ What I’d Tell Builders

If you’re building tools:

  • Don’t just improve capability
  • Reduce effort
  • Remove decisions
  • Focus on speed

Because:

Easy beats powerful.

Every time.

🚀 Final Thought

Your tool isn’t competing on features.

It’s competing on:

How quickly a user can finish their task and leave.

North Korea-Linked Hackers Use GitHub as C2 Infrastructure to Attack South Korea

2026-04-08 12:10:35

Executive Summary

FortiGuard Labs has identified a sophisticated multi-stage attack campaign attributed to the North Korea-linked threat actor Kimsuky. The group is abusing GitHub as living-off-the-land Command and Control (C2) infrastructure to target South Korean organizations.

The attack chain starts with obfuscated Windows Shortcut (LNK) files delivered via phishing emails. These LNK files deploy decoy PDF documents while silently executing PowerShell scripts in the background. The scripts perform anti-analysis checks, establish persistence through scheduled tasks, and exfiltrate collected data to GitHub repositories using hardcoded access tokens. Additional modules and commands are also retrieved from the same GitHub repositories.

This campaign highlights the increasing trend of state-sponsored actors abusing legitimate cloud platforms and native Windows tools (LOLBins) to lower detection rates and maintain long-term access.

Attack Chain Breakdown

  1. Initial Access

    Phishing emails deliver obfuscated LNK files. When opened, victims see a legitimate-looking PDF document while a malicious PowerShell script runs silently in the background.

  2. Anti-Analysis & Evasion

    The PowerShell script scans for virtual machines, debuggers, and forensic tools. If any are detected, the script immediately terminates.

  3. Persistence

    If the environment is clean, the script extracts a Visual Basic Script (VBScript) and creates a scheduled task that runs the PowerShell payload every 30 minutes in a hidden window. This ensures execution after system reboots.

  4. Data Collection & Exfiltration

    The script gathers host information, saves results to a log file, and exfiltrates the data to GitHub repositories under attacker-controlled accounts, including:

    • motoralis
    • God0808RAMA
    • Pigresy80
    • entire73
    • pandora0009
    • brandonleeodd93-blip

  5. C2 via GitHub

    The same GitHub repositories are used to store additional modules and commands, allowing operators to maintain persistent control over compromised systems while blending into trusted platforms.

Connection to Previous Campaigns

Fortinet notes that earlier iterations of this activity delivered the Xeno RAT malware family. Similar GitHub-based C2 usage for distributing Xeno RAT and its variant MoonPeak was previously reported by ENKI and Trellix, both attributing the activity to Kimsuky.

This disclosure coincides with AhnLab’s report on a similar LNK-based infection chain by Kimsuky that ultimately deploys a Python-based backdoor. In that variant, the LNK executes PowerShell which creates a hidden folder C:\windirr, drops decoy documents, and uses Dropbox as an interim C2 before downloading ZIP fragments from quickcon[.]store to deploy an XML Scheduled Task and the final Python implant.

The Python backdoor supports downloading additional payloads and executing commands such as running shell scripts, listing directories, uploading/downloading/deleting files, and executing BAT, VBScript, or EXE files.

Related TTP Evolution

These findings also align with observed activity from ScarCruft (another DPRK-linked group), which has shifted from traditional LNK → BAT → shellcode chains to HWP OLE-based droppers for delivering RokRAT — a remote access trojan used exclusively by North Korean hacking groups.

Researcher Comments

Security researcher Cara Lin from Fortinet stated:

“Threat actors are moving away from complex custom malware and instead leveraging native Windows tools for deployment, evasion, and persistence. By minimizing the use of PE files and heavily relying on LOLBins, attackers can target a broad audience with significantly lower detection rates.”

Recommendations

  • Strengthen email security gateways with advanced LNK and PowerShell inspection
  • Monitor abnormal access to GitHub, Dropbox, and other cloud repositories from endpoints
  • Implement strict application whitelisting and behavioral monitoring for scheduled tasks
  • Enable enhanced logging for PowerShell execution (Script Block Logging, Module Logging)
  • Regularly hunt for suspicious GitHub accounts and repositories with high-frequency commits from compromised environments
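The last recommendation — hunting for high-frequency activity from suspect GitHub accounts — can be prototyped with a small rate heuristic. The threshold and counting logic below are illustrative assumptions, not a detection standard; a real hunt would pull event timestamps from GitHub's public events API (`GET /users/{username}/events/public`) and combine rate with signals like account age and repository content.

```python
from datetime import datetime

def events_per_hour(timestamps):
    """Rough activity rate: events divided by the span they cover, in hours."""
    if len(timestamps) < 2:
        return 0.0
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    span_hours = (times[-1] - times[0]).total_seconds() / 3600
    return len(times) / span_hours if span_hours > 0 else float("inf")

def is_suspicious(timestamps, threshold=30.0):
    """Flag accounts whose sustained event rate exceeds `threshold` per hour.
    The threshold is an assumed tuning knob, not an industry baseline."""
    return events_per_hour(timestamps) > threshold

# A beacon committing every 30 minutes (as in this campaign) is *low*-rate,
# so frequency alone is a weak signal — treat it as one input among several.
beacon = ["2026-04-08T00:00:00", "2026-04-08T00:30:00", "2026-04-08T01:00:00"]
print(events_per_hour(beacon))
```

Note the caveat in the comment: the scheduled task described in this campaign fires only every 30 minutes, so a pure rate filter tuned for noisy bursts would miss it. The heuristic is most useful for catching bulk exfiltration commits, not slow beaconing.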

This campaign once again demonstrates how nation-state actors continue to innovate by abusing trusted platforms and living-off-the-land techniques to evade traditional security controls.

Analysis based on reporting from FortiGuard Labs, AhnLab, and open-source intelligence as of April 2026.