
A constructive and inclusive social network for software developers.

RSS preview of the blog of The Practical Developer

Testing Payment Flows Without the Payment SDK

2026-04-20 03:04:53

Payment integrations are one of the hardest things to test in a web app. The SDK renders its own UI, controls its own form fields, and fires callbacks when the user completes a payment. You can't programmatically fill in a credit card number. You can't simulate a declined card. And if the SDK fails to initialize — because of a network issue, a bad API key, or a test environment misconfiguration — your entire test falls apart.

You can mock the SDK's setup endpoint to get the SDK rendering, the form mounting, the session resolving. That covers surface area — but it stops there. It doesn't test what happens after the payment resolves: the API calls, the analytics events, the navigation, the error states. The part that actually matters.

This article shows a different approach: using TWD's component mocking to replace the payment SDK entirely with a simple mock that gives you full control over the payment lifecycle.

Test What You Own. Mock What You Don't.

That's TWD's philosophy, and it's the whole reason component mocking is the right tool here. The payment SDK is someone else's code — its internals and lifecycle are their problem, covered by their test suite. Your responsibility is the seam: the callbacks fired into your app, the API calls they trigger, the analytics events, the UI state. That's where your bugs ship from.

You won't exercise the real SDK in these tests. That's the tradeoff — and it's deliberate. What you gain is the ability to exercise your side of the integration exhaustively: every callback, every branch, every error path. The SDK's correctness is the vendor's concern. The correctness of everything your app does around it is yours, and that's what these tests finally reach.

The Problem

A typical payment component looks like this:

function PaymentDropIn({ session, clientKey, orderId, cart }) {
  const ref = useRef(null);
  const [error, setError] = useState(null);

  useEffect(() => {
    // useEffect callbacks can't be async, so init inside an inner function
    async function init() {
      const checkout = await PaymentSDK.init({
        session,
        clientKey,
        onPaymentCompleted: async () => {
          await confirmOrder(orderId);
          await trackPurchase(cart, orderId);
          navigate("/success");
        },
        onPaymentFailed: (result) => {
          trackPaymentError(cart, result.code);
          setError("Payment failed");
        },
      });
      checkout.mount(ref.current);
    }
    init();
  }, []);

  return <div ref={ref} />;
}

Everything is tangled inside one component: the SDK initialization, the business logic, the analytics, the navigation, the error handling. You can't test the onPaymentCompleted callback without actually initializing the SDK. And you can't initialize the SDK without a real (or carefully mocked) payment session.

Step 1: Separate the SDK from the Logic

The fix is architectural. Move the callback logic out of the payment component and into the parent. The payment component becomes a thin SDK wrapper that receives callbacks as props:

// Thin wrapper — just the SDK
function PaymentDropIn({ session, clientKey, onCompleted, onFailed, onError }) {
  const ref = useRef(null);

  useEffect(() => {
    async function init() {
      const checkout = await PaymentSDK.init({
        session,
        clientKey,
        onPaymentCompleted: () => onCompleted(),
        onPaymentFailed: (result) => onFailed(result.code),
        onError: (err) => onError(err.message),
      });
      checkout.mount(ref.current);
    }
    init();
  }, []);

  return <div ref={ref} />;
}
// Parent — owns the business logic
function CheckoutPage({ cart, orderId, session, clientKey }) {
  const [error, setError] = useState(null);

  const handleCompleted = async () => {
    await confirmOrder(orderId);
    await trackPurchase(cart, orderId);
    navigate("/success");
  };

  const handleFailed = (code) => {
    trackPaymentError(cart, code);
    setError("Payment failed");
  };

  const handleError = (message) => {
    setError(message);
  };

  return (
    <>
      {error && <div data-testid="payment-error">{error}</div>}
      <PaymentDropIn
        session={session}
        clientKey={clientKey}
        onCompleted={handleCompleted}
        onFailed={handleFailed}
        onError={handleError}
      />
    </>
  );
}

This is a good refactor regardless of testing. The parent owns the business logic. The payment component owns the SDK. Clean separation.

Step 2: Wrap for Mocking

TWD provides MockedComponent — a wrapper that lets tests replace a component's children with a mock. Wrap the payment component:

import { MockedComponent } from "twd-js/ui";

function PaymentDropIn(props) {
  return (
    <MockedComponent name="paymentDropIn">
      <PaymentDropInContent {...props} />
    </MockedComponent>
  );
}

In production, MockedComponent is a transparent pass-through — it renders its children. In tests, twd.mockComponent("paymentDropIn", ...) replaces the children with whatever you provide.

One important detail: MockedComponent passes its child's props to the mock component. That's why we need PaymentDropInContent as a separate component that receives all the callback props — so the mock receives them too.
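To make the mechanics concrete, here is a framework-free sketch of the pass-through-or-mock pattern. This is an illustration only, not TWD's actual source; `registry`, `mockComponent`, and `renderMocked` are hypothetical names.

```javascript
// Hypothetical sketch of a component-mock registry (not TWD's real source).
// In production nothing is registered, so the real children render;
// in tests a registered mock receives the child's props instead.
const registry = new Map();

function mockComponent(name, renderMock) {
  registry.set(name, renderMock);
}

function renderMocked(name, renderChildren, props) {
  const mock = registry.get(name);
  return mock ? mock(props) : renderChildren(props);
}

// Production path: no mock registered, so the real child renders.
const prod = renderMocked("paymentDropIn", (p) => `real:${p.orderId}`, { orderId: 7 });

// Test path: register a mock; it receives the same props the child would.
mockComponent("paymentDropIn", (p) => `mock:${p.orderId}`);
const mocked = renderMocked("paymentDropIn", (p) => `real:${p.orderId}`, { orderId: 7 });
```

The point of the sketch: because the mock is handed the child's props, the parent's real callbacks flow into it unchanged, which is exactly what makes the three-button mock in the next step work.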

Step 3: Build the Mock

The mock is dead simple. Three buttons — one per payment outcome:

twd.mockComponent("paymentDropIn", ({ onCompleted, onFailed, onError }) => {
  return (
    <div>
      <button onClick={() => onCompleted()}>Pay</button>
      <button onClick={() => onFailed("Refused")}>Fail Payment</button>
      <button onClick={() => onError("SDK crashed")}>Error</button>
    </div>
  );
});

Click "Pay" and the parent's handleCompleted fires — calling confirmOrder, sending the purchase event, navigating to success. Click "Fail Payment" and handleFailed fires — sending the error event, showing the error banner. No SDK involved. Just callbacks.

Step 4: Test Everything

Now you can test the full payment lifecycle with standard TWD patterns:

it("should call confirmOrder and navigate to success", async () => {
  await twd.mockRequest("confirmOrder", {
    url: `/api/orders/${orderId}/confirm`,
    method: "PATCH",
    status: 200,
    response: { customer_id: "cust-123", order_count: 3 },
  });

  // ... fill form, submit, wait for payment session ...

  const payButton = await screenDom.findByRole("button", { name: "Pay" });
  await userEvent.click(payButton);

  // Verify the API was called
  const rule = await twd.waitForRequest("confirmOrder");
  expect(rule).to.exist;

  // Verify navigation
  await twd.url().should("contain.url", "/success");
});

it("should fire purchase_error when payment is declined", async () => {
  // ... setup ...

  const failButton = await screenDom.findByRole("button", { name: "Fail Payment" });
  await userEvent.click(failButton);

  const errorEvent = await twd.waitFor(() => {
    const ev = window.dataLayer.find(e => e.event === "purchase_error");
    if (!ev) throw new Error("Event not found");
    return ev;
  });
  expect(errorEvent.error_code).to.equal("Refused");
});

it("should show error banner when confirmOrder fails", async () => {
  await twd.mockRequest("confirmOrderFail", {
    url: `/api/orders/${orderId}/confirm`,
    method: "PATCH",
    status: 500,
    response: { message: "Server error" },
  });

  // ... setup ...

  const payButton = await screenDom.findByRole("button", { name: "Pay" });
  await userEvent.click(payButton);

  const errorBanner = await twd.get("[data-testid='payment-error']");
  errorBanner.should("be.visible");
  await twd.url().should("not.contain.url", "/success");
});

What This Pattern Gives You

Coverage you couldn't get before:

  • Analytics events fire with the correct data (payment type, transaction ID, error codes)
  • The confirmOrder API is called with the right order ID
  • Navigation to the success page happens after payment, not before
  • Error banners appear when the API fails
  • Error banners appear when the payment is declined
  • Error banners appear when the SDK crashes

Speed: These tests run in ~1 second each. No SDK initialization, no payment session setup, no Adyen/Stripe endpoint mocking.

Reliability: No more flaky tests that break because the payment SDK's test environment is down. The mock is deterministic.

Conclusion

The unlock is component mocking. TWD's MockedComponent lets you replace a third-party SDK in tests with a simple stand-in whose callbacks you fire on demand — so the payment flow, which previously depended on an un-drivable SDK, becomes three buttons and a set of assertions. The SDK never boots. Tests run in a second. The callback flow — API calls, analytics, navigation, error states — is finally exercised.

The thin-wrapper refactor is what makes that possible, but it's the enabler, not the point. Once it's in place, the pattern transfers to any third-party component that fires callbacks: map SDKs, video players, chat widgets, auth flows. Same shape every time — wrap the component, swap it in tests.

Existing tests that mock the SDK's setup endpoint still work; they cover different ground. The component mock picks up where those stop.

More on the feature at twd.dev/component-mocking.

When Your Mocks Lie: Contract Testing with TWD

2026-04-20 03:03:50

Every mock you write is a claim about what your backend returns. The moment the backend changes — a renamed field, a tightened enum, a new required property — that claim becomes a lie. Your tests still pass. Production breaks.

This is mock drift, and it's invisible. You don't find out until a user hits a 500 or an empty UI in prod. The mocks that gave you confidence were the thing misleading you.

TWD's contract testing closes this gap. Every mock response registered in a test gets validated against your OpenAPI spec during the same run that executes the test. A schema mismatch becomes a loud, specific error — in the same output as the test failures. No separate pipeline, no broker, no provider verifier. One command does both.

This article walks through what contract testing in TWD actually does, how to wire it into an existing project, and what the output looks like when it catches real drift.

The problem contract testing solves

Consider a typical mock in a TWD test:

await twd.mockRequest("userList", {
  method: "GET",
  url: "/v1/users",
  response: {
    count: 3,
    next: null,
    previous: null,
    results: [
      {
        id: "a1b2-...",
        name: "Acme Corp",
        balance: "10000.00",
        // ...
      }
    ],
  },
  status: 200,
});

This shape made the test pass three months ago. Since then:

  • The backend team removed balance from the list endpoint (it's a wallet concept now, served elsewhere).
  • A new required field external_id was added.
  • The discount field format tightened from "15" to "15.00" (two decimals).

None of these changes break the test. The component receives exactly the shape the mock provides. The test is green. Everything looks fine.

Meanwhile in production, the real API returns external_id (which a new table column depends on, but which no mock ever exercised), omits balance (which a detail drawer is still reading), and sends "15.00" where the formatter was only ever tested against "15". Bugs ship.

The test was never wrong — it was testing the wrong reality. The mock had drifted from the contract.

What TWD does about it

TWD's contract testing runs as part of npx twd-cli run — the headless runner you'd typically invoke in CI, not the live sidebar you use during local dev. Your inner loop stays fast; drift gets surfaced on every push.

On every call to twd.mockRequest(), the response payload is collected. After tests run, each response is validated against the OpenAPI schema for the endpoint that the mock targets.

The validation uses openapi-mock-validator under the hood and covers what you'd expect from JSON Schema:

  • Types (string, number, integer, boolean, array, object)
  • String formats (uuid, email, date-time, uri, and so on)
  • Numeric bounds, array constraints, enum values
  • Required fields, additionalProperties
  • Composition (oneOf, anyOf, allOf)

In practice this means: if your mock returns "id": "user-123" where the spec says "format": "uuid", you hear about it. If your mock omits external_id where the spec marks it required, you hear about it. If your mock sets "status": "pending" where the spec enum only allows ["COMPLETED", "FAILED", "PENDING"], you hear about it.
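As a rough illustration of what checks like these amount to, here is a hand-rolled validator for two of the cases above (required fields and enums). This is for intuition only; TWD delegates the real validation to openapi-mock-validator, and the function name here is made up.

```javascript
// Hand-rolled sketch of two schema checks (illustrative; not the real validator).
function validateAgainstSchema(payload, schema) {
  const errors = [];
  // Required fields: every listed property must be present.
  for (const field of schema.required || []) {
    if (!(field in payload)) {
      errors.push(`missing required property "${field}"`);
    }
  }
  // Enums: present values must be one of the allowed literals.
  for (const [key, rule] of Object.entries(schema.properties || {})) {
    if (key in payload && rule.enum && !rule.enum.includes(payload[key])) {
      errors.push(`${key}: "${payload[key]}" is not one of [${rule.enum.join(", ")}]`);
    }
  }
  return errors;
}

const schema = {
  required: ["id", "external_id", "status"],
  properties: { status: { enum: ["COMPLETED", "FAILED", "PENDING"] } },
};

// A drifted mock: missing external_id, lowercase enum value.
const errors = validateAgainstSchema({ id: "a1b2", status: "pending" }, schema);
// → two errors: the missing required field and the enum violation
```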

The key design choice: no extra test-writing effort. You don't author contract tests separately. The mocks you already write double as contract probes. Two signals from one artifact.

Setting it up

Three pieces: get the spec, tell TWD about it, decide how loud to be.

1. Get the OpenAPI spec

Point TWD at an openapi.json somewhere on disk. How it gets there is up to you — a curl against your backend's spec endpoint in CI is the common path. Download fresh on every run so you're always validating against the current contract.

2. Configure TWD

Create twd.config.json at the project root:

{
  "url": "http://localhost:5173",
  "contractReportPath": ".twd/contract-report.md",
  "retryCount": 3,
  "contracts": [
    {
      "source": "./openapi.json",
      "baseUrl": "/",
      "mode": "warn",
      "strict": true
    }
  ]
}

Key fields:

  • source — path to the OpenAPI JSON.
  • baseUrl — prefix to strip when matching mock URLs against spec paths. If your mocks call /v1/users and the spec paths are also /v1/..., set "/". If the spec is served under /api and your mocks include that prefix, set "/api".
  • mode"warn" or "error". Start with "warn".
  • strict — whether to reject undocumented response properties.
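To see what the baseUrl setting implies, here is a sketch of matching a mock URL against a spec path: strip the prefix, then treat templated segments like {user_id} as wildcards. This is an assumption about the general technique, not TWD's actual matcher.

```javascript
// Illustrative sketch of mock-URL-to-spec-path matching (not TWD's matcher).
function matchesSpecPath(mockUrl, specPath, baseUrl) {
  // Strip the configured prefix from the mock URL.
  const path = baseUrl === "/" ? mockUrl : mockUrl.slice(baseUrl.length);
  // Turn templated segments like {user_id} into single-segment wildcards.
  const pattern = new RegExp("^" + specPath.replace(/\{[^}]+\}/g, "[^/]+") + "$");
  return pattern.test(path);
}

const a = matchesSpecPath("/v1/users/42", "/v1/users/{user_id}", "/");   // matches
const b = matchesSpecPath("/api/v1/users", "/v1/users", "/api");         // matches after stripping
const c = matchesSpecPath("/v2/users", "/v1/users", "/");                // does not match
```

If the statuses come back "skipped" for mocks you expected to be validated, a wrong baseUrl is the usual culprit: the stripped URL simply never lines up with any spec path.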

3. Decide the mode

This is the one real decision.

"warn" — mismatches appear in the output but the test run still passes. Good posture when you're introducing contract testing into an existing codebase with accumulated drift. You see what's broken without immediately red-gating the team.

"error" — mismatches fail the run. This is where you want to land. It's the only mode that prevents regressions.

A realistic migration path: start in warn to surface the backlog, fix mismatches module by module, then flip to error once you're clean. The flip is the important step — without it, nothing stops new drift from accumulating.

The TWD ecosystem

Contract testing isn't a standalone library — it's the seam where the TWD packages meet: mocks authored with twd-js, runs executed by twd-cli, validation handled by openapi-mock-validator, and (if you're also using the AI agent skills) the browser bridge through twd-relay.


If you're starting from zero with TWD, the AI-powered frontend testing series walks through project setup, writing tests, and wiring them into CI. Contract testing slots in once that's working.

The payoff: what the output looks like

This is the part worth showing up for — and it exists only because you're already in the TWD stack. Your mocks run through twd-js. twd-cli already executes them. The validator just reads what's already moving through your tests. No separate contract test suite, no broker to run, no provider verifier to keep in sync.

Run your normal test command:

npx twd-cli run

Alongside the usual pass/fail output for each test, you'll see a per-mock contract status line:

✓ GET /v1/users (200) — mock "userList" — in "User list > should display the table"
✗ GET /v1/users/{user_id} (200) — mock "getUser" — in "User detail"
  → response.external_id: missing required property "external_id"
✗ GET /v1/orders (200) — mock "getOrders" — in "User detail"
  → response.next: missing required property "next"
  → response.previous: missing required property "previous"

And a summary:

Mocks validated: 253 | Errors: 93 | Warnings: 1 | Skipped: 0

Contract report written to .twd/contract-report.md

That second failure line — a required property missing on a test that otherwise passes — is where contract testing earns its keep. Without it, the mock keeps serving a shape the real API no longer returns, and the only person who finds out is a user.

The markdown report is useful for PRs and CI artifacts — it groups failures by endpoint and includes the test name that produced each mock, so tracing a failure back to a specific file is straightforward.

Why this matters more than it looks

Most contract testing tools (Pact being the canonical one) are heavy: brokers, provider verifiers, consumer-driven workflows, separate CI pipelines, coordination between frontend and backend teams. The ceremony is often what kills adoption — teams try it, find it exhausting, and revert to hoping for the best.

TWD's approach gets maybe 80% of the value for 10% of the cost, because it's opportunistic rather than exhaustive. You're not testing every possible response the backend could emit — you're testing the specific responses your app actually depends on (your mocks). That's often the right target: the place where client assumptions are encoded is exactly the place worth validating.

And it's cheap to adopt. No broker, no CI changes beyond one step to download the spec, no coordination with the backend team. A consuming team can turn this on unilaterally in an afternoon and immediately benefit.

The moment the backend ships a breaking change, your next CI run reports it. Not the next deploy. Not the next bug report from a user. The next CI run.

Wiring it into CI

One change to your workflow:

- name: Download OpenAPI contract
  run: npm run contract:download

- name: Install service worker
  run: npx twd-js init public --save

- name: Run TWD tests
  run: npx twd-cli run

- name: Contract testing report
  run: cat .twd/contract-report.md

Conclusion

Contract testing isn't the whole pitch — it's one piece of a stack designed to make each part of the testing workflow cheap instead of painful. Adopt TWD and you get:

  • Tests that run in your real browser, with a live sidebar as you develop.
  • A CI pipeline that's a few lines of YAML away.
  • Coverage collected without a separate configuration fight.
  • Mocks that double as contract probes, validated against your OpenAPI spec on every run.

The opportunity isn't just catching drift. It's that once you're in the TWD stack, everything above comes with it — and each piece is an afternoon of setup, not a quarter of migration.

More details and the full config reference live at twd.dev/contract-testing. The project is on GitHub at BRIKEV/twd. If you find a bug in the validator or want a new format supported, PRs welcome.

AI Writes Your App. You Lose Your Job. Good.

2026-04-20 03:03:34


Before you close this tab — read the second sentence.

You will not be jobless. You will be work-free. Those are not the same thing.

The Fear Is Real

Let us not pretend it is not.

A developer spends years learning React, Kotlin, Swift, C#.
They build a career on that knowledge.
Then an AI arrives that can generate working code from a sentence.

The fear is legitimate. The question is whether the conclusion is correct.

What AI Actually Does to Development

Here is what happens when you ask an AI to build a React app:

  • It generates plausible-looking code
  • The hooks are slightly wrong
  • The state management is from a blog post written in 2021
  • The dependency array is missing two items
  • It works in the demo, breaks in production

React is complex enough that AI-generated code requires a developer
to review, fix, and understand every line. The developer is still fully employed.
They are just a reviewer now instead of an author.

This is not liberation. This is the same job with extra steps.

What Happens With a Simpler Language

SML and SMS are intentionally minimal.

SML has no lifecycle methods. No virtual DOM. No reconciler.
A Column contains children. A Button has text. An ID identifies an element.

Column {
    padding: 24

    Label {
        id: counter
        text: "0"
        fontSize: 48
    }

    Button {
        id: btnAdd
        text: "+"
    }
}

SMS has no async/await. No promises. No event bubbling.
Something happened. Here is what to do about it.

var count = 0

on btnAdd.clicked() {
    count = count + 1
    counter.text = count
}

The entire rule set fits in a single system prompt.
Including the things that trip up AI:

IDs are not strings. Write id: counter, not id: "counter".

Event handlers use dot notation. on btn.clicked(), not onClick.

Variables are global to the script. No closures, no scope confusion.

An AI given this context generates correct Forge apps on the first try.
Not approximately correct. Actually correct. Deployable.

The Loop Nobody Talks About

When AI can generate a complete, deployable app from a description,
the development loop changes:

Before:
  User idea → Designer → Review → Developer → QA → Staging → Production
  Time: weeks

With AI + Forge:
  User describes → AI writes SML + SMS → Push to Codeberg → Live
  Time: minutes

The developer is no longer the bottleneck between idea and reality.

This is where the fear comes from.
This is also where the opportunity comes from.

"But That Means I Lose My Job"

Let us be honest about what that job actually was.

Translating human intent into machine instructions.
That is the core of software development.
A human has an idea. A developer translates it into code the computer understands.

AI is getting very good at that translation.

But notice what disappears when the translation is instant:
the gap between thinking and building.
The weeks of meetings, handoffs, misunderstandings, revisions.
The user who forgot what they wanted.
The developer who built the wrong thing for three months.

That gap was not valuable. It was waste.
We called it "the development process" because we had no other way.

Arbeitslos vs. Die Arbeit Los (Jobless vs. Work-Free)

German draws a distinction that English handles clumsily, but the idea is clear.

Jobless — without income, without purpose, without dignity.

Work-free — freed from labor that a machine can do better.

These are not the same condition.

A farmer who gets a tractor is not jobless.
They are freed from breaking their back with a hand plow.
The question is whether they have land to farm with the tractor.

The question for developers is not "will AI take my job?"
The question is "what will I do with the time AI gives back?"

UBI Is the Bridge

Universal Basic Income — BGE in German — is the infrastructure
that makes this transition survivable.

Without it: AI replaces jobs, income disappears, people suffer.
With it: AI replaces jobs, basic needs are covered, time is returned.

The technology already exists to feed, house, and care for everyone.
The question has never been whether we can afford it.
The question has always been whether we choose to.

When a developer no longer needs to spend 40 hours a week
translating requirements into React components —
what does that time become?

What the Time Becomes

Time for the things that were always more important and always got postponed.

Being present with the people you love.

Learning something because it is beautiful, not because it is marketable.

Walking 12 kilometers into a forest to sit with strangers around a fire.

Doing your Kundalini practice before the day starts instead of at 11pm.

Building something because it should exist, not because someone is paying for it.

Software was never the point.

The point was always what software made possible.

The Honest Risk

We should not pretend this transition will be painless.

Between "AI can now do this" and "everyone has UBI" there is a gap.
People will lose income before the safety net exists.
That is real. That is happening now. It should not be minimized.

The answer is not to slow down the technology.
The answer is to build the safety net faster than the disruption spreads.

That is a political problem, not a technical one.
Developers who understand both the technology and its consequences
are exactly the people who should be in that conversation.

What Forge Is Actually For

Forge was built for a world where the gap between idea and running app
is measured in minutes, not months.

SML and SMS are simple by design — not because simplicity is a virtue in itself,
but because a language that a human can learn in a day
is also a language that an AI can generate correctly in a second.

The sandbox is not a technical feature.
It is a commitment: the technology will not be used to harm.
Ahimsa. Do no harm. Written into the license, not the README.

When AI generates a Forge app and a user runs it,
they are running sandboxed code that cannot reach outside its permission boundary.
The AI cannot weaponize the app. The developer cannot weaponize the app.
The architecture enforces the ethics.

That is the kind of technology worth building.

A Different Question

Instead of "will AI take my job?" —

What would you build if the translation problem were solved?

What has been in your head for years that you never had time to make real?
What would you create if creation cost minutes instead of months?

That question is more interesting than the fear.
And the answer to it is what comes after the transition.

Forge is open source.

The tools exist. The time is coming.

Use both well.

Sat Nam. 🌱

Previous posts in this series:

→ We Benchmarked SMS Against C++, C#, and Kotlin. Here's What Happened.

→ From Request to Production in One Push. No Mockup Tool Required.

Authentication Security Deep Dive: From Brute Force to Salted Hashing (With Java Examples)

2026-04-20 02:54:23

What if I told you that even if you hash passwords, an attacker might still crack them in seconds?

Authentication is one of the most critical parts of any application—and also one of the most misunderstood.

In this post, we’ll think like an attacker, break insecure implementations using Java examples, and then progressively strengthen our defenses using hashing and salting.
If you’ve ever stored a password using only hashing, your system may still be vulnerable.

🧠 1. Why Authentication Security Matters

When authentication fails, everything fails:

  • Account takeover
  • Data breaches
  • Privilege escalation

To build secure systems, we must first understand how attackers operate.

⚔️ 2. How Attackers Break Authentication

(i). Brute Force Attack

The attacker tries every possible password until one works.
👉 This works because:

  • Users choose weak passwords
  • Systems don’t limit attempts

(ii). Dictionary Attack

Instead of trying every combination, the attacker uses a list of common passwords:
123456
password
admin
welcome123

(iii). Rainbow Table Attack

The attacker precomputes hashes of common passwords:
password → hash1
admin → hash2

Then instantly matches stolen hashes.
👉 Extremely fast if no salt is used

(iv). Session Attacks (brief)

These attacks focus on hijacking authenticated sessions (cookies, tokens).
👉 Important, but outside this post's main focus.

🔴 3. Thinking Like an Attacker: Breaking Weak Authentication

3.1 Brute Force Simulation (Java)

package com.auth;

public class BruteForceDemo {

    public static void main(String[] args) {

        String actualPassword = "1234";

        for (int i = 0; i <= 9999; i++) {
            String guess = String.format("%04d", i);

            System.out.println("Trying: " + guess);

            if (guess.equals(actualPassword)) {
                System.out.println("==> Password found: " + guess);
                break;
            }
        }
    }
}

👉 This works because:

  • Password is short and predictable
  • No rate limiting

3.2 Hashing Alone is NOT Enough

Let’s say the system stores:

hash(password)

Java example:

package com.auth;

import java.security.MessageDigest;

public class HashAttackDemo {

    public static String hash(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(input.getBytes());

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {

        String storedHash = hash("password");

        String[] guesses = {"123456", "password", "admin"};

        for (String guess : guesses) {
            if (hash(guess).equals(storedHash)) {
                System.out.println("==> Cracked: " + guess);
            }
        }
    }
}

👉 Even though the password is hashed, the attacker can still:

  • Hash guesses
  • Compare results

3.3 Rainbow Table Attack (Precomputation)

package com.auth;

import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class RainbowTableDemo {

    public static String hash(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(input.getBytes());

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {

        String[] passwords = {"123456", "password", "admin"};

        Map<String, String> table = new HashMap<>();

        for (String pwd : passwords) {
            table.put(hash(pwd), pwd);
        }

        String stolenHash = hash("password");

        if (table.containsKey(stolenHash)) {
            System.out.println("==> Instantly cracked: " + table.get(stolenHash));
        }
    }
}

👉 No guessing needed — just lookup.

🛡️ 4. Why SALT Changes Everything

4.1 Why SALT is Needed


Problem:
password → same hash everywhere

Solution:
hash(password + salt)

👉 Makes each hash unique

4.2 SALT Implementation (Java)

package com.auth;

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SaltedHashDemo {

    public static String generateSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt);
    }

    public static String hash(String password, String salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest((password + salt).getBytes());

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {

        String password = "password";
        String salt = generateSalt();

        String hashed = hash(password, salt);

        System.out.println("Salt: " + salt);
        System.out.println("Hash: " + hashed);
    }
}

4.3 How SALT Breaks Rainbow Attacks

password + salt1 → hash1
password + salt2 → hash2

👉 Same password ≠ same hash
👉 Rainbow tables become useless

🔴 5. Attacker vs SALT

5.1 Attacking Salted Hashes (Java)

package com.auth;

import java.security.MessageDigest;

public class SaltedAttackDemo {

    public static String hash(String password, String salt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest((password + salt).getBytes());

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {

        String salt = "randomSalt123";
        String storedHash = hash("password", salt);

        String[] dictionary = {"123456", "password", "admin"};

        for (String guess : dictionary) {
            if (hash(guess, salt).equals(storedHash)) {
                System.out.println("==> Cracked even with salt: " + guess);
            }
        }
    }
}

👉 Salt does NOT stop guessing — it only stops the attacker from reusing work across users.

5.2 Cost Explosion

Without salt:
1M guesses → cracks many users

With salt:
1M users × 1M guesses = 1 trillion operations

👉 This is where salt becomes powerful.

Source Code

All source files are available on GitHub:
GitHub source codes

⚠️ 6. What SALT Does NOT Solve

  • Weak passwords (still crackable)
  • Fast hashing (SHA-256 is too fast)
  • GPU-based attacks

👉 SALT makes attacks harder—but not impossible.
Attackers can still:

  • Perform brute force attacks
  • Use GPUs to compute hashes at scale
  • Target weak passwords

This is why modern systems go beyond plain salted hashing.

🚀 Final Defense: Modern Password Hashing
Use:

  • bcrypt
  • scrypt
  • Argon2

Why:

  • Built-in salt
  • Slow hashing (costly per attempt)
  • Resistant to GPU attacks
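As one concrete sketch using only the JDK, here is slow, salted hashing with PBKDF2. This is an illustration, not a drop-in implementation: bcrypt/scrypt/Argon2 need third-party libraries in Java, and unlike them PBKDF2 leaves salt storage to you; the 100,000-iteration cost is an assumed starting point you should tune for your hardware.

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Slow, salted hashing with PBKDF2-HMAC-SHA256, using only the JDK.
// 100_000 iterations is illustrative; aim for ~100ms per hash in production.
public class Pbkdf2Demo {

    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static String hash(String password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, 100_000, 256);
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] key = f.generateSecret(spec).getEncoded();
        return Base64.getEncoder().encodeToString(key);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = newSalt();
        String stored = hash("password", salt);
        System.out.println("PBKDF2 hash: " + stored);
        // Verification recomputes the hash with the stored salt:
        System.out.println("match: " + hash("password", salt).equals(stored)); // true
    }
}
```

Each verification costs the same 100,000 iterations as the original hash, which is exactly what makes large-scale guessing uneconomical.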

🧠 7. Conclusion

Authentication security evolves like this:
Plain text → completely broken
Hash only → still vulnerable
Salted hash → better
Salt + slow hashing → strong defense

👉 Security is not about making attacks impossible,
👉 It’s about making them economically infeasible.

💡 8. Final Thought

Think like an attacker:

  • Can I guess this password?
  • Can I reuse work?
  • Can I scale this attack?

Good security doesn’t make attacks impossible—it makes them impractical.

As a developer, your goal isn’t to stop attackers completely, but to ensure that breaking your system is simply not worth the effort.

What Did Earth Look Like When You Were Born? I Built It to Find Out

2026-04-20 02:52:20

🌍 I Built a Climate Time Machine That Makes CO₂ Data Feel Personal

This is a submission for Weekend Challenge: Earth Day Edition

The problem with climate data is that it's abstract. "+1.25°C" doesn't land. "427 ppm CO₂" doesn't hurt. Numbers don't move people — stories do.

So I asked a different question: what if you could see exactly what the planet looked like the year you were born?

That's Earth Time Machine. Enter your birth year. Feel what changed.

What I Built

Earth Time Machine is an interactive climate data experience that makes global warming feel personally relevant by anchoring every metric to your birth year.

The core idea

Instead of showing abstract global averages, it asks: what was CO₂ when you were born? How much Arctic ice existed? How many more people share this planet now? Every number becomes a story about your lifetime.

Features

  • 🌐 Animated 3D globe in the hero — hand-coded in Canvas 2D, no WebGL library. The globe visibly browns and cracks based on the CO₂ rise since your birth year. Born in 1960? Healthy green oceans. 2008? Noticeably warmer.

  • 💨 CO₂ breathing animation — the number counts up from zero to your lifetime rise, then locks and pulses gently like Earth exhaling. No looping. It arrives and stays.

  • 📊 6 planetary vital sign cards — CO₂, temperature, Arctic sea ice, forest cover, population, sea level. Each has a sparkline, animated thermometers, comparison bars, and a "sting" — a one-line gut-punch fact that reveals after you've absorbed the numbers.

  • 🌍 20-country local data — select your country and see local warming vs world average, per-capita CO₂ then vs now, and your country's climate risk tier. India's local warming hits differently than Canada's.

  • 🔮 4 IPCC scenario projections to 2100 — actual path, Paris 1.5°C, moderate action, worst case. Canvas-drawn chart showing where each path leads. Interactive tabs with verdicts.

  • 🌿 Paris Path toggle — overlays what values should be if the Paris Agreement was met. The gap is sobering.

  • 🎮 Generational compare mode — enter two birth years, see the CO₂ absorbed between them, the temperature difference, the ice lost. Perfect for showing a parent vs child what Earth each of them inherited.

  • 🎯 Climate knowledge quiz — 4 randomised questions from a pool of 8. Immediate feedback, score bar, explanations. Keeps people engaged after the data shock.

  • 🔊 Ambient sound — two sine-wave oscillators through the Web Audio API. As you drag the year scrubber, the pitch shifts — deeper in early decades, eerier as CO₂ rises. Subtle but effective.

  • 📥 Download your Earth Identity Card as PNG via html2canvas. Share your personal climate report on social media.

  • 🌐 Live CO₂ clock in the sticky toolbar — ticks upward in real time at the actual rate (2.4 ppm/year). Watch it move.
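The live clock in the last bullet reduces to one constant. A minimal sketch, assuming the article's own 427 ppm baseline and 2.4 ppm/year rate (the DOM wiring in the comment is hypothetical):

```javascript
// Live CO₂ clock: extrapolates the current reading from a baseline
// at a constant 2.4 ppm/year, the rate quoted above.
const PPM_PER_MS = 2.4 / (365.25 * 24 * 60 * 60 * 1000);

function currentCO2(basePpm, baseTime, now) {
  return basePpm + (now - baseTime) * PPM_PER_MS;
}

// Usage in the toolbar: repaint a few times per second.
// const t0 = Date.now();
// setInterval(() => {
//   el.textContent = currentCO2(427.0, t0, Date.now()).toFixed(6) + ' ppm';
// }, 250);
```

Rendering six decimal places is what makes the number visibly tick: at 2.4 ppm/year the sixth decimal changes several times per second.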

Demo

🔗 https://earth-time-machine.vercel.app/

Try entering:

  • 1960 — see a relatively healthy planet, then watch how much changed
  • 1990 — the year of the Earth Summit, Rio. CO₂ was already rising fast
  • 2005 — the year Arctic ice hit a then-record low

Drag the year scrubber slowly from 1950 to 2026. Watch the CO₂ pill change. Listen to the drone shift. Find 2007 — the year the Arctic shattered its record.

Code

Built with:

  • React 19 + Vite 6
  • Tailwind CSS v4 (single @import "tailwindcss" — no config file)
  • Canvas 2D for the globe and all charts (no chart library)
  • Web Audio API for ambient sound
  • html2canvas for PNG card export

Zero external data dependencies. All climate datasets (CO₂, TEMP, ICE, POP, FOREST, SEA) are bundled from NOAA, NASA GISTEMP, NSIDC, FAO, and UN World Population Prospects.

How I Built It

The globe

The hardest part was making the hero globe look like Earth — not a red blob, not random ellipses.

The breakthrough was switching from position: absolute painted ellipses to actual spherical projection math. Each continent is defined as a series of [lat, lon] pairs. A project(lat, lon, t) function maps them onto the 2D canvas using:

function project(lat, lon, t) {
  // cx, cy = canvas centre, r = globe radius (defined in the enclosing scope)
  const phi    = (lat * Math.PI) / 180;
  const lam    = (lon * Math.PI) / 180 + t * 0.35; // t = spin time
  const cosLat = Math.cos(phi);
  const x3d    = cosLat * Math.sin(lam);
  const y3d    = Math.sin(phi);
  const z3d    = cosLat * Math.cos(lam);
  if (z3d < -0.05) return null; // back-face cull
  return [cx + x3d * r, cy - y3d * r];
}

Back-face culling (z3d < -0.05) means continents on the far side of the globe simply don't draw. The spin is smooth because t increments by 0.008 per frame via requestAnimationFrame.

The heat factor maps CO₂ rise since your birth year onto a 0 → 0.58 value that shifts:

  • Ocean colour (vivid blue → murky)
  • Land colour (forest green → reddish-brown)
  • Ice cap size (large → tiny)
  • Atmosphere halo (blue-cyan → dimmer)
let heat = 0;
if (birthYear > 0) {
  const rise = interp(CO2, 2026) - interp(CO2, birthYear);
  heat = Math.min(rise / 90, 0.58);
}

The CO₂ number

The original version looped through seasonal offsets forever, making the number tick up and down endlessly. That felt anxious and wrong.

The fix: count up once with a cubic ease-out, then lock the value and add a pure CSS pulse — no more JS interval touching the text:

// el = the counter element, start = the first frame timestamp,
// target = the lifetime CO₂ rise (all from the enclosing scope)
const tick = (now) => {
  const p    = Math.min((now - start) / 1800, 1);
  const ease = 1 - Math.pow(1 - p, 3);
  el.textContent = `+${(target * ease).toFixed(1)}`;
  if (p < 1) requestAnimationFrame(tick);
  else {
    el.textContent = `+${target.toFixed(1)}`; // locked
    el.classList.add('breathing');             // CSS pulse only
  }
};

@keyframes co2Breathe {
  0%, 100% { transform: scale(1);      filter: drop-shadow(0 0 0px rgba(192,64,48,0)); }
  50%       { transform: scale(1.028); filter: drop-shadow(0 0 18px rgba(192,64,48,.45)); }
}

Gentle, readable, and the number never changes again.

Sound design

Web Audio API is blocked until a user gesture (browser autoplay policy). The fix is calling ctx.resume(), which returns a Promise you must wait for:

const toggle = () => {
  ctx.resume().then(() => {
    gain.gain.cancelScheduledValues(ctx.currentTime);
    gain.gain.setTargetAtTime(0.6, ctx.currentTime, 0.4);
    updatePitch(CO2[2026]);
  });
};

Two oscillators detuned by 2.5Hz create a warm beating effect. As the year scrubber moves, setTargetAtTime glides the frequency smoothly rather than jumping:

function updatePitch(co2) {
  const t    = Math.max(0, Math.min((co2 - 310) / 120, 1));
  const freq = 45 + t * 30; // 45Hz clean → 75Hz tense
  osc1.frequency.setTargetAtTime(freq, ctx.currentTime, 1.5);
  osc2.frequency.setTargetAtTime(freq * 1.045, ctx.currentTime, 1.5);
}

Architecture

The whole app is structured around a single truth: birthYear. Everything derives from it.

App.jsx
├── birthYear (state) ─────────────────────────────┐
├── Hero           → onReveal(year) sets birthYear  │
├── ShockSection   ← birthYear                      │
├── Toolbar        ← country, toggles               │
├── CountryPanel   ← birthYear + country            │
├── Scrubber       ← birthYear + sliderYear         │
├── ScenarioChart  ← birthYear + scenario           │
├── MetricCards    ← birthYear + country + modes    │
├── CompareSection ← birthYear                      │
├── Quiz           ← birthYear (resets questions)   │
├── Timeline       ← birthYear (filters events)     │
└── ShareCard      ← birthYear + country            │
                   ────────────────────────────────┘

No Redux, no Zustand. Just useState in App.jsx with props passed down. The data layer is pure JS objects — no API calls, no loading states, instant renders.

The data

All datasets are annual means from primary sources, hand-curated and interpolated:

// Linear interpolation across sparse data
export function interp(data, year) {
  if (data[year] !== undefined) return data[year];
  const keys = Object.keys(data).map(Number).sort((a, b) => a - b);
  // find the stored years surrounding the requested one, then lerp
  let lo = keys[0], hi = keys[keys.length - 1];
  for (const k of keys) {
    if (k <= year) lo = k;
    if (k >= year) { hi = k; break; }
  }
  if (lo === hi) return data[lo];
  const t = (year - lo) / (hi - lo);
  return data[lo] + t * (data[hi] - data[lo]);
}

This means every year from 1950–2026 returns a smoothly interpolated value even though some datasets only store a point every 5 years.

What I'd do with more time

  • SVG world map with country-level temperature choropleth
  • Real Keeling Curve seasonal animation (actual monthly NOAA data)
  • Share to Twitter/X with OpenGraph preview card
  • Offline PWA support — the whole app works without internet anyway

Design Philosophy

The biggest design decision wasn't technical — it was emotional.

Climate data is usually presented as a problem to solve. Numbers, charts, policy. It lands as guilt or numbness.

Earth Time Machine tries a different frame: this happened in your lifetime. You were there. The CO₂ that rose, rose while you were alive. The ice that melted, melted while you watched.

That's not guilt — it's stakes. And stakes make people want to act.

The dark brown/forest green colour palette is intentional. Earth tones. Soil. The feeling of something organic under stress, not cold data on white.

The CO₂ number that arrives and pulses — not loops — is intentional. It's not a process. It's a fact. It stays.

Prize Categories

Not submitting to sponsored prize categories — this project uses no third-party AI APIs, authentication services, or blockchain infrastructure. Just the web platform, open climate data, and a weekend.

Data sources: NOAA Mauna Loa (CO₂), NASA GISTEMP v4 (temperature), NSIDC Sea Ice Index (Arctic ice), FAO FRA 2025 (forest), UN World Population Prospects (population), CSIRO/Church & White 2011 (sea level). All datasets are annual means verified against primary sources.

Built over one weekend for Earth Day 2026. The planet deserved better tooling.

Self-Healing CI, AI in Education, and the Missing Human Half

2026-04-20 02:51:40

AI tools are moving beyond automation to reshape how we build, learn, and collaborate. Self-healing systems promise less manual maintenance, educators debate AI's role in the classroom, and machine-driven workflows are putting growing emphasis on human oversight.

Self-healing GitHub CI that won’t let AI touch your application code

What happened: A new GitHub CI system automatically fixes deployment issues without altering user code.

Why it matters: Developers can reduce maintenance overhead and prevent accidental code changes during repairs.

Context: The tool targets CI pipelines where reliability is critical but code integrity must stay untouched.

Artificial Intelligence: The double-edged sword redefining education

What happened: AI is transforming education with personalized learning tools but raises concerns about over-reliance and equity.

Why it matters: Educators and developers must balance innovation with safeguards to avoid widening skill gaps.

Context: The article highlights both opportunities and risks in integrating AI into curricula.

Cardynal – AI support agent for businesses, no code, WhatsApp and web chat

What happened: A no-code AI agent handles customer support via WhatsApp and web interfaces.

Why it matters: Startups can deploy support tools rapidly without engineering resources.

Context: The platform emphasizes accessibility for non-technical teams.

The Missing Human Half of AI

What happened: AI systems often lack human intuition, leading to errors when context or nuance is required.

Why it matters: Developers must prioritize human-AI collaboration to avoid flawed outputs in critical applications.

Context: The piece argues for designing systems that augment, rather than replace, human judgment.

Sources: Hacker News AI, Google News AI