2026-05-01 21:54:19
I spent three days refactoring code that should have taken three hours.
The project was a voice recognition SaaS — Next.js on the front, FastAPI on the back. The original MVP had been built fast. Really fast. AI-assisted, prompt-driven, vibe-first. It worked. Users could log in, the core flows ran, nothing was on fire.
Then I had to touch it.
What I found: a single file with six unrelated methods sharing state in ways nobody had documented. Variable names that made sense to an AI in the moment — handleProcessV2, tempDataFinal, newLogicRefactored — but communicated nothing about intent or ownership. Comments that described what the code did, not why it existed. Dead code scattered throughout, still referenced in two places, doing nothing.
The actual bug I was fixing was small. But to fix it safely, I had to understand the blast radius. And that took three days, not three hours.
That gap — three hours versus three days — is what I now think of as Comprehension Debt.
Technical debt is when you take a shortcut and promise to clean it up later. Engineers have managed it for decades — it's in your backlog, it has a ticket, someone owns it.
Comprehension Debt is when your codebase grows faster than your team's collective understanding of it. There's no ticket for it. Nobody owns it. It's invisible until someone has to touch something.
AI coding tools produce Comprehension Debt at scale, and they do it quietly.
The code works. The tests pass (if there are tests). The PR gets merged. Leadership sees velocity metrics improving. Developers feel productive. And underneath all of that, the cost of understanding is being silently deferred — sprint after sprint.
When that cost comes due, it doesn't feel like a debt payment. It feels like entropy. It feels like your team is just getting slower for no clear reason.
Here's the pattern I've seen, and that research is now documenting at scale:
Day 1: The AI-assisted feature ships in an afternoon. What used to take a senior engineer two days now takes two hours. Everyone is impressed.
Day 30: A bug appears in that feature. A developer opens the file. The function is 200 lines long, handles four unrelated concerns, and was clearly assembled from multiple AI-generated fragments that nobody connected into a coherent whole. The fix takes two days instead of two hours.
Day 90: The same developer gets asked about another feature in the same area. They say it'll take "a week to be safe." Nobody questions it because the codebase just feels complex now. But nobody can point to why.
This is Comprehension Debt compounding. And it's happening on teams right now — not because engineers are careless, but because AI coding tools are optimized for generation, not for the human understanding that follows.
A large-scale study of 8.1 million pull requests found that technical debt increases 30–41% after AI coding tool adoption. Vibe-coded projects accumulate technical debt roughly three times faster than traditionally written ones — not because the code looks wrong, but because it lacks the documentation, test coverage, and architectural coherence that come from a human who actually thought through the system design.
In my experience — and in what I'm hearing from other teams — Comprehension Debt concentrates in predictable places:
Responsibility collapse: Logic that should live in three separate modules ends up in one file because the AI generated it together and nobody restructured it. You can't change one thing without understanding everything.
Naming that made sense to a prompt: AI-generated variable and function names often reflect the prompt that produced them, not the domain they operate in. processUserAudioV3 tells you nothing about what changed from V2, or why V3 exists.
Absent rationale: Traditional code accumulates informal documentation — commit messages, PR descriptions, inline comments that explain why a decision was made. AI-generated code skips this layer entirely. You inherit the decision without the reasoning.
Dead code at scale: AI assistants frequently generate helper functions, fallback handlers, and utility methods that never get called. In a human-written codebase, these get caught in review. In a vibe-coded codebase, they accumulate, cluttering the dependency graph and making blast radius analysis harder.
Invisible test coverage gaps: GitHub's research shows developers complete tasks 55% faster with AI coding tools. Test writing velocity does not keep pace — it stays the same, or gets worse, because the code is harder to understand and test boundaries are blurry.
Individual developers feel this. But engineering managers are the ones who absorb the consequences without seeing the cause.
The symptoms look like ordinary engineering friction: estimates balloon, simple fixes take days, velocity drifts down sprint after sprint with no obvious cause. None of them point directly to Comprehension Debt. They get attributed to scope creep, team capacity, or just the inherent complexity of the problem domain.
But the actual cause is that your team is operating a codebase they don't fully understand — and that gap is widening every sprint as AI-generated code continues to be merged without the comprehension layer being built alongside it.
I'm not arguing against AI-assisted development. I use it daily. The velocity gains are real and the economics of ignoring them are bad.
But there are specific practices that prevent Comprehension Debt from accumulating:
Make understanding a merge requirement, not a review nice-to-have. Before AI-generated code merges, someone on the team — not the author — should be able to explain what it does, why it exists, and what breaks if you remove it. This isn't gatekeeping. It's the minimum standard for code that will run in production.
Treat dead code as a blocker, not a backlog item. AI assistants generate dead code constantly. Remove it at the point of generation, not later. Later never comes.
Write the rationale, not just the code. Commit messages and PR descriptions should explain decisions, not summarize diffs. AI can write the diff. Only a human can write the reason.
Surface stall signals early. Comprehension Debt reveals itself as stalled tickets — someone is blocked because they don't understand the codebase well enough to move. If a ticket sits in "In Progress" for three or more days with no update, it's a signal worth investigating, not a standup detail to acknowledge and move past.
That last point is why I built Ordia. After the three-day refactor I described at the top of this post, I started thinking about what would have helped. Not better AI tools — the AI was fine. What was missing was visibility into where the team was stuck, and why. Ordia monitors your issue tracker and code host automatically and delivers a morning digest to your team chat: stalled tickets, unreviewed PRs, blockers before they become delays. It doesn't fix Comprehension Debt. But it makes it visible before it costs you a sprint.
AI coding tools have changed what it means to ship software. Writing code is no longer the bottleneck.
Understanding it is.
The teams that figure out how to maintain comprehension — across the full codebase, across the full team, at the speed AI-assisted development runs — are the ones that will actually compound the velocity gains instead of spending them on debugging sessions nobody can explain.
The vibe was good. Now comes the understanding.
I'm building Ordia — a tool that surfaces stalled tickets and forgotten PRs to your team chat every morning, so engineering managers have visibility into where the team is blocked before it delays the sprint. If the problem described in this post resonates, I'd genuinely like to hear what it looks like on your team.
2026-05-01 21:47:23
A note before we start: What follows is fiction. A thought experiment set in an alternate world that only looks like ours by accident. Any resemblance to real labs, real bosses, or real workplace policies is the reader’s imagination doing the heavy lifting. With that out of the way.
In this imagined world, strip the hype off AI (artificial intelligence, software that learns from huge amounts of data to answer questions, write text, and so on) and you’re looking at a religion:
A new priesthood (the AI labs) telling us the details are too complex for normal people to question. Just trust them. Sacred texts nobody is allowed to check. Closed code, secret training data, “trust us, we tested it.”
Personal stories used as proof. “It changed my life” is not evidence. It’s a church testimony. End-times talk dressed up as forecasting.
Either heaven (an AGI utopia, where AGI means artificial general intelligence, a hypothetical future AI as smart or smarter than humans at almost everything) or hell (AI doom, the idea that AI wipes us out).
Notice how both endings need you to fund the church now: Pay-to-be-saved. Buy the Pro plan and your sins of being slow are forgiven. Heretics already getting punished, quietly. Critics mocked, artists ignored, workers fired for saying no.
Here is the part nobody can fully explain. Every company, every government, every school suddenly has to adopt AI. Right now. Faster. The pressure does not match the actual results yet. Real productivity gains are mixed. Real profits at most AI companies are negative.
And still the orders come down from the top: roll it out, train everyone, make it mandatory. Why? Maybe it is silver dollars. Maybe it is investor money chasing the next big thing. Maybe it is governments quietly pushing because whoever controls AI controls the next century.
Maybe it is something simpler and uglier, like bosses who finally have an excuse to cut headcount. We do not really know. The order comes from somewhere above, and it does not get questioned out loud.
Christianity had the same moment. In the year 312, the Roman Emperor Constantine had a vision before a battle, won, and made Christianity the favored religion of the empire.
Almost overnight, a small persecuted sect became the official faith of the most powerful state on earth. Bishops got palaces. Pagans got pushed out of public jobs. Was it because Constantine truly believed?
Because it was politically useful to unite a fracturing empire under one faith? Because the church had become too big to fight, so better to ride it? Historians still argue.
The point is the conversion came from the top, the reasons stayed murky, and once it started, refusing was no longer a neutral choice. It was a career-ending one. That is where we are now.
The Constantine moment for AI has already happened. We just do not know whose vision it was, or what they really saw.
Here is the strange part. AI does not have a messiah yet. Christianity had a founder, a life, a death, and then the church came after. AI has the church first, and is still waiting for its messiah to show up. Right now there are only candidates. AGI itself is the most popular pick, a savior that has been “two years away” for about ten years now.
Some treat Sam Altman or other lab bosses as prophets, but nobody really believes they are the chosen one, including them. Others point at the next unreleased model, always the next one, never this one. That is what makes this moment odd.
The faith is huge.
The followers are everywhere.
The money is real.
But the figure at the center of it all is still missing. People are praying toward an empty chair and calling it inevitable.
The enforcement is already visible. Forced AI use at work.
Teachers told to “embrace the tools” or lose their jobs. Writers called Luddites (a name for people in the 1800s who smashed factory machines because the machines took their jobs, now used as an insult for anyone who criticizes new tech) for refusing.
Whole jobs told to retrain or die, by bosses who keep their human staff.
The punishment is the layoff. The confession is your usage report.
Parents who keep kids off screens. Musicians who stamp “100% human” on the album.
Lawyers who write by hand. Teachers going back to paper exams. They’ll be mocked as old-fashioned, anti-progress, snobs. Every insult the winning side throws at the losing one.
Christianity took 300 years to take over an empire. The empire took over AI in about 30 months. That is not a sign the tech is good.
That is a sign nobody got to vote. The real question is whether enough people will still know how to read, think, and write on their own to notice what was lost, before the tools that used to do those things for them get too expensive to live without.
Again, fiction. Any similarity to the world outside your window is between you and your window.
2026-05-01 21:46:43
A few weeks ago I was reading a model card for an open-weight code model. It claimed pass@1 = 67% on HumanEval. I tried to reproduce it. I got 54%.
I went back to the model card. The metric was named, the dataset was named, the model checkpoint hash was published. Everything looked reproducible.
Except: which version of HumanEval? The original 164 problems, or the de-contaminated 161? What temperature? What seed for nucleus sampling? What was the threshold the team committed to before they ran the eval, and how do I know the published 67% is not the best of three runs at three temperatures?
I read the paper. I read the README. I read the eval harness source. I could not answer any of those questions from the published artifacts. I could only ask the authors, and they could only tell me what they remembered. And I had no way to distinguish what they remembered from what they wished they had done.
This is not a problem about that specific model card or those specific authors. It is a problem about every published ML accuracy number I have ever read.
A claim like "our model achieves 91.3% accuracy on benchmark X" can be wrong, in published form, in at least these five ways, none of which leave a forensic trace: the dataset version can differ from the one named, the sampling parameters can be tuned after looking at results, the seed can be cherry-picked, the success threshold can be chosen after the number is known, and the published figure can be the best of several runs.
Each of these is consistent with current best-practice reporting. Each leaves the published number unfalsifiable: a reader cannot, even in principle, distinguish honest reporting from any of the above.
Pre-registration solved this exact problem in adjacent fields: clinical trials register their primary endpoints before enrollment begins, and experimental psychology adopted pre-registered protocols in response to the replication crisis.
ML never got the equivalent. The closest thing — the ML Reproducibility Challenge — is an annual peer-driven effort to re-run published experiments. It produces excellent post-hoc analysis but does not change the publication-time commitment surface.
The 2026 regulatory window is the part that matters most for builders. The EU AI Act Article 12 requires automatic logging of evaluation events for high-risk systems. Article 18 requires 10-year retention. Both enter force August 2, 2026. NIST AI RMF references content-addressed audit trails as a recommended control. ISO/IEC 42001:2023 mandates documented information practices that PRML directly satisfies.
In other words: there is now a regulatory deadline by which "we have a tradition of reporting these numbers honestly" stops being a sufficient answer.
I drafted a small format, working draft v0.1, currently under public review. It is called PRML — Pre-Registered ML Manifest. The whole spec fits in a single YAML schema:
version: "prml/0.1"
claim_id: "01900000-0000-7000-8000-000000000000"
created_at: "2026-05-01T12:00:00Z"
metric: "accuracy"
comparator: ">="
threshold: 0.85
dataset:
  id: "imagenet-val-2012"
  hash: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
seed: 42
producer:
  id: "studio-11.co"
That is the entire required surface. Eight fields. Plain text. UTF-8. YAML 1.2 strict subset (block style only, lexicographic key ordering, no comments, no flow collections).
The format defines a deterministic canonicalization. Given any logical YAML mapping with these fields, there is exactly one canonical UTF-8 byte sequence. The SHA-256 of those bytes is the manifest hash.
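To make the canonicalize-then-hash step concrete, here is a rough sketch in Python. It is not the reference implementation: it deliberately skips the spec's quoting, escaping, and number-formatting rules, and the function name canonical_bytes and the truncated dataset hash are illustrative only.

```python
import hashlib

def canonical_bytes(mapping, indent=0):
    """Illustrative sketch: lexicographic keys, block style, two-space
    indentation, UTF-8. The normative rules live in the PRML spec."""
    lines = []
    for key in sorted(mapping):
        value = mapping[key]
        pad = "  " * indent
        if isinstance(value, dict):
            lines.append(f"{pad}{key}:")
            lines.append(canonical_bytes(value, indent + 1).decode("utf-8"))
        else:
            lines.append(f"{pad}{key}: {value}")
    return "\n".join(lines).encode("utf-8")

claim = {
    "version": "prml/0.1",
    "claim_id": "01900000-0000-7000-8000-000000000000",
    "created_at": "2026-05-01T12:00:00Z",
    "metric": "accuracy",
    "comparator": ">=",
    "threshold": 0.85,
    "dataset": {"id": "imagenet-val-2012", "hash": "e3b0c442..."},
    "seed": 42,
    "producer": {"id": "studio-11.co"},
}

# The manifest hash is the SHA-256 of the canonical bytes.
print(hashlib.sha256(canonical_bytes(claim)).hexdigest())
```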
The hash is published before the experiment runs. After the experiment, an independent verifier can recompute the canonical bytes from the manifest, confirm the SHA-256 matches the published hash, and check the observed metric against the committed comparator and threshold. The result is an exit code: 0 (PASS), 10 (FAIL), 3 (TAMPERED), or one of the diagnostic codes. There is no trust in the producer required at verification time. Anyone with the manifest, the dataset, and the model can reproduce the verdict offline.
Honest amendments — "we found 12 mislabeled examples and re-ran" — do not overwrite. They append. Each new manifest carries a prior_hash field pointing to the manifest it amends. The chain is the audit log. When a regulator or reviewer asks "what was committed when?", the answer is one hash, and from that hash the entire history is recoverable.
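As an illustration, an amendment manifest might look like the following. The required fields mirror the schema above; the values and the exact placement of prior_hash are placeholders, and the spec is authoritative on layout.

```yaml
# Illustrative amendment: values are placeholders, not spec examples
version: "prml/0.1"
claim_id: "01900000-0000-7000-8000-000000000001"
created_at: "2026-05-03T09:00:00Z"
metric: "accuracy"
comparator: ">="
threshold: 0.85
dataset:
  id: "imagenet-val-2012-relabeled"
  hash: "aa11bb22..."            # hash of the corrected dataset
seed: 42
producer:
  id: "studio-11.co"
prior_hash: "1a3466cc08ee..."    # manifest hash of the claim this one amends
```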
The reference implementation is a single-file Python CLI called falsify, MIT-licensed, 1287 lines. Install it the usual way:
pip install falsify
Initialize a claim:
falsify init imagenet-87
This writes .falsify/imagenet-87/spec.yaml with the required PRML fields as placeholders. Edit the file with your real values:
version: "prml/0.1"
claim_id: "01900000-0000-7000-8000-000000000010"
created_at: "2026-05-01T14:00:00Z"
metric: "accuracy"
comparator: ">="
threshold: 0.87
dataset:
  id: "imagenet-val-2012"
  hash: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
seed: 42
producer:
  id: "your-org.example"
Lock it:
$ falsify lock imagenet-87
locked: yes (sha256:1a3466cc08ee, locked_at 2026-05-01T14:00:00Z)
Now the spec is hash-bound. If anyone — including you — edits the YAML, the next falsify verify exits 3 and refuses to produce a verdict.
Run the experiment, capture the metric value (let us say 0.876), and verify:
$ falsify verify imagenet-87 --observed 0.876
PASS metric=accuracy observed=0.876 >= threshold=0.87
exit 0
If the team had silently raised the threshold to 0.88 after seeing the result:
$ falsify verify imagenet-87 --observed 0.876
TAMPERED spec hash drift detected
recorded: 1a3466cc08ee...
current: 7b2c9a5d1e4f...
exit 3
The CI pipeline halts. The deploy does not happen. There is no judgment call.
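In a CI pipeline, the gate can be a single step that runs the verifier and lets the exit code decide. A hypothetical shell step, assuming the eval job wrote the observed metric to metrics/accuracy.txt:

```bash
# Hypothetical CI step. Exit 0 lets the pipeline continue;
# exit 10 (FAIL) or 3 (TAMPERED) fails the step and blocks the deploy.
falsify verify imagenet-87 --observed "$(cat metrics/accuracy.txt)"
```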
The most reasonable skeptical question about a content-addressed format is: what guarantees that two implementations produce the same canonical bytes for the same input?
For v0.1 we publish 12 conformance test vectors. Each vector defines an input mapping, the canonical byte sequence it must produce, and the expected SHA-256 manifest hash. The vectors exercise:
| Test | Property |
|---|---|
| TV-001 | Minimal valid manifest |
| TV-002 | Key-ordering invariance — random insertion order produces same hash |
| TV-003 | Single-bit-of-content sensitivity — 0.85 vs 0.86 produces different hash |
| TV-004 | Optional fields populated (model.id, model.hash, dataset.uri) |
| TV-005 | Unicode handling in producer.id |
| TV-006 | Maximum seed value (2⁶⁴ − 1) |
| TV-007 | Minimum seed (0) |
| TV-008 | Equality comparator with integer-valued threshold |
| TV-009 | Amendment with prior_hash linkage |
| TV-010 | pass@k metric for code generation |
| TV-011 | AUROC with strict comparator |
| TV-012 | Regression metric with <= comparator |
A new implementation in Rust, Go, or TypeScript is conformant only if it reproduces all 12 vectors exactly. The reference implementation has 28 unittest assertions in CI that lock in the v0.1 hash contract; any code change that breaks a vector forces a v0.2 spec bump.
PRML does not establish whether a claimed metric is correct, fair, or sufficient. It establishes only that the claim was committed before it was tested. falsify is one implementation; a second implementation in any language passes if it reproduces the test vectors.
The cost of adopting PRML at the experiment level is one hash function call. SHA-256 is FIPS 180-4, available in every standard library written since 2002. The format is UTF-8 plain text, readable in 2046 by any tool that can read text.
The cost of not adopting it scales with deployment scope. For a personal project, zero. For a research paper, growing pressure as reviewers begin to ask. For a product subject to EU AI Act Annex III obligations, measurable in regulatory exposure plus legal review hours. For a foundation model that will be cited in safety cases for a decade, the cost is roughly the credibility of every accuracy claim you have ever shipped.
This is a working draft. v0.2 freeze is targeted 2026-05-22. Concrete asks and open questions are collected in the discussion thread linked below.
Spec: spec.falsify.dev/v0.1
Code: github.com/sk8ordie84/falsify
Discussion: github.com/sk8ordie84/falsify/discussions/6
2026-05-01 21:44:38
Hey folks!
Continuing the My Broker B3 series, we've reached one of the most business-logic-rich services in the ecosystem: the Broker Wallet API.
In previous posts we built the B3 infrastructure (market price sync and matching engine). Now we enter the financial core of the broker. This service is the guardian of the investor's money and assets — it needs to be correct above all else.
The trading-broker-wallet is the financial custody service of the ecosystem. Its responsibility is to react to order lifecycle events and ensure the user's money and assets are managed correctly at each step.
It doesn't receive direct buy commands — it listens to Kafka events and acts according to each order's status:
Kafka: order-events-v1
│
┌────┴────────────────────────┐
│ │
PENDING FILLED / REJECTED
│ │
reserveBalance() settleOrder() / refundBalance()
│ │
Blocks balance Settles or refunds
Imagine the following scenario without proper custody: a user has R$ 1,000 in the account and places two buy orders of R$ 800 each. Each order passes the balance check on its own, both get accepted, and when both fill the account goes negative.
The wallet solves this with the concept of blocked balance: the moment an order enters as PENDING, the value is immediately reserved. The available balance is always balance - blockedBalance.
| Technology | Usage |
|---|---|
| Java 21 + Spring Boot 3.5.11 | Service core |
| MySQL + Flyway | Operational persistence and versioning |
| Spring Kafka | Order event consumption |
| Spring Data Redis | Real-time prices for portfolio valuation |
| SpringDoc OpenAPI | Swagger UI documentation |
Before diving into the logic, it's important to understand the three entities that support the service:
Account
├── userId (UNIQUE)
├── balance ← total balance (includes blocked amount)
├── blockedBalance ← amount reserved for PENDING orders
└── currency
Account (1) ──▶ (N) Position
├── ticker
├── quantity
└── averagePrice
WalletTransaction (audit log)
├── orderId
├── userId
├── transactionType ← RESERVE | SETTLEMENT | REFUND
└── amount
The blockedBalance is the central piece of custody. The balance never goes below zero — only blockedBalance goes up and down as orders progress through their lifecycle.
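The service methods below call getAvailableBalance() on the entity. A minimal sketch of how the Account entity can expose it, assuming the field names from the diagram above (illustrative, not necessarily the project's exact code):

```java
// Derived value on the Account entity: what can actually be spent or withdrawn.
// Assumes blockedBalance is initialized (see the NPE fix later in this post).
public BigDecimal getAvailableBalance() {
    return balance.subtract(blockedBalance);
}
```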
When an order is created, it arrives as PENDING. The wallet must immediately block the maximum amount that could be debited (quantity × limit price):
@Transactional
public void reserveBalance(OrderEventDTO event) {
BigDecimal amountToReserve = event.getPrice().multiply(event.getQuantity());
Account account = accountRepository.findByUserId(event.getUserId())
.orElseThrow(() -> new RuntimeException("Account not found for user: " + event.getUserId()));
// Available = balance - blockedBalance
BigDecimal availableBalance = account.getAvailableBalance();
if (availableBalance.compareTo(amountToReserve) < 0) {
throw new RuntimeException("Insufficient balance to reserve funds.");
}
// Only blockedBalance increases — total balance stays intact
account.setBlockedBalance(account.getBlockedBalance().add(amountToReserve));
accountRepository.save(account);
// Audit record
walletTransactionRepository.save(WalletTransaction.builder()
.userId(event.getUserId())
.orderId(event.getOrderId())
.transactionType(TransactionType.RESERVE)
.amount(amountToReserve)
.build());
}
Important point: the total balance doesn't change at this step. Only blockedBalance increases. The money is still in the account — it's just reserved.
The order was executed by the Matching Engine. Now the money actually leaves. But here's a subtlety: the executed price may differ from the reserved price.
Example: user placed a buy order with a limit of R$ 30.00, but the market executed at R$ 29.50. The reserved amount was R$ 300 (10 shares × R$ 30), but the actual executed amount was R$ 295 (10 × R$ 29.50). The R$ 5.00 difference automatically becomes available again.
@Transactional
public void settleOrder(OrderEventDTO event) {
BigDecimal totalReserved = event.getPrice().multiply(event.getQuantity());
BigDecimal totalExecuted = event.getExecutedPrice().multiply(event.getQuantity());
Account account = accountRepository.findByUserId(event.getUserId())
.orElseThrow(() -> new RuntimeException("Account not found"));
// Release the initial block
BigDecimal newBlockedBalance = account.getBlockedBalance().subtract(totalReserved);
account.setBlockedBalance(newBlockedBalance.compareTo(BigDecimal.ZERO) < 0
? BigDecimal.ZERO
: newBlockedBalance);
// Debit the actual executed amount from total balance
account.setBalance(account.getBalance().subtract(totalExecuted));
accountRepository.save(account);
// Update asset position
updatePosition(event, account);
}
The order was rejected (price outside market range, ticker not found in Redis, etc.). The blocked money becomes available again:
@Transactional
public void refundBalance(OrderEventDTO event) {
BigDecimal amountToRefund = event.getPrice().multiply(event.getQuantity());
Account account = accountRepository.findByUserId(event.getUserId())
.orElseThrow(() -> new RuntimeException("Account not found for refund"));
// Only subtract from blockedBalance — total balance was never touched
if (account.getBlockedBalance().compareTo(amountToRefund) >= 0) {
account.setBlockedBalance(account.getBlockedBalance().subtract(amountToRefund));
} else {
log.warn("Refund amount exceeds blocked balance for order {}", event.getOrderId());
account.setBlockedBalance(BigDecimal.ZERO);
}
accountRepository.save(account);
}
When a purchase is settled, we need to update the user's position in that asset. The weighted average price ensures that purchases made at different times are correctly reflected:
New Average Price = (Current Cost + New Cost) / (Current Qty + New Qty)
private void updatePosition(OrderEventDTO event, Account account) {
Position position = positionRepository
.findByAccountIdAndTicker(account.getId(), event.getTicker())
.orElse(Position.builder()
.account(account)
.ticker(event.getTicker())
.quantity(BigDecimal.ZERO)
.averagePrice(BigDecimal.ZERO)
.build());
boolean isSell = "SELL".equalsIgnoreCase(event.getSide());
if (isSell) {
BigDecimal newQuantity = position.getQuantity().subtract(event.getQuantity());
if (newQuantity.compareTo(BigDecimal.ZERO) <= 0) {
positionRepository.delete(position);
return;
}
position.setQuantity(newQuantity);
// Average price stays the same on sell
} else {
// BUY: recalculate weighted average price
BigDecimal currentCost = position.getAveragePrice().multiply(position.getQuantity());
BigDecimal newCost = event.getExecutedPrice().multiply(event.getQuantity());
BigDecimal totalQuantity = position.getQuantity().add(event.getQuantity());
BigDecimal newAveragePrice = currentCost.add(newCost)
.divide(totalQuantity, 4, RoundingMode.HALF_UP);
position.setQuantity(totalQuantity);
position.setAveragePrice(newAveragePrice);
}
positionRepository.save(position);
}
Practical example: buy 10 shares at R$ 30.00 and the position becomes 10 @ R$ 30.00. Buy 10 more at R$ 20.00 and the new average is (10 × 30 + 10 × 20) / 20 = R$ 25.00. Sell 5 and the quantity drops to 15 while the average price stays at R$ 25.00.
During the code review, I identified issues that would cause silent failures in production:
1. Guaranteed NPE on first deposit
New account created without initializing blockedBalance. The first call to getAvailableBalance() would throw NullPointerException.
// ❌ Before
Account.builder()
.balance(BigDecimal.ZERO)
// blockedBalance missing → NPE
// ✅ After
Account.builder()
.balance(BigDecimal.ZERO)
.blockedBalance(BigDecimal.ZERO)
2. Withdraw validated total balance, not available balance
Allowed withdrawing money that was blocked in pending orders:
// ❌ Before — allows withdrawing reserved funds
if (account.getBalance().compareTo(amount) < 0)
// ✅ After — validates only what's available
if (account.getAvailableBalance().compareTo(amount) < 0)
3. SELL increased position instead of decreasing it
The updatePosition() method always added quantity, completely ignoring the side field. Selling 10 shares would add 10 to the position.
4. Kafka consumer silently swallowed exceptions
Errors were logged but the offset was committed — the message was discarded without retry. Now the consumer rethrows the exception so Kafka can apply its retry policy correctly.
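A sketch of that consumer shape, with illustrative names (the listener method and walletService.handle are not necessarily the project's real identifiers):

```java
@KafkaListener(topics = "order-events-v1", groupId = "trading-broker-wallet")
public void onOrderEvent(OrderEventDTO event) {
    // No catch-and-log-only block here: if handling fails, the exception
    // propagates, the offset is not committed, and Spring Kafka's error
    // handler can retry or dead-letter the record.
    walletService.handle(event);
}
```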
To prevent duplicates in case of Kafka reprocessing, we added a database constraint:
-- V4 migration
ALTER TABLE wallet_transactions
ADD CONSTRAINT uq_order_transaction UNIQUE (order_id, transaction_type);
If the same PENDING event arrives twice, the second insert will fail with a constraint violation — preventing the balance from being blocked twice for the same order.
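In code, the constraint can be complemented by an explicit idempotency check at the top of reserveBalance(). This is a sketch: the derived existsBy... query method is illustrative and would need to be declared on the repository, while the unique constraint stays as the hard backstop.

```java
// Skip redelivered events: if a RESERVE transaction already exists for this
// order, the balance was blocked once and must not be blocked again.
if (walletTransactionRepository.existsByOrderIdAndTransactionType(
        event.getOrderId(), TransactionType.RESERVE)) {
    log.info("Order {} already reserved, ignoring duplicate event", event.getOrderId());
    return;
}
```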
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/wallet/{userId}/deposit | Deposit funds |
| POST | /api/v1/wallet/{userId}/withdraw | Withdraw available funds |
| GET | /api/v1/wallet/{userId}/summary | Summary: balance + positions + total equity |
| GET | /api/v1/wallet/{userId}/positions | List asset positions |
| GET | /api/v1/wallet/{userId}/transactions | Transaction history |
📄 Swagger UI: http://localhost:8085/swagger-ui.html
With the application running locally: publish an order event to the order-events-v1 topic, watch the trading-broker-wallet consumer process it and keep the balance in sync, then inspect the result through the Swagger UI at http://localhost:8085/swagger-ui.html.
With the Wallet ready to react to order events, the next step is building the trading-broker-order — the orchestrator that will manage the complete order lifecycle: validate the requested ticker against trading-broker-asset, persist the order as PENDING, and publish it to Kafka. Once that service is ready, we'll have the first complete end-to-end flow: from the user's click all the way to the updated wallet balance.
Got any questions about the custody logic or the PENDING → FILLED → REJECTED cycle? Drop them in the comments!
⬅️ Previous Post: The Heart of B3: Building the Matching Engine with RabbitMQ and Redis
📘 Series Index: Series Roadmap
2026-05-01 21:41:43
Filament v5 ships a native register page. Some plugins ignore it and roll their own controller. Here's how we added captcha, honeypot, role assignment, and event bridging — entirely through Filament's documented hooks, without touching register() itself.
Filament is one of the best backend frameworks for Laravel. It builds admin panels fast, looks great out of the box, and is deeply extensible. But most non-trivial projects don't stop at the backend. If you're building a SaaS, a web app with a public-facing portion, or anything with real users signing up, you eventually need authentication that lives outside the admin shell.
A common answer is to reach for one of Laravel's official starter kits. The modern lineup — React, Vue, and Livewire flavours, all with auth scaffolding baked in — is the path most teams take today, and earlier kits like Breeze and Jetstream from the Laravel 10.x era are still maintained and widely deployed. They're battle-tested, beautifully documented, and a great fit for many projects: they handle public-facing auth completely while letting Filament focus on the admin side. For plenty of teams that separation is exactly what you want, and these kits absolutely deserve their place in the Laravel ecosystem.
That said, if you'd rather harden your authentication and roll your own using Filament, you don't need to reach outside the framework — Filament v5 already has everything you need to extend. It ships a complete auth flow — \Filament\Auth\Pages\Login, \Filament\Auth\Pages\Register, email verification, password reset, and multi-factor authentication — and these aren't just admin-panel utilities. They're real, themable, extensible pages you can put in front of your users.
The catch: the register page in particular needs work before it's production-ready. Out of the box there's no captcha, no honeypot, no role assignment, no Laravel-event bridge for listeners that other parts of your app already depend on. Most plugins solve this by reaching for a custom Laravel controller, a Blade form, and manual validation — abandoning Filament's auth flow entirely.
We took the other approach while building tallcms/filament-registration. Every production feature — captcha, honeypot, rate limiting, default role assignment, post-registration redirect — landed inside Filament's documented extension points. We never overrode register(). These patterns apply to any Filament v5 project that needs a public register page without leaving the framework.
The Core Idea: Two Hooks, Not One Override
Filament's Register page exposes two extension points:
mutateFormDataBeforeRegister(array $data): array — runs after validation but before User::create(). The place for anything that should gate registration.
handleRegistration(array $data): Model — wraps user creation. The place for anything that should run after the user exists.
The temptation, especially if you're porting from a legacy controller, is to override register() and rebuild everything inside one method. Don't. Filament's register() already orchestrates throttling, validation, email verification, response dispatch, and event firing. You won't get all those right by rewriting it. You don't need to.
class Register extends \Filament\Auth\Pages\Register
{
public function form(Schema $schema): Schema
{
return $schema->components([
$this->getNameFormComponent(),
$this->getEmailFormComponent(),
$this->getPasswordFormComponent(),
$this->getPasswordConfirmationFormComponent(),
HoneypotField::make(),
CaptchaField::make(),
]);
}
protected function mutateFormDataBeforeRegister(array $data): array
{
$this->checkHoneypot($data);
$this->throttleCaptcha();
$this->verifyCaptcha($data);
unset($data[$this->honeypotField], $data[$this->tokenField]);
return $data;
}
protected function handleRegistration(array $data): Model
{
$user = parent::handleRegistration($data);
$this->maybeMarkEmailVerified($user);
$this->maybeAssignDefaultRole($user);
event(new \Illuminate\Auth\Events\Registered($user));
return $user;
}
}
That's the entire shape. Everything below is what goes inside those methods.
Layer 1: Honeypot via Validation, Not Stealth
The legacy approach to honeypots is to silently render a fake success page when the field is filled, hoping the bot logs a false positive. It works, until it doesn't — modern bots roll their own success-page detectors.
A cleaner approach inside Filament: throw a ValidationException. Filament catches Laravel's validation exceptions automatically and surfaces the message on the form. Bots that don't fill the honeypot get through validation; bots that do see a generic "Bot check failed" message attached to the honeypot field.
if (! empty($data[$this->honeypotField])) {
throw ValidationException::withMessages([
$this->honeypotField => 'Bot check failed. Please try again.',
]);
}
Layer 2: Rate-Limit Before You Verify
Layer 3: Pluggable Captcha via a Contract
Layer 4: Two-Layer Config (Env + Admin UI)
Layer 5: Container-Bound Redirects
We have shared this in detail on this blog.