2026-04-19 20:03:45
When Anthropic introduced Agent Skills last October, they named the core design principle progressive disclosure. The community adopted it instantly. It's now in the official docs, the engineering blog, every tutorial, every breakdown. It has become the canonical term.
I want to suggest a better one: progressive discovery. Not to be contrarian, and not because the term is technically wrong, but because discovery produces a clearer, more intuitive picture of what is actually happening. And when you have that picture clearly in mind, everything about working with Skills becomes more natural.
At runtime, Claude Code scans the name and description of every installed Skill and reasons about whether any of them are relevant to the current task. If one looks promising, it reads the full SKILL.md body and reasons further. Only if the task demands more depth does it go looking for supplementary files — references, scripts, assets. It might stop at level one because that's all it needs.
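Concretely, those layers map onto a Skill's file layout. Here is a minimal, hypothetical SKILL.md (the name, description, and referenced file are illustrative, not from any real Skill):

```markdown
---
name: pdf-forms
description: Fill and extract data from PDF forms. Use when a task involves reading or completing PDF form fields.
---

# PDF Forms

This body is the second layer: Claude reads it only after the
metadata above looked relevant to the current task.

For field-extraction details, read reference.md. That file is the
third layer, fetched only if the task demands more depth.
```

The frontmatter is what gets scanned up front; everything below it sits untouched until Claude decides it is worth reading.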
The progression is not a fixed reveal sequence. It is Claude reasoning its way deeper, one conditional step at a time, each small act of discovery informing whether the next one is even necessary.
The key thing to hold onto is this: Claude is the one doing something. The Skill is not. The Skill is bytes on disk. It has no mechanism, no trigger, no awareness of context. It simply exists, waiting to be found. Claude is the active party. The Skill is a passive resource.
Disclosure implies an active subject — something with intent, making a decision to reveal. In UX design, where the term originates, that's accurate. The interface is the active party — it reads state and decides what to reveal and when. Gmail discloses advanced settings when you're ready for them. The interface is doing something.
When people encounter "progressive disclosure" in the context of Skills, it's natural to absorb that same model. The Skill becomes the active party. It discloses what it needs to, when it needs to. That framing is subtle and mostly unconscious — and it quietly makes the whole thing slightly harder to picture accurately.
It also points your thinking as a Skill author in a slightly unhelpful direction. If the Skill is doing the disclosing, you might find yourself thinking about what it should surface and when, as though you are designing a reveal sequence. You might focus on the structure of the layers rather than on the question that actually matters.
Swap the term and the picture sharpens immediately.
Claude is discovering. The Skill is being discovered.
That one shift makes the whole dynamic legible. Claude discovers the metadata and decides whether to go further. Claude discovers the SKILL.md body and decides whether to go further still. Claude discovers the supporting files only if it needs them. The progression belongs to Claude's reasoning — a chain of small, conditional decisions made by an active subject working its way through a passive resource.
And once you see it that way, what you should be optimising for as a Skill author becomes obvious. You are not designing a reveal sequence. You are making something to be discoverable.
The question you should be asking at every layer is not "what does this Skill disclose here?" but "can Claude find what it needs here, and does it have enough to decide whether to go deeper?" That is a more useful question, and it comes naturally from the right term.
Anthropic's own engineering blog, the post that introduced the term, describes what actually happens in entirely accurate language. Claude triggers. Claude determines. Claude loads. The explanation and the label are quietly in tension with each other.
The community has followed the same pattern. Writers instinctively reach for discovery language when describing Claude's behaviour, even in pieces that headline "progressive disclosure." One widely-read breakdown notes that Claude "should have no problem discovering what resources it needs." Another describes Skills as "dynamically discovered and loaded." The more intuitive framing keeps surfacing in the descriptions, even when the terminology says something slightly different.
The instinct is right. The label just hasn't caught up.
"Progressive disclosure" came from UX, where it belongs. As an analogy it helps with initial understanding — most developers already know the concept, and it gives you a quick way in. But it only takes you so far, and beyond that it can mislead.
Progressive discovery makes it clear. Claude is the subject, working its way incrementally through a passive resource — discovering a little, then a little more, then a little more, until it has what it needs. The Skill's job is to be found, at every layer, by a subject actively reasoning its way through.
That picture, once you have it, is hard to unsee. And it makes building good Skills feel considerably more intuitive.
2026-04-19 20:01:11
Impac Mortgage Holdings disclosed a data breach that exposed the Social Security numbers of 19,253 individuals after an unknown actor accessed its systems in early 2024. The company waited two years after discovery to notify the public and is now offering credit monitoring services.
Read the full article on BeyondMachines
2026-04-19 20:00:58
In this very short blog post I will discuss how to build a class-interval (binned) query in SQL. I will use MS SQL for the example, but since SQL dialects are remarkably similar, the same approach works in any of them.
Here is the decidedly unglamorous query:
SELECT
    amount / 200000 * 200000 AS BinStart,
    amount / 200000 * 200000 + 199999 AS BinEnd,
    COUNT(*) AS Count
FROM Salaries
WHERE end_date IS NULL
GROUP BY
    amount / 200000;
The table itself looks like this:
CREATE TABLE Salaries (
    salary_id int PRIMARY KEY IDENTITY(1, 1),
    employee_id int,
    amount int,
    start_date date,
    end_date date,
    registered datetime DEFAULT (GETDATE())
)
The following line forms the class intervals by dividing the value of the "amount" column by the interval width and then multiplying it back:
amount / 200000 * 200000 AS BinStart
If you think about it, this clearly yields an integer telling us how many whole times the interval width fits into the salary. No FLOOR is needed, because MS SQL performs truncating integer division when both operands are integers. That does not mean every database system behaves this way: in MySQL, for example, FLOOR would be required. Multiplying this count back by the interval width gives the lower bound of the bin, and adding 199999 provides the interval's upper bound. The GROUP BY clause then groups rows by how many whole times the interval width fits into the salary.
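The truncating-division behaviour is easy to sanity-check outside MS SQL. SQLite also truncates when dividing an integer by an integer, so the same query can be exercised from Python with throwaway data (the schema below keeps only the columns the query touches):

```python
import sqlite3

# In-memory sanity check of the binning query. The rows are made up;
# the 180000 salary is excluded because it has an end_date set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Salaries (amount INTEGER, end_date TEXT)")
conn.executemany(
    "INSERT INTO Salaries VALUES (?, ?)",
    [(350000, None), (250000, None), (420000, None),
     (610000, None), (180000, "2025-01-01")],
)
rows = conn.execute("""
    SELECT amount / 200000 * 200000 AS BinStart,
           amount / 200000 * 200000 + 199999 AS BinEnd,
           COUNT(*) AS Count
    FROM Salaries
    WHERE end_date IS NULL
    GROUP BY amount / 200000
    ORDER BY BinStart
""").fetchall()
print(rows)  # [(200000, 399999, 2), (400000, 599999, 1), (600000, 799999, 1)]
```

The 350000 and 250000 salaries land in the same 200000-399999 bin, which is exactly the grouping behaviour the query relies on.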
The "WHERE end_date IS NULL" filter is there because in this database, every salary that is current has a null "end_date" field.
2026-04-19 19:56:39
A few weeks back I came across a post by @boristane sharing a website he made, loggingsucks.com. It caught my eye because it had been shared by my favorite tech YouTuber, @theo. Like most people, I was really inspired by the article and shared it with my team. @lukebsilver, Appwrite's Engineering Lead, was also inspired by it and decided to work on a new PHP library, utopia-php/span, to fix logging throughout the Appwrite codebase.
Before this, Appwrite used a combination of two different libraries targeting logging in two different areas:
utopia-php/console — a very simple wrapper library around stdout logging using functions like Console::success(), Console::error(), etc.
utopia-php/logger — an adapter-based library to push error logs to monitoring systems like Sentry, AppSignal, Raygun, etc.
Combined, these libraries served their purpose for a long time, but we often ran into problems when debugging production issues — the same ones the original article discusses in detail. I'd highly recommend going through that article first so I don't repeat it all here.
Funnily enough, the first tricky problem was deciding on a name. "Logger" was already taken, so we had to be creative. The word "Span" captured exactly what we were trying to solve: a fundamental unit of work with a named, timed operation alongside various attributes, errors, trace IDs, etc.
The first step was to move away from simple log lines to structured logging. Span enforces this by only exposing a single primary method, add(), which accepts a key-value pair.
Before:
Console::info("Deleting project {$project->getId()} (type={$type}, region={$project->getAttribute('region')})");
After:
Span::add('project.id', $project->getId());
Span::add('project.type', $type);
Span::add('project.region', $project->getAttribute('region'));
This massively improved the queryability of our logs — one of the things we struggled with most when going through logs in production.
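To see why structured fields help, here is a tiny Python sketch of treating logs as queryable data instead of grepping formatted strings. The records are made up, mirroring the shape of the post's examples:

```python
import json

# Two structured log lines, shaped like the span exports shown later.
logs = [
    '{"action": "worker.deletes", "project.id": "67f3a9", "project.region": "fra"}',
    '{"action": "worker.builds", "project.id": "a1b2c3", "project.region": "nyc"}',
]

# Filtering on a field is a dict lookup, not a regex over a message string.
records = [json.loads(line) for line in logs]
fra = [r["project.id"] for r in records if r["project.region"] == "fra"]
print(fra)  # ['67f3a9']
```

The same filter against interpolated Console::info() strings would mean maintaining a regex per message format.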
We also wanted the library to be extremely simple to use. Earlier, with "logger", we had to hop through various dependency injection loops just to use it:
public function action(
Message $message,
Document $project,
Log $log, // ← has to be injected just to add a tag
): void {
$log->addTag('projectId', $project->getId());
$log->addTag('type', $payload['type']);
// ...actual work...
}
With Span, it's much simpler:
public function action(
Message $message,
Document $project,
): void {
Span::add('projectId', $project->getId());
Span::add('type', $payload['type']);
// ...actual work...
}
Appwrite's codebase leverages coroutines (via Swoole) for concurrency between requests, similar to goroutines in Go. A naive static implementation would therefore leak state across concurrent requests. Span solves this by letting you choose the storage type:
Span::setStorage(new Storage\Coroutine());
To combine both logger and console capabilities, Span exposes built-in Exporters, which, as the name suggests, export the logs not just to stdout but to any supported adapter. The library currently supports three. A span exported as structured JSON:
{
"action": "worker.deletes",
"span.trace_id": "7a3f9c2b4e1d8f06",
"span.duration": 1.92,
"project.id": "67f3a9",
"project.type": "projects",
"project.region": "fra"
}
The same span rendered for the console:
worker.deletes · 1.92s · 7a3f9c2b
project.id 67f3a9
project.type projects
project.region fra
────────────────────────────────
And a monitoring adapter such as Sentry, with an optional sampler:
Span::addExporter(
new Sentry(dsn: '...'),
// Sampler: drop noisy expected errors, keep everything else.
sampler: function (Span $span): bool {
$error = $span->getError();
return !($error instanceof ExecutorException) || $error->isPublishable();
},
);
One massive improvement we saw was with error logs. Before, we had very verbose and noisy errors that were often hard to make sense of:
[Error] Timestamp: 2026-04-17T10:32:16+00:00
[Error] Type: Utopia\Database\Exception\Timeout
[Error] Message: Query took too long
[Error] File: /usr/src/code/src/Appwrite/Cloud/Platform/Workers/Deletes.php
[Error] Line: 214
Trace: #0 /usr/src/code/app/worker.php(828): ...
Now:
{
"action": "worker.deletes",
"span.trace_id": "7a3f9c2b4e1d8f06",
"span.duration": 2.14,
"project.id": "67f3a9",
"error.type": "Utopia\\Database\\Exception\\Timeout",
"error.message": "Query took too long",
"error.file": "/usr/src/code/src/Appwrite/Cloud/Platform/Workers/Deletes.php",
"error.line": 214,
"error.trace": [
{ "file": "/usr/src/code/app/worker.php", "line": 828, "function": "action" }
]
}
If you're writing PHP in 2026, give utopia-php/span a shot. And a massive shoutout to @lukebsilver, who actually built the library. I just learned from him and wanted to share what I picked up.
2026-04-19 19:56:09
On April 2, 2026, Amazon Web Services announced that Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity. This expands beyond the previous IPv4-only model and allows a cache to accept connections over both IPv4 and IPv6 simultaneously, enabling more flexible connectivity patterns.
In this post, I put the new dual stack capability to the test by verifying IPv4 and IPv6 connectivity on ElastiCache Serverless.
I started by enabling IPv6 at the VPC level by attaching an Amazon-provided IPv6 CIDR block, allowing resources inside the VPC to communicate over IPv6.
I then deployed an ElastiCache Serverless instance and selected dual stack as the Network Type during creation. This option was introduced in the April 2 update and allows the cache to handle both IPv4 and IPv6 connections at the same time. The selected subnets must support both IPv4 and IPv6 address space for this configuration to work.
From an EC2 instance with IPv6 enabled, I first verified that the cache resolves to an IPv6 address:
nslookup -type=AAAA <cache-endpoint>
Output:
The result shows AAAA records, indicating that the cache is reachable over IPv6.
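The same check can also be done programmatically. The Python sketch below splits getaddrinfo()-style results by address family; the entries are fabricated stand-ins, since the real call needs the actual cache endpoint:

```python
import socket

def split_by_family(addrinfo):
    """Split getaddrinfo() results into IPv4 and IPv6 address lists."""
    v4 = [sa[0] for fam, _, _, _, sa in addrinfo if fam == socket.AF_INET]
    v6 = [sa[0] for fam, _, _, _, sa in addrinfo if fam == socket.AF_INET6]
    return v4, v6

# Against a real dual stack endpoint you would call:
#   addrinfo = socket.getaddrinfo("<cache-endpoint>", 6379)
# Here we use fabricated entries shaped like getaddrinfo() output:
fake = [
    (socket.AF_INET, socket.SOCK_STREAM, 6, "", ("10.0.1.5", 6379)),
    (socket.AF_INET6, socket.SOCK_STREAM, 6, "", ("2600:1f18::1", 6379, 0, 0)),
]
v4, v6 = split_by_family(fake)
print(v4, v6)  # ['10.0.1.5'] ['2600:1f18::1']
```

A dual stack endpoint should yield non-empty lists for both families; an IPv4-only cache would leave the IPv6 list empty.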
Next, I validated connectivity with openssl s_client, the approach Amazon Web Services recommends for ElastiCache Serverless, filtering the output for clarity.
openssl s_client -connect <cache-endpoint>:6379 -6 2>&1 | grep -E "Connecting|CONNECTED|Verification|Protocol"
Output:
The CONNECTED status confirms that a TCP connection is successfully established, while Verification: OK indicates that the TLS certificate is valid.
I then repeated the test over IPv4:
openssl s_client -connect <cache-endpoint>:6379 -4 2>&1 | grep -E "Connecting|CONNECTED|Verification|Protocol"
The IPv4 test also succeeds, showing that the same cache is reachable over IPv4 with a valid TLS session.
From this test, the dual stack capability in Amazon ElastiCache Serverless works exactly as described by Amazon Web Services. The cache resolves to an IPv6 address and accepts TLS connections over both IPv6 and IPv4 simultaneously from the same endpoint. This reflects a gradual migration path where IPv6 can be introduced alongside IPv4 traffic without impacting existing application connectivity.
Beyond dual stack, IPv6-only configuration is also supported as a separate option, allowing workloads that fully transition to IPv6 to operate without relying on IPv4 addressing.
Based on this hands-on test, the dual stack capability in ElastiCache Serverless performs well in real usage. The same cache can be accessed over IPv4 and IPv6, with both paths functioning as expected, and this capability is available at no additional charge across all AWS Regions, making it easy to adopt IPv6 in existing ElastiCache Serverless workloads as part of a gradual transition.
2026-04-19 19:54:00
In my last post, I talked about why I started building Juice—mainly out of frustration with class-heavy UI code and how messy things can get as projects scale.
If you haven’t read that yet, here it is:
👉 https://dev.to/stinklewinks/i-got-tired-of-class-heavy-ui-code-so-i-started-building-juice-4ocg
This post is about what came next.
Not just what Juice is, but what I’m actually trying to build with it.
After stepping back, I realized something:
The issue wasn’t just Tailwind-style class overload.
It was bigger than that.
Most UI systems today:
You can make them work—but you’re constantly managing them.
And that’s where things started to feel off to me.
Instead of this:
<div class="flex items-center justify-between p-4 bg-white rounded-lg shadow-md">
What if you could express intent more directly?
<div row centered gap="1" padding="4rem" card>
Not just shorter—but more meaningful.
That’s the direction Juice is going.
Juice is built around one simple idea:
UI should describe intent, not implementation.
Instead of:
You define:
Using attributes that map to a design system.
Classes are flexible, but they come with trade-offs:
Attributes, on the other hand:
Example:
<div grid="2" gap="4">
<div card>A</div>
<div card>B</div>
</div>
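For what it's worth, plain CSS attribute selectors already get you part of the way there. Here is a rough sketch of how such attributes could map to styles; this is my own illustration of the idea, not Juice's actual implementation:

```css
/* Hypothetical mappings from intent attributes to styles. */
[card] {
  background: #fff;
  border-radius: 0.5rem;
  box-shadow: 0 2px 6px rgba(0, 0, 0, 0.1);
}

[grid="2"] {
  display: grid;
  grid-template-columns: repeat(2, 1fr);
}

[gap="4"] {
  gap: 1rem;
}
```

Enumerating attribute values by hand obviously does not scale the way generated utility classes do, which is presumably where a build step or design-token layer comes in.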
You immediately understand:
No mental decoding required.
Juice isn’t just about styling elements.
It’s about creating a cohesive UI system that includes:
The goal is to make UI:
Right now, Juice is still early.
But the direction is clear:
And eventually:
A system where developers and non-developers can both build interfaces without fighting the code.
Some of the things I’m actively thinking through:
At the end of the day, this isn’t just about CSS.
It’s about reducing friction when building ideas.
Because when UI becomes easier to reason about:
And that’s the real goal.
This is still evolving, and I’m learning as I go.
If you’ve run into similar frustrations—or have thoughts on this approach—I’d love to hear them.
Repo here:
👉 https://github.com/citrusworx/webengine/tree/master/libraries/juice
Next up, I’ll probably dive deeper into:
Appreciate you reading 🙏