
The Query Optimizer’s Mind: Architecting SQL for Distributed Scale

2026-04-11 04:48:41

The Black Box Problem

Most developers treat a SQL database like a black box: you put a query in, and a result comes out. But when you’re working with petabytes of data in a distributed Lakehouse like Snowflake or Databricks, that black box mindset is exactly what leads to five-figure cloud bills and constant timeouts.

In my experience, the secret to high-performance data engineering isn't writing clever SQL—it’s writing SQL that is easy for the Query Optimizer to understand. You have to think like the engine.

The optimizer is essentially trying to find the path of least resistance through your data, and if you give it a messy map, it will take the scenic (and expensive) route.


1. The Magic of Predicate Pushdown

The most important thing to understand about a distributed engine is that moving data is expensive. The engine wants to throw away as much data as possible before it starts joining or calculating. This is known as Predicate Pushdown.

Think of it this way: Imagine you're looking for a specific red book in a library with ten floors.

A bad query tells the librarian: "Bring all the books from the 4th floor to the front desk, and then I’ll check which ones are red." A great query tells them: "Only bring me the red books from the 4th floor."

When you wrap a column in a function, as in

WHERE UPPER(status) = 'ACTIVE'

you force the engine to bring all the books to the desk first, just to compute the uppercase version of each one.

By keeping your filters clean—using WHERE status = 'active'—you allow the engine to push that filter all the way down to the storage layer, saving massive amounts of compute time.
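As a sketch (the table and column names here are hypothetical), the two versions of the library query look like this:

```sql
-- Function-wrapped predicate: the engine must read every row and compute
-- UPPER(status) before it can filter, so nothing is pushed down to storage.
SELECT order_id, total
FROM orders
WHERE UPPER(status) = 'ACTIVE';

-- Clean predicate: the filter can be pushed down to the storage layer,
-- letting the engine skip files and partitions with no matching rows.
SELECT order_id, total
FROM orders
WHERE status = 'active';
```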


2. Why Table Statistics Rule the Join

In a distributed join, the engine has to decide which table to hold in memory (the Build table) and which one to stream past it (the Probe table). If it picks the wrong one, you hit the Disk Spilling problem, where the engine runs out of RAM and starts writing to slow disk storage.

Even though modern optimizers are smart, they aren't psychic. They rely on Table Statistics.

If your stats are stale, the engine might try to hold a 50GB table in memory while streaming a tiny 10MB table.

By ensuring your ANALYZE TABLE commands are part of your ingestion pipeline, you give the optimizer the "eyes" it needs to pick the most efficient path.
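In Databricks/Spark SQL, for example, that ingestion step might look like this (the table name is hypothetical, and the exact command varies by engine; Snowflake, for instance, maintains most statistics automatically):

```sql
-- Refresh table and column statistics after each ingestion batch so the
-- optimizer knows real row counts and value distributions before planning joins.
ANALYZE TABLE orders COMPUTE STATISTICS FOR ALL COLUMNS;
```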


3. Avoiding the Cartesian Accident

We’ve all been there: you miss one join condition, and suddenly a query that should return 100 rows is trying to return 100 trillion. This is a Cartesian Product.

In a distributed system, this doesn't just slow you down—it can literally freeze a cluster as it tries to broadcast massive amounts of data to every node.

Always use Explicit Joins (JOIN … ON) rather than listing tables in the FROM clause. It’s easier for humans to read and much harder for the optimizer to misinterpret your intent.
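To make the difference concrete, here is a sketch with hypothetical orders and customers tables:

```sql
-- Implicit join: if the WHERE clause is forgotten, this silently becomes a
-- Cartesian product (every order paired with every customer).
SELECT o.order_id, c.name
FROM orders o, customers c;

-- Explicit join: the ON clause is part of the join itself, so the intent
-- cannot be silently dropped.
SELECT o.order_id, c.name
FROM orders o
JOIN customers c
  ON c.customer_id = o.customer_id;
```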


4. Group By vs. Window Functions: Use the Right Tool

I often see developers reach for Window Functions (OVER PARTITION BY) when a simple GROUP BY would do. Window functions are powerful, but they are resource-heavy because they often require the engine to keep the entire partition in memory.

If you just need a total count or an average, stick to GROUP BY.

It allows the engine to perform Partial Aggregation—calculating small totals on each worker node and then combining them at the end. This reduces network traffic and keeps your memory footprint small.
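A sketch of the two shapes, again with hypothetical tables:

```sql
-- Window function: attaches the total to every row, and workers may need to
-- hold an entire partition in memory to compute it.
SELECT customer_id,
       SUM(total) OVER (PARTITION BY customer_id) AS customer_total
FROM orders;

-- GROUP BY: each worker pre-aggregates its slice (partial aggregation), and
-- only the small per-group totals travel across the network.
SELECT customer_id,
       SUM(total) AS customer_total
FROM orders
GROUP BY customer_id;
```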


5. The Materialization Shortcut

Sometimes, the optimizer simply cannot find a fast path through a complex set of joins. This is where Materialized Views or pre-computed tables come in.

Instead of asking the engine to calculate a complex clinical metric every time a dashboard refreshes, calculate it once an hour and store the result.

Architecting for speed often means knowing when to stop asking the database to be "real-time" and start being "smart" about pre-computation.
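A minimal sketch of that pattern (names are hypothetical, and the exact materialized-view syntax and its restrictions differ between Snowflake, Databricks, and other engines, so a scheduled table rebuild is shown instead):

```sql
-- Precompute the expensive metric on a schedule instead of per dashboard
-- refresh; the dashboard then reads this small result table directly.
CREATE OR REPLACE TABLE daily_clinic_metrics AS
SELECT clinic_id,
       DATE(visit_ts)             AS visit_date,
       COUNT(DISTINCT patient_id) AS patients_seen
FROM visits
GROUP BY clinic_id, DATE(visit_ts);
```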


Comparison: How the Optimizer Sees Your Query

| Task | The Expensive Path | The Optimized Path |
|----|----|----|
| Filtering | WHERE DATE_DIFF(…) | WHERE date >= '2026-01-01' |
| Joining | Joining on non-indexed strings | Joining on integer keys |
| Aggregating | Window functions for simple sums | GROUP BY with partial agg |
| Logic | Subqueries in the SELECT | Common Table Expressions (CTEs) |


Final Summary

SQL isn't just a language for asking questions; it’s a language for describing data movement. When you write a query, you are writing instructions for a massive, distributed machine.

By understanding the architectural rigor behind how the optimizer thinks, you can build systems that aren't just fast, but are sustainable and cost-effective. In a world of infinite data, the most valuable skill is knowing how to ignore 99% of it.

Stablecoins vs Traditional Banking: The New Financial Infrastructure

2026-04-11 04:37:50

Stablecoins have come from nowhere to create a parallel financial world alongside traditional banking. These crypto tokens, usually pegged 1:1 to a fiat currency and backed by liquid assets, aim to bring the best of blockchain to the table while adding a dash of stability. Over the past few years, stablecoins have really taken off: we're now talking about market capitalization in the hundreds of billions and annual trading volume in the tens of trillions. They promise faster, cheaper international money transfers and new financial services to boot, but they also raise a whole host of questions about how they fit into the existing banking and regulatory worlds.

This article looks at stablecoins and traditional banking side by side: how stablecoins actually work, the technical and liquidity differences between them and good old bank deposits, the emerging regulatory landscape, and the banking industry's growing concern about people pulling their money out of the system, along with how the traditional banking world is responding to this digital upstart.

As Geoff Kendrick from Standard Chartered puts it bluntly: "stablecoins are a real threat to traditional banks, a systemic risk that a lot of people are still trying to ignore". On the flip side, advocates like Circle's Jeremy Allaire argue stablecoins can actually revolutionize finance rather than tear it down. We aim to put some facts and expert opinion behind these claims, and show how the stablecoin and banking worlds are evolving alongside each other in the modern payments space.

How Stablecoins Work

At a technical level, stablecoins are centralized tokens that get issued on all sorts of blockchains - they're usually pegged to a national currency, which is most commonly the US dollar, by the simple act of setting aside cash, bonds, or other assets in reserve. Unlike the totally decentralized world of Bitcoin, stablecoins need a sponsoring entity to keep their value on track.

Most of the big stablecoins (we're talking about things like Tether's USDT and Circle's USDC here) stash their reserves in short-term treasury bills or commercial paper. As the IMF has pointed out, "Most stablecoins are denominated in US dollars and are typically backed by US Treasury Bonds". When it all comes down to it, that means each token essentially represents a claim on a small slice of the reserve portfolio. It's a bit different from putting money in a traditional bank: as a rule of thumb, only a tiny fraction of deposits is kept in the bank itself; the rest is loaned out.

Stablecoin reserves are often held in segregated bank accounts outside the crypto space itself, but recent analysis suggests issuers like Tether hold only tiny fractions of their reserves as actual bank deposits: for Tether, as low as 0.02 percent, whereas Circle is a bit better off at about 14.5%.

In theory, stablecoins are super fast to move around. You can do cross-border payments, which can take days via the banking system, in a matter of minutes or seconds via token transactions. The BIS has said that stablecoins could really help make global payments "faster, cheaper and more inclusive" than current systems. And the IMF too has pointed out the potential for stablecoins to drive innovation in retail and cross-border payments, especially in regions that are a bit harder to reach.

At the end of the day, stablecoins are basically a different way of doing some of the same things that bank accounts do: they let you make payments and use them as a place to stash your cash, but under a completely different model. And importantly, stablecoins are totally backed (in theory), so you can get your money back on demand from the issuer, which is a bit different from bank deposits, which are protected by deposit insurance but are part of a system where only a fraction of deposits are kept in the bank.

Banking vs. Stablecoins: Deposit Dynamics

For consumers and businesses, putting cash into a bank account versus a stablecoin wallet might seem like two sides of the same coin at first. Both give you instant access to digital cash you can use to make payments. But scratch beneath the surface, and it becomes clear they're as different as night and day.

A bank deposit is basically a debt on the bank's balance sheet. When you chuck $100 into a bank account, the bank is saying it owes you $100, but in reality it keeps only a tiny fraction of that in cash on hand. The rest it lends out or invests, raking in the interest and profits for itself. Banks count on a steady flow of cheap deposits to fund their loans. This system has been the bedrock of economies for generations. Until stablecoins came along, that is.

Stablecoins turn this whole equation on its head. When you spend $100 on USDC or USDT, that cash is usually parked in a reserve account (a $100 chunk of T-bills, for example), and you end up with a token on the blockchain representing the equivalent amount of fiat. The main difference is that stablecoins aren't directly funding loans the way bank deposits are; the reserves backing them sit in highly liquid assets that aren't being lent out. Take Tether, for instance: it keeps only about 0.02% of its reserves in bank deposits, with the rest parked in Treasuries. That is what lets stablecoins be redeemed at any time (subject to the rules the issuer sets out) without putting any bank's lending capacity at risk. The flip side is that when customers take deposits out of the bank to buy stablecoins, that money isn't flowing back in to fund loans; it's going straight to the capital markets. In other words, the money is leaving the banking system altogether.

Liquidity and Financial Stability Concerns

This sets up a major showdown between stablecoins and banks over liquidity outflows. Bank CEOs are all too aware that as stablecoins get really popular, a whole lot of customer cash, deposits in particular, could start shifting over to crypto networks.

On a 2026 earnings call, Bank of America's CEO Brian Moynihan made clear he is seriously worried that letting stablecoins pay interest "could be a recipe for disaster" for the US banking system. He cited U.S. Treasury studies suggesting a whopping $6 trillion could move into interest-paying, or 'yield-bearing', stablecoins. A report from Standard Chartered painted an even more sobering picture: it estimates around $500 billion of US bank deposits could flow into stablecoins by 2028. This would leave the banks out of luck, because stablecoin issuers keep almost none of their reserves in bank deposits.

And then what happens? The short answer is that with fewer deposits to play with, banks have less money to lend out. For regional banks, which count on the spread between deposits and loans for a chunk of their profits, things look particularly grim. To get by, banks might be forced to lend less or raise rates for borrowers, a cycle that could knock the wind out of economic growth and make borrowing even more expensive.

Standard Chartered is warning its clients that "the rise of stablecoins, those dollar-backed tokens, could unleash a massive outflow of US bank deposits". Their head of digital assets, Geoff Kendrick, has put it even more bluntly: "stablecoins are an absolute risk to traditional banks, a systemic threat that a lot of people are blissfully ignoring." If people start widely adopting stablecoins, the very foundation of the US banking system, its deposit base, could start crumbling away.

Comparing Financial Infrastructure

Despite all these reservations, stablecoins and banks serve jobs that partly overlap yet remain pretty distinct.

Payments & Speed: Traditional banks have payment systems (like ACH and SWIFT) that get the job done just fine in well-established economies. But cross-border transfers are often slow (we're talking days) and super pricey. Stablecoins, on the other hand, settle in seconds on a blockchain and never close, operating 24/7 no matter where you are in the world. The IMF reckons stablecoins could seriously speed up and reduce the cost of remittances and cross-border payments, citing reductions of up to 20% in some cases.

Getting Access: Banks require accounts, and opening one can be a real barrier for the unbanked. Stablecoins, on the other hand, can be accessed by anyone with a phone and an internet connection. That makes financial services a lot more inclusive, and the IMF notes that many developing regions are skipping the traditional banking system entirely and leapfrogging straight to mobile and digital currencies.

Monetary Control: Banks are kept in line by the central bank and their deposits are insured. Stablecoins sit in a weird middle ground. Without a decent regulatory framework in place, they can undermine the central bank's control over the money supply. In a country with high inflation, for example, people might simply opt for USD stablecoins instead of the local currency, effectively moving their savings into a foreign currency. That in turn limits the central bank's ability to steer the economy.

Collateral and Reserves: Bank deposits are liabilities that need only fractional cash backing; stablecoin reserves are assets that fully back the tokens. This flips things on their head, so stablecoin systems don't have the same safety net a traditional bank would. If people lose confidence in a stablecoin's backing, they may all try to cash out at once, which could force the issuer to liquidate its reserves in one go (essentially a "run" on the stablecoin). The BIS reckons stablecoins could be in a whole lot of trouble if that happens, which is why it is calling for decent governance and prudential safeguards to make sure stablecoin systems get it right.

Regulatory Landscape

Across the globe, regulators are struggling to get a handle on stablecoins. In the US, legislative proposals like the SAFE Innovation and CLARITY Acts aim to figure out where stablecoin issuers sit in terms of the law, and could either ban or heavily restrict interest paid on stablecoin balances. It's a pretty heated debate: big banks are pushing for tight rules to protect their deposit business, while crypto firms warn that over-regulation will stifle new ideas before they can take off.

In Europe, meanwhile, the MiCA regulation (which came into effect in 2024) goes a step further by banning interest on stablecoins altogether; it also requires 30% of the reserves backing those stablecoins to be held as traditional bank deposits, rising to 60% for the really big players. This highlights just how seriously regulators view stablecoins: they are being treated almost like bank money. But the approach isn't uniform. The IMF stresses that the future of stablecoins will depend on policymakers finding a balance between letting innovation happen and keeping things stable.

Stablecoins are effectively the bridge between crypto and traditional finance, which is probably why we're seeing equal amounts of interest and hostility from both sides. Policymakers are weighing the pros (like faster payments and getting more people financially included) against the cons (like bank runs and the potential for illicit finance). One thing that's definitely happening is that there's a growing consensus that stablecoins should be treated as payment instruments, which in practice would mean they'd come under pretty heavy regulation.

Expert Perspectives

Experts differ on the impact. Traditional bankers express alarm: Bank of America’s Moynihan warned that stablecoins paying yield “would more closely resemble money market funds” and could “seriously endanger banks’ deposit base”. Standard Chartered’s Kendrick likens unregulated stablecoins to a silent bank run on the American financial system.

By contrast, crypto entrepreneurs highlight stablecoins’ benefits. Circle’s Jeremy Allaire calls fears of stablecoins “completely absurd,” arguing these tools can transform finance without destroying it. Coinbase CEO Brian Armstrong has warned against laws that favor banks, saying “it’s better to have no law at all than a bad law”. The IMF suggests that with proper oversight, stablecoins can improve global finance, noting that stablecoins are “growing in influence” due to their integration with mainstream markets.

Both sides agree on one thing: the integration of stablecoins is already underway. Even the ECB notes some euro-pegged stablecoins exist despite regulators’ preferences.

Data Insights

Recent data shows the sector is locked in a tug of war. Stablecoin trading volume skyrocketed to around $23 trillion in 2024, a staggering 90% jump from 2023 levels. Most of that activity still revolves around crypto trading, but cross-border flows are growing at a dizzying pace. Meanwhile, US bank deposits sit at a whopping $18 trillion. Analysts at Standard Chartered reckon that anywhere from $0.5 to $6 trillion of those could shift to stablecoins by 2028, which would be a seismic shift in terms of liquidity.

As it stands, adoption is a bit of a mixed bag: Asia is way out in front in terms of stablecoin usage, while Africa and Latin America are showing impressive adoption rates relative to GDP. What this says is that stablecoins aren't just some American trend; they're attracting capital from all corners of the globe.

Conclusion

Stablecoins and traditional banks are in the midst of a messy evolution, sometimes working together, sometimes competing to be top dog in the financial game. This new reality is a game-changer for anyone who works with data: you need to be on top of this new financial landscape if you want to stay ahead of the curve.

As the old money system based on traditional banks starts to crumble, it's being replaced by a hybrid system where tokens and cash coexist in a single, messy infrastructure. Standard Chartered's analysts aren't beating around the bush when they say that stablecoins are making the US dollar even stronger around the world while weakening the institutions themselves. Whether that weakening is a major problem or just a short-term blip is still up for grabs. What we do know is that stablecoins are no longer some esoteric concept that only a handful of people care about; they're pushing the boundaries of what is possible with money and payments.

Banks will find a way to adapt to this new world (a lot of them are already experimenting with things like tokenization and digital currencies), and regulators will keep tweaking the rules as they go. In the meantime, stablecoins are a real-world stress test for the entire financial system. As one expert puts it, "the line is getting increasingly blurred between innovation and security". For anyone working in fintech or data analysis, the key is to watch how stablecoins and banks are reshaping each other, and the new financial infrastructure that's emerging from the wreckage, block by block.


Frontend Minimalism: Build Faster, Lighter Apps Without Overengineering

2026-04-11 04:26:52

After building more than a few dozen web applications, I keep noticing the same thing: as frontend developers, we often make projects more complicated than they need to be. We bring in heavy libraries just to cover a couple of features. We add abstractions that end up confusing even us. We do not keep an eye on unnecessary re-renders. We do not think about bundle size. And then we wonder why the project has become slow, awkward, and difficult for new people on the team.

I do not think every project should be built with “minimalism at any cost”. No. But for small and medium-sized products, there is often a lot of room to simplify life for both yourself and your team without sacrificing quality. In this article, I will show how I usually approach frontend development like this: calmly, without unnecessary noise, and without worshipping heavy solutions.

One important thing up front: I will not go into every tool and every library in detail. Otherwise this article would turn into a reference manual. My goal is different — to show the general approach and the things I actually find useful.

Why this is worth doing at all

This approach has a few clear advantages.

  1. Clean code structure.

    The fewer unnecessary entities you have, the easier the project is to understand. If I can get by with the standard tools provided by the language and platform, I do. There is no need to invent an extra layer where one is not needed. Such a project is easier to maintain and easier to grow. And if it does become larger over time, you can always move to a more scalable structure and a more powerful state manager.

  2. A small bundle.

    For small and medium-sized projects, this is a very big win. One tidy bundle, with no unnecessary requests and no lazy loading, often gives excellent results. Modern browsers support compression, and Brotli usually gives better compression than gzip, although the compression itself can take a little longer.

  3. Easier onboarding.

    If a new developer joins the project, they will understand a simple structure much faster. That matters even more when there is more than one person on the team.

  4. Easier to work with AI tools.

    We are increasingly using models for code, tests, and refactoring. The clearer the project is, the easier it is for models to write code in it without making unnecessary mistakes. That does not mean AI will replace a developer. But it understands good, simple code better than code that is complex and overloaded.

  5. A developer with less experience can maintain such a product.

    That means less cost for the business.

There are also downsides.

  1. In real work, you often cannot choose everything yourself.

    A company may already have adopted MobX, Material UI, or another ecosystem, and you have to live with that. And very often the business insists on tasks and decisions that will throw all our minimalism straight in the bin.

  2. Design frameworks are convenient.

    We are used to them. And sometimes you really do not want to go back to the days when you had to build every button, modal, and tooltip by hand. But for the sake of simplicity, I often build my own buttons, text fields, and other basic components. That helps me keep full control over them, instead of pulling them in from a third-party design system where everything is overthought and weighed down with unnecessary features.

  3. This approach works best for startups, prototypes, and smaller products.

    For a very large and complex product, you may sometimes need to add more complexity than I would personally like.

Where I start

I like to begin with the simplest possible setup.

Editor

If you want to go to the extreme of minimalism, you can use Neovim. But I usually go with plain VS Code. It is free, easy to understand, and simple to configure. You can tailor it to your preferences with extensions and appearance settings without turning the whole thing into a religious argument about editors.

Project foundation

First I make sure I have a recent version of Node.js installed, then I create a project through Vite. Vite officially supports the preact-ts template, so it is easy to get started with.

Example:

npm create vite@latest my-preact-ts-app -- --template preact-ts

Why this approach?

Preact is a lightweight alternative to React. In the official Preact materials, the emphasis is on small size, good performance, and closeness to the DOM. It also has one important difference from React: it does not use a synthetic event system, but works through the native addEventListener. That makes it closer to standard DOM behaviour.

I consider TypeScript essential. These days, most projects feel uncomfortable without it. It helps keep data types in order, catches mistakes earlier, and makes existing code easier to understand.

Folder structure

I like a simple structure. Nothing fancy.

Something like this:

  • components — shared components
  • pages — pages
  • services — classes and functions for working with the backend
  • contexts — createContext definitions
  • providers — providers with logic
  • hooks — shared hooks
  • models — types, contracts, and interfaces
  • utils — reusable functions
  • consts — global constants

You can make things more complicated with FSD, a modular structure, or your own scheme. But I often choose the simpler route. When a project is small or medium-sized, extra architecture only gets in the way. I prefer structure that helps rather than structure that starts living its own life. Of course, we do not always know from day one whether a project will become complex, but you can always spend a bit of time later and move, for example, to FSD. Personally, I usually stick with my own structure and mix it a little with modular architecture, turning components in the components folder and pages in the pages folder into modules that can have their own hooks, utilities, subcomponents, and constants.

Code quality tools

ESLint, Prettier, and Stylelint have long since become standard. I do not think setting them up makes a project more complicated. On the contrary, they help keep the code clean and predictable.

These tools add almost nothing to the frontend bundle, but they make a huge difference to the quality of work, not only in a team, but even when you are working alone.

What to do about state

This is usually where unnecessary complexity starts.

In many small projects, plain React Context is enough. Not for everything, of course. But for some tasks — absolutely.

For example, interface language.

// Assumed imports: Preact core and hooks, plus wouter's useLocation;
// ELanguage and Loader live elsewhere in the project (paths hypothetical).
import { createContext } from 'preact';
import type { PropsWithChildren } from 'preact/compat';
import { useEffect, useMemo, useState } from 'preact/hooks';
import { useLocation } from 'wouter';

import { Loader } from './components/Loader';
import { ELanguage } from './models/language';

export interface LanguageContextType {
  language: ELanguage;
  setLanguage: (language: ELanguage) => void;
}

export const LanguageContext = createContext<LanguageContextType | null>(null);

export const LanguageProvider = ({ children }: PropsWithChildren) => {
  const [language, setLanguage] = useState<ELanguage | null>(null);
  const [location] = useLocation();
  const contextValue = useMemo(() => ({ language, setLanguage }), [language]);

  useEffect(() => {
    const languageInPath = location.split('/')[1] as ELanguage;
    const currentLanguage =
      languageInPath in ELanguage ? languageInPath : ELanguage.en;

    document.documentElement.lang = currentLanguage;
    setLanguage(currentLanguage);
  }, [location]);

  if (!contextValue.language) return <Loader />;

  return (
    <LanguageContext.Provider value={contextValue as LanguageContextType}>
      {children}
    </LanguageContext.Provider>
  );
};

After that, any component can access the data like this:

const languageContext = useContext(LanguageContext);

And that is it. No extra magic.

If you still want something more convenient, you can use a lighter store such as Signals, Jotai, or Zustand. I do not see a problem with that. The main thing is not to choose a heavy tool just because it is fashionable. Use it only when it genuinely makes the code simpler.
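The core idea behind all of these stores is small enough to sketch by hand. This is not any library's real API, just a minimal illustration of the subscribe/notify pattern they build on:

```typescript
// Minimal external store sketch: hold state, notify subscribers on change.
type Listener = () => void;

function createStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    get: () => state,
    set: (next: T) => {
      state = next;
      listeners.forEach((listener) => listener());
    },
    // Returns an unsubscribe function, like most store libraries do.
    subscribe: (listener: Listener) => {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}

// Usage: a tiny theme store any component could subscribe to.
const theme = createStore<'light' | 'dark'>('light');
let notifications = 0;
const unsubscribe = theme.subscribe(() => notifications++);
theme.set('dark'); // subscriber runs once
unsubscribe();
```

In a component, you would wire subscribe and get into useSyncExternalStore (or a plain useEffect subscription) so that state changes trigger re-renders.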

Working with backend data

A lot of people automatically reach for axios. I have done that myself many times. But in many cases, plain fetch, wrapped in your own function, is enough.

// Assumption: ErrorResponse is a small Error subclass defined elsewhere, e.g.
// class ErrorResponse extends Error {
//   constructor(message: string, public httpStatus: number) { super(message); }
// }
export const fetcher = async <ResponseType, PayloadType = undefined>(
  url: string,
  method = 'GET',
  payload?: PayloadType,
  prefix = import.meta.env.VITE_BACKEND_URL + '/api/',
) => {
  const response = await fetch(prefix + url, {
    method,
    headers: {
      'Content-Type': 'application/json',
    },
    credentials: 'include',
    body: payload ? JSON.stringify(payload) : null,
  });

  if (response.ok) {
    return (await response.json()) as ResponseType;
  }

  const error = (await response.json()) as ErrorResponse;

  throw new ErrorResponse(error.message, error.httpStatus);
};

The point is simple: do not bring in an extra dependency if you can solve the job with a short, clear wrapper.

For reading and updating data, I would look at SWR. SWR has a minimal API, caching, revalidation, and request deduplication. This is exactly the kind of tool that does the job without weighing the project down. In the official documentation, SWR specifically describes useSWR and useSWRMutation, and for a standard REST example it shows a fetcher based on native fetch.

Fetching data:

const {
  data: cardData,
  error,
  isLoading,
} = useSWR<Card, ErrorResponse>(cacheDataKey);

Mutating data:

const { trigger: triggerDeleteCardLink, isMutating: isDeletingCardLink } =
  useSWRMutation<CardLink, ErrorResponse, string, number, Card>(
    cacheDataKey || '',
    async (_url, { arg: cardLinkId }) =>
      await fetcher<CardLink>(`card-links/${cardLinkId}`, 'DELETE'),
    {
      revalidate: false,
      populateCache: (cardLink, card) => {
        if (!card) throw new Error('Card not found');

        return {
          ...card,
          links: card.links?.filter((link) => link.id !== cardLink.id),
        };
      },
    },
  );

I like this approach because it gives me control. I decide what gets updated and when. I do not hide the logic behind ten layers of abstraction. And in a small project, that is often the better choice.

Tests

There is almost never enough time for tests. That is true.

But in a simple application, unit tests do not feel nearly as scary. AI tools are quite good at writing them now, especially when the code itself is not overloaded.

I would not say tests can be ignored. Quite the opposite: when the project is simple, it is easier to write basic tests. And that helps a lot when you later need to change something without being afraid of breaking everything.

Design frameworks

My view here is simple.

If a project genuinely needs a heavy framework, fine. But in small projects, it is often not needed. Sometimes it is easier to build components yourself. Then you have full control over appearance, behaviour, and code size.

Yes, it takes a little more manual work. But afterwards you do not have to live inside someone else’s decisions, which are often difficult to reshape for your own needs.

Other small libraries

I do not like dragging large packages into a project for one or two functions. But there are some good small utilities that really do help.

For className, I would use a small helper such as classcat, if it fits the style of the project. The idea is simple: no long ternary chains in TSX and no need to keep class logic in your head.

Example:

className={cc([
  styles.tariff,
  isPremium ? styles.tariff_accent : styles.tariff_default,
])}

For icons, I like react-icons. It is a large library of SVG icons that supports tree shaking, meaning only the icons you actually use end up in the bundle.

For routing, I prefer a minimal approach. In the Preact ecosystem there are solutions such as wouter, and wouter describes itself as a tiny router for React and Preact. In its repository, it is explicitly called a small router based on hooks.

For utilities, I do not see a problem with lodash if it is genuinely justified. But I would not bring it in just out of habit. If you only need one function, it is often better to use just that one, or even write a small custom version. That keeps the code cleaner and the bundle lighter.
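For example, if all you need from lodash is groupBy, a few lines of your own cover the common case (a sketch only; lodash's version handles extra conveniences such as property-name shorthands):

```typescript
// Group items by a computed key; covers the typical lodash.groupBy use.
export function groupBy<T, K extends string | number>(
  items: readonly T[],
  key: (item: T) => K,
): Record<K, T[]> {
  const out = {} as Record<K, T[]>;
  for (const item of items) {
    const k = key(item);
    (out[k] ??= []).push(item);
  }
  return out;
}
```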

About static compression

This is another thing that people often forget.

You can write the code perfectly, and then still send the user heavy files without proper compression. I usually look towards Brotli. It gives very good compression and is well supported by modern browsers. The fact is that Brotli gives better compression than gzip, and in modern environments Brotli and gzip remain the main options for HTTP compression.
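The difference is easy to check with Node's built-in zlib, which exposes both codecs (exact sizes depend on the content; this is just an illustration on a repetitive payload):

```typescript
import { brotliCompressSync, gzipSync } from 'node:zlib';

// A repetitive HTML-like payload stands in for a typical static asset.
const html = '<!doctype html>' + '<p>Hello, world!</p>'.repeat(500);
const input = Buffer.from(html);

const gz = gzipSync(input);
const br = brotliCompressSync(input);

console.log(
  `original: ${input.length} B, gzip: ${gz.length} B, brotli: ${br.length} B`,
);
```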

If your frontend is already small, good compression makes the whole picture even better.

Final thoughts

My conclusion is simple: frontend does not have to be heavy, complicated, and bloated. In many projects, you can achieve a very good result without unnecessary dependencies and without complicated architecture. For example, I have managed to launch two real projects with bundle sizes of 50 KB and 100 KB. And these were fully working applications.

My approach is this:

  • start with standard tools;
  • then use simple, clear tools;
  • and only then move to more complex solutions, if they are genuinely needed.

This is not about a poor stack, and not about saving money for the sake of saving money. It is about common sense. About code that is easy to read. About a project that is easy to maintain. About a bundle that does not grow for no reason. And about a team that does not have to suffer living inside that code.

To me, that is what proper frontend minimalism is: not doing extra work where things can be made simpler.

Fine-Tuning vs Prompt Engineering

2026-04-11 04:13:56

What Actually Works in Production

A practical framework for choosing your LLM optimisation strategy

Introduction: The Great Optimisation Divide

Teams building systems on Large Language Models (LLMs) face a common dilemma: is it better to fine-tune the model, or to design better prompts? The industry has falsely simplified a complex problem into an either/or. In my experience, after designing and implementing numerous production systems, the answer has never been fine-tuning OR prompt engineering. It is recognising when to apply which method, and, very often, combining the two.

This article won't dwell on engineering and technical details. It focuses on the pragmatic question: what do you do when customers are waiting, budgets are limited, and time is of the essence?


The Promise and the Reality

Prompt engineering sounds too good to be true: no infrastructure to build, no training costs, and results right away. Spend a few hours crafting the perfect system message and, in theory, the model acquires perfect domain understanding. If only.

Fine-tuning, by contrast, feels too 'scientific' for many teams: collect data, retrain the model, deploy a customised system. Nothing about that long-winded process sounds appealing... or does it? In practice, both options are oversold. The gap between prompt engineering and fine-tuning is real, but which one wins depends on dimensions that professional teams question far too rarely.

When Prompt Engineering Actually Wins

1. You're Building Fast and Iterating: Need to launch in weeks, not months? Prompt engineering wins on speed, hands down. A well-crafted system message with examples and reasoning patterns will quickly get you about 80% of the way there.

2. Your Task Varies Across Contexts: Imagine reviewing legal documents or providing customer support across different jurisdictions and industries. Maintaining a fine-tuned model for every context is a maintainability nightmare. Variability is exactly where prompt engineering thrives and fine-tuning falls short.

3. You Need to Adapt Frequently: With prompt engineering, a revised system message is live in minutes; fine-tuning leaves you waiting weeks for a retraining cycle. Marginal quality gains are nice, but in production settings flexibility is often the more valuable feature.

4. Your Data is Proprietary or Sensitive: Some organisations cannot send data to third-party APIs for fine-tuning; others cannot absorb the cost of maintaining custom models. With prompt engineering, your data never has to leave your control as a training set. That matters for security and compliance.

The Hybrid Reality: What Actually Works

In the production systems I have encountered, the answer is both. The playbook looks like this:

1. Start with Prompt Engineering. To begin, swiftly deploy with a system prompt, examples, and reasoning patterns that are well-structured. This should address 70-80% of your concerns. Then, measure everything.

2. Define Your Failure Modes. Do not use guesswork. Analyse your errors. Is the model failing on the format (then fine-tuning might assist)? Is it failing on the context (then prompt engineering might assist)? Is it failing on the trade-offs you have to make (then either could assist)? Be ruthless in your prioritisation.

3. Data Collection Should be Focused on Your High-Impact Failures. Only failures that concern you the most matter. If you are 95% accurate already, then you likely do not require fine-tuning. If your failures are erratic, then prompt engineering might not help. Construct your dataset wisely.

4. Fine-Tune for Your Designated Patterns. After you have 200+ high-quality examples, fine-tuning will be economically feasible. Combine it with your most effective prompts from step 1.

5. Assess the Improvement. Conduct A/B testing. Compare a fine-tuned model with a great prompt against the base model with a great prompt. If the improvement isn't worth the trade-offs in infrastructure and latency, then continue with prompt engineering.
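Step 5 can be as simple as scoring both variants on the same held-out set and requiring a minimum lift before accepting the operational cost of a custom model (the function names and the 3% threshold below are illustrative, not a standard):

```typescript
// Fraction of predictions that exactly match the labels.
function accuracy(preds: string[], labels: string[]): number {
  let hits = 0;
  for (let i = 0; i < labels.length; i++) {
    if (preds[i] === labels[i]) hits++;
  }
  return hits / labels.length;
}

// Accept fine-tuning only if it beats base-model-plus-great-prompt by minLift.
function worthFineTuning(
  basePreds: string[],
  fineTunedPreds: string[],
  labels: string[],
  minLift = 0.03,
): boolean {
  return accuracy(fineTunedPreds, labels) - accuracy(basePreds, labels) >= minLift;
}
```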

The Economics: Where the Rubber Meets the Road

Fine-tuning requires an additional expense beyond the initial cost of training. There are ongoing costs related to infrastructure: model hosting, version control, A/B testing, and the mental strain that comes with juggling multiple models.

Prompt engineering has its own ongoing costs: the time to construct and iterate on prompts, the time to maintain example datasets, and the extra tokens those examples add to every request.

As volume grows, here is the pattern I have observed:

  • Scenario: Winner — Why
  • < 100M inferences/month: Prompt Engineering — fixed infrastructure cost dominates
  • 100M - 1B inferences/month: Hybrid — fine-tuning cost per inference becomes significant
  • > 1B inferences/month: Fine-Tuning — unit economics favour the optimised model
  • Frequent changes needed: Prompt Engineering — update latency matters more than inference cost
  • Locked-in behaviour needed: Fine-Tuning — cannot be achieved with prompts alone
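A back-of-envelope model makes the break-even concrete (every number below is hypothetical; plug in your own provider's pricing):

```typescript
interface CostInputs {
  inferencesPerMonth: number;
  engineeredPromptTokens: number; // long few-shot prompt for the base model
  tunedPromptTokens: number;      // short prompt the fine-tuned model needs
  dollarsPerMillionTokens: number;
  tunedFixedMonthly: number;      // amortised training + hosting overhead
}

function monthlyCost(c: CostInputs, fineTuned: boolean): number {
  const tokens = fineTuned ? c.tunedPromptTokens : c.engineeredPromptTokens;
  const inferenceCost =
    ((c.inferencesPerMonth * tokens) / 1_000_000) * c.dollarsPerMillionTokens;
  return inferenceCost + (fineTuned ? c.tunedFixedMonthly : 0);
}

// At 100M calls/month, shaving 1300 prompt tokens per call dwarfs
// a $5k/month fixed cost in this made-up scenario.
const scenario: CostInputs = {
  inferencesPerMonth: 100_000_000,
  engineeredPromptTokens: 1500,
  tunedPromptTokens: 200,
  dollarsPerMillionTokens: 0.5,
  tunedFixedMonthly: 5_000,
};
```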


Common Mistakes I See Every Day

  • Error 1: No baseline before fine-tuning: Teams spend weeks curating data and tuning the model without first establishing what a well-engineered prompt achieves. Always set a prompt-engineering baseline so you can justify the cost of fine-tuning.

  • Error 2: Training data that does not match production: The fine-tuned model learns from historical logs that no longer reflect what production traffic looks like. Continuously check that your training data matches what the model actually sees today.

  • Error 3: Ignoring distribution shift: A model fine-tuned on January's data will underperform on March's. A fine-tuned model needs a retraining pipeline wired in from the start; prompt engineering alone will not save you here.

  • Error 4: Optimising for the wrong metric: A prompt that improves an F1 score at the expense of latency, because it keeps getting longer, is not a win. Optimise for what matters to users and the production system: cost, speed, accuracy, and satisfaction, not a benchmark number.

  • Error 5: Treating prompt engineering as a one-off: The best prompts are living artifacts. They need to shift with your data, your evolving understanding of the problem, and your business needs. Keep testing and A/B-ing them in production.


What's Changing

Three catalysts are driving changes in the calculus.

First, prompt engineering keeps advancing. Techniques such as retrieval-augmented generation (RAG) and function calling close much of the gap that once required fine-tuning. The difference between sophisticated prompting and fine-tuning is quickly evaporating.

Second, inference costs keep falling. As longer prompts become cheaper to run, packing examples and context into every request becomes economically viable, undercutting one of fine-tuning's main advantages.

Finally, models keep getting smarter. Newer models follow instructions more reliably, tolerate prompt variation better, and need fewer examples to lock onto a pattern. That erodes yet another advantage of fine-tuning.

Fine-tuning is not obsolete, but its niche is narrowing. The majority of teams should treat prompt engineering as the default technique.

Conclusion: Start Simple, Scale Deliberately

The most successful teams I have seen start with prompt engineering and focused measurement, moving to fine-tuning only when the economics and the data justify it. They don't treat these as rival tools; each solves a problem with its own balance of trade-offs.

You can only manage a system you understand and can troubleshoot, and that is what prompt engineering gives you. Next, measure what exactly is broken; the data tells you what to address. Then fix the failures that matter to your users.

Fine-tuning belongs in that last step, not the first, and not the obvious one: the right step, at the right time, with the right data.

The Intelligence Paradox: Why We're Building LLMs Wrong (And How to Fix It)

2026-04-11 03:59:02

LLMs aren’t failing because they’re small—they’re failing because scale is mistaken for intelligence. Benchmarks don’t reflect real-world use, alignment remains unsolved, and energy costs are ignored. The future of AI lies in specialized systems, human feedback loops, and interpretable architectures—not bigger models. The winners will build ecosystems, not just models.

How AI-Powered Demand Sensing Is Transforming Real-Time Supply Chain Planning

2026-04-11 02:33:43

Traditionally, supply chains operated in reasonably stable and predictable demand environments. Businesses planned production, inventory, and logistics from historical sales patterns, usually via monthly forecasts built on historical data. That has changed significantly: consumer buying patterns now shift quickly, promotions create sharp demand spikes, and conditions outside a business's control, such as weather disruptions and economic swings, can reshape purchasing behaviour almost overnight.

Because of this, companies that plan purely from historical data are finding the approach increasingly ineffective, and many are turning to AI-based demand sensing: the use of advanced analytics over real-time market signals to create timely, actionable plans. These AI-assisted changes are revolutionizing how supply chains function, allowing fast and accurate reactions to volatility.

Why Traditional Forecasting Struggles in Modern Markets

Conventional forecasting was built for established, stable environments. It relies on historical values and time-series analysis, assuming that past demand patterns will carry into the future. Time-series models produce accurate forecasts in low-variability situations, but they cannot keep up with today's highly dynamic markets.

Recent studies indicate that during high-volatility events (promotional periods, external shocks), the accuracy of traditional models can drop by 20% to 40% relative to actual demand, forcing planners to operate reactively with manual overrides and spreadsheet adjustments.

For example, during the COVID-19 pandemic, many consumer packaged goods companies saw demand variances exceeding 200% in key categories, rendering their historical forecasting methods nearly useless. Consumers found near-empty stock in some locations and heavy overstock in others, a clear sign that lagging indicators cannot support timely decision-making.

The Emergence of Demand Sensing

Demand sensing is an important move away from traditional historical based forecasting and towards continuous, real-time signal-based planning. Demand sensing doesn't just look at old data; it also analyses current data such as POS transactions, retail distributor stock movement, online and offline searches, macroeconomic statistics, and weather patterns or local events.

By using machine learning algorithms to examine these data sources together, companies can detect demand changes far sooner than before, often within days rather than weeks.

For example, several global consumer goods companies have deployed demand sensing platforms that combine retail sell-through data with one or more external signals (such as weather), allowing them to adjust production or restocking decisions almost instantaneously. These applications have been reported to create up to a 30% increase in accuracy of short-term forecast accuracy as well.

Turning Diverse Data into Smarter Forecasts

AI-based demand sensing merges different kinds of information into a comprehensive picture of what is happening. Traditional demand forecasting used only previous sales data to calculate demand; AI demand sensing draws on additional data types to identify what actually drives demand change.

Data sources that can be combined include:

  • retail point-of-sale data, showing how much of each item has sold and what is on the shelf at stores;
  • demographic and consumer-behavior data;
  • weather data, which has been shown to correlate with demand in many categories of goods (such as apparel, beverages, and energy products);
  • digital signals (such as online reviews, trending search terms, and social media activity), which can flag changes in demand long before customers actually purchase the product.

Several companies show how combining many data sources improves inventory management. One organization, for example, has used predictive analytics to position inventory close to where it is expected to be needed, reducing lead times and lowering fulfilment costs. The goal of combining all these data types is forecasts that are both more accurate and far more granular.
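As a toy illustration of the idea (the signals and lift values below are invented for illustration, not drawn from any vendor's product), a sensed forecast can be expressed as multiplicative adjustments on a statistical baseline:

```typescript
interface DemandSignals {
  baselineForecast: number; // units, from the traditional statistical model
  posTrend: number;         // recent sell-through vs baseline, e.g. 0.10 = +10%
  weatherLift: number;      // expected demand lift from the weather signal
  promoLift: number;        // expected lift from an active promotion
}

// Adjust the baseline by each sensed signal multiplicatively.
function sensedForecast(s: DemandSignals): number {
  return (
    s.baselineForecast * (1 + s.posTrend) * (1 + s.weatherLift) * (1 + s.promoLift)
  );
}

// A heatwave plus a promotion lifts a beverage forecast well above baseline.
const units = sensedForecast({
  baselineForecast: 1000,
  posTrend: 0.10,
  weatherLift: 0.20,
  promoLift: 0.05,
});
```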

How Leading Companies Are Applying Demand Sensing

Many top companies across multiple industries are building real-time demand sensing into their supply chain models:

  • In consumer packaged goods, some organizations combine real-time retail data with promotion calendars and other notifications to adjust production schedules and minimize excess inventory.
  • In retail, certain fast-fashion players have drawn global attention for extremely fast demand response: near real-time sales data lets them update production schedules within weeks instead of waiting out an entire season.
  • Software vendors now offer AI-based demand sensing solutions that ingest hundreds of distinct data points from across the supply chain.
  • Automobile manufacturers are beginning to use predictive analytics to align vehicle production with shifting demand, including electric vehicle sales and specific configurations and options for each model.

Companies that have successfully implemented demand sensing report measurable results: inventory reductions of 10% to 20% and customer service level improvements of 5% to 10% or more.

The Impact on Real-Time Supply Chain Planning

Demand sensing stands to influence every area of a company's supply chain. With demand visible sooner than ever before, businesses can react to changes faster than in the past. Better short-term forecasts also improve inventory optimization, helping companies avoid both stockouts and excess stock. Likewise, transportation networks can prepare for volume changes much earlier, making on-time delivery to customers more reliable.

On the supply-versus-demand side, companies can run operational simulations of potential demand scenarios, grounded in true demand signals, to project future supply-demand balances and see how external factors would affect them. These analyses move companies from reactive to proactive decision-making: rather than responding after an event, planners anticipate it, guided by continuously updated consumer data that tells them when and how to respond.
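The scenario analysis described here can be sketched as running one supply plan against several demand scenarios and measuring unmet demand (all numbers are illustrative):

```typescript
interface Scenario {
  name: string;
  demand: number[]; // expected units demanded per week
}

// Total unmet demand for a weekly supply plan, carrying stock forward.
function shortfall(supplyPlan: number[], scenario: Scenario): number {
  let stock = 0;
  let missed = 0;
  for (let week = 0; week < scenario.demand.length; week++) {
    stock += supplyPlan[week] ?? 0;
    const served = Math.min(stock, scenario.demand[week]);
    missed += scenario.demand[week] - served;
    stock -= served;
  }
  return missed;
}

// Compare the same plan under a baseline and a promo-spike scenario.
const plan = [100, 100, 100];
const base = shortfall(plan, { name: 'baseline', demand: [80, 90, 100] });
const spike = shortfall(plan, { name: 'promo spike', demand: [80, 150, 90] });
```

A planner would then reshape the plan (or pre-position stock) until the shortfall in the scenarios that matter drops to an acceptable level.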

A More Responsive Future for Supply Chains

In today’s complex and globalized supply chain environment, demand sensing is becoming more critical than ever for improving a business’s competitive position. Rather than being a periodic exercise, demand sensing has evolved into a continuous process of capturing and interpreting demand signals from the marketplace and making operational decisions based on those signals. By successfully implementing demand sensing, companies can improve visibility into the drivers of demand, respond faster to changes in market conditions, and better align their supply with demand. With the new data sources available and advances in artificial intelligence (AI) and machine learning technologies, demand sensing will become an essential capability of next-generation supply chains.