We are an open and international community of 45,000+ contributing writers publishing stories and expertise for 4+ million curious and insightful monthly readers.
RSS preview of the HackerNoon blog

AI Products Have Terrible UX: Here's Why

2026-04-07 05:01:44

If you're building an AI product right now, the single highest-leverage thing you can do isn't to upgrade your model. It's to watch five users try to use your product without any help from you. Don't say anything. Just watch. You'll see exactly where the AI ends and the confusion begins. That's your design debt.

Anti-Knowledge Is Blocking Your Next Skill: Here's How

2026-04-07 04:29:22

The opposite of knowledge is not ignorance.

I type fast. Years of specific hand positions, now firing without thought. I learned, recently, that if I want to type significantly faster, I cannot simply practice more. My lack of speed comes from sub-optimal finger movements that have become reflexive. I have to get slower first. I have to rewire the motor patterns before I can reach something mathematically more optimal.

This is how skill works when your scheme is wrong.

Call it anti-knowledge. Ignorance is the absence of knowledge. Anti-knowledge is the skill you’ve built that actively blocks the skill you need next.

A scheme is what the concert pianist has when he knows the sound of the note before his finger touches the key. Not remembering it. Not calculating it. Not reciting G#-F# in his head. The knowledge has dissolved into the nervous system. It runs on its own. It is no longer retrievable as a conscious thought — and that is exactly what makes it powerful, and exactly what makes it dangerous.

When the scheme is wrong, it’s difficult to find and fix. The anti-knowledge runs below conscious access. You don’t know you don’t know. You just keep performing, confidently, at a ceiling you can’t see.

Anti-knowledge compounds. Every day you perform the suboptimal pattern, you’re trading future capacity for current output.


Optimizing Everything is Suboptimal

The cost of optimizing everything is higher than the cost of running some things suboptimally. Obsessing over every inefficiency is inefficient and creates its own paralysis.

And yet — some compounding debts are quietly catastrophic. Sleep is worth optimizing. How you read might be worth optimizing. The question is never “am I suboptimal?” Everything is suboptimal. The question is which suboptimalities sit at the intersection of three conditions:

  1. You’ve built knowledge that works.
  2. A better version of performance exists, and you want it.
  3. Your current skill is the thing blocking you from reaching it.

Only the intersection of all three is anti-knowledge.

Some people find the optimal path early. A teacher shows them the right way on day one. Lucky them. This only happens in closed systems where there is a cap on maximum performance.

Some people are suboptimal, and it genuinely doesn’t matter. They type sixty words per minute. They’re a novelist. The suboptimal pattern is load-bearing for a life that has no use for ninety-five WPM. The cost is a tax on a destination they’re not travelling to.

Some skills have no fixed optimum. The jazz musician whose idiosyncratic technique emerged from years of playing wrong notes produces sounds that a technically correct player cannot reach. Remove the wrongness, and you remove the music.


The Thing That Separates World-Class From Average

The question isn’t just what to fix. It’s how you fix it at all.

The pianist who has played at an average level for twenty years doesn’t fix his technique by practising harder. He fixes it by going meta — stepping outside the performance, locating the pattern running below it, and deliberately installing a new one, even while it makes him slower and worse in the short term. This is painful. Most people can’t do it. They prefer the ceiling.

The athlete who rebuilds their serve from scratch mid-career. The coder who stops and relearns the fundamentals. The writer who throws out their tics. They’re all doing the same thing: locating anti-knowledge, then willingly dismantling a skill that works — because they can see what it’s costing them.

What allows this? Not intelligence. Not discipline, exactly.

Something more structural: the ability to see your own patterns from the outside.

And here is where biology becomes relevant. Because the reason this is so difficult — the reason most people never do it — is not laziness. It’s the way the system works.


No ant has the blueprint.

When an ant finds food, it doesn’t return to the nest and file a report. It drags its abdomen along the ground, releasing a chemical trail — a pheromone line that says something good is this way. The next ant that crosses that line follows it. If it finds food, it lays more signal. The trail doubles. Then quadruples. Within hours, hundreds of ants are walking a highway no single ant designed, toward a destination most of them have never seen.

Trails that lead nowhere get walked once. Maybe twice. No return signal. The pheromone evaporates.

The entire architecture of the colony — ventilated chambers that regulate temperature to within half a degree, nurseries that protect the brood, food storage organised by type and age, highways that solve shortest-path problems faster than most human algorithms — emerges from this. No architect. No blueprint. No commander. Just signal, response, deposit, repeat.

This is called stigmergy: coordination through the environment rather than through command.

But here is the tragedy. The food moves.

The colony finds a source. The highway forms. The ants walk it, reinforce it, optimize it. Then the food disappears. For a brief window, the highway remains the strongest signal in the environment, and ants are following it to nothing. The trail is perfectly efficient. The destination no longer exists.
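The deposit-reinforce-evaporate loop is concrete enough to simulate. The sketch below is my illustration, not part of the original essay: two trails compete, each ant picks one with probability proportional to its pheromone level, trips that find food deposit more signal, and a fixed fraction evaporates every tick. The constants and trail names are arbitrary assumptions.

```python
import random

EVAPORATION = 0.05   # fraction of pheromone lost per tick
DEPOSIT = 1.0        # pheromone laid per successful trip

def step(pheromone, has_food, n_ants=100):
    """One tick: ants follow trails weighted by signal; finding food reinforces them."""
    levels = dict(pheromone)          # snapshot so mid-tick deposits don't bias choices
    trails = list(levels)
    weights = list(levels.values())
    for _ in range(n_ants):
        chosen = random.choices(trails, weights=weights)[0]
        if has_food[chosen]:          # only trips that find food lay new signal
            pheromone[chosen] += DEPOSIT
    for trail in pheromone:           # unreinforced signal fades everywhere
        pheromone[trail] *= (1 - EVAPORATION)

random.seed(0)
pheromone = {"north": 1.0, "south": 1.0}
has_food = {"north": True, "south": False}

for _ in range(50):                   # a highway forms toward the food
    step(pheromone, has_food)
print(pheromone["north"] > 100 * pheromone["south"])   # True: one dominant trail

has_food["north"] = False             # the food moves
for _ in range(10):
    step(pheromone, has_food)
# The stale highway is still the strongest signal: ants follow it to nothing.
print(pheromone["north"] > pheromone["south"])
```

Note the tragedy in the last lines: after the food is gone, nothing deposits on "north" anymore, yet it stays the strongest trail until evaporation finally catches up.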


Stigmergy is not only in ants. It’s in you.

Open source software. Wikipedia. Markets. Your immune system. Your roads. All of these systems share one property: they’re smarter than any individual participant, and dumber than anyone who can see the whole board from outside.

The ant cannot see that the food has moved. It can only follow the strongest trail.

This is the condition you are in, inside your own mind, about your own habits, almost all of the time.

Billions of neurons. Trillions of synapses. No central architect. Thoughts are trails. Each time a similar input arrives, it follows the stronger path. Do this ten thousand times, and you have a highway. A hundred thousand times, and you have a scheme — knowledge running below thought, automatic, confident, potentially catastrophically wrong.

Before a thought reaches consciousness — before it gets allocated to working memory — your brain runs a signal-strength evaluation. Weak signal gets no RAM allocation. The thought dies. You never know you had it. This is why you ignore an artist the first three times you see their name on Spotify. The signal is too faint. But when you encounter them in three different places — a playlist, a conversation, a review — the signal compounds. You “decide” to check them out.

The solution to a problem can be obvious, available, right in front of you — but if the search your brain ran evaluated the path as low-signal and energetically expensive, it discarded it. Because the hardware is ancient. The brain optimizes for the most reward at the least energy cost, a heuristic calibrated for an environment where calories were scarce, and unknown trails were frequently lethal. That environment no longer exists.

The pause you feel when thinking through something novel — where you hold a sentence in your head, repeating it while the next word won’t come — is the colony protecting its computation. Evolution optimised for survival. Not for finding the best move.


Where agency actually lives

This does not eliminate agency. It locates it.

Agency is not the absence of stigmergy. Agency is the conscious act of depositing a signal on trails the algorithm would otherwise ignore — repeatedly, long enough, until the colony eventually walks there. You cannot override the system. You can game it.

This is the difference between the world-class and everyone else. Not talent. Not even work ethic. The capacity to reconstruct their own patterns. To find the broken scheme, go below the level of performance, and rebuild from the substrate — even when it makes them slower, even when the old highway is still there and far more comfortable to walk.

They don’t just practise. They practise on themselves.

You can read and subscribe for more of my essays here.

How to Make On-Call Sustainable

2026-04-07 03:11:54

What Healthy On-Call Looks Like

Incident response gets most of the attention, but teams rarely measure the human cost of on-call with the same rigor they apply to incidents.

Phone charged. Laptop by my bed. The distinctive PagerDuty alarm goes off in the middle of the night, and the process begins. I sit up, open my screen to a burst of blinding light, and try to figure out: is this transient, or is something actually broken? I pull up PagerDuty, Grafana, New Relic, and whatever else I need to piece together what’s happening. Sometimes the runbook helps. Sometimes it’s vague enough to become another problem.

I felt that most clearly one Thanksgiving week years ago. Repeated system failures kept me stuck in my room while the rest of my family gathered. I couldn’t shop, cook, or really take part in the holiday, and I missed Thanksgiving dinner altogether, which for my family is one of the few times each year when everyone is together. That Monday after my rotation, I was expected to return to business as usual: join meetings, pick feature work back up, and handle the follow-up work like postmortems and runbook updates. There was little recognition of the cost. That kind of stress adds up quickly, and over time it becomes burnout.


Repeated incidents should be easy to recognize and resolve

Runbooks are not useful just because they exist. They are useful if someone on the team can open one at 2:37 a.m. and know the right next step. On-call rotations include people with different specialties, different contexts, and different levels of experience, and the person writing the runbook will almost always know more than the person relying on it later.

That means the documentation has to work for the person under pressure, not just the expert who wrote it. It also has to stay current. If it no longer reflects how the system behaves or fails to cover a recurring incident pattern, it should be updated.

Repeated incidents should also show up as a pattern, not be rediscovered one page at a time. If the same class of failure keeps returning, that should be obvious in review, reporting, and prioritization. Otherwise, teams end up paying for the same operational weakness again and again.
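One lightweight way to make repeats visible is to collapse the variable details of each page (hostnames, numbers) into a coarse fingerprint and count occurrences over a review window. The alert titles and normalization rules below are illustrative assumptions, not a reference to any specific tool:

```python
from collections import Counter
import re

def fingerprint(alert_title: str) -> str:
    """Collapse variable parts (numbers, hosts) so repeats group together."""
    fp = alert_title.lower()
    fp = re.sub(r"\d+", "N", fp)          # numbers -> N, e.g. "90%" -> "N%"
    fp = re.sub(r"\b[a-z]+-N\b", "HOST", fp)  # "db-N" -> HOST
    return fp.strip()

def repeated_incidents(pages, threshold=3):
    """Return fingerprints that fired at least `threshold` times in the window."""
    counts = Counter(fingerprint(title) for title in pages)
    return {fp: n for fp, n in counts.items() if n >= threshold}

pages = [
    "Disk usage > 90% on db-3",
    "Disk usage > 91% on db-7",
    "Disk usage > 95% on db-3",
    "Latency SLO burn on checkout",
]
print(repeated_incidents(pages))  # {'disk usage > N% on HOST': 3}
```

Even a crude pass like this turns "three separate bad nights" into one line item a review meeting can prioritize.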

And teams need enough room to fix what keeps causing pain. If engineers are expected to carry their normal feature load while also handling incidents, tech debt, noisy alerts, and follow-up work, the same problems tend to linger. There has to be space outside of feature delivery to improve reliability, reduce alert noise, and deal with the underlying issues that keep interrupting people.

Incidents don’t end when services recover

The cost of a tough on-call often shows up the next day, when someone is still expected to join a standup, sit through meetings, hit deadlines, and do the follow-up work, like updating the runbook or writing the postmortem.

Operationally, the incident may be over. For the person who handled it, it often is not.

That gap shows up in the data, too. Catchpoint’s 2025 [SRE Report](https://www.catchpoint.com/learn/sre-report-2025) found that 14% of respondents reported being more stressed after incidents than during them. It also found that support tends to drop off once the incident is over: 55% reported high support during incidents versus 44% after. That lines up with what many teams already know from experience. Support is visible while the incident is active. Recovery is much easier to ignore.

If a team wants on-call to be sustainable, it has to account for that aftereffect. Recovery time, follow-up capacity, and the ability to return to normal work all matter just as much as whether the incident was resolved.


Tooling should reduce work, not create more

This matters even more now that teams are bringing AI into operations workflows. Done well, AI can help responders triage faster, surface relevant context, summarize what changed, and suggest next steps so people spend less time hunting through tabs and more time actually resolving the issue.

That is the standard AI should be held to in incident response. The goal is not just to generate output. It is to reduce ambiguity, shorten the path to the next decision, and help people move with more confidence when time and attention are limited.

We are already seeing why that matters. In March 2026, Fortune reported that Amazon moved humans back into the loop after a series of retail-site incidents tied to inaccurate advice from an AI agent working off an outdated wiki. The lesson is not that AI has no place in operations. It is that operational AI has to be trustworthy, relevant, and useful in the moments when people need it most.

Historically, teams have measured the health of their systems better than the health of the people operating them. Part of that was a tooling problem. That excuse is getting weaker. With better internal data plumbing, AI agents, and more mature open-source tools for tracking on-call health, teams can get a much clearer picture of who is being interrupted, what keeps repeating, and whether recovery is actually happening after a bad night.
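Little of this requires sophisticated tooling. Assuming you can export pager events as simple (responder, ISO timestamp) pairs (a hypothetical shape for illustration, not any particular vendor's API), a few lines already answer who is being woken up:

```python
from collections import Counter
from datetime import datetime

def night_page_counts(events, night_start=22, night_end=7):
    """Count pages per responder that landed in the sleep window (22:00-07:00 local)."""
    counts = Counter()
    for responder, ts in events:
        hour = datetime.fromisoformat(ts).hour
        if hour >= night_start or hour < night_end:
            counts[responder] += 1
    return counts

# Hypothetical export: (responder, ISO-8601 timestamp) per page
events = [
    ("alice", "2026-04-01T02:37:00"),
    ("alice", "2026-04-02T03:10:00"),
    ("bob",   "2026-04-02T14:05:00"),
    ("alice", "2026-04-03T23:45:00"),
]
print(night_page_counts(events))  # Counter({'alice': 3})
```

A count like this makes the concentration problem visible: one person absorbing every overnight page is exactly the pattern clean incident metrics hide.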

That feels like a much better goal than just trying to page less.


Takeaway

Healthy on-call is about whether the people carrying the system can keep doing it without getting ground down in the process.

If the same failures keep repeating, if recovery is invisible, and if the burden quietly concentrates on the same people, the system is not healthy no matter how clean the incident metrics look. The teams that get this right are not the ones with perfect uptime. They are the ones that make the operational burden visible, create space to fix what keeps breaking, and treat recovery as part of the job instead of a private cost paid by whoever had the pager.


Why RWA Is Hitting All-Time Highs While the S&P 500 Sells Off

2026-04-07 03:08:25

There's a strange thing happening in markets right now. The S&P 500 is down. Macro uncertainty is up. Crypto majors are choppy. And yet — on-chain RWA markets are printing new records almost every week.

The tokenized real-world asset market exceeded $36 billion (excluding stablecoins) by late 2025, a rise of more than 2,200% since 2020.

More importantly, that growth isn't happening despite the macro storm. It's happening because of it.


The Macro Logic Nobody's Talking About

When traditional risk assets sell off, capital rotates. And that's exactly what the RWA composition data shows.

Private credit now accounts for roughly $17B in tokenized assets, and U.S. Treasuries another $7.3B; together they dominate the RWA market structure. These are the same instruments institutional capital has always rotated into during risk-off periods, now available on-chain, 24/7.

Capital started concentrating in income products that fit existing workflows and governance, not in speculative categories. Tokenization scaled first where product familiarity, standardized documentation, and operational efficiency already existed. That's the tell: RWA is growing as a macro hedge.


The Infrastructure Has Quietly Matured

A year ago, tokenized RWAs were mostly T-bills and money market funds, which gave crypto-native portfolios a safe yield destination. That was Phase 1.

The on-chain RWA market (excluding stablecoins) reached ~$15 billion by the end of 2024 and grew to over $24 billion by mid-2025 (roughly 85% year-on-year expansion).

Phase 2 is what's happening now: the capital mix is shifting toward private credit, tokenized equities, and derivatives on macro benchmarks.

Tokenized equities across all platforms have grown to approximately $1.09 billion in total on-chain value, up from about $300 million at the start of 2025. That's a 3.6x jump in one year.


The S&P 500 Just Went On-Chain

This week, S&P Dow Jones Indices, the index provider, did something it has never done before.

On March 18, 2026, S&P DJI licensed the S&P 500 to Trade[XYZ] to launch the first and only officially licensed perpetual derivative contract based on the index on Hyperliquid.

Trade[XYZ] spent months rolling out on-chain markets tied to real-world assets such as gold and oil on Hyperliquid, gradually building an ecosystem that bridges commodities, equities, and crypto under a single derivatives umbrella.

The result: Hyperliquid's S&P 500 market hit $100 million in daily volume within 24 hours of launch. That's institutional demand showing up on-chain, right now.


Why 24/7 Access Is the Real Product

Traditional markets have a schedule. The S&P 500 closes on Friday. It opens again Monday morning. In between, geopolitical events happen, and investors can only watch.

If a macro shock hits over a weekend, say a surprise policy decision or a geopolitical flare-up, perp traders on Hyperliquid can reposition immediately without waiting for Monday's bell. That kind of responsiveness has already been visible in oil futures traded on Hyperliquid during major geopolitical incidents while traditional commodity venues were shut.

Aggregate open interest across Hyperliquid's HIP-3 RWA markets recently climbed to about $1.43 billion — more than 100 times higher than six months ago as tokenized equity, commodity, and macro products gained traction alongside crypto pairs.


The Bigger Picture: An Alternative Capital Market Is Forming

The old narrative was that crypto and TradFi would merge through ETFs — Wall Street buying Bitcoin. That's happening. But a parallel process is also underway: TradFi assets moving on-chain.

Since October 2025, Trade[XYZ] markets on Hyperliquid have exceeded $100 billion in volume, with a current annualized run rate in excess of $600 billion.

McKinsey predicts the tokenized RWA market will reach $2 trillion, while BCG estimates $16 trillion by 2030. The gap between those projections and today's $36B is where the opportunity lives.


What This Means in Practice

Three things worth watching:

1. Weekend price discovery is moving on-chain. When traditional markets are closed, and macro events break, Hyperliquid is increasingly where positions get expressed first. This creates an information asymmetry for traders monitoring on-chain flow vs. those waiting for the Monday open.

2. The asset class shift is predictable. RWA started with cash-like instruments (T-bills, MMFs). It's now moving into private credit and equity derivatives. Real estate and structured products are the next frontier. The pattern mirrors how TradFi itself evolved from bonds to equities to alternatives.

3. Institutional legitimacy is no longer a future event. S&P DJI licensing its flagship index to a DeFi platform is a signal. BlackRock tokenizing funds on Ethereum is a signal. Nasdaq filing to list tokenized stocks is a signal.


The bridge between crypto and TradFi isn't coming. It's already being used.


Meet the Writer: Hacker Noon's Contributor Marius Eugen Vomir, Independent Builder

2026-04-07 03:04:48


Welcome to HackerNoon’s Meet the Writer Interview series, where we learn a bit more about the contributors that have written some of our favorite stories.


So let’s start! Tell us a bit about yourself. For example, name, profession, and personal interests.

My name is Marius Eugen Vomir. I’m from Romania, and I created Smart Home Cinema, a local-first voice-controlled movie system for Windows.

I’m interested in technology that removes everyday friction in a simple and reliable way. Most of what I care about sits at the intersection of automation, media playback, voice control, and user experience.

I’m especially drawn to systems that make technology feel more natural, more predictable, and easier to live with in everyday use.


Interesting! What was your latest Hackernoon Top Story about?

My latest HackerNoon story was about building a voice-controlled home cinema system for Windows.

What pushed me toward it was the feeling that people who watch movies from local libraries were left behind when it comes to comfort. Streaming platforms made effortless control feel normal, but local playback on a Windows PC still often depends on keyboards, remotes, or awkward workarounds that interrupt the experience.

I was also drawn to the idea of a more private and user-controlled setup — something local-first, where the user stays in charge of their own system and movie collection.

So the story is really about bringing those two sides together: the comfort people now expect, and the privacy and control that local playback can still offer. That gradually turned into a voice-controlled, local-first system built around VLC, PotPlayer, and a simple folder-based playback model.


Do you usually write on similar topics? If not, what do you usually write about?

I usually write about practical technology, but mostly when it grows out of something real I’ve been working on.

So the topics are often similar: automation, local media playback, voice control, and ways to make technology feel simpler and more natural to use. I’m generally drawn to ideas that remove friction instead of adding more layers.

I don’t really write for the sake of writing. I usually write when there’s a concrete idea or system behind it that feels worth explaining.


Great! What is your usual writing routine like (if you have one?)

I don’t have a strict writing routine.

Most of the time, I write when an idea has become clear enough that I feel it’s worth explaining. It usually starts with a rough note or a single sentence, then I try to find the real structure underneath it.

After that, it’s mostly rewriting. I tend to cut a lot, simplify a lot, and keep pushing the text until it sounds more natural and less like something that’s trying too hard.

Being a writer in tech can be a challenge. It’s not often our main role, but an addition to another one. What is the biggest challenge you have when it comes to writing?

The biggest challenge for me is making technical writing stay clear without becoming too abstract or detached from real use.

It’s easy to explain a system in a way that is technically correct but feels too theoretical or too far removed from the actual experience behind it. I try not to write that way.

What matters to me is keeping the writing connected to the real friction, the real use case, and the reason the system exists at all.


What is the next thing you hope to achieve in your career?

Right now, the main thing I want is to keep pushing Smart Home Cinema further and turn it into something truly solid.

It started from a personal frustration, but I’d like to take it beyond that and make it into a product that feels polished, reliable, and genuinely useful to other people too.

More generally, I want to keep building practical systems around real problems, not just ideas that sound good in theory.


Wow, that’s admirable. Now, something more casual: What is your guilty pleasure of choice?

Probably watching movies late at night when I should really be sleeping.

There’s something very hard to resist about saying “just one more scene” and then suddenly realizing it’s much later than it should be. A lot of the original frustration behind Smart Home Cinema actually came from that kind of moment.

Do you have a non-tech-related hobby? If yes, what is it?

Yes — I enjoy fishing.

I think I like it for the opposite reason I like building systems: it puts me in a completely different rhythm. It’s quiet, simple, and it gives me time to step away from screens and clear my head.


What can the HackerNoon community expect to read from you next?

If I write something next, it will probably stay in the same general area: practical systems, local-first design, and technology shaped by real use rather than abstraction.

I’m most interested in ideas that come from building something real and understanding it well enough to explain it clearly. So if there is a next piece, it will likely come from that same place.


What’s your opinion on HackerNoon as a platform for writers?

I think HackerNoon is a good platform for writers who want to explore technical ideas in a more personal and experience-driven way.

What I like is that it leaves room not just for polished industry commentary, but also for stories that come from actually building something, testing it, struggling with it, and figuring things out along the way.

That makes it a good place for writing that is technical, but still grounded in real use and real experience.


Thanks for taking time to join our “Meet the writer” series. It was a pleasure. Do you have any closing words?

Thanks for having me.

I’m glad the story resonated with people. For me, the most meaningful kind of technology is the kind that grows out of real use and solves a real frustration in a simple way.

If this piece connected with readers, I’m happy about that.

The HackerNoon Newsletter: OpenAI Bought TBPN Because PR Can’t Keep Up With AI (4/6/2026)

2026-04-07 00:02:25

How are you, hacker?


🪐 What’s happening in tech today, April 6, 2026?


The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day, Pioneer 11 launched in 1973 and the Early Bird satellite launched in 1965, and we present you with these top-quality stories. From Your Work Trained the Model. The Model Replaced You. Philip K. Dick Wrote This Story in 1968. to OpenAI Bought TBPN Because PR Can’t Keep Up With AI, let’s dive right in.

Your Work Trained the Model. The Model Replaced You. Philip K. Dick Wrote This Story in 1968.


By @thegeneralist [ 8 Min read ] The first workers displaced by generative AI weren’t software engineers. They were translators and $1.32/hr data labelers. Philip K. Dick predicted why. Read More.

OpenAI Bought TBPN Because PR Can’t Keep Up With AI


By @davidjdeal [ 6 Min read ] Read this post to understand why OpenAI bought a media company, TBPN. Read More.

AI Coding Tip 014 - One AGENTS.md Is Hurting Your AI Coding Assistant


By @mcsee [ 4 Min read ] Split your AGENTS.md into layered files so your AI loads only the rules that matter for the code you touch. Read More.

Digital Project Abandonment Crisis: Deadweight Loss in Plain Sight


By @proofofusefulness [ 4 Min read ] What the failure data actually says — and what it means for how we build. Most digital projects fail. This is not a provocative claim. Read More.


🧑‍💻 What happened in your world this week?

It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️


ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME


We hope you enjoy this free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️