
RSS preview of the Blog of HackerNoon

The Markup Wins Excellence Award From the AAJA For Its "Languages of Misinformation" Series

2026-04-06 00:00:27

The Markup, now a part of CalMatters, uses investigative reporting, data analysis, and software engineering to challenge technology to serve the public good. Sign up for Klaxon, a newsletter that delivers our stories and tools directly to your inbox.

The Markup’s series on the impact of misinformation on the Vietnamese immigrant community, “Languages of Misinformation,” has won the Asian American Journalists Association’s (AAJA) excellence in online/digital journalism engagement award.

Judges said the series “brilliantly tackled misinformation on YouTube, hitting home for the Vietnamese and wider AAPI communities. By teaming up with Mai Bui, a 67-year-old YouTuber grandma, and crafting a guide for younger Vietnamese Americans, the work didn’t just tell a story—it gave a platform to real voices and bridged generational gaps. The multi-layered approach to connecting with the audience sets the work apart, making it a standout choice for the category.”

Across the “Languages of Misinformation” series, investigative reporter Lam Thuy Vo listened to the Vietnamese community to identify what stories needed to be told, and community and social media manager Maria Puertas translated the pieces into standalone, accessible resources on social media. Here are our first three stories:

Our first story identified misinformation the community constantly encountered, and assessed what information the group needed but could not easily access. Based on that, Lam hosted two community workshops: one on how to combat misinformation, and one on artificial intelligence and identifying deepfakes.

Our second story featured a community member who was already working to combat misinformation: Bùi Như Mai, a 67-year-old grandmother and retired engineer with no formal journalism training. We worked with Mai to tell her story in English and Vietnamese, and we are proud that Mai is also being honored by this AAJA award.

Our third story, created in response to reader requests, is a guide for second-generation Americans, or anyone who wants to have productive conversations with loved ones about misinformation. Our tips come from interviews with misinformation experts—including those who focus on Asian communities—to understand what people can do to help others identify misinformation and find better-quality information online.

A big congratulations to the entire team on the recognition of their hard work. Congratulations, too, to all of this year’s AAJA award winners.


Credits

  • The Markup

Also published here

Photo by Michael Dziedzic on Unsplash

Orchestration vs. Choreography: Navigating the Trade-offs of Modern System Design

2026-04-06 00:00:20

In the early days of distributed systems, we lived in a world of Request-Response. One service asked, another answered. It was synchronous, predictable, and easy to trace. But as our systems scaled from a handful of servers to hundreds of microservices, this "web of calls" began to tangle.

Today, architects face a fundamental choice when designing how these services talk to one another: Do we use Orchestration (the Conductor approach) or Choreography (the Dance approach)?

This choice isn’t just about technical implementation; it defines your system’s resilience, scalability, and cognitive load. In this deep dive, we’ll break down the Request-Response vs. Event-Driven paradigms and provide a roadmap for when to choose which.

1. The Request-Response Paradigm: Orchestration

Orchestration is akin to a symphony orchestra. There is a centralized "Conductor" (an orchestrator service or a process manager) that tells every other service exactly when to play their part.

How it Works

In an orchestrated workflow, Service A calls Service B, waits for a response, and then decides whether to call Service C based on that response. This is typically implemented via REST, gRPC, or GraphQL.

The Strengths of Orchestration

  1. Centralized Logic: The entire business process is visible in one place. If you want to know the "Order Flow," you look at the OrderOrchestrator code.
  2. Synchronous Error Handling: If Service B fails, the Orchestrator knows immediately and can trigger a rollback or a "Saga" compensation.
  3. Easier Debugging: Since there is a clear start and end point with a central controller, tracing the state of a single request is straightforward.

The Pitfalls: The "Distributed Monolith"

The biggest risk of over-relying on orchestration is creating a Distributed Monolith. If the Orchestrator goes down, the entire process dies. Furthermore, the Orchestrator must know the API signatures of every service it talks to, leading to high temporal and structural coupling.

2. The Event-Driven Paradigm: Choreography

Choreography removes the conductor. Instead of being told what to do, each service listens for "Events" and decides how to react. It is like a dance troupe where every dancer knows the music and moves in sync without someone shouting instructions.

How it Works

This is built on an Event Bus (like Kafka, RabbitMQ, or AWS EventBridge). When Service A finishes its task, it broadcasts an event: OrderCreated. It doesn't know or care who is listening. Service B (Inventory) and Service C (Shipping) hear the event and start their own work independently.

The Strengths of Choreography

  1. Extreme Decoupling: Service A has no idea Service C exists. You can add a new Service D (Analytics) to listen to the same event without changing a single line of code in Service A.
  2. Scalability and Performance: Since calls are asynchronous, Service A doesn't "block" while waiting for Service B. This leads to higher throughput and better utilization of resources.
  3. Fault Tolerance: If the Shipping service is down, the OrderCreated event stays in the queue. When Shipping comes back online, it processes the backlog. The rest of the system remains unaffected.
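The buffering behavior in point 3 can be sketched with a toy in-memory stand-in for a durable broker. `InMemoryEventBus` below is purely illustrative (a real system would use Kafka or RabbitMQ); it only shows how events published while a consumer is offline wait in a queue until the consumer drains the backlog:

```python
from collections import defaultdict, deque

class InMemoryEventBus:
    """Toy stand-in for a durable broker: events published while a
    consumer is offline accumulate in its queue."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def publish(self, topic, event):
        self.queues[topic].append(event)

    def drain(self, topic, handler):
        """Called when a consumer (re)connects: process the backlog in order."""
        handled = []
        queue = self.queues[topic]
        while queue:
            handled.append(handler(queue.popleft()))
        return handled

bus = InMemoryEventBus()

# The Order service keeps publishing even though Shipping is down.
for order_id in (1, 2, 3):
    bus.publish("ORDER_CREATED", {"order_id": order_id})

# Shipping comes back online and works through the backlog.
shipped = bus.drain("ORDER_CREATED", lambda e: f"shipped #{e['order_id']}")
print(shipped)  # ['shipped #1', 'shipped #2', 'shipped #3']
```

The rest of the system never noticed the outage: the publisher's code path is identical whether the consumer is up or down.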

The Pitfalls: "Shadow Complexity"

The complexity doesn't disappear; it just moves. In a choreographed system, it becomes very difficult to visualize the entire business process. You might find yourself asking: "Which service is actually responsible for finalizing this payment?" This can lead to "Event Spaghetti" if not managed with strict documentation.

3. Comparing the Patterns: A Technical View

| Feature | Orchestration (Request-Response) | Choreography (Event-Driven) |
|----|----|----|
| Coupling | High (services must know each other) | Low (services know only the Event Bus) |
| Visibility | Centralized in the Orchestrator | Distributed across the system |
| Performance | Synchronous (waiting/latency) | Asynchronous (high throughput) |
| Reliability | Point of failure at the Conductor | Buffer-based (queues handle downtime) |
| Complexity | Low at start, high at scale | High at start, manageable at scale |

4. Code Comparison: The "Order" Workflow

Let’s see how these two patterns play out in pseudo-code.

Orchestration (Python/FastAPI Style)

The orchestrator "commands" the flow.

async def create_order_flow(order_data):
    # Assumes `db` and `http` clients are provided by the application.
    # 1. Save to DB
    order = await db.save(order_data)

    # 2. Synchronous call to Inventory
    inv_response = await http.post("/inventory/reserve", json={"order_id": order.id})
    if inv_response.status != 200:
        return {"error": "Out of stock"}

    # 3. Synchronous call to Payment
    pay_response = await http.post("/payment/charge", json={"order_id": order.id, "total": order.total})
    if pay_response.status != 200:
        # Manual compensation (Saga): undo the inventory reservation
        await http.post("/inventory/release", json={"order_id": order.id})
        return {"error": "Payment failed"}

    return {"status": "Success"}

Choreography (Node.js/Event Style)

The service "emits" and forgets.

// Order Service
async function createOrder(orderData) {
    const order = await db.save(orderData);

    // Publish event to Kafka/RabbitMQ
    await eventBus.publish("ORDER_CREATED", {
        orderId: order.id,
        total: order.total
    });

    return { status: "Accepted" }; // Note: We don't know if it will succeed yet!
}

// Payment Service (Listening)
eventBus.subscribe("ORDER_CREATED", async (event) => {
    const success = await processPayment(event.total);
    if (success) {
        await eventBus.publish("PAYMENT_SUCCESS", { orderId: event.orderId });
    } else {
        await eventBus.publish("PAYMENT_FAILED", { orderId: event.orderId });
    }
});

5. When to Choose Which?

Choose Orchestration When:

  • The process is highly linear and simple: If you only have two services, an event bus is overkill.
  • You need ACID-like consistency: If the business cannot tolerate a "pending" state and needs an immediate "Yes/No" (e.g., a bank transfer authorization).
  • The workflow changes frequently: It’s easier to update one central orchestrator than to re-coordinate five independent services.

Choose Choreography When:

  • Scaling is the priority: You need to handle thousands of requests per second without blocking.
  • You have many "side effects": If an action (like UserSignup) triggers ten different things (Welcome email, CRM update, Analytics, Slack notification, Fraud check), choreography is the only sane way to manage it.
  • The services are owned by different teams: Choreography allows teams to deploy and evolve their services without needing to coordinate API changes constantly.
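The "side effects" fan-out from the second bullet is easy to sketch. The topic name and handlers below are hypothetical, but they show the key property: a new subscriber (analytics) is added without touching the publisher or any existing listener:

```python
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    # The publisher knows only the topic name, never the listeners.
    return [handler(event) for handler in subscribers[topic]]

# Hypothetical side effects of a signup, each owned by a different team.
subscribe("USER_SIGNUP", lambda e: f"welcome email to {e['email']}")
subscribe("USER_SIGNUP", lambda e: f"CRM record for {e['email']}")
subscribe("USER_SIGNUP", lambda e: f"fraud check on {e['email']}")

# Adding analytics later touches no existing code:
subscribe("USER_SIGNUP", lambda e: f"analytics event for {e['email']}")

results = publish("USER_SIGNUP", {"email": "ada@example.com"})
print(len(results))  # 4
```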

6. The Hybrid Reality: "Choreographed Orchestration"

In modern production environments, we rarely choose just one. The most robust systems use Orchestration within a Bounded Context and Choreography between Bounded Contexts.

For example:

  • The Payment Subsystem might use internal orchestration to ensure that "Authorize," "Capture," and "Tax Calculation" happen in a strict, predictable sequence.
  • Once the Payment Subsystem is done, it emits a PaymentCompleted event to the rest of the company, which is then handled via choreography by the Shipping, Marketing, and Inventory teams.
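A minimal sketch of that hybrid, with hypothetical `authorize` / `calculate_tax` / `capture` steps: inside the payment bounded context the steps run in a strict, orchestrated sequence, and only the final `PAYMENT_COMPLETED` event crosses the boundary for other teams to choreograph against:

```python
def authorize(s):      return {**s, "authorized": True}
def calculate_tax(s):  return {**s, "tax": round(s["total"] * 0.1, 2)}
def capture(s):        return {**s, "captured": True}

def run_payment_subsystem(order, publish):
    """Internal orchestration: run steps in a fixed order, then emit
    one event for the rest of the company (choreography)."""
    steps = [authorize, calculate_tax, capture]  # order matters here
    state = dict(order)
    for step in steps:
        state = step(state)  # a failure here would halt the sequence
    publish("PAYMENT_COMPLETED", {"order_id": state["order_id"]})
    return state

events = []
final = run_payment_subsystem(
    {"order_id": 7, "total": 50.0},
    publish=lambda topic, payload: events.append((topic, payload)),
)
print(events)  # [('PAYMENT_COMPLETED', {'order_id': 7})]
```

Shipping, Marketing, and Inventory would subscribe to `PAYMENT_COMPLETED` without ever seeing the internal authorize/capture sequence.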

7. Conclusion: Context is King

There is no "better" pattern—only patterns that fit your context. If you are a startup building an MVP, Orchestration will get you to market faster with fewer moving parts. As you grow into a global enterprise with dozens of teams, Choreography will provide the agility and resilience you need to survive.

Before you reach for Kafka or write your next REST endpoint, ask yourself: "Do I need a Conductor to ensure this happens exactly like this, or can I let the dancers follow the music?"

The Next Layer-1 Wars Won't Be Won in the Codebase - They'll Be Won in the Governance Layer

2026-04-05 23:00:44

Performance benchmarks are converging. The real moat now is collective decision-making – and most networks are nowhere near ready.

There is a script that every new Layer-1 follows. Launch. Announce throughput numbers. Position yourself as the fix for whatever the previous generation broke. Watch the roadmap grow. Watch the marketing get louder.

We have been running this loop for nearly a decade. And it has quietly produced something nobody planned for: convergence.

Sub-second finality is no longer a differentiator. Scalability – once the central battlefield – is either solved or solvable. The performance gap between serious L1 networks has narrowed to the point where benchmarks are mostly marketing.

So, what actually separates the networks that matter in five years from the ones that don't?

Governance. Not the word – the word is everywhere and means almost nothing at this point – but the actual operational capacity of a network to make collective decisions, adapt to circumstances nobody anticipated, and allocate resources in ways the community can legitimately challenge.

That is a harder problem than transaction throughput. Most networks have not taken it seriously.

The Concentration Problem Nobody Wants to Talk About

Take Ethereum. Most battle-tested smart contract network in existence. Its off-chain governance model – the EIP process, AllCoreDevs calls, rough consensus among client teams – has produced real upgrades over nearly a decade. The process works.

But a 2024 study from researchers at the University of Texas at Austin and the University of Basel found that just 10 individuals were responsible for proposing 68% of all implemented Core EIPs.

That is not a conspiracy. That is what happens when governance is informal: influence concentrates around the people who show up to every call and answer every thread. The rules are unwritten, so whoever has the most time and the most confidence ends up holding more than their share of the network's direction.

Every network I have watched closely has some version of this problem. The stakes go up, the community scales, and decisions still get made the way they have always been made – in rooms without clear rules, by whoever shows up most consistently.

What Polkadot's OpenGov Experiment Actually Teaches Us

Polkadot's transition to OpenGov in June 2023 is the most instructive governance experiment running in public right now.

The numbers, sourced from Parity's DotLake platform: participation in referenda increased by over 1,000% after the transition. The rejection rate jumped from 9% under the previous council-based system to 40% under OpenGov.

A 40% rejection rate is not a failure. It is evidence that the process is functioning as a genuine deliberative mechanism rather than a rubber stamp. When nearly half of the proposals get turned down, that means people are actually evaluating them.

But more participation does not automatically mean better allocation. In the first half of 2024, Polkadot's treasury spent approximately $87 million while reserves sat at around 38.2 million DOT. Governance volume is not the same as governance quality.

The lesson is not that on-chain governance is broken. It is that governance design requires the same rigor as protocol design – and it needs ongoing iteration, not a single deployment.

Why Institutions Keep Asking About Governance First

When I speak with asset managers and banks evaluating blockchain infrastructure, the questions they ask are not about TPS.

They ask: who has the authority to change the rules, under what conditions, and with how much notice? Can a small group of token holders push through a protocol change that affects their assets? What recourse exists when something goes wrong?

These are the same questions asked of any financial infrastructure. Most Layer-1 networks cannot answer them clearly. That is a direct constraint on institutional capital – one that matters considerably more than whatever the latest benchmark shows.

Institutions do not enter ecosystems they cannot model.

What Sustainable Governance Actually Looks Like

Building a network designed to operate where regulatory clarity is still forming and institutional trust is still being earned, I think about three things.

First: governance has to be legible. The rules that determine how decisions get made must be transparent and consistent, not locked in the institutional memory of a few long-standing contributors.

Second: accountability has to have teeth. Proposals should carry real costs for the people making them, not just for the treasury absorbing the outcomes.

Third – and this is the hardest one to name – the system should be able to fix itself before a crisis forces it to. Not after a critical exploit. Not after a community fracture. Before. Most networks have not built that in.

None of this is solved. Venom Foundation is still working through these questions, just like every serious network is. But working through them publicly and with rigor is itself a competitive advantage. Networks that treat governance as a checkbox will eventually face a decision they are not designed to handle. How they respond to that moment will determine whether they keep their community's trust or lose it permanently.

The Bottom Line

The next wave of institutional capital will not go to the fastest network. It will go to the network whose rules an investor can read, stress-test, and trust to still mean something in three years.

That is a governance problem, not an engineering one.

Most of the industry is still treating it as if it were the other way around.

Godot's Web Performance Boost: Things to Know

2026-04-05 19:00:19

Sometimes, just adding a compiler flag can yield significant performance boosts. And that just happened.

For about two years now, all major browsers have supported WASM (WebAssembly) SIMD. SIMD stands for “Single instruction, multiple data” and is a technology that permits CPUs to do some parallel computation, often speeding up the whole program. And that’s exactly why we tried it out recently.

We got positive results.

The need for performance on the Web

The Web platform is often overlooked as a viable target, because of its less-than-ideal environment and its perceived poor performance. And the perception is somewhat right: the Web environment has a lot of security-related quirks to take into account—the user needs to interact with a game frame before the browser allows it to play any sound¹. Also, due to bandwidth and compatibility reasons, you rarely see high-fidelity games being played on a browser. Performance is better achieved when running software natively on the operating system.

But don’t underestimate the potential of the Web platform. As I explained in broad terms at the talk I gave at the last GodotCon Boston 2025, the Web has caught up a lot since the days of Flash games. Not only are there more people playing Web games every year, but standards and browsers improve every year in functionality and in performance.

And that’s why we are interested in using WASM SIMD.

WASM SIMD Benchmarks

Our resident benchmark expert Hugo Locurcio (better known as Calinou) ran the numbers for us on a stress test I made. We wanted to compare standard builds to builds with WASM SIMD enabled.

Note: You may try to replicate his results, but be aware that he has a beast of a machine. Here are his PC’s specifications:

  • CPU: Intel Core i9-13900K
  • GPU: NVIDIA GeForce RTX 4090
  • RAM: 64 GB (2×32 GB DDR5-5800 CL30)
  • SSD: Solidigm P44 Pro 2 TB
  • OS: Linux (Fedora 42)

I built a Jolt physics stress test from a scene initially made by passivestar. By spawning more and more barrels into the contraption, we can easily test the performance difference between the WASM SIMD build and the other.

|   | Without WASM SIMD | With WASM SIMD | Improvement (approx.) |
|----|----|----|----|
| Test links | Link | Link | – |
| Firefox 138 (“+100 barrels” 3 times) | (screenshot) | (screenshot) | – |
| Firefox 138 (“+100 barrels” 6 times) | (screenshot) | (screenshot) | 10.17×* |
| Chromium 134 (“+100 barrels” 3 times) | (screenshot) | (screenshot) | 1.37× |
| Chromium 134 (“+100 barrels” 6 times) | (screenshot) | (screenshot) | 14.17×* |

*Please note that once the physics engine enters a “spiral of death”, it is common for the framerate to drop to single digits, SIMD or not. These tests don’t prove 10× to 15× CPU computing speed improvements, but rather that games will be more resilient to framerate drops on the same machine in the same circumstances. The 1.5× to 2× numbers are more representative here of the performance gains by WASM SIMD.

What it means for your games

Starting with 4.5 dev 5, you can expect your Web games to run a little bit more smoothly, without having to do anything. Especially when things get chaotic (for your CPU). It isn’t a silver bullet for poorly optimized games, but it will help nonetheless. Also, note that it cannot do anything for GPU rendering bottlenecks.

Be aware that the stress tests are meant by nature to only test the worst-case scenarios, so you may not see such large improvements in normal circumstances. But it’s nice to see such stark improvements when the worst happens.

Availability

From here on out, the official 4.5 release templates will only support WebAssembly SIMD-compatible browsers, in order to keep the template sizes small. We generally aim to maintain compatibility with the oldest devices we can. But in this case, the performance gains are too large to ignore, and the chances of users having browsers that far out of date are too small relative to the potential benefits.

If you need to use non-SIMD templates, don’t fret. You can always build the Godot Editor and the engine templates without WebAssembly SIMD support by using the wasm_simd=no build option.
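Assuming a checkout of the Godot source tree with the SCons build system and the Emscripten SDK set up, the non-SIMD templates could be built roughly like this (the target names follow Godot 4's build system; `wasm_simd=no` is the option mentioned above):

```shell
# From the root of a Godot source checkout:
scons platform=web target=template_release wasm_simd=no
scons platform=web target=template_debug wasm_simd=no
```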

What’s next?

As I wrote in my last blog post, we’re currently working very hard to make C#/.NET exports a reality. We do have a promising prototype; we just need to make sure that it’s production-ready.

I also mentioned in that article that I wanted to concentrate on improving our asset loading game. Preloading an entire game before even starting it hinders the ability to use Godot for commercial Web games. Once something is implemented to improve that issue, count on me to share the news with you.

  1. It’s either that, or we return to the old days of spam-webpages using the “Congratulations, you won!” sound effect when you least expect it. 

Adam Scott

Also published here

Photo by Haithem Ferdi on Unsplash

The Hidden Auditory Knowledge Inside Language Models

2026-04-05 16:14:59

Text-only LLMs may already know enough about sound to predict downstream audio model performance before an encoder is ever attached.

Why EHR Data Doesn't Fit Neat ML Tables

2026-04-05 15:44:59

Hospital data is sparse, irregular, and time-sensitive. Here's why standard machine learning struggles and event stream models work better.