
Dmytro Rukin on how fintech is shaping the next wave of startups in Brazil

2026-04-07 23:51:22

In Brazil, startups are no longer built around financial limitations; they are built on top of a strong financial infrastructure. What differentiates Brazil is not just market size, but the depth and maturity of its financial infrastructure, now robust enough to support rapid, large-scale innovation.

The numbers reflect this shift. Brazil has over 160 million internet users and more than 220 million active smartphones, creating one of the largest digital consumer bases in the world. In parallel, the country’s fintech sector has grown to 1,400+ companies, while venture capital investment consistently positions Brazil as the top destination in Latin America, often capturing 40–50% of total regional funding in recent years.

At the center of this transformation is Pix, the instant payment system launched by the Central Bank in 2020. Its adoption has been unprecedented: over 160 million users and more than 4 billion transactions per month as of 2025. Pix now accounts for a significant share of all electronic payments in Brazil, surpassing traditional methods in frequency. For startups, this means operating in a market where instant, low-cost payments are already embedded in daily behavior.

This level of infrastructure fundamentally changes how businesses are designed. In many Latin American markets, startups still need to navigate fragmented payment ecosystems, combining card processors, cash-based methods, and local bank transfers with varying levels of reliability. In Brazil, however, solutions like Pix and a more integrated financial system allow startups to build directly on top of widely adopted infrastructure from day one.

The result is faster go-to-market, lower operational complexity, and the ability to focus on product differentiation rather than financial plumbing.

“Brazil is one of the few markets where financial infrastructure has reached a level of maturity that directly accelerates startup growth,” says Dmytro Rukin, CEO of LaFinteca. “When payments, identity, and banking systems are already digitized and widely adopted, companies can scale in months instead of years.”

This environment is also enabling new business models. In Brazil, embedded finance is moving beyond early adoption into a core growth layer: the market is projected to reach over $18 billion by 2030, with consistent growth driven by payments, lending, and platform-based financial services.

Infrastructure providers like LaFinteca play a strategic role in this shift by connecting startups to local payment methods and financial systems, not only in Brazil but across Latin America. As more companies expand regionally, the ability to replicate performance and user experience across different markets becomes a key growth factor.

Looking forward, Brazil’s trajectory suggests that fintech will continue to shape how startups emerge and scale. With continued regulatory support, widespread digital adoption, and constant innovation in financial services, the country is setting the foundation for a more integrated and scalable startup ecosystem.

In this context, the next wave of startups in Brazil will be defined not just by digital-first models, but by how effectively they leverage financial infrastructure as a core component of their growth strategy. Between now and 2030, as instant payments, open finance, and embedded financial services continue to expand, infrastructure will increasingly determine which companies are able to scale, and which are not.

GitHub Users, Your Data Will Be Used to Train AI Models: Do You Care?

2026-04-07 23:00:07

Hey Hackers!

Welcome back to 3 Tech Polls, HackerNoon's Weekly Newsletter that curates results from our Poll of the Week and two related polls from around the web. Thank you for voting in our past polls.

This week, we’ll be discussing GitHub’s plans to use Copilot interaction data to train AI models, starting April 24th. Users can opt out, but even so, the community appears divided over the situation.

We’ll see what the HackerNoon community thinks about it. Plus, we’ll take a look at other companies’ AI models.



HackerNoon Poll

GitHub announced it will use Copilot interaction data, including code from your private repos, to train AI models by default starting April 24. The same week, Copilot got caught injecting ads into 1.5M+ pull requests. Developers are calling it peak enshittification. The opt-out clock is ticking. What's your play?

Unlike in previous polls, there was no landslide winner. All the options were within 8% of each other.

24% of voters chose acceptance, reasoning that other platforms already do this or will in the future anyway. However, 23% decided to opt out, believing that the trust they had is now completely gone. Another 20% also opted out but decided they would continue on GitHub. The final two options were tied at 16% each: people either didn’t care enough to do anything about it, or they were just fine with it.

Some also made their voices heard in the comments below, explaining how they felt about it all. One user said:

Not a dev, but I must admit that though I don't like how EVERYTHING is being fed to AI nowadays, it's inevitable 🥹


:::tip Want to say your piece? Share your thoughts on the poll results here.

:::

That’s what the HackerNoon community thinks about the subject. Now, let’s take a look at what the rest of the internet is discussing and debating.


🌐 From Around the Web: Polymarket Pick

Which company has the best Coding AI model end of April?

Companies are racing to deliver the best coding AI model, but according to Polymarket voters, it appears to be a lopsided race. Voters believe there is a 91% chance Anthropic will have the best coding AI model by the end of April. Second place is OpenAI with a 5% chance, and third place is DeepSeek with a 4% chance. The rest of the companies have less than a 1% chance of winning the race.

There’s always a chance that an underdog might win, but a 91% chance is very hard to beat.


🌐 From Around the Web: Kalshi Pick

Best AI at the end of 2026?

Over on Kalshi, there’s a similar poll; however, they’re wondering which company will have the best AI by the end of the year. And just like in the last poll, Anthropic is also winning with a 58% chance. The competition is a bit heavier this time around, though; second place is Google with a 25% chance, and third place is OpenAI with a 10.7% chance.

Although the odds are heavily in Anthropic’s favor, it’s anyone’s game.


:::tip 👉 Vote on this week’s poll: Should AI-generated content be taxed differently than human-created content?

:::

That’s all for this week.

Until next time, Hackers!


Engineering Resilience: A Deep Dive into Chaos Engineering in Distributed Systems

2026-04-07 19:52:29

Hyperscalers, with their inherently distributed networks and systems, have completely changed what “software reliability” means. With distributed systems, we can no longer rely on traditional QA alone; we must confront the chaos inherent in communication between multiple systems. Chaos Engineering is the practice of intentionally experimenting on a system, based on our assumptions and understanding, to surface its unknown behaviour, so we can gain confidence in its ability to withstand turbulent conditions in production.

Chaos Engineering is not a random discipline but a highly disciplined, systematic approach to finding vulnerabilities before they turn into 3 AM outages. As our architectures shift from predictable monoliths to microservices built on hundreds of interdependent components spread over multiple cloud providers and SaaS solutions, the ability to catch failure modes through standard unit or integration testing has hit its limit. To see why, let's start with the mental model behind the practice.

The Mental Model: Hypothesis Testing at Scale

To practice Chaos Engineering properly, engineering teams need to fundamentally shift how they think about complex systems. The new mental model goes like this: distributed systems are inherently chaotic, and failure isn't just possible, it's inevitable.


Systems need to embrace failure as a natural occurrence. - Amazon CTO Werner Vogels

Challenging the Fallacies of Distributed Computing

The need for Chaos Engineering comes down to the classic "Fallacies of Distributed Computing", a set of dangerous assumptions that lead to fragile architectures. When developers assume that the network is reliable, latency is zero, or bandwidth is infinite, they build systems unprepared for production. Chaos experiments are designed to break these assumptions and reveal where and when they fail.

| Assumptions of Distributed Computing | Production Reality | Chaos Experiment Countermeasure |
|----|----|----|
| The network is always reliable | Stuff breaks. Cables get cut, routers choke, and packets just drop in transit. | Dropping about 5% of packets or forcing a hard network partition. |
| Latency is zero | Physics gets in the way. Physical distance and crowded pipes mean data takes time to travel. | Slapping a 500ms to 2-second delay on traffic to see what times out or breaks. |
| Bandwidth is infinite | A sudden, unexpected traffic spike will easily choke our app. | Flooding the network to max out capacity and watching how the system handles the bottleneck. |
| Topology never changes | Pods spin up, instances die, and IP addresses shift constantly. | Killing random pods, containers, or EC2 instances without any warning. |
| Transport cost is zero | Moving data across regions actually costs real money and adds noticeable lag. | Cutting off a whole Availability Zone or region to see if fail-overs actually work. |

Moving from Unknowns to Knowns

It helps to think of our system as a matrix of knowns and unknowns. Traditional testing covers the "known knowns", the behaviours we expect and understand. Chaos Engineering targets the "unknown unknowns."

These are the bizarre, cascading failures and complex component interactions that nobody anticipates until the system falls over in production. By scientifically testing our assumptions, we drag these vulnerabilities out of the dark and into the light, building an immune system for the organisation's infrastructure.

The Scientific Method to Chaos

We can't just randomly unplug or shut down servers and call it Chaos Engineering. The practice relies on four core principles that ensure experiments yield real insights without harming the business.

1. Define Your Steady State

First, we need to define a baseline by answering the question: what does "normal" look like?

Define steady state business-level metrics that reflect the user experience, instead of focusing on CPU or memory usage.

| Steady State Metric Type | Examples | Why we need this |
|----|----|----|
| Business Output | Orders processed per second, new signups, or video stream starts. | If these numbers dip, the business is actively losing cash, and customers are probably complaining (if not now, then in a few minutes/hours). |
| User Experience | P99 latency spikes, error rates, and Time-To-First-Byte (TTFB). | These are the signals users feel first, often before business output moves at all. |
| Systemic Boundary | Raw throughput or successful HTTP 200 requests per second. | We need to start looking at the outside system as a giant black box (for the frontend, that would be the backend and vice versa). |

Once we've got a solid baseline, it's time to form a hypothesis we can actually measure and prove wrong. We can't just assert that "the system will handle it"; we need to quantify it.

Example Scenario: "If I nuke a container, the load balancer needs to catch it and reroute traffic in under 30 seconds. Meanwhile, our P99 latency shouldn't spike past 250ms, and the error rate needs to stay completely under 0.1%." If the system fails any of those specific checks, our assumption was wrong and we've got a bottleneck to fix.
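A hypothesis like this only pays off if it is machine-checkable. Here is a minimal sketch of encoding the three clauses above as an automated pass/fail check; the class and function names are illustrative, not from any particular chaos tool:

```python
from dataclasses import dataclass

@dataclass
class SteadyStateSnapshot:
    """Metrics sampled during the experiment window."""
    recovery_seconds: float   # time until the load balancer rerouted traffic
    p99_latency_ms: float     # worst-case user-facing latency
    error_rate: float         # fraction of failed requests (0.0 to 1.0)

def hypothesis_holds(s: SteadyStateSnapshot) -> bool:
    """Return True only if every clause of the hypothesis is satisfied."""
    return (
        s.recovery_seconds < 30       # reroute in under 30 seconds
        and s.p99_latency_ms <= 250   # P99 must not spike past 250ms
        and s.error_rate < 0.001      # error rate stays under 0.1%
    )

# A passing run and a failing run:
print(hypothesis_holds(SteadyStateSnapshot(12.0, 180.0, 0.0004)))  # True
print(hypothesis_holds(SteadyStateSnapshot(45.0, 310.0, 0.002)))   # False
```

The point is that "our assumption was wrong" becomes a boolean your pipeline can act on, rather than a judgment call made while staring at dashboards.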

2. Simulate Real-World Disruptions

Create chaos variables which will reflect production reality. Prioritise events based on their likelihood or potential blast radius—think in terms of server crashes, traffic spikes, or malformed API responses.

3. Run It in Production

Staging environments simply don't have the scale, messy data, or complex traffic patterns of the live system. Production is the only place where we can validate the authentic request path.

4. Minimize the Blast Radius

While running chaos experiments in production is important, protecting the customer is the ultimate priority.

Start small, target a single container or a tiny fraction of traffic. Widen the scope only once the system can handle a subset of experiments.

| Controlled Chaos Techniques | Implementation Strategy | Benefit |
|----|----|----|
| Targeted Blast Radius | Instead of hitting the main cluster, we target a single canary group or just one non-critical microservice. | It stops a controlled burn from turning into a massive forest fire that takes down the whole app. |
| Strict Time-Boxing | Hard-cap the experiment's duration so it auto-terminates after a fixed window. | Ensures we don't accidentally leave a broken state lingering in production while we are busy looking at dashboards. |
| Off-Peak Scheduling | Schedule the breakage for 2 AM on a Tuesday, or whenever our site is basically a ghost town. | If the system totally craters, only a tiny fraction of users will notice, keeping the business folks off our back. |
| The "Big Red Button" | Hook the chaos tool directly into our PagerDuty or DataDog alerts. If error rates spike unexpectedly, the test auto-kills itself. | It halts the damage before it gets bad enough to trigger an actual Sev-1 incident. |
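The "Big Red Button" and time-boxing ideas can be sketched as a simple watchdog loop: feed it the error-rate samples collected during the experiment, and it aborts the moment the guardrail is breached. The threshold and sample cap below are illustrative defaults, not values from any specific tool:

```python
def run_with_abort(samples, error_threshold=0.01, max_samples=60):
    """Simulated automatic abort: walk through error-rate samples taken
    during an experiment and halt as soon as the threshold is breached.

    Returns ("aborted", index) if the guardrail fired, otherwise
    ("completed", number_of_samples_inspected).
    """
    for i, err in enumerate(samples):
        if err > error_threshold:
            return ("aborted", i)    # kill the test before a Sev-1
        if i + 1 >= max_samples:     # strict time-boxing
            break
    return ("completed", min(len(samples), max_samples))

# A healthy run completes; a spike at the third sample triggers the abort.
print(run_with_abort([0.001] * 5))           # ('completed', 5)
print(run_with_abort([0.001, 0.002, 0.05]))  # ('aborted', 2)
```

In a real setup, the `samples` stream would come from your monitoring system (Prometheus, CloudWatch, etc.), and "abort" would call the chaos tool's stop API.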

Chaos Engineering in Practice

We need to start treating Chaos Engineering as an engineering habit, repeatable across a structured life-cycle.

Some Common Scenarios

  • Infrastructure: Killing EC2 instances or maxing out disks to test auto-scaling.

  • Network: Injecting packet loss or DNS failures to validate circuit breakers and retries.

  • Application: Exhausting connection pools or killing processes to ensure graceful degradation.

  • Dependencies: Simulating third-party API timeouts.
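The network and dependency scenarios above ultimately test the same question: does the caller degrade gracefully when a dependency times out? Here is a toy, self-contained circuit breaker illustrating the pattern a chaos experiment would validate; it is a teaching sketch, not a production library:

```python
class CircuitBreaker:
    """Toy circuit breaker: opens after `max_failures` consecutive
    failures, so a flaky dependency stops being called at all."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def call(self, dependency):
        if self.open:
            return "fallback"          # degrade gracefully instead of hanging
        try:
            result = dependency()
            self.failures = 0          # success resets the counter
            return result
        except TimeoutError:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True       # trip the breaker
            return "fallback"

def flaky():
    raise TimeoutError("simulated third-party API timeout")

cb = CircuitBreaker()
results = [cb.call(flaky) for _ in range(5)]
# After 3 timeouts the breaker opens; calls 4 and 5 never hit the dependency.
print(results, cb.open)  # ['fallback', 'fallback', 'fallback', 'fallback', 'fallback'] True
```

A chaos experiment for the "Dependencies" scenario would inject real timeouts into a staging or canary path and assert that the system behaves like this toy: fallbacks served, breaker open, no cascading failure.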

Automating Chaos Engineering

Manual tests don't scale. The real value comes when we automate these experiments in CI/CD pipelines (like GitHub Actions or Jenkins) so resilience is validated on every deploy.

| Scheduling Pattern | Frequency | Objective |
|----|----|----|
| CI/CD Pipeline Gate | Every single deployment. | Make sure the new code doesn't ruin platform resilience. |
| Scheduled Cadence | Weekly or monthly. | Catch configuration drift and keep the on-call team on their toes. |
| Game Days | Quarterly or ad hoc. | Get the whole team in a room, break something major on purpose, and see how fast we can fix it together. |
| Event-Driven Chaos | Real-time. | Trigger chaos dynamically based on specific system events or unexpected traffic surges. |


Tooling: AWS FIS vs. LitmusChaos

AWS Fault Injection Simulator (FIS)

If you’re already paying the AWS tax and locked into their ecosystem, just use the native Fault Injection Simulator (FIS). It’s fully managed, you don't need to install agents, and it hooks right into CloudWatch.

| AWS FIS Action Category | Example Actions | Impacted Services |
|----|----|----|
| Instance Management | stop-instances, terminate-instances | EC2, EKS, ECS |
| Resource Stress | cpu-stress, memory-stress, io-stress | EC2, Lambda |
| Connectivity/Network | disrupt-connectivity, inject-api-throttle | VPC, RDS, Kinesis |
| Managed Failover | failover-db-cluster, zonal-shift | RDS, NLB, ALB |

You can use it to kill EC2 instances, starve Lambdas of memory, or sever VPC connections to test your database failovers.
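As a sketch of what driving FIS programmatically looks like, the function below assembles a minimal stop-instances experiment template as a plain dict. The field names follow the shape of the FIS CreateExperimentTemplate request as I understand it; both ARNs are placeholders, and the actual API calls are left commented out since they require AWS credentials:

```python
def stop_instances_template(instance_arn: str, role_arn: str) -> dict:
    """Assemble a minimal FIS experiment template as a plain dict.
    ARNs are placeholders; field names mirror the FIS
    CreateExperimentTemplate request shape."""
    return {
        "description": "Chaos test: stop one EC2 instance and watch recovery",
        "roleArn": role_arn,
        # Stop conditions are FIS's built-in "big red button"; "none"
        # means no CloudWatch alarm is wired up in this sketch.
        "stopConditions": [{"source": "none"}],
        "targets": {
            "one-instance": {
                "resourceType": "aws:ec2:instance",
                "resourceArns": [instance_arn],
                "selectionMode": "ALL",
            }
        },
        "actions": {
            "stop": {
                "actionId": "aws:ec2:stop-instances",
                "targets": {"Instances": "one-instance"},
            }
        },
    }

# Placeholder ARNs -- substitute real ones from your account.
tpl = stop_instances_template(
    "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc12345",
    "arn:aws:iam::123456789012:role/fis-experiment-role",
)

# Creating and starting the experiment requires AWS credentials:
# import boto3
# fis = boto3.client("fis")
# template = fis.create_experiment_template(**tpl)
# fis.start_experiment(experimentTemplateId=template["experimentTemplate"]["id"])
```

Keeping the template as data means you can version it in git and review blast-radius changes like any other code change.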

LitmusChaos (The Cloud-Native Option)

For Kubernetes-heavy shops, LitmusChaos is an incredibly powerful open-source (CNCF) alternative. It uses a cloud-native architecture based on Kubernetes Operators and manages experiments via Custom Resource Definitions (CRDs).

| Feature Comparison | AWS FIS | LitmusChaos |
|----|----|----|
| Deployment Model | Managed SaaS | Self-managed |
| Kubernetes Integration | Deep AWS ecosystem | Native Kubernetes / cloud-agnostic |
| Extensibility | API/CLI controlled | CRD-based / pluggable probes |
| Experiment Discovery | FIS Scenario Library | ChaosHub public marketplace |

A massive advantage of Litmus is its "Resilience Probes," which continuously verify steady-state health via HTTP checks or Prometheus queries throughout the experiment life-cycle.

Security Chaos Engineering: DevSecOps Matures

This whole "break it on purpose" mindset is bleeding into cybersecurity, too. Security Chaos Engineering (SCE) tests defences by simulating common, preventable mistakes.

You probably won't get hacked by a nation-state zero-day exploit. More often, breaches happen because someone leaves an S3 bucket public or misconfigures a production IAM role. SCE validates whether your controls, logging, and alerts actually fire when those things happen. Try quietly disabling a security group rule and see whether your monitoring notices. Or "accidentally" commit fake AWS keys to a GitHub repo and see how quickly your security automation revokes them.

  • Disabling firewall rules to validate logging and alerting.
  • Deploying an overly permissive IAM role to trace lateral movement.
  • Simulating expired SSL certificates.
  • Injecting dummy credentials leaks to trigger automated rotation policies.
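One of those drills, the fake-credential commit, can be validated with a tiny scanner like the sketch below. The regex covers the documented AWS access key ID format ("AKIA" plus 16 uppercase letters/digits), and the key shown is AWS's public example value, not a real secret:

```python
import re

# AWS access key IDs follow a well-known pattern: "AKIA" followed by
# 16 uppercase letters or digits. The canary value below is fake.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_diff_for_keys(diff_text: str) -> list:
    """Return any strings in a commit diff that look like AWS access keys.
    A security chaos drill: plant a fake key and verify this fires."""
    return AWS_KEY_RE.findall(diff_text)

fake_diff = '+ aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\n+ region = "us-east-1"'
hits = scan_diff_for_keys(fake_diff)
print(hits)  # ['AKIAIOSFODNN7EXAMPLE']
```

The drill passes if your real pipeline (a pre-commit hook, a secret scanner, or automated key rotation) reacts to the planted canary within your target response time.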

The Business Case: ROI of Resilience

Building a chaos practice takes time and engineering cycles, so how do we sell it to leadership? Point to the cost of downtime.

Enterprise downtime can easily burn $4,000 to $15,000 per minute. Chaos Engineering attacks that risk by reducing Mean Time to Detection (MTTD) and Mean Time to Resolution (MTTR). Teams that actively practice chaos engineering often report a 30% reduction in P1 incidents.

| Resilience Metric | Description | Value Proposition |
|----|----|----|
| MTTD Reduction | Figuring out the system is broken before the roof completely caves in. | Shrinks the window where customers are actually feeling the pain during an outage. |
| MTTR Improvement | Recovering from a disruption and getting things back online fast. | The shorter the outage, the less money the company bleeds. |
| Availability Score | Percentage of service uptime (e.g., 99.95%). | Keeps customers trusting your platform and stops the company from getting slapped with massive SLA penalties. |
| Incident Frequency | How often the pager goes off at 3 AM for a critical, hair-on-fire outage. | Proves the long-term hardening of the system. |
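Those figures translate into a simple back-of-the-envelope model. The sketch below assumes a mid-range $9,500/minute cost and the ~30% reduction cited above; both numbers are illustrative inputs for the pitch, not measurements:

```python
def downtime_cost(minutes: float, cost_per_minute: float = 9_500.0) -> float:
    """Rough outage cost; the $9,500/min default sits mid-range of the
    $4,000-$15,000 figure above and is purely illustrative."""
    return minutes * cost_per_minute

def annual_chaos_savings(incidents_per_year: int, avg_minutes: float,
                         reduction: float = 0.30) -> float:
    """Savings if chaos practice trims ~30% off total incident minutes,
    whether via fewer P1s or faster MTTR. All inputs are assumptions."""
    return downtime_cost(incidents_per_year * avg_minutes) * reduction

# e.g., 12 P1 incidents a year at 45 minutes each works out to
# roughly $1.54M in avoided downtime under these assumptions.
print(annual_chaos_savings(12, 45))
```

Even with conservative inputs, the model usually dwarfs the engineering cost of a chaos practice, which is exactly the framing leadership needs.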

For SaaS and e-commerce, availability and uptime are revenue. Framing reliability as a competitive advantage makes the ROI conversation easier.

How Pioneers Built It

Netflix: Netflix essentially birthed the movement back in 2010 when they built Chaos Monkey. They literally wrote a script to randomly assassinate their own production servers. It was brutal, but it forced their developers to stop relying on sticky sessions and build truly stateless, resilient apps. A few years later, when an AWS regional outage completely melted down half the internet, Netflix didn't even blink.

LinkedIn: LinkedIn approaches it a bit differently with Project Waterbear. They basically treat resilience like an internal product. They use tools to intentionally make the site laggy or break specific features for a microscopic fraction of users. They lean hard on their A/B testing setup to keep the blast radius incredibly tight while they watch what breaks downstream.

Amazon: Before the insane traffic crushes of Prime Day or Black Friday, they don't just cross their fingers and hope for the best. They run massive, company-wide "Game Days" where they pretend an entire data centre just got hit by a meteor. It builds organisational muscle memory. When a real Sev-1 fire breaks out, nobody panics because the engineers have already drilled that exact scenario fifty times.

Good Observability - A Prerequisite

We cannot do Chaos Engineering without deep, world-class observability. If you inject a fault and can't immediately trace exactly how it cascades through your microservices, you aren't doing science; you're just a vandal breaking things in production. Lock down the four golden signals: latency, traffic, errors, and saturation. You have to be able to set a baseline, instantly spot a deviation, and prove the system recovered.

The Bottom Line

Chaos Engineering is not just a testing strategy. It is a shift in how we engineer reliability. In a world of infinite complexity, the only way to build trust in a system is to actively try to break it. By embracing failure, setting strict parameters, and automating experiments, engineering teams can stop putting out fires and start building systems that are far more resilient.


The Deepfake Paradox: Why Blockchain Holds the Key to Digital Trust

2026-04-07 19:45:05

Deepfake tech can now generate synthetic media that is virtually indistinguishable from authentic content. Surfshark’s data revealed that deepfake-related scams defrauded victims of $1.1 billion globally in 2025. This represents a threefold increase from 2024.

If human senses can be easily deceived, what foundation remains for establishing digital trust? Scott Stornetta, a co-inventor of blockchain timestamping cited in the Bitcoin whitepaper, envisions applying distributed trust mechanisms to solve this identity crisis.

The goal is not winning an unwinnable detection race against generation tools but rendering the entire detection paradigm irrelevant.

The Collapse of Visual Evidence

During a recent interview with Binance, Stornetta detailed the severity of this shift: "We're moving into a world where, due to AI, while it has so many benefits, we're also moving into a world where seeing is no longer believing," Stornetta explained. "The question in my mind has been, the last couple of years: is there a way to use similar principles of distributing that trust broadly, so that even if you can't trust your eyes, you can trust that the person you think you're interacting with is in fact that person?"

Source: DeepStrike

This challenge is accelerating: deepfakes reached a level of realism high enough to reliably fool non-expert viewers last year. DeepStrike tracked a dramatic rise in deepfakes, from 500,000 online deepfakes in 2023 to 8 million in 2025, which it reports as nearly 900% annual growth. Surfshark's analysis shows over 80% of the $1.1 billion in related losses took place on social platforms.

Dr. Nadia Naffi from UNESCO characterizes this as a crisis of knowing itself. Our standard mechanisms for establishing truth face an unprecedented epistemological assault.

Why Detection Is a Losing Battle

Relying on software to catch synthetic media is a flawed strategy. Detection tools lag behind creation technologies in an unwinnable arms race. This creates the liar's dividend, where the sheer volume of synthetic media allows bad actors to dismiss authentic recordings as probable fakes. Neither belief nor disbelief in digital evidence can be easily justified.

The World Economic Forum warns that moderate-quality face-swapping models using camera injection techniques can deceive biometric systems, identifying five trends that will accelerate this risk over the next 12 to 15 months.

Stornetta compares the current anxiety to the famous Indiana Jones marketplace scene, where a menacing swordsman builds tension until the protagonist draws a revolver. The narrative questions how society will cope with worsening deepfakes, but the solution requires a different tool. The answer is implementing systems that render detection unnecessary.

Distributed Trust as the Foundation

Blockchain offers a way to establish verifiable provenance. "The narrative is that these deep fakes are going to get worse and worse. And how will we cope?" Stornetta stated. "There's this dramatic build up, and the answer in fact is we can make deep fakes irrelevant and just move on."

His core principle suggests that just as developers distributed trust across many participants to make a trusted third party obsolete, a similar architecture can verify identity. Sovereign implementations are underway, with Bhutan rolling out its National Digital Identity platform on Ethereum. The project is scheduled for early 2026, and its success would make Bhutan the first country to anchor a population-scale identity system on a public network.

According to Vitalik Buterin, decentralized digital identity empowers people by giving them more secure control over their data and their online lives. Indeed, blockchain enables digital scarcity by allowing creators to record content authenticity on a public ledger. And no one can change the data there once the blocks are added to the chain.
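The immutability claim can be illustrated with a toy hash chain: each record commits to the content's hash and to the previous block, so altering any earlier entry invalidates every hash after it. This is a teaching sketch of the principle behind blockchain timestamping, not a real ledger:

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_block(chain, content: bytes, timestamp: float):
    """Append a record committing to the content hash and the previous block."""
    prev = chain[-1]["block_hash"] if chain else "0" * 64
    record = {"content_hash": sha256(content), "prev": prev, "ts": timestamp}
    record["block_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
    chain.append(record)
    return record

def verify(chain) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("content_hash", "prev", "ts")}
        if rec["prev"] != prev or rec["block_hash"] != sha256(
            json.dumps(body, sort_keys=True).encode()
        ):
            return False
        prev = rec["block_hash"]
    return True

chain = []
add_block(chain, b"original video frame data", 1700000000.0)
add_block(chain, b"a second asset", 1700000100.0)
print(verify(chain))  # True: intact chain verifies
chain[0]["content_hash"] = sha256(b"forged")  # tamper with the first record
print(verify(chain))  # False: verification now fails
```

A real public ledger adds consensus and wide replication on top of exactly this linkage, which is what makes recorded provenance practically unchangeable.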

Real-World Implementation Signals

Bhutan's identity initiative aligns with its broader digital asset strategy, holding approximately 6,370 BTC worth $725 million and partnering with Binance Pay for tourism ecosystem payments. In regions with rapid digital adoption, the urgency is acute.

A Smile ID report found the crypto industry witnessed the highest cases of identity fraud in Africa compared to other sectors, noting that biometric digital identity systems are harder to fake than traditional text-based ID methods. WEF research confirms deepfake attacks increasingly target KYC processes.

Financial institutions documented sophisticated face-swapping assaults in the report. Industry analysis highlights that agentic AI and stablecoins are among five major trends redefining anti-money laundering in 2026, making robust digital identity systems essential. Binance Co-CEO Richard Teng recently commented on this, “With agentic AI emerging, crypto and stablecoins will become the mode through which agentic AI facilitates activities such as booking hotels, making payments, and more.”

Binance's End of Year Report demonstrates this pivot and details 24 AI initiatives across compliance and over 100 AI models deployed for anti-fraud controls.

Trust Without Seeing

The ultimate goal is not to win an endless detection arms race, but to build an infrastructure that makes synthetic media completely irrelevant to trust decisions. From powering global payments to securing personal identity, blockchain principles are rapidly becoming the foundational underlayer for the AI era.

The technological capacity to authenticate reality independent of visual evidence already exists. As Stornetta suggests, the core challenge moving forward is no longer theoretical capability, but widespread implementation and adoption.


:::tip This story was distributed as a release by Jon Stojan under HackerNoon’s Business Blogging Program.

:::


The UK Must Choose Between Protecting Creators and Backing Big Tech in AI

2026-04-07 19:38:09

On March 6, the House of Lords Communications and Digital Committee issued a report calling on the UK government to protect creative industries from big tech.

The committee stressed that the government must choose between becoming a “world-leading home for responsible, licensing-based artificial intelligence development” and allowing a “drift” towards acceptance of large-scale use of unlicensed creative content by U.S.-based models, arguing that only the first path is in the UK’s interests.

The House of Lords believes, in fact, that it would be a “poor bet” for the UK government to allow changes to copyright that could undermine the UK’s creative industries – which contributed £124 billion (€143.58 billion) to the UK economy in 2023, compared to just £12 billion (€13.90 billion) from AI in 2024.

At the same time, the report suggests that weakening the UK’s copyright law would harm rightsholders and stall the licensing market. Instead, the Committee recommends the government develop a licensing-first regime that can support creators’ livelihoods while encouraging sustainable AI growth.

A hard line toward scraping intellectual property

The controversy over unauthorised scraping has remained at the heart of the AI race, with tools like ChatGPT, DALL-E, and Midjourney receiving criticism for being trained on the work of third-party rightsholders without compensation or consent. Such practices have led to legal action such as The New York Times’ lawsuit against OpenAI and Disney’s lawsuit against Midjourney.

From this perspective, the House of Lords’ report demonstrates an interest in protecting creative industries against scraping by frontier AI companies. It was also skeptical that introducing a commercial text and data mining exception for AI training would expand the AI sector in the country.

“Our creative industries face a clear and present danger from uncredited and unremunerated use of copyright material to train AI models. Photographers, musicians, authors and publishers are seeing their work fed into AI models, which then produce imitations that take employment and earning opportunities from the original creators,” committee chair Baroness Keeley said in the official press release.

“The government should now make clear it will not pursue a new text and data mining exception with an opt-out mechanism for training commercial AI models. Instead, it should focus on strengthening UK protections for creators, including against unauthorised digital replicates and ‘in the style of’ uses of creators’ work and identity,” Keeley continued.

Can the UK government rein in AI scraping?

The report puts some pressure on the UK to rein in AI scraping, but it is still up to the government whether it will implement these findings.

Given that Liz Kendall, Secretary of State for Science, Innovation, and Technology, recently announced a strategic partnership with Google DeepMind, it appears unlikely that the government would make a move that could risk alienating AI vendors in the region.

However, the report could increase pressure on the government to revise its policy. In conversation with 150sec, Mumtaz Kynaston-Pearson, principal legal counsel at global cybersecurity firm Mimecast, argued that the report will intensify pressure on ministers.

“But I’d temper expectations of rapid legislative change. The government has so far favoured a pro-innovation, sector-led approach, prioritising voluntary principles over hard regulation,” she added.

That said, as Kynaston-Pearson notes, the Data Use and Access Act 2025 requires both an economic impact assessment and an AI copyright report by March 2026, plus a new collective licensing framework expected later this year, through which stakeholders are likely to see incremental policy shifts.

“The Lords’ intervention has moved the Overton window: outright rejection of creator protections will now be politically harder. I’d expect some substantive action within 12-24 months though primary legislation remains some way off,” Kynaston-Pearson told 150sec.

Initial reactions to the report

The initial response to the House of Lords report has been mixed, split between supporters of AI regulation and critics:

“They destroy the argument that big tech should be given the country’s creative output for free, and they lay out the case for maintaining and even strengthening existing copyright law to protect creatives from exploitation,” Ed Newton-Rex, CEO of Fairly Trained, a non-profit that certifies training data for use in AI, posted on X.

On the other hand, Kay Jebelli, Senior Director for Europe at the Chamber of Progress, argued that, if adopted, the report “would significantly damage the UK’s AI ambitions” and “amplifies the false narrative that technology and creativity are at odds, and that existing rights holders must be compensated by AI companies for changing industry dynamics.”

There is a natural rift between those who want stronger protections for creatives and those who want to prioritize innovation in the AI industry. In any case, the report adds pressure on the government to adopt a licensing-first approach to AI training, something that has so far been absent from the market.


:::info Tim Keary, Journalist, 150Sec

:::


"To Tag or Not to Tag is the Question" - Prince Hamlet Turning in His Grave

2026-04-07 19:38:04

Like many others, I have been using one of the most popular workplace messaging platforms extensively for work. Over the last five years, I have noticed a habit that both bothers me and makes me curious: in a chat between exactly two people, or as some of us call it, a “1:1” or “one-on-one” chat, why do so many users still @mention or tag the other person? To simplify for those who may not be familiar with this messaging platform, it’s like chatting with one friend, mind it, just one friend, and calling their name out loudly. Is it necessary? If you are talking to me, you are talking to me. Is there a need to call my name with an emphasis where it pops up as an alert, pushing my mind to wake up, wondering whether the world is on fire or whether I left the geyser on before leaving home? Is this just a habit, a misuse of a feature, a false sense of urgency, or simple ignorance? I ask these questions because, last time I checked, the people sending these pings are still human beings, not robots blindly executing a script with zero awareness of the chaos they're causing on the other end. I don’t know, so I decided to probe a bit and explore what might be going on.


The heart-ache and the thousand natural shocks

I went down the rabbit hole (yes, I am creative; I can have Alice and Hamlet in the same storyline) to understand why this happens. My research assistants were the traditional Google Search and my aide, Claude AI. Credit where due. What I found is that the reasons behind this behaviour can be bucketed into three categories, phrased in a way that might feel familiar and perhaps wake up the subconscious.

(1) It’s not a bug, it’s a feature - To be fair, many people don't fully understand how notifications work and assume tagging is always necessary. That’s fine; I see an opportunity here to educate them. Fellow earthlings, breaking news: don’t assume, use your brain to decide, at least until you have outsourced your thinking to AI. Another reason could come from not trusting the recipient’s tech-savviness enough: they don't fully trust that the other person will see the message, so the tag feels like a double notification guarantee. But is it a guarantee, really? The point they are missing is that just because you can, doesn’t mean you have to.

(2) If others are doing it, it must be right - A lot of habits are picked up through observation. It’s how we teach kids: demonstrate a behaviour, and they learn it, often without formal teaching or questioning. Workplace norms spread the same way. If someone sees their seniors using a lot of tags in 1:1 chats, they just replicate it, assuming that’s expected and accepted. I understand doing it once in a while when urgent attention is genuinely needed. But as a default habit, why? It’s worth pausing to ask whether the tag adds anything at all.

(3) I need instant attention - Tagging feels like underlining the message or calling someone’s name loudly. It signals "This is important, I really mean YOU", even in a 1:1 where that's already obvious. Isn’t it obvious? Take a pause and think again: it’s a 1:1, so it is self-explanatory that the discussion is between you and the other person. Tagging someone, even in private, can feel like a subtle assertion of authority. It’s like summoning someone with a loudspeaker rather than just talking to them. It’s like a reminder that ‘You better address my chat immediately’. Is it needed?


That makes a calamity of so long life

I am not a neuroscience expert, but I am endlessly curious. A quick read of Kent Berridge’s work (Source: The debate over dopamine’s role in reward: the case for incentive salience) suggests that every ping, tag, or message can trigger a dopamine response in the brain. Your brain treats it as a potential threat or reward and forces attention toward it. You cannot fully ignore it even if you try. I also read a research paper on Frontiers (Source: Digital workplace technology intensity) and took away a related point: an unexpected ping, especially one that feels urgent (because of the tag), causes a small but real cortisol (stress hormone) release. If one experiences this repeatedly during the day, they end up running on stress. Do we want to delegate such stress to our colleagues?

There is also interesting research on the amygdala, an important component of the human brain. The amygdala is essentially your brain's smoke alarm. It constantly scans your environment for anything that could be dangerous, threatening, or emotionally significant. In our premise, the tagged ping activates the brain's alarm system. Even if the message turns out to be trivial, your amygdala has already fired, and that reaction time is wasted mental energy. A redundant, non-urgent tag-ping is still an interruption; your brain doesn't know it's non-urgent until after it has already broken focus. For knowledge workers doing creative or analytical work, repeated flow interruptions mean significantly reduced output quality, not just speed. It’s also ironic because the recipient often feels guilty for not responding immediately, which adds another layer of cognitive load.


Must give us pause: there’s the respect

In my opinion, if each of us becomes a little more thoughtful about how we communicate, it can create a butterfly effect for the people on the receiving end. Habits take dedication to cultivate, but it’s possible. As an observer (and someone who still believes we can learn and grow), here are some practices we can adopt and a few we can unlearn.

Here’s a simple rule of three to begin with -

(1) Use @mention in a 1:1 sparingly - Since both people already receive notifications for every message in a private chat, adding an @mention on top is unnecessary in most cases. Think of it this way: in 1:1 chats, @mentions are more of a luxury than a necessity. Reserve them only for when you are emphasising something genuinely urgent. Be judicious; you save your energy and the recipient’s, too.

(2) Use @mention in a 1:1 purposefully - When you do use an @mention, make sure it carries clear intent. I often get 1:1 chat messages where I am just tagged with a Hi. Well, I don’t mind a Hi, but I do mind just a Hi, especially when you are tagging me without giving any context. Let’s be honest: if you genuinely wished to say Hi to me, you could make that purposeful by skipping the tag and adding a note instead, something like, “Hi, thought of sending you a Hi and checking in if everything is fine at your end.” It’s better not to leave a message hanging with a tag and no follow-up. In today’s world of information overload and constant chat streaming, optimizing the chat message content helps both you and the recipient. When I get a chat message with a tag, a Hi, and no context, it feels like one of my parents is calling me from the other room, and that already makes me anxious, thinking I might have left our pet cat inside the shoe rack. Hence, let’s be purposeful and save the cat, ehh, I mean save the energy.

(3) Use @mention, don’t overuse it - Many cultures have a version of the “cry wolf” story: if every alarm is urgent, people stop believing any of them. When everything is tagged, nothing feels special. So the one time you truly need immediate attention, your signal may get ignored.


Let’s choose to Be… Thoughtful and Not Be… Mindless

You might be thinking, "Why is this all on the sender’s behaviour?" Of course, the recipient always has choices too, but the origin is usually the sender, so that’s where I focused in this article. If you’re reading this from the recipient’s side, remember you can set expectations: “What’s urgent about this?”, “Can this wait until tomorrow?”, and so on. That said, there’s a separate angle worth exploring: sometimes a sender escalates to tagging because the recipient is often unresponsive and progress stalls without a nudge. But that’s a topic for another day.

Small actions lead to change. I am not here to sweat over the small stuff, but small stuff accumulates, becomes culture, and gets normalised. Consider the virus: tiny, microscopic, seemingly harmless. Yet look what it can do to an entire world. Now imagine that virus, but for your focus and sanity, spreading one redundant @mention at a time. I'll spare you the rest of that mental image, but you get the idea. If we can be a bit more deliberate with @mentions in 1:1 chats, we can reduce unnecessary stress and keep “urgent” meaningful when it truly matters.

It was said of Hamlet that though this be madness, yet there is method in it. So, can we also try to find the method in this madness of notification fatigue and all things overload?