
RSS preview of the HackerNoon blog

Measuring Catastrophic Forgetting in AI

2026-03-19 06:00:17

TABLE OF LINKS

Abstract

1 Introduction

2 Related Work

3 Problem Formulation

4 Measuring Catastrophic Forgetting

5 Experimental Setup

6 Results

7 Discussion

8 Conclusion

9 Future Work and References


4 Measuring Catastrophic Forgetting

In this section, we examine the various ways in which researchers have proposed to measure catastrophic forgetting. The most prominent of these is retention. Retention-based metrics directly measure the drop in performance on a set of previously learned tasks after learning a new task. Retention has its roots in psychology (e.g., Barnes and Underwood (1959)), and McCloskey and Cohen (1989) used it as a measure of catastrophic forgetting. The simplest way of measuring the retention of a learning system is to train it on one task until it has mastered that task, then train it on a second task until it has mastered that second task, and then, finally, report the new performance on the first task. McCloskey and Cohen (1989) used this in a two-task setting, but more complicated formulations exist for situations with more than two tasks (e.g., see Kemker et al. (2018)). An alternative to retention that likewise appears in both the psychology and machine learning literatures is relearning. Relearning was the first formal metric used to quantify forgetting in the psychology community (Ebbinghaus, 1913), and was first used to measure catastrophic forgetting by Hetherington and Seidenberg (1989).
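The two-task retention procedure above can be sketched in a few lines. The toy setup below is an illustrative assumption, not the paper's experimental setting: a noiseless linear-regression "task" trained by online SGD, with "mastery" approximated by a fixed training budget.

```python
import numpy as np

def make_task(w, n=256, seed=0):
    # A noiseless linear-regression "task": learn y = x . w.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 2))
    return X, X @ w

def train(theta, X, y, lr=0.05, epochs=200):
    # Online SGD: the learner sees one example at a time.
    for _ in range(epochs):
        for x, t in zip(X, y):
            theta = theta + lr * (t - x @ theta) * x
    return theta

def mse(theta, X, y):
    return float(np.mean((X @ theta - y) ** 2))

# Two tasks that disagree about the target mapping.
XA, yA = make_task(np.array([1.0, -1.0]), seed=0)
XB, yB = make_task(np.array([-1.0, 1.0]), seed=1)

theta = train(np.zeros(2), XA, yA)    # master task A
err_A_before = mse(theta, XA, yA)     # performance on A, after A
theta = train(theta, XB, yB)          # master task B
err_A_after = mse(theta, XA, yA)      # performance on A, after B
retention_drop = err_A_after - err_A_before
print(f"error on A rose from {err_A_before:.4f} to {err_A_after:.4f}")
```

The large rise in task-A error after mastering task B is exactly what a retention-based metric reports.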

The simplest way of measuring relearning is to train a learning system on a first task to mastery, then train it on a second task to mastery, then train it on the first task to mastery again, and then, finally, report how much quicker the learning system mastered the first task the second time around versus the first time. While in some problems relearning is of lesser import than retention, in others it is much more significant. A simple example of such a problem is one where forgetting is made inevitable due to resource limitations, and the rapid reacquisition of knowledge is paramount. A third measure for catastrophic forgetting, activation overlap, was introduced in French (1991). In that work, French argued that catastrophic forgetting was a direct consequence of the overlap of the distributed representations of ANNs. He then postulated that catastrophic forgetting could be measured by quantifying the degree of this overlap exhibited by the ANN. The original formulation of the activation overlap of an ANN given a pair of samples looks at the activation of the hidden units in the ANN and measures the element-wise minimum of this between the samples. To bring this idea in line with contemporary thinking (e.g., Kornblith et al. (2019)) and modern network design, we propose instead using the dot product of these activations between the samples. Mathematically, we can thus write the activation overlap of a network with hidden units h0, h1, …, hn with respect to two samples a and b as

overlap(a, b) = h0(a)h0(b) + h1(a)h1(b) + ⋯ + hn(a)hn(b)
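Both formulations of activation overlap are easy to state in code. The sketch below assumes a toy one-layer ReLU network as a stand-in for "the hidden units"; the weights and samples are random placeholders, not anything from the paper.

```python
import numpy as np

def hidden_activations(x, W):
    # One ReLU hidden layer standing in for the ANN's hidden units.
    return np.maximum(0.0, W @ x)

def overlap_min(h_a, h_b):
    # French's (1991) original formulation: element-wise minimum, summed.
    return float(np.sum(np.minimum(h_a, h_b)))

def overlap_dot(h_a, h_b):
    # Dot-product variant proposed in the text: sum_i h_i(a) * h_i(b).
    return float(h_a @ h_b)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
a, b = rng.normal(size=4), rng.normal(size=4)
h_a, h_b = hidden_activations(a, W), hidden_activations(b, W)
print(overlap_min(h_a, h_b), overlap_dot(h_a, h_b))
```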

:::info Authors:

  1. Dylan R. Ashley
  2. Sina Ghiassian
  3. Richard S. Sutton

:::

:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::


Study Finds Optimizer Choice Significantly Impacts Model Retention

2026-03-19 06:00:14


2 Related Work

This section connects several closely related works to our own and examines how our work complements them. The first of these related works, Kemker et al. (2018), directly observed how different datasets and different metrics changed the effectiveness of contemporary algorithms designed to mitigate catastrophic forgetting. Our work extends their conclusions to non-retention-based metrics and to more closely related algorithms. Hetherington and Seidenberg (1989) demonstrated that the severity of the catastrophic forgetting shown in the experiments of McCloskey and Cohen (1989) was reduced if catastrophic forgetting was measured with relearning-based rather than retention-based metrics. Our work extends their ideas to more families of metrics and a more modern experimental setting. Goodfellow et al. (2013) looked at how different activation functions affected catastrophic forgetting and whether or not dropout could be used to reduce its severity. Our work extends their work to the choice of optimizer and the metric used to quantify catastrophic forgetting.

While we provide the first formal comparison of modern gradient-based optimizers with respect to the amount of catastrophic forgetting they experience, others have previously hypothesized that there could be a relation. Ratcliff (1990) contemplated the effect of momentum on their classic results around catastrophic forgetting and briefly experimented to confirm that their conclusions applied under both SGD and SGD with Momentum. While they observed only small differences, our work demonstrates that a more thorough experiment reveals a much more pronounced effect of the optimizer on the degree of catastrophic forgetting. Furthermore, our work includes more recent gradient-based optimizers in the comparison (i.e., RMSProp and Adam), which—as noted by Mirzadeh et al. (2020, p. 6)—are oddly absent from many contemporary learning systems designed to mitigate catastrophic forgetting.

3 Problem Formulation

In this section, we define the two problem formulations we will be considering in this work. These problem formulations are online supervised learning and online state value estimation in undiscounted, episodic reinforcement learning. The supervised learning task is to learn a mapping f : R^n → R from a set of examples (x0, y0), (x1, y1), …, (xn, yn). The supervised learning framework is a general one, as each xi could be anything from an image to the full text of a book, and each yi could be anything from the name of an animal to the average amount of time needed to read something. In the incremental online variant of supervised learning, each example (xt, yt) only becomes available to the learning system at time t and the learning system is expected to learn from only this example at time t. Reinforcement learning considers an agent interacting with an environment. Often this is formulated as a Markov Decision Process, where, at each time step t, the agent observes the current state of the environment St ∈ S, takes an action At ∈ A, and, for having taken action At when the environment is in state St, subsequently receives a reward Rt+1 ∈ R. In episodic reinforcement learning, this continues until the agent reaches a terminal state ST ∈ T ⊂ S. In undiscounted policy evaluation in reinforcement learning, the goal is to learn, for each state, the expected sum of rewards received before the episode terminates when following a given policy (Sutton and Barto, 2018, p. 74). Formally,

vπ(s) = Eπ[Rt+1 + Rt+2 + ⋯ + Rt+T | St = s],

where π is the policy mapping states to actions, and T is the number of steps left in the episode. We refer to vπ(s) as the value of state s under policy π. In the incremental online variant of value estimation in undiscounted episodic reinforcement learning, each transition (St−1, Rt, St) only becomes available to the learning system at time t and the learning system is expected to learn from only this transition at time t.
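One standard learner for this incremental online setting is tabular TD(0), which updates from exactly one transition (St−1, Rt, St) at a time. The sketch below uses a hypothetical three-state chain with reward 1 per step and an illustrative step size; it is our example, not the paper's setup.

```python
# Tabular TD(0) on a deterministic chain: 0 -> 1 -> 2 -> terminal,
# reward 1 per step, undiscounted. True values: v(0)=3, v(1)=2, v(2)=1.
N_STATES, TERMINAL, ALPHA = 3, 3, 0.1
v = [0.0] * (N_STATES + 1)        # v[TERMINAL] stays 0 by construction

for _ in range(2000):             # episodes
    s = 0
    while s != TERMINAL:
        s_next, r = s + 1, 1.0    # one transition (S_{t-1}, R_t, S_t)
        # Incremental online update from this single transition only.
        v[s] += ALPHA * (r + v[s_next] - v[s])
        s = s_next

print([round(x, 2) for x in v[:3]])
```

Each state's estimate converges to the undiscounted expected return from that state, matching the definition of vπ(s) above.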


:::info Authors:

  1. Dylan R. Ashley
  2. Sina Ghiassian
  3. Richard S. Sutton

:::

:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::


The Patient Digital Twin Has No Inner Life and That Is a Design Failure

2026-03-19 05:35:12

Healthcare is getting very good at building digital representations of the body.

We can model organs, simulate disease progression, track treatment response, and predict clinical risk with increasing precision. Hospitals are investing in smarter monitoring, smarter imaging, smarter decision support, and smarter workflow systems. On paper, this looks like progress. And in many ways, it is.

But there is a problem sitting at the center of all this innovation.

The patient inside these systems is still strangely incomplete.

We are building digital twins that can approximate physiology, but not fear. We can track biomarkers, but not the collapse of emotional stability that often begins long before a patient says a word. We can map the tumor, the lesion, the heart rhythm, or the sleep cycle, but we still do not know how to model what it feels like to be inside a body that no longer feels safe.

This is not a minor gap. It is a design failure.

A patient is not only a medical object. A patient is also a nervous system under pressure. A person entering treatment is dealing with anticipation, stress, confusion, helplessness, sensory overload, and often, a quiet loss of control. These are not soft side effects. They shape how people tolerate treatment, how they make decisions, how they recover, and whether they can stay psychologically connected to themselves through the process.

Yet, most healthcare technologies still treat the inner experience as secondary. Something subjective. Something too vague to measure. Something to address later, if there is time.

There usually is not.

This is one of the reasons so many advanced systems still feel emotionally primitive. They are optimized to detect, classify, and respond to clinical events. They are not built to understand destabilization before it turns into a visible crisis. They do not recognize when the patient has begun to dissociate from their own body, when fear is narrowing cognitive capacity, or when exhaustion has reduced a person’s ability to process information or participate in care.

If we are serious about digital twins in healthcare, we need to ask a more uncomfortable question: what exactly are we twinning?

If the answer is only anatomy, pathology, and workflow, then we are not building a twin of the patient. We are building a twin of the case.

That distinction matters.

A case can be managed. A patient has to live through what the case means.

This is where the next generation of healthcare design needs to evolve. The missing layer is not only more data. It is a different category of data. We need systems that are capable of registering emotional load, sensory tolerance, cognitive fatigue, and changes in self-perception. We need environments that do not simply deliver care, but help preserve the patient’s internal stability while care is happening.

That does not mean every human experience should be flattened into a metric. It means we should stop pretending the inner state is irrelevant just because it is harder to quantify.

There are already signals available to us. Voice changes, breathing patterns, body tension, gaze behavior, response latency, movement quality, orientation in space, tolerance to stimulation, patterns of withdrawal, etc.

Even the language patients use to describe themselves can reveal whether they still feel present in their body or whether they have started to retreat from it.

The question is not whether these states matter. The question is whether healthcare systems are willing to treat them as part of reality rather than as emotional background noise.

In my view, this is where immersive technology becomes especially important. Not because virtual reality is futuristic, but because it gives us a way to work with experience directly. It creates structured spaces where attention, fear, bodily awareness, and emotional regulation can be observed and supported in real time. In some cases, it allows patients to reconnect with a sense of agency before that sense disappears completely.

That is valuable because recovery does not begin at discharge. It begins much earlier, often at the moment a person starts losing contact with themselves.

This is the part many health systems still miss. A patient can be clinically stable and psychologically fragmented at the same time. The scans may improve while the person feels less and less real inside their own life. If our technologies are blind to that condition, then our intelligence is incomplete, no matter how advanced the interface looks.

I suspect this is where healthcare AI will eventually have to mature. Right now, much of the focus is on automation, efficiency, and prediction. Those are important goals. But the more healthcare becomes data-driven, the more dangerous it becomes to exclude the inner state from the model. Efficiency without psychological insight can produce cleaner workflows and worse human outcomes.

The future of care will not belong to the systems that merely know more about disease. It will belong to the systems that understand what illness does to presence, perception, and emotional continuity.

We do not need a sentimental version of healthcare technology. We need a more truthful one.

A real patient digital twin should not stop at the body. It should account for the lived strain of being a person under treatment. It should help clinicians recognize not only what is happening medically, but what is beginning to fracture internally. It should support intervention before emotional collapse hardens into disengagement, panic, or long-term trauma.

If we fail to design for that, we will keep producing impressive medical systems that remain incomplete at the human level.

And we will keep calling them intelligent, even though they still cannot see the full patient in front of them.

API Contract Drift - An Unsolved CI Problem

2026-03-19 05:15:32

Developers like to say their APIs are contract-first. In practice, many are contract-eventually. Of course, parts of the contract exist: the specification, the schema, and even the CI check. But the actual governance around change is often inconsistent and surprisingly fragile. What results is a partial answer - just enough automation to create a sense of confidence, but not really enough to ensure nothing breaks. This is a gap that shows up everywhere APIs live.

In OpenAPI workflows, teams worry about accidentally removing fields, tightening request requirements, or changing response shapes in ways that break consumers. In GraphQL, the concern is often more subtle: a schema can evolve in ways that are technically valid but operationally risky, especially when clients depend on fields, enum values, or assumptions that were never written down. In Protobuf, the concerns are even more explicit because wire compatibility forces engineers to think carefully about field numbers, message evolution, and long-lived consumers.

None of this is new. Schema evolution has been a known problem for years, and there are mature tools for checking it, as well as best practices for avoiding it. And yet, contract drift remains a recurring source of CI noise and production risk. While there is an abundance of diff tooling, the real problem is that most teams still do not have a reliable way to turn diff output into policy. Many engineering organizations run mixed contract ecosystems by default. One service may expose REST with OpenAPI, another uses GraphQL; internal systems talk over protobuf-backed interfaces. Sometimes all three exist in the same repository, or even the same deployment pipeline.

The obvious response is to use the best native tool for each format, which sounds like a reasonable solution. But the moment you have multiple schema ecosystems, you inherit multiple definitions of severity, multiple output formats, multiple assumptions about what counts as a breaking change, and multiple ways for CI to fail unclearly. One team’s “dangerous but acceptable” is a critical blocker for another team. One tool exits with a policy-relevant failure, while another exits with an execution error. A third tool simply produces a format no one outside that ecosystem wants to parse. Before long, the organization has not one contract policy, but several local interpretations of one.

Recurring Gaps

This kind of fragmentation creates a few recurring gaps.

1. Normalization

Most schema diff tools are good at answering a local question: what changed between version A and version B of this specification? What they do not solve on their own is the cross-ecosystem question: how should an organization reason consistently about those changes?

That matters because engineering teams do not really operate on raw diff output. They operate on categories like “fail the build,” “warn but allow,” “document and continue,” and “ignore temporarily with justification.” Those are policy categories, not tool categories. A breaking change in one schema system and a dangerous change in another may both deserve human review, but most teams do not have a clean way to express that consistently across repos and API styles.
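A minimal sketch of that normalization step: tool-native severity labels are mapped into shared policy categories. The ecosystems, labels, and table below are illustrative placeholders, not the output of any real diff tool.

```python
from enum import Enum

class Action(Enum):
    FAIL = "fail the build"
    WARN = "warn but allow"
    DOCUMENT = "document and continue"

# Illustrative mapping from tool-native severities to org-wide policy
# categories; a real table would be maintained per team and ecosystem.
POLICY = {
    ("openapi", "breaking"): Action.FAIL,
    ("openapi", "non-breaking"): Action.DOCUMENT,
    ("graphql", "breaking"): Action.FAIL,
    ("graphql", "dangerous"): Action.WARN,   # valid but operationally risky
    ("protobuf", "wire-incompatible"): Action.FAIL,
}

def normalize(ecosystem: str, severity: str) -> Action:
    # Unknown severities fail closed: surprise is treated as risk.
    return POLICY.get((ecosystem, severity), Action.FAIL)

print(normalize("graphql", "dangerous").value)
```

The design choice worth noting is the fail-closed default: a severity the table has never seen blocks the build rather than slipping through.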

2. Determinism

A surprising amount of schema checking in CI is still tied too closely to the working tree. That sounds harmless until branches diverge, generated files drift, refs are missing, or CI compares the wrong state of the repository. Then, the same pull request may pass in one environment, fail in another, or produce an empty output for the wrong reason. This is the kind of failure mode engineers hate the most: an ambiguous, quiet failure. A diff check that says “no issues found” is only useful if you trust what was actually compared. In practice, many teams do not. They trust the intent of the script more than the mechanics of it.

3. Suppression Hygiene

This is where many otherwise sensible systems may start failing. Real teams need exceptions. A contract change may be intentional, and a migration may be phased. A consumer may already be updated, but not reflected in the local repo. A technically risky diff may actually be harmless within a known time window. All this leads to some kind of suppression mechanism being implemented, but most suppression mechanisms cause more harm than good.

They may be too broad, too opaque, too permanent, or too easy to forget. For example, a pipeline flag can be added temporarily, only to be forgotten. Findings can be ignored, or, in the worst-case scenario, a comment somewhere in a workflow file becomes the only record of why a breaking change was allowed.

This creates a second-order problem: the organization no longer knows whether its contract checks are actually strict or merely ceremonial. And once teams lose trust in the discipline around suppressions, they start distrusting the whole gate. At that point, even valid failures may get treated as process friction rather than useful signals.
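A healthier suppression is narrow, explained, and time-boxed. A minimal sketch, with illustrative field names and values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Suppression:
    # One explicit, reviewable exception to the contract gate.
    finding_id: str       # which specific diff finding is being allowed
    justification: str    # why it is safe (phased migration, updated consumer)
    expires: date         # suppressions must age out, never live forever

    def active(self, today: date) -> bool:
        return today < self.expires

sup = Suppression(
    finding_id="openapi:removed-field:/users.email",
    justification="Phased migration; all consumers updated in v2 clients.",
    expires=date(2026, 6, 1),
)
print(sup.active(date(2026, 5, 1)), sup.active(date(2026, 7, 1)))
```

Because the record carries its own expiry, a forgotten exception turns back into a failing check instead of a permanent blind spot.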

4. Error Semantics

This one feels under-discussed, but it matters a lot in CI. There is a major difference between “the contract changed in a way policy forbids” and “the check could not run correctly.” Those are not the same event. They should not share an exit path, and they definitely should not produce the same kind of message, yet many pipelines mix them together. Missing refs, missing files, missing binaries, malformed config, unsupported targets, and actual schema violations can all get flattened into some version of “the job failed.” That is terrible for engineering feedback loops, as it makes developers spend time debugging the check itself instead of understanding the contract decision.

A good gate needs to distinguish policy failure from execution failure very clearly, while many current setups do not.
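A sketch of that separation: the gate reserves one exit path for contract verdicts and another for checks that could not run. The exit codes, names, and callbacks are illustrative conventions, not any tool's real interface.

```python
import sys

# Distinct exit paths so CI can tell a contract decision from a broken check.
EXIT_OK, EXIT_POLICY_VIOLATION, EXIT_EXECUTION_ERROR = 0, 1, 2

class ExecutionError(Exception):
    """The check itself could not run (missing ref, file, binary, config)."""

def run_gate(load_specs, check_policy) -> int:
    try:
        base, head = load_specs()
    except ExecutionError as e:
        print(f"gate could not run: {e}", file=sys.stderr)
        return EXIT_EXECUTION_ERROR    # not a verdict about the contract
    violations = check_policy(base, head)
    if violations:
        print(f"{len(violations)} policy violation(s)", file=sys.stderr)
        return EXIT_POLICY_VIOLATION   # a deliberate contract decision
    return EXIT_OK

def missing_ref():
    raise ExecutionError("base ref not found")

# A missing ref surfaces as code 2, never as a fake "no issues found".
print(run_gate(missing_ref, lambda base, head: []))
```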

5. Human Readability

This is another place where local tool quality does not automatically scale into organizational usability. A specialized schema diff tool may produce excellent output for people who already understand that ecosystem deeply. But CI is not read only by schema experts: it is read by product engineers, reviewers, release managers, and others, and if the output is technically correct but hard to understand, it loses much of its value.

What people usually need is a compact answer to a simpler question: what changed, how serious is it, what is suppressed, what failed to run, and what action is expected from me right now?
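That compact answer can literally be one line. A minimal sketch, with hypothetical inputs standing in for real gate results:

```python
def summarize(changed, failing, suppressed, exec_errors, next_action):
    # One compact line answering the reviewer's actual questions:
    # what changed, how serious, what's suppressed, what broke, what now.
    return (f"{changed} change(s); {failing} block the build; "
            f"{suppressed} suppressed with justification; "
            f"{exec_errors} check(s) failed to run; "
            f"action: {next_action}")

line = summarize(5, 1, 2, 0, "review removed field /users.email")
print(line)
```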

The Bigger Issue

All of these gaps point to the same broader issue: schema checking is mature, but schema governance is not. Most teams have some ability to compare specs, but far fewer have a coherent model for enforcing change policy across different API technologies, repositories, and team habits. In other words, the hard part is not diffing, but operationalizing the diff.

I think that is why API contract drift continues to produce outsized pain relative to how small many of the changes look on paper. Even single, tiny changes, like a removed property, a narrowed enum, or a changed requirement level, can, at scale, accumulate into broken clients, confusing deploys, rollback risk, and a slow erosion of trust between producers and consumers.

This is especially visible in organizations that are otherwise fairly mature. They have a CI setup, typed interfaces, schema files in version control, maybe even ecosystem-specific contract checks. But the checks often stop one layer short of what is actually needed: a shared policy model, deterministic comparisons between refs, explicit suppression discipline, and outputs that make sense across technical boundaries.

The issue is not bad diff algorithms, nor engineers who do not care to maintain contracts. Rather, it is that contract drift is still usually managed as a collection of local checks instead of as a coherent governance problem.

What a Better Layer Would Need to Answer

There are plenty of teams that do not need to solve this fully. If you have one API style, one disciplined team, one repo, native tools plus a small wrapper may be enough.

But once you have multiple schema ecosystems or multiple services evolving in parallel, ad hoc checking starts to break down. At that point, what you need is not more raw detection - you need a policy layer.

That could take many forms, and it does not have to be one specific implementation. But it does need to answer the same core questions clearly:

  • What exactly changed?
  • How risky is it?
  • Should this fail the build, and if not, why?
  • Was that exception intentional?
  • Will it expire?
  • Can a developer tell the difference between a policy decision and a broken check?


Aster Expands WLFI Collaboration, Launches USD1-Denominated Perpetual Markets

2026-03-19 00:46:49

George Town, British Virgin Islands, March 18th, 2026/Chainwire/--Aster, a trading ecosystem backed by YZi Labs, today announced a major expansion of its collaboration with World Liberty Financial (WLFI).

The collaboration introduces USD1-denominated perpetual contracts and new trading incentives, including WLFI token rewards and reduced fees on USD1 pairs, while also allowing users to earn additional rewards on their holdings.

The integration is intended to support USD1 liquidity on the platform, laying the groundwork for Aster Chain, the project's newly-launched Layer 1 blockchain.

Building a Diverse Foundation for Aster Chain

Adding USD1 as collateral and launching USD1-denominated perpetual markets reduces Aster's reliance on any single stablecoin, giving users greater flexibility as Aster Chain launches.

WLFI's global community helps support Aster’s efforts to expand access to USD1 markets within DeFi.

"Aster Chain's success depends on the depth of its underlying liquidity," said Leonard, CEO at Aster. "By bringing USD1 into our core trading engine during this phase, we're building the trading foundation for the Aster Chain launch. Our 0-bps maker fees are designed to encourage participation in USD1 markets on Aster ahead of the mainnet launch."

“Perpetual markets are where a significant portion of trading volume lives. Aster listing USD1 perp pairs and matching USDT collateral ratios means traders can use USD1 in a manner similar to any major stablecoin. That's the bar we set: functional parity, rather than positioning USD1 as a secondary option,” said Zak Folkman, Co-founder & COO of World Liberty Financial.

Establishing the USD1 Trading Hub

Aster supports USD1-denominated perpetual contracts, launching with BTC, ETH, and SOL pairs, with an additional 10+ pairs planned in the coming weeks.

To encourage market participation, Aster is offering zero-bps maker fees and a competitive 0.5-bps taker fee. USD1 is also supported as a core margin asset and collateral, with a collateral ratio on par with USDT – allowing traders to maximize capital efficiency.

Rewards for Early Adopters

This partnership introduces several incentives as part of Aster Chain's mainnet launch:

  • USD1 Perp Trading Rewards: Up to 2.5 million WLFI tokens distributed monthly through the USD1 perpetual trading incentive program based on trading activity, with rewards distributed weekly. WLFI reserves all rights regarding program interpretation and distribution.
  • USD1 Holding Incentives: Users holding USD1 on Aster may be eligible to participate in platform incentive programs.
  • Reduced Trading Fees: Zero maker fees and 0.5-bps taker fees on all USD1 pairs, a significant reduction compared to USDT pairs.*

Aster will also launch tracking tools including integrated Points Program entry points across web and mobile, allowing users to monitor their progress and participation in early Aster Chain market activity.

*Aster's standard taker fee on USDT pairs is 4 bps. USD1 taker fee is 0.5 bps, representing an approximate 87.5% reduction. Maker fees on USD1 pairs are 0 bps. All fees are set by Aster and subject to change. See Aster's fee schedule at Aster fee page for current rates.

About Aster

Aster is a privacy-first onchain trading platform backed by YZi Labs, featuring innovations like Hidden Orders to shield user trading activity. It offers perpetual contracts across crypto, stocks and commodities, as well as crypto spot trading, and is powered by Aster Chain, a Layer 1 blockchain built to power the future of decentralized finance.

Users can learn more about Aster on the official website or follow Aster on X.

About World Liberty Financial (WLFI)

World Liberty Financial (WLFI) operates at the intersection of traditional financial infrastructure and blockchain innovation, creating accessible, transparent, and scalable solutions for a new era of digital finance.

Contact

PR & Content Manager

Lola Chen

Aster

[email protected]

:::tip This story was published as a press release by Chainwire under HackerNoon’s Business Blogging Program

:::

Disclaimer:

This article is for informational purposes only and does not constitute investment advice. Cryptocurrencies are speculative, complex, and involve high risks. This can mean high price volatility and potential loss of your initial investment. You should consider your financial situation, investment purposes, and consult with a financial advisor before making any investment decisions. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not endorse or guarantee the accuracy, reliability, or completeness of the information stated in this article. #DYOR

Building Secure and Performant Web Applications Using Azure AD and .NET Core Identity on macOS

2026-03-19 00:45:58

Building web applications on macOS has become much more accessible over the past few years, particularly for ASP.NET Core developers. It is not only possible but genuinely pleasant to create secure, performant applications with .NET Core and its tight integration with Azure AD. If you are a macOS user who wants to combine modern authentication with robust performance, you are in the right place.

Getting Started on macOS with .NET Core Identity

macOS users may assume that ASP.NET Core development requires Windows, but .NET Core has made cross-platform development feel native. With just the .NET SDK and a favorite editor such as Visual Studio Code, a macOS developer can boot up a full-fledged web application that uses Identity features and Azure authentication.

Setting up .NET Core Identity on a Mac is easy. Identity ships with user account management: users can register, log in, change their passwords, and manage their roles. The identity system is flexible and has secure defaults, such as enforcing password strength and locking accounts after repeated failed attempts. These characteristics help establish a strong base for developing secure apps.

Security First in the Cloud

In modern application development, security is always in the spotlight. Most common vulnerabilities can be avoided by using the appropriate tools at the appropriate time. .NET Core Identity takes care of the fundamentals, such as password hashing and token generation, but when a developer on macOS is targeting enterprise scale, pairing it with Azure AD takes security to the next level.

Microsoft Entra ID, formerly known as Azure AD, enables developers to add enterprise-level access control and authentication. It extends a .NET Core application with features such as multi-factor authentication, cross-application single sign-on, and external identity providers, without the need to implement custom authentication code. For macOS developers, configuration can be managed from the terminal or from an integrated terminal in an editor such as Visual Studio Code, so setup is effortless and scriptable.

By prioritizing identity as the new perimeter rather than relying on conventional network boundaries, developers embrace a more cloud-friendly security model. This identity-first model verifies and authorizes users based on role rather than network location, and it is fully supported by platform-agnostic tools and SDKs in macOS-based development environments.

Performance That Doesn’t Compromise Security

Security has a reputation for slowing applications down, but that is no longer the case. ASP.NET Core applications today are designed to be fast, and adding Identity does not create a visible delay. Identity is integrated into the middleware pipeline so that authentication processes stay lean and secure. Authentication pipelines are streamlined, and cookie policies can be configured with absolute and sliding expiration to balance user-friendliness against session security.

Azure supplements this with monitoring tools that track authentication patterns and system performance. macOS developers can enable diagnostics and performance metrics that help them understand login attempts, failure rates, 2FA usage, and more, which can inform decisions to further optimize security and performance.

Customizing Identity on Your Terms

For macOS users, one of the most important features of .NET Core Identity is its customizability. Identity pages, such as Register and Login, can be scaffolded out and customized. Whether you use MVC, Razor Pages, or Blazor, the skeleton is modular, and it is simple to shape the experience. Because macOS supports all of these workflows through cross-platform tooling, you are never left behind on features.

Need to change password policies or cookie policies? You can configure all of these in your startup files. And, yes, these settings exist and work both in local development on macOS and when your application is deployed to Azure.

Seamless Integration with Modern Tooling

macOS developers have access to powerful tooling and smooth workflows. With VS Code, the .NET CLI, and built-in Git support, you can manage your identity settings, run database migrations, and even scaffold new components without ever leaving the environment. Azure integration works from the command line or through GitHub Actions, so developers can deploy safely and efficiently.

The integration of .NET Core Identity and Azure AD works, and it feels native on macOS. This setup helps whether you are crafting secure login flows, administering user roles, or making sure your application performs well during periods of high traffic. The cloud no longer cares which operating system you run, and that is liberating for development.

Final Thoughts

Performance and security do not have to be at odds. Both can coexist with the combination of Azure AD and .NET Core Identity, and this workflow is as native and streamlined on macOS as on any other platform. Developers do not need to compromise or switch environments to build secure, performant applications. Instead, they can concentrate on delivering quality software, working with tools that are grounded in sound design and built to scale.