
How to Learn .NET in 2026 (Without Getting Lost)

2026-02-03 03:11:16

Learning .NET in 2026 is very different from how it was a few years ago.

The platform has grown massively.
So has the confusion.

If you are new to .NET, or even coming back after a break, you will quickly run into questions like:

  • Should I start with ASP.NET Core or APIs?
  • Do I need Blazor now?
  • Which .NET version actually matters?
  • How much C# should I know before touching frameworks?
  • When do things like Docker, Azure, or DevOps come into play?

Most people don’t struggle because .NET is hard.
They struggle because they don’t know what to learn next.

The real problem is not learning, it’s direction

Today, learning .NET usually looks like this:

  • Jumping between random tutorials
  • Watching outdated videos
  • Mixing beginner and advanced topics too early
  • Copy-pasting code without understanding fundamentals

This leads to burnout, not progress.

Many beginners quit thinking “.NET is too big,” when in reality they simply didn’t have a clear path.

Common mistakes I see beginners make

After years of experience and mentoring developers, these mistakes show up again and again:

  • Skipping fundamentals like OOP, C#, and debugging
  • Jumping straight into frameworks without understanding the runtime
  • Learning tools before concepts
  • Trying to learn everything at once

Without structure, even the most skilled developers can get stuck.

What a structured roadmap actually solves

A good roadmap does not teach you everything.

It tells you:

  • What to learn first
  • What to ignore for now
  • When to go deeper
  • How topics connect

Instead of guessing, you follow a sequence that builds confidence step by step.

That’s the difference between consuming content and making progress.

A practical .NET roadmap for 2026

To solve this problem, I created a complete, step-by-step .NET roadmap based on real-world experience, not trends.

It covers:

  • Core fundamentals before frameworks
  • Web and API tracks
  • Data access and EF Core
  • Cloud and DevOps at the right stage
  • Advanced architecture and performance topics

Most importantly, it explains why each stage matters and when you should move forward.

You can explore the roadmap here:
👉 https://umerfarooqofficial.com/dotnet-roadmap.html

(One path. No noise.)

Final thoughts

You don’t need to rush.
You don’t need to learn everything.

You need a clear direction.

If you are serious about learning .NET in 2026, focus on fundamentals, follow a structured path, and let depth come naturally.

Progress comes from clarity, not speed.

Your Data Lakehouse Is Passive. Here’s How to Make It Agentic.

2026-02-03 03:09:47

Dremio Free 30-Day Trial: sign up and experience Agentic Analytics in minutes.

Building a modern data lakehouse from scratch is a massive undertaking. Data teams often find themselves stitching together a complex puzzle of open-source components, a process that can delay value and drain resources. This DIY approach often produces a brittle system weighed down by technical debt, delaying insights indefinitely.

There is a different path. The Dremio Agentic Lakehouse is a new breed of data platform built for AI agents and managed by AI agents. This article unveils five surprising and impactful ways this new approach delivers insights from day one, rather than creating a perpetual work-in-progress.

1. You Don't Just Query Your Data—You Have a Conversation With It

Perhaps the most surprising feature of the Dremio Agentic Lakehouse is the built-in AI Agent, which provides a truly conversational analytics experience. Any user, regardless of technical skill, can now ask questions in plain English and receive not only an answer but also the generated SQL and even automated visualizations.

The key is providing specific business context, which elevates a simple query into a strategic insight.

Okay Prompt

  • Show me sales data.

Great Prompt

  • Show me total sales revenue by region and customer segment for each month of 2025. Visualize this as a stacked bar chart with month on the x-axis.

For technical users, the AI Agent acts as an expert peer for code review. It can provide plain-English explanations of complex query logic and suggest optimizations, accelerating development and debugging.

This capability extends far beyond the Dremio UI. The Dremio MCP (Model Context Protocol) server, an open standard that allows AI applications to connect to data, lets you connect external AI clients like ChatGPT and Claude directly to your Dremio project. This integration transforms your lakehouse into a first-class citizen in any AI workflow, democratizing data access by removing the SQL barrier while respecting all underlying security and governance policies.

2. It Unifies Your Entire Data Estate, Without Moving a Thing

A common misstep is to think of a lakehouse platform as just a catalog. Dremio is a complete, high-performance query engine that acts as a central hub for all your data, wherever it resides. It can connect to and query a vast array of existing data sources in-place, including object storage like Amazon S3, databases like PostgreSQL and MongoDB, and traditional data warehouses such as Snowflake and Redshift.

This provides a strategic on-ramp for adoption. Analysts can immediately join data from legacy systems with new Apache Iceberg tables, enabling a smooth, incremental path to a modern data architecture without a disruptive migration.

To boost performance, Dremio intelligently delegates parts of the query to the source system using techniques like predicate pushdowns, ensuring federated queries are as efficient as possible.

By synthesizing Polaris-tracked tables with federated connectivity, Dremio serves as a single, governed entry point for the entire enterprise data estate, regardless of where that data physically resides.

3. Your Lakehouse Manages and Optimizes Itself, Autonomously

An Apache Iceberg lakehouse is not a set-it-and-forget-it system. Without constant maintenance, tables can accumulate thousands of small files and bloated metadata, which quickly degrades query performance.

Dremio puts Iceberg table management on autopilot. The platform runs background jobs that automatically handle compaction, clustering, and vacuuming. These autonomous processes improve query speed and reduce storage costs, transforming the data engineering function from reactive maintenance to proactive value creation.

Performance is further enhanced by Dremio Reflections, which are physically optimized copies of your data, similar to indexes or materialized views on steroids. With Autonomous Reflections, the Dremio engine learns from your usage patterns to automatically create, update, and drop these accelerations, making sub-second query performance the default.

Under the hood, Dremio’s performance is powered by Apache Arrow. In most data stacks, moving data between systems requires costly serialization and deserialization. Because Dremio uses Arrow as its native in-memory format, it eliminates this overhead entirely, ensuring fast processing within Dremio and across federated sources.

4. It Transforms Unstructured Dark Data into Governed Assets with SQL

Every organization has dark data. This includes valuable information locked away in unstructured files like PDFs, call transcripts, and legal documents sitting idle in data lakes.

Dremio unlocks this value by embedding Large Language Models directly into its SQL engine through native AI functions such as AI_GENERATE, AI_CLASSIFY, and AI_COMPLETE.

A user can run a single query using the LIST_FILES table function to discover thousands of PDF contracts in an S3 bucket. In the same CREATE TABLE AS SELECT statement, they can use AI_GENERATE to extract structured fields like vendor name, contract value, and expiration date. The result is a new, governed, and optimized Iceberg table.

This single query replaces document processing pipelines, OCR tools, and manual ETL jobs. It transforms the data lake into an interactive database where every document is queryable.

5. The Semantic Layer Becomes Your AI’s Brain

A major challenge for AI data assistants is hallucinations. These are confident but incorrect answers caused by missing business context.

Dremio’s AI Semantic Layer addresses this problem by acting as a business-friendly map that translates raw technical data into terms like churn rate or active customer. This layer teaches the AI your business language.

It moves beyond a passive catalog and becomes a dynamic knowledge base. You can even use the AI Agent to build this layer, such as asking it to create a medallion architecture with Bronze, Silver, and Gold views without writing complex ETL pipelines.

Dremio also uses generative AI to automate metadata creation. It generates table wikis and suggests relevant tags, resulting in a living, self-documenting data asset.

The defining challenge for data leaders in 2026 is no longer managing files. It is managing the context that allows AI to speak your business language.

Conclusion

The agentic lakehouse enables a core shift. It moves from a passive data repository to an active decision-making partner. By automating management, performance tuning, and documentation, Dremio frees data teams to focus on delivering value.

It creates a single source of truth that humans and AI agents can trust equally.

Now that your data can finally understand you, what is the first question you will ask?

Ready to start the conversation with your data?

Sign up for a 30-day free trial of Dremio's Agentic Lakehouse today.

Schemas and Data Modelling in Power BI

2026-02-03 03:08:38

When most people hear Power BI, they immediately think of dashboards, colorful charts, large numeric cards, clickable slicers, and polished visuals. These elements are what users interact with, so it is natural to assume they are what makes a report “good.”

In reality, however, dashboards are only the final layer. Long before any visual appears on the screen, critical decisions have already been made. These decisions determine whether a report is fast or slow, reliable or misleading, intuitive or frustrating. That earlier and often invisible stage is data modelling.

Data modelling in Power BI is the process of organizing data so that Power BI understands what the data represents and how different pieces of information relate to one another. It involves deciding:

  • which tables are needed?
  • what each table should contain?
  • how those tables should be connected?

When this structure is well designed, Power BI feels logical and predictable. When it is not, even simple questions can return confusing or incorrect results. In other words, the quality of a Power BI report is decided before the first chart is ever created.

What “Schema” Means in Power BI

In Power BI, a schema refers to the overall structure of the data model. This is not a theoretical concept; it is the actual layout you see in Model view, including the tables and the relationships between them.

A schema answers very practical questions:

  • What tables exist in the model?
  • Which tables store measurements, and which store descriptions?
  • When a user clicks a slicer, how does Power BI know which data to include?

Power BI does not “reason” about data in a human way. Instead, it follows the paths you define. The schema determines:

  • how filters move from one table to another,
  • how totals and averages are calculated,
  • and how fast visuals respond when users interact with the report.

Two schema patterns appear most frequently in Power BI models:

  • Star schema
  • Snowflake schema

Understanding the difference between these two explains why some Power BI models feel simple and trustworthy, while others feel fragile and unpredictable.

Fact Tables and Dimension Tables: Understanding the Roles of Tables

Most Power BI models are built using two types of tables. Understanding what each one does is the foundation of data modelling.

Fact Tables: Recording What Happened

A fact table records events. Each row represents something that actually occurred.

In a dataset such as Kenya crops data, a single row in the fact table might represent:

  • a specific crop,
  • grown in a specific county,
  • during a specific year or season,
  • with a measurable outcome such as yield in kilograms.

Because these events are recorded repeatedly over time, fact tables typically:

  • grow very large,
  • repeat the same crops or counties many times,
  • focus on numeric values that can be summed, averaged, or counted.

A fact table does not explain what a crop is or where a county is located. It simply records that something happened.

Dimension Tables: Giving Meaning to the Events

Dimension tables exist to describe and contextualize the facts. Instead of repeating names and descriptions in every row of the fact table, that information is stored once in separate tables, such as:

  • a Crop table that stores crop names and types,
  • a County table containing county names,
  • a Date table containing years or seasons.

Dimension tables typically:

  • change slowly compared to fact tables,
  • contain descriptive rather than numerical data,
  • are used to filter, group, and label results in reports.

When you select a county or crop in a slicer, Power BI relies on the dimension table to determine which rows in the fact table should be included. This separation is what makes analysis both efficient and accurate.

The Star Schema: A Structure That Matches How Power BI Thinks

The star schema is the most effective and widely recommended structure for Power BI models.

In a star schema:

  • one fact table sits at the center (for example, crop yield records),
  • each dimension table connects directly to that fact table (crop, county, date),
  • dimension tables do not connect to each other.

This structure aligns closely with how Power BI processes filters.

When you select a county in a slicer, Power BI:

  1. Looks at the County table.
  2. Identifies the selected county’s unique key.
  3. Follows the relationship directly to the fact table.
  4. Keeps only the matching rows.
  5. Performs calculations using those rows.

Because each dimension connects straight to the fact table, filters move directly to the data being analyzed, and Power BI does not need to pass through intermediary tables, which keeps calculations behaving consistently.

This means much of the analytical logic is handled by the structure itself, reducing the need for complex formulas later.

Why the Star Schema Performs Better in Power BI

Power BI stores data in columns and is optimized for fast aggregation. It performs best when relationships are simple and unambiguous.

In a star schema, you will observe that:

  • Power BI follows one clear relationship path,
  • fewer joins are required to answer questions,
  • the model is easier to understand and debug.

As a result, reports load faster, slicers respond more smoothly and DAX formulas tend to be shorter and easier to reason about.

The Snowflake Schema: A Bit More Complex

A snowflake schema starts with the same idea as a star schema but splits descriptive information across multiple related tables.

For example, instead of storing all location details in a single County table, the data might be organized as:

  • a County table stores county information,
  • a Region table stores regional information,
  • a Country table stores country information.

When a user selects a country, Power BI must follow a longer path before reaching the data: it starts at the Country table, moves to the Region table, then to the County table, and finally reaches the fact table.

Each additional step increases processing work for Power BI and increases the chance of errors if any relationship is incorrect.

While snowflake schemas reduce duplicated data, they create challenges in Power BI: filters must travel through multiple tables and more relationships must be managed, so it becomes harder to predict how calculations will behave.

For this reason, snowflake schemas are common in source systems but are often reshaped into star schemas for reporting.

Relationships: How Tables Actually Work Together

Relationships define how tables communicate and how filters flow.

When you select a county, crop, or year in a slicer, Power BI does not search the fact table directly. It looks at the dimension table, then identifies the matching key, then it follows the relationship to the fact table and filters the fact rows accordingly.

In a well-designed model:

  • each dimension table contains unique values (each crop or county appears once),
  • fact tables contain many related records linked to those values,
  • filters flow from dimension tables to the fact table.

This mirrors real-world logic: one county can have many crop records, and one crop can appear across many years.

Cardinality: Understanding “One” and “Many”

Cardinality describes how many rows in one table relate to rows in another.

  • One-to-Many means one row in a dimension table relates to many rows in the fact table.
  • One-to-One means one row matches exactly one row in another table (rare in reporting).
  • Many-to-Many means multiple rows relate to multiple rows (this can cause duplicated totals if not handled carefully).

Note: Incorrect cardinality may still produce results, but those results may not represent reality.

Why Good Data Modelling Matters

Data modelling affects every Power BI report in three key ways.

Performance

Simple structures reduce processing work, resulting in faster visuals and smoother interaction.

Accuracy

Correct relationships ensure each fact is counted once, preventing inflated totals and misleading averages.

Simplicity

Clear models make reports easier to build, understand, and maintain. Complex DAX is often a sign of a model that needs improvement.

Effective models typically:

  • separate measurements from descriptions,
  • use star schemas where possible,
  • define relationships clearly,
  • rely on the model to handle logic instead of forcing visuals to compensate.

When this foundation is solid, Power BI becomes easier to use and its results easier to trust. Schemas and data modelling directly determine whether Power BI produces reliable insight or confusing results. By understanding fact and dimension tables, choosing appropriate schemas, and defining relationships carefully, analysts create reports that are fast, accurate, and understandable. For more information, feel free to visit Microsoft's documentation on Power BI.

Also Feel free to leave a comment sharing how you approach data modelling in your own Power BI projects. Discussion and different perspectives are always welcome.

Building a Two-Tower Recommendation System

2026-02-03 03:05:13

I was using Algolia for search and recommendations on POSH, my ecommerce app. It worked great, but the bill kept growing. Every search request, every recommendation call—it all adds up when users are constantly browsing products.

So I built my own recommendation system using a two-tower model. It's the same approach YouTube and Google use: one tower represents products as vectors, the other represents users based on their behavior. To get recommendations, you just find products closest to the user's vector.

Here's how I built it.

Data Pipeline

Everything starts with user behavior. I use Firebase Analytics to track how users interact with products:

  • Product viewed — just browsing
  • Product clicked — showed interest
  • Added to cart — strong intent

Not all interactions are equal. Someone adding a product to cart is way more valuable than a passing view. So I weight them:

Event          Weight
View           0.1
Click          2.0
Add to cart    5.0
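
As a rough sketch, these weights can live in a simple lookup and be summed over a user's recent events to gauge intent. The event names and the score_interactions helper below are illustrative (the names follow the Firebase event naming shown later), not the app's actual code:

# Illustrative event weights matching the table above.
EVENT_WEIGHTS = {
    "product_viewed": 0.1,
    "product_clicked": 2.0,
    "product_added_to_cart": 5.0,
}

def score_interactions(event_names):
    """Sum the weights of a user's recent events to gauge overall intent."""
    return sum(EVENT_WEIGHTS.get(name, 0.0) for name in event_names)

# One add-to-cart outweighs many passing views.
print(score_interactions(["product_viewed"] * 10))    # ~1.0
print(score_interactions(["product_added_to_cart"]))  # 5.0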

Product Vectorization

All my products live in Elasticsearch. To make recommendations work, I need to represent each product as a vector — a list of 384 numbers that captures what the product is about.

I use the all-MiniLM-L6-v2 model from Sentence Transformers. It's fast, lightweight, and works well for semantic similarity.

For each product, I combine its attributes into a single text string:

Nike Air Max | by Nike | Shoes | Sneakers | Running | Blue color | premium

This includes:

  • Product name
  • Merchant name
  • Category hierarchy (parent → category → subcategory)
  • Color
  • Price tier (budget / mid-range / premium / luxury)

The model turns this text into a 384-dimensional vector. Products with similar attributes end up close together in vector space — a blue Nike sneaker will be near other blue sneakers, other Nike products, and other premium shoes.

These vectors get stored back in Elasticsearch as a dense_vector field, ready for similarity search.
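
Here is a minimal sketch of that vectorization step using sentence-transformers and the Elasticsearch Python client. The index name, field names, and product dictionary layout are assumptions for illustration; the real POSH index may differ:

from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def product_to_text(p):
    # Flatten the product's attributes into one string, as described above.
    return " | ".join([
        p["name"], f"by {p['merchant']}", p["parent_category"], p["category"],
        p["subcategory"], f"{p['color']} color", p["price_tier"],
    ])

def index_product(p):
    vector = model.encode(product_to_text(p), normalize_embeddings=True)  # 384 floats
    es.index(index="products", id=p["id"],
             document={**p, "product_vector": vector.tolist()})

This assumes the products index was created with a dense_vector mapping of 384 dimensions so the field can be used in similarity scoring.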

User Tower Architecture

This is the core of the system. The user tower takes someone's interaction history and outputs a single vector that represents their preferences.

Input: up to 20 recent interactions, each with:

  • The product's vector (384 dims)
  • The interaction type (view, click, or add-to-cart)

Output: one user vector (384 dims) that lives in the same space as product vectors

How it works

The model combines each product vector with an embedding for the interaction type. A clicked product gets different treatment than a viewed one.

Then it runs through a multi-head attention layer — this lets the model figure out which interactions matter most. Maybe that one add-to-cart from yesterday is more important than ten views from last week.

I also add recency decay. Newer interactions get higher weight. Someone's taste from yesterday matters more than what they looked at two weeks ago.

Finally, everything gets pooled into a single vector and normalized. This user vector now sits in the same 384-dimensional space as all the products.
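
Here is a minimal PyTorch sketch of a user tower along those lines. The layer sizes, the decay rate, and the pooling details are assumptions for illustration, not the exact production model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class UserTower(nn.Module):
    def __init__(self, dim=384, n_event_types=3, n_heads=4):
        super().__init__()
        self.event_emb = nn.Embedding(n_event_types, dim)  # view / click / add-to-cart
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, product_vecs, event_ids, ages_days):
        # product_vecs: (batch, seq, 384), event_ids: (batch, seq), ages_days: (batch, seq)
        x = product_vecs + self.event_emb(event_ids)       # fuse item vector with interaction type
        x, _ = self.attn(x, x, x)                          # let interactions attend to each other
        decay = torch.exp(-0.5 * ages_days).unsqueeze(-1)  # assumed recency decay rate
        pooled = (x * decay).sum(dim=1) / decay.sum(dim=1).clamp(min=1e-6)
        return F.normalize(self.proj(pooled), dim=-1)      # unit-length 384-dim user vector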

Training

I trained the model using contrastive learning. For each user:

  • Positive: the next product they actually interacted with
  • Negatives: 10 random products they didn't interact with

The model learns to push the user vector closer to products they'll engage with, and away from ones they won't.
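
A sampled-softmax style contrastive loss along these lines might look like the sketch below; the temperature value and tensor shapes are assumptions:

import torch
import torch.nn.functional as F

def contrastive_loss(user_vec, pos_vec, neg_vecs, temperature=0.1):
    # user_vec: (B, D), pos_vec: (B, D), neg_vecs: (B, K, D); all L2-normalized.
    pos = (user_vec * pos_vec).sum(dim=-1, keepdim=True)      # (B, 1) similarity to the positive
    neg = torch.einsum("bd,bkd->bk", user_vec, neg_vecs)      # (B, K) similarity to the negatives
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(user_vec.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)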

Real-Time Updates

Training the model is a one-time thing. But user preferences change constantly — someone might discover a new brand or shift from sneakers to boots. The system needs to keep up.

I use AWS SQS to handle this. When a user interacts with a product, Firebase sends an event, and a message lands in my queue:

{
  "customer_id": 12345,
  "product_id": 5678,
  "event_name": "product_clicked"
}

An SQS consumer picks it up and:

  1. Fetches the product's vector from Elasticsearch
  2. Loads the user's recent interaction history
  3. Runs it through the trained user tower model
  4. Saves the new user vector back to Elasticsearch

The whole thing takes milliseconds. By the time the user scrolls to the next page, their recommendations are already updated.

I also prune old interactions — anything older than 2 days gets dropped. This keeps the model focused on recent behavior, not what someone browsed months ago.
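
A stripped-down version of that consumer loop might look like the following. The queue URL, index names, and the load_recent_interactions / build_user_vector helpers (which would wrap the interaction store and the trained user tower) are placeholders, not the actual production code:

import json
import boto3
from elasticsearch import Elasticsearch

sqs = boto3.client("sqs")
es = Elasticsearch("http://localhost:9200")  # assumed local cluster
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789/interactions"  # placeholder

def handle(message):
    event = json.loads(message["Body"])  # {"customer_id", "product_id", "event_name"}
    # 1. Fetch the product's vector from Elasticsearch.
    product = es.get(index="products", id=event["product_id"])["_source"]
    # 2. Load the user's recent interaction history (already pruned to ~2 days).
    history = load_recent_interactions(event["customer_id"])  # placeholder helper
    history.append({"vector": product["product_vector"], "event": event["event_name"]})
    # 3. Run it through the trained user tower.
    user_vector = build_user_vector(history)  # placeholder wrapper around the model
    # 4. Save the new user vector back to Elasticsearch.
    es.index(index="user_vectors", id=event["customer_id"],
             document={"user_vector": user_vector})

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        handle(msg)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])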

Recommendations with Cosine Similarity

Now the fun part — actually recommending products.

Both user vectors and product vectors live in the same 384-dimensional space. To find relevant products, I just look for the ones closest to the user's vector.

When a user browses products, the API checks if they have a stored vector. If they do, Elasticsearch uses script_score to rank products by cosine similarity:

script: {
  source: "cosineSimilarity(params.user_vector, 'product_vector') + 1.0",
  params: { user_vector: userVector }
}

The + 1.0 shifts scores to positive range since cosine similarity can be negative.
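
For reference, a full query of that shape through the Python Elasticsearch client might look like this, with the index and field names assumed as above:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster
user_vector = es.get(index="user_vectors", id=12345)["_source"]["user_vector"]  # stored 384-dim vector

resp = es.search(
    index="products",
    size=20,
    query={
        "script_score": {
            "query": {"match_all": {}},  # or the user's current search/filter query
            "script": {
                "source": "cosineSimilarity(params.user_vector, 'product_vector') + 1.0",
                "params": {"user_vector": user_vector},
            },
        }
    },
)
hits = [h["_source"] for h in resp["hits"]["hits"]]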

If the user has no vector yet (new user, not enough interactions), it falls back to default sorting — popularity score and recency. Same goes if they explicitly sort by price.

The result: logged-in users with interaction history get a personalized feed. Everyone else still gets a sensible default. No hard-coded limits on recommendations — it works with the existing pagination, just reordered by relevance to that user.

Results & Learnings

I'll be honest — I'm not a data scientist. This was my first time building anything like this. I just knew Algolia was too expensive and figured there had to be a way to do it myself.

Turns out there was.

I self-hosted everything — Elasticsearch, the PyTorch model, the SQS consumers. No managed ML services, no third-party recommendation APIs. Just my own infrastructure.

An unexpected bonus: latency dropped. When everything runs on the same private network, there's no round-trip to external APIs. My app server talks to Elasticsearch over the local subnet — way faster than hitting Algolia's servers.

Since launching the two-tower model:

  • 40% increase in app orders
  • 10% increase in user retention

Users are finding products they actually want, and they're coming back more often.

What's next

The model works, but there's room to improve:

  • More events — adding product_favorited, product_shared, and product_purchased to capture stronger intent signals
  • Product labels — tagging products with attributes like "vintage", "handmade", "streetwear" and using those labels to fine-tune the model

Takeaway

You don't need a machine learning team to build personalized recommendations. The two-tower architecture is well-documented, PyTorch is approachable, and tools like Elasticsearch and SQS handle the infrastructure. If your recommendation costs are eating into your margins, it might be worth building your own.

If you've built something similar or have suggestions to improve this approach, I'd love to hear from you.

An Overview of Schemas and Data Modelling in Power BI

2026-02-03 03:04:38

Data modeling in Power BI refers to how data tables are organized, connected through relationships, and optimized for calculations and reporting.
In simple language, a data model is the engine in Power BI. Efficient data modeling transforms raw, messy data into a high-performance analytical structure. This article explains key concepts such as fact tables, dimension tables, star schema, snowflake schema, relationships, and why good modelling is critical for performance and accurate reporting.

Fact and Dimension Tables
Before choosing a schema, you must categorize your data into two distinct table types.

Fact Tables
A fact table contains quantitative, measurable data related to business events, such as Sales, Orders, Revenue, and Discount.
Fact tables usually contain millions of rows, many foreign keys, and numeric columns meant for aggregation (Sum, Average).
Example: A Sales table containing OrderID, DateKey, ProductKey, and SalesAmount.
A fact table answers the questions “how much?” or “how many?”

Dimension Tables
A dimension table holds descriptive attributes. It provides the context for the facts: the who, what, where, and when.
Dimension tables are typically smaller, contain text or categorical data used for filtering and grouping, and keep one record per entity.
Example: A Product table containing ProductName, Category, and Color.
Dimension tables answer the questions “who,” “what,” “where,” and “when.”

Types of Schemas in Power BI

The Star Schema
The Star Schema is the recommended modeling pattern for Power BI. In this setup, a central Fact table is surrounded by multiple Dimension tables.

Dimension tables are not connected to each other.
It is the best option for Power BI due to the simple relationships, ease of understanding and maintenance, and its compatibility with DAX calculations.

Snowflake Schema
A snowflake schema occurs when a dimension table is normalized, meaning one dimension table connects to another dimension table rather than directly to the fact table.
Example: A Sales (fact) table connects to a Product (dimension) table, which then connects to a separate Category (dimension) table.

While this saves a small amount of storage by reducing redundancy, it creates complex relationship chains. These force Power BI to work harder to filter data, often leading to slower report performance. In short, snowflake schemas are not ideal for Power BI reporting.

Managing Relationships
Relationships define how data flows between tables. In Power BI, you must pay attention to how relationships are configured, because correct relationships ensure that filters and slicers behave correctly across visuals.
These are the relationship types in Power BI:

  • One-to-Many
  • Many-to-One
  • One-to-One

Why Modeling is Critical
A poor model isn't just a technical annoyance; it leads to broken data and unwanted errors.
Why Good Data Modelling Is Critical:

  • Performance: A well-designed model reduces memory usage, improves query speed, and makes visuals load faster.
  • Accurate Reporting: Good modelling ensures filters propagate correctly and measures calculate as expected.
  • Simpler DAX Measures: Clean models lead to shorter DAX formulas, easier debugging, and fewer calculation errors.
  • Scalability and Maintenance: A good model is easy to extend with new data and easier for other analysts to understand. It supports long-term reporting needs.

Best Practices for Power BI Data Modelling

  • Use star schema whenever possible
  • Separate facts and dimensions clearly
  • Avoid bi-directional relationships unless necessary
  • Create a dedicated Date dimension
  • Remove unused columns
  • Keep column names business-friendly
  • Validate totals against source systems

Conclusion
Effective Power BI reporting is built on data modeling. Understanding fact tables, dimension tables, relationships, star schemas, and snowflake schemas allows you to construct models that are fast, accurate, and easy to manage. In Power BI, good visualizations start with good models. Performance, dependability, and confidence in your reports are all enhanced by devoting time to accurate data modeling.

Understanding Reactive Context in Angular (v21)

2026-02-03 03:02:46

In the Angular Signal system, a Reactive Context is a special environment where Angular’s "dependency tracking engine" is turned on. It is the invisible bubble that allows Signals to know who is watching them.

If you read a Signal inside a reactive context, you are automatically "subscribed" to it. If you read a Signal outside of one, it’s just a normal function call with no future updates.

What creates a Reactive Context?

Only a few specific places in Angular create this special environment:

  • template: When you call mySignal() in your HTML, the template creates a reactive context.
  • computed(): The function inside a computed signal is a reactive context.
  • effect(): The function body of an effect is a reactive context.

Things to remember when working in a reactive context

  • It's OK to read signals inside an effect:
constructor() {
  effect(() => {
    const x = mySignal(); // ✅ this is fine: reading here subscribes the effect to mySignal
  });
}

Don't do this

  • ❌ Causing side effects.
  • ❌ Creating or updating signals inside a reactive context
readonly x = signal(0);
readonly y = signal(0);

readonly c = computed(() => {
  const total = this.x() + this.y();
  return { sum: signal(total) }; // ❌ BAD practice: creating a signal inside computed()
});
readonly x = signal(0);

readonly c = computed(() => {
  this.x.update(v => v + 1); // ❌ BAD practice: writing to a signal inside computed()
});
  • ❌ Calling an API that could update signals
  • ❌ Changing the DOM directly
  • Avoid conditional effects like:
    • ❌ Creating effect inside effect. It’s technically possible but creates a nightmare for memory management and logic flow.
    • ❌ Creating signal inside effect
effect(() => {
  if (this.x() > 10) {
    effect(() => console.log(this.x())); // ❌ BAD practice
  }
});
  • Effects are meant to trigger side effects, not changes to other signals

Signal change batching in Angular

It refers to the process where Angular groups multiple signal changes together and processes them at once rather than individually. This helps improve performance by reducing unnecessary re-execution of effects and re-computations. Because of batching, you should not change signals inside effects.

Angular lets your code run synchronously. It tracks all the signals you modify, and when the current task completes, it schedules a pass that checks for changes and decides which effects to run.

  • If you modify a signal inside one of those effects, you can think of it as happening too late. This can cause cyclic dependencies or infinite loops.

  • So effects should focus only on things like logging and rendering, which do not cause further signal changes.

  • It is good practice not to trigger business logic inside effects, because you can never know which services or APIs in your business logic may cause signals to change.

So you might ask yourself, what should I do with effects?

Effects should mostly be used for things like logging, rendering, and updating local storage. In fact, when you think about rendering, this is exactly what Angular does: every one of your templates runs like an effect, so when you change signals, the template is re-evaluated much like any other code running inside an effect.

This is how the mechanism of change detection works without Zone.js.

Thanks for reading. Happy coding!!!

References: