The Practical Developer

A constructive and inclusive social network for software developers.

How infrastructure outages in 2025 changed how businesses think about servers

2026-01-18 11:43:38

In 2025, many companies learned a practical lesson about infrastructure reliability.
Not from whitepapers or architectural diagrams, but from real outages that directly affected daily operations.

What stood out was not that failures happened — outages have always existed — but how broadly and deeply their impact was felt, even by teams that believed their setups were “safe enough.”

⸻⸻⸻

When a single region becomes a business problem

One of the most discussed incidents in 2025 was a prolonged regional outage at Amazon Web Services.
For some teams, this meant temporary inconvenience. For others, it meant hours of unavailable internal systems: CRMs, billing tools, internal dashboards, and operational services.

What surprised many companies was that they did not necessarily host workloads directly in the affected region. Dependencies told a different story. Third-party APIs, SaaS tools, and background services built on the same infrastructure became unavailable, creating a chain reaction.

For an online business, even a few hours of full unavailability can mean a meaningful share of daily revenue lost. But the bigger cost often appeared later: delayed processes, manual recovery work, and pressure on support teams.

⸻⸻⸻

When servers are fine but the network isn’t

Later in the year, a large-scale incident at Cloudflare highlighted a different weak point.
Servers were running. Data was intact. But network access degraded.

From a user perspective, the difference did not matter. Pages failed to load, APIs returned errors, and customer-facing services became unreliable. Even teams with redundant server setups found themselves affected because the bottleneck was outside their compute layer.

This incident changed how many engineers and managers talked about reliability. “The servers are up” stopped being a reassuring statement if the network path to those servers could fail in unexpected ways.

⸻⸻⸻

The quiet accumulation of “minor” failures

Not every problem in 2025 made headlines. In fact, most did not.

Many teams experienced:
• intermittent routing degradation,
• partial regional unavailability,
• short network interruptions that did not trigger incident alerts.

Individually, these issues were easy to dismiss. Collectively, they created friction. Engineers spent more time troubleshooting. Deployments slowed down. Systems became harder to reason about.

Over time, these “minor” failures affected velocity just as much as a single large outage.

⸻⸻⸻

What changed in how businesses evaluate infrastructure

By the end of 2025, the conversation inside many companies had shifted.

Instead of asking “Which provider is the biggest?”, teams started asking:
• How quickly can we recover if a region fails?
• What dependencies exist outside our direct control?
• Can traffic or workloads be moved without a full outage?
• How predictable is the infrastructure under stress?

This shift mattered. Reliability stopped being a checkbox and became an architectural property that had to be designed, not assumed.

⸻⸻⸻

Why some teams reconsidered VPS-based setups

An interesting side effect of this shift was renewed interest in VPS infrastructure — not as a “cheap alternative,” but as a way to regain architectural control.

For certain workloads, VPS deployments allowed teams to:
• spread services across multiple regions,
• reduce reliance on a single platform ecosystem,
• make network behavior more explicit and testable.

Some teams began combining hyperscalers with VPS providers, treating infrastructure diversity as a form of risk management rather than technical debt. Providers commonly discussed in this context included Hetzner, Vultr, Linode, and justhost.ru, each used for different regional or operational needs.

⸻⸻⸻

A practical takeaway from 2025

The main lesson from 2025 was not that clouds are unreliable.
It was that reliability cannot be outsourced entirely.

Infrastructure failures became a management issue as much as a technical one. Teams that treated outages as architectural scenarios — and planned for them explicitly — recovered faster and with fewer side effects.

By contrast, teams that relied on reputation or scale alone often discovered their risk surface only after something broke.

⸻⸻⸻

Final thought

Infrastructure in 2025 stopped being background noise.
It became a variable that businesses actively model, question, and design around.

Not because outages suddenly appeared, but because their real cost became impossible to ignore.

On the 'Joy of Creating' in the Age of AI

2026-01-18 11:38:43

Why we still build, even when machines can do it faster.

I love scribbling down thoughts. There is a specific joy in taking difficult, complex concepts and breaking them down into soft, digestible pieces that anyone can understand.

I also enjoy drawing. Although my skills are strictly "programmer art" level, I believe a single clean diagram is often far more powerful than a hundred words.

And above all, I love coding. I truly cherish the process of creating something that actually works right at my fingertips.

I first dipped my toes into the massive wave of Generative AI back in 2023, right when Stable Diffusion was starting to take off. I had poked around game AI before that, but looking back, the shift that began then seems to be shaking the entire IT ecosystem to its roots.

At the time, I felt a strange thirst when looking at models trained primarily on Western art styles. So, I spent time painstakingly crafting datasets and fell deeply into the fun of teaching AI the brushstrokes of Shin Yun-bok, a painter from the Joseon Dynasty.

Then came a period where I felt infinitely small in front of the high-quality images pouring out so effortlessly. What sustained me through that overwhelming feeling was the realization that "teaching a new style and setting the direction" was still a human task.

A similar sense of helplessness arrived with writing. Watching models evolve from ChatGPT and Gemini, I witnessed AI’s writing skills quickly surpass my own. However, I realized that deciding what to write, bearing the weight of a piece published under my name, and finally putting the period at the end of the sentence is something only "I" can do. That sense of responsibility is something AI cannot take away.

Is coding any different? Although I still do a lot of the typing myself, my jaw drops at the speed of evolution every time a new model is released.

When it comes to writing or drawing, AI is already the superior player. So, a collaborative process has settled into my daily life: I throw out a rough draft, the AI polishes it smoothly, and I do the final review and adjustments. As the models get smarter, the parts I touch are becoming fewer and fewer. I have a hunch that coding will follow this exact process before long.

At an AI Workshop I attended yesterday, someone asked me a heavy question:

"What on earth should humans do in the future?"

As is the case now, even more things will be automated by AI in the future.

However, people like me will still want to make things ourselves. Even if we borrow the power of a tool as potent as AI, the starting point and the intention of that creation will still remain with the "person."

There will be a clear distinction between what AI generates because it "wants" to (if ever), and what a human creates with specific intent. The value will likely be assessed differently, too.

Isn't it similar to the variety of choices we have when we need a chair?

Sure, you can pay money and buy a comfortable, finished product. But some enjoy the process of buying parts from IKEA and assembling them; others buy the tools and cut the lumber to build from scratch; and some even choose the primitive labor of carving the wood by hand, one careful cut at a time.

Just as we pay different prices for factory-made goods and artisanal handicrafts today, I believe the "us" of the future will continue to live on, assigning different meanings based on the "process" and "values" embedded in the result.

🧩 Beginner-Friendly Guide 'Largest Magic Square' – LeetCode 1895 (C++, Python, JavaScript)

2026-01-18 11:37:48

Searching for patterns in a grid is a classic challenge in algorithmic thinking. In this problem, we are tasked with finding the largest possible sub-grid where every row, column, and both main diagonals add up to the exact same value.

You're given:

  • An integer grid.
  • A definition of a Magic Square: a sub-grid where the sum of every row, every column, and both diagonals are equal.

Your goal:

  • Find and return the maximum side length of any magic square hidden within the grid.

Example 1:

Input: grid = [[7,1,4,5,6],[2,5,1,6,4],[1,5,4,3,2],[1,2,7,3,4]]
Output: 3
Explanation: The largest magic square has a size of 3.
Every row sum, column sum, and diagonal sum of this magic square is equal to 12.

  • Row sums: 5+1+6 = 5+4+3 = 2+7+3 = 12
  • Column sums: 5+5+2 = 1+4+7 = 6+3+3 = 12
  • Diagonal sums: 5+4+3 = 6+4+2 = 12

Example 2:

Input: grid = [[5,1,3,1],[9,3,3,1],[1,3,3,8]]
Output: 2

Constraints:

  • m == grid.length
  • n == grid[i].length
  • 1 <= m, n <= 50
  • 1 <= grid[i][j] <= 10^6

Intuition

The brute-force approach would be to check every possible square of every size and manually sum up its rows, columns, and diagonals. However, that involves a lot of repeated work. To make this efficient, we use a technique called Prefix Sums.

Think of a prefix sum like a "running total." If you know the total sum of a row from the start up to index 10, and the total sum up to index 5, you can find the sum of the elements between 5 and 10 instantly by subtracting the two totals.
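
To make the "running total" idea concrete, here is a tiny one-dimensional sketch in TypeScript. The array and indices are arbitrary and only illustrate the single-subtraction lookup; the solutions below extend the same idea to rows, columns, and both diagonals.

// prefix[i] holds the sum of nums[0..i-1], so any range sum is one subtraction.
const nums = [7, 1, 4, 5, 6, 2, 5, 1, 6, 4, 1, 5];
const prefix = [0];
for (const x of nums) prefix.push(prefix[prefix.length - 1] + x);

// Sum of nums[5..10] inclusive, without re-adding the six elements:
const sumFiveToTen = prefix[11] - prefix[5]; // 2 + 5 + 1 + 6 + 4 + 1 = 19
console.log(sumFiveToTen);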

In this solution, we pre-calculate four types of running totals for every cell:

  1. Horizontal: Sum of elements in that row.
  2. Vertical: Sum of elements in that column.
  3. Main Diagonal: Sum of elements moving from top-left to bottom-right.
  4. Anti-Diagonal: Sum of elements moving from top-right to bottom-left.

Once we have these tables, checking if a square is "magic" becomes a matter of simple subtraction rather than looping through every single cell. We start searching from the largest possible side length (the minimum of m and n) and work our way down. The first one that satisfies the magic square condition is our answer.

Code Blocks

C++

class Solution {
public:
    bool isMagic(vector<vector<array<int,4>>> const & prefixSum, int r, int c, int sz) {
        // Calculate the main diagonal sum
        int targetSum = prefixSum[r+sz][c+sz][2] - prefixSum[r][c][2]; 

        // Check the anti-diagonal sum
        if (targetSum != prefixSum[r+sz][c+1][3] - prefixSum[r][c+sz+1][3]) {
            return false;
        }

        // Check all row sums within the square
        for (int j = r; j < r + sz; j++) {
            if (targetSum != prefixSum[j+1][c+sz][0] - prefixSum[j+1][c][0]) {
                return false;
            }
        }

        // Check all column sums within the square
        for (int j = c; j < c + sz; j++) {
            if (targetSum != prefixSum[r+sz][j+1][1] - prefixSum[r][j+1][1]) {
                return false;
            }
        }

        return true;
    }

    int largestMagicSquare(vector<vector<int>>& grid) {
        int m = grid.size(), n = grid[0].size();
        // prefixSum stores: [row, col, diag, anti-diag] sums
        vector<vector<array<int,4>>> prefixSum(m + 1, vector<array<int,4>>(n + 2));

        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                int val = grid[i-1][j-1];
                prefixSum[i][j][0] = prefixSum[i][j-1][0] + val; // Row
                prefixSum[i][j][1] = prefixSum[i-1][j][1] + val; // Col
                prefixSum[i][j][2] = prefixSum[i-1][j-1][2] + val; // Diag
                prefixSum[i][j][3] = prefixSum[i-1][j+1][3] + val; // Anti-Diag
            }
        }

        for (int k = min(m, n); k >= 2; k--) {
            for (int i = 0; i <= m - k; i++) {
                for (int j = 0; j <= n - k; j++) {
                    if (isMagic(prefixSum, i, j, k)) return k;
                }
            }
        }
        return 1;
    }
};

Python

class Solution:
    def largestMagicSquare(self, grid: List[List[int]]) -> int:
        m, n = len(grid), len(grid[0])

        # prefixSum[i][j] stores [row, col, diag, anti_diag]
        # We add padding to handle boundary conditions easily
        pref = [[[0] * 4 for _ in range(n + 2)] for _ in range(m + 1)]

        for r in range(1, m + 1):
            for c in range(1, n + 1):
                val = grid[r-1][c-1]
                pref[r][c][0] = pref[r][c-1][0] + val
                pref[r][c][1] = pref[r-1][c][1] + val
                pref[r][c][2] = pref[r-1][c-1][2] + val
                pref[r][c][3] = pref[r-1][c+1][3] + val

        def is_magic(r, c, k):
            # Target sum from the main diagonal
            target = pref[r+k][c+k][2] - pref[r][c][2]

            # Check anti-diagonal
            if target != pref[r+k][c+1][3] - pref[r][c+k+1][3]:
                return False

            # Check all rows
            for i in range(r, r + k):
                if pref[i+1][c+k][0] - pref[i+1][c][0] != target:
                    return False

            # Check all columns
            for j in range(c, c + k):
                if pref[r+k][j+1][1] - pref[r][j+1][1] != target:
                    return False

            return True

        # Check from largest possible side length downwards
        for k in range(min(m, n), 1, -1):
            for i in range(m - k + 1):
                for j in range(n - k + 1):
                    if is_magic(i, j, k):
                        return k
        return 1

JavaScript

/**
 * @param {number[][]} grid
 * @return {number}
 */
var largestMagicSquare = function(grid) {
    const m = grid.length;
    const n = grid[0].length;

    // Create prefix sum 3D array: [m+1][n+2][4]
    const pref = Array.from({ length: m + 1 }, () => 
        Array.from({ length: n + 2 }, () => new Int32Array(4))
    );

    for (let r = 1; r <= m; r++) {
        for (let c = 1; c <= n; c++) {
            const val = grid[r-1][c-1];
            pref[r][c][0] = pref[r][c-1][0] + val;     // Row
            pref[r][c][1] = pref[r-1][c][1] + val;     // Col
            pref[r][c][2] = pref[r-1][c-1][2] + val;   // Diag
            pref[r][c][3] = pref[r-1][c+1][3] + val;   // Anti-Diag
        }
    }

    const isMagic = (r, c, k) => {
        const target = pref[r+k][c+k][2] - pref[r][c][2];

        if (target !== pref[r+k][c+1][3] - pref[r][c+k+1][3]) return false;

        for (let i = r; i < r + k; i++) {
            if (pref[i+1][c+k][0] - pref[i+1][c][0] !== target) return false;
        }

        for (let j = c; j < c + k; j++) {
            if (pref[r+k][j+1][1] - pref[r][j+1][1] !== target) return false;
        }

        return true;
    };

    for (let k = Math.min(m, n); k >= 2; k--) {
        for (let i = 0; i <= m - k; i++) {
            for (let j = 0; j <= n - k; j++) {
                if (isMagic(i, j, k)) return k;
            }
        }
    }

    return 1;
};

Key Takeaways

  • Prefix Sums in 2D: This technique is a powerhouse for range-sum queries in grids, reducing the cost of checking each row, column, or diagonal of a sub-grid from O(k) to O(1).
  • Space-Time Tradeoff: By using extra memory to store our prefixSum tables, we significantly speed up the validation process for each potential square.
  • Greedy Optimization: Searching from the largest possible side length down to 2 allows us to return the answer immediately upon finding a match, saving unnecessary computations.

Final Thoughts

As a mentor, I often see students struggle with grid problems because they try to "count" everything manually. Learning to use prefix sums is like leveling up your vision in game development or data processing: you stop seeing individual pixels and start seeing regions. This problem is excellent practice for interviews at companies like Google or Amazon, where multidimensional array manipulation and optimization are frequently tested. In the real world, these concepts are the foundation for image processing filters and spatial data analysis.

Expense Buddy: Local-first expense tracking with GitHub sync

2026-01-18 11:37:22

I’ve tried a bunch of expense trackers over the years, and I kept running into the same problems: slow flows, cluttered screens, and a nagging feeling that I was handing over more data than I should. So I built my own.

Expense Buddy is a privacy‑first, local‑first expense tracker that stays out of your way. It’s built with React Native (Expo), keeps everything on‑device by default, and gives you optional GitHub sync if you want a personal backup you control.

Why Expense Buddy exists

Most expense trackers either felt slow, intrusive, or vague about where my data lived. I wanted something simple enough to open daily, fast enough to never frustrate me, and honest about storage. Expense Buddy is the result—focused on clarity, speed, and ownership:

  • Local-first by default (your data stays on-device)
  • Private by design (no analytics, ads, or data selling)
  • Fast on large datasets with a lightweight UI
  • In your control with optional GitHub sync to your own repo

Feature highlights

  • Daily expense tracking with category and payment method tags
  • Edit expenses anytime with a full history view to revisit and update past entries
  • Smart GitHub sync with incremental, differential, and batched commits
  • Analytics that go deeper: category and payment method breakdowns, instrument‑level splits, plus a spending trend chart
  • Custom categories with colors and icons
  • Saved payment instruments (cards and UPI IDs)
  • In‑app update notes with a “What’s New” sheet after updates
  • Issue reporting directly from Settings
  • Light and dark mode with theme‑aware styling

Dashboard

The dashboard is my “daily check‑in” screen. It gives me a quick view of recent spending and a simple 7‑day trend. I made the graph tappable because I kept wanting to jump straight into that day’s entries.

Add expense

Adding an expense should be boring—in a good way. It’s a one‑screen flow with quick category and payment method picks so I can log something in a few seconds and move on.

Analytics you can trust

I wanted answers, not charts for the sake of charts. The analytics tab helps me see where money goes by category, payment method, and saved instrument, then zoom out with a spending trend view. I also added multiple time windows so I can compare “this week” vs. “the last 3 months” without leaving the screen.

History and edits

I mess up entries all the time. The history view lets me browse past entries, open any expense, and fix it in place—no weird edit mode, no hunting.

Settings that stay out of your way

Settings are intentionally boring. You can set a default payment method, add custom categories (the app ships with 8 defaults you can edit), manage saved payment instruments for deeper analytics, and enable GitHub sync to keep everything in sync across devices. Auto‑sync can run on change or on app launch.

How GitHub sync works (optional)

I designed GitHub sync to be safe, predictable, and fully optional. Your data stays local unless you explicitly turn this on.

  • Daily CSV files: expenses are stored as expenses-YYYY-MM-DD.csv in your repo
  • Merge‑first workflow: fetch → merge → push, to avoid accidental data loss (see the sketch after this list)
  • Conflict handling: timestamp‑based resolution, with true conflict prompts only when needed
  • Differential uploads: only changed files are uploaded, and changes are batched into a single commit
  • Settings sync (optional): categories and saved instruments can be synced via settings.json
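
To make the flow above concrete, here is a minimal sketch of one merge-first pass, written in TypeScript. The helpers fetchRemoteCsv and pushCsv are hypothetical stand-ins for the GitHub repository calls, and the row fields are assumptions rather than Expense Buddy's actual schema; only the fetch → merge → push shape and the timestamp-based resolution mirror the behaviour described above.

interface ExpenseRow {
  id: string;        // stable unique id per expense (assumed)
  updatedAt: number; // last-modified timestamp, used for conflict resolution
  csvLine: string;   // serialized CSV row
}

// Hypothetical helpers standing in for the actual GitHub calls.
declare function fetchRemoteCsv(path: string): Promise<ExpenseRow[]>;
declare function pushCsv(path: string, rows: ExpenseRow[], message: string): Promise<void>;

async function syncDay(date: string, localRows: ExpenseRow[]): Promise<void> {
  const path = `expenses-${date}.csv`;           // daily file naming from the post
  const remoteRows = await fetchRemoteCsv(path); // fetch first

  // Merge: keep the newer copy of each row, never silently drop the other side.
  const merged = new Map<string, ExpenseRow>();
  for (const row of [...remoteRows, ...localRows]) {
    const existing = merged.get(row.id);
    if (!existing || row.updatedAt > existing.updatedAt) {
      merged.set(row.id, row);
    }
  }

  // Push only when the merged result differs from what is already in the repo.
  const mergedRows = [...merged.values()];
  const changed =
    mergedRows.length !== remoteRows.length ||
    mergedRows.some(r => remoteRows.find(x => x.id === r.id)?.updatedAt !== r.updatedAt);
  if (changed) {
    await pushCsv(path, mergedRows, `sync: ${path}`);
  }
}

Merging before pushing means a failed or interrupted push never overwrites remote history with a partial local view, which is what makes the workflow safe to run automatically on change or on app launch.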

Built for speed

I’m allergic to janky lists, so performance was a first‑class goal. Virtualized lists, memoized components, and a lightweight state layer keep things snappy even with long histories. The UI stays minimal so logging an expense takes seconds, not minutes.
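
For readers who have not used React Native, the usual shape of that combination looks roughly like this. It is a generic sketch, not Expense Buddy's actual components; the Expense type and names are invented for illustration.

import React, { memo } from "react";
import { FlatList, Text, View } from "react-native";

type Expense = { id: string; title: string; amount: number };

// Memoized row: it re-renders only when its own expense changes.
const ExpenseRow = memo(({ item }: { item: Expense }) => (
  <View>
    <Text>{item.title}</Text>
    <Text>{item.amount.toFixed(2)}</Text>
  </View>
));

// FlatList virtualizes the list: only rows near the viewport stay mounted.
export function ExpenseList({ expenses }: { expenses: Expense[] }) {
  return (
    <FlatList
      data={expenses}
      keyExtractor={e => e.id}
      renderItem={({ item }) => <ExpenseRow item={item} />}
    />
  );
}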

Privacy notes

  • No required external accounts to use the app
  • GitHub sync is opt‑in and stores files in your own repository (daily CSVs)
  • No selling of data

If you want to dig deeper, here are the relevant links:

Get access

Expense Buddy is currently in internal testing on Google Play. To get access, DM @sudokaii on X (Twitter) or send me an email.

Chrome OS-Inspired Portfolio: Where Beauty Meets Functionality

2026-01-18 11:26:09

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

Chrome OS-Inspired Portfolio

About Me

Hey there! I'm Depa Panjie, a Software Quality Assurance Engineer with 7+ years of breaking things professionally (and then fixing them).

Picture this: You're a QA Engineer who's tired of boring, static portfolios. You think, "What if my portfolio was an entire operating system?"

Crazy? Maybe. Awesome? Absolutely.

So I teamed up with Antigravity (powered by Gemini 3 Pro) and said: "Let's build Chrome OS... but make it a portfolio."

What happened next was pure magic.

Portfolio

Note: The embedded preview below has limited screen size. For the full desktop experience with draggable windows, multiple apps, and all interactive features, please click the Live Demo link below to open it in a new tab! The portfolio is optimized for screens wider than 768px.

What You're About to Experience

This isn't your typical portfolio. This is a fully functional Chrome OS-inspired desktop that runs entirely in your browser:

  • 🪟 Drag, minimize, maximize windows like a real OS
  • 📁 Files App - Explore my bio with a stunning glassmorphic design
  • 🌐 Browser App - Browse my projects in a Chrome-style browser with tabs
  • 💻 Terminal App - See my tech stack in an interactive CLI
  • 📄 Docs App - View my resume in Google Docs style
  • ✉️ Mail App - Contact me through a Gmail-inspired interface
  • 🌓 Dark Mode - Smooth theme switching with Material You colors
  • 🎯 Interactive Tour - Guided onboarding that feels like a game tutorial
  • 📱 Smart Mobile Detection - Beautiful blocking screen for mobile users

Pro tip: Try opening multiple apps, dragging them around, and toggling dark mode. It's oddly satisfying.

How I Built It

The Dream Team

  • Me: "I want a Chrome OS portfolio"
  • Antigravity + Gemini 3 Pro: "Say no more, fam"

The Tech Stack

Frontend Magic:
├── React 18 + TypeScript (Type safety? Yes please!)
├── Vite (Lightning-fast builds ⚡)
├── Pure CSS (No frameworks, just vibes)
└── Lucide React (Beautiful icons)

AI Superpowers:
├── Antigravity (The AI pair programmer)
└── Gemini 3 Pro (The brain)

Deployment:
├── Docker (Multi-stage builds)
├── Nginx (Serving with style)
├── Google Cloud Run (Serverless magic)
└── Cloud Build (Auto-deploy from GitHub)

The AI-Assisted Development Process

Phase 1: The Foundation

Me: "Let's build a window management system"

Gemini: "Here's a React Context-based architecture with z-index management, drag handlers, and state persistence"

Result: A fully functional window manager in one session!
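
To give a sense of what that architecture can look like, here is a stripped-down sketch of a Context-based window manager in TypeScript. It is illustrative only; the portfolio's actual types, persistence, and drag handlers will differ.

import React, { createContext, useContext, useState, useCallback } from "react";

interface WindowState {
  id: string;
  zIndex: number;
  minimized: boolean;
  position: { x: number; y: number };
}

interface WindowManager {
  windows: WindowState[];
  open: (id: string) => void;
  focus: (id: string) => void; // bring to front by bumping zIndex
  move: (id: string, x: number, y: number) => void;
}

const WindowContext = createContext<WindowManager | null>(null);

export function WindowProvider({ children }: { children: React.ReactNode }) {
  const [windows, setWindows] = useState<WindowState[]>([]);
  const [topZ, setTopZ] = useState(1);

  // Opening a window registers it once and places it above everything else.
  const open = useCallback((id: string) => {
    setWindows(ws =>
      ws.some(w => w.id === id)
        ? ws
        : [...ws, { id, zIndex: topZ + 1, minimized: false, position: { x: 80, y: 80 } }]
    );
    setTopZ(z => z + 1);
  }, [topZ]);

  // Focusing bumps the shared z-index counter so the window lands on top.
  const focus = useCallback((id: string) => {
    setTopZ(z => z + 1);
    setWindows(ws => ws.map(w => (w.id === id ? { ...w, zIndex: topZ + 1 } : w)));
  }, [topZ]);

  // Drag handlers call move with the pointer's current coordinates.
  const move = useCallback((id: string, x: number, y: number) => {
    setWindows(ws => ws.map(w => (w.id === id ? { ...w, position: { x, y } } : w)));
  }, []);

  return (
    <WindowContext.Provider value={{ windows, open, focus, move }}>
      {children}
    </WindowContext.Provider>
  );
}

export const useWindows = () => {
  const ctx = useContext(WindowContext);
  if (!ctx) throw new Error("useWindows must be used inside WindowProvider");
  return ctx;
};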

Phase 2: The Apps

We built 5 complete applications:

  • Files app with glassmorphic cards
  • Browser with tab management
  • Terminal with command history
  • Docs with Google-style toolbar
  • Mail with form validation

Each app was crafted with Gemini suggesting optimal patterns and best practices.

Phase 3: The Polish

Me: "The dark mode text is hard to read"

Gemini: "Let's use a blue-tinted glassmorphic design with proper contrast ratios"

Result: That stunning "Who am I?" card you see in the Files app!

Phase 4: The Tour System

Me: "Users need guidance"

Gemini: "Let's integrate Driver.js with event-driven panel management"

Result: A complete tour loop that even closes panels automatically!

Phase 5: Mobile Strategy

Me: "Mobile responsive is hard for a desktop OS"

Gemini: "Let's detect mobile and show a beautiful blocking screen instead"

Result: A polite, well-designed message that maintains the desktop experience integrity!

Phase 6: Cloud Deployment

Me: "How do we deploy this?"

Gemini: "Here's a Dockerfile, nginx config, and Cloud Run setup with CI/CD"

Result: Push to GitHub → Auto-deploy to Cloud Run!

The AI Advantage

Working with Antigravity and Gemini was like having a senior developer who:

  • Never gets tired
  • Suggests best practices instantly
  • Catches bugs before they happen
  • Explains complex concepts clearly
  • Iterates at the speed of thought

Real Example:

When I said "the Files app needs better dark mode colors," Gemini didn't just change colors; it suggested an entire design system with:

  • Glassmorphic backgrounds
  • Proper contrast ratios
  • Backdrop blur effects
  • Consistent spacing
  • Accessible color combinations

That's not just coding; that's design thinking powered by AI!

What I'm Most Proud Of

1. The "It Just Works" Factor

Everything is functional. Not just "looks functional", but actually functional. You can:

  • Open 5 apps simultaneously
  • Drag them anywhere
  • Minimize and restore them
  • Toggle dark mode mid-session
  • Take a guided tour
  • Contact me through the mail app

2. The Glassmorphic Design
That "Who am I?" card in the Files app? Pure art:

background: rgba(138, 180, 248, 0.08);
border: 2px solid rgba(138, 180, 248, 0.2);
backdrop-filter: blur(10px);

It glows in dark mode like a Chrome OS dream!

3. The Smart Tour System
The Driver.js integration is chef's kiss (see the sketch after this list):

  • Auto-starts on first visit
  • Closes panels intelligently
  • Completes a full loop (Login → Desktop → Logout)
  • Skips on mobile devices
  • Can be restarted from Quick Settings
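
The wiring for that kind of tour is roughly the following, assuming driver.js v1.x. The selectors, step copy, and first-visit key are made up for this sketch, and the real panel-closing logic lives in the portfolio's own event handlers.

import { driver } from "driver.js";
import "driver.js/dist/driver.css";

// Hypothetical first-visit flag; the real app may persist this differently.
const TOUR_DONE_KEY = "tour-completed";

export function maybeStartTour() {
  if (localStorage.getItem(TOUR_DONE_KEY)) return; // auto-start only on the first visit

  const tour = driver({
    showProgress: true,
    steps: [
      { element: "#login", popover: { title: "Sign in", description: "The tour starts at the login screen." } },
      { element: "#shelf", popover: { title: "Desktop", description: "Open apps from the shelf and drag their windows around." } },
      { element: "#logout", popover: { title: "Logout", description: "The loop ends where it began." } },
    ],
    onDestroyed: () => localStorage.setItem(TOUR_DONE_KEY, "1"),
  });

  tour.drive();
}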

4. The Window Manager
Built from scratch with:

  • Drag and drop positioning
  • Z-index stacking
  • Focus management
  • Minimize/maximize animations
  • State persistence

5. The Deployment Pipeline

GitHub Push → Cloud Build → Container Registry → Cloud Run → Live!

Zero downtime. Automatic scaling. HTTPS by default. All configured with AI assistance!

6. The AI Collaboration

This project proves that AI isn't replacing developers; it's supercharging them. Gemini helped me:

  • Write cleaner code
  • Make better design decisions
  • Catch edge cases early
  • Optimize performance
  • Deploy professionally