2026-01-18 11:43:38
In 2025, many companies learned a practical lesson about infrastructure reliability.
Not from whitepapers or architectural diagrams, but from real outages that directly affected daily operations.
What stood out was not that failures happened — outages have always existed — but how broadly and deeply their impact was felt, even by teams that believed their setups were “safe enough.”
⸻⸻⸻
When a single region becomes a business problem
One of the most discussed incidents in 2025 was a prolonged regional outage at Amazon Web Services.
For some teams, this meant temporary inconvenience. For others, it meant hours of unavailable internal systems: CRMs, billing tools, internal dashboards, and operational services.
What surprised many companies was that they did not necessarily host workloads directly in the affected region. Dependencies told a different story. Third-party APIs, SaaS tools, and background services built on the same infrastructure became unavailable, creating a chain reaction.
For an online business, even a few hours of full unavailability can mean a meaningful share of daily revenue lost. But the bigger cost often appeared later: delayed processes, manual recovery work, and pressure on support teams.
⸻⸻⸻
When servers are fine but the network isn’t
Later in the year, a large-scale incident at Cloudflare highlighted a different weak point.
Servers were running. Data was intact. But network access degraded.
From a user perspective, the difference did not matter. Pages failed to load, APIs returned errors, and customer-facing services became unreliable. Even teams with redundant server setups found themselves affected because the bottleneck was outside their compute layer.
This incident changed how many engineers and managers talked about reliability. “The servers are up” stopped being a reassuring statement if the network path to those servers could fail in unexpected ways.
⸻⸻⸻
The quiet accumulation of “minor” failures
Not every problem in 2025 made headlines. In fact, most did not.
Many teams experienced:
• intermittent routing degradation,
• partial regional unavailability,
• short network interruptions that did not trigger incident alerts.
Individually, these issues were easy to dismiss. Collectively, they created friction. Engineers spent more time troubleshooting. Deployments slowed down. Systems became harder to reason about.
Over time, these “minor” failures affected velocity just as much as a single large outage.
⸻⸻⸻
What changed in how businesses evaluate infrastructure
By the end of 2025, the conversation inside many companies had shifted.
Instead of asking “Which provider is the biggest?”, teams started asking:
• How quickly can we recover if a region fails?
• What dependencies exist outside our direct control?
• Can traffic or workloads be moved without a full outage?
• How predictable is the infrastructure under stress?
This shift mattered. Reliability stopped being a checkbox and became an architectural property that had to be designed, not assumed.
⸻⸻⸻
Why some teams reconsidered VPS-based setups
An interesting side effect of this shift was renewed interest in VPS infrastructure — not as a “cheap alternative,” but as a way to regain architectural control.
For certain workloads, VPS deployments allowed teams to:
• spread services across multiple regions,
• reduce reliance on a single platform ecosystem,
• make network behavior more explicit and testable.
Some teams began combining hyperscalers with VPS providers, treating infrastructure diversity as a form of risk management rather than technical debt. Providers commonly discussed in this context included Hetzner, Vultr, Linode, and justhost.ru, each used for different regional or operational needs.
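To make the last point concrete, here is a minimal sketch of the kind of cross-provider availability probe such teams run. The endpoint URLs and the 3-second timeout are hypothetical placeholders; a real setup would add alerting, retries, and failover logic on top of data like this:

// probe.js: sketch of a multi-region availability probe (Node 18+, global fetch).
const endpoints = [
  { name: "hyperscaler-eu", url: "https://eu.example.com/health" }, // hypothetical
  { name: "vps-us", url: "https://us.example.net/health" },         // hypothetical
];

async function probe({ name, url }) {
  const started = Date.now();
  try {
    // Fail fast: a hanging health check is itself a signal.
    const res = await fetch(url, { signal: AbortSignal.timeout(3000) });
    return { name, ok: res.ok, ms: Date.now() - started };
  } catch (err) {
    return { name, ok: false, ms: Date.now() - started, error: String(err) };
  }
}

// Probe all regions in parallel and report status and latency.
Promise.all(endpoints.map(probe)).then((results) => {
  for (const r of results) {
    console.log(`${r.name}: ${r.ok ? "up" : "DOWN"} (${r.ms} ms)`);
  }
});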
⸻⸻⸻
A practical takeaway from 2025
The main lesson from 2025 was not that clouds are unreliable.
It was that reliability cannot be outsourced entirely.
Infrastructure failures became a management issue as much as a technical one. Teams that treated outages as architectural scenarios — and planned for them explicitly — recovered faster and with fewer side effects.
By contrast, teams that relied on reputation or scale alone often discovered their risk surface only after something broke.
⸻⸻⸻
Final thought
Infrastructure in 2025 stopped being background noise.
It became a variable that businesses actively model, question, and design around.
Not because outages suddenly appeared, but because their real cost became impossible to ignore.
2026-01-18 11:38:43
I love scribbling down thoughts. There is a specific joy in taking difficult, complex concepts and breaking them down into soft, digestible pieces that anyone can understand.
I also enjoy drawing. Although my skills are strictly "programmer art" level, I believe a single clean diagram is often far more powerful than a hundred words.
And above all, I love coding. I truly cherish the process of creating something that actually works right at my fingertips.
I first dipped my toes into the massive wave of Generative AI back in 2023, right when Stable Diffusion was starting to take off. I had poked around game AI before that, but looking back, the shift that began then seems to be shaking the entire IT ecosystem to its roots.
At the time, I felt a strange thirst when looking at models trained primarily on Western art styles. So, I spent time painstakingly crafting datasets and fell deeply into the fun of teaching AI the brushstrokes of Shin Yun-bok, a painter from the Joseon Dynasty.
Then came a period where I felt infinitely small in front of the high-quality images pouring out so effortlessly. What sustained me through that overwhelming feeling was the realization that "teaching a new style and setting the direction" was still a human task.
A similar sense of helplessness arrived with writing. Watching models like ChatGPT and Gemini evolve, I witnessed AI’s writing skills quickly surpass my own. However, I realized that deciding what to write, bearing the weight of a piece published under my name, and finally putting the period at the end of the sentence is something only "I" can do. That sense of responsibility is something AI cannot take away.
Is coding any different? Although I still do a lot of the typing myself, my jaw drops at the speed of evolution every time a new model is released.
When it comes to writing or drawing, AI is already the superior player. So, a collaborative process has settled into my daily life: I throw out a rough draft, the AI polishes it smoothly, and I do the final review and adjustments. As the models get smarter, the parts I touch are becoming fewer and fewer. I have a hunch that coding will follow this exact process before long.
At an AI Workshop I attended yesterday, someone asked me a heavy question:
"What on earth should humans do in the future?"
As is the case now, even more things will be automated by AI in the future.
However, people like me will still want to make things ourselves. Even if we borrow the power of a tool as potent as AI, the starting point and the intention of that creation will still remain with the "person."
There will be a clear distinction between what AI generates because it "wants" to (if ever), and what a human creates with specific intent. The value will likely be assessed differently, too.
Isn't it similar to the variety of choices we have when we need a chair?
Sure, you can pay money and buy a comfortable, finished product. But some enjoy the process of buying parts from IKEA and assembling them; others buy the tools and cut the lumber to build from scratch; and some even choose the primitive labor of carving the wood by hand, cut by cut.
Just as we pay different prices for factory-made goods and artisanal handicrafts today, I believe the "us" of the future will continue to live on, assigning different meanings based on the "process" and "values" embedded in the result.
2026-01-18 11:37:48
Searching for patterns in a grid is a classic challenge in algorithmic thinking. In this problem, we are tasked with finding the largest possible square sub-grid in which every row, every column, and both main diagonals add up to the exact same value.
Example 1:

Input: grid = [[7,1,4,5,6],[2,5,1,6,4],[1,5,4,3,2],[1,2,7,3,4]]
Output: 3
Explanation: The largest magic square has a size of 3.
Every row sum, column sum, and diagonal sum of this magic square is equal to 12.
Example 2:

Input: grid = [[5,1,3,1],[9,3,3,1],[1,3,3,8]]
Output: 2
The brute-force approach would be to check every possible square of every size and manually sum up its rows, columns, and diagonals. However, that involves a lot of repeated work. To make this efficient, we use a technique called Prefix Sums.
Think of a prefix sum like a "running total." If you know the total sum of a row from the start up to index 10, and the total sum up to index 5, you can find the sum of the elements between 5 and 10 instantly by subtracting the two totals.
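In one dimension, the "running total" trick looks like this (a tiny illustrative snippet, separate from the full solutions below):

// pref[i] holds the sum of nums[0..i-1]; pref[0] is 0.
const nums = [3, 1, 4, 1, 5, 9, 2, 6];
const pref = [0];
for (const x of nums) pref.push(pref[pref.length - 1] + x);

// Sum of nums[5..7] in O(1): subtract the two running totals.
console.log(pref[8] - pref[5]); // 9 + 2 + 6 = 17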
In this solution, we pre-calculate four types of running totals for every cell: row sums, column sums, main-diagonal sums, and anti-diagonal sums (the four entries you will see in the code below).
Once we have these tables, checking whether a square is "magic" becomes a matter of simple subtraction rather than looping through every single cell. We start searching from the largest possible side length (the minimum of the grid's height and width) and work our way down. The first size that satisfies the magic-square condition is our answer.
C++:
class Solution {
public:
    bool isMagic(vector<vector<array<int,4>>> const& prefixSum, int r, int c, int sz) {
        // Calculate the main diagonal sum
        int targetSum = prefixSum[r+sz][c+sz][2] - prefixSum[r][c][2];
        // Check the anti-diagonal sum
        if (targetSum != prefixSum[r+sz][c+1][3] - prefixSum[r][c+sz+1][3]) {
            return false;
        }
        // Check all row sums within the square
        for (int j = r; j < r + sz; j++) {
            if (targetSum != prefixSum[j+1][c+sz][0] - prefixSum[j+1][c][0]) {
                return false;
            }
        }
        // Check all column sums within the square
        for (int j = c; j < c + sz; j++) {
            if (targetSum != prefixSum[r+sz][j+1][1] - prefixSum[r][j+1][1]) {
                return false;
            }
        }
        return true;
    }

    int largestMagicSquare(vector<vector<int>>& grid) {
        int m = grid.size(), n = grid[0].size();
        // prefixSum stores: [row, col, diag, anti-diag] sums
        vector<vector<array<int,4>>> prefixSum(m + 1, vector<array<int,4>>(n + 2));
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                int val = grid[i-1][j-1];
                prefixSum[i][j][0] = prefixSum[i][j-1][0] + val; // Row
                prefixSum[i][j][1] = prefixSum[i-1][j][1] + val; // Col
                prefixSum[i][j][2] = prefixSum[i-1][j-1][2] + val; // Diag
                prefixSum[i][j][3] = prefixSum[i-1][j+1][3] + val; // Anti-Diag
            }
        }
        for (int k = min(m, n); k >= 2; k--) {
            for (int i = 0; i <= m - k; i++) {
                for (int j = 0; j <= n - k; j++) {
                    if (isMagic(prefixSum, i, j, k)) return k;
                }
            }
        }
        return 1;
    }
};
Python:
class Solution:
    def largestMagicSquare(self, grid: List[List[int]]) -> int:
        m, n = len(grid), len(grid[0])
        # prefixSum[i][j] stores [row, col, diag, anti_diag]
        # We add padding to handle boundary conditions easily
        pref = [[[0] * 4 for _ in range(n + 2)] for _ in range(m + 1)]
        for r in range(1, m + 1):
            for c in range(1, n + 1):
                val = grid[r-1][c-1]
                pref[r][c][0] = pref[r][c-1][0] + val
                pref[r][c][1] = pref[r-1][c][1] + val
                pref[r][c][2] = pref[r-1][c-1][2] + val
                pref[r][c][3] = pref[r-1][c+1][3] + val

        def is_magic(r, c, k):
            # Target sum from the main diagonal
            target = pref[r+k][c+k][2] - pref[r][c][2]
            # Check anti-diagonal
            if target != pref[r+k][c+1][3] - pref[r][c+k+1][3]:
                return False
            # Check all rows
            for i in range(r, r + k):
                if pref[i+1][c+k][0] - pref[i+1][c][0] != target:
                    return False
            # Check all columns
            for j in range(c, c + k):
                if pref[r+k][j+1][1] - pref[r][j+1][1] != target:
                    return False
            return True

        # Check from largest possible side length downwards
        for k in range(min(m, n), 1, -1):
            for i in range(m - k + 1):
                for j in range(n - k + 1):
                    if is_magic(i, j, k):
                        return k
        return 1
JavaScript:
/**
 * @param {number[][]} grid
 * @return {number}
 */
var largestMagicSquare = function(grid) {
    const m = grid.length;
    const n = grid[0].length;
    // Create prefix sum 3D array: [m+1][n+2][4]
    const pref = Array.from({ length: m + 1 }, () =>
        Array.from({ length: n + 2 }, () => new Int32Array(4))
    );
    for (let r = 1; r <= m; r++) {
        for (let c = 1; c <= n; c++) {
            const val = grid[r-1][c-1];
            pref[r][c][0] = pref[r][c-1][0] + val; // Row
            pref[r][c][1] = pref[r-1][c][1] + val; // Col
            pref[r][c][2] = pref[r-1][c-1][2] + val; // Diag
            pref[r][c][3] = pref[r-1][c+1][3] + val; // Anti-Diag
        }
    }
    const isMagic = (r, c, k) => {
        const target = pref[r+k][c+k][2] - pref[r][c][2];
        if (target !== pref[r+k][c+1][3] - pref[r][c+k+1][3]) return false;
        for (let i = r; i < r + k; i++) {
            if (pref[i+1][c+k][0] - pref[i+1][c][0] !== target) return false;
        }
        for (let j = c; j < c + k; j++) {
            if (pref[r+k][j+1][1] - pref[r][j+1][1] !== target) return false;
        }
        return true;
    };
    for (let k = Math.min(m, n); k >= 2; k--) {
        for (let i = 0; i <= m - k; i++) {
            for (let j = 0; j <= n - k; j++) {
                if (isMagic(i, j, k)) return k;
            }
        }
    }
    return 1;
};
By pre-computing these prefixSum tables, we significantly speed up the validation process for each potential square.
As a mentor, I often see students struggle with grid problems because they try to "count" everything manually. Learning to use prefix sums is like leveling up your vision in game development or data processing: you stop seeing individual pixels and start seeing regions. This problem is excellent practice for interviews at companies like Google or Amazon, where multidimensional array manipulation and optimization are frequently tested. In the real world, these concepts are the foundation for image processing filters and spatial data analysis.
2026-01-18 11:37:22
I’ve tried a bunch of expense trackers over the years, and I kept running into the same problems: slow flows, cluttered screens, and a nagging feeling that I was handing over more data than I should. So I built my own.
Expense Buddy is a privacy‑first, local‑first expense tracker that stays out of your way. It’s built with React Native (Expo), keeps everything on‑device by default, and gives you optional GitHub sync if you want a personal backup you control.
Most expense trackers felt slow, intrusive, or vague about where my data lived. I wanted something simple enough to open daily, fast enough to never frustrate me, and honest about storage. Expense Buddy is the result, focused on clarity, speed, and ownership.
The dashboard is my “daily check‑in” screen. It gives me a quick view of recent spending and a simple 7‑day trend. I made the graph tappable because I kept wanting to jump straight into that day’s entries.
Adding an expense should be boring—in a good way. It’s a one‑screen flow with quick category and payment method picks so I can log something in a few seconds and move on.
I wanted answers, not charts for the sake of charts. The analytics tab helps me see where money goes by category, payment method, and saved instrument, then zoom out with a spending trend view. I also added multiple time windows so I can compare “this week” vs. “the last 3 months” without leaving the screen.
I mess up entries all the time. The history view lets me browse past entries, open any expense, and fix it in place—no weird edit mode, no hunting.
Settings are intentionally boring. You can set a default payment method, add custom categories (the app ships with 8 defaults you can edit), manage saved payment instruments for deeper analytics, and enable GitHub sync to keep everything in sync across devices. Auto‑sync can run on change or on app launch.
I designed GitHub sync to be safe, predictable, and fully optional. Your data stays local unless you explicitly turn this on.
When sync is enabled, the app writes two kinds of files to your repo:
• expenses-YYYY-MM-DD.csv for your expense data,
• settings.json for your app settings.
I’m allergic to janky lists, so performance was a first‑class goal. Virtualized lists, memoized components, and a lightweight state layer keep things snappy even with long histories. The UI stays minimal so logging an expense takes seconds, not minutes.
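For flavor, the general shape of that approach in React Native looks something like the sketch below. This is illustrative only, not the app's actual code; the component and prop names are made up:

import React, { memo } from "react";
import { FlatList, Text, View } from "react-native";

// Memoized row: re-renders only when its own expense object changes.
const ExpenseRow = memo(({ expense }) => (
  <View>
    <Text>{expense.category}</Text>
    <Text>{expense.amount}</Text>
  </View>
));

// FlatList virtualizes long histories: only rows near the viewport are mounted.
export function ExpenseList({ expenses }) {
  return (
    <FlatList
      data={expenses}
      keyExtractor={(item) => item.id}
      renderItem={({ item }) => <ExpenseRow expense={item} />}
      initialNumToRender={20}
      windowSize={7}
    />
  );
}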
Expense Buddy is currently in internal testing on Google Play. To get access or dig deeper, DM @sudokaii on X (Twitter) or send me an email.
2026-01-18 11:26:09
This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI
Hey there! I'm Depa Panjie, a Software Quality Assurance Engineer with 7+ years of breaking things professionally (and then fixing them).
Picture this: You're a QA Engineer who's tired of boring, static portfolios. You think, "What if my portfolio was an entire operating system?"
Crazy? Maybe. Awesome? Absolutely.
So I teamed up with Antigravity (powered by Gemini 3 Pro) and said: "Let's build Chrome OS... but make it a portfolio."
What happened next was pure magic.
Note: The embedded preview below has limited screen size. For the full desktop experience with draggable windows, multiple apps, and all interactive features, please click the Live Demo link below to open it in a new tab! The portfolio is optimized for screens wider than 768px.
What You're About to Experience
This isn't your typical portfolio. This is a fully functional Chrome OS-inspired desktop that runs entirely in your browser.
Pro tip: Try opening multiple apps, dragging them around, and toggling dark mode. It's oddly satisfying.
The Dream Team
The Tech Stack
Frontend Magic:
├── React 18 + TypeScript (Type safety? Yes please!)
├── Vite (Lightning-fast builds ⚡)
├── Pure CSS (No frameworks, just vibes)
└── Lucide React (Beautiful icons)
AI Superpowers:
├── Antigravity (The AI pair programmer)
└── Gemini 3 Pro (The brain)
Deployment:
├── Docker (Multi-stage builds)
├── Nginx (Serving with style)
├── Google Cloud Run (Serverless magic)
└── Cloud Build (Auto-deploy from GitHub)
The AI-Assisted Development Process
Phase 1: The Foundation
Me: "Let's build a window management system"
Gemini: "Here's a React Context-based architecture with z-index management, drag handlers, and state persistence"
Result: A fully functional window manager in one session!
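To give a feel for the pattern, here is a simplified sketch of what a Context-based window manager with z-index management can look like. The names are illustrative, not the project's actual code:

import React, { createContext, useContext, useState } from "react";

// Each window carries a z-index; focusing a window bumps it above all others.
const WindowContext = createContext(null);

export function WindowProvider({ children }) {
  const [windows, setWindows] = useState([]); // [{ id, zIndex }]

  const openWindow = (id) =>
    setWindows((ws) => [...ws, { id, zIndex: ws.length + 1 }]);

  const focusWindow = (id) =>
    setWindows((ws) => {
      const top = Math.max(0, ...ws.map((w) => w.zIndex));
      return ws.map((w) => (w.id === id ? { ...w, zIndex: top + 1 } : w));
    });

  return (
    <WindowContext.Provider value={{ windows, openWindow, focusWindow }}>
      {children}
    </WindowContext.Provider>
  );
}

export const useWindows = () => useContext(WindowContext);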
Phase 2: The Apps
We built 5 complete applications.
Each app was crafted with Gemini suggesting optimal patterns and best practices.
Phase 3: The Polish
Me: "The dark mode text is hard to read"
Gemini: "Let's use a blue-tinted glassmorphic design with proper contrast ratios"
Result: That stunning "Who am I?" card you see in the Files app!
Phase 4: The Tour System
Me: "Users need guidance"
Gemini: "Let's integrate Driver.js with event-driven panel management"
Result: A complete tour loop that even closes panels automatically!
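For anyone who hasn't used it, the basic shape of a Driver.js tour looks like this minimal sketch. The selectors and copy are made-up placeholders, and the event-driven panel handling described above sits on top of a setup like this:

import { driver } from "driver.js";
import "driver.js/dist/driver.css";

// Define the tour steps; each step highlights one element on the desktop.
const tour = driver({
  showProgress: true,
  steps: [
    { element: "#taskbar", popover: { title: "Taskbar", description: "Launch apps from here." } },
    { element: "#files-app", popover: { title: "Files", description: "Open the Files app." } },
  ],
});

tour.drive(); // start the tour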
Phase 5: Mobile Strategy
Me: "Mobile responsive is hard for a desktop OS"
Gemini: "Let's detect mobile and show a beautiful blocking screen instead"
Result: A polite, well-designed message that maintains the desktop experience integrity!
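The gate itself can be as simple as a media-query hook. This is an illustrative sketch, not the project's actual code; the 768px threshold comes from the note at the top of this post:

import { useEffect, useState } from "react";

const QUERY = "(max-width: 768px)";

// Tracks whether the viewport is below the desktop breakpoint.
export function useIsMobile() {
  const [isMobile, setIsMobile] = useState(() => window.matchMedia(QUERY).matches);
  useEffect(() => {
    const mql = window.matchMedia(QUERY);
    const onChange = (e) => setIsMobile(e.matches);
    mql.addEventListener("change", onChange);
    return () => mql.removeEventListener("change", onChange);
  }, []);
  return isMobile;
}

// At the app root: render the blocking screen instead of the desktop.
// return useIsMobile() ? <MobileBlockScreen /> : <Desktop />;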
Phase 6: Cloud Deployment
Me: "How do we deploy this?"
Gemini: "Here's a Dockerfile, nginx config, and Cloud Run setup with CI/CD"
Result: Push to GitHub → Auto-deploy to Cloud Run!
The AI Advantage
Working with Antigravity and Gemini was like having a senior developer pairing with me at every step.
Real Example:
When I said "the Files app needs better dark mode colors," Gemini didn't just change colors, it suggested an entire design system.
That's not just coding, that's design thinking powered by AI!
1. The "It Just Works" Factor
Everything is functional. Not "looks functional", actually functional: you can open apps, drag windows around, and use every feature for real.
2. The Glassmorphic Design
That "Who am I?" card in the Files app? Pure art:
background: rgba(138, 180, 248, 0.08);
border: 2px solid rgba(138, 180, 248, 0.2);
backdrop-filter: blur(10px);
It glows in dark mode like a Chrome OS dream!
3. The Smart Tour System
The Driver.js integration is chef's kiss.
4. The Window Manager
Built from scratch with React Context, z-index management, drag handlers, and state persistence.
5. The Deployment Pipeline
GitHub Push → Cloud Build → Container Registry → Cloud Run → Live!
Zero downtime. Automatic scaling. HTTPS by default. All configured with AI assistance!
6. The AI Collaboration
This project proves that AI isn't replacing developers, it's supercharging them. Gemini helped me: