
🚀 Deploying a Highly Available Web Application on AWS using ALB & Auto Scaling (Beginner-Friendly)

2026-01-20 07:51:04

👋 Introduction

In this hands-on project, I built a production-ready AWS architecture using core services like VPC, Application Load Balancer, Auto Scaling Group, EC2, and NAT Gateway.

This setup follows AWS best practices:

Secure networking

High availability

Automatic scaling

Zero public access to EC2 instances

This guide is beginner-friendly, yet interview-ready.

🧠 What You Will Learn

✅ How to design a secure AWS VPC
✅ Public vs Private Subnets (real use-case)
✅ Application Load Balancer (ALB)
✅ Auto Scaling Group (ASG)
✅ NAT Gateway for outbound internet
✅ Real-world architecture used in companies

🏗️ Architecture Overview
Internet
|

Application Load Balancer (Public Subnets)
|

Target Group
|

Auto Scaling Group
(EC2 Instances in Private Subnets)
|

NAT Gateway → Internet (Outbound Only)

🔒 EC2 instances have NO public IPs
🌐 Only ALB is exposed to the internet

🛠️ Services Used

Amazon VPC

EC2 (Ubuntu)

Application Load Balancer

Auto Scaling Group

Target Groups

NAT Gateway

Elastic IP

Security Groups

🚦 Step-by-Step Implementation
1️⃣ Create a Custom VPC

CIDR: 10.0.0.0/16

Enable:

DNS Hostnames

DNS Resolution
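
If you prefer the CLI over the console, this step might look roughly like this (a sketch; the VPC ID is a placeholder returned by the first command):

aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Enable DNS hostnames and DNS resolution (one attribute per call)
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-hostnames "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc123 --enable-dns-support "{\"Value\":true}"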

2️⃣ Create Subnets

Create 4 subnets:

Public Subnets

Public-Subnet-1 (ALB)

Public-Subnet-2 (NAT Gateway)

Private Subnets

Private-Subnet-1 (EC2)

Private-Subnet-2 (EC2)

⚠️ Enable Auto-assign Public IP = YES only for public subnets
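
A CLI sketch of the same (CIDRs, AZ names, and subnet IDs are illustrative; adjust to your region):

# Public subnets
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.2.0/24 --availability-zone us-east-1b

# Private subnets
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.3.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.4.0/24 --availability-zone us-east-1b

# Auto-assign public IPs on the public subnets only
aws ec2 modify-subnet-attribute --subnet-id subnet-pub1 --map-public-ip-on-launch
aws ec2 modify-subnet-attribute --subnet-id subnet-pub2 --map-public-ip-on-launch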

3️⃣ Internet Gateway

Create and attach an Internet Gateway to the VPC

Required for ALB and NAT Gateway

4️⃣ NAT Gateway (CRITICAL)

Create NAT Gateway in public subnet

Attach Elastic IP

Allows private EC2 to access internet securely
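
In CLI form (IDs are placeholders):

# Allocate an Elastic IP, then create the NAT Gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-pub2 --allocation-id eipalloc-0abc123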

5️⃣ Route Tables

Public Route Table

0.0.0.0/0 → Internet Gateway

Private Route Table

0.0.0.0/0 → NAT Gateway

Associate correctly with subnets.
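
A CLI sketch (route table, IGW, and NAT Gateway IDs are placeholders):

# Public route table: default route to the Internet Gateway
aws ec2 create-route-table --vpc-id vpc-0abc123
aws ec2 create-route --route-table-id rtb-public --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc123
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-pub1

# Private route table: default route to the NAT Gateway
aws ec2 create-route-table --vpc-id vpc-0abc123
aws ec2 create-route --route-table-id rtb-private --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123
aws ec2 associate-route-table --route-table-id rtb-private --subnet-id subnet-priv1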

6️⃣ Security Groups
🔹 ALB Security Group

HTTP (80) → 0.0.0.0/0

🔹 EC2 Security Group

HTTP (80) → ALB Security Group

SSH (22) → Your IP (optional)

🔐 EC2 is accessible only via ALB
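
The key rule, as a CLI sketch (security group IDs are placeholders): the EC2 rule references the ALB security group instead of a CIDR.

# ALB SG: allow HTTP from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-alb --protocol tcp --port 80 --cidr 0.0.0.0/0

# EC2 SG: allow HTTP only from the ALB security group
aws ec2 authorize-security-group-ingress --group-id sg-ec2 --protocol tcp --port 80 --source-group sg-alb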

7️⃣ Launch Template (EC2)

AMI: Ubuntu 22.04
Instance Type: t2.micro

🧾 User Data Script

#!/bin/bash

apt update -y
apt install apache2 -y
systemctl start apache2
systemctl enable apache2

echo "<h1>Welcome from ALB + Auto Scaling</h1>
<p>Hostname: $(hostname)</p>" > /var/www/html/index.html

8️⃣ Target Group

Target Type: Instance

Protocol: HTTP

Port: 80

Health Check Path: /

9️⃣ Application Load Balancer

Type: Internet-facing

Subnets: Public Subnets

Listener: HTTP 80

Forward to Target Group
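
Steps 8 and 9 as a CLI sketch (ARNs and IDs are placeholders):

aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80 --vpc-id vpc-0abc123 --health-check-path /
aws elbv2 create-load-balancer --name web-alb --scheme internet-facing --subnets subnet-pub1 subnet-pub2 --security-groups sg-alb
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=<tg-arn>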

🔟 Auto Scaling Group

Use Launch Template

Subnets: Private Subnets

Desired: 2

Min: 1

Max: 3

Attach to ALB Target Group

📈 Optional: CPU-based scaling policy
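
In CLI form, the ASG and an optional target-tracking policy might look like this (names, subnet IDs, and the target group ARN are placeholders):

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-lt,Version='$Latest' \
  --min-size 1 --max-size 3 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-priv1,subnet-priv2" \
  --target-group-arns <tg-arn>

# Scale out/in to keep average CPU around 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'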

✅ Final Verification

Copy ALB DNS name

Paste into browser

Refresh multiple times

🎉 You will see different hostnames
This confirms:

Load balancing

Auto scaling

High availability
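
You can also verify from a terminal (the DNS name is a placeholder; the grep pattern matches the page written by the user data script above):

# Hit the ALB a few times; the hostname should alternate between instances
for i in $(seq 1 6); do
  curl -s http://<alb-dns-name>/ | grep -o 'Hostname: [^<]*'
done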

📂 GitHub Repository

🔗 Project Source Code & Documentation
👉 https://github.com/IrfanPasha05/aws-alb-autoscaling-project

Includes:

Folder structure

User-data scripts

Setup steps

Troubleshooting guide

🎯 Why This Project Matters

This architecture is used in:

Real production environments

Enterprise applications

DevOps & Cloud Engineer roles

Perfect for:

Resume

Interviews

Portfolio

LinkedIn & DEV

🧩 Future Enhancements

HTTPS with ACM

Custom domain (Route 53)

CloudFront CDN

Monitoring with CloudWatch

🙌 Final Thoughts

This project strengthened my understanding of AWS networking, security, and scalability. If you’re learning AWS or preparing for cloud roles — build this once, and you’ll remember it forever.

Happy Clouding ☁️🚀

Speeding Up PostgreSQL in Containers

2026-01-20 07:45:50

The Problem

Running a test suite on an older CI machine with slow disks revealed PostgreSQL as a major bottleneck. Each test run was taking over 1 hour to complete. The culprit? Tests performing numerous database operations, with TRUNCATE commands cleaning up data after each test.

With slow disk I/O, PostgreSQL was spending most of its time syncing data to disk - operations that were completely unnecessary in an ephemeral CI environment where data persistence doesn't matter.

Catching PostgreSQL in the Act

Running top during test execution revealed the smoking gun:

242503 postgres  20   0  184592  49420  39944 R  81.7   0.3   0:15.66 postgres: postgres api_test 10.89.5.6(43216) TRUNCATE TABLE

PostgreSQL was consuming 81.7% CPU just to truncate a table! This single TRUNCATE operation ran for over 15 seconds. On a machine with slow disks, PostgreSQL was spending enormous amounts of time on fsync operations, waiting for the kernel to confirm data was written to physical storage - even though we were just emptying tables between tests.

The Solution

Three simple PostgreSQL configuration tweaks made a dramatic difference:

services:
  postgres:
    image: postgres:16.11-alpine
    environment:
      POSTGRES_INITDB_ARGS: "--nosync"
      POSTGRES_SHARED_BUFFERS: 256MB
    tmpfs:
      - /var/lib/postgresql/data:size=1g

1. --nosync Flag

The --nosync flag tells PostgreSQL to skip fsync() calls during database initialization. In a CI environment, we don't care about data durability - if the container crashes, we'll just start over. This eliminates expensive disk sync operations that were slowing down database setup.

2. Increased Shared Buffers

Setting POSTGRES_SHARED_BUFFERS: 256MB (up from the default ~128MB) gives PostgreSQL more memory to cache frequently accessed data. This is especially helpful when running tests that repeatedly access the same tables.

3. tmpfs for Data Directory (The Game Changer)

The biggest performance win came from mounting PostgreSQL's data directory on tmpfs - an in-memory filesystem.
This completely eliminates disk I/O for database operations:

tmpfs:
  - /var/lib/postgresql/data:size=1g

With tmpfs, all database operations happen in RAM. This is especially impactful for:

  • TRUNCATE operations - instant cleanup between tests
  • Index updates - no disk seeks required
  • WAL (Write-Ahead Log) writes - purely memory operations
  • Checkpoint operations - no waiting for disk flushes

The 1GB size limit is generous for most test databases. Adjust based on your test data volume.

The Impact

Before: ~60 minutes per test run

After: ~10 minutes per test run

Improvement: 6x faster! 🚀

Real Test Performance Examples

You should have seen my surprise when I first saw a single test taking 30 seconds in containers.
I knew something was terribly wrong. But when I applied the in-memory optimization and
saw the numbers drop to what you'd expect on a normal machine - I literally got tears in my eyes.

Before tmpfs optimization:

API::FilamentSupplierAssortmentsTest#test_create_validation_negative_price = 25.536s
API::FilamentSupplierAssortmentsTest#test_list_with_a_single_assortment = 29.996s
API::FilamentSupplierAssortmentsTest#test_list_missing_token = 25.952s

Each test was taking 25-30 seconds even though the actual test logic was minimal!
Most of this time was spent waiting for PostgreSQL to sync data to disk.

After tmpfs optimization:

API::FilamentSupplierAssortmentsTest#test_list_as_uber_curator = 0.474s
API::FilamentSupplierAssortmentsTest#test_list_as_assistant = 0.466s
API::FilamentSupplierAssortmentsTest#test_for_pressman_without_filament_supplier = 0.420s

These same tests now complete in 0.4-0.5 seconds - a 50-60x improvement per test! 🎉

Where the Time Was Going

The biggest gains came from reducing disk I/O during:

  • TRUNCATE operations between tests - PostgreSQL was syncing empty table states to disk
  • Database initialization at the start of each CI run
  • INSERT operations during test setup - creating test fixtures (users, roles, ...)
  • Transaction commits - each test runs in a transaction that gets rolled back
  • Frequent small writes during test execution

With slow disks, even simple operations like creating a test user or truncating a table would take seconds instead of milliseconds. The top output above shows a single TRUNCATE TABLE operation taking 15+ seconds and consuming 81.7% CPU - most of that was PostgreSQL waiting for disk I/O. Multiply that across hundreds of tests, and you get hour-long CI runs.

The Math

  • 24 tests in this file alone
  • Before: ~27 seconds average per test = ~648 seconds (10.8 minutes) for one test file
  • After: ~0.45 seconds average per test = ~11 seconds for the same file
  • Per-file speedup: 59x faster!

With dozens of test files, the cumulative time savings are massive.

Why This Works for CI

In production, you absolutely want fsync() enabled and conservative settings to ensure data durability. But in CI:

  • Data is ephemeral - containers are destroyed after each run
  • Speed matters more than durability - faster feedback loops improve developer productivity
  • Disk I/O is often the bottleneck - especially on older/slower CI machines

By telling PostgreSQL "don't worry about crashes, we don't need this data forever," we eliminated unnecessary overhead.

Key Takeaways

  1. Profile your CI pipeline - we discovered disk I/O was the bottleneck, not CPU or memory
  2. CI databases don't need production settings - optimize for speed, not durability
  3. tmpfs is the ultimate disk I/O eliminator - everything in RAM means zero disk bottleneck
  4. Small configuration changes can have big impacts - three settings saved us 50 minutes per run
  5. Consider your hardware - these optimizations were especially important on older machines with slow disks
  6. Watch your memory usage - tmpfs consumes RAM; ensure your CI runners have enough (1GB+ for the database)

Implementation in Woodpecker CI

Here's our complete PostgreSQL service configuration:

services:
  postgres:
    image: postgres:16.11-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dbpgpassword
      POSTGRES_DB: api_test
      POSTGRES_INITDB_ARGS: "--nosync"
      POSTGRES_SHARED_BUFFERS: 256MB
    ports:
      - 5432
    tmpfs:
      - /var/lib/postgresql/data:size=1g

Note: The tmpfs field is officially supported in Woodpecker CI's backend (defined in pipeline/backend/types/step.go). If you see schema validation warnings, they may be from outdated documentation - the feature works perfectly.

Lucky us! Not all CI platforms support tmpfs configuration this easily. Woodpecker CI makes it trivial with native Docker support - just add a tmpfs: field and you're done. If you're on GitHub Actions, GitLab CI, or other platforms, you might need workarounds like docker run with --tmpfs flags or custom runner configurations.
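
For example, a plain docker run equivalent might look like this (a sketch; container name and password are illustrative):

docker run -d --name pg-test \
  -e POSTGRES_PASSWORD=test \
  -e POSTGRES_INITDB_ARGS="--nosync" \
  --tmpfs /var/lib/postgresql/data:size=1g \
  -p 5432:5432 \
  postgres:16.11-alpine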

Simple, effective, and no code changes required - just smarter configuration for the CI environment.

Why Not Just Tune PostgreSQL Settings Instead of tmpfs?

TL;DR: I tried. tmpfs is still faster AND simpler.

After seeing the dramatic improvements with tmpfs, I wondered: "Could we achieve similar performance by aggressively tuning PostgreSQL settings instead?" This would be useful for environments where tmpfs isn't available or RAM is limited.

Tested Aggressive Disk-Based Tuning

Experimenting with disabling all durability features:

services:
  postgres:
    command:
      - postgres
      - -c
      - fsync=off # Skip forced disk syncs
      - -c
      - synchronous_commit=off # Async WAL writes
      - -c
      - wal_level=minimal # Minimal WAL overhead
      - -c
      - full_page_writes=off # Less WAL volume
      - -c
      - autovacuum=off # No background vacuum
      - -c
      - max_wal_size=1GB # Fewer checkpoints
      - -c
      - shared_buffers=256MB # More memory cache

The Results: tmpfs Still Wins

Even with all these aggressive settings, tmpfs was still faster.

Disk-based (even with fsync=off):

  • ❌ File system overhead - ext4/xfs metadata operations
  • ❌ Disk seeks - mechanical latency on HDDs, limited IOPS on SSDs
  • ❌ Kernel buffer cache - memory copies between user/kernel space
  • ❌ Docker overlay2 - additional storage driver overhead
  • ❌ Complexity - 7+ settings to manage and understand

tmpfs-based:

  • ✅ Pure RAM operations - no physical storage involved
  • ✅ Zero disk I/O - everything happens in memory
  • ✅ Simple configuration - just one tmpfs line
  • ✅ Maximum performance - nothing faster than RAM

Bonus: Other PostgreSQL CI Optimizations to Consider

If you're still looking for more speed improvements:

  • Disable query logging - reduces I/O overhead:
  command:
    - postgres
    - -c
    - log_statement=none           # Don't log any statements
    - -c
    - log_min_duration_statement=-1  # Don't log slow queries
  • Use fsync=off in postgresql.conf - similar to --nosync but for runtime (redundant with tmpfs)
  • Increase work_mem - helps with complex queries in tests
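
For that last one, a sketch (64MB is an arbitrary illustration; PostgreSQL's default work_mem is 4MB):

  command:
    - postgres
    - -c
    - work_mem=64MB  # more memory per sort/hash operation in queries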

Data Engineering Uncovered: What It Is and Why It Matters

2026-01-20 07:32:01

Introduction

Every day, organizations generate massive amounts of data. But raw data sitting in scattered systems is worthless. Someone needs to collect it, transform it, move it, and make it available for analysis.

That someone is a Data Engineer.

After years of working as a data engineering consultant and training professionals across industries, I've seen one consistent truth: companies are desperate for skilled data engineers, yet most people still don't fully understand what the role entails.

This article is the first in a series designed to take you from zero to job-ready. Whether you're a developer looking to pivot, a student exploring career options, or a professional curious about the field — this series is for you.

What Is Data Engineering?

In simple terms, data engineering is the practice of designing, building, and maintaining the infrastructure that allows data to flow reliably from source to destination.

Think of it this way:

  • Data Scientists ask questions and build models.
  • Data Analysts interpret data and create reports.
  • Data Engineers make sure the data is there in the first place.

Without data engineers, there is no clean dataset. No dashboard. No machine learning model. Nothing.

A Practical Definition

Data engineering involves:

  • Extracting data from multiple sources (databases, APIs, files, streams)
  • Transforming data into usable formats
  • Loading data into storage systems (data warehouses, data lakes)
  • Ensuring data quality, consistency, and availability
  • Building and maintaining pipelines that automate this entire process

This process is often referred to as ETL (Extract, Transform, Load) or increasingly ELT (Extract, Load, Transform) in modern cloud architectures.

Why Does Data Engineering Matter?

Organizations today are data-driven — or at least they want to be. But being data-driven requires reliable data infrastructure.

Consider these scenarios:

Without Data Engineering                  | With Data Engineering
Reports take days to generate             | Real-time dashboards
Data is inconsistent across teams         | Single source of truth
Analysts spend 80% of time cleaning data  | Analysts focus on insights
Decisions based on gut feeling            | Decisions backed by data

Data engineering is the bridge between raw chaos and actionable intelligence.

Data Engineer vs. Data Scientist vs. Data Analyst

One of the most common questions I get from students:

"What's the difference between these roles?"

Here's a simplified breakdown:

Role           | Focus                    | Key Skills
Data Engineer  | Building infrastructure  | SQL, Python, ETL, Cloud Platforms
Data Scientist | Modeling and prediction  | Statistics, ML, Python/R
Data Analyst   | Reporting and insights   | SQL, Excel, BI Tools

These roles collaborate closely. But if data science is the engine, data engineering is the fuel line.

Is Data Engineering Right for You?

Data engineering might be a good fit if you:

  • Enjoy solving problems systematically
  • Like building things that work reliably at scale
  • Are comfortable with code but don't want to be a traditional software developer
  • Want a career with strong demand and competitive compensation

It might not be for you if:

  • You prefer working directly with business stakeholders daily
  • You want to focus on statistical modeling or visualization
  • You dislike debugging and troubleshooting pipelines

What You'll Learn in This Series

This is part one of a six-part series:

  1. Data Engineering Uncovered: What It Is and Why It Matters (You are here)
  2. Pipelines, ETL, and Warehouses: The DNA of Data Engineering
  3. Tools of the Trade: What Powers Modern Data Engineering
  4. The Math You Actually Need as a Data Engineer
  5. Building Your First Pipeline: From Concept to Execution
  6. Charting Your Path: Courses and Resources to Accelerate Your Journey

By the end of this series, you will have a solid understanding of what data engineers do, the skills required, and a clear roadmap to start your journey.

Final Thoughts

Data engineering is not glamorous. You won't be building flashy AI demos or presenting to executives every week. But without data engineers, none of that would be possible.

If you're looking for a career that combines problem-solving, technical depth, and real impact — data engineering deserves your attention.

In the next article, we'll dive into the core concepts: pipelines, ETL processes, and data architecture.

See you there.

Have questions? Drop them in the comments. I read every one.

Cloudflare Workers performance: an experiment with Astro and worldwide latencies

2026-01-20 07:31:23

Why use Cloudflare Workers?

Cloudflare Workers let you host pages and run code without managing servers. Unlike traditional servers placed in a single or a few locations, the deployed static assets and code are mirrored around the globe in the data centers shown as blue dots below. Naturally, this offers better latencies, scalability and robustness.

Map of data centers

Their developer platform also extends beyond “Workers” (the compute part) and includes storage, databases, queues, AI and lots of other developer tooling, all with a generous free tier and reasonable pricing beyond that.

Why am I writing this? I find it fairly good, I’ve had a good experience with it, and that’s why I’m presenting it here. This article is not sponsored in any way. I just think it’s part of developers’ responsibility to communicate about the tools they use in order to keep their ecosystem lively. I’ve seen too much good stuff get abandoned because there was no “buzz”.

The benefits of using Cloudflare Workers are:

  • Great latencies worldwide

  • Unlimited scalability

  • No servers to take care of

  • Further tooling for data, files, AI, etc.

  • GitHub pull requests preview URLs

  • Free tier good enough for most hobby projects

When not to use it

Like every tool, it has use cases for which it shines and others it is not suited for. This is important to grasp, and understanding the underlying technology helps tremendously. Basically, it loads your whole app bundled as a script and evaluates it on the fly. It’s fast and works wonderfully if your API and the frameworks you use are slim and minimalistic. However, it would be ill-advised in the following use cases:

  • Large complex apps

    The cost of evaluating your API / SSR script will grow as your app grows. The larger it becomes, the more inefficient its invocation as a whole will become. There are also limits on how large your “script” can be. Although the limit has been raised multiple times in the past, the underlying inefficiency will always remain. Thus, be careful when picking dependencies/frameworks, since they can quickly bloat your codebase.

  • Heavy resource consumption

    Due to its nature, it is not suited to computations requiring large amounts of CPU/RAM/time, like statistical models or scientific computation. Large caches are problematic too. Waiting for long-running async server-side requests is OK though: the execution is suspended in between and does not count towards execution time.

  • Long-lived connections

    That’s also problematic. You should use polling rather than keeping connections open.

In other words: “The slimmer, the better!”

It’s kind of difficult to say what’s small enough and when it becomes too large. Workers are best suited for small, self-contained microservices of modest size. Even debugging using breakpoints might turn out challenging. For larger applications, traditional server deployments would be more suitable.

What will we build?

A “Quote of the Day” Web application.

Screenshot

The purpose is not to build something big, but rather a simple proof-of-concept. The quotes will be stored in a KV store and fetched client-side. That way, we can measure how fast the whole thing works and whether it lives up to the expectations.
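
As a taste of how little code such an endpoint needs, here is a minimal sketch of a Worker returning a quote from KV (the binding name and key scheme are assumptions, not the project’s exact code):

// "QUOTES" is an assumed KV binding configured in wrangler.json
export default {
  async fetch(request: Request, env: { QUOTES: KVNamespace }): Promise<Response> {
    const id = new URL(request.url).pathname.split("/").pop();
    const quote = await env.QUOTES.get(`quote:${id}`); // assumed key scheme
    if (!quote) return new Response("Not found", { status: 404 });
    return new Response(quote, { headers: { "content-type": "application/json" } });
  },
};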

The default version of https://quoted.day is available in two flavours: a client-side single-page application (SPA) and a server-side-rendered (SSR) version.

I swap which one is the default from time to time to perform experiments. Performance (latency) may vary depending on where you are located and whether what you fetch is “hot” or “cold”. Before we delve into the details of how to build such an app, let’s take a look at the performance we can expect.

Benchmarking latencies worldwide

Unlike Cloudflare’s internal latency measures, taken “inside” the worker and therefore quite optimistic, we will look at the “real” external latency thanks to the great tool https://www.openstatus.dev/play/checker .

Thanks to that, we can obtain a pretty good idea of the overall latencies that can be observed all over the world. Note however that Australia, Asia and Africa may have rather erratic latencies that “jump” sometimes.

We will also benchmark multiple things separately:

  • Static assets

  • Stateless functions

  • Hot KV read

  • Cold KV read

  • KV writes

Also, every case gets “two passes”, to hopefully fill caches on the way, and only the second one is recorded.

Static assets

This was obtained by fetching the main page at https://quoted.day/spa

Region Latency
🇩🇪 fra Frankfurt, Germany 30ms
🇩🇪 koyeb_fra Frankfurt, Germany 31ms
🇫🇷 cdg Paris, France 33ms
🇳🇱 railway_europe-west4-drams3a Amsterdam, Netherlands 33ms
🇬🇧 lhr London, United Kingdom 31ms
🇸🇪 arn Stockholm, Sweden 32ms
🇫🇷 koyeb_par Paris, France 31ms
🇳🇱 ams Amsterdam, Netherlands 54ms
🇺🇸 ewr Secaucus, New Jersey, USA 32ms
🇺🇸 iad Ashburn, Virginia, USA 36ms
🇺🇸 koyeb_was Washington, USA 35ms
🇨🇦 yyz Toronto, Canada 50ms
🇺🇸 ord Chicago, Illinois, USA 36ms
🇺🇸 lax Los Angeles, California, USA 28ms
🇺🇸 sjc San Jose, California, USA 26ms
🇺🇸 railway_us-east4-eqdc4a Virginia, USA 41ms
🇺🇸 railway_us-west2 California, USA 49ms
🇺🇸 koyeb_sfo San Francisco, USA 29ms
🇸🇬 railway_asia-southeast1-eqsg3a Singapore, Singapore 53ms
🇮🇳 bom Mumbai, India 95ms
🇺🇸 dfw Dallas, Texas, USA 30ms
🇯🇵 nrt Tokyo, Japan 28ms
🇦🇺 syd Sydney, Australia 31ms
🇸🇬 sin Singapore, Singapore 294ms
🇸🇬 koyeb_sin Singapore, Singapore 436ms
🇧🇷 gru Sao Paulo, Brazil 252ms
🇿🇦 jnb Johannesburg, South Africa 559ms
🇯🇵 koyeb_tyo Tokyo, Japan 28ms

Stateless function

This is obtained by fetching the endpoint https://quoted.day/api/time which simply returns the current time.

Region Latency
🇬🇧 lhr London, United Kingdom 38ms
🇩🇪 koyeb_fra Frankfurt, Germany 32ms
🇳🇱 railway_europe-west4-drams3a Amsterdam, Netherlands 36ms
🇫🇷 cdg Paris, France 75ms
🇳🇱 ams Amsterdam, Netherlands 76ms
🇩🇪 fra Frankfurt, Germany 88ms
🇫🇷 koyeb_par Paris, France 73ms
🇸🇪 arn Stockholm, Sweden 97ms
🇺🇸 railway_us-east4-eqdc4a Virginia, USA 36ms
🇺🇸 koyeb_was Washington, USA 62ms
🇺🇸 ewr Secaucus, New Jersey, USA 95ms
🇺🇸 lax Los Angeles, California, USA 39ms
🇺🇸 sjc San Jose, California, USA 25ms
🇺🇸 iad Ashburn, Virginia, USA 92ms
🇺🇸 dfw Dallas, Texas, USA 90ms
🇨🇦 yyz Toronto, Canada 22ms
🇺🇸 ord Chicago, Illinois, USA 108ms
🇮🇳 bom Mumbai, India 99ms
🇸🇬 railway_asia-southeast1-eqsg3a Singapore, Singapore 45ms
🇯🇵 nrt Tokyo, Japan 27ms
🇺🇸 railway_us-west2 California, USA 99ms
🇧🇷 gru Sao Paulo, Brazil 89ms
🇦🇺 syd Sydney, Australia 26ms
🇸🇬 sin Singapore, Singapore 220ms
🇺🇸 koyeb_sfo San Francisco, USA 26ms
🇿🇦 jnb Johannesburg, South Africa 540ms
🇸🇬 koyeb_sin Singapore, Singapore 354ms
🇯🇵 koyeb_tyo Tokyo, Japan 71ms

Hot KV read

This is obtained by fetching a fixed quote from the KV store using the endpoint https://quoted.day/api/quote/123

Region Latency
🇬🇧 lhr London, United Kingdom 34ms
🇫🇷 cdg Paris, France 39ms
🇳🇱 railway_europe-west4-drams3a Amsterdam, Netherlands 35ms
🇫🇷 koyeb_par Paris, France 37ms
🇸🇪 arn Stockholm, Sweden 34ms
🇳🇱 ams Amsterdam, Netherlands 77ms
🇩🇪 koyeb_fra Frankfurt, Germany 103ms
🇨🇦 yyz Toronto, Canada 25ms
🇺🇸 dfw Dallas, Texas, USA 33ms
🇺🇸 koyeb_was Washington, USA 55ms
🇩🇪 fra Frankfurt, Germany 168ms
🇺🇸 iad Ashburn, Virginia, USA 106ms
🇺🇸 railway_us-west2 California, USA 52ms
🇺🇸 ewr Secaucus, New Jersey, USA 122ms
🇺🇸 koyeb_sfo San Francisco, USA 33ms
🇺🇸 railway_us-east4-eqdc4a Virginia, USA 123ms
🇿🇦 jnb Johannesburg, South Africa 43ms
🇮🇳 bom Mumbai, India 99ms
🇸🇬 railway_asia-southeast1-eqsg3a Singapore, Singapore 88ms
🇺🇸 ord Chicago, Illinois, USA 69ms
🇧🇷 gru Sao Paulo, Brazil 99ms
🇺🇸 sjc San Jose, California, USA 40ms
🇦🇺 syd Sydney, Australia 64ms
🇺🇸 lax Los Angeles, California, USA 91ms
🇸🇬 sin Singapore, Singapore 345ms
🇯🇵 nrt Tokyo, Japan 126ms
🇯🇵 koyeb_tyo Tokyo, Japan 65ms
🇸🇬 koyeb_sin Singapore, Singapore 856ms

Cold KV read

This is obtained by fetching a random quote from the KV store using the endpoint https://quoted.day/api/quote

Note that each call will cache the result for a day at the edge location, resulting in possibly turning cold reads into hot reads as traffic increases.

Region Latency
🇩🇪 fra Frankfurt, Germany 131ms
🇩🇪 koyeb_fra Frankfurt, Germany 105ms
🇬🇧 lhr London, United Kingdom 110ms
🇳🇱 ams Amsterdam, Netherlands 130ms
🇫🇷 cdg Paris, France 145ms
🇸🇪 arn Stockholm, Sweden 134ms
🇫🇷 koyeb_par Paris, France 127ms
🇳🇱 railway_europe-west4-drams3a Amsterdam, Netherlands 133ms
🇺🇸 ewr Secaucus, New Jersey, USA 197ms
🇺🇸 ord Chicago, Illinois, USA 201ms
🇺🇸 iad Ashburn, Virginia, USA 220ms
🇨🇦 yyz Toronto, Canada 243ms
🇺🇸 koyeb_was Washington, USA 229ms
🇺🇸 dfw Dallas, Texas, USA 287ms
🇺🇸 railway_us-east4-eqdc4a Virginia, USA 270ms
🇸🇬 sin Singapore, Singapore 288ms
🇺🇸 sjc San Jose, California, USA 245ms
🇮🇳 bom Mumbai, India 502ms
🇿🇦 jnb Johannesburg, South Africa 322ms
🇸🇬 railway_asia-southeast1-eqsg3a Singapore, Singapore 323ms
🇺🇸 lax Los Angeles, California, USA 247ms
🇺🇸 koyeb_sfo San Francisco, USA 217ms
🇺🇸 railway_us-west2 California, USA 300ms
🇧🇷 gru Sao Paulo, Brazil 601ms
🇯🇵 nrt Tokyo, Japan 822ms
🇸🇬 koyeb_sin Singapore, Singapore 574ms
🇯🇵 koyeb_tyo Tokyo, Japan 335ms
🇦🇺 syd Sydney, Australia 964ms

KV writes

This is obtained by fetching quoted.day/api/bump-counter, which creates a temporary KV pair with an expiration time of 10 minutes. It kind of emulates the concept of initiating a “session”.
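
In Worker code, such a write boils down to a single KV put with an expiration (a sketch; the key name is illustrative):

// Write a short-lived KV pair that expires after 10 minutes
await env.QUOTES.put(`counter:${sessionId}`, "1", { expirationTtl: 600 });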

Region Latency
🇫🇷 cdg Paris, France 128ms
🇩🇪 koyeb_fra Frankfurt, Germany 151ms
🇩🇪 fra Frankfurt, Germany 147ms
🇫🇷 koyeb_par Paris, France 194ms
🇳🇱 ams Amsterdam, Netherlands 145ms
🇸🇪 arn Stockholm, Sweden 240ms
🇬🇧 lhr London, United Kingdom 176ms
🇺🇸 dfw Dallas, Texas, USA 212ms
🇺🇸 railway_us-west2 California, USA 238ms
🇺🇸 koyeb_was Washington, USA 305ms
🇺🇸 railway_us-east4-eqdc4a Virginia, USA 295ms
🇺🇸 ewr Secaucus, New Jersey, USA 408ms
🇺🇸 iad Ashburn, Virginia, USA 423ms
🇨🇦 yyz Toronto, Canada 337ms
🇺🇸 ord Chicago, Illinois, USA 359ms
🇸🇬 koyeb_sin Singapore, Singapore 409ms
🇺🇸 lax Los Angeles, California, USA 335ms
🇮🇳 bom Mumbai, India 347ms
🇺🇸 sjc San Jose, California, USA 438ms
🇺🇸 koyeb_sfo San Francisco, USA 247ms
🇸🇬 sin Singapore, Singapore 508ms
🇯🇵 nrt Tokyo, Japan 684ms
🇦🇺 syd Sydney, Australia 713ms
🇯🇵 koyeb_tyo Tokyo, Japan 734ms
🇳🇱 railway_europe-west4-drams3a Amsterdam, Netherlands 1,259ms
🇸🇬 railway_asia-southeast1-eqsg3a Singapore, Singapore 1,139ms
🇿🇦 jnb Johannesburg, South Africa 2,266ms

SSR Page with KV cold reads

Lastly, in this test, we combine reading a random quote (which usually results in a cold KV read) with rendering it server-side in a page.

Region Latency
🇫🇷 koyeb_par Paris, France 111ms
🇬🇧 lhr London, United Kingdom 108ms
🇳🇱 railway_europe-west4-drams3a Amsterdam, Netherlands 125ms
🇫🇷 cdg Paris, France 133ms
🇩🇪 koyeb_fra Frankfurt, Germany 139ms
🇩🇪 fra Frankfurt, Germany 146ms
🇸🇪 arn Stockholm, Sweden 142ms
🇳🇱 ams Amsterdam, Netherlands 70ms
🇺🇸 railway_us-east4-eqdc4a Virginia, USA 151ms
🇺🇸 koyeb_was Washington, USA 159ms
🇺🇸 ewr Secaucus, New Jersey, USA 201ms
🇺🇸 iad Ashburn, Virginia, USA 209ms
🇺🇸 ord Chicago, Illinois, USA 217ms
🇺🇸 dfw Dallas, Texas, USA 220ms
🇺🇸 sjc San Jose, California, USA 191ms
🇺🇸 railway_us-west2 California, USA 201ms
🇨🇦 yyz Toronto, Canada 255ms
🇺🇸 lax Los Angeles, California, USA 257ms
🇺🇸 koyeb_sfo San Francisco, USA 268ms
🇮🇳 bom Mumbai, India 422ms
🇯🇵 nrt Tokyo, Japan 332ms
🇸🇬 sin Singapore, Singapore 284ms
🇧🇷 gru Sao Paulo, Brazil 327ms
🇸🇬 railway_asia-southeast1-eqsg3a Singapore, Singapore 632ms
🇸🇬 koyeb_sin Singapore, Singapore 677ms
🇿🇦 jnb Johannesburg, South Africa 673ms
🇦🇺 syd Sydney, Australia 385ms
🇯🇵 koyeb_tyo Tokyo, Japan 350ms

Observations

It is interesting to see how you can infer how the KV store works just by watching the numbers. It appears the KV store is not actively replicated; rather, KV pairs are copied “on demand” to remote locations. When cached (by default for 1 minute), subsequent reads are fast. The latencies of such “hot” KV pairs are pretty good overall. No complaints here. How long a pair remains cached there can also be configured using the cacheTtl parameter of the KV get request. However, the downside of increasing that value is that the cached copy does not reflect changes/updates triggered from other locations during that time.
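
In code, that per-read cache hint looks like this (a sketch; one hour is an arbitrary value):

// Keep the edge-cached copy for up to an hour after this read
const quote = await env.QUOTES.get("quote:123", { cacheTtl: 3600 });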

Unsurprisingly, cold reads have worse latencies. The other thing you can infer from the numbers is that there seems to be an “origin location”, and cold-read latencies increase with the distance to that location. Therefore, pay attention to “where” you create the KV store, as it impacts all future latencies around the globe. Note that Workers KV might change in the future; this is merely an observation of its state right now.

While read operations are OK, write operations are rather disappointing right now. I expected them to have great latencies too, writing to the “edge” and letting the propagation take place asynchronously, but it is the opposite. Writes appear to communicate with the “origin” storage. The time it takes to set a value grows the further away you are from where you created the KV store. This is kind of bad news, because setting/updating values is a pretty common operation, for example to authenticate users. Dear Cloudflare team, I hope you improve that part in the future.

A word of caution

If you develop your webapp, publish it and take a look at it, you will probably not even notice the bad latencies. You will see optimal latencies, with the origin KV store being near you. However, someone at the other end of the planet will have an uglier experience. If that person hits a handful of cache misses or writes, the response time might quickly climb to a few seconds. That is not how I would expect a “distributed” KV store to behave. Let us be clear: right now, this behaves more like a centralized KV store with on-demand cached copies at the edge.

Quite ironically, it basically feels more like a traditional single-location database right now (plus caches). While the latency of a single cache miss or a single write is not dramatic, it can quickly pile up across multiple calls, and write-heavy webapps especially risk facing increased “sluggishness” depending on their location. Here as well, being “minimalistic” about KV calls should be taken to heart when designing a webapp on Workers.

Lastly, there was one more setting available in the Worker: “Default Placement” vs “Smart Placement”. I tried both, but I did not see noticeable changes in the latencies. I think that’s because there is a single KV store call, and it takes time and traffic to gather telemetry and adjust the placement of workers. It might be great, but for this experiment, it had no effect at all.

Single-Page-Applications vs Server-Side-Rendering

Here as well, neither is universally better or worse than the other, and the answer to which one to use is “it depends”.

Besides strong differences regarding frameworks and overall architecture, the choice also has practical, fundamental differences for the end user. It’s also fascinating to see history repeating itself: the web first started with server-rendered pages, then single-page applications with client-side data fetching took over, and now SSR is resurging, just like in the past, only with new tech stacks.

SSR is actually the easiest one to explain: you fetch all the required data server-side, put everything in a template and return the resulting page to the end user. It takes a bit of time and processing power server-side and is not cacheable, but the client gets a “finished” page.

The SPA does the opposite. Although the HTML/CSS/JS is static and cached (hence quickly fetched), the resources are typically much larger due to all the client-side JavaScript libraries needed. Then the heavy lifting starts: data is fetched and the page rendered, typically while showing a loading spinner. As a result, the total time to render the page is longer.

However, interacting with the SPA is typically smoother afterwards, because interactions just exchange data with the server and make local changes to the page. In contrast, SSR means navigating to and loading a new page. Hence, whether SPA or SSR is more suited depends on how “interactive” the page/app should be.

As a rule of thumb, if it’s more like a static “web page”, go for SSR, if it’s more like an interactive “web app”, go for SPA.

Lastly, the nice thing about Astro, picked here as an illustrative example, is that the whole spectrum is possible: static pages, SPA and SSR.

Sources

The source code of this experiment is here: https://github.com/dagnelies/quoted-day

If you have a GitHub and a Cloudflare account, you can also fork & deploy by clicking here:

Deploy to Cloudflare

If the button doesn’t work, here it is as link instead: https://deploy.workers.cloudflare.com/?url=https://github.com/dagnelies/quoted-day

It will fork the GitHub repository and deploy it to an internal URL so that you can preview it. Afterwards, you can edit the code and it will auto-deploy, etc.

Note that the example references a KV store that is mine, so you will have to create your own KV store and swap the QUOTES KV id in the wrangler.json file with yours. You will also have to fill it with quotes initially if you want to reproduce the example. Luckily, there are scripts in the package.json to do just that.

Everything beyond this point would deserve a tutorial on its own. This was merely the result of an experiment, how the latencies hold up and some insights on the platform. Enjoy!

Let’s Build with AI Like It’s 1998

2026-01-20 07:30:45

This is a submission for the New Year, New You Portfolio Challenge Presented by Google AI

About Me

I’m a full-stack developer and digital designer with over a decade of experience building robust, scalable solutions that solve real-world problems. I thrive on both thoughtful architecture and polished design — from concept through deployment — and I’m passionate about creating work that is both technically sound and visually engaging.

For this challenge, I took a personal and creative direction with my portfolio: a Windows 98-style experience with a MySpace-inspired profile. This project isn’t just a collection of links — it’s a window into who I am as a developer and a creator.

Growing up in the 90s, Windows 98 was my first computer and one of my earliest influences in computing. The look, sound, and playfulness of those early graphical interfaces sparked my curiosity about how software can be expressive and fun. Likewise, MySpace was the first social platform where I crafted my digital identity, experimented with layout and style, and shared music and stories that mattered to me as a teenager.

That nostalgia fuels this portfolio: blending retro aesthetics with modern cloud-native tech. I built and deployed this project to Google Cloud Run, leveraging scalable infrastructure and embedding the live site directly in my submission.

This portfolio reflects my technical skills, creative thinking, and willingness to break conventions — using nostalgia not as a gimmick, but as a thoughtful design lens. I hope it shows not only where I’ve been, but also where I’m going as a developer.

Portfolio

Take me for a spin!
View UribeJr98

The deployment is hosted on Google Cloud Run and features a custom dev-tutorial label per the submission guidelines.

How I Built It

This portfolio was built as a fully interactive, OS-style web application, inspired by Windows 98 and early MySpace layouts, but implemented with modern frontend architecture and cloud-native deployment.

The app is structured around a centralized window manager that controls open state, focus, minimization, and z-index — allowing draggable, overlapping windows similar to a real desktop environment. Each “application” (About Me, Media Player, Projects, Notepad Messenger, Pong, Explorer) is implemented as a self-contained React component, keeping the system modular and extensible.
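
As an illustration of that pattern (not the project’s actual code), the core of such a window manager can be as small as this TypeScript sketch:

// Minimal window-manager state for draggable, focusable windows
type WindowId = "about" | "media" | "projects" | "notepad" | "pong" | "explorer";

interface WindowState {
  open: boolean;
  minimized: boolean;
  zIndex: number;
}

const windows = new Map<WindowId, WindowState>();
let topZ = 0;

// Focusing a window restores it and bumps it above everything else
function focusWindow(id: WindowId): void {
  const w = windows.get(id);
  if (!w) return;
  w.open = true;
  w.minimized = false;
  w.zIndex = ++topZ;
}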

The frontend is built with React and Vite for fast development and optimized builds. I used 98.css as a stylistic base and layered custom CSS for responsive behavior, window chrome, and interaction polish. Project case studies open in a dedicated viewer that supports structured data and GitHub-style Markdown rendering, making content easy to maintain while staying on-theme.

For interactivity, I integrated:

  • A SoundCloud-powered media player
  • A Pong mini-game
  • A persistent Notepad Messenger
  • Desktop icons and a taskbar that mirror classic OS behavior

In production, the app is served by a lightweight Node.js + Express server, which handles SPA routing and static assets. The project is deployed to Google Cloud Run, demonstrating a modern, scalable deployment model with minimal operational overhead and a live, embeddable URL for the DEV submission.

The goal was to combine nostalgic design with modern engineering, showing that expressive interfaces and production-ready architecture can coexist.

What I'm Most Proud Of

I’m proud of who I am and where I come from.

This portfolio acts as a time capsule of my life, capturing the technology, music, and digital spaces that shaped my earliest experiences with computers. Windows 98 was my first operating system, and MySpace was the first place I learned how to express myself online. Those moments sparked my curiosity and influenced how I approach design and interaction today.

What makes this project meaningful to me is how it connects past and present. I took those formative experiences and rebuilt them using modern tools, thoughtful architecture, and cloud deployment — turning nostalgia into something functional and forward-looking.

This portfolio isn’t just a collection of projects. It’s a reflection of my journey, preserving where I started while showing who I’ve become as a developer.

Querying & Filtering in Oracle Databases: What Actually Clicked for Me As a Beginner

2026-01-20 06:59:00

Today felt like one of those quiet but important SQL days. No new tables. No fancy joins. Just learning how to ask better questions of the data I already have.

This lesson was all about querying and filtering rows: basically, learning how to tell the database exactly what I want back, and nothing more.

I learnt this through a simple toys table, which honestly helped a lot. Oracle's courses teach in a quirky but fun manner that lets you learn and enjoy yourself while you do. They make very daunting topics look less intimidating by approaching them with easy-to-understand, relatable concepts.

create table toys (
  toy_name varchar2(100),
  colour   varchar2(10),
  price    number(10, 2)
);

insert into toys values ( 'Sir Stripypants', 'red', 0.01 );
insert into toys values ( 'Miss Smelly_bottom', 'blue', 6.00 );
insert into toys values ( 'Cuteasaurus', 'blue', 17.22 );
insert into toys values ( 'Mr Bunnykins', 'red', 14.22 );
insert into toys values ( 'Baby Turtle', 'green', null );

commit;

Selecting Rows (and Why SELECT * Is a Trap)

The very first thing was learning that SELECT really has two jobs:

  • FROM → where the data lives
  • SELECT → what columns I actually want back

At first, SELECT * FROM toys; seems very convenient, but only while your table is small. Imagine a bigger table with over 10,000 rows; a select * isn't going to help you find Mr Bunnykins in there.

select toy_name, price
from toys;

This forces you to think about what you actually need, and it also:

  • Sends less data over the network
  • Breaks less when columns change

That alone already changed how I write queries. Be specific and effective.

Filtering Rows with WHERE

So to start being more effective instead of getting everything in the table, you can start asking questions like:

“Only show me the red toys”

select *
from toys
where colour = 'red';

Or:

“Give me just one specific row”

select *
from toys
where toy_name = 'Sir Stripypants';

Simple, but this is the foundation of almost every real query.

Combining Conditions: AND, OR, and Confusion

This part tripped me up more than I expected.

At first glance, this feels logical:

where toy_name = 'Mr Bunnykins'
or toy_name = 'Baby Turtle'
and colour = 'green';

But the results weren’t what I expected.

That’s when I understood that:

AND runs before OR

Which means SQL doesn’t read conditions left to right the way my brain wants it to.

The fix?

Use parentheses (). Always.

where ( toy_name = 'Mr Bunnykins' or toy_name = 'Baby Turtle' )
and colour = 'green';

After that, the query does exactly what it looks like it should do. This alone has saved me from future bugs.

Lists of Values with IN

Instead of writing this:

where colour = 'red'
or colour = 'green'
or colour = 'blue'
or colour = 'yellow'

You can write this:

where colour in ( 'red', 'green', 'blue', 'yellow' );

Much cleaner. Much easier to read, and very effective. This feels like one of those features you don’t appreciate until you really need it. Imagine 100,000 rows in the table where I want just a handful that meet some conditions; it's more effective to use IN than to write multiple OR conditions.

Ranges with <, >=, and BETWEEN

where price between 6 and 20;

Important detail I learned:

  • BETWEEN includes both ends
  • If you want strict boundaries, you must write them yourself
where price > 6
and price <= 20;

It's the small details that make the big differences.

Wildcards and Pattern Matching (LIKE)

where colour like 'b%';

Finds anything starting with b.

where toy_name like '%B%';

Finds toy names containing uppercase B.

So wildcards:

  • _ matches exactly one character
  • % matches zero or more characters

And if you actually want to search for _ or % themselves… you need ESCAPE.

That’s one of those things you won’t know until it breaks something.
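
For example, to find toys with a literal underscore in the name (using backslash as the escape character), which matches 'Miss Smelly_bottom' in our table:

select toy_name
from toys
where toy_name like '%\_%' escape '\';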

NULL Is… Weird (But Makes Sense)

This line returns nothing:

where price = null;

Turns out:

NULL isn’t a value — it’s unknown

So you must write:

where price is null;

And the opposite:

where price is not null;

Negation: Saying “NOT This”

You can flip most conditions using NOT:

where not colour = 'green';

Or by using <>:

where colour <> 'green';

But again — NULL is special.

To exclude nulls, you must use:

where colour is not null;

There’s no shortcut here.

Final Thoughts

This lesson didn’t feel flashy but it felt important.

Everything else in SQL builds on this:

  • Aggregations
  • Joins
  • Subqueries
  • Real-world analytics

If you can’t filter data confidently, everything else feels fragile.

I’m learning to slow down, be explicit, and write queries that are readable and effective.

If you’re also learning SQL and sometimes feel silly getting tripped up by WHERE clauses… you’re not alone.

I’ll keep documenting this journey, the confusion, clarity, and all.