
How to Set Up an Automated WordPress LEMP Server with SSL on AWS (Using Ansible)

2025-11-12 18:21:23

Deploying a WordPress website doesn’t have to be manual or repetitive. By combining AWS EC2, Ansible, and the LEMP stack (Linux, Nginx, MySQL, PHP), you can automate the entire process — from server provisioning to SSL configuration — in one smooth run.

In this guide, I’ll walk you through how I built an automated WordPress deployment with Let’s Encrypt SSL, running on an Ubuntu 24.10 EC2 instance, all managed through Ansible playbooks.

What We’ll Cover

  • Understanding the LEMP stack architecture

  • Setting up AWS EC2 for automation

  • Creating Ansible playbooks and roles

  • Automating WordPress installation and SSL setup

  • Troubleshooting common deployment issues

Project Overview

This project automates WordPress deployment on AWS using Ansible. It installs and configures:

  • Linux (Ubuntu 24.10)

  • Nginx as the web server

  • MySQL for the database

  • PHP for dynamic content

  • Let’s Encrypt SSL for HTTPS security

You’ll end up with a fully operational WordPress site, protected by firewall rules and accessible securely via HTTPS.

Folder Structure

automated_wordpress_lemp_ssl_setup/
├── inventory
├── main-playbook.yaml
└── roles/
    ├── common/
    │   └── tasks/main.yaml
    ├── firewall/
    │   └── tasks/main.yaml
    ├── lemp/
    │   └── tasks/main.yaml
    ├── wordpress/
    │   └── tasks/main.yaml
    ├── ssl/
    │   └── tasks/main.yaml
    └── nginx/
        ├── tasks/main.yaml
        ├── handlers/main.yaml
        └── templates/wordpress_nginx.conf.j2

Tools & Technologies

  • OS ---> Ubuntu 24.10 (AWS EC2)
  • Web Server ---> Nginx
  • Database ---> MySQL
  • Language ---> PHP
  • CMS ---> WordPress
  • Automation ---> Ansible
  • SSL ---> Let’s Encrypt (Certbot)

Role Descriptions

  • common → Updates and upgrades all system packages

  • firewall → Enables UFW and allows ports 22 (SSH), 80 (HTTP), 443 (HTTPS)

  • lemp → Installs Nginx, MySQL, PHP, and extensions

  • wordpress → Downloads WordPress, configures database and user

  • nginx → Configures Nginx for WordPress and reloads on change

  • ssl → Obtains Let’s Encrypt SSL certificate and sets HTTPS redirection
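For reference, a main playbook that simply wires these roles together can be as small as the sketch below (the host group and role names come from this article; the actual main-playbook.yaml in the repository may differ in details):

- name: Automated WordPress LEMP deployment with SSL
  hosts: wordpress
  become: true
  roles:
    - common
    - firewall
    - lemp
    - wordpress
    - nginx
    - ssl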

SSL Configuration

Once the domain (e.g., mywordpressite.zapto.org) is mapped to your EC2 public IP, Certbot automatically generates an SSL certificate and configures HTTPS redirection.

💡 Tip: Always ensure ports 80 and 443 are open in both AWS Security Group and UFW.
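The SSL role itself can stay small. A hypothetical roles/ssl/tasks/main.yaml might look like the sketch below — the email is a placeholder and the real role in the repository may be structured differently:

- name: Install Certbot and the Nginx plugin
  ansible.builtin.apt:
    name:
      - certbot
      - python3-certbot-nginx
    state: present
    update_cache: true

- name: Obtain certificate and enable HTTPS redirection
  ansible.builtin.command: >
    certbot --nginx --non-interactive --agree-tos --redirect
    -m admin@example.com -d mywordpressite.zapto.org
  args:
    creates: /etc/letsencrypt/live/mywordpressite.zapto.org/fullchain.pem

The creates argument keeps the task idempotent: Certbot only runs if the certificate does not already exist.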

Inventory Example

[wordpress]
<YOUR_EC2_PUBLIC_IP> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/your-key.pem

Running the Playbook

  1. Update domain and email inside roles/ssl/tasks/main.yaml.

  2. Run the playbook:

ansible-playbook -i inventory main-playbook.yaml

  3. Once complete, visit:

  • http://yourdomain.com

  • https://yourdomain.com

Complete the WordPress setup and log in to your admin dashboard.

Troubleshooting Tips

  • Nginx default page showing ---> Remove /etc/nginx/sites-enabled/default

  • Certbot “No such auth” error ---> Wait for DNS propagation, then re-run the playbook

  • WordPress database error ---> Check if the database and user are created properly

  • SSL not working ---> Ensure firewall and AWS Security Group allow ports 80/443
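If the firewall is the suspect, these standard UFW commands (run on the instance) will confirm whether 80 and 443 are open and fix them if not:

# Inspect the current rules, then open HTTP/HTTPS if they're missing
sudo ufw status verbose
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp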

💡 Pro Tips

  • Always remove default Nginx config before adding new ones.

  • Make sure DNS A record points to your EC2 IP before running Certbot.

  • You can re-run the playbook safely — Ansible ensures idempotence.

👤 Author

Alan Varghese

🔗 GitHub

📧 [email protected]

Conclusion

By automating the entire deployment with Ansible, you’ve saved hours of manual setup and reduced human error. This approach is scalable, repeatable, and production-ready — ideal for DevOps portfolios or personal hosting environments.

Getting Started with Requestly’s Local Workspace: API Testing Offline and On Your Terms

2025-11-12 18:21:22

As developers, we live and breathe APIs. We use tools like Requestly to intercept, modify, and test our network requests. It's an indispensable part of the modern development workflow. But what about when you're working on a flight, in a coffee shop with spotty Wi-Fi, or handling highly sensitive API keys that you'd rather not have synced to a cloud?

This is where Requestly's Local Workspace feature changes the game.

I recently spent time exploring this feature for the "No Bugs November" challenge and was blown away by its simplicity and utility. It addresses a core need for privacy, speed, and offline access.

This post is a beginner-friendly guide to get you started with Requestly's Local Workspace, based on my own experience. We'll cover what it is, why you should use it, and how to set up your first local collection.

🚀 What is a Local Workspace?

In short, it's a way to create, test, and save all your API requests and collections directly on your local machine.

Instead of syncing your data to Requestly's cloud servers, a local workspace saves everything into a folder on your computer. This might sound simple, but it unlocks some powerful benefits.

Why Should You Care?

  • Offline-First: No internet? No problem. You have full access to create, run, and manage your API requests. This is a lifesaver for travel or unreliable connections.
  • Privacy & Security: If you're working with sensitive data, personal access tokens, or internal-only APIs, a local workspace ensures that data never leaves your machine.
  • Version Control (The "Aha!" Moment): This was my favorite discovery. Since your workspace is just a folder of files (like JSON) on your disk, you can git init it! You can commit your changes, create new branches for testing, and share your API collections with teammates via a standard Git workflow.
  • Speed: It's incredibly fast. With no network sync to wait for, creating and saving requests is instantaneous.

🛠️ Getting Started: Creating Your First Local Workspace

Okay, let's get our hands dirty. The process is incredibly straightforward.

Prerequisite: You'll need the Requestly Desktop App. This feature is not available on the browser extension alone.

Step 1: Find the Workspace Switcher

In the Requestly app, look at the top-left corner. You'll see the name of your current workspace (it probably says "My Workspace" or your team's name). Click on it.

Step 2: Create a New Workspace

From the dropdown menu, select Create New Workspace.

Step 3: Choose "Local Workspace"

Requestly will give you two options: "Cloud Workspace" (the default) and "Local Workspace." Select Local Workspace.

Step 4: Name and Save Your Workspace

A dialog box will appear asking you to name your workspace and choose a location on your computer to save it.

  1. Give it a clear name (e.g., "My Project API," "Offline Tests").
  2. Choose a directory. I recommend creating a new, empty folder for it.
  3. Click "Save."

And that's it! You'll be switched to a clean, new workspace. You can confirm you're in the right place by checking the workspace name in the top-left corner again.

👨‍💻 Managing Your API Requests Locally

Now that you're in your local workspace, the interface will feel very familiar. You can manage your API requests just as you would in a cloud workspace.

1. Creating Collections

It's always a good practice to organize your requests.

  • Click the + icon in the left-hand sidebar.
  • Select "New Collection."
  • Give your collection a name (e.g., "User Service," "Payment APIs").

This creates a new "folder" to hold all related requests.

2. Creating and Running a Request

  • With your new collection selected, click the + icon next to it and choose "New HTTP Request."
  • This opens the main request panel. You can set your HTTP method (GET, POST, etc.), enter the URL, add headers, and define a body.
  • I tested this with a simple GET request to https://api.github.com/users/requestly.
  • Hit the Send button.

3. Viewing the Response

Just like you'd expect, the response appears in the bottom panel. You can inspect the Body, Headers, and Status code.

The best part? When you close and reopen the app, it's all still there—no sign-in or syncing required.

⚡ The Real Power: Version Control with Git

Here's the trick I mentioned. I navigated to the folder where I saved my workspace on my computer. Inside, I found a requestly.json file and other data.

I immediately ran:

git init
git add .
git commit -m "Initial API collection for User Service"

Now, my entire API testing setup is version-controlled. I can create a new branch (git checkout -b feature/new-endpoint) before I start testing a new feature, and if I mess up my requests, I can just reset. This is a massive win for team collaboration.

If you've ever been frustrated by cloud-sync delays or concerned about where your API keys are being stored, this feature is built for you.

Give it a try—it might just become your new default.

The Vibe Coding Evolution: Why AI Needs Enterprise-Grade Team Features (And How Breaking Down Your Workflow Changes Everything)

2025-11-12 18:20:59

Hey dev.to community! This is my first post here, and I'm excited to jump in. As a full-stack developer and AI enthusiast, I've been experimenting a lot with AI in my coding workflow lately. I originally shared some of these thoughts on LinkedIn after a bit of a hiatus, but I figured this crowd would vibe with it too. Let's dive into what I've learned about "vibe coding" and why I think AI tools are ready for a team-level upgrade.
The Vibe Coding Revelation

I've done AI-assisted projects before, but here's what I'm realizing: the more you use AI, the better you get at using it. It's not just about crafting perfect prompts anymore. It's about understanding structure, knowing how to break things down, and letting AI handle what it does best while you architect the vision.

I used to get back absolute spaghetti—500+ line files that felt personally offended by my existence. But now? I split everything cleanly. Auth lives in one module. UI components stay separate. Data flow has its own domain. Suddenly, AI makes sense of each piece. The difference is night and day.

The Infrastructure Gap

But here's what keeps nagging at me: the tools themselves need to evolve.

We're using AI like it's 2023—solo workflows, no version control, no real collaboration. Meanwhile, we've had Microsoft 365, GitHub, Figma, and other enterprise tools built for teams for years. AI is already embedded in the workplace. Developers, designers, writers, analysts—we're all using it daily. So why are we treating it like a personal assistant instead of a team platform?

What Enterprise AI Could Look Like

Imagine shared AI projects with teammates, version history you can roll back, auto-debugging that learns from your codebase, collaborative prompts with branching and merging, and real permissions and security for enterprise use.

Not just "Pro plans" with extra features. True enterprise editions—company-grade AI platforms built for how teams work, with the governance, security, and collaboration features modern organizations require.

A Workflow Revolution

If you're still using AI raw without breaking your workflow into modular pieces, try it. Split your requests. Separate concerns. Let each conversation focus on one thing. You'll be shocked how much cleaner the output becomes.

But beyond individual workflow optimization, platforms need to catch up. AI-as-a-service for teams isn't just nice-to-have; it's the next logical step.

The Open Question

Folks like Andrej Karpathy and Lex Fridman live this daily and know the landscape inside out. Real talk: when are we getting enterprise-grade AI collaboration tools?

The technology is ready. The demand is there. The workflows are forming organically. Someone just needs to build the infrastructure.

Who's working on this? And if no one is, why not?

What do you think, dev.to? Have you tried vibe coding with AI? What's your take on enterprise features for these tools? Drop your thoughts in the comments—let's discuss!

Using aria-labelledby for accessible names

2025-11-12 18:14:58

When building UI, we often rely on icons or visual cues that don’t include visible text. While that might look clean, it can leave screen reader users guessing.

aria-labelledby is one of the techniques we can use to give clear context to assistive technologies.

What is aria-labelledby?

aria-labelledby lets you reference visible text elsewhere in the DOM to create an accessible name for an element.

It basically tells assistive technology: “Use this other element’s text to label me.”

It’s commonly used when the text that explains an element is located outside the element itself.

<h2 id="dialog-title">Edit Profile</h2>

<button aria-labelledby="dialog-title">
  ✏️
</button>

Here, the button doesn’t have its own text label, but because it references dialog-title, assistive tech will announce “Edit Profile.”

Why use it?

Sometimes we have elements, especially icon-only controls, that need textual context.

There are several techniques:

  • aria-label="Edit profile"
  • <button>Edit Profile</button> (preferred)
  • title="Edit profile"
  • aria-labelledby="id"

However, aria-labelledby has the highest priority, and if used, its referenced text will always take precedence over any other naming source.
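A quick illustration of that precedence, reusing the dialog-title pattern from above (the extra attributes are there purely to show what gets ignored):

<!-- Announced as "Edit Profile": aria-labelledby wins over
     aria-label, title, and any visible content -->
<h2 id="dialog-title">Edit Profile</h2>
<button aria-label="Pencil" title="Pencil" aria-labelledby="dialog-title">✏️</button>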

How it works

1) Create an element with descriptive text:

<h2 id="dialog-title">Edit profile</h2>

2) Reference its ID using aria-labelledby on the target element:

<button aria-labelledby="dialog-title">✏️</button>

And that’s it.

⚠️ Important things to note

  • aria-labelledby has the highest priority among naming sources → It will override aria-label, title, and even visible text

(Screenshot: the accessibility tree showing the available labels)

  • It depends on other elements. If the referenced element is removed from the DOM, or its id stops matching, the accessible name breaks
<button aria-labelledby="label"></button>
<span id="label" hidden>Example</span>

In the example above, the hidden span still provides the name (text referenced by aria-labelledby is included in the name computation even when hidden), but if the span is removed or its id changes, the accessible name of the button is lost ❌.

✅ As a rule of thumb, use aria-labelledby when:

  1. A descriptive text already exists on the page
  2. The label needs to be more than just a plain string
  3. You want to use multiple elements as a label (see the sketch below)
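For the third case, aria-labelledby accepts a space-separated list of ids, and the referenced texts are concatenated in the order given. A hypothetical example:

<!-- Accessible name: "Delete Monthly report.pdf" -->
<span id="action">Delete</span>
<span id="file">Monthly report.pdf</span>
<button aria-labelledby="action file">🗑️</button>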


✅ Final Thoughts

aria-labelledby is a powerful tool for providing context, especially when UI patterns scatter text visually.

Just remember:

  • It takes top priority
  • It relies on another element
  • If that reference breaks → the name breaks

When used well, it helps assistive tech users understand your UI as clearly as everyone else.

Building a Virtual Private Cloud (VPC) from Scratch on Linux

2025-11-12 18:13:37

Introduction

Ever wondered how cloud providers like AWS, GCP, or Azure implement Virtual Private Clouds (VPCs) under the hood? In this comprehensive guide, we'll recreate VPC fundamentals entirely on Linux using native networking primitives like network namespaces, veth pairs, bridges, and iptables.

By the end of this tutorial, you'll have built a fully functional mini-VPC environment supporting:

  • Multiple isolated subnets
  • Inter-subnet routing
  • NAT gateway for internet access
  • VPC peering
  • Firewall rules (Security Groups)

Prerequisites

  • Linux machine (Ubuntu 20.04+ or similar)
  • Root/sudo access
  • Basic understanding of networking concepts (IP addresses, routing, NAT)
  • Python 3.6+ or Bash

Architecture Overview

┌─────────────────────────────────────────────────────────┐
│                         Host OS                         │
│                                                         │
│  ┌──────────────────────────────────────────────────┐  │
│  │                    VPC 1                         │  │
│  │                                                  │  │
│  │  ┌──────────────┐         ┌──────────────┐     │  │
│  │  │   Public     │         │   Private    │     │  │
│  │  │   Subnet     │◄───────►│   Subnet     │     │  │
│  │  │ (Namespace)  │         │ (Namespace)  │     │  │
│  │  │ 10.0.1.0/24  │         │ 10.0.2.0/24  │     │  │
│  │  └──────┬───────┘         └──────┬───────┘     │  │
│  │         │                        │             │  │
│  │         └────────┬───────────────┘             │  │
│  │                  │                             │  │
│  │           ┌──────▼──────┐                      │  │
│  │           │   Bridge    │                      │  │
│  │           │ (VPC Router)│                      │  │
│  │           └──────┬──────┘                      │  │
│  └──────────────────┼──────────────────────────────┘  │
│                     │                                 │
│                     │ NAT                             │
│                     ▼                                 │
│              ┌─────────────┐                          │
│              │ eth0 (WAN)  │                          │
│              └─────────────┘                          │
└─────────────────────────────────────────────────────────┘

Key Components

  1. Linux Bridge: Acts as a virtual switch/router for the VPC
  2. Network Namespaces: Isolated network environments representing subnets
  3. veth Pairs: Virtual ethernet cables connecting namespaces to the bridge
  4. iptables: Implements NAT and firewall rules
  5. Routing Tables: Controls packet flow between subnets

Implementation

Step 1: Set Up the Project

Create a project directory and the main CLI tool:

mkdir vpc-project
cd vpc-project

Create vpcctl.py with the implementation (see the complete code in the GitHub repository).

Make it executable:

chmod +x vpcctl.py

Step 2: Understanding the Core Commands

Our CLI tool provides these commands:

# Create a VPC
sudo ./vpcctl.py create-vpc <name> <cidr> [--interface eth0]

# Add subnet to VPC
sudo ./vpcctl.py add-subnet <vpc-name> <subnet-name> <cidr> [--type public|private]

# Peer two VPCs
sudo ./vpcctl.py peer <vpc1> <vpc2>

# Apply firewall rules
sudo ./vpcctl.py apply-firewall <vpc> <subnet> <policy.json>

# List all VPCs
sudo ./vpcctl.py list

# Delete VPC
sudo ./vpcctl.py delete-vpc <name>

Step 3: Create Your First VPC

Let's create a VPC with CIDR 10.0.0.0/16:

sudo ./vpcctl.py create-vpc vpc1 10.0.0.0/16 --interface eth0

What happens behind the scenes:

  1. Creates a Linux bridge named br-vpc1
  2. Assigns the first IP from the CIDR (10.0.0.1) to the bridge
  3. Enables IP forwarding on the system
  4. Saves VPC state to ~/.vpcctl/vpcs.json
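Under the hood these are plain iproute2 and sysctl calls. A rough manual equivalent, using the names from this example, looks like:

# Create the bridge that acts as the VPC router and give it the gateway IP
sudo ip link add br-vpc1 type bridge
sudo ip addr add 10.0.0.1/16 dev br-vpc1
sudo ip link set br-vpc1 up

# Let the host route packets between subnets (and out to the internet)
sudo sysctl -w net.ipv4.ip_forward=1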

Verify the bridge was created:

ip link show br-vpc1

Step 4: Add Subnets

Add a public subnet (with internet access):

sudo ./vpcctl.py add-subnet vpc1 public 10.0.1.0/24 --type public

Add a private subnet (isolated from internet):

sudo ./vpcctl.py add-subnet vpc1 private 10.0.2.0/24 --type private

What happens:

  1. Creates network namespace (e.g., vpc1-public)
  2. Creates veth pair connecting namespace to bridge
  3. Assigns IP address to namespace interface
  4. Configures default route through bridge
  5. For public subnets: sets up NAT rules using iptables
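Roughly the manual equivalent for the public subnet is shown below; the interface names are illustrative, and vpcctl.py may pick different ones:

# Namespace plus a veth "cable" into the VPC bridge
sudo ip netns add vpc1-public
sudo ip link add veth-pub-br type veth peer name veth-pub-ns
sudo ip link set veth-pub-ns netns vpc1-public
sudo ip link set veth-pub-br master br-vpc1
sudo ip link set veth-pub-br up

# Address the namespace side and route everything via the bridge IP
sudo ip netns exec vpc1-public ip addr add 10.0.1.1/24 dev veth-pub-ns
sudo ip netns exec vpc1-public ip link set veth-pub-ns up
sudo ip netns exec vpc1-public ip link set lo up
sudo ip netns exec vpc1-public ip route add 10.0.0.1 dev veth-pub-ns
sudo ip netns exec vpc1-public ip route add default via 10.0.0.1

# Public subnets additionally get NAT out of the host's internet-facing interface
sudo iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o eth0 -j MASQUERADE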

List the created namespaces:

ip netns list

Step 5: Deploy Test Applications

Let's deploy simple HTTP servers to test connectivity:

In public subnet:

# Start HTTP server in public subnet namespace
sudo ip netns exec vpc1-public python3 -m http.server 8080 &

In private subnet:

# Start HTTP server in private subnet namespace
sudo ip netns exec vpc1-private python3 -m http.server 8081 &

Step 6: Test Connectivity

Test 1: Inter-subnet communication within VPC

# From private subnet, ping public subnet
sudo ip netns exec vpc1-private ping -c 3 10.0.1.1

Expected: Success - subnets in same VPC can communicate

Test 2: Internet access from public subnet

# Public subnet should reach internet via NAT
sudo ip netns exec vpc1-public ping -c 3 8.8.8.8

Expected: Success - NAT gateway allows outbound traffic

Test 3: Internet access from private subnet

# Private subnet should NOT have internet access
sudo ip netns exec vpc1-private ping -c 3 8.8.8.8

Expected: Failure or timeout - private subnet is isolated

Test 4: HTTP access

# Access HTTP server in public subnet
sudo ip netns exec vpc1-private curl http://10.0.1.1:8080

Expected: Success - receives HTTP response

Step 7: Implement VPC Isolation

Create a second VPC:

sudo ./vpcctl.py create-vpc vpc2 172.16.0.0/16
sudo ./vpcctl.py add-subnet vpc2 web 172.16.1.0/24 --type public

Test isolation:

# Try to ping vpc2 from vpc1
sudo ip netns exec vpc1-public ping -c 3 172.16.1.1

Expected: Failure - VPCs are isolated by default

Step 8: Enable VPC Peering

Allow controlled communication between VPCs:

sudo ./vpcctl.py peer vpc1 vpc2

What happens:

  1. Creates veth pair between the two bridges
  2. Adds static routes for cross-VPC traffic

Test cross-VPC communication:

# Now vpc1 can reach vpc2
sudo ip netns exec vpc1-public ping -c 3 172.16.1.1

Expected: Success - peering enables cross-VPC routing

Step 9: Apply Firewall Rules

Create a security policy file firewall-policy.json:

{
  "subnet": "10.0.1.0/24",
  "ingress": [
    {"port": 8080, "protocol": "tcp", "action": "allow"},
    {"port": 22, "protocol": "tcp", "action": "deny"},
    {"port": 443, "protocol": "tcp", "action": "allow"}
  ]
}

Apply the policy:

sudo ./vpcctl.py apply-firewall vpc1 public firewall-policy.json
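One plausible way such a policy translates into iptables rules inside the subnet's namespace (illustrative only; the tool's actual rules may be structured differently):

sudo ip netns exec vpc1-public iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
sudo ip netns exec vpc1-public iptables -A INPUT -p tcp --dport 443 -j ACCEPT
sudo ip netns exec vpc1-public iptables -A INPUT -p tcp --dport 22 -j DROP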

Test the rules:

# HTTP on port 8080 should work
sudo ip netns exec vpc1-private curl http://10.0.1.1:8080

# SSH on port 22 should be blocked
sudo ip netns exec vpc1-private nc -zv 10.0.1.1 22

Step 10: Monitoring and Debugging

View routing table in namespace:

sudo ip netns exec vpc1-public ip route

View iptables rules:

sudo iptables -t nat -L -n -v
sudo ip netns exec vpc1-public iptables -L -n -v

Check bridge connections:

bridge link show br-vpc1

Monitor network traffic:

# Capture traffic on bridge
sudo tcpdump -i br-vpc1 -n

# Capture traffic in namespace
sudo ip netns exec vpc1-public tcpdump -i veth-ns-public -n

Step 11: Cleanup

Remove all resources:

# Delete individual VPC
sudo ./vpcctl.py delete-vpc vpc1

# Or delete all VPCs
sudo ./vpcctl.py delete-vpc vpc1
sudo ./vpcctl.py delete-vpc vpc2

The delete operation automatically removes:

  • All network namespaces
  • veth pairs
  • Bridge interfaces
  • iptables NAT rules
  • Routing table entries

Troubleshooting

Issue: Cannot ping between subnets

Solution: Check that IP forwarding is enabled:

sysctl net.ipv4.ip_forward
# Should return: net.ipv4.ip_forward = 1

Issue: No internet access from public subnet

Solution:

  1. Verify the interface name: ip link show
  2. Update --interface parameter to match your actual internet interface
  3. Check NAT rules: sudo iptables -t nat -L -n -v

Issue: "Operation not permitted" errors

Solution: All commands must run with sudo or as root user

Issue: Namespace already exists

Solution: Clean up existing namespaces:

sudo ip netns del vpc1-public
# Or delete the entire VPC
sudo ./vpcctl.py delete-vpc vpc1

Advanced Topics

Custom NAT Rules

For more complex NAT scenarios:

# Port forwarding from host to namespace
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 10.0.1.1:80

VPC Traffic Shaping

Limit bandwidth on veth interface:

sudo tc qdisc add dev veth-vpc1-public root tbf \
  rate 1mbit burst 32kbit latency 400ms

Multi-tenancy

Create isolated VPCs for different users/projects using unique naming prefixes.
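For example, per-team VPCs with non-overlapping CIDRs (the names here are illustrative):

sudo ./vpcctl.py create-vpc teamA-dev 10.10.0.0/16
sudo ./vpcctl.py create-vpc teamB-dev 10.20.0.0/16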

Real-World Use Cases

  1. Development Environment: Create isolated networks for testing microservices
  2. CI/CD Pipelines: Spin up temporary network environments for integration tests
  3. Learning Platform: Understand cloud networking without cloud costs
  4. Network Security Testing: Practice firewall configurations and network segmentation

Key Takeaways

  • VPCs are built on standard Linux networking primitives
  • Network namespaces provide complete network isolation
  • Bridges act as virtual switches/routers
  • iptables implements NAT and security policies
  • VPC peering requires explicit routing configuration

Conclusion

You've successfully built a fully functional VPC implementation on Linux! This project demonstrates the core concepts behind cloud networking and gives you hands-on experience with Linux networking tools.

The same principles apply to production cloud environments, just at a much larger scale with additional features like:

  • High availability and redundancy
  • Load balancing
  • VPN connectivity
  • Flow logs and monitoring
  • Service mesh integration

Resources

GitHub Repository

Full code available at: [Your GitHub Repo URL]

Have questions or suggestions? Feel free to open an issue on GitHub!

How macOS, Linux, and Windows detect file changes (and why it's hard to catch them)

2025-11-12 18:10:36

File watching seems simple on the surface: "tell me when a file changes." But the reality is three completely different APIs with three fundamentally different philosophies about how computers should monitor file systems.

The three main approaches

I recently spent weeks building FSWatcher, a cross-platform file watcher in Go, and the journey taught me that understanding file watching means understanding how each operating system approaches the problems differently.

🍎 macOS: FSEvents (Directory-first)

Apple's philosophy is to monitor directory trees, rather than individual files.

// You say: "watch /Users/me/fswatcher"
// macOS says: "OK, I'll tell you when ANYTHING in that tree changes"

Pros

  • Low CPU usage
  • Efficient for large directories
  • Event-driven (no polling)

Cons

  • Gives you directory-level info, not file-level
  • Can flood you with redundant events
  • You must filter what you actually care about

Example event

//Event: /Users/me/project/src changed
// You have to figure out WHAT changed in /src

🐧 Linux: inotify (File-First)

Linux's philosophy: granular control over specific files and directories.

// You say: "watch /home/me/fswatcher/main.go"
// Linux says: "OK, I'll tell you exactly what happens to that file"

Pros

  • Precise, file-level events
  • You know exactly what changed
  • Low-level control

Cons

  • Each watch = one file descriptor (limited resource)
  • Easy to hit system limits on large projects
  • More prone to event flooding

Example event

// Event: /home/me/fswatcher/main.go MODIFIED
// Event: /home/me/fswatcher/main.go ATTRIBUTES_CHANGED
// Event: /home/me/fswatcher/main.go CLOSE_WRITE
// Same file save = 3 events

🪟 Windows: ReadDirectoryChangesW (Async-first)

Windows' philosophy: asynchronous I/O with overlapping operations.

// You say: "watch C:\project and give me async notifications"
// Windows says: "I'll buffer changes and notify you asynchronously"

Pros

  • Fast asynchronous I/O
  • Efficient buffering
  • Scales well

Cons

  • Requires careful buffer management
  • Can lose events if the buffer overflows
  • Complex synchronization needed

Example event

// Event: C:\project\main.go MODIFIED (buffered)
// Event: C:\project\test.go CREATED (buffered)
// Events may be batched by Windows

The real challenges

Challenge 1: Event Inconsistency
Same action (save a file) → different events per OS:

// macOS
Event: /project changed (directory-level)

// Linux  
Event: file.go MODIFIED
Event: file.go ATTRIB
Event: file.go CLOSE_WRITE

// Windows
Event: file.go MODIFIED (buffered)

Challenge 2: Editor Spam
Modern editors (VSCode, GoLand) don't just save once:

1. Create temp file (.file.go.tmp)
2. Write content to temp
3. Delete original
4. Rename temp to original
5. Update attributes
6. Flush buffers

That's 6+ events for ONE save operation!

Challenge 3: Bulk Operations
When you run git checkout, thousands of files change instantly:
git checkout main

# 10,000 files changed
# = 10,000+ file system events in ~1 second

Your watcher must handle this flood without crashing.

A unified pipeline to solve inconsistency

In FSWatcher, I built a pipeline that normalizes all these differences:

┌─────────────┐
│  OS Events  │ (platform-specific)
└──────┬──────┘
       │
       ▼
┌─────────────┐
│  Normalize  │ (consistent Event struct)
└──────┬──────┘
       │
       ▼
┌─────────────┐
│  Debounce   │ (merge rapid duplicates)
└──────┬──────┘
       │
       ▼
┌─────────────┐
│    Batch    │ (group related changes)
└──────┬──────┘
       │
       ▼
┌─────────────┐
│   Filter    │ (regex include/exclude)
└──────┬──────┘
       │
       ▼
┌─────────────┐
│ Clean Event │ (to consumer)
└─────────────┘

Debouncing in action

// Without debouncing:
Event: main.go changed at 10:00:00.100
Event: main.go changed at 10:00:00.150
Event: main.go changed at 10:00:00.200
Event: main.go changed at 10:00:00.250
Event: main.go changed at 10:00:00.300

// With debouncing (300ms window):
Event: main.go changed at 10:00:00.300 (final)
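A minimal, library-agnostic sketch of that idea in Go (the Event struct, channel size, and 300 ms window are assumptions for illustration, not FSWatcher's real types):

package main

import (
    "fmt"
    "sync"
    "time"
)

// Event is a simplified stand-in for a file-system event.
type Event struct {
    Path string
    Op   string
}

// Debouncer forwards only the last event seen for a path once the
// window elapses without new activity on that path.
type Debouncer struct {
    mu     sync.Mutex
    window time.Duration
    timers map[string]*time.Timer
    out    chan Event
}

func NewDebouncer(window time.Duration) *Debouncer {
    return &Debouncer{
        window: window,
        timers: make(map[string]*time.Timer),
        out:    make(chan Event, 64),
    }
}

// Add (re)starts the timer for the event's path; earlier events in the
// same burst are dropped because their timers get stopped.
func (d *Debouncer) Add(e Event) {
    d.mu.Lock()
    defer d.mu.Unlock()
    if t, ok := d.timers[e.Path]; ok {
        t.Stop()
    }
    d.timers[e.Path] = time.AfterFunc(d.window, func() {
        d.mu.Lock()
        delete(d.timers, e.Path)
        d.mu.Unlock()
        d.out <- e
    })
}

func (d *Debouncer) Events() <-chan Event { return d.out }

func main() {
    d := NewDebouncer(300 * time.Millisecond)
    // Simulate the five rapid saves from the example above.
    for i := 0; i < 5; i++ {
        d.Add(Event{Path: "main.go", Op: "WRITE"})
        time.Sleep(50 * time.Millisecond)
    }
    fmt.Println(<-d.Events()) // one event instead of five
}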

Batching in action

// Without batching:
10 separate event notifications

// With batching:
Batch: []Event{10 events} (one notification)

Real example

Here's a hot-reload system using FSWatcher:

go get github.com/sgtdi/fswatcher

package main

import (
    "context"
    "fmt"

    "github.com/sgtdi/fswatcher"
)

func main() {
    // Only watch .go files and ignore .go files under test dir
    fsw, _ := fswatcher.New(
        fswatcher.WithIncRegex([]string{`\.go$`}),
        fswatcher.WithExcRegex([]string{`test/.*\.go$`}),
    )

    ctx := context.Background()
    go fsw.Watch(ctx)

    fmt.Println("Starting..")
    for e := range fsw.Events() {
        fmt.Println(e.String())
        fmt.Println("Changed..")
    }
}

I also wrote a detailed article on Medium about the implementation journey and lessons learned: Read the full story

Resources

FSWatcher
Apple FSEvents Documentation
Linux inotify man page
Windows ReadDirectoryChangesW