2025-11-12 18:21:23
Deploying a WordPress website doesn’t have to be manual or repetitive. By combining AWS EC2, Ansible, and the LEMP stack (Linux, Nginx, MySQL, PHP), you can automate the entire process — from server provisioning to SSL configuration — in one smooth run.
In this guide, I’ll walk you through how I built an automated WordPress deployment with Let’s Encrypt SSL, running on an Ubuntu 24.10 EC2 instance, all managed through Ansible playbooks.
Understanding the LEMP stack architecture
Setting up AWS EC2 for automation
Creating Ansible playbooks and roles
Automating WordPress installation and SSL setup
Troubleshooting common deployment issues
This project automates WordPress deployment on AWS using Ansible. It installs and configures:
Linux (Ubuntu 24.10)
Nginx as the web server
MySQL for the database
PHP for dynamic content
Let’s Encrypt SSL for HTTPS security
You’ll end up with a fully operational WordPress site, protected by firewall rules and accessible securely via HTTPS.
automated_wordpress_lemp_ssl_setup/
├── inventory
├── main-playbook.yaml
└── roles/
    ├── common/
    │   └── tasks/main.yaml
    ├── firewall/
    │   └── tasks/main.yaml
    ├── lemp/
    │   └── tasks/main.yaml
    ├── wordpress/
    │   └── tasks/main.yaml
    ├── ssl/
    │   └── tasks/main.yaml
    └── nginx/
        ├── tasks/main.yaml
        ├── handlers/main.yaml
        └── templates/wordpress_nginx.conf.j2
common → Updates and upgrades all system packages
firewall → Enables UFW and allows ports 22 (SSH), 80 (HTTP), 443 (HTTPS)
lemp → Installs Nginx, MySQL, PHP, and extensions
wordpress → Downloads WordPress, configures database and user
nginx → Configures Nginx for WordPress and reloads on change
ssl → Obtains Let’s Encrypt SSL certificate and sets HTTPS redirection
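For orientation, here is a minimal sketch of what a main-playbook.yaml wiring these roles together could look like (an illustrative sketch, not necessarily the exact playbook from the repository; the wordpress host group matches the inventory shown below):

```yaml
# main-playbook.yaml -- illustrative sketch of how the roles fit together
- name: Deploy WordPress on a LEMP stack with Let's Encrypt SSL
  hosts: wordpress
  become: true
  roles:
    - common      # update and upgrade system packages
    - firewall    # enable UFW and open ports 22, 80, 443
    - lemp        # install Nginx, MySQL, PHP and extensions
    - wordpress   # download WordPress, create the database and user
    - nginx       # deploy the WordPress server block, reload on change
    - ssl         # obtain the Let's Encrypt certificate via Certbot
```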
Once the domain (e.g., mywordpressite.zapto.org) is mapped to your EC2 public IP, Certbot automatically generates an SSL certificate and configures HTTPS redirection.
💡 Tip: Always ensure ports 80 and 443 are open in both AWS Security Group and UFW.
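The firewall role handles the UFW side for you; if you ever need to check or open the ports manually on the instance, the commands are:

```bash
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP, also needed for the Let's Encrypt challenge
sudo ufw allow 443/tcp   # HTTPS
sudo ufw enable
sudo ufw status verbose
```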
[wordpress]
<YOUR_EC2_PUBLIC_IP> ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/your-key.pem
Update domain and email inside roles/ssl/tasks/main.yaml.
Run the playbook:
ansible-playbook -i inventory main-playbook.yaml
Once the playbook finishes, open your site at:
- http://yourdomain.com
- https://yourdomain.com
Complete the WordPress setup and log in to your admin dashboard.
Nginx default page showing → Remove /etc/nginx/sites-enabled/default
Certbot “No such auth” error → Wait for DNS propagation, then re-run the playbook
WordPress database error → Check that the DB and user were created properly
SSL not working → Ensure the firewall and AWS SG allow ports 80/443
Always remove default Nginx config before adding new ones.
Make sure DNS A record points to your EC2 IP before running Certbot.
You can re-run the playbook safely — Ansible ensures idempotence.
Alan Varghese
🔗 GitHub
📧 [email protected]
By automating the entire deployment with Ansible, you’ve saved hours of manual setup and reduced human error. This approach is scalable, repeatable, and production-ready — ideal for DevOps portfolios or personal hosting environments.
2025-11-12 18:21:22
As developers, we live and breathe APIs. We use tools like Requestly to intercept, modify, and test our network requests. It's an indispensable part of the modern development workflow. But what about when you're working on a flight, in a coffee shop with spotty Wi-Fi, or handling highly sensitive API keys that you'd rather not have synced to a cloud?
This is where Requestly's Local Workspace feature changes the game.
I recently spent time exploring this feature for the "No Bugs November" challenge and was blown away by its simplicity and utility. It addresses a core need for privacy, speed, and offline access.
This post is a beginner-friendly guide to get you started with Requestly's Local Workspace, based on my own experience. We'll cover what it is, why you should use it, and how to set up your first local collection.
In short, it's a way to create, test, and save all your API requests and collections directly on your local machine.
Instead of syncing your data to Requestly's cloud servers, a local workspace saves everything into a folder on your computer. This might sound simple, but it unlocks some powerful benefits.
Because everything lives in a plain folder on your computer, you can simply git init it! You can commit your changes, create new branches for testing, and share your API collections with teammates via a standard Git workflow.

Okay, let's get our hands dirty. The process is incredibly straightforward.
Prerequisite: You'll need the Requestly Desktop App. This feature is not available on the browser extension alone.
In the Requestly app, look at the top-left corner. You'll see the name of your current workspace (it probably says "My Workspace" or your team's name). Click on it.
From the dropdown menu, select Create New Workspace.
Requestly will give you two options: "Cloud Workspace" (the default) and "Local Workspace." Select Local Workspace.
A dialog box will appear asking you to name your workspace and choose a location on your computer to save it.
And that's it! You'll be switched to a clean, new workspace. You can confirm you're in the right place by checking the workspace name in the top-left corner again.
Now that you're in your local workspace, the interface will feel very familiar. You can manage your API requests just as you would in a cloud workspace.
It's always a good practice to organize your requests.
Click the + icon in the left-hand sidebar to create a new collection. This creates a new "folder" to hold all related requests.
Then click the + icon next to it and choose "New HTTP Request." Try a simple GET request to https://api.github.com/users/requestly. Just like you'd expect, the response appears in the bottom panel, where you can inspect the Body, Headers, and Status code.
The best part? When you close and reopen the app, it's all still there—no sign-in or syncing required.
Here's the trick I mentioned. I navigated to the folder where I saved my workspace on my computer. Inside, I found a requestly.json file and other data.
I immediately ran:
```bash
git init
git add .
git commit -m "Initial API collection for User Service"
```
Now, my entire API testing setup is version-controlled. I can create a new branch (git checkout -b feature/new-endpoint) before I start testing a new feature, and if I mess up my requests, I can just reset. This is a massive win for team collaboration.
If you've ever been frustrated by cloud-sync delays or concerned about where your API keys are being stored, this feature is built for you.
Give it a try—it might just become your new default.
2025-11-12 18:20:59
Hey dev.to community! This is my first post here, and I'm excited to jump in. As a full-stack developer and AI enthusiast, I've been experimenting a lot with AI in my coding workflow lately. I originally shared some of these thoughts on LinkedIn after a bit of a hiatus, but I figured this crowd would vibe with it too. Let's dive into what I've learned about "vibe coding" and why I think AI tools are ready for a team-level upgrade.
The Vibe Coding Revelation
I've done AI-assisted projects before, but here's what I'm realizing: the more you use AI, the better you get at using it. It's not just about crafting perfect prompts anymore. It's about understanding structure, knowing how to break things down, and letting AI handle what it does best while you architect the vision.
I used to get back absolute spaghetti—500+ line files that felt personally offended by my existence. But now? I split everything cleanly. Auth lives in one module. UI components stay separate. Data flow has its own domain. Suddenly, AI makes sense of each piece. The difference is night and day.
The Infrastructure Gap
But here's what keeps nagging at me: the tools themselves need to evolve.
We're using AI like it's 2023—solo workflows, no version control, no real collaboration. Meanwhile, we've had Microsoft 365, GitHub, Figma, and other enterprise tools built for teams for years. AI is already embedded in the workplace. Developers, designers, writers, analysts—we're all using it daily. So why are we treating it like a personal assistant instead of a team platform?
What Enterprise AI Could Look Like
Imagine shared AI projects with teammates, version history you can roll back, auto-debugging that learns from your codebase, collaborative prompts with branching and merging, and real permissions and security for enterprise use.
Not just "Pro plans" with extra features. True enterprise editions—company-grade AI platforms built for how teams work, with the governance, security, and collaboration features modern organizations require.
A Workflow Revolution
If you're still using AI raw without breaking your workflow into modular pieces, try it. Split your requests. Separate concerns. Let each conversation focus on one thing. You'll be shocked how much cleaner the output becomes.
But beyond individual workflow optimization, platforms need to catch up. AI-as-a-service for teams isn't just nice-to-have; it's the next logical step.
The Open Question
Folks like Andrej Karpathy and Lex Fridman live this daily and know the landscape inside out. Real talk: when are we getting enterprise-grade AI collaboration tools?
The technology is ready. The demand is there. The workflows are forming organically. Someone just needs to build the infrastructure.
Who's working on this? And if no one is, why not?
What do you think, dev.to? Have you tried vibe coding with AI? What's your take on enterprise features for these tools? Drop your thoughts in the comments—let's discuss!
2025-11-12 18:14:58
When building UI, we often rely on icons or visual cues that don’t include visible text. While that might look clean, it can leave screen reader users guessing.
aria-labelledby is one of the techniques we can use to give clear context to assistive technologies.
What is aria-labelledby?
aria-labelledby lets you reference visible text elsewhere in the DOM to create an accessible name for an element.
It basically tells assistive technology: “Use this other element’s text to label me.”
It’s commonly used when the text that explains an element is located outside the element itself.
<h2 id="dialog-title">Edit Profile</h2>
<button aria-labelledby="dialog-title">
✏️
</button>
Here, the button doesn’t have its own text label, but because it references dialog-title, assistive tech will announce “Edit Profile.”
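aria-labelledby also accepts a space-separated list of IDs, and the accessible name is built by concatenating the referenced texts in order. For example:

```html
<h2 id="dialog-title">Edit Profile</h2>
<span id="save-label">Save changes</span>

<!-- Announced as "Edit Profile Save changes" -->
<button aria-labelledby="dialog-title save-label">💾</button>
```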
Sometimes we have elements, especially icon-only controls, that need textual context.
There are several techniques:
aria-label="Edit profile"
<button>Edit Profile</button> (preferred)
title="Edit profile"
aria-labelledby="id"
However, aria-labelledby has the highest priority, and if used, its referenced text will always take precedence over any other naming source.
1) Create an element with descriptive text:
<h2 id="dialog-title">Edit profile</h2>
2) Reference its ID using aria-labelledby on the target element:
<button aria-labelledby="dialog-title">✏️</button>
And that’s it.
aria-labelledby has the highest priority among naming sources
→ It will override aria-label, title, and even visible text.

<button aria-labelledby="label"></button>
<span id="label" hidden>Example</span>

In the example above, the span is hidden, yet the button still gets the accessible name "Example": elements referenced by aria-labelledby contribute their text even when they are hidden. What does break the name is referencing an id that doesn't exist, which leaves this button with no accessible name at all ❌.
✅ As a rule of thumb, use aria-labelledby when the text that should label an element already exists elsewhere in the DOM, such as a heading, a dialog title, or a nearby caption.
aria-labelledby is a powerful tool for providing context, especially when UI patterns scatter text visually.
Just remember: it takes precedence over every other naming source, so the text it references has to genuinely describe the control. When used well, it helps assistive tech users understand your UI as clearly as everyone else.
2025-11-12 18:13:37
Ever wondered how cloud providers like AWS, GCP, or Azure implement Virtual Private Clouds (VPCs) under the hood? In this comprehensive guide, we'll recreate VPC fundamentals entirely on Linux using native networking primitives like network namespaces, veth pairs, bridges, and iptables.
By the end of this tutorial, you'll have built a fully functional mini-VPC environment supporting:
- Isolated VPCs, each with its own CIDR range and a bridge acting as the VPC router
- Public and private subnets implemented as network namespaces
- NAT-based internet access for public subnets, while private subnets stay internal
- VPC peering for controlled cross-VPC communication
- Subnet-level firewall policies defined in JSON
┌─────────────────────────────────────────────────────────┐
│ Host OS │
│ │
│ ┌──────────────────────────────────────────────────┐ │
│ │ VPC 1 │ │
│ │ │ │
│ │ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Public │ │ Private │ │ │
│ │ │ Subnet │◄───────►│ Subnet │ │ │
│ │ │ (Namespace) │ │ (Namespace) │ │ │
│ │ │ 10.0.1.0/24 │ │ 10.0.2.0/24 │ │ │
│ │ └──────┬───────┘ └──────┬───────┘ │ │
│ │ │ │ │ │
│ │ └────────┬───────────────┘ │ │
│ │ │ │ │
│ │ ┌──────▼──────┐ │ │
│ │ │ Bridge │ │ │
│ │ │ (VPC Router)│ │ │
│ │ └──────┬──────┘ │ │
│ └──────────────────┼──────────────────────────────┘ │
│ │ │
│ │ NAT │
│ ▼ │
│ ┌─────────────┐ │
│ │ eth0 (WAN) │ │
│ └─────────────┘ │
└─────────────────────────────────────────────────────────┘
Create a project directory and the main CLI tool:
mkdir vpc-project
cd vpc-project
Create vpcctl.py with the implementation (see the complete code in the GitHub repository).
Make it executable:
chmod +x vpcctl.py
Our CLI tool provides these commands:
# Create a VPC
sudo ./vpcctl.py create-vpc <name> <cidr> [--interface eth0]
# Add subnet to VPC
sudo ./vpcctl.py add-subnet <vpc-name> <subnet-name> <cidr> [--type public|private]
# Peer two VPCs
sudo ./vpcctl.py peer <vpc1> <vpc2>
# Apply firewall rules
sudo ./vpcctl.py apply-firewall <vpc> <subnet> <policy.json>
# List all VPCs
sudo ./vpcctl.py list
# Delete VPC
sudo ./vpcctl.py delete-vpc <name>
Let's create a VPC with CIDR 10.0.0.0/16:
sudo ./vpcctl.py create-vpc vpc1 10.0.0.0/16 --interface eth0
What happens behind the scenes:
- A Linux bridge named br-vpc1 is created to act as the VPC router
- IP forwarding and NAT rules are set up so traffic can leave through the specified interface (--interface)
- The VPC metadata is saved to ~/.vpcctl/vpcs.json
Verify the bridge was created:
ip link show br-vpc1
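Conceptually, the tool automates the kind of plumbing you could do by hand. Here is a hedged sketch of the roughly equivalent manual commands (the actual vpcctl implementation may differ in details):

```bash
# Create the VPC bridge and bring it up
sudo ip link add br-vpc1 type bridge
sudo ip link set br-vpc1 up

# Allow the host to route traffic between subnets and out to the internet
sudo sysctl -w net.ipv4.ip_forward=1

# NAT outbound traffic from the VPC CIDR through the internet-facing interface
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
```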
Add a public subnet (with internet access):
sudo ./vpcctl.py add-subnet vpc1 public 10.0.1.0/24 --type public
Add a private subnet (isolated from internet):
sudo ./vpcctl.py add-subnet vpc1 private 10.0.2.0/24 --type private
What happens:
- A dedicated network namespace is created for the subnet (e.g., vpc1-public)
- A veth pair connects the namespace to the VPC bridge, and an address from the subnet CIDR is assigned
- Public subnets get a default route out through the NAT gateway; private subnets do not

List the created namespaces:
ip netns list
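As with VPC creation, here is a hedged sketch of the manual steps a public subnet roughly corresponds to (interface names and the gateway address are illustrative assumptions, not necessarily what vpcctl uses):

```bash
# Create the subnet namespace
sudo ip netns add vpc1-public

# Create a veth pair: one end attaches to the VPC bridge, the other moves into the namespace
sudo ip link add veth-vpc1-public type veth peer name veth-ns-public
sudo ip link set veth-vpc1-public master br-vpc1 up
sudo ip link set veth-ns-public netns vpc1-public

# Assign an address from the subnet CIDR and bring the interfaces up
sudo ip netns exec vpc1-public ip addr add 10.0.1.1/24 dev veth-ns-public
sudo ip netns exec vpc1-public ip link set veth-ns-public up
sudo ip netns exec vpc1-public ip link set lo up

# For a public subnet, route the rest of the VPC and the default route via the bridge gateway
sudo ip netns exec vpc1-public ip route add 10.0.0.0/16 dev veth-ns-public
sudo ip netns exec vpc1-public ip route add default via 10.0.0.1
```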
Let's deploy simple HTTP servers to test connectivity:
In public subnet:
# Start HTTP server in public subnet namespace
sudo ip netns exec vpc1-public python3 -m http.server 8080 &
In private subnet:
# Start HTTP server in private subnet namespace
sudo ip netns exec vpc1-private python3 -m http.server 8081 &
Test 1: Inter-subnet communication within VPC
# From private subnet, ping public subnet
sudo ip netns exec vpc1-private ping -c 3 10.0.1.1
✅ Expected: Success - subnets in same VPC can communicate
Test 2: Internet access from public subnet
# Public subnet should reach internet via NAT
sudo ip netns exec vpc1-public ping -c 3 8.8.8.8
✅ Expected: Success - NAT gateway allows outbound traffic
Test 3: Internet access from private subnet
# Private subnet should NOT have internet access
sudo ip netns exec vpc1-private ping -c 3 8.8.8.8
❌ Expected: Failure or timeout - private subnet is isolated
Test 4: HTTP access
# Access HTTP server in public subnet
sudo ip netns exec vpc1-private curl http://10.0.1.1:8080
✅ Expected: Success - receives HTTP response
Create a second VPC:
sudo ./vpcctl.py create-vpc vpc2 172.16.0.0/16
sudo ./vpcctl.py add-subnet vpc2 web 172.16.1.0/24 --type public
Test isolation:
# Try to ping vpc2 from vpc1
sudo ip netns exec vpc1-public ping -c 3 172.16.1.1
❌ Expected: Failure - VPCs are isolated by default
Allow controlled communication between VPCs:
sudo ./vpcctl.py peer vpc1 vpc2
What happens: routes are added for each VPC's CIDR so traffic can flow between the two bridges, turning the otherwise isolated networks into peers.
Test cross-VPC communication:
# Now vpc1 can reach vpc2
sudo ip netns exec vpc1-public ping -c 3 172.16.1.1
✅ Expected: Success - peering enables cross-VPC routing
Create a security policy file firewall-policy.json:
{
"subnet": "10.0.1.0/24",
"ingress": [
{"port": 8080, "protocol": "tcp", "action": "allow"},
{"port": 22, "protocol": "tcp", "action": "deny"},
{"port": 443, "protocol": "tcp", "action": "allow"}
]
}
Apply the policy:
sudo ./vpcctl.py apply-firewall vpc1 public firewall-policy.json
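Conceptually, a policy like this translates into iptables rules inside the subnet's namespace. A hedged sketch of what the generated rules might look like (the actual chains and rule order vpcctl uses may differ):

```bash
sudo ip netns exec vpc1-public iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
sudo ip netns exec vpc1-public iptables -A INPUT -p tcp --dport 443 -j ACCEPT
sudo ip netns exec vpc1-public iptables -A INPUT -p tcp --dport 22 -j DROP
```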
Test the rules:
# HTTP on port 8080 should work
sudo ip netns exec vpc1-private curl http://10.0.1.1:8080
# SSH on port 22 should be blocked
sudo ip netns exec vpc1-private nc -zv 10.0.1.1 22
View routing table in namespace:
sudo ip netns exec vpc1-public ip route
View iptables rules:
sudo iptables -t nat -L -n -v
sudo ip netns exec vpc1-public iptables -L -n -v
Check bridge connections:
bridge link show br-vpc1
Monitor network traffic:
# Capture traffic on bridge
sudo tcpdump -i br-vpc1 -n
# Capture traffic in namespace
sudo ip netns exec vpc1-public tcpdump -i veth-ns-public -n
Remove all resources:
# Delete individual VPC
sudo ./vpcctl.py delete-vpc vpc1
# Or delete all VPCs
sudo ./vpcctl.py delete-vpc vpc1
sudo ./vpcctl.py delete-vpc vpc2
The delete operation automatically removes the subnet namespaces, their veth pairs, the VPC bridge, the associated NAT/iptables rules, and the VPC's entry in ~/.vpcctl/vpcs.json.
Solution: Check that IP forwarding is enabled:
sysctl net.ipv4.ip_forward
# Should return: net.ipv4.ip_forward = 1
Solution:
- Check your interface name with ip link show
- Set the --interface parameter to match your actual internet-facing interface
- Verify the NAT rules with sudo iptables -t nat -L -n -v
Solution: All commands must be run with sudo or as the root user
Solution: Clean up existing namespaces:
sudo ip netns del vpc1-public
# Or delete the entire VPC
sudo ./vpcctl.py delete-vpc vpc1
For more complex NAT scenarios:
# Port forwarding from host to namespace
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 \
-j DNAT --to-destination 10.0.1.1:80
Limit bandwidth on veth interface:
sudo tc qdisc add dev veth-vpc1-public root tbf \
rate 1mbit burst 32kbit latency 400ms
Create isolated VPCs for different users/projects using unique naming prefixes.
You've successfully built a fully functional VPC implementation on Linux! This project demonstrates the core concepts behind cloud networking and gives you hands-on experience with Linux networking tools.
The same principles apply to production cloud environments, just at a much larger scale and with many additional features layered on top.
Full code available at: [Your GitHub Repo URL]
Have questions or suggestions? Feel free to open an issue on GitHub!
2025-11-12 18:10:36
File watching seems simple on the surface: "tell me when a file changes." But the reality is three completely different APIs with three fundamentally different philosophies about how computers should monitor file systems.
I recently spent weeks building FSWatcher, a cross-platform file watcher in Go, and the journey taught me that understanding file watching means understanding how each operating system approaches the problems differently.
Apple's philosophy is to monitor directory trees, rather than individual files.
// You say: "watch /Users/me/fswatcher"
// macOS says: "OK, I'll tell you when ANYTHING in that tree changes"
Pros
Cons
Example event
//Event: /Users/me/project/src changed
// You have to figure out WHAT changed in /src
Linux's philosophy: granular control over specific files and directories.
// You say: "watch /home/me/fswatcher/main.go"
// Linux says: "OK, I'll tell you exactly what happens to that file"
Pros
Cons
Example event
// Event: /home/me/fswatcher/main.go MODIFIED
// Event: /home/me/fswatcher/main.go ATTRIBUTES_CHANGED
// Event: /home/me/fswatcher/main.go CLOSE_WRITE
// Same file save = 3 events
Windows philosophy: asynchronous I/O with overlapping operations.
// You say: "watch C:\project and give me async notifications"
// Windows says: "I'll buffer changes and notify you asynchronously"
Pros
Cons
Example event
// Event: C:\project\main.go MODIFIED (buffered)
// Event: C:\project\test.go CREATED (buffered)
// Events may be batched by Windows
Challenge 1: Event Inconsistency
Same action (save a file) → different events per OS:
// macOS
Event: /project changed (directory-level)
// Linux
Event: file.go MODIFIED
Event: file.go ATTRIB
Event: file.go CLOSE_WRITE
// Windows
Event: file.go MODIFIED (buffered)
Challenge 2: Editor Spam
Modern editors (VSCode, GoLand) don't just save once:
1. Create temp file (.file.go.tmp)
2. Write content to temp
3. Delete original
4. Rename temp to original
5. Update attributes
6. Flush buffers
That's 6+ events for ONE save operation!
Challenge 3: Bulk Operations
When you run git checkout, thousands of files change instantly:
git checkout main
# 10,000 files changed
# = 10,000+ file system events in ~1 second
Your watcher must handle this flood without crashing.
In FSWatcher, I built a pipeline that normalizes all these differences:
┌─────────────┐
│ OS Events │ (platform-specific)
└──────┬──────┘
│
▼
┌─────────────┐
│ Normalize │ (consistent Event struct)
└──────┬──────┘
│
▼
┌─────────────┐
│ Debounce │ (merge rapid duplicates)
└──────┬──────┘
│
▼
┌─────────────┐
│ Batch │ (group related changes)
└──────┬──────┘
│
▼
┌─────────────┐
│ Filter │ (regex include/exclude)
└──────┬──────┘
│
▼
┌─────────────┐
│ Clean Event │ (to consumer)
└─────────────┘
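Every stage after the first works on a single normalized event type. As a rough illustration of what that shape might look like (field and type names here are simplified assumptions, not FSWatcher's actual API):

```go
package watcher

import "time"

// Op is a normalized operation, independent of which OS API produced it.
type Op uint8

const (
	Create Op = iota
	Modify
	Remove
	Rename
)

// Event is what the pipeline stages pass around after normalization.
type Event struct {
	Path string    // absolute path of the file or directory
	Op   Op        // what happened, mapped from FSEvents/inotify/ReadDirectoryChangesW
	Time time.Time // when the event was observed
}
```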
Debouncing in action
// Without debouncing:
Event: main.go changed at 10:00:00.100
Event: main.go changed at 10:00:00.150
Event: main.go changed at 10:00:00.200
Event: main.go changed at 10:00:00.250
Event: main.go changed at 10:00:00.300
// With debouncing (300ms window):
Event: main.go changed at 10:00:00.300 (final)
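A debouncer like this needs little more than a map of per-path timers. Here is a minimal, self-contained sketch (not FSWatcher's actual implementation):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// debouncer emits a path only once it has been quiet for the given window.
type debouncer struct {
	mu     sync.Mutex
	window time.Duration
	timers map[string]*time.Timer
	out    chan string
}

func newDebouncer(window time.Duration) *debouncer {
	return &debouncer{
		window: window,
		timers: make(map[string]*time.Timer),
		out:    make(chan string, 64),
	}
}

// hit records a raw event; only the last event inside the window survives.
func (d *debouncer) hit(path string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if t, ok := d.timers[path]; ok {
		t.Stop() // a newer event arrived, discard the pending emission
	}
	d.timers[path] = time.AfterFunc(d.window, func() {
		d.mu.Lock()
		delete(d.timers, path)
		d.mu.Unlock()
		d.out <- path
	})
}

func main() {
	d := newDebouncer(300 * time.Millisecond)

	// Simulate the burst of events an editor produces for a single save.
	for i := 0; i < 5; i++ {
		d.hit("main.go")
		time.Sleep(50 * time.Millisecond)
	}

	fmt.Println("debounced:", <-d.out) // fires once, ~300ms after the last hit
}
```

Batching works the same way, except the timer flushes a slice of accumulated events instead of a single path.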
Batching in action
// Without batching:
10 separate event notifications
// With batching:
Batch: []Event{10 events} (one notification)
Here's a hot-reload system using FSWatcher:
go get github.com/sgtdi/fswatcher
package main

import (
	"context"
	"fmt"

	"github.com/sgtdi/fswatcher"
)

func main() {
	// Only watch .go files and ignore .go files under the test dir
	fsw, _ := fswatcher.New(
		fswatcher.WithIncRegex([]string{`\.go$`}),
		fswatcher.WithExcRegex([]string{`test/.*\.go$`}),
	)

	ctx := context.Background()
	go fsw.Watch(ctx)

	fmt.Println("Starting..")
	for e := range fsw.Events() {
		fmt.Println(e.String())
		fmt.Println("Changed..")
	}
}
I also wrote a detailed article on Medium about the implementation journey and lessons learned: Read the full story
FSWatcher
Apple FSEvents Documentation
Linux inotify man page
Windows ReadDirectoryChangesW