2025-11-11 00:22:16
Over the past few months, I’ve been working on one of the most exciting and challenging cloud migration projects of my career: a large-scale transformation initiative between xyz (a global technology consulting firm) and abc (a major telecommunications company).
The goal of this project is simple but powerful: to move dozens of critical enterprise applications from on-premises data centers into a secure, scalable, and automated AWS environment without disrupting existing business operations. This is part of a broader modernization program that will allow abc to run its workloads faster, more securely, and more efficiently in the cloud.
The migration involves a mix of rehosting, redeploying, and refactoring applications using AWS native tools, Terraform for Infrastructure as Code (IaC), and automated CI/CD pipelines.
Our scope includes:
- Designing and deploying AWS environments for multiple business units.
- Setting up VPCs, subnets, IAM roles, security groups, load balancers, and monitoring tools.
- Implementing cross-account role assumption for secure access and governance (see the sketch after this list).
- Coordinating closely with both the cloud engineering and application teams to ensure smooth transitions.
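As a rough illustration of that cross-account pattern, here is a minimal boto3 sketch; the account ID, role name, and session name are hypothetical placeholders, not the project's actual values.

```python
import boto3

# create an STS client with the source account's credentials
sts = boto3.client("sts")

# assume a role in the target workload account
# (the account ID and role name below are hypothetical placeholders)
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/MigrationDeployRole",
    RoleSessionName="migration-deploy",
)
creds = response["Credentials"]

# use the temporary credentials to operate inside the target account
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([vpc["VpcId"] for vpc in ec2.describe_vpcs()["Vpcs"]])
```

In practice the same pattern sits behind Terraform's provider `assume_role` blocks: a central account holds the pipeline credentials, and each workload account exposes a narrowly scoped role it can assume.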
As of now, we’re working through a pipeline of over 30 applications scheduled for migration to AWS. Several have already been successfully redeployed and tested in non-production and production environments.
As a Cloud Engineer on this project, my role cuts across technical execution, documentation, and coordination. I’ve been directly involved in:
- Designing AWS architectures for multiple applications; defining network layout, IAM roles, and access controls.
- Collaborating on Terraform automation, including maintaining GitHub repositories and building modular scripts for consistent infrastructure deployment.
- Managing environment readiness, which includes server provisioning, IAM access configuration, and validating connectivity between on-prem databases and cloud servers.
- Supporting AWS MGN setup, ensuring source servers are correctly configured for migration replication.
- Contributing to migration design documents, including the logical architecture, deployment models, dependencies, and database connections.
- Coordinating with the application and security teams to manage firewall requests, DNS cutovers, and access provisioning.
- Successfully supported the AWS environment setup for multiple application migrations.
- Created and refined design documentation that clearly outlines migration strategy, deployment processes, and technical architecture.
- Validated IAM roles and cross-account access, ensuring secure provisioning in multi-account AWS setups.
- Helped establish a Terraform-based deployment workflow, integrating version control and infrastructure automation.
- Worked closely with the project leads to align AWS provisioning with organizational compliance standards.
- Gained practical experience in AWS MGN (Application Migration Service), Terraform automation, and enterprise migration governance.
Our team has completed the foundation setup: AWS accounts, networking, IAM, and automation frameworks are now in place. We’re currently focused on working through the remaining application pipeline.
This project is more than just moving workloads to AWS; it’s a complete cloud enablement initiative. I’ve learned the value of collaboration between cloud engineers, app owners, database administrators, and project managers, all working toward a shared goal of modernization.
Every day comes with new learning: understanding enterprise migration frameworks, managing IAM at scale, troubleshooting deployment issues, and optimizing Terraform code for real-world production environments.
This experience has strengthened my skills not only as a Cloud Engineer but as a problem solver and collaborator. From designing secure AWS architectures to supporting production rollouts, I’ve seen firsthand how cloud migration works at scale: the challenges, the coordination, and the satisfaction that comes when an application successfully goes live in the cloud.
We’re still moving forward, with many more applications to migrate in the coming weeks. And as we continue to build, automate, and refine, I’m proud to be part of a team that’s shaping the future of cloud transformation for one of the biggest enterprises in the industry.
2025-11-11 00:18:17
It is great to come back to sharing my thoughts on the next line of action for my career. At this moment, I want to go all in on data engineering, with a focus on mastery.
Data is as critical in modern times as it has always been. Some social media platforms, like LinkedIn and Instagram, have made it explicitly known that they will use your data for training if you are comfortable with it.
In this data challenge series, I will be going from the basics to the complex to grasp the full concept of what it is to be a data engineer, right from ingesting, storing, cleaning, and processing the data.
As the brief intro above suggests, data engineers are an integral part of a company: they maintain systems that ingest data from internal and external sources like databases and APIs, store this data for further processing, and clean and process it through a series of transformation steps.
To use Spark, you first need to have a Spark cluster. A cluster is a collection of computers running Spark software.
For a Spark application, the cluster consists of two components:
- A driver, which hosts the SparkSession and coordinates the work.
- Executors, which run on the worker nodes and perform the actual computation.
```bash
# install Spark on macOS with Homebrew
brew install apache-spark

# after installation, run:
pyspark

# install jupyter in the virtual env
pip install jupyter

# configure pyspark to launch JupyterLab when we start it
# by defining these two environment variables
export PYSPARK_DRIVER_PYTHON='jupyter'
export PYSPARK_DRIVER_PYTHON_OPTS='lab'

# run or start the development server in the venv
pyspark
```
For you to become better, you must practice. This demo uses Apache Spark, the data processing framework, and starts the development server in JupyterLab inside a virtual environment.
This documentation is a good starting guide to writing efficient Spark applications and understanding their functionality.
Now, let's create a SparkSession and read actual data from a CSV file. To follow along, download sample data from this website.
```python
from pyspark.sql import SparkSession

# the SparkSession is the entry point to the DataFrame API
spark = SparkSession \
    .builder \
    .appName('Read inside airbnb data') \
    .getOrCreate()

# the listings file is quoted CSV with embedded newlines,
# hence the quote/escape/multiLine options
listings = spark.read.csv('data/listings.csv.gz',
                          header=True,
                          inferSchema=True,
                          sep=',',
                          quote='"',
                          escape='"',
                          multiLine=True,
                          mode='PERMISSIVE')
```
```python
# print the inferred schema, then inspect each field
listings.printSchema()

for field in listings.schema:
    print(field)

# take a look at the description column
description = listings.select(listings.description)
description.show(20, truncate=False)
```
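Going a little further, here is a small example of the kind of cleaning and aggregation step this series is building toward. It is a sketch only: it assumes the Inside Airbnb file has `price` and `neighbourhood_cleansed` columns, and your snapshot's column names may differ.

```python
from pyspark.sql import functions as F

# strip "$" and "," from the price strings and cast to double
# (column names assume the standard Inside Airbnb schema)
cleaned = listings.withColumn(
    "price_num",
    F.regexp_replace(F.col("price"), "[$,]", "").cast("double"),
)

# average nightly price per neighbourhood, highest first
(cleaned
    .groupBy("neighbourhood_cleansed")
    .agg(F.avg("price_num").alias("avg_price"))
    .orderBy(F.desc("avg_price"))
    .show(10, truncate=False))
```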
The public repo is available here:
https://github.com/Terieyenike/data-engineering
If you want to connect and reach out, I am active on LinkedIn.
2025-11-11 00:07:04
Any song you've had on repeat lately?
Drop a YouTube, SoundCloud, Bandcamp or Spotify {% embed %} in the comments. Let's discover something new beyond what the algorithms are feeding us.
Any genre is welcome! Music taste is subjective, and there's no "good" or "bad" here. Share what you love and include why if you feel like it.
2025-11-11 00:05:00
So, you've decided to dive into the world of Home Assistant! Congratulations. You're on the path to building a truly private, powerful, and customized smart home that isn't locked into a single corporate ecosystem. But as you stand at the starting line, the sheer number of choices can feel overwhelming. What computer should you run it on? What are Zigbee and Z-Wave? Which smart devices should you even buy first?
Don't worry. As a home improvement expert, I know that the best projects start with a solid plan and the right materials. This guide will be your blueprint. We'll cut through the noise and walk you through selecting the perfect core hardware and your first few game-changing smart devices.
Before we even think about shopping, let's establish some foundational rules. Think of this as your safety check before a big project. Following these principles will save you immense frustration down the road.
Every great smart home needs a brain and senses. First, we'll choose the "brain" (the computer that runs Home Assistant), then we'll pick the "senses" (your first few devices).
This is the central computer that will run the Home Assistant software 24/7.
Before you add anything to your cart, take five minutes to answer these questions.
Once your packages arrive, the real fun begins. The process generally involves flashing Home Assistant onto your host machine, walking through the onboarding flow, and pairing your first devices.
Your first project? Create a simple automation. A great one is: "When motion is detected in the living room AND it's after sunset, turn on the lamp's smart plug." Accomplishing this first task is a hugely satisfying moment.
From there, the sky's the limit. You can expand with climate sensors, smart thermostats, leak detectors, energy monitoring, and so much more.
Jumping into Home Assistant is one of the most rewarding home improvement projects you can undertake. By starting with a solid hardware host, committing to local-control protocols like Zigbee, and choosing a few simple sensors and plugs, you set yourself up for success. You're not just buying gadgets; you're building a home that is smarter, more efficient, and truly yours. Welcome to the community!
2025-11-11 00:04:28
Last week, I ran into one of those tricky production bugs where everything looked fine, but Azure Front Door kept throwing a 504 Gateway Timeout.
And the real cause? A simple missing HTTP method. 😅
I had a typical production setup:
- A FastAPI app running on port 8000 as a background process (managed via systemd, so it auto-starts on reboot)
- Nginx in front of it, handling SSL and proxying requests

So the flow looked like this:

Frontend → Azure Front Door → Nginx → FastAPI (port 8000)
When I tested everything manually:
```bash
curl http://localhost:8000/health
```
✅ returned “healthy”
✅ app responded instantly
✅ SSL working fine via Nginx
✅ no errors in systemd logs
Still, Front Door said:
“No healthy backends available.”
And my frontend just sat there… until it failed with a 504 Gateway Timeout. 😩
While checking my logs, I noticed this repeating line every 30 seconds:
```
INFO: 127.0.0.1:44438 "HEAD /health HTTP/1.0" 405 Method Not Allowed
```
That was the “aha!” moment.
Azure Front Door wasn’t using GET at all — it was sending a HEAD /health request.
Here’s what Front Door was doing under the hood:
Every 30 seconds:
Front Door → HEAD /health → backend VM
But my FastAPI code looked like this:
@app.get("/health")
async def health():
return {"status": "healthy"}
So my app said:
“I don’t accept that method.” → 405 Method Not Allowed ❌
Front Door took that as:
“Backend is unhealthy.”
And started marking every VM as unhealthy.
HEAD /health → 405 ❌ → Backend unhealthy
All backends unhealthy → No route to send traffic
Front Door → Waits → Times out after 60s
Frontend → 504 Gateway Timeout
Meanwhile, when I ran a GET manually, everything looked healthy.
That’s what made this bug so sneaky.
I updated my FastAPI route to accept both GET and HEAD:
```python
from fastapi import FastAPI, Response

app = FastAPI()

# accept both GET and HEAD so health probes of either kind get a 200
@app.api_route("/health", methods=["GET", "HEAD"], include_in_schema=False)
async def health():
    return Response(content="healthy", status_code=200)
```
Now both checks pass:
```bash
curl http://localhost:8000/health     # GET → healthy ✅
curl -I http://localhost:8000/health  # HEAD → 200 OK ✅
```
And Azure Front Door is happy again 💙
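To keep this from regressing, a quick check with FastAPI's TestClient can pin the behavior down. A minimal sketch, assuming the app above lives in a hypothetical main.py:

```python
# minimal regression test for the probe fix; "main" is a hypothetical module name
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_health_get():
    assert client.get("/health").status_code == 200

def test_health_head():
    # Front Door-style probe: a 200 is all that matters here
    assert client.head("/health").status_code == 200
```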
HEAD is like a lightweight GET: it only retrieves headers, no body.
It’s faster and cheaper to run for hundreds of backend checks.
That’s why load balancers and proxies (Azure Front Door among them) often favor it for health probes.
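To see the difference yourself, here is a tiny sketch using Python's standard library against the local app (assuming it is still listening on port 8000):

```python
# compare GET and HEAD against the local health endpoint
import http.client

conn = http.client.HTTPConnection("localhost", 8000)

conn.request("GET", "/health")
resp = conn.getresponse()
print("GET ", resp.status, "body bytes:", len(resp.read()))   # body present

conn.request("HEAD", "/health")
resp = conn.getresponse()
print("HEAD", resp.status, "body bytes:", len(resp.read()))   # empty body

conn.close()
```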
| Problem | Explanation |
|---|---|
| Azure Front Door used `HEAD /health` | Industry-standard health probe |
| FastAPI accepted only `GET` | Returned 405 |
| Front Door marked backend unhealthy | ❌ |
| Frontend got 504 Gateway Timeout | 💥 |
| Fix | `@app.api_route("/health", methods=["GET", "HEAD"])` ✅ |
Front Door: "Knock knock, are you alive?" (HEAD)
FastAPI: "I only reply to phone calls (GET)" → 405
Front Door: "Okay, unhealthy then."
After the fix:
Front Door: "Knock knock?"
FastAPI: "Yep, alive and ready!" → 200 OK
Front Door: "Perfect, routing traffic now."
This one was a great reminder: sometimes, your system isn’t broken… it’s just misunderstanding the protocol.
A tiny missing method caused a full 504 outage.
But with one line of code, everything started flowing smoothly again. 🚀
2025-11-11 00:02:13
CinemaSins rips into Thunderbolts (a.k.a. The New Avengers) in true “Everything Wrong With” style, ticking off plot holes, janky CGI and character missteps—all in under 20 minutes—yet can’t help admitting the movie’s got a weird charm. Is it perfect? Far from it. Is it kinda fun? Totally.
For more sin-counting madness, head to their website, hop on Discord or Reddit, follow them on social, fill out their polls and even support the team on Patreon for extra behind-the-scenes treats.
Watch on YouTube