The Practical Developer

A constructive and inclusive social network for software developers.

Terraform remote state for multi-account AWS: complete setup

2026-03-10 15:14:06

Local state is a trap. Two engineers run apply simultaneously and state diverges. Here's the complete remote state setup.

Architecture

S3 Bucket (management account)
  project-alpha/prod/terraform.tfstate
  project-alpha/staging/terraform.tfstate

DynamoDB Table: terraform-state-locks (LockID key)

Bootstrap (run once per management account)

data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "state" {
  bucket = "my-org-terraform-state-${data.aws_caller_identity.current.account_id}"
}
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration { status = "Enabled" }
}
resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
resource "aws_s3_bucket_public_access_block" "state" {
  bucket                  = aws_s3_bucket.state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
resource "aws_dynamodb_table" "locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
  point_in_time_recovery { enabled = true }
}

Backend configuration per project

terraform {
  backend "s3" {
    bucket         = "my-org-terraform-state-111111111111"
    key            = "project-alpha/prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
    role_arn       = "arn:aws:iam::111111111111:role/terraform-state-access"
  }
}

Common pitfalls

  • Key collisions: use <project>/<environment>/terraform.tfstate consistently
  • Forgetting encrypt = true: bucket-level default encryption helps, but encrypt = true makes Terraform explicitly request server-side encryption on every state write
  • Lock table region mismatch: the backend's region setting applies to both the S3 bucket and the DynamoDB table, so keep them in the same region
  • No versioning: state corruption happens, and bucket versioning is your only rollback
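
The key convention in the first pitfall is easy to enforce mechanically. A minimal sketch of a helper a CI wrapper script might use (the function name and environment list are hypothetical, not from the article):

```python
# Hypothetical CI-wrapper helper: derive the backend key from project and
# environment so the <project>/<environment>/terraform.tfstate convention
# can't drift between teams.
ALLOWED_ENVS = {"prod", "staging", "dev"}

def state_key(project: str, environment: str) -> str:
    if environment not in ALLOWED_ENVS:
        raise ValueError(f"unknown environment: {environment}")
    return f"{project}/{environment}/terraform.tfstate"

print(state_key("project-alpha", "prod"))
# → project-alpha/prod/terraform.tfstate
```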

Step2Dev provisions a remote state backend automatically for every new project.

👉 step2dev.com

2 YOE Full‑Stack dev: how to choose between AI and Blockchain?

2026-03-10 15:13:53

I have ~2 years of experience as a software engineer working across full‑stack web and some mobile development.

I want to upskill for better growth and salary, and I’m currently stuck between focusing on Blockchain or AI.

For Blockchain, I have a rough idea of the roadmap (smart contracts, Solidity, web3, etc.).

For AI, I’m not clear what core skills I should build first (math, Python, ML, data, etc.) or which roles are realistic for someone with my background.

What I’d love advice on:

  • Given my experience, which path (AI or Blockchain) has better long‑term career prospects right now?
  • If I choose AI, what would be a concrete skill/learning roadmap for someone coming from full‑stack web dev?
  • Are there AI or Blockchain roles open to people without prior domain experience, as long as they can demonstrate projects/skills?

Any specific role names, tech stacks, or learning resources that fit a 2 YOE full‑stack dev would be really helpful. Thanks in advance!

Your app validates emails in production but not in dev. What else are you skipping?

2026-03-10 15:12:59

We all use mailpit or mailtrap in dev. Nobody sends real emails locally. That's been solved forever.

But email is the only thing we actually solved. Everything else? We just... skip it.

Email validation

In production your signup checks MX records, blocks disposable domains, all of it. In dev you type test@test.com and move on with your life.

Small problem: someone owns test.com. They receive your test data. That address shows up on 1,500+ blacklists because every developer on earth uses it.
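
One cheap guard: only allow placeholder addresses at domains the RFCs actually reserve. RFC 2606 reserves example.com/net/org and the .test, .example, .invalid, and .localhost TLDs, which can never be registered — unlike test.com. A minimal sketch (the function name is mine):

```python
# Dev-only guard: accept placeholder addresses only at RFC 2606 reserved
# domains, which no one can ever own -- unlike test.com.
RESERVED_TLDS = {"test", "example", "invalid", "localhost"}
RESERVED_DOMAINS = {"example.com", "example.net", "example.org"}

def is_safe_placeholder(email: str) -> bool:
    domain = email.rpartition("@")[2].lower()
    tld = domain.rpartition(".")[2]
    return domain in RESERVED_DOMAINS or tld in RESERVED_TLDS

print(is_safe_placeholder("dev@example.com"))  # → True
print(is_safe_placeholder("dev@test.com"))     # → False: someone owns this
```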

And MX validation can't even run properly in most dev setups: the DNS lookups fail offline, and they fail against fake domains. Symfony has an open issue about disabling checkMX in test environments for exactly this reason. So the one piece of validation that actually matters ships completely untested.

Remember when HBO Max sent "Integration Test Email #1" to all their subscribers? Yeah.

Regex

You know the workflow. regex101, three test strings, looks good, paste into code, commit, push. No unit test. Ever.

There's a library on GitHub called regex-tester. The whole pitch in the README is one line: "Because regular expressions are complex and error prone and you probably don't unit test them." That's it. That's the selling point. And honestly? Fair enough.
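
The fix costs about as much as the regex101 session itself: commit the test strings as assertions. A sketch with an illustrative pattern of my choosing (a US ZIP code) — the cases are exactly the strings you'd otherwise throw away:

```python
import re

# Illustrative pattern; the point is keeping the regex101 test strings
# as assertions instead of discarding them.
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

CASES = [
    ("12345", True),
    ("12345-6789", True),
    ("1234", False),       # too short
    ("12345-678", False),  # truncated +4 part
    ("12345\n", False),    # '$' lets re.match accept a trailing newline;
]                          # fullmatch doesn't

for text, expected in CASES:
    assert bool(ZIP_RE.fullmatch(text)) is expected, repr(text)
print("all regex cases pass")
```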

Character limits

120-char company_name column in your database. No maxlength on the form. Works great in dev because you always type "Test Company." Blows up in prod when someone pastes their full legal entity name.

Microsoft added verbose truncation warnings as a major feature in SQL Server 2019 because the old error was so useless nobody could debug it. Emojis cut SMS limits from 160 to 70 characters. Twitter's 280-char limit turned out to be a byte limit that breaks with multibyte characters. Entity Framework silently truncates without an error. This stuff is everywhere.
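
The character-vs-byte gap behind those limits is easy to demonstrate, and a byte-aware truncation helper avoids slicing an emoji in half. A minimal sketch (truncate_utf8 is my name, not a stdlib function):

```python
# Character count and byte count diverge as soon as non-ASCII appears --
# which is how a "280-character" limit turns out to be a byte limit.
s = "Acme Café 🚀"
print(len(s))                  # → 11 code points
print(len(s.encode("utf-8")))  # → 15 bytes: é is 2 bytes, the emoji is 4

def truncate_utf8(text: str, max_bytes: int) -> str:
    # Slice on bytes, then drop any partial character left at the cut point.
    return text.encode("utf-8")[:max_bytes].decode("utf-8", errors="ignore")

print(truncate_utf8("ab🚀", 3))  # → 'ab' (never half an emoji)
```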

CORS

Locally you run frontend and backend on the same origin. Or you throw Access-Control-Allow-Origin: * on everything. Or you install that browser extension. Or you launch Chrome with --disable-web-security.

Then you deploy to real subdomains and spend your afternoon googling CORS errors that didn't exist yesterday. I've seen senior devs lose half a day on this. There's just no clean way to test real cross-origin behavior locally.
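
For contrast, here is roughly what production CORS logic looks like, sketched as a plain function (the origins are hypothetical): echo the request origin back only when it's on an explicit allowlist. Note that browsers reject the * wildcard outright when credentials are involved, so the dev-time shortcut can't even ship.

```python
# Sketch of allowlist-based CORS: echo the origin back only when it's known.
ALLOWED_ORIGINS = {
    "https://app.example.com",
    "https://admin.example.com",
}

def cors_headers(request_origin: str) -> dict:
    if request_origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers: the browser blocks the response
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Vary": "Origin",  # caches must not reuse this across origins
        "Access-Control-Allow-Credentials": "true",
    }

print(cors_headers("https://app.example.com")["Access-Control-Allow-Origin"])
print(cors_headers("https://evil.example"))  # → {}
```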

SSL

curl -k
NODE_TLS_REJECT_UNAUTHORIZED=0
verify=False
CURLOPT_SSL_VERIFYPEER => false

We've all done it. "Temporary." Then it's in the Dockerfile. Then staging. Then someone copies that config to prod and now your app trusts every cert on the planet.
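
For the record, here is what those flags throw away. Python's default client context verifies both the certificate chain and the hostname, and the honest fix for local self-signed certs is trusting your own CA, not disabling verification (the cafile path below is hypothetical):

```python
import ssl

# What verify=False and friends actually discard: the default client
# context checks both the certificate chain and the hostname.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
print(ctx.check_hostname)                    # → True

# The right fix for self-signed local certs: trust your local CA explicitly.
# (Path is hypothetical.)
# ctx = ssl.create_default_context(cafile="/usr/local/share/ca/dev-root.pem")
```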

So what

The Twelve-Factor App told us about dev/prod parity in 2011. We're still not doing it.

Production runs real validation, real security, real constraints. Dev runs on vibes and test@test.com. The gap between those two is where your next production bug already lives.

The mailpit pattern works for more than email. Use tools that give real results but cost nothing and stay out of your way. I built some free utility APIs for exactly this (apixies.io) but honestly it doesn't matter which tool. It matters that you stop skipping the check entirely.

If it only runs in production, it's not a check. It's a hope.

What's the dumbest prod bug you've shipped that a simple dev check would've caught?

GPT-5.4 Just Dropped - Here's Everything Developers Need to Know

2026-03-10 15:11:46

OpenAI released GPT-5.4 on March 5, 2026. I've spent the past week testing it, building YouTube content about it, and digging into the benchmarks. Here's everything that matters for developers - no hype, just facts.

The TL;DR

GPT-5.4 is a unified model that combines the best of the GPT-5 series into one package. The headline features:

  • Native computer use - first general-purpose model with it baked in
  • 1M token context window (API/Codex)
  • Tool search - 47% fewer tokens for tool-heavy workflows
  • 83% on professional benchmarks (GDPval) - beats 83% of professionals across 44 occupations
  • 75% on OSWorld - beats human performance (72.4%) at desktop navigation

Pricing starts at $2.50/M input tokens, $10/M output tokens.

1. Native Computer Use - This Is the Big One

GPT-5.4 can operate a computer. Not through some bolted-on plugin - it's built into the model itself. It reads screenshots, clicks buttons, types text, navigates apps.

The benchmarks tell the story:

Benchmark                       GPT-5.2   GPT-5.4   Human
OSWorld-Verified (desktop)      47.3%     75.0%     72.4%
WebArena-Verified (browser)     65.4%     67.3%     -
Online-Mind2Web (screenshots)   70.9%*    92.8%     -
*ChatGPT Atlas Agent Mode

That OSWorld number is wild. GPT-5.4 is better than humans at navigating a desktop through screenshots and keyboard/mouse actions. A 27.7 percentage point jump from 5.2.

For developers, the computer-use tool is fully configurable - you can steer behaviour through developer messages and adjust safety/confirmation policies per your app's risk tolerance.

What This Actually Means

Think automated QA testing that actually sees your UI. Think RPA workflows that don't break when someone moves a button. Think AI agents that can fill out forms, run scripts, and navigate multi-step processes without you decomposing every click.

2. 1M Token Context Window

The API and Codex support up to 1 million tokens of context. Standard workflows use 272K, but when you need to process an entire codebase or a chain of 50 prior agent actions, the 1M option is there.

Caveats:

  • Requests over 272K count at 2x the normal rate
  • The 1M context is API/Codex only - ChatGPT stays at the standard window
  • Recall degrades at extreme lengths (79.3% at 128–256K on MRCR 8-needle)

Use it selectively. It's not "throw your whole repo in every request" - it's for tasks that genuinely need long-horizon planning.
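
As a rough budgeting sketch, assuming from the announcement's wording that an over-272K request is billed at twice the normal per-token rate (the exact mechanics are my assumption — check the pricing docs before relying on this):

```python
# Rough cost model for the long-context tier, under the assumption that a
# request over the 272K standard window is billed at 2x the per-token rate.
STANDARD_WINDOW = 272_000

def billed_tokens(prompt_tokens: int) -> int:
    multiplier = 2 if prompt_tokens > STANDARD_WINDOW else 1
    return prompt_tokens * multiplier

print(billed_tokens(200_000))  # → 200000 (standard rate)
print(billed_tokens(600_000))  # → 1200000 (2x: reach for 1M only when needed)
```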

3. Tool Search - The MCP Game Changer

This is the feature I'm most excited about as someone who builds MCP servers.

The problem: When you connect multiple MCP servers, all tool definitions get crammed into the prompt. 36 MCP servers = tens of thousands of tokens before you've even asked a question.

The solution: Tool search gives GPT-5.4 a lightweight tool index. It looks up full definitions only when needed, instead of preloading everything.

The result: 47% fewer tokens, same accuracy. Tested on 250 tasks from Scale's MCP Atlas benchmark with all 36 MCP servers enabled.

If you're building AI agents that connect to multiple tools/APIs, this alone justifies upgrading.
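
Here's my reconstruction of the idea — not OpenAI's implementation: keep only names and one-line descriptions resident in the prompt, and fetch a tool's full schema on demand when the model selects it.

```python
# Sketch of a lazy tool index: tiny summaries stay in the prompt,
# full JSON schemas are looked up only when a tool is actually chosen.
class ToolIndex:
    def __init__(self):
        self._summaries = {}  # name -> one-liner (always in the prompt)
        self._schemas = {}    # name -> full definition (loaded on demand)

    def register(self, name, summary, schema):
        self._summaries[name] = summary
        self._schemas[name] = schema

    def prompt_stub(self):
        # What gets preloaded: small, no matter how many servers you attach.
        return "\n".join(f"{n}: {s}" for n, s in sorted(self._summaries.items()))

    def lookup(self, name):
        # Full definition fetched only when the model selects the tool.
        return self._schemas[name]

idx = ToolIndex()
idx.register("weather.get", "Current weather for a city",
             {"type": "object", "properties": {"city": {"type": "string"}}})
idx.register("db.query", "Run a read-only SQL query",
             {"type": "object", "properties": {"sql": {"type": "string"}}})
print(idx.prompt_stub())
```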

4. Professional Knowledge Performance

On GDPval - which measures performance across 44 professional occupations - GPT-5.4 scores 83%. That means it outperforms 83% of human professionals in those domains.

For context:

  • GPT-5.2 scored ~72% on similar benchmarks
  • This includes domains like law, medicine, accounting, engineering

The "GPT-5.4 Pro" variant pushes even higher for enterprise users who need maximum performance.

5. Deep Research Got Smarter

BrowseComp benchmark (finding hard-to-find info across the web):

Model         Score
GPT-5.2       65.8%
GPT-5.4       82.7%
GPT-5.4 Pro   89.3%

A 17-point jump. The model is more persistent at following information trails across multiple sources and better at synthesizing scattered evidence.

6. Mid-Response Course Correction

A quality-of-life feature in ChatGPT: GPT-5.4 Thinking now shows a preamble - an outline of its approach before generating the full response. You can redirect it before it goes down the wrong path.

No more "that's not what I meant" followed by regenerating 2,000 tokens. This saves time and money.

Pricing

Tier          Input                 Output
GPT-5.4       $2.50/M tokens        $10/M tokens
GPT-5.4 Pro   Higher (enterprise)   Higher (enterprise)

Competitive with Claude Opus 4.6 and Gemini 3.1 Pro. The tool search feature means your effective cost per request could be significantly lower for tool-heavy workloads.
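
A quick back-of-the-envelope at the list prices above, showing how tool search's 47% input-token saving flows straight into per-request cost (the token counts are made up for illustration):

```python
# Cost per request at the published GPT-5.4 list prices.
INPUT_PER_M, OUTPUT_PER_M = 2.50, 10.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

full = request_cost(40_000, 2_000)              # all tool schemas preloaded
lean = request_cost(int(40_000 * 0.53), 2_000)  # 47% fewer input tokens
print(f"${full:.4f} vs ${lean:.4f}")
```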

Who Should Upgrade?

Upgrade now if you're:

  • Building AI agents with multiple tool integrations (MCP servers)
  • Doing automated browser/desktop interaction
  • Processing long documents or codebases
  • Building professional knowledge systems

Wait if you're:

  • Happy with GPT-5.2 for simple chat/completion tasks
  • Cost-sensitive and not using tool-heavy workflows
  • On the free tier (GPT-5.4 requires Plus/Team/Pro)

My Take

GPT-5.4 isn't a "wow, AI can write poems now" release. It's an "AI can do real work" release. Computer use + tool search + long context = agents that can actually complete multi-step workflows with minimal hand-holding.

The tool search feature is especially significant for the MCP ecosystem. If you're building MCP servers (shameless plug: I have a starter kit for that), your tools just got 47% cheaper to use.

The computer-use benchmarks beating humans is the headline, but the practical impact will come from tool search and the 1M context window. Those are the features that change what you can build.

What feature are you most excited to try? I'm deep in the MCP tool search integration - would love to hear what others are building with it.

I also made a video breakdown if you prefer watching over reading.

Building AI tools in production? Check out the MCP Server Starter Kit - free and open source, with a Pro version for production deployments.

test blog 1

2026-03-10 15:07:58

*this is a test post*

Investigating Display Stability Issues on Intel Tiger Lake (HP 15t-dy200) PART -1

2026-03-10 15:07:40

Laptop: HP 15t-dy200
CPU: Core i7
Graphics: Intel Iris Xe (TGL GT2)

Where it started:
Windows 11 suddenly showed a blank screen after logging in, forcing me to shut the machine down.
This happened multiple times and resisted every fix I tried.

ATTEMPT 1: Reinstalling Windows. 

Issue: the Windows 11 installer couldn't detect any drives or partitions

The hard disk didn't show up during the Windows installation

✅Fix: Install RST driver

Steps:

  1. Go to Intel's official website and download the Intel Rapid Storage Technology (RST) driver.
    [https://www.intel.com/content/www/us/en/download/19512/intel-rapid-storage-technology-driver-installation-software-with-intel-optane-memory-10th-and-11th-gen-platforms.html]

  2. Extract the driver

    • open cmd and cd into the folder containing the downloaded installer (named SetupRST.exe)
    • run `SetupRST.exe -extractdrivers RST` to extract the driver contents into a folder named RST
    • locate the VMD folder inside the extracted RST folder
  3. Copy VMD folder to the Bootable Windows 11 USB (main folder)

  4. When asked for drivers, select 'Load Drivers >> Browse' and browse to VMD folder

  5. Select the 1st driver (Intel RST VMD Controller)

  6. Now the partitions should appear 

  7. Select the partition and continue installing Windows

This worked only as long as Wi-Fi stayed off.

Once Wi-Fi was switched on, automatic updates kicked in and the crashes returned.

ATTEMPT 2: Installing Ubuntu 24.04

Issue: the screen flickered during installation, and it kept flickering heavily afterward.

Fix: kernel parameters that prevent the CPU from entering deep power-saving idle states, a common trigger for panel flicker on Intel graphics.

Steps:

1) enter grub
sudo nano /etc/default/grub

2) change the line: 
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_pstate=1 intel_idle.max_cstate=1"

3) ctrl + o → save

4) ctrl + x → exit

5) updating grub
sudo update-grub

6) reboot system
sudo reboot

Issue: in the process, I had formatted the disk, changed the partitions, and overwritten Windows 11 with Ubuntu.

This will cause space issues later when I try to install Windows again.

Also, all partitions were now in Linux (ext4) format, not the FAT32 or NTFS that Windows requires.

The underlying display issue may stem from Iris Xe graphics no longer being as well supported as it used to be, but that's unconfirmed.

Note:
This issue has not been solved yet. I'll post further updates as I learn more.