
It's 2025. You Should Probably Be Using Expo for React Native.

2025-11-12 12:10:41

EN: The "Expo vs. React Native CLI" debate is over, but not everyone has gotten the memo. The React Native team itself now recommends a framework-first approach for new projects. I wrote a deep-dive on why modern @Expo (with EAS, Config Plugins) is the default, production-ready choice for most teams in 2025. This isn't the Expo you remember. #ReactNative #Expo #MobileDevelopment

The "Expo vs. bare React Native CLI" debate is one of the oldest traditions in the React Native community. For years, the conventional wisdom was simple: start with Expo for a hobby project, but for "serious" production work, you'll eventually need the power and flexibility of the CLI.

That advice is now dangerously out of date.

The landscape has fundamentally shifted. The React Native core team itself no longer recommends starting new projects with react-native init. Instead, they advocate for using a framework. And in today's ecosystem, Expo is, for most use cases, the superior choice for building production-grade applications. If you're still starting projects with the bare CLI in 2025, you're likely choosing a path with more friction, more manual maintenance, and fewer powerful features out of the box.

This isn't about hype. It's about a pragmatic look at the tools that solve real-world development problems. Let's break down why.

The Game Has Changed: Why This Isn't the Old "Walled Garden" Debate
The old argument against Expo was its "walled garden" approach. If you needed a third-party package with custom native code that wasn't in the Expo SDK, you were stuck. Your only option was to eject, a one-way process that left you with a complex, often messy bare CLI project.

That entire paradigm is dead, thanks to two key innovations: Development Builds and Config Plugins.

Escape Hatches You Can Actually Use: Config Plugins
Config Plugins, the mechanism behind Continuous Native Generation (CNG), are the single most important feature that dismantled the "walled garden." They are small JavaScript functions that run when Expo generates the native projects (during prebuild), allowing you to modify native project files like Info.plist on iOS or AndroidManifest.xml on Android.

What does this mean in practice? You can integrate almost any third-party React Native library into your Expo project. The community has already created plugins for most popular packages.
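As an illustration, adding a library's config plugin is usually a one-line entry in your config. Here's a sketch using the real expo-build-properties plugin (the minSdkVersion option is just an example value):

// app.config.ts (excerpt)
import { ExpoConfig } from 'expo/config';

const config: ExpoConfig = {
  name: 'MyApp',
  slug: 'my-app',
  plugins: [
    // Each entry is a plugin name plus its options; the plugin
    // rewrites the generated native project during prebuild
    ['expo-build-properties', { android: { minSdkVersion: 24 } }],
  ],
};

export default config;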

Let's say you need to set up different environments (staging, production) with unique API keys and app icons. Instead of manually managing Xcode projects or Gradle files, you create a dynamic configuration.

// app.config.ts
import { ExpoConfig } from 'expo/config';

// Define your environment-specific variables
const environments = {
  development: {
    name: 'MyApp (Dev)',
    bundleIdentifier: 'com.myapp.dev',
    apiKey: 'DEV_API_KEY',
  },
  staging: {
    name: 'MyApp (Staging)',
    bundleIdentifier: 'com.myapp.staging',
    apiKey: 'STAGING_API_KEY',
  },
  production: {
    name: 'MyApp',
    bundleIdentifier: 'com.myapp.production',
    apiKey: 'PROD_API_KEY',
  },
} as const;

// Get the current environment from an environment variable,
// falling back to development for local work
const currentEnv = (process.env.APP_VARIANT ||
  'development') as keyof typeof environments;
const envConfig = environments[currentEnv];

const config: ExpoConfig = {
  name: envConfig.name,
  slug: 'my-app',
  ios: {
    bundleIdentifier: envConfig.bundleIdentifier,
  },
  android: {
    package: envConfig.bundleIdentifier,
  },
  // Use extra to pass environment variables to your app code
  extra: {
    apiKey: envConfig.apiKey,
    eas: {
      projectId: 'YOUR_EAS_PROJECT_ID',
    },
  },
};

export default config;

With this one file, you can generate distinct builds for each environment by simply setting an environment variable (APP_VARIANT=staging eas build). This eliminates a massive source of manual error and simplifies CI/CD pipelines immensely.
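On the app side, you can read those extra values at runtime with the expo-constants package. A minimal sketch, assuming the config above:

// api.ts
import Constants from 'expo-constants';

// expoConfig.extra carries whatever app.config.ts put into `extra`
const apiKey = Constants.expoConfig?.extra?.apiKey as string | undefined;

export function authHeaders(): Record<string, string> {
  if (!apiKey) {
    throw new Error('apiKey missing from Expo config');
  }
  return { 'x-api-key': apiKey };
}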

Production-Grade Superpowers, Out of the Box
Choosing Expo isn't just about avoiding problems; it's about gaining capabilities that are difficult and time-consuming to build yourself.

Ship Critical Fixes Instantly with Expo Updates (OTA)
Your app is in production. A critical bug that breaks the login flow is discovered. With a traditional CLI workflow, your fix is at the mercy of the App Store and Google Play review times, which can take hours or even days.

With Expo Application Services (EAS) Update, you can push the JavaScript-only fix directly to your users' devices in minutes. It's an Over-the-Air (OTA) update that bypasses the store review process for JS bundle changes.

[Here, a flowchart would illustrate the OTA update process: 1. Developer pushes JS code to EAS. 2. EAS builds the JS bundle. 3. User opens the app. 4. The app checks EAS for a new bundle. 5. The new bundle is downloaded in the background. 6. The next time the user opens the app, they're running the patched code.]

This isn't just a convenience; it's a powerful tool for risk management and rapid iteration. It makes your team more agile and your application more resilient.
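By default the update check happens on launch, but the expo-updates API also lets you check for and apply updates from your own code. A minimal sketch (when and where you trigger this is an app-level decision):

// updates.ts
import * as Updates from 'expo-updates';

// Check for a published update, download it, and restart into it.
export async function applyLatestUpdate(): Promise<void> {
  const update = await Updates.checkForUpdateAsync();
  if (update.isAvailable) {
    await Updates.fetchUpdateAsync();
    await Updates.reloadAsync(); // restarts the JS with the new bundle
  }
}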

A Build Process That Just Works
Ask any developer who has maintained a bare React Native project: managing native build environments is a headache. Keeping Xcode, CocoaPods, Android Studio, and Gradle versions in sync across a team and a CI server is fragile and time-consuming.

EAS Build abstracts this away completely. It provides a consistent, managed, and cloud-based build environment for your app. You run a single command, and it queues up a build for you. This frees your team from being native build experts and lets them focus on writing features.

A Reality Check: What Are the Downsides?
No tool is perfect. The most common argument I hear against Expo today is, "It just adds more dependencies."

And it's true. Expo is a framework; it provides a curated set of libraries and services. But this is a trade-off. You're trading direct control over every single dependency for a cohesive, well-maintained ecosystem where upgrades are smoother and packages are designed to work together. With the bare CLI, you're not avoiding dependencies; you're just becoming the sole maintainer of your own, hand-picked "framework."

For a highly specialized app—perhaps a social media app with complex, custom native video processing where you use zero Expo libraries—the overhead might not be worth it. But for the 95% of applications (e-commerce, internal tools, productivity, content apps), the benefits of the managed ecosystem far outweigh the cost of a few extra packages in your node_modules.

The Verdict: It's a Question of Focus
The decision is no longer about "capability" vs. "limitation." It's about where you want to spend your team's time.

Do you want to become an expert in configuring Xcode build settings, managing Gradle dependencies, and building your own OTA update mechanism? Or do you want to focus on building features, shipping faster, and delivering a better product?

By handling the undifferentiated heavy lifting of builds, configuration, and updates, Expo allows you to focus purely on the application itself. The escape hatches are there if you need them, but they are the exception, not the rule. The React Native team's official guidance confirms this direction. For your next production project, the question isn't "Why Expo?" but rather, "Do I have a very specific, compelling reason not to use Expo?"

For those who have already made the switch from CLI to Expo, what was the "aha!" moment for you? And for anyone still on the fence, what’s the biggest question left in your mind?

Kudos to the teams at @Expo and @React Native for pushing the ecosystem forward. Folks like @Charlie Cheever and @Brent Vatne have shared great insights on this topic.

#ReactNative #Expo #React #MobileDevelopment #DeveloperExperience #JavaScript #AppDevelopment #CrossPlatform #EAS

Understanding L1 DevOps: The First Line of Support in Modern Operations

2025-11-12 12:05:37

1. What is L1 in DevOps?

L1 stands for Level 1 Support in the context of DevOps or IT operations. It is the first line of support for monitoring, incidents, and basic troubleshooting in the software delivery and infrastructure lifecycle.

In DevOps, L1 is primarily reactive, handling alerts, tickets, and basic operational issues before escalating to higher levels (L2/L3).

2. Who is involved in L1 DevOps?

People involved in L1 support typically include:

L1 DevOps Engineer / DevOps Support Engineer: Handles first-level alerts and tickets.

Operations / IT Support Team Members: May overlap with L1 duties.

Monitoring & Alerting System Administrators (like Grafana, Prometheus admins).

L1 interacts with:

Developers (for unclear tickets or deployment issues)

L2/L3 DevOps or SREs (for escalations)

End-users or clients reporting incidents

3. What to focus on in L1 DevOps?

Key focus areas for L1:

1. Monitoring & Alerts

Watch dashboards (Grafana, Kibana) for system health

Respond to alert notifications (CPU, memory, disk, service down)

2. Incident Management

Logging incidents in ticketing systems (Jira, ServiceNow)

Acknowledging alerts and categorizing severity

Performing basic triage before escalation

3. Basic Troubleshooting

Restarting services or applications

Checking logs to identify obvious issues

Network, disk, or permission checks

4. Documentation & Reporting

Maintaining knowledge base for recurring issues

Providing clear handover notes for L2/L3

5. Automation Awareness

Using simple scripts/playbooks for repetitive tasks (like Ansible, Shell scripts)

Following runbooks for standard operating procedures

4. Responsibilities of L1 DevOps

Monitoring & Alerts: Observing dashboards, responding to notifications for CPU, memory, disk usage, or application failures.

Incident Management: Logging, categorizing, and triaging issues before escalation.

Basic Troubleshooting: Restarting services, checking logs, performing routine checks, following runbooks.

Communication & Documentation: Updating tickets, notifying stakeholders, maintaining SOPs and knowledge bases.

Routine Operations: Backup verification, log rotation, and patch checks.

Automation Awareness: Using scripts or playbooks for repetitive tasks, following predefined procedures.

5. Key Points

L1 is mostly reactive, not designing or building systems.

Focus is on stability and uptime, following standard procedures.

L1 does not usually modify production architecture; any risky actions are escalated.

It's entry-level, but a strong L1 engineer understands monitoring tools, incident flow, and basic scripting.

💡 Tip: L1 is the foundation for higher DevOps roles. A good L1 engineer who learns from incidents, tools, and runbooks can advance to L2/L3 or SRE roles.

AWS Automated Report Generator: SQS & Lambda

2025-11-12 12:02:49

Project Overview

  • Our AWS Lambda function is triggered by an AWS SQS queue.
  • This SQS queue contains JSON data, which we use to generate a static report website and store it on AWS S3.
  • To view the code, click here.
  • Everything done using the AWS UI can also be done with the AWS CLI — choose whichever you prefer.
  • Note: This project focuses on infrastructure setup rather than code implementation.

Output of Lambda

  • Generated website report (GIF quality may be reduced)
  • Generated Excel (xlsx) file (GIF quality may be reduced)

AWS Services Workflow - Big picture


  1. An SQS queue receives messages containing JSON payloads.
    • SQS queue messages can be sent by an API, AWS API Gateway, the AWS CLI, an AWS SDK, or other methods.
    • The queue triggers an AWS Lambda function.
  2. The Lambda function generates a static website report and an xlsx after processing the received JSON data.
  3. The generated static website is stored in an AWS S3 bucket
  4. The website files in S3 are served via CloudFront, providing low latency and edge caching.
  • Optional: integrate with AWS SES to automatically send reports via email to users.
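To make step 2 concrete, here is a minimal TypeScript sketch of such a handler (the HTML body is a stand-in for the real report generation, and S3_BUCKET_NAME is the environment variable configured later in this guide):

// index.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import type { SQSEvent } from 'aws-lambda';

const s3 = new S3Client({});

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    // Each SQS record body carries the JSON payload described later in this post
    const { data, reportTitle } = JSON.parse(record.body);

    // Stand-in for the real report/xlsx generation
    const html = `<h1>${reportTitle}</h1><pre>${JSON.stringify(data, null, 2)}</pre>`;

    await s3.send(new PutObjectCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: `reports/${record.messageId}/index.html`,
      Body: html,
      ContentType: 'text/html',
    }));
  }
};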

What are AWS Lambda, SQS and S3?

  • AWS Lambda: an event-driven, serverless Function as a Service (FaaS) offering from AWS. You pay only for execution time and resources used. Limits include a max runtime of 15 minutes and configurable memory up to 10 GB (these limits can change; check the official AWS website for the latest details).

    • There are different ways to package and manage dependencies for Lambda functions:
    • Lambda Layers (for Node.js): the main benefits are code reuse across multiple functions and smaller deployment packages. They don't reduce cold start times or avoid runtime installation delays; this is a common misconception.
    • Docker images: allow more control over the runtime environment and larger dependency sets (see: lambda-layers-vs-ecr-pre-built-image).
  • AWS SQS: is a message queue service that enables decoupled communication between components.

  • AWS S3: is an object storage service for storing and retrieving data.

  • AWS CloudFront: is a content delivery network (CDN) that delivers content with low latency.

Hands-on: How it was developed

1. Installing AWS CLI and Connecting to AWS

2. Creating Lambda - AWS UI

  • Open AWS Lambda page -> Click Create function -> Node.js v22.x -> Name function -> Create new role for service -> Create function
  • Lambda function Configurations -> Environment variables -> set:
    • S3_BUCKET_NAME

3. Creating S3 Bucket - AWS UI

  • Open AWS S3 page -> Click Create bucket -> Enter bucket name -> Use same region as your Lambda Function -> Configure options (versioning, encryption, etc.) -> Click Create bucket
    • Keep the bucket private; since we will serve content through CloudFront, it does not need to be public

4. Configure Cloudfront to serve your AWS S3 Bucket content

4.1 Create CloudFront Distribution

  • Open AWS CloudFront page -> Click Create Distribution -> Distribution type: Single website -> Choose S3 bucket as origin -> Cache settings, change origin request policy to CORS-S3Origin -> Click Create Distribution

    • This policy forwards CORS-related headers (Origin, Access-Control-Request-*) to S3, allowing browsers to load your files correctly.

4.2 Configure S3 CORS

  • Open AWS S3 page -> Select your bucket -> Go to Permissions tab -> Edit CORS configuration -> Add allowed origins, methods (GET, HEAD), and headers to match your CloudFront settings -> Save
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedOrigins": ["https://YOUR_CLOUDFRONT_DOMAIN.cloudfront.net"],
    "ExposeHeaders": []
  }
]

4.3 (optional) Configure Custom Cache Policy

  • Configure a custom cache behavior with the longest TTL options (e.g., 10 years)
    • Click into your CloudFront distribution -> Behaviors -> Cache policy -> Create policy -> Configure -> Add to the distribution -> Save

5. Attaching an S3 Access Policy to the Lambda IAM Role - AWS UI

  • Open AWS IAM page -> Open your Lambda IAM role -> Edit permissions -> Add the following statement -> Save
{
  "Effect": "Allow",
  "Action": ["s3:PutObject"],
  "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
}

6. Creating an SQS queue - AWS UI

  • Open AWS SQS page -> Click Create Queue -> Enter queue name -> Set "Send messages" permission to your AWS account ID -> Set "Receive messages" permission to the ARN of your Lambda IAM role

7. Adding a Lambda Trigger to the SQS queue

7.1 Configure IAM Permissions

  • Open AWS IAM page -> Lambda IAM role -> Add following statement -> Save
{
  "Effect": "Allow",
  "Action": [
    "sqs:ReceiveMessage",
    "sqs:GetQueueAttributes",
    "sqs:DeleteMessage"
  ],
  "Resource": ["arn:aws:sqs:region:accountid:queuename"]
}

7.2 Configure SQS Trigger

  • Open your SQS queue -> Lambda triggers -> Configure trigger -> Choose your lambda function -> Save

8. Creating Lambda Layers for code reuse and smaller deployments - AWS UI

  • Lambda Layers allow you to package dependencies separately, enabling code reuse across multiple Lambda functions and keeping deployment packages smaller. I will use Lambda Layers, since I don't need a large image as the ECR approach allows.

8.1 Create Dependencies Layer

  • Build the layer zip: npm run create:layer:x64 (installs Linux x64 prod dependencies and creates layer.zip)
    • Important: the zip must have a nodejs/node_modules/ structure at its root for Lambda to find Node.js dependencies
  • Open your AWS Lambda -> Layers -> Custom layer -> Create new layer -> Name (e.g., "dependencies-layer") -> Upload layer.zip -> Runtime Nodejs22 -> Arch x86_64 -> Click on Create

8.2 Attach Layer to Lambda Function

  • Lambda Function -> Layers -> Add a layer -> Select the dependencies layer

9. Uploading code to Lambda using AWS CLI

  • Minify your Lambda code, and zip it: npm run build:zip
  • Important: The function.zip file structure must have index.js at the root level:
  function.zip
  ├── index.js
  • Run:
aws lambda update-function-code \
  --function-name lambda-function-name \
  --zip-file fileb://function.zip

Done! Now let's test our Static Report Generator

Triggering Lambda with SQS

Send a message to your SQS queue:

Using a file:

aws sqs send-message \
  --queue-url https://sqs.<region>.amazonaws.com/<account-id>/<queue-name> \
  --message-body file://your-file.json

Message format:

{
  "data": [
    {
      "id": "001",
      "name": "Alice Johnson",
      "value": 1250.5,
      "category": "Sales",
      "date": "2024-01-15"
    }
  ],
  "reportTitle": "Monthly Performance Report"
}
  • Note that we could also trigger it via API Gateway or by programmatically sending messages to the SQS queue from one of our APIs.
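For the programmatic route, a minimal sketch using the AWS SDK for JavaScript v3 (the queue URL is a placeholder):

// send-report-message.ts
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({ region: 'us-east-1' });

// Same payload shape as the message format above
await sqs.send(new SendMessageCommand({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/report-queue', // placeholder
  MessageBody: JSON.stringify({
    data: [{ id: '001', name: 'Alice Johnson', value: 1250.5, category: 'Sales', date: '2024-01-15' }],
    reportTitle: 'Monthly Performance Report',
  }),
}));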

Lambda Function Verification

  • Check recent Lambda execution (last invocation): aws logs tail /aws/lambda/lambda-name --since 5m
  • Check S3 bucket contents: aws s3 ls s3://YOUR_BUCKET_NAME_HERE/ --recursive
  • Check CloudFront distribution: aws cloudfront get-distribution --id YOUR_DISTRIBUTION_ID
  • Or check everything on AWS console.

Important

  • This is a common feature that can be used across companies, but some adjustments may be needed, for example:
  1. Configure retries in your SQS queue in case of failures.
  2. Add a DLQ (Dead Letter Queue) for the Lambda to handle failed messages that exceed the maximum retries.
  3. Monitor and log using CloudWatch to track errors, execution times, and performance metrics.
  4. Consider scalability, ensuring the SQS queue and Lambda can handle bursts of messages efficiently.
    • If your SQS queue fills up faster than Lambda can process -> queue bottleneck.
    • If Lambda concurrency is too low -> processing bottleneck.
  5. Set up CI/CD pipelines with automated tests, easy rollback, and other best practices.


Thanks for Reading!

  • Feel free to reach out if you have any questions, feedback, or suggestions. Your engagement is appreciated!


RangeLink v0.3.0: One Keybinding to Rule Them All

2025-11-12 12:00:54


Hey folks! Just shipped RangeLink v0.3.0, and I'm genuinely excited about this one.

If you caught my previous post about v0.2.1, you know RangeLink started as a way to share precise code references with AI assistants in the terminal. That's still there, but v0.3.0 takes it further: one keybinding (Cmd+R Cmd+L) now sends your code references anywhere you need them.

The Evolution

v0.2.0 launched with terminal binding — auto-send links to your integrated terminal where AI assistants can see them — plus clickable navigation to jump back to code.

v0.3.0 introduces Paste Destinations — a unified system that lets you bind RangeLink to wherever you're working: Claude Code Extension, Cursor AI, your terminal, or even a scratchpad file for drafting complex AI prompts.

Same keybinding. Different destinations. Your choice.

Why This Matters

Here's the thing about built-in AI features in editors: they're convenient, but they lock you into one AI model, one workflow, and usually only line-level precision. RangeLink gives you:

  • Character-level precision — Not just line 42, but #L42C10-L58C25 (that exact function signature, that specific condition)
  • Any AI assistant — Claude, GPT, Gemini, whatever you prefer. No vendor lock-in.
  • Flexible workflows — Terminal for quick questions, scratchpad for complex prompts, direct AI chat integrations
  • Universal format — GitHub-style links that work everywhere (PRs, Slack, docs, teammates without RangeLink)

The best part? You don't give up any convenience. Select code, hit Cmd+R Cmd+L, and your link appears exactly where you need it — with the same character-level precision that makes RangeLink special.

What's New in v0.3.0

Paste Destinations (The Big One)

Bind RangeLink to one destination at a time:

  • Claude Code Extension — Links open Claude's chat panel (works in VSCode and Cursor)*
  • Cursor AI — Links open Cursor's AI chat*
  • Terminal — Auto-paste links for terminal-based AI assistants
  • Text Editor — Draft complex prompts in any file (markdown, untitled, whatever)

All destinations share the same seamless UX: select code → Cmd+R Cmd+L → link appears at your cursor position → destination auto-focuses → keep typing.

(*) FULL DISCLAIMER: Claude Code Extension and Cursor AI destinations use a clipboard-based workaround because their APIs don't support programmatic text insertion yet (as of Nov 2025). RangeLink copies the link and opens the chat panel, but you need to paste (Cmd+V / Ctrl+V) yourself. Terminal and Text Editor destinations fully auto-paste without manual intervention.

Editor Link Navigation

Any RangeLink in any editor file (markdown, code, untitled) is now clickable. Hover to preview, Cmd+Click to navigate.

Use case: You're drafting a prompt in a scratchpad file with multiple code references. Before sending it to your AI assistant, you can validate each link by clicking it — making sure you're sharing the right context.

The "One Keybinding" Philosophy

Every AI tool has its own way to share code — different shortcuts, different formats, different workflows.

RangeLink unifies it: Cmd+R Cmd+L works everywhere, with character-level precision everywhere, and connects to any AI assistant.

One keybinding to rule them all.

Why I'm Excited

This release makes RangeLink competitive with integrated AI features without sacrificing its core strengths:

  • You're not locked into one AI model
  • You get more precision (characters, not just lines)
  • Links work universally (paste them anywhere, share with anyone)
  • The workflow is just as seamless as built-in tools

And honestly? The paste destinations architecture feels like the right foundation for whatever comes next.

Behind the Scenes: Working with AI on RangeLink

One thing I've been experimenting with: using AI assistants to help build RangeLink itself. I've progressively added instructions to CLAUDE.md to guide how Claude Code helps me develop.

A pattern I really like is the questions template. When Claude needs design decisions before implementing a feature, instead of asking questions in the terminal (which gets messy), it:

  1. Saves questions to a .txt file in .claude-questions/
  2. Pre-fills recommended answers when it has context
  3. I edit the file with my decisions
  4. Claude reads my answers and proceeds

This keeps the workflow clean and creates a record of design decisions. The questions file becomes documentation.

If you're working with AI on your projects, this pattern might be worth trying!

Try It Out

Install RangeLink:

Quick start:

  1. Command Palette → "Bind RangeLink to [your preferred destination]"
  2. Select code → Cmd+R Cmd+L (or Command Palette → "Copy Range Link" if you have keybinding conflicts)
  3. Your link is ready where you need it

Try the text editor destination with a split-screen scratchpad — it's a game-changer for complex AI prompts.

Would love to hear your feedback, especially if you're bouncing between different AI assistants!

Get Involved

If you find RangeLink useful, I'd love your support:

  • Star the repo on GitHub — it helps others discover it
  • 🐛 Report bugs or request features via GitHub Issues — I've started adding ideas there, not yet organized into a roadmap but wanted to share visibility on what's on my mind
  • 🤝 Contribute — the codebase is well-documented and PR-friendly
  • 🗣️ Share your feedback — I'm actively iterating based on what the community needs

For vim/neovim users interested in building a plugin: the core library is platform-agnostic and designed for multi-editor support. Would love to collaborate!


✅ Scenario #14 – Integrate Kubernetes with Vault to fetch Secrets

2025-11-12 11:53:26

Goal: A Kubernetes Pod can automatically fetch secrets from Vault using Kubernetes ServiceAccount authentication (no static tokens).

🌟 High-Level Flow

  1. Install Vault on Kubernetes
  2. Initialize & unseal Vault (Dev mode auto-unsealed)
  3. Enable Kubernetes Auth
  4. Configure Vault to trust Kubernetes
  5. Create a Vault policy
  6. Create a Kubernetes ServiceAccount
  7. Map Kubernetes SA → Vault role
  8. Deploy a Pod that auto-fetches secrets from Vault
  9. Verify secrets are injected inside the pod

📌 Prerequisites

✔ GKE cluster running
✔ kubectl configured
✔ helm installed (Cloud Shell already has it)

🧰 STEP 1 — Install Vault on Kubernetes

Add HashiCorp Helm repo

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

Create namespace

kubectl create namespace vault

Install Vault in dev mode (auto-unseal)

helm install vault hashicorp/vault \
  --namespace vault \
  --set "server.dev.enabled=true"

Check pod:

kubectl get pods -n vault

You should see:

vault-0   Running


🧰 STEP 2 — Exec into Vault Pod

kubectl exec -it vault-0 -n vault -- /bin/sh

Set Vault address inside pod:

export VAULT_ADDR="http://127.0.0.1:8200"

Check status:

vault status


🧰 STEP 3 — Enable Kubernetes Authentication

Inside the Vault pod:

vault auth enable kubernetes

🧰 STEP 4 — Configure Kubernetes Auth Method

Vault needs:

  • Token reviewer JWT
  • Kubernetes API server URL
  • Kubernetes CA cert

Inside the pod:

vault write auth/kubernetes/config \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

This will work now because you're inside a Kubernetes pod.


🧰 STEP 5 — Create a Secret in Vault

Inside Vault pod:

vault kv put secret/myapp username="admin" password="P@ssw0rd123"

Verify:

vault kv get secret/myapp


🧰 STEP 6 — Create Vault Policy

Create a file inside Vault pod:

cd /tmp
cat <<EOF > myapp-policy.hcl
path "secret/data/myapp" {
  capabilities = ["read"]
}
EOF

Load the policy:

vault policy write myapp-policy /tmp/myapp-policy.hcl


🧰 STEP 7 — Create Kubernetes ServiceAccount

Exit the Vault pod:

exit

Create SA in default namespace:

kubectl create sa myapp-sa

🧰 STEP 8 — Create Vault Role that maps SA → Policy

Go back into Vault pod:

kubectl exec -it vault-0 -n vault -- /bin/sh
export VAULT_ADDR="http://127.0.0.1:8200"

Now create the role:

vault write auth/kubernetes/role/myapp-role \
  bound_service_account_names="myapp-sa" \
  bound_service_account_namespaces="default" \
  policies="myapp-policy" \
  ttl="24h"


🧰 STEP 9 — Deploy Application That Fetches Secrets Automatically

Outside Vault pod.

Create a deployment using Vault Agent injector.

Create a file:

cat <<EOF > myapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "myapp-role"
        vault.hashicorp.com/agent-inject-secret-mysecret: "secret/data/myapp"
    spec:
      serviceAccountName: myapp-sa
      containers:
      - name: myapp
        image: nginx
EOF

Apply:

kubectl apply -f myapp.yaml

🧪 STEP 10 — Verify Vault Injected Secrets

Wait for pod:

kubectl get pods

Get the pod name:

myapp-xxxxxxxx

Exec into it:

kubectl exec -it myapp-xxxxx -- /bin/sh

List injected secrets:

ls /vault/secrets

You should see:

mysecret

View content:

cat /vault/secrets/mysecret

You will see:

{
  "username": "admin",
  "password": "P@ssw0rd123"
}

🎉 SUCCESS — Kubernetes Pod securely pulled secrets from Vault!


🌟 Thanks for reading! If this post added value, a like ❤️, follow, or share would encourage me to keep creating more content.

— Latchu | Senior DevOps & Cloud Engineer

☁️ AWS | GCP | ☸️ Kubernetes | 🔐 Security | ⚡ Automation
📌 Sharing hands-on guides, best practices & real-world cloud solutions

Conflict Resolution and the Escalation Trap

2025-11-12 11:47:02

Have you ever noticed how a small disagreement can spiral into a massive conflict? A minor miscommunication escalates into heated arguments? These are examples of the escalation trap, a concept rooted in systems theory that shows how seemingly minor actions can trigger reinforcing feedback loops, amplifying problems far beyond their original magnitude.

In this post, I explore the escalation trap and how de-escalation — a critical soft skill often overlooked in professional environments — can break these destructive cycles.

What is the Escalation Trap?

The escalation trap is a reinforcing feedback loop where one party's action triggers a reaction from another party, which in turn prompts a stronger counter-action, creating a self-amplifying cycle of conflict and tension. Unlike balancing feedback loops that tend toward equilibrium, escalation loops push systems toward extreme states.

In Thinking in Systems, Donella Meadows explores how systems are governed by feedback loops. The escalation trap is a specific type of reinforcing feedback loop (also called a "vicious cycle") where each action amplifies the system's tendency toward conflict. The more one party escalates, the more the other feels justified in escalating back, and the cycle accelerates, leading to outcomes neither party originally intended.

A good example of quick escalation is a traffic conflict. One driver unintentionally cuts off another, who gets angry and honks and yells. If both parties become angry and allow themselves to escalate, the conflict may quickly lead to physical threats and violence. On the other hand, the ability to de-escalate may quickly resolve the issue, allowing everyone to move on in a matter of seconds.

Extreme examples include arms races by nuclear powers and quick escalation of violence between religious groups or soccer fans.

A Simple Business Example

Imagine two engineering teams in a tech company: the backend team and the frontend team.

  • Day 1: The frontend team reports a critical bug in the API response format. The backend team, feeling blamed, responds defensively: "The API is correct. The frontend team isn't using it properly."
  • Day 3: The frontend team, frustrated by the dismissal, reacts by escalating the issue to their engineering manager and labeling it as a "critical blocker."
  • Day 5: The backend team's manager intervenes, escalates back by questioning the frontend team's technical competence in a larger meeting.
  • Day 7: The conflict has now involved senior leadership, created tension between teams, slowed down product development, and damaged working relationships.

What started as a technical issue evolved into an organizational conflict through escalation. Neither team was malicious; both believed they were defending their interests. Yet the system's structure trapped both teams in an outcome worse for everyone.

This reflects what Meadows describes as a fundamental principle of systems: the structure of the system determines its behavior. The escalation trap emerges from a system where actions trigger defensive counter-actions, creating reinforcing loops.

The Cost of Escalation

Escalation quickly gets expensive, both emotionally and economically:

  • Emotional toll: Stress, anxiety, and reduced morale spread through teams
  • Productivity loss: Energy spent on conflict diverts from productive work
  • Relationship damage: Trust erodes, making future collaboration harder
  • Poor decision-making: When threatened, people make defensive rather than optimal choices

De-escalation: The Way Out

The antidote to the escalation trap is de-escalation — the intentional action of reducing tension and breaking the reinforcing feedback loop. Rather than matching escalation with escalation, de-escalation introduces a balancing feedback loop that reduces conflict and creates space for resolution.

De-escalation is not a weakness. It's an important skill and a strategic choice to interrupt the negative spiral and create conditions where problems can be solved effectively.

De-escalation operates by changing how you respond to provocative actions. Instead of triggering the next loop in the escalation cycle, you can respond with:

  • Calm language and tone: Signals that you're not a threat
  • Empathy and understanding: Shows you recognize the other party's concerns
  • Acknowledgment: Validates their experience without necessarily agreeing
  • Collaborative framing: Shifts from adversarial ("you vs. me") to cooperative ("us vs. the problem")
  • Clear boundaries: Maintains respect without aggressive defensiveness

Traffic conflict resolution

Let's further explore the traffic conflict I previously mentioned. Consider both scenarios:

Without de-escalation:

  • Driver A cuts off Driver B
  • Driver B honks angrily
  • Driver A, feeling attacked, slows down intentionally
  • Driver B becomes furious and makes an obscene gesture
  • Driver A, now enraged, escalates further
  • Both drivers are now in a dangerous situation, risking accidents, injuries, or worse

This is the escalation trap. Two reasonable people became dangerous through escalation.

With de-escalation:

  • Driver A cuts off Driver B
  • Driver B, though frustrated, takes a breath and assumes an honest mistake
  • Driver B doesn't honk; instead, gives a brief, friendly wave
  • Driver A, recognizing their mistake, appreciates the grace and waves back
  • Both drivers move on, and everyone arrives safely

In the second scenario, Driver B made a strategic choice to de-escalate. By not matching Driver A's error with aggressive honking, Driver B broke the escalation loop. The result? Safety, reduced stress, and everyone moving forward.

Backend and frontend teams' de-escalation

Now, let's also return to our earlier example of the backend and frontend teams, but imagine a de-escalated version:

  • Day 1: The frontend team reports a critical bug in the API response format. Instead of becoming defensive, the backend team responds: "Thanks for reporting this. Let's schedule a quick call to understand what you're seeing."
  • Day 2: Both teams meet. The backend team listens carefully to the frontend team's concern. They discover the issue stems from a misalignment in expectations, not malice. Both teams contributed to the confusion.
  • Day 3: Together, they create a solution: updated documentation, a quick API adjustment, and agreed-upon communication protocols for future issues.
  • Result: The problem is solved faster. Relationships strengthen. Trust is built. Both teams learned something. The organization benefits from improved collaboration.

Developing Your De-escalation Skills

De-escalation is a learnable skill. Here are practical ways to develop it:

  • Develop Self-Awareness: Before you can de-escalate others, learn to recognize your own escalation triggers.
  • Practice Pause and Breathe: The most powerful de-escalation tool is the pause between stimulus and response.
  • Listen for the Underlying Concern: Escalation often masks deeper concerns: fear, feeling unheard, loss of control.
  • Assume Positive Intent: Most escalation happens when people assume the worst of each other. Always assume the best intentions from other people.
  • Use Calm, Cooperative Language: Words shape people's behavior. Choose them wisely.
  • Validate Before Disagreeing: When people feel heard, they're less likely to escalate.
  • Seek Common Ground: Escalation thrives on division. De-escalation thrives on cooperation.
  • Know When to Step Back: Sometimes, the best de-escalation is recognizing when continuing a conversation will only escalate further.
  • Build a Culture of Psychological Safety: De-escalation works best in environments where people feel safe being vulnerable.

Conclusion

The escalation trap is a fundamental pattern in systems: small actions trigger reinforcing loops that amplify conflict and drive systems toward destructive extremes.

De-escalation is the lever that interrupts this trap. By choosing calm over defensiveness, empathy over blame, and collaboration over adversarialism, you don't just resolve immediate conflicts—you transform relationships.

Businesses are typically high-pressure environments focused on delivering results and meeting ambitious, often uncertain deadlines. Management deliberately introduces some level of conflict and pressure to drive performance. In this environment, avoiding the escalation trap is no easy task. It demands both skill and intentional action.

Leaders who understand de-escalation become invaluable. They transform potential crises into opportunities for strengthening teams and improving outcomes.

The next time you're in a conflict — whether with a teammate, customer, or leader — pause and ask yourself: "Will my next action escalate or de-escalate this situation?" Choose wisely.