2025-11-12 12:10:41
The "Expo vs. React Native CLI" debate is over, but not everyone has gotten the memo. The React Native team itself now recommends a framework-first approach for new projects. I wrote a deep-dive on why modern @Expo (with EAS, Config Plugins) is the default, production-ready choice for most teams in 2025. This isn't the Expo you remember. #ReactNative #Expo #MobileDevelopment
The "Expo vs. bare React Native CLI" debate is one of the oldest traditions in the React Native community. For years, the conventional wisdom was simple: start with Expo for a hobby project, but for "serious" production work, you'll eventually need the power and flexibility of the CLI.
That advice is now dangerously out of date.
The landscape has fundamentally shifted. The React Native core team itself no longer recommends starting new projects with react-native init. Instead, they advocate for using a framework. And in today's ecosystem, Expo is, for most use cases, the superior choice for building production-grade applications. If you're still starting projects with the bare CLI in 2025, you're likely choosing a path with more friction, more manual maintenance, and fewer powerful features out of the box.
This isn't about hype. It's about a pragmatic look at the tools that solve real-world development problems. Let's break down why.
The Game Has Changed: Why This Isn't the Old "Walled Garden" Debate
The old argument against Expo was its "walled garden" approach. If you needed a third-party package with custom native code that wasn't in the Expo SDK, you were stuck. Your only option was to eject, a one-way process that left you with a complex, often messy bare CLI project.
That entire paradigm is dead, thanks to two key innovations: Development Builds and Config Plugins.
Escape Hatches You Can Actually Use: Config Plugins
Config Plugins, the mechanism that powers Continuous Native Generation (CNG), are the single most important feature that dismantled the "walled garden." They are small JavaScript functions that run during prebuild, when Expo generates the native projects, allowing you to modify native project files like Info.plist on iOS or AndroidManifest.xml on Android.
What does this mean in practice? You can integrate almost any third-party React Native library into your Expo project. The community has already created plugins for most popular packages.
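To make this concrete, here is a minimal sketch of a hand-rolled config plugin. It assumes the withAndroidManifest helper from expo/config-plugins; the file name and the permission are illustrative:
// plugins/with-bluetooth-permission.ts
// Adds a permission to AndroidManifest.xml whenever the native project is generated.
import { ConfigPlugin, withAndroidManifest } from 'expo/config-plugins';

const withBluetoothPermission: ConfigPlugin = (config) =>
  withAndroidManifest(config, (c) => {
    const manifest = c.modResults.manifest;
    // uses-permission entries sit at the root of the parsed manifest XML
    manifest['uses-permission'] = manifest['uses-permission'] ?? [];
    manifest['uses-permission'].push({
      $: { 'android:name': 'android.permission.BLUETOOTH_CONNECT' },
    });
    return c;
  });

export default withBluetoothPermission;
Register it under plugins in your app config and it runs on every prebuild, so the native change is reproducible instead of hand-edited.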
Let's say you need to set up different environments (staging, production) with unique API keys and app icons. Instead of manually managing Xcode projects or Gradle files, you create a dynamic configuration.
// app.config.ts
import { ExpoConfig } from 'expo/config';

// Define your environment-specific variables
const environments = {
  development: {
    name: 'MyApp (Dev)',
    bundleIdentifier: 'com.myapp.dev',
    apiKey: 'DEV_API_KEY',
  },
  staging: {
    name: 'MyApp (Staging)',
    bundleIdentifier: 'com.myapp.staging',
    apiKey: 'STAGING_API_KEY',
  },
  production: {
    name: 'MyApp',
    bundleIdentifier: 'com.myapp.production',
    apiKey: 'PROD_API_KEY',
  },
} as const;

// Get the current environment from an environment variable
const currentEnv = (process.env.APP_VARIANT || 'development') as keyof typeof environments;
const envConfig = environments[currentEnv];

const config: ExpoConfig = {
  name: envConfig.name,
  slug: 'my-app',
  ios: {
    bundleIdentifier: envConfig.bundleIdentifier,
  },
  android: {
    package: envConfig.bundleIdentifier,
  },
  // Use extra to pass environment variables to your app code
  extra: {
    apiKey: envConfig.apiKey,
    eas: {
      projectId: 'YOUR_EAS_PROJECT_ID',
    },
  },
};

export default config;
With this one file, you can generate distinct builds for each environment by simply setting an environment variable (APP_VARIANT=staging eas build). This eliminates a massive source of manual error and simplifies CI/CD pipelines immensely.
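On the app side, you can read those values at runtime. A minimal sketch, assuming the expo-constants package (the api.ts module name is illustrative):
// api.ts
// Read build-time values passed through "extra" in app.config.ts
import Constants from 'expo-constants';

const apiKey = Constants.expoConfig?.extra?.apiKey as string | undefined;

if (!apiKey) {
  throw new Error('apiKey missing from Expo config "extra"; check APP_VARIANT');
}

export { apiKey };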
Production-Grade Superpowers, Out of the Box
Choosing Expo isn't just about avoiding problems; it's about gaining capabilities that are difficult and time-consuming to build yourself.
Ship Critical Fixes Instantly with Expo Updates (OTA)
Your app is in production. A critical bug that breaks the login flow is discovered. With a traditional CLI workflow, your fix is at the mercy of the App Store and Google Play review times, which can take hours or even days.
With Expo Application Services (EAS) Update, you can push the JavaScript-only fix directly to your users' devices in minutes. It's an Over-the-Air (OTA) update that bypasses the store review process for JS bundle changes.
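A typical invocation is a single CLI command; the branch name here is illustrative:
# Publish a JS-only hotfix over the air
eas update --branch production --message "Fix login crash"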
[Here, a flowchart would illustrate the OTA update process: 1. Developer pushes JS code to EAS. 2. EAS builds the JS bundle. 3. User opens the app. 4. The app checks EAS for a new bundle. 5. The new bundle is downloaded in the background. 6. The next time the user opens the app, they're running the patched code.]
This isn't just a convenience; it's a powerful tool for risk management and rapid iteration. It makes your team more agile and your application more resilient.
A Build Process That Just Works
Ask any developer who has maintained a bare React Native project: managing native build environments is a headache. Keeping Xcode, CocoaPods, Android Studio, and Gradle versions in sync across a team and a CI server is fragile and time-consuming.
EAS Build abstracts this away completely. It provides a consistent, managed, and cloud-based build environment for your app. You run a single command, and it queues up a build for you. This frees your team from being native build experts and lets them focus on writing features.
A Reality Check: What Are the Downsides?
No tool is perfect. The most common argument I hear against Expo today is, "It just adds more dependencies."
And it's true. Expo is a framework; it provides a curated set of libraries and services. But this is a trade-off. You're trading direct control over every single dependency for a cohesive, well-maintained ecosystem where upgrades are smoother and packages are designed to work together. With the bare CLI, you're not avoiding dependencies; you're just becoming the sole maintainer of your own, hand-picked "framework."
For a highly specialized app—perhaps a social media app with complex, custom native video processing where you use zero Expo libraries—the overhead might not be worth it. But for the 95% of applications (e-commerce, internal tools, productivity, content apps), the benefits of the managed ecosystem far outweigh the cost of a few extra packages in your node_modules.
The Verdict: It's a Question of Focus
The decision is no longer about "capability" vs. "limitation." It's about where you want to spend your team's time.
Do you want to become an expert in configuring Xcode build settings, managing Gradle dependencies, and building your own OTA update mechanism? Or do you want to focus on building features, shipping faster, and delivering a better product?
By handling the undifferentiated heavy lifting of builds, configuration, and updates, Expo allows you to focus purely on the application itself. The escape hatches are there if you need them, but they are the exception, not the rule. The React Native team's official guidance confirms this direction. For your next production project, the question isn't "Why Expo?" but rather, "Do I have a very specific, compelling reason not to use Expo?"
For those who have already made the switch from CLI to Expo, what was the "aha!" moment for you? And for anyone still on the fence, what’s the biggest question left in your mind?
Kudos to the teams at Expo and React Native for pushing the ecosystem forward. Folks like Charlie Cheever and Brent Vatne have shared great insights on this topic.
2025-11-12 12:05:37
1. What is L1 in DevOps?
L1 stands for Level 1 Support in the context of DevOps or IT operations. It is the first line of support for monitoring, incidents, and basic troubleshooting in the software delivery and infrastructure lifecycle.
In DevOps, L1 is primarily reactive, handling alerts, tickets, and basic operational issues before escalating to higher levels (L2/L3).
2. Who is involved in L1 DevOps?
People involved in L1 support typically include:
L1 DevOps Engineer / DevOps Support Engineer: Handles first-level alerts and tickets.
Operations / IT Support Team Members: May overlap with L1 duties.
Monitoring & Alerting System Administrators (like Grafana, Prometheus admins).
L1 interacts with:
Developers (for unclear tickets or deployment issues)
L2/L3 DevOps or SREs (for escalations)
End-users or clients reporting incidents
3. Key focus areas for L1
1. Monitoring & Alerts
Watch dashboards (Grafana, Kibana) for system health
Respond to alert notifications (CPU, memory, disk, service down)
2. Incident Management
Logging incidents in ticketing systems (Jira, ServiceNow)
Acknowledging alerts and categorizing severity
Performing basic triage before escalation
3. Basic Troubleshooting
Restarting services or applications
Checking logs to identify obvious issues
Network, disk, or permission checks
4. Documentation & Reporting
Maintaining a knowledge base for recurring issues
Providing clear handover notes for L2/L3
5. Automation Awareness
Using simple scripts/playbooks for repetitive tasks (e.g., Ansible, shell scripts; see the sketch after this list)
Following runbooks for standard operating procedures
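To give a flavor of the kind of automation L1 leans on, here is a tiny runbook-style sketch; the service name and time window are hypothetical:
#!/usr/bin/env bash
# L1 triage sketch: check a service, capture recent logs, restart if it's down.
# "myapp" is a placeholder service name.
set -euo pipefail

SERVICE="myapp"

if ! systemctl is-active --quiet "$SERVICE"; then
  echo "$SERVICE is down; capturing logs before restart"
  journalctl -u "$SERVICE" --since "15 min ago" > "/tmp/${SERVICE}-$(date +%s).log"
  systemctl restart "$SERVICE"
fi

# Basic resource checks commonly tied to alerts
df -h /     # disk usage
free -m     # memory
uptime      # load average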
4. Responsibilities of L1 DevOps
Monitoring & Alerts: Observing dashboards, responding to notifications for CPU, memory, disk usage, or application failures.
Incident Management: Logging, categorizing, and triaging issues before escalation.
Basic Troubleshooting: Restarting services, checking logs, performing routine checks, following runbooks.
Communication & Documentation: Updating tickets, notifying stakeholders, maintaining SOPs and knowledge bases.
Routine Operations: Backup verification, log rotation, and patch checks.
Automation Awareness: Using scripts or playbooks for repetitive tasks, following predefined procedures.
5. Key Points
L1 is mostly reactive, not designing or building systems.
Focus is on stability and uptime, following standard procedures.
L1 does not usually modify production architecture; any risky actions are escalated.
It's entry-level, but a strong L1 engineer understands monitoring tools, incident flow, and basic scripting.
💡 Tip: L1 is the foundation for higher DevOps roles. A good L1 engineer who learns from incidents, tools, and runbooks can advance to L2/L3 or SRE roles.
2025-11-12 12:02:49
AWS Lambda: an event-driven, serverless Function-as-a-Service (FaaS) offering from AWS. You pay only for execution time and resources used. Limits include a maximum runtime of 15 minutes and configurable memory up to 10 GB.
AWS SQS: a message queue service that enables decoupled communication between components.
AWS S3: an object storage service for storing and retrieving data.
AWS CloudFront: a content delivery network (CDN) that delivers content with low latency.
brew install awscli   # install the AWS CLI (macOS/Homebrew)
aws --version         # verify the installation
aws configure         # set access key, secret key, and default region
aws s3 ls             # sanity check: list your buckets
Open AWS Lambda page -> Click Create function -> Node.js v22.x -> Name function -> Create new role for service -> Create function. Then open the function's Configuration -> Environment variables -> set the variables your function needs.
Open AWS S3 page -> Click Create bucket -> Enter bucket name -> Use same region as your Lambda Function -> Configure options (versioning, encryption, etc.) -> Click Create bucket
Open AWS CloudFront page -> Click Create Distribution -> Distribution type: Single website -> Choose S3 bucket as origin -> Cache settings, change origin request policy to CORS-S3Origin -> Click Create Distribution
Open AWS S3 page -> Select your bucket -> Go to Permissions tab -> Edit CORS configuration -> Add allowed origins, methods (GET, HEAD), and headers to match your CloudFront settings -> Save. Then click into your CloudFront distribution -> Behaviors -> Cache policy -> Create policy -> configure it -> attach it to the distribution -> Save.
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedOrigins": ["https://YOUR_CLOUDFRONT_DOMAIN.cloudfront.net"],
    "ExposeHeaders": []
  }
]
Open AWS IAM page -> Open your Lambda IAM role -> Edit permissions -> Add the following statement -> Save
{
  "Effect": "Allow",
  "Action": ["s3:PutObject"],
  "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
}
Open AWS SQS page -> Click Create Queue -> Enter queue name -> Set "Send messages" permission to your AWS account ID -> Set "Receive messages" permission to the ARN of your Lambda IAM role. Then open AWS IAM page -> Lambda IAM role -> Add the following statement -> Save
{
  "Effect": "Allow",
  "Action": [
    "sqs:ReceiveMessage",
    "sqs:GetQueueAttributes",
    "sqs:DeleteMessage"
  ],
  "Resource": ["arn:aws:sqs:<region>:<account-id>:<queue-name>"]
}
Open your SQS queue -> Lambda triggers -> Configure trigger -> Choose your Lambda function -> Save.
Then open your AWS Lambda -> Layers -> Custom layer -> Create new layer -> Name (e.g., "dependencies-layer") -> Upload .zip file -> Runtime: Node.js 22 -> Arch: x86_64 -> Click Create.
Run npm run create:layer:x64 (installs Linux x64 prod dependencies and creates layer.zip). The zip must have the nodejs/node_modules/ structure at its root for Lambda to find Node.js dependencies.
Then: Lambda Function -> Layers -> Add a layer -> Select the dependencies layer.
Run npm run build:zip. The
function.zip file structure must have index.js at the root level:
function.zip
├── index.js
aws lambda update-function-code \
--function-name lambda-function-name \
--zip-file fileb://function.zip
Send a message to your SQS queue:
Using a file:
aws sqs send-message \
--queue-url https://sqs.<region>.amazonaws.com/<account-id>/<queue-name> \
--message-body file://your-file.json
Message format:
{
  "data": [
    {
      "id": "001",
      "name": "Alice Johnson",
      "value": 1250.5,
      "category": "Sales",
      "date": "2024-01-15"
    }
  ],
  "reportTitle": "Monthly Performance Report"
}
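For reference, a minimal sketch of what the handler wired to this queue might look like. This is not the post's actual function; it assumes @aws-sdk/client-s3 from the dependencies layer and a BUCKET_NAME environment variable:
// index.js (illustrative sketch)
// SQS-triggered handler: turn the message above into a CSV and upload it to S3.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});

exports.handler = async (event) => {
  for (const record of event.Records) {
    const { data, reportTitle } = JSON.parse(record.body);

    // Naive CSV serialization of the "data" array from the message format above
    const header = Object.keys(data[0]).join(',');
    const rows = data.map((row) => Object.values(row).join(','));
    const csv = [header, ...rows].join('\n');

    await s3.send(new PutObjectCommand({
      Bucket: process.env.BUCKET_NAME,
      Key: `reports/${reportTitle.replace(/\s+/g, '-').toLowerCase()}.csv`,
      Body: csv,
      ContentType: 'text/csv',
    }));
  }
};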
Verify:
aws logs tail /aws/lambda/lambda-name --since 5m            # recent Lambda logs
aws s3 ls s3://YOUR_BUCKET_NAME_HERE/ --recursive           # confirm the report landed in S3
aws cloudfront get-distribution --id YOUR_DISTRIBUTION_ID   # check distribution status
2025-11-12 12:00:54
Hey folks! Just shipped RangeLink v0.3.0, and I'm genuinely excited about this one.
If you caught my previous post about v0.2.1, you know RangeLink started as a way to share precise code references with AI assistants in the terminal. That's still there, but v0.3.0 takes it further: one keybinding (Cmd+R Cmd+L) now sends your code references anywhere you need them.
v0.2.0 launched with terminal binding — auto-send links to your integrated terminal where AI assistants can see them — plus clickable navigation to jump back to code.
v0.3.0 introduces Paste Destinations — a unified system that lets you bind RangeLink to wherever you're working: Claude Code Extension, Cursor AI, your terminal, or even a scratchpad file for drafting complex AI prompts.
Same keybinding. Different destinations. Your choice.
Here's the thing about built-in AI features in editors: they're convenient, but they lock you into one AI model, one workflow, and usually only line-level precision. RangeLink gives you:
Character-level precision: #L42C10-L58C25 pins that exact function signature, that specific condition. The best part? You don't give up any convenience. Select code, hit Cmd+R Cmd+L, and your link appears exactly where you need it — with the same character-level precision that makes RangeLink special.
Bind RangeLink to one destination at a time.
All destinations share the same seamless UX: select code → Cmd+R Cmd+L → link appears at your cursor position → destination auto-focuses → keep typing.
(*) FULL DISCLAIMER: Claude Code Extension and Cursor AI destinations use a clipboard-based workaround because their APIs don't support programmatic text insertion yet (as of Nov 2025). RangeLink copies the link and opens the chat panel, but you need to paste (Cmd+V/Ctrl+V) yourself. Terminal and Text Editor destinations fully auto-paste without manual intervention.
Any RangeLink in any editor file (markdown, code, untitled) is now clickable. Hover to preview, Cmd+Click to navigate.
Use case: You're drafting a prompt in a scratchpad file with multiple code references. Before sending to your AI assistant, you can validate each link by clicking it, making sure you're sharing the right context.
Every AI tool has its own way to share code — different shortcuts, different formats, different workflows.
RangeLink unifies it: Cmd+R Cmd+L works everywhere, with character-level precision everywhere, and connects to any AI assistant.
One keybinding to rule them all.
This release makes RangeLink competitive with integrated AI features without sacrificing its core strengths.
And honestly? The paste destinations architecture feels like the right foundation for whatever comes next.
One thing I've been experimenting with: using AI assistants to help build RangeLink itself. I've progressively added instructions to CLAUDE.md to guide how Claude Code helps me develop.
A pattern I really like is the questions template. When Claude needs design decisions before implementing a feature, instead of asking questions in the terminal (which gets messy), it writes them to a .txt file in .claude-questions/.
This keeps the workflow clean and creates a record of design decisions. The questions file becomes documentation.
If you're working with AI on your projects, this pattern might be worth trying!
Install RangeLink:
Quick start:
Cmd+R Cmd+L (or Command Palette → "Copy Range Link" if you have keybinding conflicts). Try the text editor destination with a split-screen scratchpad — it's a game-changer for complex AI prompts.
Would love to hear your feedback, especially if you're bouncing between different AI assistants!
If you find RangeLink useful, I'd love your support.
For vim/neovim users interested in building a plugin: the core library is platform-agnostic and designed for multi-editor support. Would love to collaborate!
Links:
2025-11-12 11:53:26
Goal: A Kubernetes Pod can automatically fetch secrets from Vault using Kubernetes ServiceAccount authentication (no static tokens).
✔ GKE cluster running
✔ kubectl configured
✔ helm installed (Cloud Shell already has it)
Add HashiCorp Helm repo
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
Create namespace
kubectl create namespace vault
Install Vault in dev mode (auto-unsealed and in-memory; fine for this walkthrough, not for production)
helm install vault hashicorp/vault \
--namespace vault \
--set "server.dev.enabled=true"
Check pod:
kubectl get pods -n vault
You should see:
vault-0 Running
kubectl exec -it vault-0 -n vault -- /bin/sh
Set Vault address inside pod:
export VAULT_ADDR="http://127.0.0.1:8200"
Check status:
vault status
Inside the Vault pod:
vault auth enable kubernetes
To validate ServiceAccount tokens, Vault needs three things: a token reviewer JWT, the Kubernetes API host, and the cluster CA certificate. Configure them inside the pod:
vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
This works because you're inside a Kubernetes pod, where the ServiceAccount token and CA certificate are already mounted under /var/run/secrets/kubernetes.io/serviceaccount/.
Inside Vault pod:
vault kv put secret/myapp username="admin" password="P@ssw0rd123"
Verify:
vault kv get secret/myapp
Create a file inside Vault pod:
cd /tmp
cat <<EOF > myapp-policy.hcl
# KV v2 reads go through the "data/" API path, hence secret/data/myapp
path "secret/data/myapp" {
  capabilities = ["read"]
}
EOF
Load the policy:
vault policy write myapp-policy /tmp/myapp-policy.hcl
Exit the Vault pod:
exit
Create SA in default namespace:
kubectl create sa myapp-sa
Go back into Vault pod:
kubectl exec -it vault-0 -n vault -- /bin/sh
export VAULT_ADDR="http://127.0.0.1:8200"
Now create the role:
vault write auth/kubernetes/role/myapp-role \
bound_service_account_names="myapp-sa" \
bound_service_account_namespaces="default" \
policies="myapp-policy" \
ttl="24h"
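Sanity-check the role you just created:
vault read auth/kubernetes/role/myapp-role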
Outside Vault pod.
Create a deployment using Vault Agent injector.
Create a file:
cat <<EOF > myapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "myapp-role"
        vault.hashicorp.com/agent-inject-secret-mysecret: "secret/myapp"
    spec:
      serviceAccountName: myapp-sa
      containers:
      - name: myapp
        image: nginx
EOF
Apply:
kubectl apply -f myapp.yaml
Wait for pod:
kubectl get pods
Get the pod name:
myapp-xxxxxxxx
Exec into it:
kubectl exec -it myapp-xxxxx -- /bin/sh
List injected secrets:
ls /vault/secrets
You should see:
mysecret
View content:
cat /vault/secrets/mysecret
You will see:
{
  "username": "admin",
  "password": "P@ssw0rd123"
}
🎉 SUCCESS — Kubernetes Pod securely pulled secrets from Vault!
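Optional: the exact formatting of the injected file depends on the agent's default template. If you want explicit KEY=VALUE lines instead, the injector supports a template annotation; a sketch (KV v2 exposes fields under .Data.data):
vault.hashicorp.com/agent-inject-template-mysecret: |
  {{- with secret "secret/data/myapp" -}}
  USERNAME={{ .Data.data.username }}
  PASSWORD={{ .Data.data.password }}
  {{- end -}}
Add it alongside the other vault.hashicorp.com annotations in myapp.yaml and re-apply.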
🌟 Thanks for reading! If this post added value, a like ❤️, follow, or share would encourage me to keep creating more content.
— Latchu | Senior DevOps & Cloud Engineer
☁️ AWS | GCP | ☸️ Kubernetes | 🔐 Security | ⚡ Automation
📌 Sharing hands-on guides, best practices & real-world cloud solutions
2025-11-12 11:47:02
Have you ever noticed how a small disagreement can spiral into a massive conflict? A minor miscommunication escalates into heated arguments? These are examples of the escalation trap, a concept rooted in systems theory that shows how seemingly minor actions can trigger reinforcing feedback loops, amplifying problems far beyond their original magnitude.
In this post, I explore the escalation trap and how de-escalation — a critical soft skill often overlooked in professional environments — can break these destructive cycles.
The escalation trap is a reinforcing feedback loop where one party's action triggers a reaction from another party, which in turn prompts a stronger counter-action, creating a self-amplifying cycle of conflict and tension. Unlike balancing feedback loops that tend toward equilibrium, escalation loops push systems toward extreme states.
In Thinking in Systems, Donella Meadows explores how systems are governed by feedback loops. The escalation trap is a specific type of reinforcing feedback loop (also called a "vicious cycle") where each action amplifies the system's tendency toward conflict. The more a person escalates, the more another one feels justified in reacting back, and the cycle accelerates, leading to outcomes neither party originally intended.
A good example of quick escalation is a traffic conflict. One driver unintentionally cuts off another, who gets angry and honks and yells. If both parties become angry and allow themselves to escalate, the conflict may quickly lead to physical threats and violence. On the other hand, the ability to de-escalate may quickly resolve the issue, allowing everyone to move on in a matter of seconds.
Extreme examples include arms races by nuclear powers and quick escalation of violence between religious groups or soccer fans.
Imagine two engineering teams in a tech company: the backend team and the frontend team.
What started as a technical issue evolved into an organizational conflict through escalation. Neither team was malicious; both believed they were defending their interests. Yet the system's structure trapped both teams in an outcome worse for everyone.
This reflects what Meadows describes as a fundamental principle of systems: the structure of the system determines its behavior. The escalation trap emerges from a system where actions trigger defensive counter-actions, creating reinforcing loops.
Escalation gets expensive, both emotionally and economically.
The antidote to the escalation trap is de-escalation — the intentional action of reducing tension and breaking the reinforcing feedback loop. Rather than matching escalation with escalation, de-escalation introduces a balancing feedback loop that reduces conflict and creates space for resolution.
De-escalation is not a weakness. It's an important skill and a strategic choice to interrupt the negative spiral and create conditions where problems can be solved effectively.
De-escalation operates by changing how you respond to provocative actions. Instead of triggering the next loop in the escalation cycle, you can respond with calm instead of defensiveness, empathy instead of blame, and collaboration instead of adversarialism.
Let's further explore the traffic conflict I previously mentioned. Consider both scenarios:
Without de-escalation:
This is the escalation trap. Two reasonable people became dangerous through escalation.
With de-escalation:
In the second scenario, Driver B made a strategic choice to de-escalate. By not matching Driver A's error with aggressive honking, Driver B broke the escalation loop. The result? Safety, reduced stress, and everyone moving forward.
Now, let's also return to our earlier example of the backend and frontend teams, but imagine a de-escalated version.
De-escalation is a learnable skill that you can develop with deliberate practice.
The escalation trap is a fundamental pattern in systems: small actions trigger reinforcing loops that amplify conflict and drive systems toward destructive extremes.
De-escalation is the lever that interrupts this trap. By choosing calm over defensiveness, empathy over blame, and collaboration over adversarialism, you don't just resolve immediate conflicts—you transform relationships.
Businesses are typically high-pressure environments focused on delivering results and meeting ambitious, often uncertain deadlines. Management deliberately introduces some level of conflict and pressure to drive performance. In this scenario, avoiding the escalation trap is no easy task. It demands both skill and intentional action.
Leaders who understand de-escalation become invaluable. They transform potential crises into opportunities for strengthening teams and improving outcomes.
The next time you're in a conflict — whether with a teammate, customer, or leader — pause and ask yourself: "Will my next action escalate or de-escalate this situation?" Choose wisely.