
Understanding Variables and Data Types in JavaScript

2026-05-01 15:12:11

In this blog, we are going to learn the basic fundamentals of programming by exploring variables and data types in JavaScript.

First, let's try to understand variables. So what are variables?

Variables : Variables are containers that store and manage data.

There are three ways to declare a variable in JavaScript: let, var, and const.

syntax to declare a variable :

let variableName = value;
var variableName = value;
const variableName = value;

You can use let, var, or const according to your use case.

What is scope : Scope defines where a variable can be accessed in a program.

Global Scope : A variable declared outside any function can be accessed anywhere in the program.

Function Scope : A variable declared inside a function cannot be accessed outside that function.

Block Scope : Block scope was introduced in ES6.

Variables declared with let and const can only be accessed inside the block {} where they are declared.

Example : A variable declared with let or const inside an if-else statement is accessible only within that block.

But variables declared with var ignore block scope boundaries and are scoped to the nearest function.
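A quick sketch of this difference (the variable names here are just for illustration):

```javascript
if (true) {
  let blockScoped = "let";    // visible only inside this { } block
  var functionScoped = "var"; // scoped to the nearest function (or global scope)
}

// console.log(blockScoped); // ❌ ReferenceError: blockScoped does not exist here
console.log(functionScoped); // "var" (var ignored the block boundary)
```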

Basic difference between var, let, and const :

var : var is function-scoped.

It can be re-declared and re-assigned (updated).

var x = 10;
var x = 20; // re-declaration allowed

x = 30;     // re-assignment allowed

console.log(x); // 30

let : let is block-scoped.

It can be re-assigned (updated), but cannot be re-declared in the same scope.

let y = 10;

// let y = 20; ❌ Error (cannot re-declare in same scope)

y = 30; // re-assignment allowed

console.log(y); // 30

const : const is also block-scoped.

It cannot be re-declared and cannot be re-assigned.

const z = 10;

// const z = 20; ❌ Error (cannot re-declare)
// z = 30;       ❌ Error (cannot re-assign)

console.log(z); // 10

Naming Rules (Identifiers) to declare a variable :

Variable names must follow these rules to be valid:

  • Variable names must begin with a letter, an underscore (_), or a dollar sign ($). Subsequent characters can also include digits.

  • Variable names are case-sensitive in JavaScript, meaning age and Age are two different variables.

  • You cannot use reserved words such as if, else, for, or function as variable names.
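For example, because names are case-sensitive, age and Age can coexist as two separate variables:

```javascript
const age = 21; // lowercase "age"
const Age = 42; // capitalised "Age" is a completely different variable

console.log(age); // 21
console.log(Age); // 42
```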

Now that you understand variables, let's dive into data types:

Data types : From the name itself, you can understand that a data type means the type of data.

JavaScript is a dynamically typed language.

This means that data types are determined at runtime, not before the program executes. Unlike many other languages, we don't need to declare a data type first.

Lets take an example :

const data = 12

Here, 12 is a piece of data. It can represent the age of a person.

But what is the type of 12? It is a number.

So, "number" is the data type of 12.
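Because the type belongs to the value rather than to the variable, typeof simply reports whatever a variable currently holds. A quick sketch:

```javascript
let data = 12;
console.log(typeof data); // "number"

data = "twelve";          // same variable, new value, new type
console.log(typeof data); // "string"
```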

Data types are categorised into two types.

  1. Primitive Data Types

  2. Non-Primitive Data Types

Data types are categorised according to how they are stored in memory and how we access them.

Primitive Data Types : There are seven primitive data types.

  1. Number : The Number data type is used to represent both integer and floating-point values.

    Unlike many other languages, JavaScript does not have separate data types like int, float, or double.

    const age = 21;       // integer
    const price = 99.99;  // floating-point
    
    console.log(typeof age);   // "number"
    console.log(typeof price); // "number"
    
  2. String : A String is used to represent a collection of characters (text data).

    Immutability:

    Once a string is created, it cannot be changed. Methods like toUpperCase() return a new string instead of modifying the original one.

    const name = "john";
    
    const upperName = name.toUpperCase();
    
    console.log(name);       // "john" (original remains same)
    console.log(upperName);  // "JOHN" (new string)
    
  3. Boolean : The Boolean data type represents only two values: true or false.

    These values are case-sensitive and must be written in lowercase.

    TRUE or FALSE are not valid Boolean values.

    const isLoggedIn = true;
    const hasPermission = false;

    console.log(typeof isLoggedIn);    // "boolean"
    console.log(typeof hasPermission); // "boolean"

  4. null : null is used to represent an intentional absence of value (empty).

    It does not mean 0; it means that no value is assigned.

    let user = null;
    
    console.log(user);        // null
    console.log(typeof user); // "object" (this is a known JavaScript quirk)
    
  5. Undefined : From the name itself, you can understand that it represents a value that is not defined. When a variable is declared but not assigned any value, it is undefined.

    let name;
    
    console.log(name);        // undefined
    console.log(typeof name); // "undefined"
    
  6. BigInt : BigInt is used to represent very large integers that are beyond the limit of the normal Number type.

    In JavaScript, the Number type can safely store integers only up to:

    2^53 - 1

    For values larger than this, we use BigInt.

    const bigNumber = 1234567890123456789012345678901234567890n;
    
    console.log(bigNumber);        // very large number
    console.log(typeof bigNumber); // "bigint"
    
  7. Symbol : A Symbol is used to represent a unique value.

    It is immutable: once created, it cannot be changed. Every Symbol is unique, even if two Symbols share the same description.

    Non-Enumerable: When used as a key in an object, Symbols do not show up in standard loops like for...in or methods like Object.keys().

    const sym1 = Symbol("id");
    const sym2 = Symbol("id");
    
    console.log(sym1 === sym2); // false (always unique)
    

Non-Primitive Data Types : There are two non-primitive data types: Arrays and Objects.

Arrays and Objects are mutable and are stored by reference: multiple variables can share the same underlying data in memory, so changing the data through one variable affects what the others see.
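A quick sketch of that shared reference (variable names are illustrative):

```javascript
const original = [1, 2, 3];
const alias = original; // copies the reference, not the data

alias.push(4);          // mutate through the alias

console.log(original);           // [1, 2, 3, 4] (the original sees the change)
console.log(alias === original); // true (both point at the same array)
```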

Arrays : Arrays store an ordered collection of values in a single variable.

const numbers = [1, 2, 3, 4];

console.log(numbers);

// output : [1, 2, 3, 4]

Arrays are dynamically sized: they grow and shrink automatically when you add or remove values.

An array can store multiple data types:

const multipleData = [1, 2, "email", true];

But in JavaScript, an array is actually a type of object.

When you check its type using typeof, it returns "object".

const numbers = [1, 2, 3];

console.log(typeof numbers); // "object"

So how do you check whether something is an array?

Since typeof is not reliable for arrays, use Array.isArray():

const arr = [1, 2, 3];

console.log(Array.isArray(arr)); // true

If the return value is true, it is an array; otherwise, it is not.

Objects : An Object in JavaScript is used to store data in the form of key–value pairs.

Each key (also called a property) is associated with a value, which can be of any data type.

const user = {
  name: "john",
  age: 21,
  isStudent: true
};

console.log(user.name); // "john"
console.log(user.age);  // 21

  • Values can be of any type (number, string, array, function, etc.)

  • You can access values using dot notation or bracket notation

Bracket notation Example :

const user = {
  name: "john",
  age: 21
};

console.log(user["name"]); // "john"

Summary : In this blog, we explored the fundamentals of JavaScript, starting with variables and their scope. We also understood the concept of dynamic typing and how JavaScript determines data types at runtime.

Further, we discussed variables using var, let, and const, along with their differences in scope, re-declaration, and re-assignment. The concept of scope—global, function, and block scope—was explained with simple examples to build a strong foundation.

We also explored primitive and non-primitive data types.

By the end of this blog, you should have a clear understanding of how JavaScript handles data and variables, which is essential for writing clean and error-free code.

Network Part 1 - The OSI Model as a Fault Map

2026-05-01 15:09:21

Published: March 27, 2026

In a previous post, we watched a single DNS misconfiguration on one AWS server bring 3,500 companies across 60 countries to a standstill. DNS lives at Layer 7. The failure started there.

This kind of thing repeats. On June 21, 2022, a misconfigured BGP route at Cloudflare blocked 50% of all global HTTP traffic. No server was overloaded. No deployment had gone wrong. Packets simply lost their way and looped endlessly through the network. This time, the failure was at Layer 3.

 

Both incidents share one thing: it took far too long to find the cause. Because no one knew which layer had failed.

The OSI model is not a taxonomy for networking textbooks. It's a fault map — a way to pinpoint exactly where a system breaks.

ref. Cloudflare Blog: Cloudflare outage on June 21, 2022

 

Why the Layers Don't Talk to Each Other

Before the fault map makes sense, this question needs an answer. Why does the OSI model split into 7 layers at all? Wouldn't it be more efficient if each layer could see what the others were doing?

In 1968, software engineer Melvin Conway proposed what has since become foundational in systems design:

"Any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure." — Conway's Law

The OSI model is that principle applied to network architecture. Each layer communicates only through a defined interface. Internal implementation stays private. Layer 4 has no idea whether Layer 7 is speaking HTTP or gRPC. Layer 3 doesn't know — or care — whether Layer 4 is TCP or UDP.

This is deliberate ignorance. And that ignorance produces two trade-offs:

  • Freedom to change: Migrating from HTTP/1.1 to HTTP/2 happens entirely within Layer 7. Everything below stays untouched. The layers are decoupled by design.
  • Fault isolation: A routing failure at Layer 3 has no bearing on your application logic at Layer 7. The blast radius is contained to one layer.

That's why the Cloudflare outage could be called "a Layer 3 problem" immediately. Without the layered design, the cause would have been buried somewhere in the full stack.

Each layer chose not to know the others. That's exactly what makes it possible to know which layer broke.

ref. Martin Fowler: Conway's Law

 

Every Layer Has Its Own Breaking Point

Goldratt's Theory of Constraints is direct: the output of any system is capped by its weakest link. Networks are no exception. But the nature of the bottleneck changes depending on which layer you're looking at.

Packets travel down from L7 to L1 on the sender's side — each layer wrapping the data in its own envelope. On the receiving end, they unwrap back up from L1 to L7. Seven layers. Seven handoffs. Under high-volume traffic, one of those handoffs will crack first. The question is which one, and why.

 

L4 — Speed was the goal. Awareness was the price.

Layer 4 is deliberately blind to content. It sees an IP address, a port number, a protocol — TCP or UDP — and nothing else. It never opens the packet. Think of it as a courier that delivers sealed envelopes without knowing what's inside. That's why it's fast.

But that choice has structural consequences. Every TCP connection occupies a port. Port numbers top out at 65,535 — with a realistic working range of around 28,000. Once concurrent connections hit that ceiling, the system stops accepting new ones. No exceptions.

L4's bottleneck is connection count. Ticketing drops, flash sales, live-streamed events — any scenario where thousands of users connect simultaneously runs straight into this wall.
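To put a rough number on that ceiling: on Linux, the default ephemeral port range is commonly 32768–60999 (a configurable kernel default, and presumably where the ~28,000 figure comes from):

```javascript
// Approximate usable outbound ports per source IP, assuming the
// common Linux default ephemeral port range of 32768–60999.
const low = 32768;
const high = 60999;

console.log(high - low + 1); // 28232, roughly the "~28,000" working range
```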

 

L7 — Awareness was the goal. Speed was the price.

Layer 7 sees everything: HTTP headers, URL paths, cookies, request bodies. It reads the packet, understands the context, and makes decisions accordingly. That's enormously powerful.

But that knowledge is expensive. Parsing takes time. Authentication takes time. Decompression, routing logic, business rules — they all stack. The per-request Logic Latency at L7 is higher than anywhere below it by design. As traffic scales, those costs don't just add — they compound.

L4 stays blind and stays fast. L7 stays aware and pays for it. Neither is a flawed design. They made different trade-offs.

 

Pull back to all seven layers, and the picture looks like this:

         Rate of Saturation → 100%
L7  [████████████░░]  Logic Latency spikes   ← felt first
L4  [███████░░░░░░░]  Concurrency ceiling
L3  [█████░░░░░░░░░]  Routing overhead
L1  [███░░░░░░░░░░░]  Throughput saturation  ← when this goes, everything goes

L7 hits the wall first. L1 going down means nothing gets through at all. Under high-volume load, there's only one question that matters: which layer is closest to 100% Saturation right now?

How to resolve L4 and L7 bottlenecks in practice — that's Part 4 (Load Balancers).

ref. Google SRE Book: Monitoring Distributed Systems
ref. RFC 793: Transmission Control Protocol

 

The Bottom Line

The OSI model isn't a protocol classification system. Each layer is an independent failure candidate with its own breaking point. And the reason those layers exist in the first place is itself a trade-off — give up awareness to gain speed, or give up speed to gain awareness.

The layer where Saturation hits 100% first is the constraint. The boundaries between layers are what make that constraint findable — and fixable — without touching everything else.

Engineers who understand this don't panic when something breaks. They don't touch the whole system. They ask which layer. Then they fix that layer.

Next up: Layer 4, up close. We'll look at the hidden cost of TCP's 3-way handshake — the process every connection must complete before a single byte of real data moves. Under load, that turns out to be anything but cheap.

I built a side project that turns YouTube videos into study notes — here's what I learned shipping solo

2026-05-01 15:07:46

I'm a game developer. I ship games at a studio for my day job. I thought that would make building a solo web app easier.

It didn't. Not even close.

This is the story of building and launching Lynote — a tool that turns any YouTube video into structured study notes — and everything I wish I'd known before I started.

The problem

I watch a lot of YouTube to learn. Tutorials, conference talks, university lectures, technical deep-dives. I always felt productive while watching. Then I'd go to actually use what I'd learned and realise almost none of it had stuck.

The root cause was obvious in hindsight: passive consumption. Watching isn't learning. Learning requires doing something with the information — summarising it, questioning it, testing yourself on it.

I wanted a tool that would take a YouTube link and give me back a proper study note. Summary, key takeaways, action items, flashcards. Something I could actually use to review and retain what I'd watched.

I couldn't find exactly what I wanted, so I built it.

The stack

I went with what I knew would let me ship fast:

  • Next.js 16 App Router — file-based routing, server components, clean architecture
  • React 19 — server actions made form handling much simpler than I expected
  • Supabase — auth and database sorted in an afternoon, SSR integration worked smoothly
  • OpenAI Responses API — the core of the generation pipeline
  • Vercel — zero-config deployment, exactly what you want when you're solo

Nothing exotic. The goal was to spend my energy on the product, not the infrastructure.

The interesting technical problem

Getting consistent structured output from an LLM across wildly different video types was harder than I expected.

A 10-minute tutorial has a very different transcript density than a 2-hour university lecture. Short videos would sometimes produce overly padded notes. Long videos would hit token limits or lose coherence toward the end.

I ended up with a two-pass approach for longer content — a chunking pass to extract key segments, then a synthesis pass to generate the final note. It's not perfect but it's consistent enough that the output quality feels predictable regardless of video length.
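As a rough illustration of the idea (not Lynote's actual code), the two-pass pipeline can be sketched like this; chunkTranscript and the summarise/synthesise parameters are hypothetical names:

```javascript
// Pass 0: split a long transcript into chunks that fit a model's context window.
function chunkTranscript(text, maxChars = 8000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// Pass 1 extracts key segments per chunk; pass 2 synthesises the final note.
// The LLM calls are left as injectable stubs so the overall shape stays visible.
async function generateNote(transcript, summarise, synthesise) {
  const chunks = chunkTranscript(transcript);
  const keySegments = await Promise.all(chunks.map(c => summarise(c))); // chunking pass
  return synthesise(keySegments);                                       // synthesis pass
}
```

A real pipeline would likely chunk by tokens rather than characters and overlap adjacent chunks, but the two-pass shape is the part that keeps long videos coherent.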

What I got wrong

I underestimated how long "done" takes. I had a working MVP in about 3 weeks. I spent another 3 months on things I told myself were polish but were actually procrastination — tweaking UI details, refactoring things that worked fine, adding features nobody had asked for yet.

I overthought pricing. I spent days on pricing models when I should have just shipped a free tier and figured it out from real usage data. Token packs are live now. Subscription billing is still not implemented. Ship first, figure out money second.

I underestimated how much the product decisions matter. The engineering was the easy part. Deciding what to show, what to cut, how to structure the output — that took longer than any of the code.

What actually helped

  • Building something I personally needed — I used the tool every day while building it, which meant bugs bothered me personally and the feedback loop was immediate
  • Keeping the stack boring — familiar tools meant I never got stuck on infrastructure
  • Shipping before I felt ready — the version I launched is not the version I wanted to launch, and that's fine

Where it is now

Lynote is live at Lynote. Free tier with 10 notes per day, no account required. Token packs for heavier users.

It's my first solo product launch. If you're a developer who's thought about building something on the side, my only advice is to start smaller than you think you need to and ship faster than feels comfortable.

Happy to answer any questions about the stack, the prompt engineering, or the experience of building solo as a game dev.

Rethinking Focus Apps: An Awareness-Based Approach

2026-05-01 15:05:59

Most focus tools try to reduce distraction by blocking apps or websites.

At first, this seems effective. But over time, it often becomes clear that blocking does not address the underlying habit. It only delays it. When restrictions are removed, the same patterns tend to return.

The Idea

Instead of preventing distractions, the idea is to make them visible. The goal is not to control behaviour, but to introduce a moment of awareness. When you switch away from your task, the app waits briefly and asks, “Was this intentional?” That is the only intervention. There are no restrictions, no lockouts, and no forced limits, and the user remains in control.

How It Works

The workflow is simple:

1. Define your scope

Choose which applications are relevant to your current task

2. Start a session

Set your intention and begin working

3. Detect context changes

If you switch to an application outside your scope, the app notices

4. Prompt for awareness

After a short delay, a prompt appears asking if the action was intentional

5. Continue or return

The decision is left to the user

Design Decisions

No Blocking

Blocking can create friction and often leads to workarounds. This tool avoids that approach entirely.

Local-First

The application does not require accounts or external services.
All data is stored locally on the user’s device.

Minimal Interface

The interface focuses only on what is necessary:

  • defining scopes
  • starting sessions
  • reviewing basic usage

Gentle Interaction

The prompt is not immediate. A short delay helps avoid unnecessary interruptions.

Implementation Overview

The application is built as a desktop app using:

  • Python
  • PySide6 for the interface
  • JSON for local storage

The project is organized into:

  • core/ for monitoring and session logic
  • ui/ for the interface
  • utils/ for storage and helper functions

It is packaged as a Windows executable and available as an open-source project.

Open Source

The project is open source, and contributions are welcome. This can include improving monitoring accuracy, refining the user interface, or suggesting better workflows.

Links

Website:
https://sdishtiyaqahmed.github.io/intent-focus-web

GitHub:
https://github.com/SdIshtiyaqAhmed/intent-focus

Understanding Pointers in Go; The Two Runes (& and *) of Go

2026-05-01 15:02:02

A thorough guide to addresses, pointers, and the elegant dance between them - grounded in how your computer's RAM actually works, with real examples, and an honest look at the trade-offs.

Table of Contents

  1. Let's Start with a Story
  2. How RAM Actually Works
    • RAM is a flat, indexed array of bytes
    • How the CPU reads and writes memory
    • What a "variable" actually is at runtime
    • Memory alignment
  3. The Stack and The Heap
    • The Stack
    • The Heap
    • Escape analysis: Go decides for you
  4. From RAM to Pointers - The Bridge
  5. The & Operator
    • What can & be applied to?
    • The composite literal shortcut
  6. The * Operator
    • The full picture together
  7. Pointer Types
  8. nil Pointers
  9. Pointers in Functions
    • The problem: pass by value
    • The solution: pass the address
    • Performance: avoiding large copies
  10. Pointers & Structs
    • Linked list - the canonical pointer use case
  11. Pointer Receivers
  12. The new() Function
  13. Advantages of Pointers in Go
    • Mutation across function boundaries
    • Avoiding expensive copies
    • Expressing optional values
    • Shared mutable state
    • Recursive data structures
    • Interface satisfaction and polymorphism
  14. Disadvantages & Risks of Pointers in Go
    • Nil pointer dereferences - runtime panics
    • Heap allocations increase GC pressure
    • Pointer indirection degrades cache performance
    • Pointer aliasing makes code harder to reason about
    • Ambiguous data ownership
    • Increased cognitive overhead
  15. When to Use Pointers
    • Use a pointer when…
    • Avoid a pointer when…
  16. Common Gotchas
    • Loop variable capture
    • Returning a pointer to a local variable - safe in Go
    • Pointer comparison
    • Don't over-pointer
  17. Quick Reference Cheat Sheet

Let's Start with a Story

Before we dive into the technical details, let's set the stage with a simple story. This isn't just a metaphor - it's a direct analogy to how pointers work in Go and how they relate to RAM.

🏠 The Address Story

Imagine your town has thousands of houses. Every house has a unique address - say, "42 Elm Street" - and each house contains a value: a family, furniture, secrets.

Now suppose your friend Alice wants to give you her house key. She has two choices: she could photocopy everything inside her house and hand you the copy - expensive, bulky, and changes to your copy don't affect hers. Or she could just write her address on a piece of paper and hand it to you. You now hold a pointer - a small slip of paper that tells you where the real house is. Walk to that address, and you can reach in and change the actual furniture.

In Go, & gives you the address. * lets you walk through the door.

Every Go developer eventually faces the & and * operators. They look cryptic at first - sometimes right next to each other on the same line - but once you understand what they represent at the hardware level, they become second nature. This post is a complete guide: from how RAM actually works, through every nuanced use of these operators, to an honest accounting of their trade-offs.

How RAM Actually Works

Before pointers make complete sense, you need a clear mental model of what RAM is and how your program interacts with it. This isn't hand-wavy background - it's the actual foundation that pointers are built on.

RAM is a flat, indexed array of bytes

Your computer's RAM (Random Access Memory) is, at the hardware level, an enormous contiguous array of bytes. Each byte has a unique numeric index - that index is its memory address. On a modern 64-bit system, addresses are 64-bit integers, giving you a theoretical address space of 2^64 bytes (~18 exabytes). In practice, your OS and hardware limit how much of that space maps to physical chips.

Physical RAM (conceptually)

Address    Byte value
──────────────────────
0x0000     0x00
0x0001     0x4A
0x0002     0x1F
0x0003     0x00
...        ...
0xFFFF...  0x00

When you declare var age int = 42 in Go, the runtime doesn't invent some abstract "variable" - it picks a location in this byte array, writes the binary representation of 42 into it (8 bytes for an int on 64-bit), and associates the name age with that address. The name exists only in your source code and debug symbols. At runtime, it's all addresses and bytes.

How the CPU reads and writes memory

The CPU communicates with RAM through a memory bus. A read: CPU puts an address on the bus, RAM returns the bytes at that location. A write: CPU puts an address and a value on the bus, RAM stores them. This takes nanoseconds - fast, but significantly slower than reading from CPU registers or cache.

This is why CPUs have L1, L2, and L3 caches - small, extremely fast memory banks between the CPU cores and main RAM. When you access an address, the CPU checks its caches first. A cache hit costs ~1–4 cycles. A cache miss - reaching all the way to RAM - costs ~100–300 cycles. That gap is enormous at scale, and it has real implications for how you structure data in Go.

CPU Access Latency (approximate)

Register          ~1 cycle
L1 Cache          ~4 cycles        (32–64 KB per core)
L2 Cache          ~12 cycles       (256 KB – 1 MB per core)
L3 Cache          ~40 cycles       (shared, 4–32 MB)
RAM               ~100–300 cycles  (GBs)
SSD               ~100,000 cycles
HDD               ~40,000,000 cycles

What a "variable" actually is at runtime

When the Go compiler processes your source code, every named variable gets an address assignment - either a stack offset or a heap address. By the time your code runs, the name age is just a shorthand. The emitted machine code uses addresses directly:

// Go source
age := 42
age = age + 1

// Rough equivalent in machine terms
MOV [0xc000014080], 42     // write 42 to address 0xc000014080
MOV RAX, [0xc000014080]    // load from that address into register RAX
ADD RAX, 1                 // add 1
MOV [0xc000014080], RAX    // write result back

This is the key insight: a pointer is simply a variable whose stored value is one of these numeric addresses. There is no magic. It's an integer that the runtime interprets as a location in RAM.

Memory alignment

The CPU doesn't read arbitrary single bytes from RAM in isolation - it reads words (8 bytes on 64-bit, aligned to natural boundaries). A float64 at address 0xc000014000 is one bus transaction. The same value at 0xc000014001 (misaligned) requires two. Go's compiler handles alignment automatically, inserting invisible padding bytes between struct fields where necessary.

type Bad struct {
    A bool    // 1 byte
              // 7 bytes padding (Go inserts this)
    B float64 // 8 bytes - must be 8-byte aligned
    C bool    // 1 byte
              // 7 bytes padding
}
// Total size: 24 bytes - holds only 10 bytes of real data

type Good struct {
    B float64 // 8 bytes
    A bool    // 1 byte
    C bool    // 1 byte
              // 6 bytes padding
}
// Total size: 16 bytes - same data, better layout

Field ordering matters. This is a real micro-optimisation in memory-bound Go programs.

The Stack and The Heap

Your Go program doesn't use RAM as one undifferentiated pool. It carves it into two primary regions with very different characteristics. Knowing which one your data lives in is essential for reasoning about pointer behaviour and performance.

The Stack

The stack is a contiguous block of memory managed in LIFO (last-in, first-out) order. Every goroutine gets its own stack - starting at 2KB in Go, growing dynamically as needed. When a function is called, Go pushes a stack frame onto it: a region holding the function's local variables, arguments, and return values. When the function returns, that entire frame is popped in one operation - the stack pointer register simply moves back.

Stack (grows downward)

High address
┌─────────────────────┐
│  main() frame       │
│    x = 5            │  ← stack pointer before double() call
├─────────────────────┤
│  double() frame     │
│    n = 5 (copy)     │  ← stack pointer during double()
└─────────────────────┘
Low address

When double() returns, its frame is gone instantly.
Stack pointer moves back. No GC. No bookkeeping.

Stack allocation is extremely fast - it's just arithmetic on the stack pointer register. Stack variables are also cache-friendly because they're packed tightly in a small region.

The constraint: the stack frame is temporary. Once a function returns, its frame is gone. If you return the address of a variable that lived on the stack... that address is now dangling. In C, this is undefined behaviour. Go handles it with escape analysis.

The Heap

The heap is a large region of memory managed dynamically. Allocations on the heap persist beyond the function that created them. Go's runtime manages a heap allocator and a garbage collector (GC) that periodically scans for unreferenced objects and reclaims them.

Heap allocation is slower than stack allocation: the allocator must find a free block, update metadata, and potentially trigger GC. Heap objects are also scattered across a large address range - more likely to cause cache misses compared to tightly-packed stack variables.

Memory Layout of a Running Go Program

┌─────────────────────┐  High addresses
│      Stack(s)       │  - one per goroutine, grows downward
│  goroutine 1 ↓      │  - fast, LIFO, automatically freed
│  goroutine 2 ↓      │
├─────────────────────┤
│   (unmapped gap)    │
├─────────────────────┤
│       Heap          │  - grows upward
│  [obj1][obj2]...    │  - GC-managed, persistent
├─────────────────────┤
│   BSS Segment       │  - zero-initialized globals
├─────────────────────┤
│   Data Segment      │  - initialized globals
├─────────────────────┤
│   Text Segment      │  - compiled machine code (read-only)
└─────────────────────┘  Low addresses

Escape analysis: Go decides for you

In Go, you don't call malloc or free. The compiler runs escape analysis - a static pass that determines whether a variable's lifetime can be confined to a stack frame, or whether it needs to "escape" to the heap.

The rules are intuitive:

  • If a variable's address is returned from a function, it escapes (the stack frame will be gone).
  • If a variable is stored in a data structure that outlives the current function, it escapes.
  • If a variable is too large for the stack, it escapes.

func stackAlloc() int {
    x := 42       // stays on stack - doesn't escape
    return x      // value is copied out, address is never exposed
}

func heapAlloc() *int {
    x := 42       // escapes to heap - address is returned
    return &x     // safe in Go - compiler promotes x to heap
}

You can inspect escape decisions yourself:

go build -gcflags="-m" ./...
# Output: ./main.go:6:2: x escapes to heap
#         ./main.go:2:2: x does not escape

Senior engineers use this to find and reduce unnecessary heap allocations in hot paths.

From RAM to Pointers - The Bridge

Now the picture snaps together. You have:

  • RAM: a flat byte array, every location has a numeric address.
  • Stack: fast, temporary, cleaned up automatically when a function returns.
  • Heap: persistent, GC-managed, slower to allocate.
  • Variables: names the compiler associates with specific RAM addresses.

A pointer is exactly what it sounds like: a variable whose stored value is an address in RAM. It points at another location in memory. That's the entire concept.

RAM (partial view of a running Go program)

Address        Value                What it is
──────────────────────────────────────────────────────────────
0xc000014070   [42, 0, 0, 0,        age int = 42
                0,  0, 0, 0]         (8 bytes, little-endian)

0xc000014078   [0x70, 0x40, 0x01,   ptr *int = &age
                0x00, 0xc0, 0x00,    (8 bytes storing address
                0x00, 0x00]           0xc000014070)
  • Reading age: go to 0xc000014070, read 8 bytes, interpret as int → 42
  • Reading ptr: go to 0xc000014078, read 8 bytes, interpret as address → 0xc000014070
  • Reading *ptr: go to 0xc000014078, get 0xc000014070, then go there, read 8 bytes → 42

Two memory reads instead of one. That's dereferencing - and that cost is real, though usually trivial in isolation. It compounds in tight loops.
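To make the "two reads" point concrete, here's a minimal sketch. The `sumValues`/`sumViaPointers` names are illustrative, not from the article - the idea is simply that each element of a pointer slice costs an extra dereference:

```go
package main

import "fmt"

// sumValues reads each int directly from the contiguous backing array.
func sumValues(xs []int) int {
	total := 0
	for _, v := range xs {
		total += v
	}
	return total
}

// sumViaPointers performs an extra memory read per element:
// first the pointer itself, then the int it refers to.
func sumViaPointers(ps []*int) int {
	total := 0
	for _, p := range ps {
		total += *p // dereference: follow the address, then read the value
	}
	return total
}

func main() {
	xs := []int{1, 2, 3, 4}
	ps := make([]*int, len(xs))
	for i := range xs {
		ps[i] = &xs[i]
	}
	fmt.Println(sumValues(xs), sumViaPointers(ps)) // 10 10 - same result, more reads
}
```

Both loops compute the same sum; the pointer version just touches twice as many memory locations per element.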

The & Operator - "Give me the address"

The & symbol placed before a variable is the address-of operator. It evaluates to the memory address of its operand - not the value stored there, but the location in RAM where the value lives.

&x asks: "Where in RAM does x live?"

package main

import "fmt"

func main() {
    age := 42

    fmt.Println(age)   // 42           - the value
    fmt.Println(&age)  // 0xc000014080 - the RAM address

    // & produces a *int (pointer to int)
    var ptr *int = &age
    fmt.Println(ptr)   // 0xc000014080 - same address
}

The type of &age is *int - a pointer to an int. & can be applied to any addressable value: variables, struct fields, array/slice elements, and more.

What can & be applied to?

type Person struct {
    Name string
    Age  int
}

func main() {
    // ✅ Variable
    x := 10
    _ = &x

    // ✅ Struct field
    p := Person{Name: "Alice", Age: 30}
    _ = &p.Age   // *int pointing into the struct's RAM location

    // ✅ Slice element
    nums := []int{1, 2, 3}
    _ = &nums[0]  // *int pointing at first element of backing array

    // ✅ Composite literal - Go heap-allocates it and gives you the pointer
    pp := &Person{Name: "Bob", Age: 25}  // *Person
    _ = pp

    // ❌ NOT addressable - compile error
    // _ = &42         (literal - no stable RAM location)
    // _ = &len(nums)  (function return value - temporary register value)
}

⚠️ Literals like 42 are typically inlined into machine instructions - they don't live at a stable, named RAM address. Taking their address is a compile-time error.

The composite literal shortcut

// Verbose
p := Person{Name: "Alice", Age: 30}
ptr := &p      // *Person

// Idiomatic - allocates on heap, returns pointer immediately
ptr2 := &Person{Name: "Alice", Age: 30}  // *Person

// Seen everywhere in real Go codebases:
resp := &http.Response{StatusCode: 200}
node := &ListNode{Val: 42, Next: nil}

The * Operator - "Go to that address"

The * symbol has two distinct jobs in Go. Conflating them is the most common source of pointer confusion.

Job                Context              Meaning
─────────────────────────────────────────────────────────────────────────────
Type declaration   Type position        *T means "a pointer to T"
Dereference        Expression position  *ptr means "follow this pointer into RAM and give me the value"

*ptr says: "Follow the address. Read what's in RAM at that location."

func main() {
    score := 100

    // & → give me the RAM address   (* in TYPE position)
    var ptr *int = &score

    fmt.Println(ptr)   // 0xc000014080  (a RAM address)

    // * → go to that RAM address    (* in EXPRESSION position)
    fmt.Println(*ptr)  // 100           (the value at that address)

    // Modify through the pointer - writes directly to score's RAM location
    *ptr = 200
    fmt.Println(score) // 200 - score itself changed
}

The full picture together

Step 1: score := 100
  ┌──────────────────────────┐
  │ score  @0xc000014080     │
  │        value = 100       │
  └──────────────────────────┘

Step 2: ptr := &score
  ┌──────────────────────────┐      ┌────────────────────────────┐
  │ score  @0xc000014080     │◄─────│ ptr    @0xc000014088       │
  │        value = 100       │      │        value = 0xc000014080│
  └──────────────────────────┘      └────────────────────────────┘

Step 3: *ptr = 200
  ┌──────────────────────────┐      ┌────────────────────────────┐
  │ score  @0xc000014080     │◄─────│ ptr    @0xc000014088       │
  │        value = 200 ✏️    │      │        value = 0xc000014080│
  └──────────────────────────┘      └────────────────────────────┘

Pointer Types Are Strongly Typed

A *int is a completely different type from a *string or a *Person. The compiler enforces this strictly - no implicit casting between pointer types.

name   := "Alice"
age    := 30
active := true

var pName   *string  = &name    // ✅
var pAge    *int     = &age     // ✅
var pActive *bool    = &active  // ✅

// ❌ Type mismatch - compile error
// pAge = &name   // cannot use *string as *int

// Dereferencing gives back the original type
var n string = *pName   // "Alice"
var a int    = *pAge    // 30

Variable Type     Pointer Type   Dereference Type
─────────────────────────────────────────────────
int               *int           int
string            *string        string
bool              *bool          bool
float64           *float64       float64
Person (struct)   *Person        Person
[]int (slice)     *[]int         []int
*int (pointer)    **int          *int

💡 **int is valid - a pointer to a pointer to an int. You rarely need more than one level of indirection in practice.
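For completeness, here's what that one extra level looks like in practice (the `derefTwice` helper is illustrative, not from the article):

```go
package main

import "fmt"

// derefTwice follows two levels of indirection: pp -> p -> int.
func derefTwice(pp **int) int { return **pp }

func main() {
	x := 7
	p := &x  // *int  - address of x
	pp := &p // **int - address of p

	fmt.Println(derefTwice(pp)) // 7 - two dereferences reach x

	**pp = 9 // writes through both levels into x's RAM location
	fmt.Println(x) // 9
}
```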

nil - The Zero Value of Pointers

A pointer that hasn't been assigned holds the zero value nil - numerically, address 0x0. The OS deliberately leaves address 0 unmapped. Any dereference of a nil pointer causes a segmentation fault, which Go catches and converts into a runtime panic.

🚪 The Phantom Address

A nil pointer is a slip of paper with no address written on it. Your car starts, you pull out of the driveway, and there's nowhere to go. Go terminates the program: runtime: invalid memory address or nil pointer dereference.

func main() {
    var ptr *int        // nil - holds address 0x0
    fmt.Println(ptr)   // <nil>

    // ❌ Panics at runtime - dereferences address 0x0
    // fmt.Println(*ptr)

    // ✅ Always check before dereferencing
    if ptr != nil {
        fmt.Println(*ptr)
    } else {
        fmt.Println("pointer is nil - nothing to read")
    }
}

// Idiomatic Go: return nil to signal "not found"
func findUser(id int) *User {
    if id <= 0 {
        return nil
    }
    return &User{ID: id, Name: "Alice"}
}

⚠️ Returning *T with a possible nil is a contract. The caller is obligated to check. Forgetting to nil-check before dereferencing is one of the most common sources of production panics in Go.
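Honouring that contract on the caller's side looks like this - a sketch using the `findUser` function above, with a hypothetical `displayName` helper that always checks before dereferencing:

```go
package main

import "fmt"

type User struct {
	ID   int
	Name string
}

// findUser returns nil to signal "not found" - the nil is part of its contract.
func findUser(id int) *User {
	if id <= 0 {
		return nil
	}
	return &User{ID: id, Name: "Alice"}
}

// displayName honours the contract: nil-check before any dereference.
func displayName(u *User) string {
	if u == nil {
		return "unknown"
	}
	return u.Name
}

func main() {
	fmt.Println(displayName(findUser(1)))  // Alice
	fmt.Println(displayName(findUser(-1))) // unknown - no panic
}
```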

Pointers in Functions

In Go, function arguments are passed by value - the callee receives a copy, allocated in its own stack frame. Modifying it has no effect on the original.

The problem: pass by value

func double(n int) {
    n = n * 2    // modifies the stack copy, not the original
}

func main() {
    x := 5
    double(x)
    fmt.Println(x)  // 5 - unchanged
}

The solution: pass the address

func double(n *int) {
    *n = *n * 2   // follows the pointer into the caller's stack frame
}

func main() {
    x := 5
    double(&x)
    fmt.Println(x)  // 10
}

📮 The Photocopy vs. House Key

double(x) hands the function a photocopy of 5. It scribbles on the copy and throws it away. Your original is untouched. double(&x) hands it your house key - the function walks to the actual RAM location and changes what's there.

Performance: avoiding large copies

type BigReport struct {
    Title   string
    Data    [10000]float64  // ~80KB
    Summary string
}

// ❌ Copies ~80KB on every call
func processReport(r BigReport) { ... }

// ✅ Copies only 8 bytes (the pointer)
func processReport(r *BigReport) { ... }

In hot paths - data pipelines, request handlers, game loops - this difference is measurable. Benchmarks regularly show 2–5x throughput improvement for moderately sized structs.
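A rough way to see the copy cost yourself - this is a crude timing sketch with hypothetical `sumByValue`/`sumByPointer` functions; for real measurements you'd write testing.B benchmarks and compare allocs/op:

```go
package main

import (
	"fmt"
	"time"
)

// BigReport mirrors the article's struct: the array makes it ~80KB.
type BigReport struct {
	Data [10000]float64
}

// sumByValue receives a full ~80KB copy on every call.
func sumByValue(r BigReport) float64 {
	total := 0.0
	for _, v := range r.Data {
		total += v
	}
	return total
}

// sumByPointer receives only an 8-byte address.
func sumByPointer(r *BigReport) float64 {
	total := 0.0
	for _, v := range r.Data {
		total += v
	}
	return total
}

func main() {
	r := BigReport{}
	for i := range r.Data {
		r.Data[i] = 1
	}

	const iters = 10_000
	start := time.Now()
	for i := 0; i < iters; i++ {
		sumByValue(r)
	}
	valueTime := time.Since(start)

	start = time.Now()
	for i := 0; i < iters; i++ {
		sumByPointer(&r)
	}
	fmt.Println("by value:", valueTime, "by pointer:", time.Since(start))
}
```

The exact numbers depend on your machine and compiler optimisations - treat this as an intuition pump, not a benchmark.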

Pointers & Structs

When you have a pointer to a struct, Go auto-dereferences on dot access - p.Name and (*p).Name are identical. This is pure syntactic sugar.

type Person struct {
    Name string
    Age  int
}

func main() {
    p := &Person{Name: "Alice", Age: 30}

    fmt.Println((*p).Name)  // "Alice" - explicit dereference
    fmt.Println(p.Name)     // "Alice" - identical, idiomatic

    p.Age = 31  // modifies the heap-allocated Person directly
}

Linked list - the canonical pointer use case

type Node struct {
    Value int
    Next  *Node  // pointer to next node - 8 bytes
}

func main() {
    head := &Node{Value: 1}
    head.Next = &Node{Value: 2}
    head.Next.Next = &Node{Value: 3}

    for curr := head; curr != nil; curr = curr.Next {
        fmt.Println(curr.Value)
    }
    // Output: 1, 2, 3
}

A Node value cannot contain another Node value - that would be infinite size at compile time. A *Node is just 8 bytes. This is why recursive data structures require pointers.

Pointer Receivers vs Value Receivers

type Counter struct {
    count int
}

// Value receiver - operates on a copy
func (c Counter) Value() int {
    return c.count
}

// Pointer receiver - operates on the original in RAM
func (c *Counter) Increment() {
    c.count++
}

func (c *Counter) Reset() {
    c.count = 0
}

func main() {
    c := Counter{}
    c.Increment()  // Go auto-takes address: (&c).Increment()
    c.Increment()
    fmt.Println(c.Value())  // 2
    c.Reset()
    fmt.Println(c.Value())  // 0
}

Situation                               Use
──────────────────────────────────────────────────────────────────────
Method needs to modify the receiver     Pointer receiver *T
Struct is large (avoid copying)         Pointer receiver *T
Method is read-only, struct is small    Value receiver T
Any method on the type uses *T          Pointer receiver *T (be consistent)

💡 Mixed receiver sets cause subtle interface satisfaction bugs. If any method uses a pointer receiver, use pointer receivers throughout.

The new() Function

new(T) allocates zeroed storage for type T and returns a *T; for composite types it's equivalent to &T{}. (Despite the name, escape analysis still decides whether that storage lands on the stack or the heap.)

func main() {
    p := new(int)      // *int pointing to 0 on the heap
    fmt.Println(*p)    // 0
    *p = 42
    fmt.Println(*p)    // 42

    s1 := new(Person)  // *Person, all fields zeroed
    s2 := &Person{}    // identical
    _ = s1; _ = s2

    flag := new(bool)  // most natural use - primitive zero-value pointer
    *flag = true
}

💡 Prefer &T{} for structs (allows field initialization). new() is cleaner for primitives.

Advantages of Pointers in Go

Pointers are not just a feature - in the right contexts, they're the correct tool. Here's a precise breakdown of what they buy you.

1. Mutation across function boundaries

The primary reason pointers exist. Go's pass-by-value semantics mean a function cannot modify its caller's data without a pointer. Pointer parameters are an explicit, visible contract: this function will modify the value at this address.

func normalise(v *Vector3) {
    mag := math.Sqrt(v.X*v.X + v.Y*v.Y + v.Z*v.Z)
    v.X /= mag
    v.Y /= mag
    v.Z /= mag
}

The caller sees the mutation. The function signature makes it visible and deliberate.

2. Avoiding expensive copies

For structs beyond a few dozen bytes, passing by pointer is meaningfully faster. The function call overhead drops from O(struct size) to O(8 bytes), and the stack frame is smaller.

// Copies 256 bytes on every call
func render(m Matrix4x4) { ... }

// Copies 8 bytes
func render(m *Matrix4x4) { ... }

In tight loops or hot paths, benchmarks regularly show 2–5x throughput improvements for moderately sized structs.

3. Expressing optional values

A *T can be nil, giving you a clean way to represent the absence of a value - without a separate boolean flag or a magic sentinel.

type Config struct {
    Timeout  *time.Duration  // nil means "use the default"
    MaxRetry *int            // nil means "unlimited"
}

Immediately readable: if the pointer is nil, the field was not set.

4. Shared mutable state

When multiple parts of your code operate on the same data - a cache, a connection pool, an in-memory store - pointers give all of them a reference to the same RAM location.

type Cache struct {
    mu    sync.RWMutex
    store map[string]string
}

func NewCache() *Cache {
    return &Cache{store: make(map[string]string)}
}

// Every caller holding *Cache operates on the same object in RAM

Without pointers, every assignment would copy the cache - updates in one copy would be invisible to others.
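Fleshing out the Cache above with Get/Set methods (a sketch - the method names are illustrative) shows the sharing in action: copying the *Cache copies only the pointer, so every holder mutates the same map:

```go
package main

import (
	"fmt"
	"sync"
)

type Cache struct {
	mu    sync.RWMutex
	store map[string]string
}

func NewCache() *Cache {
	return &Cache{store: make(map[string]string)}
}

// Set writes under the lock - safe for concurrent callers.
func (c *Cache) Set(k, v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.store[k] = v
}

// Get reads under the read-lock.
func (c *Cache) Get(k string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.store[k]
	return v, ok
}

func main() {
	c := NewCache()
	other := c // copies the 8-byte pointer, NOT the cache

	other.Set("region", "eu-west-1")
	v, _ := c.Get("region")
	fmt.Println(v) // eu-west-1 - both names alias the same object in RAM
}
```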

5. Recursive data structures

Trees, linked lists, graphs, tries - any structure where a node references a same-type node requires a pointer. A Node value cannot contain a Node value. A *Node is 8 bytes.

6. Interface satisfaction and polymorphism

Pointer receivers expand the method set of a type. An interface satisfied by *T cannot be satisfied by T alone. Pointer types and interfaces together form Go's core abstraction mechanism for dependency injection and plugin architectures.
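A minimal sketch of that method-set rule (the `Stringer` interface and `describe` function here are defined locally for illustration):

```go
package main

import "fmt"

type Stringer interface {
	String() string
}

type Counter struct{ n int }

// Pointer receiver: String is in the method set of *Counter, not Counter.
func (c *Counter) String() string {
	return fmt.Sprintf("count=%d", c.n)
}

func describe(s Stringer) string { return s.String() }

func main() {
	c := Counter{n: 3}

	fmt.Println(describe(&c)) // ✅ *Counter satisfies Stringer - prints count=3

	// describe(c) // ❌ compile error: Counter does not implement Stringer
	//                  (String method has pointer receiver)
}
```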

Disadvantages & Risks of Pointers in Go

Every advantage has a corresponding cost. Experienced engineers weigh these deliberately.

1. Nil pointer dereferences - runtime panics

The most immediate risk. A *T can be nil, and any dereference panics at runtime. Unlike type errors, there's no static guarantee that a pointer is non-nil. Go does not have non-nullable pointer types. Every *T is implicitly nullable.

func processUser(u *User) {
    fmt.Println(u.Name)  // panics if u is nil - no compiler warning
}

In large codebases, nil checks become tedious and are frequently omitted. Consider whether a *T parameter is truly necessary, or whether a T value would remove the problem entirely.

2. Heap allocations increase GC pressure

Every time a value escapes to the heap, Go's GC is responsible for eventually reclaiming it. In systems with millions of small, short-lived pointer allocations - a common pattern in naively written Go HTTP servers - GC overhead becomes significant. Even Go's low-latency concurrent GC adds latency jitter that's hard to eliminate without rethinking allocation patterns.

// Allocates a new *Response on the heap for every request
func handleRequest(r *http.Request) *Response {
    return &Response{...}
}

// In hot paths, sync.Pool amortises allocations
var pool = sync.Pool{New: func() any { return &Response{} }}

Profile with go tool pprof and check allocs/op in benchmarks. Stack allocations cost nothing to GC - they're freed when the function returns.

3. Pointer indirection degrades cache performance

Modern CPUs are optimised for sequential memory access. When data is laid out contiguously in RAM ([]struct{}), the CPU prefetcher pulls entire cache lines ahead of your loop. When data is a slice of pointers ([]*struct{}), each element is a random jump somewhere in the heap - a potential cache miss on every access.

// Cache-friendly - all Particle data is contiguous in RAM
particles := make([]Particle, 100_000)
for i := range particles {
    particles[i].X += particles[i].VX
}

// Cache-hostile - each pointer is a separate heap allocation
particles := make([]*Particle, 100_000)
for _, p := range particles {
    p.X += p.VX  // potential cache miss on every iteration
}

For large datasets, the throughput difference can be 10x or more. This is why Go's standard library and high-performance Go code strongly prefer value slices over pointer slices.

4. Pointer aliasing makes code harder to reason about

When two pointers point to the same address, a write through one silently changes what the other sees. The compiler cannot assume pointer parameters are distinct, which limits certain optimizations and makes code harder to audit.

func add(a, b, result *int) {
    *result = *a + *b
}

x := 5
add(&x, &x, &x)  // all three alias the same address
// result = 5 + 5, then written to x - order matters here

In concurrent code, aliasing combined with unsynchronized writes produces data races - some of the hardest bugs to reproduce and diagnose.

5. Ambiguous data ownership

With value semantics, ownership is clear: each copy is independent. With pointers, multiple parts of the code may hold a reference to the same object - and it's not always obvious who owns it, who can mutate it, or when it's safe to discard.

Go's GC removes the memory-safety aspect (no use-after-free), but logical ownership ambiguity remains. In complex systems, poorly managed pointer sharing leads to subtle state corruption.

// Who owns cfg? Can handleFoo mutate it? Can handleBar?
// If both do concurrently, do they race?
func setup(cfg *Config) {
    go handleFoo(cfg)
    go handleBar(cfg)
}

Rust's borrow checker enforces ownership at compile time. Go leaves it to convention, documentation, and sync primitives.

6. Increased cognitive overhead

Code passing and returning pointers requires the reader to track multiple levels of indirection. p.Name looks like a value access, but if p is a *Person, it's a dereference followed by a field read. In deeply nested pointer chains, this becomes genuinely difficult to follow, and mutation bugs are non-obvious.

Summary table

                     Value (T)                         Pointer (*T)
───────────────────────────────────────────────────────────────────────────────
Allocation           Stack (usually)                   Heap (usually)
GC pressure          None                              Yes - GC must track and reclaim
Nil risk             None                              Runtime panic if nil
Mutation semantics   Copy - caller unaffected          Shared - caller sees changes
Cache behaviour      Contiguous, prefetcher-friendly   Scattered, potential cache misses
Ownership clarity    Clear - independent copies        Requires explicit discipline
Copy cost on call    O(size of T)                      O(8 bytes) always

When to Use Pointers

Go's philosophy is that you should reach for a pointer deliberately, not reflexively.

✅ Use a pointer when…

  • You need to mutate the original value inside a function or method.
  • The struct is large enough that copying is measurably wasteful (~64–128 bytes as a rough heuristic - benchmark to be sure).
  • You want to express optionality - a nil-able *T instead of a zero value.
  • You're building recursive data structures (trees, linked lists, graphs).
  • You're implementing interfaces where pointer receivers are required.
  • You need shared mutable state across goroutines (with appropriate synchronization).

❌ Avoid a pointer when…

  • The value is small (int, float64, bool, small struct) and doesn't need mutation.
  • You want to signal immutability - a value parameter tells the caller "this function won't touch your data."
  • The type is already a reference type: slices, maps, channels, and interfaces contain internal pointers. Wrapping them in an additional * is almost never necessary.
  • You're iterating over a large dataset - value slices are dramatically more cache-friendly than pointer slices.

⚠️ Slices and maps already have pointer semantics for element mutation. You only need *[]int if the function needs to affect the caller's slice header (e.g. an append that must be visible to the caller).

func modifyElement(s []int) {
    s[0] = 999  // ✅ modifies backing array - visible to caller
}

func appendToSlice(s *[]int) {
    *s = append(*s, 42)  // ✅ caller sees new length
}

func appendWrong(s []int) {
    s = append(s, 42)  // ❌ modifies local copy of slice header only
}

Common Gotchas

1. Loop variable capture

// ❌ Classic bug - all pointers point to the same loop variable
ptrs := make([]*int, 3)
for i := 0; i < 3; i++ {
    ptrs[i] = &i   // &i is the same address every iteration
}
// After loop, i == 3. All three ptrs point to it.
fmt.Println(*ptrs[0], *ptrs[1], *ptrs[2])  // 3 3 3

// ✅ Fix - new variable per iteration
for i := 0; i < 3; i++ {
    v := i
    ptrs[i] = &v
}
fmt.Println(*ptrs[0], *ptrs[1], *ptrs[2])  // 0 1 2

// Note: Go 1.22+ changed loop variable semantics - per-iteration by default

2. Returning a pointer to a local variable - safe in Go

In C, returning &localVar is undefined behaviour - the stack frame is gone. In Go, escape analysis detects this and promotes x to the heap automatically.

func newInt(v int) *int {
    x := v    // compiler promotes x to heap
    return &x // ✅ perfectly safe
}

Run go build -gcflags="-m" to confirm which variables escape.

3. Pointer comparison

a := 42
b := 42
pa, pb := &a, &b

fmt.Println(pa == pb)    // false - different RAM addresses
fmt.Println(pa == &a)    // true  - same address
fmt.Println(*pa == *pb)  // true  - same value at different addresses

Pointer equality checks address identity, not value equality. A frequent source of bugs when engineers expect == to compare pointed-to values.

4. Don't over-pointer

Scattering * everywhere "for performance" backfires: unnecessary heap allocations increase GC pressure, pointer indirection causes cache misses, and nil checks add noise throughout the codebase. Use pointers when you have a concrete reason: mutation, large size, optionality, or shared state.

Quick Reference Cheat Sheet

Expression   Reads as                Result Type   What it does
──────────────────────────────────────────────────────────────────────────────
&x           "address of x"          *T            Returns the RAM address of variable x
*p           "value at p"            T             Reads the value at the RAM address in p
*p = v       "write v to p"          -             Writes v into RAM at the address in p
var p *T     "p is a pointer to T"   *T            Declares p as a pointer (zero value: nil / 0x0)
new(T)       "allocate a T"          *T            Heap-allocates zero-value T, returns pointer
&T{...}      "new T literal"         *T            Heap-allocates initialised T, returns pointer
p == nil     "is p nil?"             bool          True if p holds 0x0 - points nowhere
p.Field      "field via pointer"     field type    Auto-dereferences; identical to (*p).Field

Stay Updated and Connected

To ensure you don't miss any part of this series and to connect with me for more in-depth discussions on Software Development (Web, Server, Mobile or Scraping / Automation), data structures and algorithms, and other exciting tech topics, follow me on:

Stay tuned and happy coding 👨‍💻🚀

AI as a Junior Platform Engineer: How I "Onboard" Coding Agents

2026-05-01 15:00:00

Introduction

The first time I started seriously using AI in my DevOps workflows, I made the same mistake I've seen many others make.
I treated it like a tool.
Something you prompt, get an answer from, and move on. It worked, to a point. But the results were inconsistent. Sometimes surprisingly good, sometimes completely off. It felt less like working with a system and more like rolling dice.
That changed when I started thinking about AI differently. Not as a tool - but as a junior platform engineer joining the team. That shift alone made everything more predictable.

The First Day Problem

When a new engineer joins a team, we don't expect them to be productive immediately. We don't just hand them access to production systems and expect results.
Instead, we onboard them. We give them:

  • context about the system
  • documentation
  • boundaries
  • a safe environment to contribute
  • time to understand how things work

Without that, even a talented engineer will struggle. AI is no different.

Context Is the Difference Between Useful and Dangerous

One of the biggest differences between good and bad AI output is context. Without context, an AI agent will give you generic answers. They might be technically correct, but not aligned with your system, your architecture, or your constraints. This is where something like a context.md file becomes incredibly powerful.
Think of it as the onboarding document you would give a new engineer. It might include:

  • how your infrastructure is structured
  • naming conventions
  • environments and workflows
  • constraints (cost, security, compliance)
  • how Terraform modules are organized
  • what "good" looks like in your system

Once the AI has this context, its suggestions start to feel less generic and more like they belong to your system. Just like a junior engineer who finally understands how things are wired.
Sample context.md:

# Platform Context

## Overview
This repository manages AWS infrastructure using Terraform.
Primary workloads run on EKS clusters across dev, staging, and production environments.

## Key Principles
- Prefer managed services where possible
- Minimize blast radius of changes
- Avoid cross-environment coupling
- All changes must go through PR review

## Terraform Structure
- modules/ → reusable infrastructure components
- envs/dev → development environment
- envs/staging → staging environment
- envs/prod → production environment

## Naming Conventions
- Resources follow: <env>-<service>-<type>
- Example: prod-payments-eks

## Guardrails
- Never modify production directly
- No `terraform apply` without PR approval
- Avoid changes that trigger resource replacement unless explicitly required

## Cost Constraints
- Prefer smaller instance types unless justified
- Autoscaling should always have upper limits defined

## Security
- IAM roles must follow least privilege
- No wildcard permissions unless explicitly approved

## Review Expectations
When reviewing a Terraform plan, focus on:
- Resource replacements
- Changes in networking or IAM
- Scaling or cost implications
- Cross-module impact

## What "Good" Looks Like
- Small, isolated changes
- Clear PR descriptions
- Minimal blast radius

Once I started using something like this, the difference was noticeable.
The AI responses became less generic and more aligned with how the system was actually designed. It started picking up on patterns like naming conventions, environment separation, and even risk signals like resource replacements.
It felt much closer to working with someone who had been onboarded into the system, rather than someone guessing from scratch.

Guardrails Matter More Than Intelligence

When onboarding a new engineer, we don't just give context. We also define boundaries. What they should and should not do. Where they can make changes. What requires review.
AI needs the same guardrails. For example, I'm comfortable letting AI:

  • suggest Terraform changes
  • explain plan outputs
  • summarize pull requests
  • generate draft configurations

But there are clear boundaries. AI should not:

  • directly apply infrastructure changes
  • bypass review processes
  • make decisions that require operational judgment

These are not limitations of capability. They are intentional design choices. Because just like with a new engineer, the goal is not maximum autonomy - it is safe contribution.

Start With PRs, Not Production

When a new engineer joins, we usually don't give them direct production access on day one. We ask them to start with:

  • small changes
  • pull requests
  • code reviews
  • guided feedback

This builds confidence and trust over time. The same model works extremely well with AI. Instead of letting AI operate directly on infrastructure, I treat it as a contributor to the PR workflow. It can:

  • generate changes
  • explain diffs
  • highlight potential issues
  • improve readability

But the final decision still goes through human review. This keeps the system safe while still benefiting from AI acceleration.

Feedback Loops Make It Better

A junior engineer improves with feedback. AI systems also improve with iteration. When something is off, the answer almost never is:

"AI doesn't work"

More often, it means:

"The context was incomplete"
 "The prompt didn't reflect constraints"
 "The guardrails weren't clear"

Over time, refining context and expectations makes AI far more reliable. It starts behaving less like a random generator and more like a team member who understands the system.

The Real Shift

Thinking of AI as a junior platform engineer changes how you design workflows. Instead of asking:

"What can this tool do?"

You start asking:

"How would I onboard someone into this system?"

That question naturally leads you to:

  • better context
  • clearer boundaries
  • safer workflows
  • more predictable outcomes

Closing Thought

AI in DevOps doesn't need to be treated as an autonomous operator. In many cases, it works best as a well-onboarded junior engineer:

  • guided by context
  • constrained by guardrails
  • contributing through safe workflows
  • improving over time

The goal is not to replace engineers. It is to make systems easier to understand, safer to operate, and faster to evolve. And sometimes, the best way to do that is not to give AI more power - but to onboard it more thoughtfully.

Curious to know what you think of this approach.

Originally published on Medium: