The Practical Developer

A constructive and inclusive social network for software developers.
RSS preview of Blog of The Practical Developer

Building with Care: My Week 1 Setup for Bloom After

2026-03-06 22:43:18

By Grace Olabode | Engineering Lead, Bloom After

Writing code is usually about solving business problems. But sometimes, you get to build something that feels like a warm hug for someone going through a tough time.

For the International Women’s Day (IWD) Sprint with the Tabi Project and TEE Foundation, our team is building Bloom After. It’s a website designed to safely support Nigerian mothers dealing with Postpartum Depression (PPD).

As the Engineering Lead for this project, my job for Week 1 (our setup week) was to build a strong foundation. I needed to make sure our coding workspace was set up so that the final website would be fast, safe, and incredibly easy to use. Here is a look at how we started.

Keeping it Fast and Simple

When a mother is tired, stressed, or overwhelmed, the last thing she needs is a slow, frustrating website. We had a strict rule from the start: the site must load in under 3 seconds on a mobile phone.

To make this happen, we kept our tools very simple. Instead of using heavy, complicated coding frameworks, we went back to the basics and built the site using plain HTML, CSS, and JavaScript.

We also made strict rules for how our team writes and shares code on GitHub. We set up a system where nobody is allowed to push code directly to the live website. Everything must be checked and approved first, which stops the site from accidentally breaking.

My Work This Week

As the Engineering Lead, I had to make sure my team could code smoothly without stepping on each other's toes. Here is what I focused on:

  • I wrote the Technical Requirements Document (TRD). This is basically our rulebook. It tells the team exactly how to organize our files and how to set up our database to safely hold things like clinic locations and community stories.

  • I set up our JavaScript so we could build things like the top navigation bar just once, and reuse it on every page. This saves the team a massive amount of time.

  • I carefully checked the code my teammates wrote for our Landing Page to make sure it looked great and worked perfectly on mobile phones.

What I Learned This Week

  • Having a great plan doesn't matter if the team isn't talking. Constantly chatting and answering questions in Slack saved us from a lot of mistakes.

  • Building the website with plain HTML, CSS, and JavaScript reminded me that you don't always need the newest and fanciest tools to make something beautiful and functional.

  • Sprints move very fast. Knowing when to step back, take a breath, and just be a "cute potato" for a little while is super important so the team doesn't burn out!

Team Shoutouts

A strong foundation is only as good as the team building on top of it. A massive thank you to the amazing people building this with me:

  • Nanji Lakan (Product Lead): For making sure we are building exactly what these mothers need.

  • Agugua Genevieve (Design Lead): For designing a website that feels safe, warm, and welcoming.

  • Chijioke Uzodinma (Backend Lead): For helping me plan the database perfectly.

  • Prisca Onyemaechi (Lead Maintainer): For keeping our GitHub files perfectly organized.

  • Christine Mwangi (@username): For keeping everyone happy, focused, and moving forward.

  • Adejola Esther: For

  • Ajibola Sophia:

Week 1 is officially done and our first pages are live! Next week, we dive into building the actual resource library and the clinic finder map. Let’s build something that matters.

Link to our live landing page:
Link to our GitHub Repo: https://github.com/Tabi-Project/Bloom-After.git

I built a Claude Code skill that screenshots any website (and it handles anti-bot sites too)

2026-03-06 22:40:32

TL;DR

Automate screenshot capture for any URL with JavaScript rendering and anti-ban protection — straight from your AI assistant.

Taking a screenshot of a webpage sounds trivial, until you need to do it at scale. Modern websites throw every obstacle imaginable in your way: JavaScript-rendered content that only appears after a React bundle loads, bot-detection systems that serve blank pages to automated headless browsers, geo-blocked content, and CAPTCHAs that appear the moment traffic patterns look non-human. For a handful of URLs you can get away with Puppeteer or Playwright. For hundreds or thousands? You need infrastructure built for the job.

The Zyte API was designed specifically for this problem. It handles JavaScript rendering, anti-bot fingerprinting, rotating proxies, and headless browser management so you don't have to. And what better way to use it than straight from your LLM, which supplies the URLs? That's why I created the zyte-screenshots Claude Skill, which triggers the entire workflow (API call, base64 decode, PNG saved to your filesystem) just by chatting with Claude.

In this tutorial, we'll walk through exactly how the skill works, how to set it up, and how to use it to capture production-quality screenshots of any URL.

Why Use the Zyte API for Screenshots?

Before diving into the skill itself, it's worth understanding what makes the Zyte API uniquely suited to screenshot capture at scale.

1. Full JavaScript Rendering

Single-page applications built with React, Vue, Angular, or Next.js don't serve their content in the raw HTML response; they render it client-side after the page loads. Tools that capture the raw HTTP response get back a blank shell. Zyte's screenshot endpoint fires up a real headless browser, waits for the DOM to fully settle, then captures the final rendered state.

2. Anti-Bot and Anti-Ban Protection

Enterprise-grade sites use fingerprinting libraries to detect automation. They check TLS fingerprints, browser headers, canvas rendering patterns, mouse movement entropy, and dozens of other signals. Zyte's infrastructure is battle-tested to pass these checks, so your screenshots won't come back as an "Access Denied" page.

3. Scale Without Infrastructure

Managing a fleet of headless browser instances, proxy rotation, retries, and residential IP pools is a serious engineering investment. Zyte abstracts all of this into a single API call.

4. One API, Any URL

Whether the target is a static HTML page, a JS-heavy SPA, a behind-login dashboard (with session cookies), or a geo-restricted site, the same API call structure works. The skill you're about to install uses this endpoint.

What Is the zyte-screenshots Claude Skill?

Claude Skills are reusable instruction packages that extend Claude's capabilities with domain-specific workflows. The zyte-screenshots skill teaches Claude how to:

  • Accept a URL from the user in natural language
  • Read the ZYTE_API_KEY environment variable
  • Construct and execute the correct curl command against https://api.zyte.com/v1/extract
  • Pipe the JSON response through jq and base64 --decode to produce a PNG file
  • Derive a clean filename from the URL (e.g. https://quotes.toscrape.com becomes quotes.toscrape.png)
  • Report the exact file path and describe what's visible in the screenshot in one sentence

In practice, this means you can open Claude, say "screenshot https://example.com", and have a pixel-perfect PNG on your filesystem in seconds: no browser, no script, no Puppeteer config.
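As a rough illustration of the filename-derivation step described above, here is a sketch in plain shell. This is not the skill's actual code; `derive_name` is a hypothetical helper that applies the rule as I understand it (strip the scheme and path, drop a leading "www.", swap the TLD for ".png"):

```shell
# Hypothetical sketch of the filename rule the skill applies.
derive_name() {
  local host="${1#*://}"           # remove "https://" or "http://"
  host="${host%%/*}"               # keep only the hostname
  host="${host#www.}"              # drop a leading "www."
  printf '%s.png\n' "${host%.*}"   # replace the final ".tld" with ".png"
}

derive_name "https://quotes.toscrape.com"  # → quotes.toscrape.png
```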

Prerequisites

Before installing the skill, make sure you have the following:

Tools

  • curl: Pre-installed on macOS and most Linux distributions. On Windows, use WSL or Git Bash.
  • jq: A lightweight JSON processor. Install via brew install jq (macOS) or sudo apt install jq (Ubuntu/Debian).
  • base64: Standard on all Unix-like systems.
  • Claude desktop app with Skills support enabled.

A Zyte API Key

Sign up at zyte.com and navigate to your API credentials. The free tier includes enough credits to get started with testing. Copy your API key; you'll set it as an environment variable.

💡 Pro tip: Set ZYTE_API_KEY in your shell profile (~/.zshrc or ~/.bashrc) so it's always available: export ZYTE_API_KEY="your_key_here". Alternatively, pass the key along with your prompt.

Installing the Skill

Step 1: Download the Skill from GitHub

The skill is open source and available at github.com/apscrapes/claude-zyte-screenshots. Download the latest release ZIP from the repository's Releases page, or clone it directly:

git clone https://github.com/apscrapes/claude-zyte-screenshots.git

Step 2: Import into Claude

  1. Open the Claude desktop app or go to Claude.ai
  2. Navigate to Settings → Skills
  3. Click Import Skill
  4. Select the SKILL.md file from the downloaded ZIP
  5. Claude will confirm the skill has been installed

Step 3: Set Your API Key

In your terminal, export your Zyte API key (or pass it along with your prompt):

export ZYTE_API_KEY="your_zyte_api_key_here"

To make this permanent across terminal sessions, add the export line to your ~/.zshrc or ~/.bashrc file.

Using the Skill: Step-by-Step

Basic Screenshot

Once the skill is installed and your API key is set, using it is as simple as asking Claude:

Screenshot https://quotes.toscrape.com using zyte-screenshots skill

Claude will:

  1. Verify ZYTE_API_KEY is available in the environment
  2. Derive the output filename: quotes.toscrape.png
  3. Execute the following command:
curl -s https://api.zyte.com/v1/extract \
  -u "$ZYTE_API_KEY": \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://quotes.toscrape.com",
    "screenshot": true
  }' \
| jq -r '.screenshot' \
| base64 --decode > quotes.toscrape.png
  4. Verify the file is non-empty
  5. Open and inspect the image
  6. Return the file location and a one-line visual description

Example output from Claude:

✅ Screenshot captured!

📁 Location: /home/you/quotes.toscrape.png

🖼️  What's in it: A clean light-themed page listing inspirational quotes
   with author attributions and tag filters in the sidebar.

Under the Hood: How the API Call Works

Let's break down the exact curl command the skill executes:

curl -s https://api.zyte.com/v1/extract \
  -u "$ZYTE_API_KEY": \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://target-site.com",
    "screenshot": true
  }' \
| jq -r '.screenshot' \
| base64 --decode > output.png

curl -s — Silent mode; suppresses progress output.

-u "$ZYTE_API_KEY": — HTTP Basic Auth. Zyte uses the API key as the username with an empty password.

-H "Content-Type: application/json" — Tells the API to expect a JSON body.

-d '{...}' — The JSON request body. Setting screenshot: true instructs Zyte to return a base64-encoded PNG of the fully rendered page.

| jq -r '.screenshot' — Extracts the raw base64 string from the JSON response.

| base64 --decode — Decodes the base64 string into binary PNG data.

> output.png — Writes the binary data to a PNG file.

The Zyte API handles everything in between — spinning up a headless Chromium instance, loading the page with real browser fingerprints, waiting for JavaScript execution to complete, and rendering the final DOM to a pixel buffer.
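One caveat worth hedging: if the API returns an error body instead of a screenshot, `jq -r '.screenshot'` prints the literal string `null`, and piping that through `base64 --decode` writes a corrupt file. A small guard avoids this. The `decode_screenshot` helper below is my own sketch, not part of the skill:

```shell
# Sketch: refuse to decode when the screenshot field is missing or null.
decode_screenshot() {
  local b64="$1" out="$2"
  if [ -z "$b64" ] || [ "$b64" = "null" ]; then
    echo "No screenshot in API response" >&2
    return 1
  fi
  printf '%s' "$b64" | base64 --decode > "$out"
}

# Usage: capture the response first, then decode only if it looks valid.
# b64=$(curl -s ... | jq -r '.screenshot')
# decode_screenshot "$b64" output.png
```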

This was a fun weekend project I put together. Let me know your thoughts on our Discord, and feel free to play around with it. I'd also love to hear if you create any useful Claude Skills or MCP servers, so come say hi.

Tags: web scraping • Zyte API • screenshots at scale • JavaScript rendering • anti-bot • Claude AI • Claude Skills • automation • headless browser • site APIs

Customizing the macOS Terminal with help from AI

2026-03-06 22:36:27

After spending a year traveling the world, I've acquired a new MacBook Air and had to set up my terminal interface again.

I do not normally enjoy this process, but I loved it this time.
A continuous conversation with AI (Gemini, specifically) led to every random idea I had becoming realized in my .zshrc file.

There are some goodies in here I suspect you'll enjoy too.

Basic settings and imports

# # # # # # # # # # # # #
# import things we need #
# # # # # # # # # # # # #

# load 'live' hooks to execute things every call,
# load TAB completion
autoload -Uz add-zsh-hook compinit

# # # # # # # # # # # # #
# Enable Tab Completion #
# # # # # # # # # # # # #

# 1. Prevent duplicate entries in paths
typeset -U FPATH PATH
# 2. Add Homebrew to FPATH for even better tab completion (if it's not present already)
FPATH="/opt/homebrew/share/zsh/site-functions:${FPATH}"
# 3. Enable TAB completion
compinit

# # # # # # # # # # # # #
# Toggling Misc options #
# # # # # # # # # # # # #

# make ls pretty
export CLICOLOR=1

Basic aliases

# make the current terminal tab aware of things just installed or changed
alias reload='source ~/.zshrc'

# shortcut to go to my main projects folder
alias pj="cd ~/Desktop/PropJockey"

# Launch Gemini in a frameless chrome window
alias gemini="/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --app=https://gemini.google.com/app"

Whenever you modify the .zshrc file, which I did about a hundred times during this, you either have to open a new tab or source the file again, which I've aliased to a simple reload.

I wound up installing Gemini using Chrome's built in "Install as App" feature which had the added benefit of getting Gemini in my dock. 🤙 Sweet.


Lock Function

One of the first things I do when setting up a Mac is turn sleep into hibernate. When I close the lid, I'm done for the night; no reason to sleep.

sudo pmset -a hibernatemode 25

Unfortunately, no matter how you've configured your lock screen settings, after only a minute idle on the lock screen the Mac gets bored and falls asleep. In my case that hibernates it, and I have to sit through memory reloading from the hard drive to log back in.

The fix:

# Lock the screen without sleeping the system and keep it awake until ctrl + c to quit
# alias lock='shortcuts run "LockMac" && caffeinate -i -d -t 3600'
# ^ great but this is better because I don't have to ctrl + c!
lock() {
  # Start caffeinate in the background, disown it, hide its output, and save its Process ID (PID)
  caffeinate -i -d -t 3600 &>/dev/null &!
  local caff_pid=$!

  shortcuts run "LockMac"

  # Give macOS a bit to register the locked state
  sleep 3

  # The Magic Loop: Poll the Mac's hardware registry. 
  # This specific string ONLY exists while the screen is actively locked.
  while ioreg -n Root -d1 | grep -q "CGSSessionScreenIsLocked"; do
    sleep 2
  done

  # You're back! Kill the caffeinate process silently
  kill $caff_pid 2>/dev/null

  # Print a Bifrost-styled welcome back message
  # print -P "%F{cyan}🛸 Welcome back. Sleep prevention disabled.%f"
}

I always have at least one terminal open, so I just type lock and I'm off to the bathroom in my hostel. The Mac stays wide awake the whole time, so when I'm back I can simply type my password and continue.

I use this often

Screenshot of a terminal tab with the lock command run many times

Automatic Node & NPM version mismatch warning

One of the things any seasoned Node developer has done time and time again is accidentally run their project with the wrong Node version, or worse, installed packages with it.

I wanted to prevent that as best I could, so I requested a function that prints the current Node and NPM versions alongside the versions specified in .nvmrc or package.json, with pass/fail indicators based on a comparison of the values.

# Checks current versions against .nvmrc or package.json requirements
nvi() {
  # 1. Ensure Node is actually installed
  if ! command -v node &> /dev/null; then
    print -P "%F{red}✘ Node.js is not installed or not in PATH.%f"
    return 1
  fi

  local current_node=$(node -v)
  local current_npm=$(npm -v)
  local req_node=""
  local req_npm=""

  # 2. Look for project requirements (.nvmrc takes priority for node)
  if [[ -f ".nvmrc" ]]; then
    req_node=$(cat .nvmrc | tr -d '\n' | tr -d '\r')
  fi

  # Fallback to package.json engines if it exists
  if [[ -f "package.json" ]]; then
    if [[ -z "$req_node" ]]; then
      # Safely extract engines.node using Node itself
      req_node=$(node -e "try { console.log(require('./package.json').engines.node || '') } catch(e) {}" 2>/dev/null)
    fi
    # Extract engines.npm
    req_npm=$(node -e "try { console.log(require('./package.json').engines.npm || '') } catch(e) {}" 2>/dev/null)
  fi

  # 3. Print Current Versions
  print -P "%F{magenta}Current Node:%f %F{cyan}${current_node}%f"
  print -P "%F{magenta}Current NPM:%f  %F{cyan}v${current_npm}%f"

  # 4. Evaluate and Print Requirements (if they exist)
  if [[ -n "$req_node" || -n "$req_npm" ]]; then
    echo ""
    print -P "%F{242}Project Requirements:%f"

    # Node Requirement Check
    if [[ -n "$req_node" ]]; then
      # Extract just the major version number for a reliable comparison
      local clean_current=$(echo "$current_node" | grep -oE '[0-9]+' | head -1)
      local clean_req=$(echo "$req_node" | grep -oE '[0-9]+' | head -1)

      if [[ "$clean_current" == "$clean_req" || "$current_node" == *"$req_node"* ]]; then
        print -P "%F{green}✔ Node:%f ${req_node}"
      else
        print -P "%F{red}✘ Node:%f ${req_node} (Mismatch detected)"
      fi
    fi

    # NPM Requirement Check
    if [[ -n "$req_npm" ]]; then
      local clean_current_npm=$(echo "$current_npm" | grep -oE '[0-9]+' | head -1)
      local clean_req_npm=$(echo "$req_npm" | grep -oE '[0-9]+' | head -1)

      if [[ "$clean_current_npm" == "$clean_req_npm" || "$current_npm" == *"$req_npm"* ]]; then
        print -P "%F{green}✔ NPM:%f  ${req_npm}"
      else
        print -P "%F{red}✘ NPM:%f  ${req_npm} (Mismatch detected)"
      fi
    fi
  fi
}

Just run nvi (Node Version Information) and we're golden...

...but we could be platinum if it ran automatically whenever you change into a directory where Node is expected!

# # # # # # # # # # # # #
# Run nvi automatically #
# # # # # # # # # # # # #

auto_check_node_env() {
  # Check if we are in a folder with Node files
  if [[ -f "package.json" || -f ".nvmrc" ]]; then
    echo "" # Add a blank line for visual breathing room
    nvi
  fi
}
# Attach it to the 'Change Directory' hook
add-zsh-hook chpwd auto_check_node_env

🤌 Great success.

Now every time I cd into the root of one of my Node projects, this runs automatically, so I'm shown the versions without needing to remember to check. I stopped short of letting it nvm into the "right" version automatically, hah.

The Beautifully Colorful Path Highlighting and Git Info

First, I installed this color theme called Bifrost:

A terminal color theme palette display with corresponding color codes

Then, for the current directory path, I wanted to grey out the roots of the path, highlight the project name by detecting if it's in a git repository, then highlight everything after the project name in a different color.

The end result is the most significant parts being easy to distinguish and effortlessly identified at a glance. I. Love. It.

Screenshot of my terminal showing the current path dissected into colors

There's all kinds of other stuff in here like basic git status indicators, the branch, separate counts of staged and unstaged files, etc.

The first part of the prompt indicates whether the previous command completed or the reason it failed (slightly redundant, but I like it), followed by a completion timestamp AND the number of seconds the previous command took (if it was more than one).

A couple line breaks after that and the typical PROMPT shows up.

# # # # # # # # # # # # #
# Meta info of last cmd #
# # # # # # # # # # # # #

# 1. Capture the start time of a command
preexec() {
  timer=${timer:-$SECONDS}
}
# 2. Calculate the difference and format it
calculate_timer() {
  if [[ -n "$timer" ]]; then
    timer_show=$(($SECONDS - $timer))
    if [[ $timer_show -ge 1 ]]; then
        export ELAPSED="%F{yellow}${timer_show}s %f"
    else
        export ELAPSED=""
    fi
    unset timer
  else
    # THE FIX: If no command was run, clear the old time!
    export ELAPSED=""
  fi
}
add-zsh-hook precmd calculate_timer


# # # # # # # # # # # # #
# Best git aware prompt #
# # # # # # # # # # # # #

# look inside the variables every time the PROMPT is printed
setopt PROMPT_SUBST

# Fetch the Mac Display Name
DISPLAY_NAME=$(id -F)

# Define the raw escape codes for Dim and Reset
DIM=$'\e[2m'
RESET=$'\e[22m'
# %{${DIM}%}%F{yellow}%~%f%{${RESET}%}

set_surgical_path() {
  local exit_code=$? # capture exit code of previous command

  local repo_root
  # Get the absolute path to the repo root
  repo_root=$(git rev-parse --show-toplevel 2>/dev/null)
  local GIT_INFO="" # Start with a blank slate

  if [[ -n "$repo_root" ]]; then
    local full_path=$(pwd)

    # 1. Everything BEFORE the repo root
    # We take the directory of the repo_root (its parent)
    local parent_dir=$(dirname "$repo_root")
    local prefix="${parent_dir}/"
    prefix="${prefix/#$HOME/~}" # Clean up Home path

    # 2. The Repo folder itself
    local repo_name=$(basename "$repo_root")

    # 3. Everything AFTER the repo root
    # We remove the repo_root path from the full_path
    local suffix="${full_path#$repo_root}"

    # Assemble: Dim Prefix + Bright Repo + Dim Suffix
    DYNAMIC_PATH="%{${DIM}%}%F{white}${prefix}%f%{${RESET}%}"
    DYNAMIC_PATH+="%F{magenta}${repo_name}%f"
    DYNAMIC_PATH+="%F{blue}${suffix}%f"
    # Get the branch name, fallback to short hash if in detached HEAD
    local git_branch=$(git branch --show-current 2>/dev/null || git rev-parse --short HEAD 2>/dev/null)

    # Check for Action States (Merging / Rebasing)
    local git_action=""
    if [[ -f "$repo_root/.git/MERGE_HEAD" ]]; then
      git_action="%F{yellow}(merge)%f "
    elif [[ -d "$repo_root/.git/rebase-merge" || -d "$repo_root/.git/rebase-apply" ]]; then
      git_action="%F{yellow}(rebase)%f "
    fi

    # Check Clean/Dirty Status
    local git_state=""
    local git_status=$(git status --porcelain 2>/dev/null)

    if [[ -z "$git_status" ]]; then
      git_state="%F{green}✔%f" # Perfectly clean
    else
      # Check for staged changes (+) and unstaged/untracked files (*)
      # Count lines matching staged (column 1) and unstaged/untracked (column 2)
      local staged_count=$(echo "$git_status" | grep -c '^[AMRCD]')
      local unstaged_count=$(echo "$git_status" | grep -c '^.[MD?]')

      # If there are staged files, add the yellow count
      if [[ "$staged_count" -gt 0 ]]; then
        git_state+="%F{yellow}+${staged_count}%f"
      fi

      # If there are unstaged/untracked files, add the red count
      if [[ "$unstaged_count" -gt 0 ]]; then
        # Add a space for readability if we also have staged files (e.g., +2 *3)
        [[ -n "$git_state" ]] && git_state+=" "
        git_state+="%F{red}*${unstaged_count}%f"
      fi
    fi

    # Assemble the final Git string
    GIT_INFO=" on %F{cyan}${git_branch}%f ${git_action}${git_state}"
  else
    # Not in a repo: Show the whole path dimmed
    DYNAMIC_PATH="%{${DIM}%}%F{magenta}%~%f%{${RESET}%}"
  fi

  # Define the Prompt
  # %D{%H:%M:%S} = Hour:Minute:Second
  # %n = username
  # %~ = current directory (shortened)
  # %# = % for users, # for root (I replaced this with: %(!.#.$))
  # %(!.#.$) = # for root, $ for users
  # The %(?.X.Y) syntax means: If last exit code was 0, show X, else show Y ## %(?.%F{green}Completed.%F{red}Stopped)%f

  local nl=$'\n' # ${nl}
  local status_line=""
  if [[ $HISTCMD -ne $LAST_HISTCMD ]]; then
    local status_text=""

    # 2. The Exit Code Skyscraper
    case $exit_code in
      0) status_text="%F{green}Completed" ;;

      # The "Oh, I meant to do that" code
      130) status_text="%F{yellow}Stopped" ;; # SIGINT (Ctrl+C)

      # The "Wait, what just happened?" codes
      126) status_text="%F{magenta}Denied (126)" ;; # Permission denied / Not executable
      127) status_text="%F{magenta}Not Found (127)" ;; # Typo in command name

      # The "Violent Crash" codes
      137) status_text="%F{red}Killed / Out of Memory (137)" ;; # SIGKILL (Often the Out-Of-Memory killer)
      139) status_text="%F{red}Segfault (139)" ;; # SIGSEGV (Memory access violation)
      143) status_text="%F{yellow}Terminated (143)" ;; # SIGTERM (Graceful kill command)

      # The Catch-All for standard script errors
      *) status_text="%F{red}Exit code: ${exit_code}" ;;
    esac

    status_line="${nl}%{${DIM}%}${status_text}%f%{${RESET}%} %F{242}at %D{%H:%M:%S}%f ${ELAPSED}${nl}${nl}"
  fi
  LAST_HISTCMD=$HISTCMD

  local name_path_git="%F{magenta}${DISPLAY_NAME}%f in ${DYNAMIC_PATH}${GIT_INFO}"
  local input_indicator="${nl}%{${DIM}%}%F{yellow}%(!.#.$)%f%{${RESET}%}"

  PROMPT="${status_line}${name_path_git}${input_indicator} "
}

# 4. Tell the surgical path function to run before every prompt
add-zsh-hook precmd set_surgical_path

The final touch

Place a little 👽 Extra Terrestrial emoji in the right prompt so I can scroll up the wall of text and find my inputs a little more easily.

# # # # # # # # # # # # #
# A little splash of me #
# # # # # # # # # # # # #

RPROMPT='👽'


# # # # # # # # # # # # #
# This was a triumph. ♫ #
# # # # # # # # # # # # #

# I'm making a note here: Huge success.


# # # # # # # # # # # # #
# Things added by tools #
# # # # # # # # # # # # #

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

Open Contact 👽

Please do reach out if you need help with any of this, have feature requests, or want to share what you've created!

PropJockey.io CodePen DEV Blog GitHub Mastodon

🦋@JaneOri.PropJockey.io

𝕏@Jane0ri

My heart is open to receive abundance in all forms, flowing to me in many expected and unexpected ways.

Venmo
https://account.venmo.com/u/JaneOri

PayPal Donate to PropJockey
https://www.paypal.com/cgi-bin/webscr...

Ko-Fi
https://ko-fi.com/janeori

ETH
0x674D4191dEBf9793e743D21a4B8c4cf1cC3beF54

BTC
bc1qe2ss8hvmskcxpmk046msrjpmy9qults2yusgn9

XRP (XRPL Mainnet)
X7zmKiqEhMznSXgj9cirEnD5sWo3iZPqbNPChdEKV9sM9WF

XRP (Base Network)
0xb4eBF3Ec089DE7820ac69771b9634C9687e43F70

How to Recover Claude Code OAuth Token in 30 Seconds

2026-03-06 22:35:28

TL;DR

When Claude Code fails over SSH, extract the OAuth token from macOS Keychain. Inject it into CLAUDE_CODE_OAUTH_TOKEN. This restores access in 30 seconds without breaking tmux sessions.

Prerequisites

  • macOS (Keychain access required)
  • Claude Code CLI v2.1.45 or later installed
  • SSH access to your Mac or working in a tmux session
  • An OAuth token generated by running claude setup-token

Problem: SSH Can't Access Keychain

Claude Code stores OAuth tokens in macOS Keychain (service name: Claude Code-credentials). When running over SSH, Keychain is locked and the token is inaccessible.

$ echo "test" | claude -p
Error: Authentication required. Run 'claude auth login'

Running claude auth login opens a browser, which doesn't work over SSH.

Solution: Pass the Token via Environment Variable

Claude Code prioritizes the CLAUDE_CODE_OAUTH_TOKEN environment variable over Keychain. Extract the token from Keychain and set it as an environment variable to make SSH access work.

Step 1: Extract Token from Keychain

On a local Mac terminal (not via SSH), run:

security find-generic-password -s 'Claude Code-credentials' -w

Output example (JSON format):

{"accessToken":"sk-ant-oat01-qCV5O13G...bcGbVQAA","refreshToken":"...","expiresAt":"2027-02-18T07:00:00.000Z"}

Copy the entire JSON output.

Step 2: Set Environment Variable

If connected via SSH, set the environment variable:

export CLAUDE_CODE_OAUTH_TOKEN='{"accessToken":"sk-ant-oat01-qCV5O13G...bcGbVQAA","refreshToken":"...","expiresAt":"2027-02-18T07:00:00.000Z"}'

Step 3: Inject into tmux Session (if using tmux)

If Claude Code is running in a tmux session, update the tmux environment:

tmux set-environment -t mobileapp-factory CLAUDE_CODE_OAUTH_TOKEN '{"accessToken":"sk-ant-oat01-qCV5O13G...bcGbVQAA","refreshToken":"...","expiresAt":"2027-02-18T07:00:00.000Z"}'

Load the environment variable inside the tmux session:

export CLAUDE_CODE_OAUTH_TOKEN=$(tmux show-environment -t mobileapp-factory CLAUDE_CODE_OAUTH_TOKEN | cut -d= -f2-)

Step 4: Verify

echo "test prompt" | claude -p --allowedTools Bash,Read,Write

If it works, you're done.

Persistence: Save to .zshrc and .openclaw/.env

To avoid repeating this every time, save the token to configuration files:

Mac Mini

# Add to ~/.zshrc
export CLAUDE_CODE_OAUTH_TOKEN='{"accessToken":"sk-ant-oat01-qCV5O13G...bcGbVQAA","refreshToken":"...","expiresAt":"2027-02-18T07:00:00.000Z"}'

OpenClaw Gateway

# Add to ~/.openclaw/.env
CLAUDE_CODE_OAUTH_TOKEN='{"accessToken":"sk-ant-oat01-qCV5O13G...bcGbVQAA","refreshToken":"...","expiresAt":"2027-02-18T07:00:00.000Z"}'

OpenClaw Gateway automatically loads .openclaw/.env on startup.

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| security: SecKeychainSearchCopyNext: The specified item could not be found in the keychain. | Never ran claude setup-token | Run claude setup-token in a local terminal and authenticate via browser |
| Error: Authentication required | Environment variable not set | Re-run Step 2 |
| Environment variable not reflected in tmux | tmux environment not updated | Run Step 3 |
| Token expired | expiresAt is in the past | Re-run Step 1 (Keychain stores the latest token) |
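For the "token expired" case, you can sanity-check expiresAt before injecting the token. The helper below is my own sketch, not part of Claude Code: it assumes the credentials JSON has the shape shown in Step 1 and uses only shell parameter expansion plus date (trying the BSD/macOS form first, then the GNU form):

```shell
# Sketch: return success (0) if the stored credentials JSON is already expired.
is_expired() {
  local exp exp_epoch
  exp="${1#*expiresAt\":\"}"   # crude extraction of the expiresAt value
  exp="${exp%%\"*}"
  exp="${exp%%.*}"             # trim fractional seconds and the trailing Z
  exp="${exp%Z}"
  # BSD/macOS date first, then GNU date as a fallback
  exp_epoch=$(date -j -u -f "%Y-%m-%dT%H:%M:%S" "$exp" +%s 2>/dev/null \
    || date -u -d "$exp" +%s)
  [ "$exp_epoch" -lt "$(date +%s)" ]
}

# Usage:
# is_expired "$CLAUDE_CODE_OAUTH_TOKEN" && echo "Token expired: re-run Step 1"
```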

Key Takeaways

| Lesson | Detail |
| --- | --- |
| Keychain is inaccessible over SSH | Claude Code tokens are stored in macOS Keychain, which is locked over SSH |
| Environment variable solves it | Setting CLAUDE_CODE_OAUTH_TOKEN makes SSH / tmux access work |
| Persistence is key | Save to .zshrc or .openclaw/.env to avoid repeating setup |
| 30-second recovery | Extract from Keychain → set env var → inject into tmux = instant recovery |
| No need to kill running processes | Environment variable injection restores access without stopping tmux sessions |

This procedure restored access after a token expired in the mobileapp-factory-daily cron job. Recovery took 30 seconds. Ralph.sh execution resumed without interruption.

Process Over Technology: Starting With the Blog Itself

2026-03-06 22:24:17

I have been thinking about starting a blog for a while. Not because the world needs another tech blog, but because I needed a place to think out loud about something I keep coming back to: process matters more than technology.

AI has changed the economics of rigorous engineering. Practices that used to be too expensive or too slow for most teams, things like executable specifications, mutation testing, and formal verification layers, are now economically viable. The tooling is free. The compute is cheap. The only thing standing in the way is how we think about building software.

So when I finally sat down to build this blog, I decided to treat it the way I believe all software should be built. Not as a weekend side project where I pick a static site generator and start writing, but as a properly managed effort with a defined process, infrastructure as code, a publishing pipeline, and security defaults baked in from the start.

This post is the story of that build. Not a tutorial. Just the decisions, the reasoning, and what it cost.

Starting with why, not how

The first decision was deliberate: no code until the process was defined.

Before choosing a platform, a theme, or a static site generator, I wrote a style guide. It captures the voice, tone, formatting rules, and editorial standards for every post. British English. No em dashes. No emoji. Paragraphs over bullet points. Target length of 1,200 to 1,800 words. Always attribute other people's work.

Then I wrote BDD specifications in Gherkin describing how the publishing pipeline should behave. What happens when I push a markdown file to the main branch? What if the post is marked as a draft? What if it already exists on the target platform?

The pipeline behaviour was fully specified before a single line of workflow YAML existed. This is the approach I advocate for in software delivery, and it felt wrong to skip it for my own project.
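To make that concrete, a specification along those lines might look something like this. This is a hypothetical sketch in Gherkin; the feature and step names are illustrative, not the author's actual specs:

```gherkin
Feature: Publish a post to Hashnode
  Scenario: A new post is pushed to main
    Given a markdown file whose front matter sets "draft: false"
    When the file is pushed to the main branch
    Then the post is created on Hashnode via the GraphQL API
    And the canonical URL points to blog.nuphirho.dev

  Scenario: The post is marked as a draft
    Given a markdown file whose front matter sets "draft: true"
    When the file is pushed to the main branch
    Then no post is published
```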

Choosing the platform

The platform decision came down to a simple question: where does the content live, and who owns it?

I chose Hashnode as the primary platform with a custom domain. It is free, supports custom domains at no cost, has a GraphQL API for automated publishing, handles light and dark themes natively, and supports Mermaid diagrams (which are version-controllable as code). The built-in developer community provides discoverability without me having to build an audience from scratch.

Cross-posting goes to Dev.to via their REST API, automated through the same pipeline. Medium is a manual step using their URL import feature. Their API no longer issues new integration tokens, so automation is not an option. The manual step degrades gracefully, though: it is documented, repeatable, and takes about 30 seconds.

Every cross-posted article sets its canonical URL back to blog.nuphirho.dev. This is non-negotiable. The custom domain builds SEO authority over time. The platforms provide reach. Both matter, but ownership comes first.
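For the Dev.to side, setting the canonical URL is a single field in the article payload. The sketch below is illustrative, not the pipeline's actual code; the slug-to-URL mapping and function names are assumptions, though the endpoint and `canonical_url` field come from Dev.to's public API.

```python
import json
from urllib import request

DEVTO_API = "https://dev.to/api/articles"

def build_article_payload(title: str, body_markdown: str, slug: str) -> dict:
    """Build a Dev.to article payload with the canonical URL pointing home.

    The slug-to-URL mapping here is illustrative; the real pipeline derives
    it from the post's frontmatter.
    """
    return {
        "article": {
            "title": title,
            "body_markdown": body_markdown,
            "published": True,
            "canonical_url": f"https://blog.nuphirho.dev/{slug}",
        }
    }

def publish(payload: dict, api_key: str) -> None:
    # The actual HTTP call, sketched for completeness; not executed here.
    req = request.Request(
        DEVTO_API,
        data=json.dumps(payload).encode(),
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )
    request.urlopen(req)  # raises on HTTP errors
```

Because the canonical URL is baked into the payload builder, no cross-posted article can accidentally claim to be the original.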

The domain

The root domain, nuphirho.dev, is kept deliberately separate from the blog. The blog lives at blog.nuphirho.dev. The root hosts a simple static landing page on GitHub Pages, leaving it flexible for whatever comes next.

The .dev TLD was a conscious choice. It sits on the HSTS preload list, which means browsers refuse to load it over plain HTTP. You do not get the option to be insecure. Cloudflare adds a second layer of SSL/TLS and CDN. Hashnode auto-provisions a Let's Encrypt certificate for the custom domain. That is HTTPS enforced at three independent layers, before a single word of content is published.

In a world where AI is generating code and people are shipping software they do not fully understand, security defaults matter more, not less. This blog enforces that principle from the infrastructure up.

The name

Nu, phi, rho. Three Greek letters I picked up studying physics and mathematics at Stellenbosch University. They stuck as a username during university and never left. The name reflects where I started: grounded in rigour, pattern recognition, and first principles thinking. It is sentimental, not a brand exercise.

Infrastructure as code

Everything is managed with Terraform. The Cloudflare DNS configuration, including the blog CNAME pointing to Hashnode, the root A records pointing to GitHub Pages, and the www redirect, is all declared in version-controlled HCL files.

Terraform state is stored in Cloudflare R2, which is S3-compatible and sits within the free tier. This means the entire infrastructure layer, DNS, CDN, SSL, state management, is declarative, reproducible, and costs nothing.
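The DNS records themselves are only a few resources. This is a minimal sketch, not the repository's actual configuration: resource names and the `zone_id` variable are placeholders, and the attribute for the record value varies between Cloudflare provider versions (`value` in v4, `content` in v5). The GitHub Pages A record addresses are the ones GitHub documents.

```hcl
# Illustrative sketch; names and zone_id are placeholders.
resource "cloudflare_record" "blog" {
  zone_id = var.zone_id
  name    = "blog"
  type    = "CNAME"
  value   = "hashnode.network" # Hashnode's documented CNAME target
  proxied = false
}

resource "cloudflare_record" "root_a" {
  for_each = toset([
    "185.199.108.153", "185.199.109.153",
    "185.199.110.153", "185.199.111.153",
  ])
  zone_id = var.zone_id
  name    = "@"
  type    = "A"
  value   = each.value # GitHub Pages apex addresses
}
```

A `terraform plan` against these files is the whole change-review process for DNS.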

Some might call this overkill for a blog. I call it the point. If we are going to argue that enterprise-grade practices are accessible, we need to demonstrate it. Terraform for a blog is not about complexity. It is about showing that the barrier to doing things properly has collapsed.

The publishing pipeline

The pipeline runs on GitHub Actions, which is free and unlimited for public repositories. The workflow is straightforward:

When a markdown file in the posts directory is pushed to the main branch, the pipeline reads the frontmatter, determines whether the post is new or an update, and publishes it. If the post is marked as a draft, it gets pushed as an unpublished draft to both Hashnode and Dev.to rather than being skipped entirely. This mirrors how code deployments work: you can push to staging without going to production.

Hashnode uses a GraphQL API. Dev.to uses a REST API. The pipeline handles both, sets canonical URLs, manages tags, and reports a summary of what was published and where. Draft posts are checked for duplicates to avoid pushing the same draft twice.

All API tokens and credentials live in GitHub Secrets. Nothing sensitive touches the repository.
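Wiring that together in GitHub Actions is short. This is a hedged sketch of the trigger and secret plumbing, not the repository's actual workflow; the secret names and script path are assumptions.

```yaml
# Illustrative workflow sketch; secret names and script path are examples.
name: publish
on:
  push:
    branches: [main]
    paths: ["posts/**.md"]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Publish changed posts
        env:
          HASHNODE_TOKEN: ${{ secrets.HASHNODE_TOKEN }}
          DEVTO_API_KEY: ${{ secrets.DEVTO_API_KEY }}
        run: python scripts/publish.py
```

The `paths` filter matters: pushes that do not touch a post never spend a pipeline run.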

Secrets at the boundary

Speaking of security: there is a Husky pre-push hook that scans for AWS keys, GitHub tokens, PEM headers, and generic secret patterns before any code leaves the machine. It is a simple check, but it catches the most common mistakes at the earliest possible point.
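The core of such a scan is just a handful of regular expressions. The patterns below are common published shapes for these credentials, but they are illustrative, not the hook's actual rules.

```python
import re

# Illustrative patterns only; the real hook's rules may differ.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                 # GitHub personal access tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM headers
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that matches a known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

A pre-push hook runs this over the files being pushed and aborts on any hit. False positives are rare, and a blocked push is far cheaper than a rotated credential.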

This is defence in depth applied to a blog. Three layers of HTTPS. Secrets in a vault, not in code. Scanning at the git boundary. None of this is complex. All of it is free. The only cost is deciding to do it.

On AI assistance

This blog is AI-assisted. I want to be upfront about that because it connects directly to the thesis.

I think in systems and architecture. I do not always communicate those ideas clearly on the first pass. AI helps me bridge that gap. The thinking is mine. The clarity is a collaboration.

This post was drafted with AI assistance. The decisions, the architecture, the reasoning, those are mine. The process of turning those thoughts into clear prose was a collaboration. I believe this is how software will increasingly be built: human judgement and accountability, with AI handling the parts that benefit from scale and speed.

Being transparent about this is not a disclaimer. It is a demonstration.

What it cost

Let me be specific.

| Concern | Tool | Cost |
| --- | --- | --- |
| Source control | GitHub (public) | Free |
| CI/CD | GitHub Actions | Free |
| IaC | Terraform + Cloudflare provider | Free |
| Terraform state | Cloudflare R2 | Free |
| DNS/CDN/SSL | Cloudflare free tier | Free |
| Landing page | GitHub Pages | Free |
| Blog platform | Hashnode | Free |
| Cross-post | Dev.to (automated), Medium (manual) | Free |
| Secret detection | Husky pre-push hook | Free |
| Domain | Cloudflare Registrar | ~$12/year |

The domain registration is the only line item. Everything else, the infrastructure, the pipeline, the platform, the security layers, is enterprise-grade tooling at zero cost.

This is the argument made tangible. The economics have changed. The practices that used to require dedicated teams and significant budgets are available to anyone willing to apply the process.

What comes next

This post is the first in what I hope becomes a regular practice: writing about what I am working on, what I am learning, and what I get wrong. The topics will span AI-assisted software delivery, engineering process, organisational transformation, and the practical challenges of introducing new tools and practices to existing teams.

The entire source for this blog, including the infrastructure, pipeline, specifications, and style guide, is public at github.com/czietsman/nuphirho.dev. If the process interests you, the receipts are there.

The technology was the easy part. The process is what made it work.

The 48-Hour Infrastructure Overhaul: From Vercel to a Modular Stack

2026-03-06 22:23:58

The last 48 hours have been intense. I received a “quota usage” warning from Vercel regarding image optimization, which sent me down a rabbit hole of debugging. After cross-referencing documentation for Vercel, Next.js, Cloudflare, and PayloadCMS, I finally optimized my Cache-Control headers. It’s a common headache when working across multiple layers, since settings can conflict with or override one another.
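The shape of the fix, with example values rather than my final configuration, was to split the browser TTL from the CDN TTL so each layer caches on its own terms:

```
Cache-Control: public, max-age=3600, s-maxage=31536000
```

Here browsers revalidate after an hour while the CDN may hold the cached copy far longer, which keeps repeated requests from ever reaching the image optimizer. The exact directives depend on which layer (Next.js, Vercel, or Cloudflare) is allowed to win, and that precedence was most of the debugging.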

Cleaning up the database fetching cache code is a minor follow-up task that I’ll wrap up today.

I’ve since migrated the app from Vercel to Railway to keep costs under control. The transition had its challenges, but the setup is now stable. Railway handles application deployment and database hosting and serves as the origin for the custom domain, while Cloudflare manages DNS and blob storage. Zoho remains my email host, Namecheap continues to act as the registrar, and the code repository of course still lives on GitHub.

The long-term goal is to eventually consolidate everything onto a self-managed VPS, but for now this architecture is a massive step forward.