
A Procedurally Generated GitHub CLI Roguelike Where Every Dungeon Is Built from Your Code

2026-01-29 01:30:50

This is a submission for the GitHub Copilot CLI Challenge

What I Built

GitHub Dungeons is my love letter to classic roguelike games, terminal nerdery, procedural generation, and my favorite Git client... with a healthy dose of "just one more try" energy.

It’s a GitHub CLI extension that turns any repository into a playable roguelike dungeon. Every run is procedurally generated. Every dungeon is unique. And every dungeon is built from your actual codebase.

You control the hero (@, obviously) using:

  • WASD
  • Arrow keys
  • Or Vim keys (because of course)

Permadeath included. YASD guaranteed.
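A minimal sketch of how a repo could seed a reproducible dungeon (an illustration of the idea, not necessarily how gh-dungeons actually does it): hash the repository's file paths into a PRNG seed, then carve rooms with that PRNG, so the same repo state always yields the same dungeon.

```go
package main

import (
	"bytes"
	"fmt"
	"hash/fnv"
	"math/rand"
)

// seedFromRepo derives a deterministic PRNG seed from a list of file
// paths, so the same repo always yields the same dungeon.
// (Illustrative only: not necessarily how gh-dungeons does it.)
func seedFromRepo(paths []string) int64 {
	h := fnv.New64a()
	for _, p := range paths {
		h.Write([]byte(p))
	}
	return int64(h.Sum64())
}

// carveRooms places n rooms on a w×h grid using the seeded PRNG;
// '#' is wall, '.' is floor.
func carveRooms(seed int64, w, h, n int) [][]byte {
	rng := rand.New(rand.NewSource(seed))
	grid := make([][]byte, h)
	for y := range grid {
		grid[y] = bytes.Repeat([]byte("#"), w)
	}
	for i := 0; i < n; i++ {
		rw, rh := 3+rng.Intn(5), 3+rng.Intn(3) // random room size
		x, y := rng.Intn(w-rw), rng.Intn(h-rh) // random room origin
		for dy := 0; dy < rh; dy++ {
			for dx := 0; dx < rw; dx++ {
				grid[y+dy][x+dx] = '.'
			}
		}
	}
	return grid
}

func main() {
	seed := seedFromRepo([]string{"main.go", "go.mod", "README.md"})
	for _, row := range carveRooms(seed, 40, 10, 4) {
		fmt.Println(string(row))
	}
}
```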

Demo

Instructions are at https://github.com/leereilly/gh-dungeons, but the short version is that if you have GitHub CLI already installed, just run:

gh extension install leereilly/gh-dungeons

Then running gh dungeons in any repo will generate a unique dungeon for you to conquer.

GIF of GH Dungeons in action

My Experience with GitHub Copilot CLI

GitHub Copilot CLI felt like having a party of NPCs in my terminal... except instead of saying "I used to be an adventurer like you" and asking me to deliver a message to their uncle in Whiterun, they actually helped!

The biggest win: /delegate

The /delegate command was a game-changer (pun intended, obviously). It let me treat Copilot like a mini guild of specialists. I’d describe the problem, send it off, and keep building while Copilot handled the details.

Some real examples:

/delegate Make the level progressively harder.  On level 2 
there are extra baddies, but also more health potions.

» Resulting pull request

/delegate Resizing the terminal should redraw the dungeon, 
but the dungeon layout must remain the same.

» Resulting pull request

/delegate If the Konami code is entered, the player becomes 
invulnerable and monsters do no damage.

» Resulting pull request

What Worked Well

  • Copilot handled the complex, fiddly logic (input handling, redraws, edge cases)
  • I stayed focused on game feel, pacing, and dumb roguelike jokes
  • Iteration was fast enough to encourage experimentation ("what if…?")

Instead of fighting the implementation, I got to explore ideas, which is how side projects should feel.

Overall

This started as a short challenge project and turned into something I genuinely want to keep building.

GitHub Copilot CLI took care of a lot of the heavy lifting, which let me stay in the fun part of the problem space. That alone made it a win.

If hacking on a terminal roguelike that eats code for breakfast sounds fun...

Adventurers sought. Contributions welcome. 🧙‍♂️⚔️

Visualizing Real-Time-Ish GitHub Activity on a Rotating ASCII Globe in the Terminal

2026-01-29 01:30:40

This is a submission for the GitHub Copilot CLI Challenge

What I Built

gh-firehose: A terminal-based, real-time-ish visualization of GitHub push activity rendered on a rotating ASCII globe. Similar to the fancy display in GitHub HQ, but maybe not quite as impressive LOL.

The app streams recent push events from the GitHub Events API and turns them into a "global pulse" view: while the Earth rotates, pushes show up as quick flashes over land*. It logs the users and topics it encounters. It’s a lightweight, "leave it running in a terminal pane" kind of project.

* push locations made up... for now

How the world map is generated (the fun bit):

The globe is rendered from an equirectangular Earth landmask bitmap stored directly in code as ASCII. In main.go, earthBitmap is a 120×60 set of strings where # means land and . means ocean. Every frame:

  • renderGlobe(angle) iterates over each terminal cell in an 80×40 viewport.
  • It treats that viewport as a sphere by:
    • measuring each cell’s distance from the center (with a character aspect ratio compensation of ~2.1),
    • skipping cells outside the circle (transparent background),
    • projecting cells inside the circle onto a unit sphere (solving for nz = sqrt(1 - nx² - ny²)).
  • It converts the sphere point into latitude/longitude:
    • lat = asin(-ny)
    • lon = atan2(nx, nz) + angle
  • It samples the earthBitmap via isLand(lon, lat) and draws:
    • # for land
    • . for ocean (chosen to give the sphere texture instead of a blank void)

This makes the globe feel “3D” while still being pure text, and rotation is just changing angle over time.

Picard telling Data to shut up

Do I understand one bit of that math? Nope. But AI told me, so it must be true! Right?

Demo

GIF of GH Firehose in action

Repo: https://github.com/leereilly/gh-firehose

Install: If you have GitHub CLI installed, just run the following:

gh extension install leereilly/gh-firehose
gh firehose

My Experience with GitHub Copilot CLI

I treated GitHub Copilot CLI like ~an intern~ a fast, tireless pairing partner - describing the behavior I wanted at a high level and letting it scaffold, iterate, and refine the implementation in tight loops. It was especially great for experimenting with ideas I normally would’ve avoided, like spherical projection math 🤓

A few highlights:

  • Projection + rotation scaffolding: Copilot CLI generated and iterated on the core globe math (sphere projection, lat/lon conversion, rotation logic), which I then tuned for terminal character aspect ratios and visual polish.

  • Refactoring without losing momentum: Once the prototype worked, Copilot CLI helped restructure messy code into clean functions like renderGlobe() and isLand() while keeping everything behaving correctly.

  • Small-but-constant productivity boosts: From Go string builders to loops, clamping, coordinate transforms, and API plumbing, Copilot CLI handled the repetitive bits so I could focus on the fun visual side.

Do I fully understand all the projection math? Absolutely not LOL. But Copilot CLI got me to something that works — and looks cool — far faster than I would’ve on my own.

Take it for a spin (pun intended) yourself at https://github.com/leereilly/gh-firehose 🌍

GitHub Changelog, Now in Your Terminal (Built with Copilot CLI)

2026-01-29 01:30:31

This is a submission for the GitHub Copilot CLI Challenge.

What I Built

gh-changelog - A GitHub CLI extension that shows the latest updates from the GitHub Changelog.

Demo

GitHub Changelog CLI Extension in action

My Experience with GitHub Copilot CLI

I kicked off the project with a straightforward prompt: build a GitHub CLI extension in Go that surfaces the latest updates from the GitHub Changelog. Within minutes, I had a working prototype that fetched real data and did exactly what I asked. That "time-to-something-useful" was genuinely impressive.

Where things fell a bit short initially was polish - layout, output formatting, and some of the CLI ergonomics I care about. That wasn’t a limitation of Copilot so much as a reflection of how broad my early prompts were. I asked for something functional, and that’s exactly what I got.

From there, I used the /delegate command to kick off other agent tasks, each resulting in a PR for a bit of visual polish or a new feature.

What I learned / what was reinforced

  • Copilot is incredibly good at getting you to "something useful" fast
  • Quality of output maps directly to quality of prompts
  • Treating Copilot like a junior collaborator (not a magic wand) works well
  • Agent-style delegation maps surprisingly well to small, reviewable PRs

If you live in the terminal and want to stay current with GitHub updates, give it a try:

gh extension install leereilly/gh-changelog
gh changelog

Networking devices 000

2026-01-29 01:21:30

Internet → Modem → Router → Switch → End Devices (PC, phone, server, etc.)
This flow shows how devices connect to the Internet in a typical network setup, where the modem links to the ISP, the router directs traffic, and the switch allows multiple devices to communicate.

            Internet
               |
             Modem
               |
           Firewall
               |
            Router
               |
            Switch
      /        |       \
Device 1  Device 2   Server Cluster
                       |
                 Load Balancer
                       |
                 Server 1, 2, 3...

Modem

A modem is often described as the gateway to the internet. It is the device that connects us to the ISP (Internet Service Provider).
Only one device can be connected directly to the modem: either a single computer or a router.

Internet ↔ Modem ↔ Single device (or router)

Router

A router routes traffic between different networks and connects the local network to the internet through the modem.
It takes responsibility for local IP addressing (DHCP: Dynamic Host Configuration Protocol) and performs NAT (Network Address Translation) so multiple devices can share a single public IP address.

Modem → Router → Multiple devices (wired/wireless)

Switch vs Hub

Hub: simply broadcasts the data it receives to all connected devices
Switch: relays the data only to the specific device it is intended for

  • Both switches and hubs work in a LAN (Local Area Network)
  • A switch reduces collisions and improves efficiency
Hub: Device A → Hub → Devices B, C, D (all get the data)
Switch: Device A → Switch → Device B (only B gets it)
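The difference can be simulated in a few lines of Go (a toy model, not real switching hardware): a hub floods every port, while a switch learns a MAC-to-port table and forwards unicast frames only to the intended port.

```go
package main

import "fmt"

// Frame is a minimal Ethernet-style frame for the simulation.
type Frame struct{ Src, Dst, Payload string }

// hubForward: a hub floods every frame out of all ports except the
// one it arrived on, so every device sees everyone's traffic.
func hubForward(numPorts, inPort int) []int {
	var out []int
	for p := 0; p < numPorts; p++ {
		if p != inPort {
			out = append(out, p)
		}
	}
	return out
}

// Switch learns which MAC address lives on which port and forwards
// unicast; unknown destinations are flooded like a hub.
type Switch struct{ macTable map[string]int }

func NewSwitch() *Switch { return &Switch{macTable: map[string]int{}} }

func (s *Switch) Forward(f Frame, inPort, numPorts int) []int {
	s.macTable[f.Src] = inPort // learn the sender's port
	if p, ok := s.macTable[f.Dst]; ok {
		return []int{p} // only the intended device receives it
	}
	return hubForward(numPorts, inPort) // unknown destination: flood
}

func main() {
	sw := NewSwitch()
	sw.Forward(Frame{Src: "mac-B", Dst: "mac-A"}, 1, 4) // B announces itself
	fmt.Println("hub:   ", hubForward(4, 0))            // hub:    [1 2 3]
	fmt.Println("switch:", sw.Forward(Frame{Src: "mac-A", Dst: "mac-B"}, 0, 4)) // switch: [1]
}
```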

Firewall

A firewall safeguards the network. We can configure it based on our needs: which types of traffic to allow through and which to block. It is generally set up at the network level.

Internet → Firewall → Router → LAN

Load Balancer

A load balancer manages traffic load by distributing requests across multiple servers, which lets the system scale.

Internet → Load Balancer → Server 1 / Server 2 / Server 3

Devices Comparison

| Device Name | Can Be Hardware | Can Be Software Replacement |
|---|---|---|
| Modem | Yes | Rarely (mostly hardware) |
| Router | Yes | Yes (e.g., virtual routers like pfSense, VyOS) |
| Switch | Yes | Yes (virtual switches in VMware, Hyper-V) |
| Hub | Yes | Rarely (mostly hardware) |
| Firewall | Yes | Yes (software firewalls like iptables, pfSense, Windows Firewall) |
| Load Balancer | Yes | Yes (software like Nginx, HAProxy, AWS ELB) |

Conclusion

Understanding network devices—from modems and routers to switches, firewalls, and load balancers—gives a clear picture of how data flows and how networks are structured. Each device has a distinct role: modems connect us to the internet, routers direct traffic, switches manage local communication, firewalls provide security, and load balancers ensure scalability. Together, they form a reliable and secure system that powers both everyday internet use and complex backend applications. For web developers, knowing how these devices interact helps in designing efficient, secure, and scalable systems.

Codebase Guide: AI Mentor for Multi-Repo Onboarding

2026-01-29 01:18:16

What I Built
Codebase Guide is a conversational AI assistant that helps new developers understand and safely navigate complex multi-repository codebases. Instead of spending hours hunting through repos and asking seniors "where do I start?", juniors can ask natural language questions like "Where is authentication handled?" or "How do I add a new profile field?" and get instant, structured answers with files, repos, and test commands.

The problem: onboarding onto large, multi-service systems is painful. Documentation is scattered, tribal knowledge lives in senior devs' heads, and juniors waste days just figuring out where to add code.

Codebase Guide solves this by indexing services, patterns, and playbooks across all repos, then using Algolia Agent Studio to retrieve the right context and generate mentor-style guidance.

Demo
Live UI:
https://codebase-guide-final.vercel.app

video:
https://youtu.be/RlgZvAfyikU?si=E9raVWfmRduM8DY-

GitHub:
https://github.com/pulipatikeerthana9-wq/codebase-guide-final

How I Used Algolia Agent Studio
I created three specialized indices to power fast, contextual retrieval:

services_index: Maps each service/repo to its purpose, tech stack, owner team, entry files, and key directories. Tags like auth, payments, frontend enable quick filtering.

patterns_index: Stores "how we do X" patterns—authentication middleware, error handling, feature flags, webhook processing—with code snippets and explanations.

playbooks_index: Step-by-step guides for common tasks: "Add a new profile field," "Create a protected route," "Add a notification type." Each includes repos involved, exact steps, and test commands.

The Agent Studio configuration:

System prompt: Positioned the agent as a "senior dev mentor" who always answers in 4 parts: current implementation, files to inspect, safe change plan, tests to run.

Retrieval tools: Configured Algolia Search across all three indices with tag-based filtering (auth, payments, profiles, etc.).

Structured output: The agent retrieves relevant services, patterns, and playbooks, then synthesizes them into actionable guidance.

Example query: "How do I add a new profile field in API and frontend?"
→ Agent retrieves:

users-service from services_index

frontend-app from services_index

pb_add_profile_field playbook from playbooks_index
→ Returns: files to touch, database migration steps, validation updates, and test commands.
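The retrieval layer can be illustrated with an in-memory sketch. The Go structs below stand in for the Algolia records (field names, IDs, and tags are illustrative, not the exact schema): every record carries tags, and one query filters across all three indices at once.

```go
package main

import "fmt"

// Record mirrors the shape of the Agent Studio index entries.
// (Illustrative fields, not the exact Algolia schema.)
type Record struct {
	Index string // "services_index", "patterns_index", or "playbooks_index"
	ID    string
	Title string
	Tags  []string
}

// Toy data standing in for the three indexed corpora.
var records = []Record{
	{"services_index", "users-service", "User profile CRUD API", []string{"backend", "profiles"}},
	{"services_index", "frontend-app", "React client", []string{"frontend", "profiles"}},
	{"patterns_index", "auth-middleware", "JWT auth middleware", []string{"auth", "backend"}},
	{"playbooks_index", "pb_add_profile_field", "Add a new profile field", []string{"profiles", "backend", "frontend"}},
}

// retrieve returns every record carrying the given tag, across all
// three indices: the in-memory analogue of a tag-filtered search.
func retrieve(tag string) []Record {
	var hits []Record
	for _, r := range records {
		for _, t := range r.Tags {
			if t == tag {
				hits = append(hits, r)
				break
			}
		}
	}
	return hits
}

func main() {
	// "How do I add a new profile field?" → filter on the profiles tag.
	for _, hit := range retrieve("profiles") {
		fmt.Printf("%s → %s (%s)\n", hit.Index, hit.ID, hit.Title)
	}
}
```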

Why Fast Retrieval Matters
Without fast, structured retrieval, juniors either:

Grep through hundreds of files (slow, overwhelming)

Interrupt seniors constantly (blocks their work)

Make unsafe changes because they didn't find the right pattern

With Algolia's sub-second retrieval across three indices:

Questions that took 30+ minutes to answer now take 10 seconds.

Juniors get complete context (services + patterns + playbooks) in one response.

Try It Yourself
The agent can filter by tags (auth, backend, frontend) to surface exactly what's needed, not every file that mentions "user."

This turns onboarding from a week-long slog into a guided, self-serve experience.

The agent is currently in draft mode in Algolia Agent Studio. To use it live with your own queries:

Fork the GitHub repo

Clone the Algolia indices (or create your own with your codebase data)

In Agent Studio, create a provider profile with your own LLM API key (OpenAI, Anthropic, or Gemini)

Publish the agent and embed it in the UI

The UI is deployed at https://codebase-guide-final.vercel.app and shows the complete interface design. The retrieval logic and agent configuration are fully functional and can be tested in the Algolia playground.

The Modernization Imperative

2026-01-29 01:18:10

As developers and architects, we know that code has a shelf life. The ecosystem around it evolves while the core remains static. For CIOs and engineering leads, the mainframe isn't just a computer; it is a massive gravitational well.

It holds your most valuable data and logic, but its gravity makes it incredibly expensive to escape. This post breaks down the strategic calculus of modernization and why so many attempts end in disaster.

To Modernize or Not? A Risk Matrix

Many leaders fall into the "If it ain't broke, don't fix it" trap. But in software, "not broke" doesn't mean "healthy." Below is a risk matrix to help frame this decision.

| Factor | The Risk of Maintaining (Status Quo) | The Risk of Modernizing (Transformation) |
|---|---|---|
| Talent & Skills | Critical. The 'Bus Factor' is alarming. As baby boomer devs retire, the cost to hire COBOL talent skyrockets, and institutional knowledge walks out the door. | Moderate. You have access to a vast pool of Java/C#/.NET/Go developers, but they lack the domain knowledge embedded in the old system. |
| Agility | High. Launching a new feature takes months due to regression testing fears and rigid monolith architecture. You cannot easily integrate with modern APIs or AI. | Low. Once modernized (e.g., to microservices), feature velocity increases. CI/CD pipelines allow for rapid iteration and experimentation. |
| Stability | Low Risk. The mainframe is legendary for uptime (Five 9s). It rarely crashes. | High Risk. Distributed systems introduce complexity (network latency, eventual consistency) that the mainframe never had to deal with. |
| Cost | High (OpEx). MIPS costs are rising. IBM licensing and hardware maintenance are significant line items. | High (CapEx). The initial migration is expensive and resource-intensive. ROI is usually realized over 3-5 years, not immediately. |

The Verdict

If your COBOL system is purely a system of record that requires zero changes, keep it. But if it is a system of differentiation—something that gives you a competitive edge—the risk of not modernizing is now higher than the risk of moving.

Why COBOL Modernization Projects Fail

Industry analysts estimate that up to 70% of legacy modernization projects fail. Here are the three horsemen of the modernization apocalypse:

1. The 'Documentation' Mirage

  • The Trap: Project managers plan timelines based on existing spec sheets from 1998.
  • The Reality: The code contains thousands of undocumented 'fix-its' and business rules hard-coded into IF-ELSE blocks.
  • The Fix: Automated Discovery. Do not rely on humans to read the code. Use static analysis tools to map dependencies before writing new code.

2. Tightly Coupled Spaghetti

  • The Trap: Thinking you can just 'lift and shift' the logic.
  • The Reality: User Interface (CICS), Business Logic, and Data (DB2) are woven into a single source file. You cannot migrate logic without dragging the UI with it.
  • The Fix: Refactor in Place. Before migrating, modularize the COBOL. Isolate logic into sub-programs to clarify extraction boundaries.

3. The 'Big Bang' Rewrite

  • The Trap: Attempting to rewrite 10 million lines of code and releasing it all at once.
  • The Reality: The new system will have bugs. If you switch everything at once, you will paralyze the business.
  • The Fix: The Strangler Fig Pattern (See below).

The Solution: The Strangler Fig Pattern

This architectural pattern involves gradually creating a new system around the edges of the old one, letting it grow until the old system is strangled and can be removed.

  1. Identify one specific function (e.g., 'Check Customer Balance').
  2. Build that function as a microservice in the cloud.
  3. Route only that request to the new service; keep everything else on the mainframe.
  4. Repeat until the mainframe is empty.
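The steps above can be sketched as a thin routing layer in Go (hostnames hypothetical): only the extracted function's path goes to the new microservice, and every other request is proxied to the legacy system untouched.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// Hypothetical endpoints: the legacy mainframe gateway and the one
// function already extracted as a cloud microservice.
var (
	legacy, _     = url.Parse("http://mainframe-gateway.internal")
	newService, _ = url.Parse("http://balance-service.cloud.internal")
)

// route implements the strangler fig: only 'Check Customer Balance'
// goes to the new service; every other path stays on the mainframe.
func route(path string) *url.URL {
	if path == "/customer/balance" {
		return newService
	}
	return legacy
}

// handler proxies each request to whichever system owns that path.
func handler(w http.ResponseWriter, r *http.Request) {
	httputil.NewSingleHostReverseProxy(route(r.URL.Path)).ServeHTTP(w, r)
}

func main() {
	// In production: http.ListenAndServe(":8080", http.HandlerFunc(handler)).
	fmt.Println(route("/customer/balance").Host) // balance-service.cloud.internal
	fmt.Println(route("/orders/create").Host)    // mainframe-gateway.internal
}
```

As more functions are extracted, route grows new branches until the legacy branch is dead code and the mainframe can be retired.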

Summary

Modernization is not a technical upgrade; it is an archaeological dig. Success requires respecting the complexity of what was built before, rather than assuming it's just "old junk" that needs to be deleted.