2025-12-29 05:51:11
This section documents the process of integrating a headless Raspberry Pi into a home network using an Xfinity router and a Netgear switch. The objective is to verify physical connectivity, switch-level visibility, DHCP assignment, and overall network reachability in preparation for remote access and database deployment. Each step reinforces foundational networking concepts aligned with the OSI Model, specifically Layers 1 (Physical), 2 (Data Link), and 3 (Network).
Open the Xfinity Admin Tool in your browser and log in with your credentials.
Once logged in, select Connected Devices from the menu on the left-hand side of the page and locate the Raspberry Pi's entry to confirm its DHCP-assigned IP address.
Download and install the official Netgear Discovery Tool for your operating system.
Once logged in to the Netgear switch's admin page, select Port Status from the menu on the left.
The Ethernet cable connected from the router to the switch port in step 2 has been identified by the Netgear Admin page, verifying that the port status is up.
The ping command verifies that the Raspberry Pi is reachable within the local area network, and the Pi replies with ICMP (Internet Control Message Protocol) echo reply packets.
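For example, if Connected Devices listed the Pi at 10.0.0.87 (an illustrative address; substitute the one your router actually assigned), the check from another machine on the LAN would be:

```bash
# Send four echo requests (Linux/macOS syntax; Windows uses -n 4)
ping -c 4 10.0.0.87
```

Each "64 bytes from 10.0.0.87" reply line confirms Layer 3 reachability.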
This completes the physical connectivity and network visibility of the Raspberry Pi and the switch to the local area network.
2025-12-29 05:49:28
I've been writing shaders for a while now, and I keep running into the same couple of issues. They aren't deal-breakers, but they're annoying enough that I started wondering whether there was a better way to handle them.
The biggest one is Coordinate Spaces.
In standard GLSL, a vec4 is just a vec4. The compiler doesn't care whether that vector represents a position in World Space, Model Space, or Clip Space. If you accidentally multiply a View Space vector by a Model matrix, the compiler won't stop you. You just get a black screen or weird lighting, and then you spend 20 minutes debugging only to realize you mixed up your spaces.
So, I decided to start a side project called Cast.
It's a small language that compiles down to GLSL, written in C# using ANTLR4. My goal wasn't to replace GLSL, but to create a "wrapper" that enforces some safety rules before generating the final code.
The Main Idea:
I wanted the coordinate system to be a part of the type itself. Instead of just writing vec4, in Cast you write vec4<World> or vec4<Model>.
This allows the compiler to check the math.
let modelPos : vec4<Model> = ...;
let matrix: mat4<Model, World> = ...;
// This works because the types match
let worldPos = matrix * modelPos;
// This would throw a compiler error
let wrong = projectionMatrix * modelPos;
It's basically just using a strong type system to prevent logical errors.
Cleaning up the Syntax
The other thing that bothered me was reading nested math functions. Shader code often ends up looking like this: max(pow(dot(N, L), 32.0), 0.0)
You have to read it from the inside out. My idea was to make it read left to right instead: N.dot(L).pow(32.0).max(0.0)
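Combining the two ideas, a lighting line could read like this (a sketch; beyond the constructs shown in this post, the exact surface syntax is my guess):

```
let n : vec4<World> = ...;
let l : vec4<World> = ...;
// reads left to right: dot, then pow, then clamp
let spec = n.dot(l).pow(32.0).max(0.0);
```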
Adding new features
While this is just syntactic sugar, I wanted to implement something I remember from the language Go. It's called a receiver type, and it goes like this:
type SomeStruct struct {
    width,
    height int
}

func (r *SomeStruct) SomeFunction() { ... }
It's basically the same in Cast.
struct SomeStruct { x: float, y: float }
fn (SomeStruct) SomeFunction() {...}
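Calling it then mirrors the chained style from earlier (hypothetical call syntax, inferred from the N.dot(L) examples above):

```
let s : SomeStruct = ...;
s.SomeFunction();
```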
Additionally, to keep the file a bit more structured, I added in, out, and uniform groups.
uniform {
    ...
}

in {
    ...
}

out {
    ...
}
Cast is currently in the "Proof of Concept" stage. It is not production-ready. Many standard features are still missing, and the compiler might crash if you look at it the wrong way.
I am sharing this now because I think the concepts (explicit coordinate spaces and so on) are worth discussing, even if the implementation is still young.
I'd love to hear your feedback on the syntax and the architecture! Feel free to check out the source code, open an issue, or just tell me why my approach to matrix multiplication is wrong.
You can find it here: Cast.
2025-12-29 05:47:00
Every morning, your engineers close their IDE and wait for the Zoom grid to fill. That's money burning.
Not just the obvious cost of salaries idling in a meeting. The true expense is deeper—flow states shattered, focus fragmented, and the slow corrosion of morale as your team performs daily status theater instead of building something that matters.
Eight engineers. One standup. Fifteen minutes scheduled, twenty-two minutes actual. Five days a week.
That's 880 minutes weekly. Nearly 15 person-hours. At a loaded cost of $100/hour—conservative for senior engineers—that's $1,500 weekly. $78,000 annually.
For one team. But that's just the visible cost.
The real damage happens at the margins. The engineer who surfaces from deep work thirty minutes early because starting anything meaningful before standup is pointless. The twelve minutes of Slack noise and context recovery after. The mental taxation of preparing your update.

Add those margins and you're looking at 25-30 hours of disrupted capacity weekly. $130,000-$150,000 in annual impact.
It's 10 AM. The Zoom grid populates.
Marcus goes first. "I finished the pagination bug, moving to the Redis cache invalidation issue today. No blockers."
Everyone nods. No one knows what pagination bug.
Raj describes a complex SSO integration issue he's debugging. He's been on it for three days. The explanation takes four minutes. Six people on the call have never touched SSO.
When Raj finishes, someone says "Let me know if you need anything" and the standup moves on.
Raj doesn't need seven people to know about his SSO struggle. He needs thirty uninterrupted minutes with the one engineer who implemented the previous auth flow. Instead he got an audience and a sympathetic platitude.
The standup optimized for broadcasting. What he needed was connection.
Here's what your team could do with those 25-30 hours weekly:
Write actual documentation. The kind that explains why the payment service has three retry strategies and when to use each.
Reduce your bus factor. Pair program on the hairy parts. Record a walkthrough of the deployment pipeline.
Fix the slow build. The one that adds eight minutes to every PR cycle.

*Actually mentor.* In a focused session where a senior engineer and a junior engineer work through a problem together without an audience.
But you won't do those things. Because standup is on the calendar. Because coordination feels urgent and documentation feels eventual.
The standup isn't just consuming time. It's advertising what you value.
Your team spans three time zones. Your standup is at 9 AM Pacific, noon Eastern. On Tuesday, Lisa in New York has finally untangled a race condition in the websocket handler. It's 11:40 AM. She's in flow. She sees the shapes. She knows the fix.
The Slack reminder fires at 11:50. "Standup in 10 minutes."
She context-switches. Standup happens. By 12:15 PM she's back at her desk. The shapes are gone. She spends twenty minutes re-deriving what she already knew.
The standup didn't help her. It broke her.
If standups are so expensive, why does every team run them?
Standups create the appearance of oversight. Managers feel they know what's happening. Engineers feel visible. Stakeholders believe the team is synchronized.
Standups are insurance against blame. If something falls through the cracks and you had standups, you tried. You had process.
Standups are habit. They're in the Scrum Guide. They're what Agile teams do. Questioning them feels like questioning your professional legitimacy.

## What Actual Coordination Looks Like
If you killed standup tomorrow, what would you need instead?
Async updates with pull-based attention. Engineers post a brief update when something changes. Others subscribe to what's relevant.
Friction-free help requests. A clear protocol for "I'm stuck" that routes to the right person immediately, not at 10 AM tomorrow.
Periodic deep sync. Maybe twice a week, the team meets for real coordination. Not status broadcast—actual conversation.
Transparency through artifacts. Updated tickets. Readable commits. A maintained README.
You can't just delete standup. Your team has muscle memory. Here's a four-week transition:
Week One: Measure the baseline. Track actual duration. Survey the team. Calculate the loaded cost.
Week Two: Introduce async updates in a Slack channel. Make standup opt-in for those who posted updates.
Week Three: Flip the default. Standup becomes a discussion forum. No round-robin updates. If nobody has discussion items, the meeting ends in two minutes.
Week Four: Replace daily standup with two weekly syncs. Maintain async updates. Establish a help protocol.
Maybe you'll discover the daily meeting was load-bearing and bring it back, deliberately this time. More likely, you'll find that engineers appreciate the uninterrupted morning, real blockers get resolved faster, and you've recovered 20+ hours of team capacity per week.
That's $100K+ annually to spend on work that compounds.

## Audit Your Defaults
Daily standup isn't evil. It's just expensive.
For some teams, in some contexts, the cost is worth it. But most teams have never calculated the cost, so they can't know if the value clears the bar.
Start asking the uncomfortable questions: What does this meeting actually cost? What value does it deliver? Would anyone notice if it disappeared?
The Agile-Industrial Complex wants you to believe that cadence equals discipline, that ceremonies equal craftsmanship, that standups are the cost of collaboration.
They're not. They're just meetings. And like every meeting, they should justify their existence or die.
Read more at agilelie.com
2025-12-29 05:44:59
We’ve all seen the demos: an LLM generates a clean React component or a Python script in seconds. But in the real world, engineering isn't just about generation—it's about maintenance. It’s about diving into a 10-year-old Java repo, understanding the legacy context, and fixing a bug without breaking the entire build.
As part of my Mastery Tier submission for my current AI MOOC, I decided to tackle this problem head-on by building RAID-AI. It is a multi-language bug-fixing benchmark designed to evaluate "Green Agents" across Java, Python, and JavaScript.
The Problem: The Benchmarking Gap
Most AI benchmarks are "toy" problems. They exist in a vacuum. To truly test if an agent is ready for a production environment, it needs to face:
Multilinguality: Can it context-switch between the rigid types of Java and the dynamic nature of JS?
Environment Constraints: Can it handle real-world dependencies?
Efficiency: Is the agent solving the problem with minimal tokens, or is it "brute-forcing" the solution?
The Architecture: Under the Hood of RAID-AI
RAID-AI operates as an orchestration layer. It manages three distinct "Project Managers" (Java, Python, and JS) that interface with local bug repositories.
For the Java component, I integrated Defects4J, a curated database of hundreds of reproducible, real-world bugs from open-source Java projects. This wasn't a simple "plug-and-play" situation. To get the environment stable on WSL/Ubuntu, I had to navigate a "dependency minefield."
The Technical "War Story": Perl and Environment Parity
The biggest hurdle was achieving environment parity. Defects4J is built on a Perl-based backend, which led to the infamous String::Interpolate.pm error. I spent a significant portion of the development phase playing "dependency whack-a-mole," manually installing system-level libraries like libstring-interpolate-perl and liblist-moreutils-perl to ensure the benchmark could actually communicate with the Java projects.
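For anyone hitting the same wall, the fix was along these lines (the package names are the ones mentioned above; your system may need more):

```bash
# Install the Perl modules Defects4J's backend expects (Ubuntu/WSL)
sudo apt-get update
sudo apt-get install -y libstring-interpolate-perl liblist-moreutils-perl
```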
This experience highlighted a critical truth in AI Engineering: Infrastructure is the ultimate bottleneck. If your testing environment isn't reproducible, your AI’s "success" is just a hallucination.
The Scoring Rubric: Why "Green" Matters
In RAID-AI, we don't just care about a "Pass" or "Fail." We use a weighted rubric to calculate the Green Agent Score:
Correctness (50%): Does it pass the original test suite?
Code Quality (20%): Is the fix maintainable or is it "spaghetti"?
Efficiency (15%): We track the time and token consumption. A fix that takes 10 minutes and 50k tokens is scored lower than a surgical 2-minute fix.
Minimal Change (15%): We penalize agents that rewrite entire files to fix a single-line logic error.
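As a sketch, the weighting is straightforward to express in code (the field names and 0-to-1 sub-scores here are hypothetical, not RAID-AI's actual API):

```python
# Weighted Green Agent Score: each sub-score is assumed normalized to [0, 1].
WEIGHTS = {"correctness": 0.50, "quality": 0.20,
           "efficiency": 0.15, "minimal_change": 0.15}

def green_agent_score(subscores: dict) -> float:
    """Combine per-dimension sub-scores into a single weighted score."""
    return sum(weight * subscores[name] for name, weight in WEIGHTS.items())

# A correct, clean, but slow fix:
print(green_agent_score({"correctness": 1.0, "quality": 0.8,
                         "efficiency": 0.5, "minimal_change": 1.0}))  # 0.885
```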
By enforcing a 600-second timeout per bug, RAID-AI forces agents to be decisive and computationally efficient.
Lessons from the Mastery Tier
Moving through this MOOC to the Mastery Tier has shifted my focus from "Prompt Engineering" to "System Design." My three biggest takeaways for fellow developers are:
Polyglot Agents are the Future: The next generation of engineers won't be "Python Developers"; they will be "System Orchestrators."
Adversarial Testing: You have to try and break your benchmark before you let an agent near it.
The Importance of Reproducibility: Automated bug-fixing only works if the "Check-out -> Fix -> Test" loop is atomic and indestructible.
Join the Project
RAID-AI is currently initialized with 64 high-priority bugs (17 Java, 17 Python, 30 JS), and this is only the beginning. If you're interested in building autonomous systems that actually work in the real world, I highly recommend checking out the curriculum that guided this build.
👉 Check out the MOOC here: https://agenticai-learning.org/f25
What are you building to test the limits of your agents? Let's discuss in the comments below.
2025-12-29 05:43:54
The semantic-release package (https://github.com/semantic-release/semantic-release) automates the version management and release process of a project. It determines the next version number based on the commit messages that adhere to the Conventional Commits specification, generates release notes, and publishes the release automatically.
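For example, under the default Conventional Commits rules, the commit type determines the bump:

```text
fix: handle empty payloads        -> patch release (1.2.3 -> 1.2.4)
feat: add CSV export              -> minor release (1.2.3 -> 1.3.0)
feat!: drop support for Node 14   -> major release (1.2.3 -> 2.0.0)
```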
However, some developers prefer more control over when to increment major and minor versions. Companies like JetBrains use the year as a major version and an auto-incrementing integer as a minor version as illustrated in the image below.
Since semantic-release offers various useful built-in tools, such as a release notes generator, it is worth keeping it in place and changing only the bump logic. That is what I will show in this post.
My approach is an Azure DevOps pipeline that runs semantic-release in the following steps: request the bump type and desired version number as pipeline parameters, require manual approval for any non-patch bump, and then run semantic-release with those parameters exposed as environment variables.
The code snippet below shows the Azure DevOps pipeline YAML for these steps. The parameters request the bump type (defaulting to patch) and the desired version number; their values are set as environment variables that the semantic-release configuration reads.
parameters:
  - name: bumpType
    type: string
    default: "patch"
    values:
      - major
      - minor
      - patch
  - name: bumpNumber
    type: string
    default: "0"

pool:
  vmImage: ubuntu-latest

jobs:
  - job: approval
    pool: server
    steps:
      - task: ManualValidation@1
        ...
        condition: ne('${{ parameters.bumpType }}', 'patch')

  - job: create_tag
    dependsOn: approval
    steps:
      ...
      - script: |
          npx semantic-release
        displayName: Run semantic-release setting ${{ parameters.bumpType }} version to ${{ parameters.bumpNumber }}
        env:
          SEMANTIC_RELEASE_BUMP_TYPE: ${{ parameters.bumpType }}
          SEMANTIC_RELEASE_BUMP_NUMBER: ${{ parameters.bumpNumber }}
That’s it! With this setup, you can manually bump the versions of your project without having to worry about commit messages.
The following snippet shows the semantic-release plugin configuration in release.config.cjs. The @semantic-release/commit-analyzer plugin (which normally derives the bump from commit messages) is configured to always release the bump type chosen by the user.
// file release.config.cjs
module.exports = {
  ...
  plugins: [
    ...
    ['@semantic-release/commit-analyzer', {
      releaseRules: [
        { release: process.env.SEMANTIC_RELEASE_BUMP_TYPE }
      ]
    }],
    './verify-release.js'
  ]
};
The verify-release.js plugin verifies that the new version is incremented as expected. This ensures that if the pipeline is executed a second time with the same input, it fails, because the bump would land on an undesired value (in the JetBrains example, setting the major version to next year's number). You can see the verify-release.js code in the next snippet.
// file verify-release.js
module.exports = {
  verifyRelease: async (pluginConfig, context) => {
    const { lastRelease = {}, nextRelease = {}, logger = console } = context;
    const bumpType = process.env.SEMANTIC_RELEASE_BUMP_TYPE;
    const bumpNumber = process.env.SEMANTIC_RELEASE_BUMP_NUMBER;

    logger.log('Verifying expected release.');

    if (!bumpType) {
      logger.log('SEMANTIC_RELEASE_BUMP_TYPE not set — skipping version verification.');
      return;
    }
    if (bumpType == 'patch') {
      logger.log('Bump type set to patch. Nothing to verify.');
      return;
    }

    // The regex expects a plain MAJOR.MINOR.PATCH version.
    const actual = nextRelease && nextRelease.version;
    const match = actual && actual.match(/^(\d+)\.(\d+)\.\d+$/);
    if (!match) {
      throw new Error(`Invalid version format: ${actual}`);
    }

    const actualMajor = Number(match[1]);
    const actualMinor = Number(match[2]);

    if (bumpType == 'major' && actualMajor != bumpNumber) {
      logger.error(`Major version mismatch: expected ${bumpNumber} but will publish ${actualMajor}`);
      throw new Error(`Version verification failed: expected major version ${bumpNumber}, got ${actualMajor}`);
    }
    if (bumpType == 'minor' && actualMinor != bumpNumber) {
      logger.error(`Minor version mismatch: expected ${bumpNumber} but will publish ${actualMinor}`);
      throw new Error(`Version verification failed: expected minor version ${bumpNumber}, got ${actualMinor}`);
    }

    logger.log(`Version verification passed: ${actual}`);
  }
}
The picture below shows an example of a pipeline execution that bumped the major version.
Once the release is ready, you can begin building your application. For this example, I’ve created a simple hello world CLI in Go.
The pipeline’s steps are as follows: check out the repository with full history and tags, set the build number from git describe, install the Go toolchain, build the binary, zip it with the version in the file name, and publish the artifact. Here’s a code snippet showing this pipeline:
variables:
  appName: hello-world
  buildDir: build

steps:
  - checkout: self
    fetchTags: true
    fetchDepth: 0

  - script: |
      export VERSION=$(git describe --tags)
      echo "##vso[build.updatebuildnumber]${VERSION}"
    displayName: "Set build number"

  - task: GoTool@0
    inputs:
      version: "1.25"

  - script: |
      mkdir -p $(buildDir)
      go build -o $(buildDir)/$(appName) ./cmd
    displayName: "Build Go binary"

  - script: |
      cd $(buildDir)
      zip $(appName)-$(Build.BuildNumber).zip $(appName)
    displayName: "Create ZIP with version"

  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: "$(buildDir)"
      ArtifactName: "release"
      publishLocation: "Container"
The following image illustrates one successful build.
The code used in this post is available in the GitHub repository https://github.com/dmo2000/semantic-release-manual.
2025-12-29 05:38:49
I still vividly remember that Friday midnight. I, a man in my forties who should have been at home enjoying the weekend, was instead in a cold server room, the hum of fans buzzing in my ears, and a stream of error logs scrolling endlessly on the terminal before me. What was supposed to be a "simple" version update had turned into a disaster. The service wouldn't start, the rollback script failed, and on the other end of the phone was the furious roar of a client. At that moment, staring at the screen, I had only one thought: "There has to be a better way."
We old-timers grew up in an era when the term "maintenance window" was a fact of life. We were used to pausing services in the dead of night, replacing files, and then praying that everything would go smoothly. Deployment was a high-stakes gamble. If you won, you made it to dawn unscathed; if you lost, it was an all-night battle. This experience forged in us an almost paranoid pursuit of "stability" and "reliability."
As technology evolved, we got many tools to try to tame this beast of deployment. From handwritten Shell scripts to powerful process managers and the wave of containerization. Every step was an improvement, but it always seemed to fall just short of the ultimate dream: seamless, imperceptible, zero-downtime updates. Today, I want to talk to you about the nearly lost art of the "hot restart," and how I rediscovered this elegance and composure within the ecosystem of a modern Rust framework.
How many of you here have written or maintained a deployment script like the one below? Please raise your hand. 🙋‍♂️
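Something like this, a reconstruction in the spirit of the pitfalls listed below rather than any one real script:

```bash
#!/bin/bash
# deploy.sh - the classic "stop, rebuild, pray" routine
PID=$(cat myapp.pid)
kill $PID            # send SIGTERM and hope the process exits
sleep 5
kill -9 $PID         # force-kill whatever is left

git pull
mvn clean install

nohup java -jar target/myapp.jar > app.log 2>&1 &
echo $! > myapp.pid
```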
Does this script look familiar? It's simple, direct, and "works" in most cases. But as a veteran who has stumbled through countless pitfalls, I can spot at least ten places where it could go wrong:
1. **Unreliable shutdown.** `kill $PID` just sends a SIGTERM signal. If the process can't respond to this signal due to a bug or I/O blocking, it gets forcibly killed by `kill -9` after the 5-second sleep. What does this mean? It means data might not have been saved, connections not closed, state not synchronized. It's a ticking time bomb.
2. **Stale PID files.** The `myapp.pid` file might still contain an old, invalid PID. The script will try to kill a non-existent process and then start a new instance, leading to two instances running simultaneously, fighting for ports and resources.
3. **No error handling.** `git pull` and `mvn clean install` can both fail. Network issues, code conflicts, dependencies failing to download... an error at any step will interrupt the script, leaving you with a stopped service and no new one to replace it.

This approach, I call it "brute-force" deployment. It's fraught with risk, and every execution is a nail-biter. It works, but it's not elegant, let alone reliable.
Later, we got more professional tools, like PM2 in the Node.js world, or the general-purpose systemd. This was a huge step forward. They provided powerful features like process daemonization, log management, and performance monitoring.
With PM2, a deployment might be simplified to a single command:
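For an app registered under the illustrative name my-app:

```bash
pm2 reload my-app    # rolling restart of every instance of "my-app"
```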
`pm2 reload` attempts to restart your application instances one by one, thus achieving a so-called "zero-downtime" reload. For systemd, you might modify its service unit file and then run `systemctl restart my-app.service`.
These tools are fantastic, and I still use them in many projects today. But they are still not the perfect solution. Why?
1. **Externalized configuration.** Your deployment behavior lives in PM2's command-line arguments or systemd's verbose unit file syntax. Your application doesn't know it's being "managed."
2. **Ecosystem lock-in.** PM2 primarily serves the Node.js ecosystem. While it can run programs in other languages, it doesn't feel "native." systemd is part of the Linux system and is not cross-platform.
3. **Opaque internals.** How does `pm2 reload` achieve zero downtime? It relies on "cluster mode," but the configuration and workings of cluster mode are a black box to many developers. When problems arise, debugging is extremely difficult.

These tools are like hiring a nanny for your application. The nanny is very capable, but she is not family. She doesn't truly "understand" what your application is thinking, nor does she know if your application has some "last words" to say before restarting.
Now, let's see how server-manager from the Hyperlane ecosystem solves this problem. It takes a completely different path: stop relying on external tools and let the application manage itself.
Consider this code:
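Something in this spirit (a minimal sketch: only `set_start_hook` and `set_stop_hook` come from the discussion below; the type name, the other setters, and the runner are my assumptions, not the crate's verbatim API):

```rust
// Sketch only - not server-manager's verbatim API. `set_start_hook` and
// `set_stop_hook` are the hooks discussed in this post; `ServerManager`,
// `set_pid_file`, and `run_server` are hypothetical stand-ins.
fn main() {
    let mut manager = ServerManager::new();
    manager.set_pid_file("myapp.pid");   // PID management lives in-process
    manager.set_start_hook(|| {
        // load configuration before the service starts accepting traffic
    });
    manager.set_stop_hook(|| {
        // close database connections, flush in-memory state
    });
    manager.run_server(|| {
        // bind the listener, register routes, serve requests
    });
}
```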
The philosophy of this code is completely different. The logic of service management (PID file, hooks, daemonization) is perfectly encapsulated by a Rust library and becomes part of our application. We no longer need to write Shell scripts to guess PIDs or configure systemd units. Our application, through server-manager, has the innate ability to manage itself.
This internalized approach brings several huge benefits:
1. **Lifecycle hooks.** `set_start_hook` and `set_stop_hook` are the masterstrokes. We can load configurations before the service starts, or gracefully close database connections and save in-memory data before it stops. The application gets a chance to deliver its "last words," which is crucial for ensuring data consistency.
2. **Cross-platform consistency.** `server-manager` is designed with both Windows and Unix-like systems in mind, handling platform differences internally. The same code runs everywhere.
This is where hot-restart truly shines. It follows the same design philosophy as server-manager, internalizing the update logic into the application.
Imagine your application needs an update. You just need to send a signal to the running process (like SIGHUP) or notify it through another IPC mechanism. Then, the hot-restart logic inside the application is triggered.
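On the operations side, the trigger can be as simple as a signal (the PID-file path is illustrative):

```bash
kill -HUP $(cat myapp.pid)   # ask the running process to hot-restart itself
```

And on the application side, the hook registration might look like this sketch (`before_restart_hook` is the name used below; the setter is my assumption):

```rust
manager.set_before_restart_hook(|| {
    // finish in-flight work, flush state, release non-socket resources
});
```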
The power contained in this piece of code is astounding. Let's break down the magic that might be happening behind the hot_restart function:
1. **Signal listening.** The `hot_restart` logic listens for a specific signal.
2. **Graceful preparation.** It `await`s the `before_restart_hook` we provided. This is the most critical step! It gives us a precious opportunity to take care of all "unfinished business."
3. **Build before switching.** `hot_restart` calls cargo commands (`check`, `build`) to compile our code in the background. If the compilation fails, the restart process is aborted, and the old process continues to provide service without interruption. Never deploy a faulty version.
4. **Socket handover.** The newly built process takes over the listening socket and begins accepting new connections on that port. To the operating system kernel, the entity listening on this port has simply changed from one process to another. Requests in the connection queue are not lost at all. To the clients, they don't even feel any change.
From clumsy Shell scripts to powerful external managers, and now to the fully internalized server-manager and hot-restart we see today, I see a clear evolutionary path. The destination of this path is to transform deployment from an uncertain ritual that requires prayer into a confident, deterministic engineering operation.
This integrated philosophy is one of the biggest surprises the Rust ecosystem has given me. It's not just about performance and safety; it's about a new, more reliable philosophy of building and maintaining software. It takes the complex, business-logic-disconnected knowledge that once belonged to the "ops" domain and brings it back inside the application using the language developers know best: code.
Next time you're anxious about a late-night deployment or worried about the risk of service interruption, remember that we deserve better tools and a more composed, elegant development experience. It's time to say goodbye to the wild west of the past and embrace this new era of deployment. 😊