2026-01-10 12:54:33
Hello — I’m working on a project that means a lot to me and to the community it’s for, but I’ve hit a wall and could really use some outside perspective.
I’m building a small suite of offline, privacy-first desktop tools to help people track different parts of their HRT journey: medication logs, journaling, cycle tracking, resource saving, a prototype voice-training tool, and the hardest tool to build so far, the body-change mapper. Each app works on its own, stores data locally, and avoids accounts, cloud sync, or analytics. The long-term plan is to make updates and new tools easier to ship, combine everything into one cohesive app, and eventually explore a secure web version.
The project’s GitHub can be found here.
The individual tools are coming along well — but now that I’m trying to think about unifying them, I’m running into some challenges:
🔧 Where I’m stuck
How to structure a combined app without making the codebase overwhelming
How to design a shared data model that still respects local‑only storage
How to keep the UI accessible, simple, and consistent across tools
Whether I should refactor everything first or start building the unified shell
How to plan for a future web version without over‑engineering the desktop one
I’ve been staring at this for too long, and I think I’ve lost the “fresh eyes” needed to make the next move.
💬 What I’m looking for
Advice from people who’ve built multi‑tool apps or modular desktop suites
Thoughts on structuring shared components, storage, or UI patterns
Examples of similar projects or architectures
General guidance on how to approach “unifying” several standalone tools
Even just “here’s how I’d think about it” perspectives
I’m not looking for someone to rewrite my project — just some direction, patterns, or mental models that could help me get unstuck.
🌱 In conclusion
This project is meant to support people navigating transition in a safe, private, offline way. Accessibility and autonomy are core values here. I want to build something that genuinely helps people, and I want to do it thoughtfully — but right now I’m spinning my wheels.
If you have experience with modular design, PySide6, app suites, or even just strong opinions about architecture, I’d love to hear from you.
Thanks for reading, and thanks in advance for any guidance. It means a lot.
2026-01-10 12:48:51
Finding the similarity between two pieces of text is a foundational challenge in computer science. Instead of just counting how many characters we change, this problem asks us to consider the specific "cost" of each character based on its ASCII value.
You're given:
s1 and s2.Your goal:
This problem is a variation of the Longest Common Subsequence (LCS) or Edit Distance problems. The core idea is to decide, for every character pair, whether to keep them or delete them to reach a state where both strings are equal.
We use Dynamic Programming (DP) with memoization to explore these choices:
If s1[i] and s2[j] are the same, it costs us 0 to keep them. We simply move to the next pair of characters.
If they differ, there are two options:
Delete s1[i] and pay its ASCII price, then compare the rest of s1 with the current s2.
Delete s2[j] and pay its ASCII price, then compare the current s1 with the rest of s2.
The Optimization: We always choose the path that yields the minimum sum. By storing the results in a 2D array (dp), we avoid calculating the same sub-problem multiple times.
C++:

class Solution {
public:
    int helper(string& s1, string& s2, int i, int j, vector<vector<int>>& dp) {
        int n = s1.size();
        int m = s2.size();
        // If s1 is exhausted, delete remaining characters of s2
        if (i == n) {
            int sum = 0;
            for (int k = j; k < m; k++) sum += s2[k];
            return sum;
        }
        // If s2 is exhausted, delete remaining characters of s1
        if (j == m) {
            int sum = 0;
            for (int k = i; k < n; k++) sum += s1[k];
            return sum;
        }
        if (dp[i][j] != -1) return dp[i][j];
        if (s1[i] == s2[j]) {
            // Characters match, no cost added
            return dp[i][j] = helper(s1, s2, i + 1, j + 1, dp);
        } else {
            // Choice 1: Delete s1[i]
            int deleteS1 = s1[i] + helper(s1, s2, i + 1, j, dp);
            // Choice 2: Delete s2[j]
            int deleteS2 = s2[j] + helper(s1, s2, i, j + 1, dp);
            return dp[i][j] = min(deleteS1, deleteS2);
        }
    }

    int minimumDeleteSum(string s1, string s2) {
        int n = s1.size();
        int m = s2.size();
        vector<vector<int>> dp(n, vector<int>(m, -1));
        return helper(s1, s2, 0, 0, dp);
    }
};
Python:

class Solution:
    def minimumDeleteSum(self, s1: str, s2: str) -> int:
        n, m = len(s1), len(s2)
        dp = [[-1] * m for _ in range(n)]

        def helper(i, j):
            # Base case: s1 exhausted
            if i == n:
                return sum(ord(c) for c in s2[j:])
            # Base case: s2 exhausted
            if j == m:
                return sum(ord(c) for c in s1[i:])
            if dp[i][j] != -1:
                return dp[i][j]
            if s1[i] == s2[j]:
                dp[i][j] = helper(i + 1, j + 1)
            else:
                # Compare cost of deleting from s1 vs deleting from s2
                delete_s1 = ord(s1[i]) + helper(i + 1, j)
                delete_s2 = ord(s2[j]) + helper(i, j + 1)
                dp[i][j] = min(delete_s1, delete_s2)
            return dp[i][j]

        return helper(0, 0)
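A quick sanity check for the Python version, using the example from the problem statement: for s1 = "sea" and s2 = "eat", deleting 's' (ASCII 115) from s1 and 't' (ASCII 116) from s2 makes both strings "ea", for a total cost of 231.

print(Solution().minimumDeleteSum("sea", "eat"))  # expected: 231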
JavaScript:

/**
 * @param {string} s1
 * @param {string} s2
 * @return {number}
 */
var minimumDeleteSum = function(s1, s2) {
    const n = s1.length;
    const m = s2.length;
    const dp = Array.from({ length: n }, () => Array(m).fill(-1));

    function helper(i, j) {
        // Base case: s1 exhausted
        if (i === n) {
            let sum = 0;
            for (let k = j; k < m; k++) sum += s2.charCodeAt(k);
            return sum;
        }
        // Base case: s2 exhausted
        if (j === m) {
            let sum = 0;
            for (let k = i; k < n; k++) sum += s1.charCodeAt(k);
            return sum;
        }
        if (dp[i][j] !== -1) return dp[i][j];
        if (s1[i] === s2[j]) {
            dp[i][j] = helper(i + 1, j + 1);
        } else {
            // Minimum of deleting s1[i] or s2[j]
            const deleteS1 = s1.charCodeAt(i) + helper(i + 1, j);
            const deleteS2 = s2.charCodeAt(j) + helper(i, j + 1);
            dp[i][j] = Math.min(deleteS1, deleteS2);
        }
        return dp[i][j];
    }

    return helper(0, 0);
};
Key takeaways: ord() in Python or charCodeAt() in JavaScript allows us to treat text as numerical data for optimization. It also helps to picture the two branches at each step (delete from s1 vs. delete from s2) as a tree of recursive calls that memoization prunes.
This problem is a classic example of how minor tweaks to a standard algorithm can change its application. In real-world software engineering, this logic is used in bioinformatics to align DNA sequences and in version control systems (like Git) to calculate the "diff" between two file versions. Mastering these weighted string problems will make you much more effective at building search engines or comparison tools.
2026-01-10 12:48:28
Hey Cloud Builders 👋
Welcome to Day 30 of the #100DaysOfCloud Challenge!
Today, we are helping the Nautilus team connect a private server to the outside world. To keep costs low, we are passing on the managed NAT Gateway and building our own NAT Instance. This allows our private instance to securely upload files to S3 without being exposed to the public internet.
This task is part of my hands-on practice on the KodeKloud Engineer platform, which I highly recommend for anyone looking to master real-world DevOps scenarios.
Today's task: create a public subnet named datacenter-pub-subnet, launch a NAT instance named datacenter-nat-instance, and verify that the private server can upload to the datacenter-nat-31923 S3 bucket.
A private subnet has no direct path to the internet. A NAT (Network Address Translation) device acts as a "middleman" that sends requests out on behalf of the private server.
NAT Instance vs. Gateway: A NAT Instance is a regular EC2 instance configured to perform routing. It is cheaper than a managed NAT Gateway but requires manual setup and management.
Source/Destination Check: By default, EC2 instances only accept traffic meant for them. To act as a NAT, we must disable this check so the instance can forward traffic from other sources.
Public Subnet Requirement: The NAT Instance must live in a public subnet with a route to an Internet Gateway (IGW) to reach the outside world.
We’ll move from Network Setup → Instance Configuration → Routing.
Step 1 (Network Setup): Add a public subnet named datacenter-pub-subnet to the existing VPC.
Step 2 (Instance Configuration): Launch datacenter-nat-instance in the public subnet and disable its Source/Destination check.
[Image of the AWS EC2 console showing the dialog to disable Source/Destination check on an instance]
Step 3 (Routing): Edit the route table associated with datacenter-priv-subnet and add a 0.0.0.0/0 route that targets datacenter-nat-instance.
Step 4 (Verification): datacenter-priv-ec2 is already running a script that uploads datacenter-test.txt; once the route is in place, the file lands in datacenter-nat-31923.
You’ve just manually built a core networking component! Understanding NAT instances gives you deep insight into how Linux routing and AWS VPC networking interact. This is foundational knowledge for troubleshooting complex connectivity issues.
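If you prefer scripting to console clicks, here is a minimal boto3 sketch of the two non-obvious API calls from Steps 2 and 3. The instance and route-table IDs are placeholders you would look up in your own environment:

import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs -- substitute the real NAT instance and the private
# subnet's route table from your environment.
nat_instance_id = "i-0123456789abcdef0"
priv_route_table_id = "rtb-0123456789abcdef0"

# Step 2: disable the Source/Destination check so the instance can
# forward traffic that is not addressed to it.
ec2.modify_instance_attribute(
    InstanceId=nat_instance_id,
    SourceDestCheck={"Value": False},
)

# Step 3: point the private subnet's default route at the NAT instance.
ec2.create_route(
    RouteTableId=priv_route_table_id,
    DestinationCidrBlock="0.0.0.0/0",
    InstanceId=nat_instance_id,
)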
If you want to try these tasks yourself in a real AWS environment, check out:
👉 KodeKloud Engineer - Practice Labs
It’s where I’ve been sharpening my skills daily!
2026-01-10 12:42:08
Dear diary, today I discovered that leaving the comfortable embrace of UEFI is like moving out of your parents' house at 40. Everything that used to work magically now requires you to actually understand how the world functions.
It was 9am when I sat down with my coffee, confident that transitioning from UEFI to bare metal would be straightforward. After all, I had successfully implemented AHCI storage and a key-value store. How hard could it be to set up a Global Descriptor Table and start running my own kernel? The hubris was palpable.
The plan seemed reasonable: call ExitBootServices, set up a proper GDT for 64-bit long mode, get polling keyboard input working, and run a kernel shell. I'd even built a logging system that writes directly to the SSD so I could debug across reboots. What could possibly go wrong?
Everything. Everything could go wrong.
The first attempt was promising. ExitBootServices succeeded, the GDT loaded without complaint, and I was running in kernel mode. I could even see my kernel shell prompt. Victory seemed assured until I tried to enable interrupts with a confident sti instruction.
The machine triple-faulted immediately.
Now, a triple fault is the x86 processor's way of saying "I give up, you're on your own" before performing the digital equivalent of flipping the table and storming out. It's simultaneously the most and least helpful error condition - you know something is catastrophically wrong, but the CPU has decided that telling you what would be too much effort.
I spent the next two hours in what I like to call the "interrupt denial phase." Surely it wasn't the interrupts themselves. Maybe the GDT was wrong. I rewrote it three times, each iteration more baroque than the last. Maybe the stack was corrupted. I added stack canaries and verification code. Maybe UEFI had left some state that was interfering. I tried clearing every register I could think of.
The machine continued to triple fault with the same mechanical precision with which I continued to make coffee.
By noon, I had accepted that interrupts were indeed the problem and decided to punt. Polling keyboard input wasn't elegant, but it would work. I implemented a simple PS/2 controller polling loop and got basic keyboard input working. The kernel shell was functional, and I could even save logs to the SSD. Milestone 5 was technically complete, but it felt like winning a race by getting out and pushing the car across the finish line.
But you know what they say about kernel development - if you're not moving forward, you're moving backward into a triple fault. So naturally, I decided to tackle interrupts properly for Milestone 6.
The afternoon was spent in the IDT mines, crafting interrupt service routines with the careful precision of a medieval scribe copying manuscripts. I wrote elegant macro systems that generated perfect stack frames. I created sophisticated handlers that could gracefully manage any interrupt condition. The code was beautiful, abstracted, and completely broken.
The first test with interrupts enabled produced a Debug Exception (Vector 1) immediately after sti. This was actually progress - instead of a triple fault, I was getting a specific exception. The CPU was at least trying to tell me what was wrong, even if what it was telling me made no sense.
Debug exceptions fire when you hit a debug register breakpoint or when the trap flag is set for single-stepping. I wasn't using any debugger, and I certainly hadn't set the trap flag intentionally. But x86 processors are like that relative who remembers every slight from thirty years ago - they hold onto state in the most inconvenient places.
It took me another hour to realize that UEFI might have left debugging state enabled. I added code to clear all the debug registers (DR0 through DR7) and the trap flag in RFLAGS. The debug exception disappeared, but now I had a new problem: the timer interrupt wasn't firing.
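The fix amounted to a few lines of privileged register writes. Something like the following sketch (the names are illustrative, not my exact code):

// A sketch of the cleanup: zero the breakpoint registers and clear the
// Trap Flag (bit 8 of RFLAGS). DR4/DR5 are aliases of DR6/DR7, so only
// the real registers are touched.
static inline void clear_uefi_debug_state(void) {
    asm volatile (
        "xor %%rax, %%rax\n\t"
        "mov %%rax, %%dr0\n\t"
        "mov %%rax, %%dr1\n\t"
        "mov %%rax, %%dr2\n\t"
        "mov %%rax, %%dr3\n\t"
        "mov %%rax, %%dr6\n\t"
        "mov %%rax, %%dr7\n\t"
        "pushfq\n\t"
        "andq $~0x100, (%%rsp)\n\t"   /* clear TF */
        "popfq\n\t"
        ::: "rax", "cc", "memory"
    );
}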
This began what I now refer to as "the silent treatment phase" of debugging. The PIC was configured, the IDT was set up, interrupts were enabled, but my timer tick counter remained stubbornly at zero. The system wasn't crashing, which was somehow more frustrating than when it was exploding spectacularly.
I verified the PIC configuration seventeen times. I read Intel manuals until my eyes bled. I checked and rechecked the IDT entries. Everything looked correct on paper, but the hardware seemed to be politely ignoring my carefully crafted interrupt handlers.
The breakthrough came at 6pm when I was explaining the problem to my rubber duck (a literal rubber duck I keep on my desk for debugging purposes - don't judge). As I described my elegant ISR macro system, I realized the problem: I was being too clever.
My macros were generating complex stack frame management code that was somehow corrupting the interrupt return address. When I looked at the actual assembly output, it was a nightmare of stack manipulation that would make a spaghetti factory jealous.
So I threw it all away and wrote the simplest possible interrupt handlers using naked functions with inline assembly. No fancy macros, no elegant abstractions, just the bare minimum code to handle an interrupt and return cleanly:
__attribute__((naked)) void isr_timer(void) {
    asm volatile (
        "push %rax\n"
        "incq g_timer_ticks\n"
        "movb $0x20, %al\n"
        "outb %al, $0x20\n"   // Send EOI
        "pop %rax\n"
        "iretq"
    );
}
It was inelegant. It was primitive. It worked perfectly.
The moment I enabled interrupts with the new handlers, the timer immediately started ticking at exactly 100 Hz. The keyboard interrupt began capturing input flawlessly. After eight hours of fighting with sophisticated abstractions, the solution was to write interrupt handlers like it was 1985.
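For the record, getting the PIT to fire at 100 Hz is the standard 8254 incantation. A sketch, not necessarily my exact code:

// Program PIT channel 0 for ~100 Hz: divide the 1,193,182 Hz input
// clock by 11931 (integer division, so the rate is approximate).
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    asm volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

void pit_init_100hz(void) {
    uint16_t divisor = 1193182 / 100;   // ~11931
    outb(0x43, 0x36);                   // channel 0, lobyte/hibyte, mode 3 (square wave)
    outb(0x40, divisor & 0xFF);         // low byte
    outb(0x40, (divisor >> 8) & 0xFF);  // high byte
}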
There's something profoundly humbling about spending an entire day implementing "modern" kernel architecture only to discover that the most primitive approach is the most reliable. It's like spending hours crafting a gourmet meal and then realizing that a peanut butter sandwich would have been both more satisfying and less likely to poison you.
By evening, I had a fully functional interrupt-driven kernel. The timer was ticking, the keyboard was responsive, and the kernel shell worked flawlessly. I could watch the timer ticks increment in real-time, each one a small victory over the chaos of bare metal programming.
The final test was letting the system run while I went to make dinner. When I returned, the timer showed 3,432 ticks - just over 34 seconds of stable operation at 100 Hz. No crashes, no mysterious hangs, no triple faults. Just a kernel quietly doing its job, handling dozens of interrupts per second with the reliability of a Swiss timepiece.
I saved the kernel log to review later:
[KERNEL] Enabling interrupts (STI)...
[KERNEL] Interrupts ENABLED.
[KERNEL] Timer ticks after delay: 199
[KERNEL] Kernel mode active (interrupt mode)
Those simple log messages represent eight hours of debugging, three complete rewrites of the interrupt system, and more coffee than any human should consume in a single day. But they also represent something more: a functioning kernel that has successfully transitioned from UEFI's protective embrace to the harsh reality of bare metal operation.
Looking back, the lessons are clear. First, x86 processors remember everything and forgive nothing - always clear the debug registers when transitioning from UEFI. Second, the PIC hasn't changed significantly since the 1980s, and trying to abstract away its quirks usually makes things worse. Third, when sophisticated solutions fail, sometimes the answer is to write code like it's three decades ago.
Most importantly, I learned that there's a particular satisfaction in building something from first principles, even when those principles seem designed to maximize human suffering. Every successful interrupt is a small victory over the entropy of the universe. Every timer tick is proof that somewhere in the chaos of transistors and electrons, my code is executing exactly as intended.
Tomorrow I'll tackle content-addressed storage and time travel debugging. Because apparently, I haven't suffered enough yet, and the beauty of hobby OS development is that there's always another layer of complexity waiting to humble you.
But tonight, I'm going to sit here and watch my timer tick counter increment, one interrupt at a time, and pretend that building an operating system is a reasonable way to spend one's free time.
2026-01-10 12:40:56
Chapter 13: The Table of Truth
The Wednesday rain beat a steady rhythm against the archive windows, blurring the Manhattan skyline into smears of gray and slate. Inside, the air smelled of old paper and the fresh, sharp scent of lemon.
Ethan stood at the long oak table, surrounded by scraps of paper. He was typing furiously, running a command, frowning, deleting a line, and running it again.
"Lemon poppyseed muffin," he said, sliding a white bag across the desk without looking up. "And a London Fog. Earl Grey, vanilla syrup, steamed milk."
Eleanor accepted the tea. "You seem... agitated, Ethan."
"I'm fixing a bug in the username validator," Ethan muttered. "I fix one case, break another. I fix that one, break the first one. I've been running go run main.go for an hour, just changing the input variable manually to see what happens."
Eleanor set her tea down slowly. "You are testing by hand?"
"How else do I do it?"
"Ethan, you are a human being. You are creative, intuitive, and prone to boredom. You are terrible at repetitive tasks." She opened her laptop. "Computers are uncreative and boring, but they never get tired. We do not test by hand. We write code to test our code."
"Go does not require you to install a heavy testing framework," Eleanor began. "It is built in. You simply create a file ending in _test.go next to your code."
She created a file named validator_test.go.
package main

import "testing"

func TestIsValidUsername(t *testing.T) {
    result := IsValidUsername("admin")
    expected := true
    if result != expected {
        t.Errorf("IsValidUsername('admin') = %v; want %v", result, expected)
    }
}
"The function must start with Test and take a pointer to testing.T. This t is your control panel. You use it to report failures."
She ran go test in the terminal:
PASS
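(The validator itself never appears on Eleanor's screen. A minimal IsValidUsername consistent with the cases in this chapter might look like the sketch below; the exact rules, 3 to 20 characters, letters, digits, and underscores, no leading digit, are assumptions.)

package main

import "unicode"

// IsValidUsername is a plausible stand-in for Ethan's code. The rules
// here are inferred from the test table, not taken from the chapter.
// Note that it does not yet reject a trailing underscore, so Eleanor's
// "Ends with underscore" case later in the chapter would fail first.
func IsValidUsername(name string) bool {
    if len(name) < 3 || len(name) > 20 {
        return false
    }
    for i, r := range name {
        if !unicode.IsLetter(r) && !unicode.IsDigit(r) && r != '_' {
            return false
        }
        if i == 0 && unicode.IsDigit(r) {
            return false
        }
    }
    return true
}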
"Okay," Ethan said. "But I have twenty different cases. Empty strings, symbols, too long, too short..."
"So you write twenty assertions?" Eleanor asked. "Copy and paste the same if block twenty times?"
"I guess?"
"No." Eleanor shook her head. "That is how you drown in boilerplate. In Go, we use a specific idiom. We treat test cases as data, not code. We build a Table of Truth."
She wiped the screen and began typing a structure that looked less like a script and more like a ledger.
package main

import "testing"

func TestIsValidUsername(t *testing.T) {
    // 1. Define the table
    // An anonymous struct slice containing all inputs and expected outputs
    tests := []struct {
        name     string // A description of the test case
        input    string // The input to the function
        expected bool   // What we expect to get back
    }{
        {"Valid user", "ethan_rose", true},
        {"Too short", "ab", false},
        {"Too long", "this_username_is_way_too_long_for_our_system", false},
        {"Empty string", "", false},
        {"Contains symbols", "ethan!rose", false},
        {"Starts with number", "1player", false},
    }

    // 2. Loop over the table
    for _, tt := range tests { // tt = "test table" entry
        // 3. Run the subtest
        t.Run(tt.name, func(t *testing.T) {
            got := IsValidUsername(tt.input)
            if got != tt.expected {
                // We use Errorf, NOT Fatalf.
                // Errorf marks failure but continues to the next case.
                t.Errorf("IsValidUsername(%q) = %v; want %v", tt.input, got, tt.expected)
            }
        })
    }
}
"Look at this structure," Eleanor said, tracing the slice with her finger. "The logic—the if check, the execution—is written exactly once. The complexity of the test is separated from the complexity of the data."
Ethan stared. "It's... a spreadsheet."
"Precisely. It is a table. If you find a new bug—say, usernames can't end with an underscore—you don't write a new function. You just add one line to the struct slice."
She typed:
{"Ends with underscore", "ethan_", false},
"And you are done. The harness handles the rest. Note that I used t.Errorf, not t.Fatalf. If I used Fatal, the first failure would stop the entire test. With Error, we see all the failures at once."
Subtests with t.Run
"Notice the t.Run line," Eleanor pointed out. "This creates a Subtest. If the 'Empty string' case fails, Go will tell you exactly which one failed by name."
She intentionally broke the code to demonstrate.
--- FAIL: TestIsValidUsername (0.00s)
    --- FAIL: TestIsValidUsername/Empty_string (0.00s)
        validator_test.go:26: IsValidUsername("") = true; want false
FAIL
"It gives you the context immediately. You fix that specific case, run the tests again, and see the green PASS. It is a feedback loop. Write a failing case in the table. Fix the code. Watch it pass. Repeat."
Ethan rubbed his eyes. "This would have saved me three hours this morning."
"It will save you three years over your career," Eleanor said, taking a bite of the lemon poppyseed muffin. "The table-driven pattern forces you to think about edge cases. When you see the table, your brain naturally asks: 'What is missing? Did I check negative numbers? Did I check nil?'"
"Does this work for errors too?" Ethan asked. "Like the error handling we talked about last time?"
"It shines for errors," Eleanor smiled.
func TestParseConfig(t *testing.T) {
    tests := []struct {
        name     string
        filename string
        wantErr  bool // Simple boolean check: did we get an error?
    }{
        {"Valid file", "config.json", false},
        {"File not found", "missing.json", true},
        {"Bad permissions", "root_only.json", true},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            _, err := ParseConfig(tt.filename)
            // If we expected an error (true) and got none (nil)... failure.
            if (err != nil) != tt.wantErr {
                t.Errorf("ParseConfig() error = %v, wantErr %v", err, tt.wantErr)
            }
        })
    }
}
"Here, wantErr is a simple boolean. We don't always need to check the exact error message text—often, just knowing that it failed is enough for the logic check. If you need to check for a specific error type, you would use errors.Is inside the loop."
Ethan closed his eyes, visualizing his messy main.go. "So the test file is basically the specification for the program."
"Yes. It is the documentation that cannot lie. Comments can become outdated. Diagrams can be wrong. But if the test passes, the code works."
She finished her tea. "There is an old Russian proverb: Doveryay, no proveryay."
"Trust, but verify?"
"Exactly. Trust that you wrote good code. But verify it with a table."
Ethan opened a new file named user_test.go. He started typing tests := []struct.
"Eleanor?"
"Yes?"
"This muffin is pretty good."
"It is acceptable," she said, though the corner of her mouth quirked upward. "Now, add a test case for a username with emojis. I suspect your validator will fail."
The testing Package: Go's built-in framework. No external libraries required.
File Naming: Test files must end in _test.go (e.g., user_test.go). They are ignored by the compiler when building the regular application, but picked up by go test.
Test Functions: Must start with Test and take a single argument: t *testing.T.
Table-Driven Tests: The idiomatic Go way to test.
Define a slice of anonymous structs, each holding an input, an expected output, and a name.
Loop over the slice with range, running the same assertion logic for every entry.
Error vs. Fatal:
t.Errorf: Records a failure but continues running the test function. Preferred for tables, so you can see multiple failures.
t.Fatalf: Records a failure and stops the test function immediately. Use only when the test cannot proceed (e.g., setup failed).
Subtests (t.Run): Allows you to label each iteration of the loop. If one case fails, go test reports the specific name of the failed case.
Running Tests:
go test runs all tests in the package.
go test -v gives verbose output (shows every subtest).
go test -run TestName runs only a specific test function.
Mental Model: Tests are not a chore; they are a Table of Truth. They separate the data (test cases) from the execution logic (the harness).
Next chapter: Interfaces in Practice. What happens when your validator needs different rules for admins versus regular users? Ethan learns that "accept interfaces, return structs" is the key to flexible design.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.