
How to create a "convention" plugin for your multi-module Android app

2026-01-26 07:49:29

One thing that trips me up all the time is how to common-ize my build files in a multi-module Android app. Every time I try to learn it, I give up because of overloaded terms, potential footguns, and possible build slowdowns. IMO this should be a lot easier, so I usually end up just duplicating code, or I drop some sort of subprojects {} block into my root build.gradle to apply something to all of my modules instead (roughly the sketch below).
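
For reference, that root-build-file workaround looks roughly like this as a Kotlin DSL sketch (my own example, assuming the Android Gradle plugin is already on the root classpath, e.g. declared in the root plugins block with apply false):

// root build.gradle.kts: the quick-and-dirty alternative to a convention plugin
subprojects {
    // configure every module that applies the Android library plugin
    pluginManager.withPlugin("com.android.library") {
        extensions.configure<com.android.build.api.dsl.LibraryExtension> {
            compileSdk = 36
        }
    }
}

It works, but it hides module configuration in the root build file, which is part of what convention plugins are meant to clean up.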

I'm trying to learn once more with a very simple case where I have:

  • Android app module
  • Android library lib1 module
  • Android library lib2 module

And I want to extract the common build configuration from the Android libraries (lib1 and lib2).

Some general notes:

  • According to https://docs.gradle.org/current/userguide/best_practices_structuring_builds.html#favor_composite_builds, buildSrc isn't recommended, so I should go down the path of a convention plugin for sharing build logic
  • "Convention plugin" is a loaded term: you can have convention plugins in both buildSrc and build-logic. Similarly, you can write your convention plugins as "precompiled script plugins" (.kts) or as regular "binary plugins" (.kt); see the sketch after this list for the precompiled-script flavor
  • https://github.com/autonomousapps/gradle-glossary is a good resource to brush up on gradle terms
  • NowInAndroid saved ~12s in some cases by removing precompiled script plugins
  • If you want the fastest possible performance, you want to publish your convention plugins (annoying for a "typical" Android app) (see here)
  • If you see kotlin-dsl in your build, you should try to eliminate it to save some speed
  • Read https://mbonnin.net/2025-07-10_the_case_for_kgp/
  • Many definitions of a "convention plugin"
    1. Convention plugins are just regular plugins
    2. A "convention plugin" is a plugin that only your team uses
    3. A "convention plugin" is a plugin that you share within your build; in that sense every plugin could be called a convention plugin, but typically "convention plugins" are understood to live in your repo
  • testing
  • id("java-gradle-plugin") and java-gradle-plugin are interchangeable. Same with maven-publish. See here
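
To make the "precompiled script plugin" flavor concrete, here is a minimal sketch (not what this post ends up using): the file name becomes the plugin id, so this file would provide id("libtest.android.library"). It assumes AGP is on the build-logic classpath and that the kotlin-dsl plugin is applied there, which is exactly the kotlin-dsl cost mentioned above.

// build-logic/convention/src/main/kotlin/libtest.android.library.gradle.kts (sketch)
plugins {
    id("com.android.library")
}

// the same configuration this post later puts in the binary plugin
extensions.configure<com.android.build.api.dsl.LibraryExtension> {
    compileSdk = 36
    defaultConfig {
        minSdk = 27
    }
}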

This was like 90% put together with help from Martin Bonnin, but I had to write it down so I don't forget it.

Conversion

So let's just pretend we did File > New Project, then added two new Android library modules (lib1 and lib2). By default we'll have duplicate code in the two lib modules (this is the default code the Android Studio new-module wizard generates as of January 2026):

plugins {
  alias(libs.plugins.android.library)
}

android {
  namespace = "com.cidle.lib1"
  compileSdk {
    version = release(36)
  }

  defaultConfig {
    minSdk = 27

    testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
    consumerProguardFiles("consumer-rules.pro")
  }

  buildTypes {
    release {
      isMinifyEnabled = false
      proguardFiles(getDefaultProguardFile("proguard-android-optimize.txt"), "proguard-rules.pro")
    }
  }
  compileOptions {
    sourceCompatibility = JavaVersion.VERSION_11
    targetCompatibility = JavaVersion.VERSION_11
  }
}

dependencies {
  implementation(libs.androidx.core.ktx)
  implementation(libs.androidx.appcompat)
  implementation(libs.material)
  testImplementation(libs.junit)
  androidTestImplementation(libs.androidx.junit)
  androidTestImplementation(libs.androidx.espresso.core)
}

Steps

  1. Create build-logic directory
  2. Add settings.gradle.kts in build-logic and fill it with
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
    }
    versionCatalogs {
        create("libs") {
            from(files("../gradle/libs.versions.toml"))
        }
    }
}

rootProject.name = "build-logic"
include(":convention")
  3. Add a convention dir inside of the build-logic dir
  4. Inside of this new convention dir, create a build.gradle.kts:
plugins {
    `kotlin-dsl`
}

java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(17))
    }
}

dependencies {
    compileOnly(libs.android.gradlePlugin)
}

gradlePlugin {
    plugins {
        register("androidLibrary") {
            id = "libtest.android.library"
            implementationClass = "AndroidLibraryConventionPlugin"
        }
    }
}

TODO: Investigate if we can remove kotlin-dsl
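
Purely as an untested sketch of what that could look like (keep only the binary .kt plugin, swap kotlin-dsl for the plain Kotlin JVM plugin plus java-gradle-plugin, which I believe is the direction the KGP article linked above points in; the Kotlin plugin version is a placeholder):

// build-logic/convention/build.gradle.kts without kotlin-dsl (untested sketch)
plugins {
    // java-gradle-plugin still gives us the gradlePlugin { } block below
    id("java-gradle-plugin")
    // plain Kotlin JVM plugin instead of kotlin-dsl; the version here is a placeholder
    id("org.jetbrains.kotlin.jvm") version "2.2.20"
}
// the java { }, dependencies { } and gradlePlugin { } blocks stay the same as above

I haven't verified whether the org.gradle.kotlin.dsl helpers used in the plugin still resolve without kotlin-dsl, hence the TODO stays.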

  5. Then in convention, create a new src > main > kotlin > AndroidLibraryConventionPlugin.kt:
import com.android.build.api.dsl.LibraryExtension
import org.gradle.api.JavaVersion
import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.artifacts.VersionCatalogsExtension
import org.gradle.kotlin.dsl.configure
import org.gradle.kotlin.dsl.dependencies
import org.gradle.kotlin.dsl.getByType

class AndroidLibraryConventionPlugin : Plugin<Project> {
    override fun apply(target: Project) {
        with(target) {
            with(pluginManager) {
                apply("com.android.library")
            }

            extensions.configure<LibraryExtension> {
                compileSdk = 36

                defaultConfig {
                    minSdk = 27
                    testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
                    consumerProguardFiles("consumer-rules.pro")
                }

                buildTypes {
                    release {
                        isMinifyEnabled = false
                        proguardFiles(
                            getDefaultProguardFile("proguard-android-optimize.txt"),
                            "proguard-rules.pro"
                        )
                    }
                }

                compileOptions {
                    sourceCompatibility = JavaVersion.VERSION_11
                    targetCompatibility = JavaVersion.VERSION_11
                }
            }

            val libs = extensions.getByType<VersionCatalogsExtension>().named("libs")

            dependencies {
                add("implementation", libs.findLibrary("androidx-core-ktx").get())
                add("implementation", libs.findLibrary("androidx-appcompat").get())
                add("implementation", libs.findLibrary("material").get())
                add("testImplementation", libs.findLibrary("junit").get())
                add("androidTestImplementation", libs.findLibrary("androidx-junit").get())
                add("androidTestImplementation", libs.findLibrary("androidx-espresso-core").get())
            }
        }
    }
}

TODO: Check to see if there's a better way to use our toml file here. I'm not fond of libs.findLibrary, etc.
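
One option I might try (my own sketch, nothing official; the libs property and lib() helper names are made up): push the findLibrary noise into a small extension file next to the plugin, so the plugin body reads a bit more like a normal dependencies block.

// VersionCatalogHelpers.kt (hypothetical helper in build-logic/convention)
import org.gradle.api.Project
import org.gradle.api.artifacts.MinimalExternalModuleDependency
import org.gradle.api.artifacts.VersionCatalog
import org.gradle.api.artifacts.VersionCatalogsExtension
import org.gradle.api.provider.Provider
import org.gradle.kotlin.dsl.getByType

// the same "libs" catalog that regular build scripts see
val Project.libs: VersionCatalog
    get() = extensions.getByType<VersionCatalogsExtension>().named("libs")

// fail with the alias name instead of a bare exception when a lookup misses
fun VersionCatalog.lib(alias: String): Provider<MinimalExternalModuleDependency> =
    findLibrary(alias).orElseThrow { IllegalStateException("No alias '$alias' in libs.versions.toml") }

Then the plugin's dependencies block could read add("implementation", libs.lib("androidx-core-ktx")) and so on, though it's still string-based underneath.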

  6. Update lib1 and lib2's build.gradle.kts to be:
plugins {
  id("libtest.android.library")
}

android {
  namespace = "com.cidle.lib1"
}

and

plugins {
  id("libtest.android.library")
}

android {
  namespace = "com.cidle.lib2"
}

We basically went down from 37 lines to 6 lines... in 2 modules! So every time we add a new module, we save at least those 30 lines.

  7. In your root settings.gradle.kts, add one line to declare an "included build":
pluginManagement {
  includeBuild("build-logic") // <===== this is the line you add!
  repositories {
    google {
      content {
        includeGroupByRegex("com\\.android.*")
        includeGroupByRegex("com\\.google.*")
        includeGroupByRegex("androidx.*")
      }
    }
    mavenCentral()
    gradlePluginPortal()
  }
}

🐝 Copilot Swarm Orchestrator

2026-01-26 07:45:23

This is a submission for the GitHub Copilot CLI Challenge

Parallel, evidence-verified orchestration of real GitHub Copilot CLI sessions.

What I Built

Copilot Swarm Orchestrator coordinates multiple GitHub Copilot CLI sessions across a dependency-aware plan.

Instead of running Copilot prompts one at a time and manually stitching the results together, this tool:

  • Breaks a goal into dependency-aware steps
  • Runs independent steps in parallel waves
  • Executes each step as a real copilot -p session on its own git branch
  • Captures /share transcripts and verifies results by parsing them for concrete evidence (commands, test output, build output, claims)
  • Merges only verified work back into your branch

Nothing is simulated. No undocumented flags. No Copilot magic tricks.

It exists to make multi-area work like frontend, backend, tests, and integration faster without losing auditability.

Demo

Repository:

https://github.com/moonrunnerkc/copilot-swarm-orchestrator

Quick demo command:

npm start demo todo-app

This runs four Copilot CLI sessions across three parallel waves and prints live, interleaved output so you can see concurrency in action.

Note: the demo is a real end-to-end run and typically takes ~12–18 minutes depending on model latency and install/test time.

Each run produces an auditable trail in the repo (plans/, runs/, proof/) showing:

  • what each agent did (via captured /share transcripts)
  • what evidence was verified (via per-step verification reports)
  • what was merged

Screencast (fresh project interaction):

Screenshots (existing project interaction):

My Experience with GitHub Copilot CLI

This project was built with Copilot CLI, not "wrapped around" it.

I used Copilot CLI as a subprocess for real work, then designed guardrails around it:

  • dependency planning
  • bounded agent scopes
  • transcript-based verification
  • per-step branch isolation

Copilot accelerates implementation. The orchestrator adds structure, coordination, and evidence checks.

The result is a workflow where Copilot can move fast, fail safely, and leave behind proof instead of vibes.

Key Constraints (Intentional)

  • Uses only documented Copilot CLI flags (for example -p, --model, --share)
  • Does not embed or emulate Copilot
  • Does not guarantee correctness; verification is evidence-based (transcript parsing), not semantic understanding
  • All execution is explicit, inspectable, and reversible (work happens on branches before merge)

Why It Matters

Copilot CLI is powerful for a single task. This makes it practical for multi-step work by adding orchestration, parallel execution, and an audit trail that's easy to inspect after the fact.

License: ISC

Built with: TypeScript, Node.js 18+, GitHub Copilot CLI

The Hidden "Cost": Is Your Infrastructure Budget Being Held Hostage by Cold Data?

2026-01-26 07:26:52

Every year, enterprise IT departments pay a silent, multi-million dollar penalty. I call it the "Storage Tax."

It’s the money you spend keeping data that hasn't been touched in three years on the same high-performance, high-cost storage as your most critical production databases. We do it because migration is scary, refactoring is expensive, and "storage is cheap."

But in the cloud, storage isn't just an infrastructure line item; it’s an operational strategy. If you are migrating to AWS using a "disk-for-disk" mentality (EC2 + EBS), you aren't just missing out on cloud benefits; you’re actively overpaying for inefficiency.

The 80/20 Reality

Data analysis across thousands of enterprise arrays reveals a consistent truth: roughly 80% of your data is "cold." It consists of old snapshots, completed project files, and logs that exist only for compliance.

Traditional storage options force you into a corner. You either pay the "Performance Tax" (keeping everything on EBS) or the "Operational Tax" (manually moving files to S3 and breaking application paths).

Why Amazon FSx for NetApp ONTAP is the "Tax Shelter" You Need


Amazon FSx for NetApp ONTAP (or FSx for ONTAP) solves this through built-in intelligent tiering. This isn't just a script that moves files; it is a block-level engine that differentiates between "Hot" (active) and "Cold" (inactive) data at the 4KB level.

The genius of this architecture is that it happens behind the scenes. To your application, the data never moves. There are no broken links and no "File Not Found" errors. But on your monthly bill, that 80% of cold data is suddenly priced at object storage rates (~$0.02/GB) rather than SSD rates (~$0.12/GB or higher).

The Competitor Gap: Why "Good Enough" is Costing You

When organizations look at alternatives, they often miss the technical nuances that drive TCO:

  • EBS (gp3): It’s fast, but it’s "pinned." You pay for the provisioned capacity whether you use it or not. There is no native tiering to S3.

  • Amazon EFS: Fantastic for serverless, but the unit cost for active data is significantly higher than FSx for ONTAP, and it lacks the deduplication and compression engine that further shrinks your footprint.

  • FSx for Windows: Excellent for pure SMB, but lacks the 4KB block-level granularity of FSx for ONTAP tiering, often resulting in larger, more expensive SSD footprints.

The Strategy: Pivot to Intelligent Storage

If your organization is sitting on petabytes of unstructured data, you are likely the biggest victim of the hidden cost of cold data. By moving to FSx for ONTAP, you aren't just migrating; you’re implementing a self-optimizing data lifecycle.

Stop paying for air. Stop paying for "dark data." It’s time to move your data into a system that works as hard as your budget does.

The Real-Time Trap: Why Fresh Data Might Be Slowing Down Your Dashboards

2026-01-26 07:19:53

It is a scenario we’ve seen play out in boardrooms and engineering stand-ups alike:

A frustrated stakeholder approaches the data team with a seemingly simple demand. “The data warehouse is too slow,” they say. “We need to make it faster.”

On the surface, this sounds like a straightforward technical requirement. But data engineers know that “fast” is one of the most dangerously ambiguous terms in data engineering. When a user asks for speed, what are they actually asking for? Are they complaining that a dashboard takes 45 seconds to load, or are they frustrated because the report they’re looking at doesn’t reflect a sale that happened ten minutes ago?

This ambiguity is a primary source of friction between business leaders and engineering teams. To build a system that actually delivers value, we have to stop chasing “speed” as a monolith and start distinguishing between two entirely different concepts: Data Latency and Query Latency.

The Freshness Factor: Understanding Data Latency

Data latency is the time lag between an event occurring in a source system and that data becoming available for analysis. It is the definitive measure of the “lag” in your ingestion pipeline.

First, we need to understand the process that data must go through before it reaches the report dashboard. Data cannot teleport; it must move through a specific sequence of steps that each introduce delay:

  1. Extraction: How often do we pull from the source?
  2. Transmission: The time required to move data across the network.
  3. Staging: Landing data in a buffer to avoid overloading operational databases.
  4. Transformation and Loading: Cleaning, formatting, and applying business logic.

Consider the classic “9 AM vs. 2 AM” problem:

If a transaction occurs at 9:00 AM, but your pipeline is designed as a daily batch job that finishes at 2:00 AM the following morning, that data has a latency of 17 hours.

Data latency answers the question:

“How old is the data I’m looking at right now?”

In this scenario, the system isn’t “broken”—it is functioning exactly as designed. However, if the business needs to make real-time decisions, that 17-hour delay represents an architectural failure, no matter how quickly the final report might load.

Responsiveness and the User Experience: Decoding Query Latency

Query latency is the delay a user experiences between clicking “Run” and seeing results. While data latency is about the age of the information, query latency is about the responsiveness of the computation.

From an engineering perspective, query latency is driven by several technical levers:

• Indexing and physical data organization.

• Clustering strategies to optimize data pruning.

• Hardware resources (CPU and Memory).

• Caching layers and query optimization.

Query latency answers the question: “How long do I have to stare at a loading spinner before I see results?”

For the end user, perception is reality. They often conflate these two types of latency; they may label a system “slow” because of a loading spinner, even if the data itself is only seconds old. Conversely, they may praise a “fast” system that loads instantly, blissfully unaware that the data they are making decisions on is 24 hours out of date.

The Zero-Sum Problem: Why You Can’t Have It All

Here is the hard truth that many vendors won’t tell you: optimizing for one type of latency often degrades the other. These are not just technical hurdles; they are fundamental design trade-offs.

The Freshness Trade-off:

If you optimize for near real-time data latency by streaming records into the warehouse as they happen, the system has no time to pre-calculate or reorganize that data. Consequently, when a user runs a query, the engine must scan massive volumes of raw or semi-processed data on the fly. You get fresh data, but you pay for it with higher query latency.

The Responsiveness Trade-off:

To ensure a dashboard is “snappy” and loads instantly, engineers use optimized summary tables and pre-calculated aggregates. But performing these transformations takes significant time and compute power. To do this efficiently, we typically batch the data. This makes the dashboard load without a spinner, but it increases the data latency.

Architecture is never about perfection; it is about choosing your trade-offs with intent.

The Exponential Cost of the Last Second

Latency reduction follows a steep curve of diminishing returns. Achieving “speed” does not come with a linear price tag; it is exponential.

Moving from a 24-hour data latency to a 1-hour latency might double your costs. However, moving from 1 hour to 1 second can increase your costs by 10x or 20x.

This massive price jump isn’t arbitrary. To hit sub-second latency, you aren’t just buying a bigger server; you are investing in significantly more infrastructure, higher levels of redundancy, and immense operational complexity.

Lower latency is not free. You are always trading cost and complexity for speed.

Architecture is About Strategy, Not Just Speed

There is no such thing as the “fastest” data warehouse. There is only a system that has been optimized for a specific business use case. A system built for high-frequency trading is an entirely different beast than one built for monthly financial auditing.

When a stakeholder demands that the system be “faster,” the most senior move you can make is to stop and ask: “Fast in what sense?”

• Do you need fresh data to make immediate, real-time decisions?

• Or do you need snappy, responsive dashboards that allow for fluid exploration?

Once you clarify that distinction, the engineering path becomes clear. You move away from “fixing speed” and toward aligning your architecture with actual business needs.

Balancing freshness against responsiveness—and both against cost—is the core of any modern data strategy.

C# Console menus with Actions

2026-01-26 07:14:17

Introduction

The focus of this article is to provide an easy-to-use menu system for C# console projects.

The NuGet package Spectre.Console is required to construct the menu, and Actions are used to execute the selected menu items.

Benefits of using a menu

A developer can easily test different operations, whether to learn something new, quickly try out code slated for a project, or provide options for a dotnet tool.

Also, many online classes are organized into chapters/sections. Consider breaking them up into menu items.

Base parts

A class that represents a menu item: the text to display and the code to execute via an Action, with or without parameters.

public class MenuItem
{

    public int Id { get; set; }
    public required string Text { get; set; }
    public required Action Action { get; set; }
    public override string ToString() => Text;
}

A class that builds the menu using the class above, and another class that has methods to execute when a menu item is selected.

In the sample projects provided

  • The MenuOperations class is responsible for building the menu
  • The Operations class contains methods to execute using an Action from the menu selection
    • Each method displays the method name and then pauses execution; pressing ENTER returns to the menu.

Entry point

A while statement is used to present a menu, with one menu option to exit the application.

Shows a menu

Example 1 uses an Action with no parameters

internal partial class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            Console.Clear();
            var menuItem = AnsiConsole.Prompt(MenuOperations.SelectionPrompt());
            menuItem.Action();
        }
    }
}

Example 2 uses an Action with a parameter (Action<int>): the menuItem.Id property references a primary key in a database table, and the operation, in this case, saves an image to disk.

shows a menu with text read from a database

internal partial class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            Console.Clear();
            var menuItem = AnsiConsole.Prompt(MenuOperations.SelectionPrompt());
            menuItem.Action(menuItem.Id);
        }
    }
}

dotnet tool example to read column descriptions for tables in a database.

dotnet tool sample

Implementation

Using one of the provided sample projects, create a new console project.

  • Add NuGet package Spectre.Console to the project
  • Add folders, Models and Classes
  • Add MenuItem under the Models folder
  • Add an empty MenuOperations class under the Classes folder
  • Add the code from one of the Main methods shown above to display the menu
  • Export the project as a new project template for a starter project

Tips

  • When something is unclear, set breakpoints and examine the code in the local window
  • Consider putting the code executed from the Operations class into separate class projects, as in this class project (Customer record used here).

Source code

Source code 1 Source code 2 Source code 3

Zig vs Go: errors

2026-01-26 07:05:31

(you can find the previous post on the same topic here)

Errors are values

As in Go, errors in Zig are handled as values. In Go we can indicate that a function returns multiple values, one of which can be an error; in Zig we instead declare a sort of union called an error union type: the ! symbol preceding a function's return type indicates that we might get an error instead of a value. We can also give a precise indication of the possible errors, defined like an enum, by writing that error set before the ! in the return type.

// Go
func canFail(num int) (int, error) {
    if num > 10 {
        return num, errors.New("input is greater than max (10)") 
    }
    return num, nil
}
// Zig
fn canFail(num: i8) !i8 {
    if (num > 10) {
        return error.InputIsGreaterThanMax;
    }
    return num;
}

The substantial differences are that Zig returns a value from an error set (declared inline in this example) and that this error is not coupled with the result but is mutually exclusive with it; this is easier to see by looking at how the call is handled.

// Go
result, err := canFail(val)
if err != nil {
    fmt.Printf("An error occurs: %v\n", err)
    os.Exit(1)
}
// handle result value
// Zig
const result = canFail(num) catch |err| {
    std.debug.print("An error occurs: {}\n", .{err});
    return err;
};
// handle result value

In Zig it is also possible to use a more concise form when the error should not be handled locally but only propagated to the caller:

const result = try canFail(num);
// handle result value

try in this case is the compressed version of catch |err| return err.

Error declaration

In Go, an error is any type that implements the Error() string method, and to create custom errors we use approaches like these:

// Go
var (
    ErrMissingArgument     = errors.New("missing argument")
    ErrInvalidArgument     = errors.New("invalid argument")
)

type MaxValueValidation struct {
    Max     int
    Current int
}

func (v *MaxValueValidation) Error() string {
    return fmt.Sprintf("input %d is greater than max %d", v.Current, v.Max)
}

In Zig, however, an error is just a member of an error set, similar to an enum; error sets can be merged with ||, and the resulting set can be used to declare which errors a function may return.

const InputError = error{
    WrongInput,
    MissingInput,
};

const ValidationError = error{
    InputGreaterThanMax,
};

const FailureError = InputError || ValidationError;

Here are two complete examples of error handling:

// Go

package main

import (
    "errors"
    "fmt"
    "os"
    "strconv"
)

var (
    ErrMissingArgument = errors.New("missing argument")
    ErrInvalidArgument = errors.New("invalid argument")
)

type MaxValueValidation struct {
    Max     int
    Current int
}

func (v *MaxValueValidation) Error() string {
    return fmt.Sprintf("input %d is greater than max %d", v.Current, v.Max)
}

func main() {
    args := os.Args
    if len(args) < 2 {
        fmt.Printf("An error occurs: %v\n", ErrMissingArgument)
        os.Exit(1)
    }

    val, err := strconv.Atoi(os.Args[1])
    if err != nil {
        fmt.Printf("An error occurs: %v\n", ErrInvalidArgument)
        os.Exit(1)
    }

    result, err := canFail(val)
    var validationError *MaxValueValidation
    if errors.As(err, &validationError) {
        fmt.Printf("Check input: %v\n", validationError)
        os.Exit(1)
    } else if err != nil {
        fmt.Printf("An error occurs: %s\n", err.Error())
        os.Exit(1)
    } else {
        fmt.Printf("The result is %d\n", result)
    }
}

func canFail(num int) (int, error) {
    if num > 10 {
        return num, &MaxValueValidation{Max: 10, Current: num}
    }
    return num, nil
}

// Zig
const std = @import("std");

const InputError = error{
    WrongInput,
    MissingInput,
};

const ValidationError = error{
    InputGreaterThanMax,
};

const FailureError = InputError || ValidationError;

pub fn main() !void {
    var args = std.process.args();
    _ = args.skip();

    const valueArg = args.next() orelse {
        std.debug.print("Error occurs: missing argument.\n", .{});
        return FailureError.MissingInput;
    };

    const num = std.fmt.parseInt(i8, valueArg, 10) catch |err| {
        std.debug.print("Error occurs: wrong input {}\n", .{err});
        return FailureError.WrongInput;
    };

    const result = try canFail(num);
    std.debug.print("The result is: {d}", .{result});
}

fn canFail(num: i8) FailureError!i8 {
    if (num > 10) {
        std.debug.print("input {d} is greater than max {d}\n", .{ num, 10 });
        return ValidationError.InputGreaterThanMax;
    }
    return num;
}