The Practical Developer

A constructive and inclusive social network for software developers.

Embedded Trial Keys in .NET: Ship Evaluation Versions Without a Server

2026-02-19 23:27:02

You want to ship a trial version of your app. Here's the easiest way: embed a trial key directly in the code.

No server. No activation. Just works.

Why Embedded Trial Keys?

Traditional trial approach:

  1. User downloads app
  2. User registers on your website
  3. You email a trial key
  4. User enters key in app
  5. App validates online

Result: 30-40% of users drop off before trying your app.

Embedded trial approach:

  1. User downloads app
  2. App runs immediately (trial key embedded)
  3. User evaluates

Result: 100% trial start rate.

Step 1: Create a Trial Key

In QLM Management Console:

1. Manage Keys → Create
2. Product: [Your Product]
3. License Model: Trial
4. Expiry Date: 30 days from today
5. Number of Licenses: 1
6. Click OK

Copy the generated key (example: A5B3C-D8E2F-G1H4I-J7K9L-M3N6P).

Step 2: Embed the Key

using System;
using QLM.LicenseLib;

class Program
{
    // Embedded trial key - same for all users
    private const string TRIAL_KEY = "A5B3C-D8E2F-G1H4I-J7K9L-M3N6P";

    static void Main(string[] args)
    {
        if (ValidateLicense())
        {
            Console.WriteLine("Trial active - running app");
            RunApp();
        }
        else
        {
            Console.WriteLine("Trial expired or invalid");
        }
    }

    static bool ValidateLicense()
    {
        var lv = new LicenseValidator("settings.xml");

        bool needsActivation = false;
        string errorMsg = string.Empty;

        // First check if license exists locally
        bool isValid = lv.ValidateLicenseAtStartup(
            ELicenseBinding.ComputerName,
            ref needsActivation,
            ref errorMsg
        );

        // If no license found, use embedded trial key
        if (!isValid && string.IsNullOrEmpty(lv.ActivationKey))
        {
            lv.ActivationKey = TRIAL_KEY;
            lv.ComputerKey = string.Empty;

            isValid = lv.ValidateLicenseAtStartup(
                ELicenseBinding.ComputerName,
                ref needsActivation,
                ref errorMsg
            );

            if (isValid)
            {
                // Store trial key locally for next run
                lv.QlmLicenseObject.StoreKeys(lv.ActivationKey, lv.ComputerKey);
                Console.WriteLine("Trial started - 30 days remaining");
            }
        }

        return isValid;
    }

    static void RunApp()
    {
        // Your application code here
    }
}

How It Works

  1. First run: App checks for local license → none found
  2. App uses embedded key: Validates it against expiry date
  3. Trial starts: Key stored locally, expiry date locked
  4. Subsequent runs: App uses stored license
  5. After 30 days: License expires, app stops working

Important: The trial period starts on first use, not when you create the key. The expiry date is calculated from first validation.

Upgrade Path

When the user purchases, they get a permanent key:

// User enters purchased key via QLM License Wizard
// Wizard replaces trial key with permanent key
// App validates and works indefinitely

Show Days Remaining

static void ShowTrialStatus(LicenseValidator lv)
{
    if (lv.QlmLicenseObject.LicenseModel == ELicenseModel.trial)
    {
        DateTime expiryDate = lv.QlmLicenseObject.ExpiryDate;
        int daysLeft = (expiryDate - DateTime.Now).Days;

        if (daysLeft > 0)
        {
            Console.WriteLine($"Trial: {daysLeft} days remaining");
        }
        else
        {
            Console.WriteLine("Trial expired");
        }
    }
}

Alternative: Different Key Per User

If you want to track each download:

// Generate trial key server-side when user downloads
// Embed unique key per user
// Track which keys are activated

But this requires a server call. The embedded approach is simpler.

Pros & Cons

Pros:

  • ✅ Zero friction — app runs immediately
  • ✅ No server required for trial
  • ✅ Works offline
  • ✅ Same binary for trial & paid

Cons:

  • ❌ Can't track who tries your app
  • ❌ Key can be extracted (not a real issue for trials)
  • ❌ One key for all users (acceptable for trials)

When to Use This

Use embedded keys when:

  • Frictionless trial is priority #1
  • You don't need to track trial users
  • You want offline trial support

Don't use when:

  • You need to track every download
  • You want unique keys per user
  • You need trial usage analytics

Supported Platforms

  • Windows: .NET Framework 2.x / 4.x, .NET 6/7/8/9/10
  • Cross-platform: .NET 6/7/8/9/10 (macOS, Linux)
  • Mobile: Android, iOS (.NET MAUI, Xamarin)

Quick License Manager

This uses Quick License Manager (QLM).

Pricing (per developer):

  • QLM Express: $200/year
  • QLM Professional: $699/year
  • QLM Enterprise: $999/year

Download 30-day trial

How do you handle trials in your apps? Share in the comments! 👇

Quick License Manager by Soraco Technologies — https://soraco.co

GoDaddy Reclassifies Users: Domain Consumer Protection Rights at Risk

2026-02-19 23:21:28

In a surprising move that has sent shockwaves through the domain industry, GoDaddy published updated Terms of Service in February 2026 that fundamentally reclassify every customer as a "business customer"—regardless of whether they are registering a domain for a multinational corporation or a personal photo album. This dramatic shift in domain consumer protection rights represents one of the most significant policy changes in registrar history, potentially stripping millions of users of the legal safeguards they have long taken for granted.


Understanding GoDaddy's New Business Customer Classification

GoDaddy's revised Terms of Service, published this month, contains a pivotal modification that affects all 21+ million customers. The TOS now explicitly states that GoDaddy services "can only be used by businesses, not consumers"—a sweeping declaration that eliminates the traditional distinction between commercial and personal domain registration.

The company defines "business customer" with remarkably broad language that includes:

  • Any person or entity acting in a business or professional capacity
  • Commercial entities, partnerships, companies, organizations, sole proprietors, self-employed individuals, and independent contractors
  • Individuals using services for professional purposes—including personal branding, online presence, reputation management, career advancement, or professional networking
  • Anyone acquiring services to protect, secure, or manage their personal name, identity, brand, or online reputation for business purposes

Perhaps most tellingly, the terms explicitly state: "Our Services are not intended for private, personal or household use." This means that using GoDaddy to create something as simple as an online photo album for family or a personal travel blog for friends now technically falls outside their defined scope of services.

Why Domain Consumer Protection Rights Matter More Than Ever

The implications of this policy change extend far beyond mere semantics. By declaring all customers as business entities, GoDaddy effectively sidesteps numerous consumer protection statutes that have traditionally shielded individual registrants from unfair business practices. Some Federal Trade Commission (FTC) rules, for instance, apply exclusively to consumer transactions—not business-to-business relationships.

According to the Federal Trade Commission's 2024 Consumer Sentinel Network report, consumers filed over 2.6 million fraud reports totaling more than $10 billion in losses. Domain-related scams and unfair registrar practices represent a significant portion of these complaints. Consumer protection laws serve as a critical safety net, providing recourse when registrars engage in predatory pricing, deceptive marketing, or unauthorized charges.

When you register a domain, you are entrusting the registrar with sensitive personal information, payment details, and control over your digital identity. Losing domain consumer protection rights means losing a layer of legal protection that holds these companies accountable for mishandling that trust.

How Consumer Rights Protections Shield Domain Owners

Consumer protection laws serve multiple critical functions in the domain registration ecosystem. First, they ensure transparency in pricing and billing practices. Without these protections, registrars could implement hidden fees, automatic renewals without clear notification, or price hikes during renewal periods that lock customers into paying more for domains they have already built brands around.

Second, domain consumer protection rights provide mechanisms for dispute resolution. When a registrar makes an error or engages in questionable practices, consumers have access to regulatory bodies like the FTC, state attorneys general offices, and consumer protection agencies. Business customers, by contrast, often must rely solely on expensive litigation or arbitration processes.

Third, consumer protections mandate certain standards for data handling and privacy. Registrars must comply with regulations regarding how they store, use, and share customer information. For individuals concerned about their personal data being harvested or sold, these protections are essential safeguards.

The Hidden Costs of Business Customer Status

The updated TOS also grants GoDaddy expanded authority to verify customer identity and business status. While this may seem like a reasonable security measure, it opens the door to increased surveillance and data collection requirements that privacy-conscious users may find concerning.

Additionally, the terms now make explicit that customers bear full responsibility for ensuring that any auto-generated content produced by GoDaddy's AI tools does not infringe on third-party rights. This shifts liability squarely onto users while the company profits from providing these AI services—a concerning trend in the industry.

How to Protect Your Domain Consumer Protection Rights

If you are concerned about maintaining your consumer protections while registering domains, consider these proactive steps:

Choose Privacy-Focused Registrars

Not all registrars have abandoned consumer protections. When you register a domain, prioritize companies that respect individual privacy and maintain clear consumer-friendly policies. Look for registrars that explicitly cater to personal users and do not require business classification.

Enable Comprehensive Privacy Protection

Domain privacy is not just about hiding your contact information—it is about controlling your digital footprint. Choose a registrar that offers robust WHOIS privacy without requiring excessive personal documentation.

Document Everything

Screenshots, email confirmations, and chat logs can prove invaluable if disputes arise. Save copies of terms of service at the time of purchase—they may change without your explicit consent.

Industry-Wide Implications

GoDaddy's policy shift raises important questions about the future of domain consumer protection rights across the industry. As the world's largest registrar by market share, GoDaddy's decisions often influence competitor behavior. Will other major registrars follow suit?

According to GigaLaw Domain Name Dispute Digest, UDRP decisions fell 0.9% in 2025 to 8,476 cases, suggesting either improved compliance or reduced access to dispute resolution mechanisms.

The Bottom Line

GoDaddy's reclassification of all customers as business entities represents more than a terms of service update—it is a fundamental restructuring of the relationship between registrars and registrants.

Your domain name is often the foundation of your digital identity. Understanding your domain consumer protection rights has never been more critical.

Originally published at MonstaDomains Blog

Data vs. Analytics vs. Visual Analytics: Turning Information into Decisions That Scale

2026-02-19 23:20:03

Introduction: Why Data Alone Is No Longer Enough


Most organizations today are drowning in data — but still starving for clarity. Spreadsheets, dashboards, and reports are everywhere, yet leadership teams continue to ask the same question: “So what should we do?”

This gap exists because data, analytics, and visual analytics are often treated as interchangeable terms when, in reality, they play very different roles in decision-making. At VisualizExpert, our data analytics services are built on the belief that insights only matter when they lead to confident action.

Understanding the difference between these three layers is the foundation for building business intelligence systems that don’t just inform — but guide.

Data: The Raw Material (Not the Deliverable)

Data is the foundation of everything — but on its own, it has no direction. Tables, logs, metrics, and event streams are simply raw inputs. They answer what happened, but not why or what to do next.

Many organizations make a critical mistake here: they treat access to data as success. But data without structure, context, and intent increases cognitive load and slows decisions. This is why modern business intelligence consulting focuses less on collecting data and more on shaping it for outcomes.

At VisualizExpert, we often remind clients: a data source is not a solution. Until data is modeled, validated, and aligned to business goals, it cannot support real decisions.

Analytics: Making Sense of the Noise

Analytics is where data begins to create value. Through calculations, aggregations, comparisons, and statistical methods, analytics transforms raw data into insights.

This is where trends emerge, anomalies are detected, and performance is measured. Solutions like Power BI Dashboard Development and custom analytics solutions allow organizations to move beyond static tables into meaningful analysis.

However, analytics still has a limitation:
It explains what is happening — but often stops short of influencing what should happen next.

Many analytics initiatives fail not because the analysis is wrong, but because the insights are not communicated in a way that decision-makers can quickly understand and trust.

Visual Analytics: Where Insight Becomes Action

Visual analytics is the convergence of strategy, data, design, and engineering. It doesn’t just show insights — it shapes how humans perceive, interpret, and act on them.

This is where data visualization services and interactive business dashboards become critical. Visual analytics reduces time to insight, increases accuracy, and dramatically improves adoption across teams.

At VisualizExpert, visual analytics means:

  • Designing dashboards around decisions, not metrics
  • Using visual hierarchy to guide attention
  • Embedding business logic directly into reports
  • Enabling users to ask “the next question” without friction

This is the difference between looking at data and using data.

Why BI Projects Fail Without Visual Analytics

Many BI initiatives technically succeed — but practically fail. Why? Because they stop at analytics.

A dashboard packed with charts but lacking narrative, prioritization, or context creates confusion instead of clarity. This is why BI dashboard solutions must be designed for how executives actually think and decide.

Without visual analytics:

  • Insights are ignored
  • Dashboards become shelfware
  • Adoption drops
  • ROI disappears

This is where executive analytics dashboards and decision-ready data visualization change the game — by aligning insights with real-world decisions.

How VisualizExpert Bridges the Gap

VisualizExpert operates at the intersection of analytics and decision intelligence. Our approach combines technical depth with business strategy to deliver Power BI consulting services that scale across teams and industries.

What Makes Our Approach Different

  • Strategy-first design: Every dashboard starts with a decision framework
  • Strong data foundations: Including Power BI data modeling services and optimized schemas
  • Performance-focused engineering: Ensuring speed, reliability, and scalability
  • Human-centered visualization: Dashboards designed for clarity, not decoration

This is why clients trust us for analytics and reporting consulting that delivers measurable outcomes — not just reports.

From Static Reports to Living Systems

Traditional reports answer yesterday’s questions. Visual analytics systems evolve with the business.

Using tools like Power BI Embedded Analytics and interactive KPI dashboards, we help organizations:

  • Monitor performance in real time
  • Identify risks before they escalate
  • Align teams around shared metrics
  • Build trust in data across leadership

The result is not just better reporting — but better decision-making at every level.

The Role of Engineering in Visual Analytics

Behind every great dashboard is invisible engineering. Poor performance, broken filters, and inconsistent metrics destroy trust instantly.

That’s why VisualizExpert invests heavily in:

  • Power BI DirectQuery Performance Optimization
  • Secure access using Power BI Row Level Security
  • Scalable architectures through Power BI Managed Services

Visual analytics only works when the underlying systems are reliable, fast, and governed.

Beyond BI: Building a Decision Culture

The future of analytics isn’t more AI or more dashboards — it’s better decisions. Organizations that win are those that treat analytics as a capability, not a project.

Through Visual Analytics Consulting, VisualizExpert helps teams:

  • Define decision ownership
  • Align metrics with strategy
  • Reduce analysis paralysis
  • Build confidence in insights

This cultural shift is what turns analytics investments into long-term competitive advantage.

Power BI Dashboard Development That Drives Decisions

A well-built Power BI dashboard should not ask users to interpret data — it should guide them. At VisualizExpert, our Power BI dashboards are structured around business questions, not visuals.

We focus on:

  • Clear KPI hierarchies
  • Scenario-based views
  • Contextual benchmarks
  • Executive-level summaries

This ensures dashboards are used daily — not reviewed once a month.

Why Visual Analytics Is the Real Competitive Advantage

Data is everywhere. Analytics is expected. Visual analytics is rare — and that’s why it matters.

Organizations that master visual analytics:

  • Decide faster
  • Align teams better
  • Adapt more quickly to change
  • Trust their data

This is the real ROI of modern BI.

Final Thoughts: Data Is Potential. Visual Analytics Is Power.

Data without analytics is noise.
Analytics without visualization is friction.
Visual analytics without a strategy is decoration.

At VisualizExpert, we bring all three together to help organizations move from information to impact — confidently, consistently, and at scale.

If your dashboards explain the past but don’t guide the future, it’s time to rethink how you use data.

How EC2 + EBS Actually Bills: A Breakdown for Engineers

2026-02-19 23:18:29

The "Stopped Instance" Trap

Every AWS engineer has done it. You spin up an EC2 instance for a quick test, run your script, and then "Stop" the instance thinking you've stopped the bleeding.

You haven't.

While the Compute meter has stopped spinning, the Storage meter is still running at full speed. And if you're using high-performance storage or have elastic IPs attached, you might be bleeding cash without realizing it.

In this post, I'm going to break down exactly how an EC2 instance is billed, component by component, so you can stop leaking money on "zombie" resources.

1. The Compute Layer (EC2)

This is the part everyone understands. When the instance is Running, you pay. When it's Stopped or Terminated, you don't.

  • On-Demand: You pay by the second (minimum 60 seconds).
  • Spot: You pay the market price, which fluctuates.
  • Savings Plans/RIs: You commit to usage in exchange for a discount.

The Gotcha: If you use a "Hibernate" stop instead of a regular stop, you are still paying for the RAM state stored on disk (more on that below).

2. The Storage Layer (EBS) - The Silent Killer

This is where 90% of "phantom costs" come from.

When you launch an EC2 instance, it almost always comes with an EBS volume (the root drive). This volume exists independently of the instance.

  • Scenario: You launch an m5.large with a 100GB gp3 volume.
  • Action: You stop the instance.
  • Result: You stop paying for the m5.large ($0.096/hr), but you continue paying for the 100GB gp3 volume ($0.08/GB/month).

If you have 100 "stopped" dev instances sitting around, that's 10TB of storage you're paying for every month. That's ~$800/month for literally nothing.

The "IOPS" Trap

With gp3 and io2 volumes, you can provision extra IOPS and Throughput. These are billed separately from the storage capacity.

  • Storage: $0.08/GB-month
  • IOPS: $0.005/provisioned IOPS-month (above 3,000)
  • Throughput: $0.04/provisioned MB/s-month (above 125)

If you provision 10,000 IOPS for a database test and then stop the instance, you are still paying for those 10,000 IOPS even though the volume is doing zero reads/writes. At the rates above, that is 7,000 billable IOPS × $0.005 ≈ $35/month for an idle volume.

3. The Network Layer (Data Transfer & IPs)

Elastic IPs (EIPs)

This is a classic AWS "tax."

  • Attached to a running instance: historically free, but since February 2024 AWS bills every public IPv4 address at $0.005/hour, attached or not.
  • Attached to a stopped instance: $0.005/hour.
  • Unattached: $0.005/hour.

If you stop an instance but keep the static IP, AWS still charges you because you are "hogging" a scarce IPv4 address. At $0.005/hour, that is roughly $3.65 per month per address.

Data Transfer

  • Inbound: Free.
  • Outbound (Internet): Expensive (~$0.09/GB).
  • Cross-AZ: If your EC2 instance talks to an RDS database in a different Availability Zone, you pay $0.01/GB in each direction.

4. The "Zombie" Snapshot

When you terminate an instance, the root volume usually deletes with it (if "Delete on Termination" is checked). But any manual snapshots you took of that volume remain.

I've seen accounts with terabytes of snapshots from 2018 for instances that haven't existed in 5 years. At $0.05/GB-month, that adds up fast.

The Solution: A "Clean" Shutdown Workflow

Don't just click "Stop." If you're done with an instance for the day (or week), follow this checklist:

  1. Check for EIPs: Release them if you don't need the static IP.
  2. Snapshot & Delete: If you need the data but not the compute, take a snapshot of the volume and delete the volume itself. Snapshots are cheaper ($0.05/GB) than active volumes ($0.08/GB).
  3. Tagging: Tag everything with Owner and ExpiryDate.
  4. Automation: Use a tool (like CloudWise or a simple Lambda) to scan for "Available" volumes (volumes not attached to any instance) and delete them after 7 days.
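
For step 4, a minimal TypeScript sketch of such a scan (the kind of thing a scheduled Lambda could run) might look like the following. It uses the AWS SDK for JavaScript v3; the region, the 7-day threshold, the report-only default, and the function name are illustrative assumptions, and volume age is approximated from CreateTime rather than detach time.

// Sketch: list unattached ("available") EBS volumes older than a cutoff and optionally delete them.
// Assumes @aws-sdk/client-ec2 and credentials from the default provider chain; pagination omitted for brevity.
import { EC2Client, DescribeVolumesCommand, DeleteVolumeCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" }); // region is an assumption
const MAX_AGE_DAYS = 7;

async function reportStaleVolumes(deleteThem = false): Promise<void> {
  // "status = available" means the volume is not attached to any instance
  const { Volumes = [] } = await ec2.send(
    new DescribeVolumesCommand({ Filters: [{ Name: "status", Values: ["available"] }] })
  );

  const cutoff = Date.now() - MAX_AGE_DAYS * 24 * 60 * 60 * 1000;

  for (const vol of Volumes) {
    // CreateTime is the volume's creation time, not its detach time, so this only approximates idle age
    if (!vol.CreateTime || vol.CreateTime.getTime() > cutoff) continue;

    const sizeGb = vol.Size ?? 0;
    const monthlyCost = sizeGb * 0.08; // gp3 storage at ~$0.08/GB-month
    console.log(`${vol.VolumeId}: ${sizeGb} GB unattached, ~$${monthlyCost.toFixed(2)}/month`);

    if (deleteThem) {
      await ec2.send(new DeleteVolumeCommand({ VolumeId: vol.VolumeId }));
      console.log(`Deleted ${vol.VolumeId}`);
    }
  }
}

reportStaleVolumes().catch(console.error);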

Summary Checklist

Component            | Billed When Running?    | Billed When Stopped? | Billed When Terminated?
EC2 Compute          | ✅ Yes                  | ❌ No                | ❌ No
EBS Storage          | ✅ Yes                  | ✅ YES               | ❌ No (if deleted)
EBS IOPS/Throughput  | ✅ Yes                  | ✅ YES               | ❌ No (if deleted)
Elastic IP           | ✅ Yes (since Feb 2024) | ✅ YES               | ✅ YES (if not released)
Data Transfer        | ✅ Yes                  | ❌ No                | ❌ No

Stop paying for air. Check your "Volumes" tab today.

I'm Rick, building CloudWise to automate this cleanup for you. I write about AWS cost optimization and DevOps every week.

Designing Error Flows: Exceptions, Success Flags, or Discriminated Unions?

2026-02-19 23:18:17

The examples in this post are available in a demo repository here: https://github.com/liavzi/DesigningErrorFlows.

Introduction

Recently, I needed to integrate with an external API to synchronize data. From past experience, I have learned that failures are not a question of if but when. Network issues, timeouts, validation errors, authentication failures, unexpected payloads, and third-party outages can all cause requests to fail, often at the worst possible moment.

To make the system more resilient and easier to support, I decided to persist every failed request in the database and capture as much context as possible. This included the request payload, the response when available, and as many details as possible about the nature of the error.

Let’s explore three approaches to designing an API call with errors in mind. For simplicity, assume the call can produce only four outcomes: success with data, an authentication error, a validation error, or a general error.

Exception-Oriented Flow

public class ApiServiceExceptionsOriented(HttpClient httpClient)
{
    public async Task<TResponse> Post<TResponse>(string url, object payload)
    {
        string accessToken;
        try 
        {
            accessToken = await GetAccessToken();
        }
        catch (Exception)
        {
            throw new AuthenticationException("Failed to get access token");
        }
        var httpRequest = new HttpRequestMessage(HttpMethod.Post, url);
        httpRequest.Content = JsonContent.Create(payload);
        httpRequest.Headers.Add("Authorization", $"Bearer {accessToken}");
        var httpResponse = await httpClient.SendAsync(httpRequest);

        if (httpResponse.StatusCode == HttpStatusCode.BadRequest) {
            var validationErrorResponse = await httpResponse.Content.ReadFromJsonAsync<ExternalApiValidationErrorResponse>();
            throw new ValidationException(validationErrorResponse);
        }

        if (!httpResponse.IsSuccessStatusCode) {
            var rawResponse = await httpResponse.Content.ReadAsStringAsync();
            throw new GeneralApiException(rawResponse);
        }

        var response = await httpResponse.Content.ReadFromJsonAsync<TResponse>();
        return response;
    }

    public async Task<string> GetAccessToken()
    {
        return "SOME_ACCESS_TOKEN";
    }
}

Here is what an example call might look like:

public async Task<SyncEmployeeResponse> CallApiExample()
{
    var syncEmployeeRequest = new SyncEmployeeRequest();
    try 
    {
        var response = await Post<SyncEmployeeResponse>("https://some.api/sync-employee", syncEmployeeRequest);
        // other logic
        return response;
    }
    catch (AuthenticationException ex)
    {
        // Handle authentication failure
        throw;
    } catch (ValidationException ex)
    {
        // Handle validation failure, access details via ex.ValidationErrorResponse and maybe show them to the user
        throw;
    } catch (GeneralApiException ex)
    {
        // Handle other API failures, maybe log the error details (ex.Message contains the raw API response)
        throw;
    }
}

The main advantage of this approach is its simplicity on the happy path. You call the API, receive a response, and continue with your logic. However, this simplicity is also its primary drawback.

When an error occurs, you must handle each one in its own catch block. This not only clutters the code but also introduces additional challenges, such as variable scope. If you need access to variables declared inside the try block, you are often forced to move their declarations to an outer scope, which reduces readability and weakens encapsulation.

Moreover, the caller has no clear understanding of which errors might be thrown by the API call. Instead, they must dig into the implementation to discover the possible exceptions. This increases the risk of unhandled errors and can easily lead to future bugs, especially when new failure modes are introduced, such as a rate limit error.

Success Flags

public async Task<ApiCallResult<TResponse>> Post<TResponse>(string url, object payload)
{
    string accessToken;
    try 
    {
        accessToken = await GetAccessToken();
    }
    catch (Exception)
    {
        return new ApiCallResult<TResponse> { IsAuthenticationFailure = true };
    }
    var httpRequest = new HttpRequestMessage(HttpMethod.Post, url);
    httpRequest.Content = JsonContent.Create(payload);
    httpRequest.Headers.Add("Authorization", $"Bearer {accessToken}");
    var httpResponse = await httpClient.SendAsync(httpRequest);

    if (httpResponse.StatusCode == HttpStatusCode.BadRequest) {
        var validationErrorResponse = await httpResponse.Content.ReadFromJsonAsync<ExternalApiValidationErrorResponse>();
        return new ApiCallResult<TResponse>
        {
            ValidationErrorResponse = validationErrorResponse,
        };
    }

    if (!httpResponse.IsSuccessStatusCode) {
        var rawResponse = await httpResponse.Content.ReadAsStringAsync();
        return new ApiCallResult<TResponse>
        {
            HasGeneralFailure = true,
            ErrorRawResponse = rawResponse,
        };
    }

    var response = await httpResponse.Content.ReadFromJsonAsync<TResponse>();
    return new ApiCallResult<TResponse>
    {
        Data = response
    };
}

Now, instead of relying on exceptions, we return an ApiCallResult<T> object that explicitly represents the outcome of the operation:


public class ApiCallResult<T>
{
    public T Data { get; set; }
    public ExternalApiValidationErrorResponse ValidationErrorResponse { get; set; }
    public string ErrorRawResponse { get; set; }
    public bool HasGeneralFailure { get; set; }
    public bool IsSuccess => Data != null;
    public bool IsAuthenticationFailure { get; set; }
}

Now the caller can examine the object and decide how to proceed based on the different flags it exposes:

public async Task CallApiExample()
{
    var syncEmployeeRequest = new SyncEmployeeRequest();
    var callResult = await Post<SyncEmployeeResponse>("https://some.api/sync-employee", syncEmployeeRequest);
    switch (callResult)
    {
        case { IsSuccess: true }:
            // Handle success. can access the response data via callResult.Data
            break;
        case { IsAuthenticationFailure: true }:
            // Handle authentication failure
            break;
        case { ValidationErrorResponse: { } }:
            // Handle validation errors
            break;
        case { HasGeneralFailure: true }:
            // Handle general API failure
            break;
    }

}

We improved the readability of the code and encouraged the caller to think about failure scenarios rather than focusing only on the happy path.

However, the caller still needs to understand and examine the ApiCallResult class to determine which fields are populated for each possible outcome. This implicit contract can become harder to maintain as new error types are introduced.

Moreover, if a new error type is added, we must remember to handle it across all call sites. This raises an important design question: can we force the caller to handle new error types instead of relying on memory and code reviews?

Discriminated Unions

Let’s take this one step further and model the result as a discriminated union.

Instead of returning a single object with multiple nullable fields or boolean flags, we define a closed set of possible outcomes, where each case represents exactly one valid state:

  • Success with data

  • Authentication error

  • Validation error

  • General error

public abstract record ApiResult<TData>
{
    private ApiResult() { }

    public sealed record SuccessResult(TData Data) : ApiResult<TData>;

    public sealed record ValidationFailedResult(ExternalApiValidationErrorResponse ValidationErrorResponse) : ApiResult<TData>;

    public sealed record GeneralFailureResult(string ErrorRawResponse) : ApiResult<TData>;

    public sealed record AuthenticationFailure : ApiResult<TData>;
}

Now every outcome, whether success or a specific error type, has its own class and its own data. There are no boolean flags and no need to guess which property might be null.

Each case represents a single, valid state. A success contains data. A validation error contains validation details, and so on. The structure itself communicates the intent clearly.

This eliminates invalid combinations, improves readability, and makes the contract between the API call and its caller explicit and self-documenting.

Let’s look at the implementation of the Post method:

public async Task<ApiResult<TResponse>> Post<TResponse>(string url, object payload)
{
    string accessToken;
    try 
    {
        accessToken = await GetAccessToken();
    }
    catch (Exception)
    {
        return new ApiResult<TResponse>.AuthenticationFailure();
    }
    var httpRequest = new HttpRequestMessage(HttpMethod.Post, url);
    httpRequest.Content = JsonContent.Create(payload);
    httpRequest.Headers.Add("Authorization", $"Bearer {accessToken}");
    var httpResponse = await httpClient.SendAsync(httpRequest);

    if (httpResponse.StatusCode == HttpStatusCode.BadRequest) {
        var validationErrorResponse = await httpResponse.Content.ReadFromJsonAsync<ExternalApiValidationErrorResponse>();
        return new ApiResult<TResponse>.ValidationFailedResult(validationErrorResponse);
    }

    if (!httpResponse.IsSuccessStatusCode) {
        var rawResponse = await httpResponse.Content.ReadAsStringAsync();
        return new ApiResult<TResponse>.GeneralFailureResult(rawResponse);
    }

    var response = await httpResponse.Content.ReadFromJsonAsync<TResponse>();
    return new ApiResult<TResponse>.SuccessResult(response);
}

We generate a concrete class for every possible outcome. Notice how much more readable this is compared to the “flags” approach.

In addition, the caller is now required to consider the different result types and handle them appropriately:

public async Task CallApiExample()
{
    var syncEmployeeRequest = new SyncEmployeeRequest();
    var callResult = await Post<SyncEmployeeResponse>("https://some.api/sync-employee", syncEmployeeRequest);
    switch (callResult)
    {
        case ApiResult<SyncEmployeeResponse>.SuccessResult successResult:
            // Handle success. can access the response data via successResult.Data
            break;
        case ApiResult<SyncEmployeeResponse>.AuthenticationFailure:
            // Handle authentication failure
            break;
        case ApiResult<SyncEmployeeResponse>.ValidationFailedResult validationFailedResult:
            // Handle validation errors
            break;
        case ApiResult<SyncEmployeeResponse>.GeneralFailureResult generalFailureResult:
            // Handle general API failure
            break;
        default:
            throw new NotSupportedException("Unknown result type");
    }

}

Notice the default branch. It acts as a safety net, ensuring that if a new error type is introduced and not explicitly handled, it will fail at runtime rather than silently being ignored.

The reason we model ApiResult as a closed hierarchy is intentional. By limiting inheritance to a fixed set of known types, we define a finite set of possible outcomes. This makes the design explicit and controlled.

Even more importantly, this opens the door to stronger compile-time guarantees. If, in the future, the language or our implementation enforces exhaustive pattern matching over a closed hierarchy, the compiler will be able to require callers to handle all possible result types. At that point, adding a new error case would immediately surface compile errors in every unhandled call site, forcing correct handling at compile time rather than relying on runtime behavior.

Final Thoughts

As always, there is no silver bullet.

The discriminated union approach is very explicit, but if we apply it to every method, we may end up with switch expressions and conditional handling scattered throughout the codebase. This is one of the strongest advantages of exceptions: you can continue with the natural flow of the program, and if something goes wrong, you throw and let a higher level in the call chain decide how to handle it.

So in the end, it comes down to intent.

If you want to force the caller to at least think about the possible error cases, use a discriminated union or a success-result object. Both approaches make failure explicit and part of the method’s contract.

If you want to preserve a clean and linear flow while centralizing error handling, exceptions are often the more natural and practical choice.

The key is not to choose one approach blindly, but to understand the trade-offs and use each tool where it fits best (as always 🙂).

Building a Real-Time Data Pipeline: Streaming TCP Socket Data to PostgreSQL with Node.js

2026-02-19 23:13:21

Real-time data streams are the lifeblood of many modern applications, ranging from financial market tickers to IoT sensor networks. Processing these continuous streams efficiently requires software that can ingest raw data, parse it reliably, and pipe it into a persistent storage layer.

The socket-to-pg project is a lightweight Node.js microservice designed to do exactly this: it connects to a raw TCP socket, decodes an ASCII-based data stream, and continuously inserts the extracted metrics into a PostgreSQL database. Authored by @rpi1337, the application offers a clean, event-driven architecture using pure JavaScript.

Here is a deep dive into how it works.

The Challenge: Parsing Fragmented TCP Streams

When dealing with low-level TCP connections, data rarely arrives in perfect, individual packages. Instead, it comes in chunks that can arbitrarily split a single message in half.

The socket-to-pg application receives its incoming data over a TCP socket via Node's native net module. The expected protocol is an ASCII stream of messages separated by newlines. Each message consists of a timestamp (in nanoseconds since the UNIX epoch) and a numerical value, enclosed together in brackets, such as [1468009895549670789,397807].

To reliably parse this, the application implements a custom SocketProcessor class that inherits from Node's built-in EventEmitter. When the processor receives a chunk of data, it iterates through it character by character.

  • It uses an inRecord flag that turns true when an opening bracket [ is detected.
  • Incoming characters are pushed into a recordBuffer array.
  • When a closing bracket ] is encountered, the buffered characters are joined together, and the buffer is cleared to prepare for the next message.

Once a fragment is successfully isolated, it must pass a regular expression validation check ([0-9][,]{1}[0-9]) to ensure the basic structure of numbers separated by a comma is met. Valid fragments are then emitted as a data event for the next stage of the pipeline.
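
A minimal TypeScript sketch of that framing logic might look like this. The class name, flag, and buffer mirror the description above (SocketProcessor, inRecord, recordBuffer); the exact code in the repository may differ.

// Sketch of the bracket-framing parser described above; details may differ from the actual repository.
import { EventEmitter } from "node:events";

class SocketProcessor extends EventEmitter {
  private inRecord = false;
  private recordBuffer: string[] = [];

  // Feed raw socket chunks, e.g. socket.on("data", (chunk) => processor.process(chunk.toString("ascii")));
  process(chunk: string): void {
    for (const char of chunk) {
      if (char === "[") {
        // Opening bracket: start collecting a new record
        this.inRecord = true;
        this.recordBuffer = [];
      } else if (char === "]" && this.inRecord) {
        // Closing bracket: join the buffered characters and validate the fragment
        this.inRecord = false;
        const fragment = this.recordBuffer.join("");
        this.recordBuffer = [];
        if (/[0-9][,]{1}[0-9]/.test(fragment)) {
          this.emit("data", fragment); // e.g. "1468009895549670789,397807"
        }
      } else if (this.inRecord) {
        this.recordBuffer.push(char);
      }
    }
  }
}

Because a record can be split across TCP chunks, inRecord and recordBuffer persist between process() calls, which is what makes the parser resilient to fragmentation.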

Bridging the Gap: The Database Provider

Once the metrics are extracted from the raw socket stream, they need to be stored. For this, the application uses the popular pg library (version 8.7.1) and encapsulates its database logic within a DatabaseProvider class.

The database workflow is designed to be self-initializing:

  1. Connection: The provider connects to the PostgreSQL instance using configurations passed to the client.
  2. Table Creation: At startup, it dynamically executes a CREATE TABLE query to ensure the target table is available. By default, this table is named socket_stream_values and requires timestamp bigint NOT NULL and value integer NOT NULL fields.
  3. Data Insertion: When the SocketProcessor emits a new piece of data, the main script splits the fragment by its comma to separate the timestamp from the value. The writeToTable method is then called, which safely constructs a parameterized query (like $1, $2) using a helper method named _prepareInsertVariables. This approach ensures that incoming parameters are strictly evaluated by the database engine, avoiding potential SQL injection attacks.
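
For step 3, a sketch of the table-creation and insert path might look like this, using the pg library's parameterized queries. The table and column names follow the defaults described above; the helper names ensureTable and writeFragment are illustrative, not the repository's actual identifiers.

// Sketch of table creation and the parameterized insert path; helper names are illustrative.
import { Client } from "pg";

const TABLE_NAME = process.env.TABLE_NAME || "socket_stream_values";

async function ensureTable(client: Client): Promise<void> {
  // "IF NOT EXISTS" is an assumption; the description only says a CREATE TABLE query runs at startup
  await client.query(
    `CREATE TABLE IF NOT EXISTS ${TABLE_NAME} (timestamp bigint NOT NULL, value integer NOT NULL)`
  );
}

async function writeFragment(client: Client, fragment: string): Promise<void> {
  // Split "1468009895549670789,397807" into timestamp and value
  const [timestamp, value] = fragment.split(",");
  // Values travel as $1/$2 parameters, so they are never concatenated into the SQL string
  await client.query(
    `INSERT INTO ${TABLE_NAME} (timestamp, value) VALUES ($1, $2)`,
    [timestamp, value]
  );
}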

Configuration and Deployment

The project is structured to be portable and configurable via environment variables. It relies on the dotenv package to load parameters into process.env.

To deploy the script, a user must configure their .env file to include network parameters (SOCKET_HOST, SOCKET_PORT) as well as database credentials (DB_HOST, DB_PORT, DB_DATABASE, DB_USER, DB_PASSWORD, and optionally TABLE_NAME). Once the .env configuration is in place, the data pipeline is initialized by simply executing node index.js via the command line.
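
An illustrative .env might look like the following; the variable names come from the project's documentation, while every value here is a placeholder.

SOCKET_HOST=127.0.0.1
SOCKET_PORT=9000
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE=metrics
DB_USER=postgres
DB_PASSWORD=changeme
# Optional; defaults to socket_stream_values
TABLE_NAME=socket_stream_values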

Through this combination of robust stream parsing and clean class-based separation of concerns, socket-to-pg demonstrates an efficient, native approach to building a real-time data ingestion pipeline in Node.js.

https://github.com/arpad1337/socket-to-pg