The Practical Developer

A constructive and inclusive social network for software developers.

Building WSL-UI: The Polish Phase and Privacy-First Analytics

2026-01-23 06:17:26

The previous posts in this series covered the interesting technical challenges — Tauri architecture, mock mode, registry surgery, Microsoft Store publishing, E2E testing. But there's a part of building WSL-UI that I haven't talked about: the sheer amount of time spent on polish.

As someone who spent years as a backend developer, this was an eye-opener.

The Backend Developer's Perspective

In backend development, you deal with well-defined contracts. An API either returns the right data or it doesn't. A function either handles the edge case or throws an exception. Tests pass or fail. There's a certain... cleanliness to it.

Request comes in. Process it. Response goes out. Done.

UI development is different. The "correct" behavior is subjective. Does this button look clickable enough? Is this spacing consistent with that spacing? What happens when the window is resized to 800px wide? 600px? What about 4K displays?

The Endless Polish

I spent more time on polish than on actual features. Some examples:

Pixel Pushing

"The cards don't quite line up." Okay, let me check the margins. And the padding. And the gap between items. And whether flex or grid makes more sense here. Oh, and the icon is 2 pixels off from the text baseline.

In backend code, if two values are functionally equivalent, it doesn't matter which you use. In UI, 15px vs 16px of margin is visible. Users notice, even if they can't articulate what's wrong.

Consistency Across States

A distribution can be:

  • Running
  • Stopped
  • Starting (transitioning)
  • Stopping (transitioning)
  • Installing
  • Failed

Each state needs distinct visual treatment. Status badges, action buttons that enable/disable appropriately, progress indicators. And they all need to look like they belong to the same app.

I ended up creating a state machine diagram just to track which UI elements should be visible in which states. Something I'd never needed for a REST API.
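
For illustration, here's the gist of that diagram in code. This is a sketch, not the actual WSL-UI source; the type and field names are mine:

type DistroState = "Running" | "Stopped" | "Starting" | "Stopping" | "Installing" | "Failed";

interface UiTreatment {
  badge: "green" | "grey" | "amber" | "red";
  canStart: boolean;
  canStop: boolean;
  showSpinner: boolean;
}

// One table that answers "which affordances does each state get?"
const uiByState: Record<DistroState, UiTreatment> = {
  Running:    { badge: "green", canStart: false, canStop: true,  showSpinner: false },
  Stopped:    { badge: "grey",  canStart: true,  canStop: false, showSpinner: false },
  Starting:   { badge: "amber", canStart: false, canStop: false, showSpinner: true  },
  Stopping:   { badge: "amber", canStart: false, canStop: false, showSpinner: true  },
  Installing: { badge: "amber", canStart: false, canStop: false, showSpinner: true  },
  Failed:     { badge: "red",   canStart: true,  canStop: false, showSpinner: false },
};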

Edge Cases Everywhere

Backend edge cases are usually about data: null values, empty arrays, strings that are too long. UI edge cases are about everything:

  • What if the distribution name is 50 characters long?
  • What if there are 20 distributions?
  • What if a user has never created any distributions?
  • What if an operation fails mid-way?
  • What happens during the 2-second gap between clicking "Start" and the distribution actually starting?

Each edge case needs thought. Empty states need messaging. Long names need truncation. Slow operations need spinners. Failed operations need error messages that are actually helpful.

The Real-World Gauntlet

Once I published to the Microsoft Store, real users found things I'd never considered:

  • High-contrast mode exists, and my colour choices didn't work with it
  • Some users have display scaling at 150% or 200%
  • Windows Terminal isn't always the default terminal
  • People actually read error messages (and expect them to be useful)

The Time Sink

If I had to estimate, I'd say:

  • 30% of development time: Building features
  • 20% of development time: Testing and bug fixes
  • 50% of development time: Polish, edge cases, and "making it feel right"

That ratio surprised me. Backend development isn't like this. You build the feature, write tests, and move on. UI development has this long tail of refinement that never really ends.

Adding Analytics with Aptabase

Once WSL-UI was on the Microsoft Store, I wanted some visibility into how people were actually using it. But I had constraints:

  1. Privacy matters — I didn't want to collect personal data
  2. Simplicity — I didn't need complex funnel analysis
  3. Opt-in preferred — Users should choose to share data

I found Aptabase, which is designed exactly for this use case — privacy-first analytics for desktop and mobile apps.

How It Works

Diagram: Aptabase event flow

Aptabase collects minimal data:

  • Country (derived from IP, then discarded — rough location only)
  • OS version
  • App version
  • Custom events you define

No user IDs, no tracking cookies, no personal information. The data is anonymised and aggregated.

Implementation

Adding Aptabase to a Tauri app was straightforward:

// In your Tauri setup
use aptabase_rs::Aptabase;

fn main() {
    let aptabase = Aptabase::new("YOUR_APP_KEY");

    tauri::Builder::default()
        .manage(aptabase)
        .run(tauri::generate_context!())
        .expect("error while running application");
}

Then tracking events:

#[tauri::command]
pub async fn start_distribution(
    name: String,
    aptabase: State<'_, Aptabase>
) -> Result<(), String> {
    // Do the actual work
    wsl_start(&name)?;

    // Track the event (if analytics enabled)
    if analytics_enabled() {
        aptabase.track_event("distribution_started", None);
    }

    Ok(())
}

User Consent

This is the important part. Analytics is opt-in: it's disabled by default. When you first install WSL-UI, no data is sent anywhere.

In the Settings page, there's a toggle:

Settings page showing the analytics opt-in toggle

When a user enables analytics, they're explicitly choosing to share usage data. The setting is stored locally and checked before any tracking calls.
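
On the frontend, the gist looks something like this. A sketch only: the storage key and command name are illustrative, and the authoritative check stays on the Rust side as shown above:

import { invoke } from "@tauri-apps/api/core";  // Tauri v2 import path

async function setAnalyticsEnabled(enabled: boolean): Promise<void> {
  // Remember the user's choice locally (illustrative storage; could be a settings file)
  localStorage.setItem("analyticsEnabled", String(enabled));

  // Tell the backend, which gates every track_event call on this flag
  await invoke("set_analytics_enabled", { enabled });  // hypothetical command name
}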

What I Actually See

The Aptabase dashboard shows:

Aptabase dashboard showing daily active users

Aptabase sessions view

Aptabase showing the event data collected

Key insights I've gained:

  • Which features are actually used (and which aren't)
  • App version distribution (helpful for knowing when to drop legacy support)
  • Rough geographic distribution of users
  • Whether new features get adopted

What I don't see:

  • Individual user behaviour
  • Personal information
  • Anything that could identify a specific person

Why This Matters

As a solo developer, I have limited time. Knowing that 80% of users use feature X and 5% use feature Y helps prioritise. If I'm going to spend time polishing something, it should be the things people actually use.

Without analytics, you're guessing. With privacy-invasive analytics, you're betraying user trust. Aptabase hits a sweet spot — enough signal to make informed decisions, without crossing ethical lines.

Lessons Learned

Building WSL-UI taught me things I wouldn't have learned from backend work:

  1. UI polish is real work — It's not "just making things pretty." It's dealing with an exponentially larger state space than backend code.

  2. Edge cases multiply — Every UI state, combined with every data state, combined with every screen size, creates an explosion of cases to handle.

  3. Users are unpredictable — They'll find workflows you never imagined. Build with flexibility.

  4. Analytics can be ethical — You don't need to choose between "no data" and "track everything." Privacy-first tools exist.

  5. The last 10% takes 50% of the time — Getting from "it works" to "it feels polished" is a longer journey than building the initial features.

Series Wrap-Up

That's the complete WSL-UI journey. From a Christmas project to learn Tauri, through mock modes and registry surgery, to the Microsoft Store and beyond.

If you're a backend developer considering a desktop app project, here's my honest take: it's more work than you expect, but also more rewarding. There's something satisfying about building something you can click on, something that sits in your taskbar and solves a problem you have every day.

Try It Yourself

WSL-UI is open source and available on:

Thanks for reading the series!

Originally published at https://wsl-ui.octasoft.co.uk/blog/building-wsl-ui-polish-and-analytics

Writing Terraform Resources with Write-Only Parameters

2026-01-23 06:10:57


When building Terraform providers that handle sensitive data like passwords, API tokens, or secret keys, you'll eventually encounter the need for write-only parameters. These are values that should be sent to an API but never stored in Terraform state files.

In this post, I'll walk through implementing write-only parameters in a Terraform provider, the challenges involved, and how to balance automation with user control.

What Are Write-Only Parameters?

Write-only parameters (introduced in Terraform 1.11) are resource arguments that:

  • Are sent to the API during creation and updates
  • Never appear in the Terraform state file
  • Cannot be read back from the API (the API doesn't return them)

This is crucial for security. Without write-only parameters, sensitive values like passwords would be stored in plaintext in your state files.

Here's an example from my fork of the Event Drive Ansible (EDA) provider:

resource "aap_eda_credential" "example" {
  name               = "my-api-credential"
  credential_type_id = 1

  # Write-only: sent to API but NEVER stored in state
  inputs_wo = jsonencode({
    username  = "service-account"
    api_token = var.api_token
  })
}

The _version Argument

Since Terraform's state file does not track the value of the secret, Terraform needs a way to know when the secret must be updated. The standard solution for detecting changes in write-only parameters is to add a companion _version field that users must manually increment:

resource "aap_eda_credential" "manual" {
  name      = "my-credential"
  inputs_wo = jsonencode({
    password = var.new_password
  })

  # User must increment this to trigger an update
  inputs_wo_version = 2  # Changed from 1
}

How it works:

  1. User updates the secret in inputs_wo
  2. User manually increments inputs_wo_version
  3. Terraform detects the version change and triggers an update
  4. The new secret is sent to the API

This works, but it's not user-friendly. Users have to remember to:

  • Increment the version every time they change secrets
  • Track version numbers manually
  • Update two fields instead of one

A Better Approach: Automatic Change Detection

Ideally, users shouldn't need to manage version numbers. They should just update the secret value and Terraform should detect the change automatically. But here's the problem:

If inputs_wo is write-only and not in state, how do we detect when it changes?

The answer: store a deterministic hash in private state.

Using Private State and Hashing

Terraform's plugin framework provides private state - a place to store provider-managed data that's:

  • Stored in the state file (so it persists)
  • Not visible to users (doesn't appear in terraform show)
  • Perfect for storing things like hashes

Here's the implementation approach:

// Create operation
func (r *EDACredentialResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
    var data EDACredentialResourceModel

    req.Plan.Get(ctx, &data)

    // Calculate SHA-256 hash of the inputs
    inputsHash := calculateInputsHash(data.InputsWO.ValueString())

    // Store hash in private state (JSON-wrapped for validity)
    hashJSON := fmt.Sprintf(`{"hash":"%s"}`, inputsHash)
    resp.Private.SetKey(ctx, "inputs_wo_hash", []byte(hashJSON))

    // Set auto-managed version
    data.InputsWOVersion = tftypes.Int64Value(1)

    // ... send to API and save state ...
}

func calculateInputsHash(inputs string) string {
    h := sha256.New()
    h.Write([]byte(inputs))
    return hex.EncodeToString(h.Sum(nil))
}

In the Update operation, we compare hashes:

func (r *EDACredentialResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
    var data, state EDACredentialResourceModel

    req.Plan.Get(ctx, &data)
    req.State.Get(ctx, &state)

    // Get stored hash from private state
    oldHashBytes, _ := req.Private.GetKey(ctx, "inputs_wo_hash")

    var hashWrapper struct {
        Hash string `json:"hash"`
    }
    json.Unmarshal(oldHashBytes, &hashWrapper)
    oldHash := hashWrapper.Hash

    // Calculate new hash
    newHash := calculateInputsHash(data.InputsWO.ValueString())

    if newHash != oldHash {
        // Inputs changed! Auto-increment version
        data.InputsWOVersion = tftypes.Int64Value(state.InputsWOVersion.ValueInt64() + 1)

        // Update stored hash
        hashJSON := fmt.Sprintf(`{"hash":"%s"}`, newHash)
        resp.Private.SetKey(ctx, "inputs_wo_hash", []byte(hashJSON))
    } else {
        // No change, keep version as-is
        data.InputsWOVersion = state.InputsWOVersion
    }

    // ... send to API and save state ...
}

Now users can just update the secret and Terraform automatically detects it:

resource "aap_eda_credential" "auto" {
  name      = "my-credential"
  inputs_wo = jsonencode({
    password = var.new_password  # Just change this!
  })
  # inputs_wo_version auto-increments (computed field)
}

Best of Both Worlds

While automatic detection is convenient, some users may not want a hash of their secrets stored in state - even if it's SHA-256 and in private state. The hash is still derived from the secret value.

Solution: Support both modes

func (r *EDACredentialResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
    var data EDACredentialResourceModel
    req.Plan.Get(ctx, &data)

    var versionToSet tftypes.Int64

    if data.InputsWOVersion.IsNull() || data.InputsWOVersion.IsUnknown() {
        // Auto-managed mode: store hash
        inputsHash := calculateInputsHash(data.InputsWO.ValueString())
        hashJSON := fmt.Sprintf(`{"hash":"%s"}`, inputsHash)
        resp.Private.SetKey(ctx, "inputs_wo_hash", []byte(hashJSON))
        versionToSet = tftypes.Int64Value(1)
    } else {
        // Manual mode: user provided version, don't store hash
        versionToSet = data.InputsWOVersion
    }

    // ... rest of create logic ...
    data.InputsWOVersion = versionToSet
}

Schema definition:

"inputs_wo_version": schema.Int64Attribute{
    Optional: true,
    Computed: true,
    PlanModifiers: []planmodifier.Int64{
        int64planmodifier.UseStateForUnknown(),
    },
    Description: "Version number for managing credential updates. " +
        "If not set, the provider will automatically detect changes to inputs_wo " +
        "using a SHA-256 hash stored in private state. If set manually, you " +
        "control when the credential is updated by incrementing this value yourself.",
}

Users can choose their preferred mode:

# Auto-managed (default)
resource "aap_eda_credential" "auto" {
  name      = "auto-credential"
  inputs_wo = jsonencode({ password = var.pwd })
  # version auto-increments
}

# Manual control
resource "aap_eda_credential" "manual" {
  name              = "manual-credential"
  inputs_wo         = jsonencode({ password = var.pwd })
  inputs_wo_version = 1  # I control this
}

Handling Mode Switching: Just Say No

You might think: "What if a user starts with auto-mode and then wants to switch to manual mode?"

Attempting to handle this creates a mess:

  • Auto → Manual: If a user suddenly sets inputs_wo_version, do we remove the hash? What if they're just trying to force an update?
  • Manual → Auto: We'd need to create a hash, but we can't compare it to the old inputs (they're write-only and not in state), so we can't tell if inputs changed during the transition
  • The logic becomes complex and error-prone
  • Edge cases multiply

Better approach: Prevent mode switching

func (r *EDACredentialResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
    // Determine current mode from private state
    oldHashBytes, _ := req.Private.GetKey(ctx, "inputs_wo_hash")
    wasAutoManaged := oldHashBytes != nil

    // Determine desired mode from config
    var configModel EDACredentialResourceModel
    req.Config.Get(ctx, &configModel)
    isNowManual := !configModel.InputsWOVersion.IsNull() && !configModel.InputsWOVersion.IsUnknown()

    var state EDACredentialResourceModel
    req.State.Get(ctx, &state)
    wasManual := !wasAutoManaged && !state.InputsWOVersion.IsNull()

    // Prevent mode switching
    if wasAutoManaged && isNowManual {
        resp.Diagnostics.AddError(
            "Cannot switch from auto-managed to manual version management",
            "The inputs_wo_version field was previously auto-managed. Once auto-managed, "+
            "it cannot be switched to manual mode. If you need to manually control the version, "+
            "you must recreate the resource with inputs_wo_version set from the start.",
        )
        return
    }

    if wasManual && !isNowManual {
        resp.Diagnostics.AddError(
            "Cannot switch from manual to auto-managed version management",
            "The inputs_wo_version field was previously manually managed. Once manually managed, "+
            "it cannot be switched to auto mode. If you need auto-managed version control, "+
            "you must recreate the resource without setting inputs_wo_version.",
        )
        return
    }

    // ... rest of update logic for the chosen mode ...
}

This gives users a clear error message:

Error: Cannot switch from auto-managed to manual version management

The inputs_wo_version field was previously auto-managed. Once auto-managed,
it cannot be switched to manual mode. If you need to manually control the
version, you must recreate the resource with inputs_wo_version set from the start.

Key Takeaways

When implementing write-only parameters in Terraform providers:

  1. Use private state for hashes - It's stored but not visible to users
  2. Support both auto and manual modes - Give users choice for their security preferences
  3. Prevent mode switching - The complexity isn't worth it; just error clearly
  4. Document clearly - Users need to understand the trade-offs

The complete implementation provides a great user experience while respecting privacy concerns:

# Most users: auto-managed, no version to track
resource "aap_eda_credential" "api" {
  name      = "api-credential"
  inputs_wo = jsonencode({ token = var.token })
}

# Privacy-conscious users: manual control, no hash stored
resource "aap_eda_credential" "secure" {
  name              = "secure-credential"
  inputs_wo         = jsonencode({ token = var.token })
  inputs_wo_version = 1
}

Both approaches are valid. Both are clear. And switching between them requires a resource recreation, which is easy to communicate and reason about.

Have you implemented write-only parameters in your Terraform provider? What challenges did you face? Let me know in the comments!

Frontend – Temporal, APIs, and DateTimePickers That Don't Lie

2026-01-23 06:10:07

Part 8 of 8 in the series Time in Software, Done Right

You've modeled time correctly on the backend. You've stored it properly in the database. Now you need to handle it in the browser — where users pick dates and times, and where your API sends data back and forth.

JavaScript's Date object has been the source of countless bugs. The Temporal API finally gives us proper types. But even with good types, you still need to think about what your DateTimePicker is actually asking users to select, and how to send that data across the wire.

This article covers the Temporal API, API contract design, and the principles behind DateTimePickers that don't mislead users.

Why JavaScript's Date Is Problematic

The Date object has been with us since 1995. It has... issues:

// Months are 0-indexed (January = 0)
new Date(2026, 5, 5)  // June 5th, not May 5th

// Parsing is inconsistent across browsers
new Date("2026-06-05")  // Midnight UTC? Midnight local? Depends on browser.

// No timezone support beyond local and UTC
const d = new Date();
d.getTimezoneOffset();  // Minutes offset, but which zone? You don't know.

// Mutable (a constant footgun)
const d = new Date();
d.setMonth(11);  // Mutates in place

There's no LocalDate, no LocalDateTime, no ZonedDateTime. Just one type that tries to do everything and does none of it well.

The Temporal API

The Temporal API is the modern replacement for Date. It's currently at Stage 3 — the final candidate stage before standardization — and requires a polyfill in most browsers (e.g., @js-temporal/polyfill). Browser support is coming, but for now, plan on using the polyfill.
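
If you want to try it today, the polyfill is a one-line import and exposes the same Temporal namespace used in the examples below:

// Until browsers ship Temporal natively, pull it from the polyfill package
import { Temporal } from "@js-temporal/polyfill";

const today = Temporal.Now.plainDateISO();  // e.g. 2026-01-23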

Type Mapping

| Concept | Temporal Type | NodaTime Equivalent |
|---|---|---|
| Physical moment | Temporal.Instant | Instant |
| Calendar date | Temporal.PlainDate | LocalDate |
| Wall clock time | Temporal.PlainTime | LocalTime |
| Date + time (no zone) | Temporal.PlainDateTime | LocalDateTime |
| IANA timezone | Temporal.TimeZone | DateTimeZone |
| Full context | Temporal.ZonedDateTime | ZonedDateTime |

The names differ (Plain vs Local), but the concepts are identical. If you've understood NodaTime, you already understand Temporal.

Basic Usage

// Just a date
const date = Temporal.PlainDate.from("2026-06-05");
const date2 = new Temporal.PlainDate(2026, 6, 5);  // Months are 1-indexed!

// Just a time
const time = Temporal.PlainTime.from("10:00");

// Date and time (no zone)
const local = Temporal.PlainDateTime.from("2026-06-05T10:00");

// With timezone
const zone = Temporal.TimeZone.from("Europe/Vienna");
const zoned = local.toZonedDateTime(zone);

// Get the instant
const instant = zoned.toInstant();

Converting Between Types

// ZonedDateTime → components
const zoned = Temporal.ZonedDateTime.from("2026-06-05T10:00[Europe/Vienna]");
const local = zoned.toPlainDateTime();     // PlainDateTime
const instant = zoned.toInstant();          // Instant
const tzId = zoned.timeZoneId;              // "Europe/Vienna"

// Instant → ZonedDateTime (for display)
const instant = Temporal.Instant.from("2026-06-05T08:00:00Z");
const inVienna = instant.toZonedDateTimeISO("Europe/Vienna");
const inLondon = instant.toZonedDateTimeISO("Europe/London");

DST Handling

Temporal handles DST ambiguities explicitly:

// Time that doesn't exist (spring forward gap)
const local = Temporal.PlainDateTime.from("2026-03-29T02:30");
const zone = Temporal.TimeZone.from("Europe/Vienna");

// Default ("compatible"): shifts forward for gaps, picks earlier occurrence for overlaps
const zoned = local.toZonedDateTime(zone, { disambiguation: "compatible" });
const zoned = local.toZonedDateTime(zone, { disambiguation: "reject" });      // Throws

// Time that exists twice (fall back overlap)
const local = Temporal.PlainDateTime.from("2026-10-25T02:30");
const zoned = local.toZonedDateTime(zone, { disambiguation: "earlier" });  // First occurrence
const zoned = local.toZonedDateTime(zone, { disambiguation: "later" });    // Second occurrence

API Contracts: Sending Time Across the Wire

When your frontend talks to your backend, you need a clear contract for time values. There are several approaches.

Option 1: ISO Strings (Simple)

For instants, use ISO 8601 with Z suffix:

{
  "createdAt": "2026-06-05T08:00:00Z"
}

Unambiguous. Both sides parse it the same way.

Option 2: Structured Object (Recommended for User Intent)

For human-scheduled times, send the components:

{
  "appointment": {
    "localStart": "2026-06-05T10:00:00",
    "timeZoneId": "Europe/Vienna"
  }
}

The backend receives exactly what the user chose. It can:

  • Validate the timezone ID
  • Handle DST ambiguities with domain-specific logic
  • Compute the instant
  • Store all three values

Option 3: ZonedDateTime String

Temporal and some APIs support bracketed timezone notation:

{
  "startsAt": "2026-06-05T10:00:00[Europe/Vienna]"
}

This is compact and unambiguous, but not all JSON parsers handle it natively. You'll need custom parsing.
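
Temporal can parse and emit this form itself, so one option is to treat it as an opaque string in JSON and convert at the edges. A minimal sketch:

// After JSON.parse, the value is just a string; Temporal understands the bracket notation
const zdt = Temporal.ZonedDateTime.from("2026-06-05T10:00:00[Europe/Vienna]");

zdt.timeZoneId;                 // "Europe/Vienna"
zdt.toInstant().toString();     // "2026-06-05T08:00:00Z"

// Serialising back produces the bracketed form (with the offset included)
const wire = zdt.toString();    // "2026-06-05T10:00:00+02:00[Europe/Vienna]"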

What NOT to Do

// DON'T: Ambiguous local time
{ "startsAt": "2026-06-05T10:00:00" }  // What timezone?

// DON'T: Offset without timezone ID
{ "startsAt": "2026-06-05T10:00:00+02:00" }  // Which +02:00 zone?

// DON'T: Unix timestamp for user-scheduled events
{ "startsAt": 1780758000 }  // Lost the user's intent

TypeScript Interfaces

Define clear types for your API:

// For instants (logs, events, timestamps)
interface AuditEvent {
  occurredAt: string;  // ISO 8601 with Z: "2026-06-05T08:00:00Z"
}

// For user-scheduled times
interface ScheduledTime {
  local: string;       // ISO 8601 without offset: "2026-06-05T10:00:00"
  timeZoneId: string;  // IANA zone: "Europe/Vienna"
}

interface Appointment {
  title: string;
  start: ScheduledTime;
  end: ScheduledTime;
}

// For date-only values
interface Person {
  name: string;
  dateOfBirth: string;  // ISO 8601 date: "1990-03-15"
}

DateTimePicker Design Principles

A DateTimePicker is a UI component that lets users select a date, time, or both. But "picking a time" isn't as simple as it sounds.

Principle 1: Know What You're Asking For

Before building (or choosing) a picker, decide:

| What does the user select? | Temporal type | What do you send to the backend? |
|---|---|---|
| Just a date | PlainDate | "2026-06-05" |
| Just a time | PlainTime | "10:00" |
| Date and time | PlainDateTime | "2026-06-05T10:00" |
| Date, time, and timezone | ZonedDateTime | { local, timeZoneId } |

Most pickers handle the first three. The fourth requires explicit timezone UI.

Principle 2: Timezone Display — When and How

Show the timezone when:

  • Users in different timezones use the system
  • The selected timezone might differ from the user's local timezone
  • The business operates across timezones

Hide the timezone when:

  • All users are in the same timezone
  • The context is unambiguous (e.g., "your local time")
  • Showing it would cause confusion without adding clarity

How to show it:

  • Display the current timezone near the picker: "Vienna (UTC+2)"
  • Allow changing it only if the user might need a different zone
  • Don't default to UTC — default to the user's timezone or the organization's timezone

Principle 3: Handle DST Gaps and Overlaps

When the user picks a time that falls in a DST transition:

Gap (time doesn't exist):

  • Option A: Prevent selection (disable those times in the picker)
  • Option B: Accept and adjust (shift forward), but inform the user
  • Option C: Show a warning and ask the user to choose a different time

Overlap (time exists twice):

  • Option A: Ask which one they mean (before DST change or after)
  • Option B: Pick one automatically and note it
  • Option C: Ignore it (acceptable for many use cases)

The right choice depends on your domain. A medical appointment might need explicit handling; a casual reminder might not.
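
Whichever policy you pick, Temporal lets you detect both cases before submitting. A minimal sketch (the helper name is mine, not from any library):

function classifyLocalTime(local: Temporal.PlainDateTime, timeZoneId: string): "unique" | "overlap" | "gap" {
  const earlier = local.toZonedDateTime(timeZoneId, { disambiguation: "earlier" });
  const later = local.toZonedDateTime(timeZoneId, { disambiguation: "later" });

  if (earlier.epochNanoseconds === later.epochNanoseconds) {
    return "unique";    // Exactly one instant matches this wall-clock time
  }
  if (earlier.toPlainDateTime().equals(local)) {
    return "overlap";   // The wall-clock time exists twice (fall back)
  }
  return "gap";         // The wall-clock time doesn't exist (spring forward)
}

classifyLocalTime(Temporal.PlainDateTime.from("2026-03-29T02:30"), "Europe/Vienna");  // "gap"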

Principle 4: Don't Lie About What's Stored

If your backend stores local + timeZoneId, your picker should collect exactly that. Don't:

  • Show a local picker but send UTC (user sees 10:00, backend gets 08:00)
  • Show UTC but let users think it's local
  • Convert silently and hope nobody notices

The picker's display should match what gets stored.

Principle 5: Consider the Editing Experience

When users edit an existing time:

  • Show them what they originally entered (the local time)
  • Don't show a converted UTC value
  • If the timezone changed since creation, decide: show original zone or current zone?

Principle 6: Validation Belongs on Both Ends

The frontend picker should:

  • Prevent obviously invalid dates (February 30th)
  • Warn about DST issues if relevant
  • Send well-formed data

The backend should:

  • Validate the timezone ID is real
  • Handle DST ambiguities according to business rules
  • Never trust that the frontend did it right
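
For the frontend half, a minimal pre-submit check might look like this (illustrative only; the backend repeats the checks with NodaTime as shown earlier in the series):

function validateScheduledTime(value: { local: string; timeZoneId: string }): string[] {
  const errors: string[] = [];

  try {
    // overflow: "reject" refuses dates like February 30th instead of clamping them
    Temporal.PlainDateTime.from(value.local, { overflow: "reject" });
  } catch {
    errors.push(`Not a valid local date-time: ${value.local}`);
  }

  // Canonical IANA IDs known to this runtime (aliases such as legacy zone names won't match)
  if (!Intl.supportedValuesOf("timeZone").includes(value.timeZoneId)) {
    errors.push(`Unknown timezone ID: ${value.timeZoneId}`);
  }

  return errors;
}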

A Minimal Picker Contract

For a DateTimePicker that collects a scheduled time:

Input (initial value):

interface DateTimePickerValue {
  local: string;       // "2026-06-05T10:00"
  timeZoneId: string;  // "Europe/Vienna"
}

Output (on change):

interface DateTimePickerValue {
  local: string;       // "2026-06-05T14:30"
  timeZoneId: string;  // "Europe/Vienna"
}

The picker:

  • Displays date and time inputs
  • Optionally displays or allows changing the timezone
  • Emits the combined value on change

The parent component:

  • Receives the structured value
  • Sends it to the backend as-is
  • Doesn't do timezone math

Putting It Together: Frontend to Database

Here's the full flow:

1. User picks a time in a DateTimePicker

UI shows: June 5, 2026 at 10:00 AM (Vienna)

2. Frontend sends to API

POST /appointments
{
  "title": "Team Standup",
  "start": {
    "local": "2026-06-05T10:00:00",
    "timeZoneId": "Europe/Vienna"
  }
}

3. Backend (NodaTime) processes

var local = LocalDateTime.FromDateTime(DateTime.Parse(dto.Start.Local));
var zone = DateTimeZoneProviders.Tzdb[dto.Start.TimeZoneId];
var instant = local.InZoneLeniently(zone).ToInstant();

// Store all three
var appointment = new Appointment
{
    LocalStart = local,
    TimeZoneId = dto.Start.TimeZoneId,
    InstantUtc = instant
};

4. Database stores

INSERT INTO appointments (local_start, time_zone_id, instant_utc)
VALUES ('2026-06-05 10:00:00', 'Europe/Vienna', '2026-06-05 08:00:00+00');

5. Later: API returns to frontend

{
  "title": "Team Standup",
  "start": {
    "local": "2026-06-05T10:00:00",
    "timeZoneId": "Europe/Vienna"
  },
  "instantUtc": "2026-06-05T08:00:00Z"
}

6. Frontend displays

  • To the organizer (Vienna): "June 5 at 10:00 AM"
  • To a participant in London: "June 5 at 9:00 AM (10:00 AM Vienna)"
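
On the frontend, step 6 can be as small as this sketch (viewerZone would come from detection or a saved preference, covered next):

const payload = {
  start: { local: "2026-06-05T10:00:00", timeZoneId: "Europe/Vienna" },
  instantUtc: "2026-06-05T08:00:00Z",
};

const viewerZone = Intl.DateTimeFormat().resolvedOptions().timeZone;  // e.g. "Europe/London"

const viewerTime = Temporal.Instant.from(payload.instantUtc).toZonedDateTimeISO(viewerZone);
const organizerTime = Temporal.PlainDateTime.from(payload.start.local);

console.log(`${viewerTime.toPlainTime()} your time (${organizerTime.toPlainTime()} ${payload.start.timeZoneId})`);
// For a London viewer: "09:00:00 your time (10:00:00 Europe/Vienna)"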

Detecting the User's Timezone

We've covered storing and displaying times with timezones. But how do you know what timezone the user is in?

Browser Detection

const userZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
// "Europe/Vienna", "Europe/London", etc.

This returns an IANA timezone ID — exactly what you need.

Caveats

  • VPNs and proxies may cause the browser to report a different zone than the user expects
  • Corporate networks sometimes override timezone settings
  • User preference might differ from their physical location (e.g., someone living in Vienna but working with a London team)

Best Practice

Use browser detection as a default, but let users confirm or change it:

  1. Detect the timezone automatically
  2. Show it clearly in the UI: "Your timezone: Europe/Vienna"
  3. Let users change it if needed
  4. Store their preference (per user, not per session)

Don't silently assume the detected timezone is correct. A user in Vienna might be scheduling a meeting for their London office.
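
A minimal sketch of that resolution order (the storage key is an assumption, not from a specific app):

function resolveTimeZone(savedPreference: string | null): string {
  // Ignore a saved value this runtime no longer recognises
  if (savedPreference && Intl.supportedValuesOf("timeZone").includes(savedPreference)) {
    return savedPreference;
  }
  return Intl.DateTimeFormat().resolvedOptions().timeZone;  // fall back to detection
}

const effectiveZone = resolveTimeZone(localStorage.getItem("preferredTimeZone"));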

Timezone Is Not a Locale (and Not a Language)

Timezone, language, and locale are often treated as one setting — but they are three independent concerns.

  • Language (i18n) controls text:

    • "Today" vs "Heute" vs "Aujourd'hui"
  • Locale (l10n) controls formatting:

    • 1,000.00 vs 1.000,00
    • MM/DD/YYYY vs DD.MM.YYYY
  • Timezone controls when things happen:

    • Europe/Vienna
    • America/New_York
    • Asia/Tokyo

They often change together — but they are not coupled.

Example

A French-speaking user in New York expects French UI, French date formatting, and New York time. Inferring Europe/Paris from fr is wrong.

DateTimePicker Rule

A DateTimePicker should not assume timezone based on language or locale.

Timezone must come from explicit user choice, browser detection, or application context.

Key Takeaways

  • Temporal API gives JavaScript proper types: PlainDate, PlainDateTime, ZonedDateTime, Instant
  • API contracts should be explicit: use ISO strings for instants, structured objects for user intent
  • DateTimePickers need to know what they're collecting: date, time, datetime, or datetime + zone
  • Show timezones when they matter, hide them when they'd confuse
  • Handle DST explicitly — don't let invalid times slip through silently
  • Don't lie about what's stored — the picker should match the backend model
  • Validate on both ends — trust but verify

This concludes the series "Time in Software, Done Right." You now have a complete mental model for handling time — from concepts to code, from backend to database to frontend.

The next time someone says "just store it as UTC," you'll know when that's right, and when it's a trap.

Rick Beato: The Secret History of the 90's Greatest Ballad

2026-01-23 06:07:41

Ever wonder about the guy behind that epic 90s ballad? Edwin McCain spills the beans in a candid chat, tracing his journey from rough-and-tumble bar performances to battling record labels just to get his voice heard.

Get the inside scoop on his lightning-fast rise to fame and the real struggles of life out on the road.

Watch on YouTube

PostgreSQL – Storing Time Without Lying to Yourself

2026-01-23 06:07:23

Part 7 of 8 in the series Time in Software, Done Right

You've modeled time correctly in your application. You have LocalDateTime, TimeZoneId, and Instant. Now you need to persist it.

PostgreSQL has excellent time support — but its type names are misleading, and the default behaviors can surprise you. This article explains what PostgreSQL actually does, which types to use, and how to avoid the common traps.

The Two Timestamp Types

PostgreSQL has two timestamp types:

  • timestamp without time zone (or just timestamp)
  • timestamp with time zone (or timestamptz)

The names suggest one stores a timezone and one doesn't. That's not quite right.

timestamp without time zone

This stores exactly what you give it — a date and time with no timezone context.

INSERT INTO test (ts) VALUES ('2026-06-05 10:00:00');
SELECT ts FROM test;
-- Result: 2026-06-05 10:00:00

No conversion happens. No timezone is stored. It's just a calendar value.

Use for: LocalDateTime — when you're storing what the user said, not when it happened globally.

timestamp with time zone

This is where the name lies. PostgreSQL does not store a timezone. It converts the input to UTC and stores UTC internally. On retrieval, it converts back to the session's timezone.

SET timezone = 'Europe/Vienna';
INSERT INTO test (tstz) VALUES ('2026-06-05 10:00:00');

SET timezone = 'Europe/London';
SELECT tstz FROM test;
-- Result: 2026-06-05 09:00:00+01

Same row, different display — because PostgreSQL stored UTC internally and converted on output.

Use for: Instant — when you're storing a global moment.

What PostgreSQL Actually Stores

Let's be precise:

| Type | What's Stored | What Happens on Insert | What Happens on Select |
|---|---|---|---|
| timestamp | Raw datetime | Nothing | Nothing |
| timestamptz | Instant (normalized to UTC) | Converts input to UTC | Converts UTC to session timezone |

The critical insight: timestamptz stores UTC, not a timezone. The "with time zone" means "timezone-aware" — it participates in timezone conversions. It doesn't mean "includes a timezone."

The Session Timezone Trap

With timestamptz, PostgreSQL uses your session's timezone for conversions:

SET timezone = 'UTC';
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00');
-- Stored as: 2026-06-05 10:00:00 UTC

SET timezone = 'Europe/Vienna';
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00');
-- Stored as: 2026-06-05 08:00:00 UTC (Vienna is UTC+2 in summer)

Same literal, different stored value — because PostgreSQL assumed the input was in the session timezone.

Best practice: Always use explicit UTC or offsets when inserting into timestamptz:

INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00+00');  -- Explicit UTC
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00Z');    -- Also UTC
INSERT INTO events (instant_utc) VALUES ('2026-06-05 12:00:00+02');  -- Explicit offset

Or set your application's session timezone to UTC and keep it there.

The Recommended Schema

For human-scheduled events (meetings, deadlines, appointments), use the pattern from earlier articles:

CREATE TABLE appointments (
    id              uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    title           text NOT NULL,

    -- Source of truth: what the user chose
    local_start     timestamp NOT NULL,
    time_zone_id    text NOT NULL,

    -- Derived: for global queries and sorting
    instant_utc     timestamptz NOT NULL
);

Why three columns?

  1. local_start — The user said "10:00". That's the intent.
  2. time_zone_id — The user meant "Vienna". That's the context.
  3. instant_utc — For queries like "what's happening now?" and for sorting.

If timezone rules change, you recalculate instant_utc from local_start + time_zone_id.

Choosing the Right Type for Each Concept

| Concept | PostgreSQL Type | Example |
|---|---|---|
| Instant / UTC moment | timestamptz | Log timestamp, created_at |
| Local datetime (user intent) | timestamp | Meeting time, deadline |
| Date only | date | Birthday, holiday |
| Time only | time | Opening hours |
| IANA timezone ID | text | 'Europe/Vienna' |

Querying Patterns

Pattern A: "What's on my calendar on June 5th in Vienna?"

Query by local date + timezone:

SELECT *
FROM appointments
WHERE time_zone_id = 'Europe/Vienna'
  AND local_start >= '2026-06-05'
  AND local_start <  '2026-06-06';

This finds all appointments that display as June 5th in Vienna, regardless of the global instant.

Pattern B: "What's happening globally in the next hour?"

Query by instant:

SELECT *
FROM appointments
WHERE instant_utc >= NOW()
  AND instant_utc <  NOW() + INTERVAL '1 hour';

This finds all appointments happening in the next hour, regardless of their local calendars.

Pattern C: "What's happening at 10:00 in any timezone?"

Query by local time (rare but sometimes needed):

SELECT *
FROM appointments
WHERE local_start::time = '10:00:00';

Indexing Strategies

For instant queries (most common)

CREATE INDEX idx_appointments_instant ON appointments (instant_utc);

This covers "what's happening now" and range queries across timezones.

For local calendar queries

CREATE INDEX idx_appointments_local ON appointments (time_zone_id, local_start);

This covers "what's on the calendar for this timezone" queries. The timezone comes first because you'll almost always filter by it.

For both

If you query both ways heavily, create both indexes. The storage cost is usually worth it.

Common Mistakes

Mistake 1: Using timestamptz for user intent

-- DON'T: Storing a meeting time as timestamptz
INSERT INTO meetings (starts_at) VALUES ('2026-06-05 10:00:00');

You've lost the "10:00" intent. If timezone rules change, you can't recover it.

Mistake 2: Using timestamp for log timestamps

-- DON'T: Storing a log timestamp without timezone
INSERT INTO logs (occurred_at) VALUES ('2026-06-05 10:00:00');

Is that UTC? Server time? You don't know. Use timestamptz and be explicit.

Mistake 3: Trusting session timezone

-- DON'T: Assuming session timezone is what you expect
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00');

What timezone was the session in? Be explicit: '2026-06-05 10:00:00+00'.

Mistake 4: Storing timezone names in timestamptz

-- DON'T: Thinking this stores "Vienna"
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00 Europe/Vienna');

PostgreSQL converts to UTC immediately. The string 'Europe/Vienna' is gone. If you need the timezone, store it separately.

AT TIME ZONE: The Conversion Operator

PostgreSQL's AT TIME ZONE converts between timestamps and timezones:

-- timestamptz → timestamp in a specific zone
SELECT instant_utc AT TIME ZONE 'Europe/Vienna' AS local_vienna
FROM appointments;

-- timestamp → timestamptz (interpreting as a specific zone)
SELECT local_start AT TIME ZONE time_zone_id AS instant
FROM appointments;

This is useful for display and for reconstructing the instant from stored local + timezone.

Gotcha: The behavior differs based on the input type:

  • timestamptz AT TIME ZONE 'X' → returns timestamp (strips timezone, shows in X)
  • timestamp AT TIME ZONE 'X' → returns timestamptz (interprets as X, converts to UTC)

Handling Timezone Rule Changes

When IANA updates timezone rules:

  1. Past events: Nothing to do. PostgreSQL's timestamptz already stores UTC. Historical conversions use historical rules (if your system's tzdata is updated).

  2. Future events: Recalculate instant_utc from local_start + time_zone_id.

-- Recalculate instant_utc for future Vienna appointments
UPDATE appointments
SET instant_utc = local_start AT TIME ZONE time_zone_id
WHERE time_zone_id = 'Europe/Vienna'
  AND instant_utc > NOW();

This is why storing local_start + time_zone_id matters — you have everything needed to recalculate.

Working with EF Core and Npgsql

If you're using .NET with Npgsql, the mapping is straightforward:

// In your DbContext
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseNpgsql(connectionString, o => o.UseNodaTime());
}

With UseNodaTime():

  • Instant → timestamptz
  • LocalDateTime → timestamp
  • LocalDate → date
  • LocalTime → time

The types align naturally with our model.

Full Example: Creating and Querying Appointments

-- gen_random_uuid() requires this extension
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Create table
CREATE TABLE appointments (
    id              uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    title           text NOT NULL,
    local_start     timestamp NOT NULL,
    time_zone_id    text NOT NULL,
    instant_utc     timestamptz NOT NULL
);

-- Create indexes
CREATE INDEX idx_appointments_instant ON appointments (instant_utc);
CREATE INDEX idx_appointments_local ON appointments (time_zone_id, local_start);

-- Insert an appointment (10:00 Vienna = 08:00 UTC in summer)
INSERT INTO appointments (title, local_start, time_zone_id, instant_utc)
VALUES (
    'Team Standup',
    '2026-06-05 10:00:00',
    'Europe/Vienna',
    '2026-06-05 10:00:00' AT TIME ZONE 'Europe/Vienna'
);

-- Query: What's on the Vienna calendar for June 5th?
SELECT title, local_start
FROM appointments
WHERE time_zone_id = 'Europe/Vienna'
  AND local_start >= '2026-06-05'
  AND local_start <  '2026-06-06';

-- Query: What's happening globally in the next 2 hours?
SELECT title, instant_utc, time_zone_id
FROM appointments
WHERE instant_utc >= NOW()
  AND instant_utc <  NOW() + INTERVAL '2 hours';

-- Display in viewer's timezone
SELECT 
    title,
    instant_utc AT TIME ZONE 'Europe/London' AS starts_at_london
FROM appointments;

A Note on ORMs, Query Builders, and Event Stores

The PostgreSQL model described here — storing local_start, time_zone_id, and a derived instant_utc — is independent of how you access the database.

  • EF Core / Npgsql: Works well with explicit mappings (see Article 6 for full NodaTime integration).
  • Dapper: Maps naturally to simple columns; you compute instants in application code.
  • Marten / Event Sourcing: Events typically store an Instant (occurred_at) plus domain-specific local values when needed.
  • Raw SQL: The same rules apply — PostgreSQL doesn't care how the data got there.

The key idea is not the ORM — it's the data model.

If you store human intent (local + timezone) separately from physical moments (instant), the approach works across tools, frameworks, and architectural styles.

Key Takeaways

  • timestamp stores raw datetime — use for LocalDateTime (user intent)
  • timestamptz converts to/from UTC — use for Instant (global moments)
  • timestamptz does not store a timezone — it stores UTC and converts on read
  • For human-scheduled events: store local_start + time_zone_id + instant_utc
  • Be explicit with timezones on insert — don't trust session settings
  • Index instant_utc for global queries, index (time_zone_id, local_start) for calendar queries
  • When rules change: recalculate instant_utc from the stored local + timezone

Next up: Frontend – Temporal, APIs, and DateTimePickers That Don't Lie — bringing it all together in the browser.

.NET in Practice – Modeling Time with NodaTime

2026-01-23 06:05:54

Part 6 of 8 in the series Time in Software, Done Right

You've made it through the conceptual articles. You understand the difference between instants and local times, between global and local events, between storing intent and storing math.

Now let's make it real in .NET.

The BCL has improved significantly — DateOnly and TimeOnly (since .NET 6) are solid types for dates and times. But for timezone-aware scheduling — meetings, deadlines, appointments that need to survive DST changes — you'll want NodaTime. It gives you the types the BCL is still missing.

This article shows you how to use NodaTime to model time correctly, store it properly, and avoid the traps we've discussed throughout this series.

Why DateTime Falls Short (and What the BCL Fixed)

DateTime in .NET is a single type that tries to represent multiple concepts:

var a = DateTime.Now;           // Local time on this machine
var b = DateTime.UtcNow;        // UTC instant
var c = new DateTime(2026, 6, 5, 10, 0, 0);  // Is this local? UTC? Unspecified?

The problem is the Kind property:

  • DateTimeKind.Local — local to this machine (not a specific timezone)
  • DateTimeKind.Utc — a UTC instant
  • DateTimeKind.Unspecified — could be anything

When you create new DateTime(2026, 6, 5, 10, 0, 0), the Kind is Unspecified. Is that 10:00 in Vienna? 10:00 in London? 10:00 UTC? The type doesn't know, and neither does your code.

The BCL Got Better: DateOnly and TimeOnly

.NET 6 added two types that address part of this problem:

DateOnly birthday = new DateOnly(1990, 3, 15);    // Just a date, no time confusion
TimeOnly openingTime = new TimeOnly(9, 0);         // Just a time, no date confusion

These are great! If you just need a date or just a time, use them. They're in the BCL, well-supported by EF Core, and do exactly what they say.

What the BCL Still Doesn't Have

But for the full picture — especially timezone-aware scheduling — the BCL still falls short:

  • No Instant type (you use DateTime with Kind.Utc or DateTimeOffset)
  • No LocalDateTime with proper semantics (you use DateTime with Kind.Unspecified)
  • No ZonedDateTime that combines local time with a timezone
  • No first-class IANA timezone support (TimeZoneInfo uses Windows zones by default)

DateTimeOffset is better than DateTime — it includes an offset — but as we discussed in Article 4, an offset is a snapshot, not a meaning. +02:00 could be Vienna in summer, Berlin in summer, Cairo, or Johannesburg. You can't tell.

For simple cases: DateOnly, TimeOnly, DateTime, and DateTimeOffset are fine.

For timezone-aware scheduling: NodaTime gives you the right types for the right concepts.

The NodaTime Types You Need

Here's how NodaTime maps to the concepts we've covered (and their BCL equivalents where they exist):

| Concept | NodaTime Type | BCL Equivalent | Example |
|---|---|---|---|
| Physical moment | Instant | DateTime (UTC) / DateTimeOffset | Log timestamp, token expiry |
| Calendar date | LocalDate | DateOnly ✓ | Birthday, holiday |
| Wall clock time | LocalTime | TimeOnly ✓ | "Opens at 09:00" |
| Date + time (no zone) | LocalDateTime | DateTime (Unspecified) | User's chosen meeting time |
| IANA timezone | DateTimeZone | TimeZoneInfo (partial) | Europe/Vienna |
| Full context | ZonedDateTime | ❌ None | Meeting at 10:00 Vienna |
| Snapshot with offset | OffsetDateTime | DateTimeOffset | What the clock showed at a moment |

The ✓ marks where the BCL type is a good choice. For DateOnly and TimeOnly, you can often skip NodaTime entirely.

The gap is ZonedDateTime — the combination of a local time and an IANA timezone that lets you handle DST correctly. That's where NodaTime shines.

Let's see each in action.

Instant: For Physical Moments

Use Instant when you're recording when something happened — independent of any human's calendar.

// Current moment
Instant now = SystemClock.Instance.GetCurrentInstant();

// From a Unix timestamp
Instant fromUnix = Instant.FromUnixTimeSeconds(1735689600);

// For logs, audits, event sourcing
public class AuditEntry
{
    public Instant OccurredAt { get; init; }
    public string Action { get; init; }
}

Instant is unambiguous. There's no timezone to confuse, no Kind property to check. It's just a point on the timeline.

LocalDate, LocalTime, LocalDateTime: For Human Concepts

These types represent calendar and clock values without a timezone attached.

// Just a date (NodaTime)
LocalDate birthday = new LocalDate(1990, 3, 15);

// Just a date (BCL - equally good!)
DateOnly birthdayBcl = new DateOnly(1990, 3, 15);

// Just a time (NodaTime)
LocalTime openingTime = new LocalTime(9, 0);

// Just a time (BCL - equally good!)
TimeOnly openingTimeBcl = new TimeOnly(9, 0);

// Date and time together (NodaTime)
LocalDateTime meetingTime = new LocalDateTime(2026, 6, 5, 10, 0);

For dates and times alone, use whichever you prefer — DateOnly/TimeOnly are in the BCL and work great with EF Core.

For date+time combinations that you'll later combine with a timezone, NodaTime's LocalDateTime is clearer because it's part of a coherent type system that includes ZonedDateTime.

A LocalDateTime of 2026-06-05T10:00 means "June 5th at 10:00" — but it doesn't yet specify where. That's intentional. You'll combine it with a timezone to get the full picture.

DateTimeZone: The Ruleset

A DateTimeZone represents an IANA timezone — not just an offset, but the complete ruleset including DST transitions and historical changes.

// Get a timezone by IANA ID
DateTimeZone vienna = DateTimeZoneProviders.Tzdb["Europe/Vienna"];
DateTimeZone london = DateTimeZoneProviders.Tzdb["Europe/London"];

// The provider gives you access to all IANA zones
IDateTimeZoneProvider tzdb = DateTimeZoneProviders.Tzdb;

DateTimeZoneProviders.Tzdb uses the IANA tz database, which is updated regularly with new rules. When you update NodaTime's tzdb data, your code automatically handles new DST rules.

ZonedDateTime: The Full Picture

ZonedDateTime combines a LocalDateTime with a DateTimeZone — giving you everything you need.

LocalDateTime local = new LocalDateTime(2026, 6, 5, 10, 0);
DateTimeZone zone = DateTimeZoneProviders.Tzdb["Europe/Vienna"];

// Combine them
ZonedDateTime zoned = local.InZoneLeniently(zone);

// Now you can get the instant
Instant instant = zoned.ToInstant();

// Or display in different zones
ZonedDateTime inLondon = instant.InZone(DateTimeZoneProviders.Tzdb["Europe/London"]);
ZonedDateTime inNewYork = instant.InZone(DateTimeZoneProviders.Tzdb["America/New_York"]);
Console.WriteLine(inNewYork.ToString("uuuu-MM-dd HH:mm x", CultureInfo.InvariantCulture));
// Output: 2026-06-05 04:00 -04
// (Requires: using System.Globalization;)

Why "Leniently"?

The InZoneLeniently method handles DST edge cases automatically:

  • If the local time falls in a gap (doesn't exist), it shifts forward
  • If the local time falls in an overlap (exists twice), it picks the earlier occurrence

For explicit control, NodaTime offers several options:

// Called on LocalDateTime
ZonedDateTime zoned = local.InZoneLeniently(zone);   // Auto-resolve gaps/overlaps
ZonedDateTime zoned = local.InZoneStrictly(zone);    // Throws if ambiguous

// Called on DateTimeZone (same behavior, different syntax)
ZonedDateTime zoned = zone.AtLeniently(local);
ZonedDateTime zoned = zone.AtStrictly(local);

// With custom resolver
ZonedDateTime zoned = local.InZone(zone, Resolvers.LenientResolver);

The Pattern: Store Intent, Derive Instant

Here's the core pattern from Article 4, implemented in NodaTime:

public class Appointment
{
    // Source of truth: what the user chose
    public LocalDateTime LocalStart { get; init; }
    public string TimeZoneId { get; init; }

    // Derived: for queries and sorting
    public Instant InstantUtc { get; private set; }

    public void RecalculateInstant()
    {
        var zone = DateTimeZoneProviders.Tzdb[TimeZoneId];
        var zoned = LocalStart.InZoneLeniently(zone);
        InstantUtc = zoned.ToInstant();
    }
}

When timezone rules change, you call RecalculateInstant() on future appointments. Past appointments stay correct because IANA contains historical rules.

Real-World Examples

Example 1: Logging (Use Instant)

public class LogEntry
{
    public Instant Timestamp { get; init; }
    public string Level { get; init; }
    public string Message { get; init; }

    public static LogEntry Create(string level, string message)
    {
        return new LogEntry
        {
            Timestamp = SystemClock.Instance.GetCurrentInstant(),
            Level = level,
            Message = message
        };
    }
}

Example 2: Birthday (Use LocalDate)

public class Person
{
    public string Name { get; init; }
    public LocalDate DateOfBirth { get; init; }

    public int GetAge(LocalDate today)
    {
        return Period.Between(DateOfBirth, today, PeriodUnits.Years).Years;
    }
}

No timezone needed — birthdays are calendar concepts.

Example 3: Meeting (Use LocalDateTime + TimeZone)

public class Meeting
{
    public string Title { get; init; }
    public LocalDateTime LocalStart { get; init; }
    public string TimeZoneId { get; init; }
    public Instant InstantUtc { get; init; }

    public static Meeting Create(string title, LocalDateTime localStart, string timeZoneId)
    {
        var zone = DateTimeZoneProviders.Tzdb[timeZoneId];
        var instant = localStart.InZoneLeniently(zone).ToInstant();

        return new Meeting
        {
            Title = title,
            LocalStart = localStart,
            TimeZoneId = timeZoneId,
            InstantUtc = instant
        };
    }

    // Display in any timezone
    public string GetDisplayTime(DateTimeZone viewerZone)
    {
        var inViewerZone = InstantUtc.InZone(viewerZone);
        // Note: uuuu is NodaTime's recommended year specifier (absolute year)
        return inViewerZone.ToString("uuuu-MM-dd HH:mm", CultureInfo.InvariantCulture);
    }
}

Example 4: Deadline (Use LocalDateTime + TimeZone)

public class Deadline
{
    public LocalDateTime LocalDeadline { get; init; }
    public string TimeZoneId { get; init; }
    public Instant InstantUtc { get; init; }

    public bool IsPastDeadline(Instant now)
    {
        return now > InstantUtc;
    }

    public Duration TimeRemaining(Instant now)
    {
        return InstantUtc - now;
    }
}

EF Core Integration

NodaTime doesn't map to SQL types out of the box, but there are excellent packages for this.

For PostgreSQL: Npgsql.EntityFrameworkCore.PostgreSQL.NodaTime

// In your DbContext configuration
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseNpgsql(connectionString, o => o.UseNodaTime());
}

This maps:

  • Instant → timestamp with time zone
  • LocalDateTime → timestamp without time zone
  • LocalDate → date
  • LocalTime → time

What about ZonedDateTime? There's no single PostgreSQL type for it — that's the whole point of our pattern. You decompose it into separate columns:

  • LocalDateTime → timestamp without time zone
  • TimeZoneId → text
  • Optionally: Instant → timestamp with time zone (for queries)

Here's how to extract the parts from a ZonedDateTime:

ZonedDateTime zoned = local.InZoneLeniently(zone);

// Decompose for storage
LocalDateTime localPart = zoned.LocalDateTime;
string timeZoneId = zoned.Zone.Id;           // e.g. "Europe/Vienna"
Instant instantPart = zoned.ToInstant();

For SQL Server: Consider Value Converters

public class AppointmentConfiguration : IEntityTypeConfiguration<Appointment>
{
    public void Configure(EntityTypeBuilder<Appointment> builder)
    {
        builder.Property(a => a.LocalStart)
            .HasConversion(
                v => v.ToDateTimeUnspecified(),
                v => LocalDateTime.FromDateTime(v));

        builder.Property(a => a.InstantUtc)
            .HasConversion(
                v => v.ToDateTimeUtc(),
                v => Instant.FromDateTimeUtc(v));

        builder.Property(a => a.TimeZoneId)
            .HasMaxLength(64);
    }
}

Sample Entity

public class Appointment
{
    public Guid Id { get; init; }
    public string Title { get; init; }

    // Stored as timestamp without time zone
    public LocalDateTime LocalStart { get; init; }

    // Stored as text/varchar
    public string TimeZoneId { get; init; }

    // Stored as timestamp with time zone (for queries)
    public Instant InstantUtc { get; init; }
}

Handling DST Transitions

When creating appointments that might fall in DST gaps or overlaps, be explicit:

public class AppointmentService
{
    public ZonedDateTime ResolveLocalTime(LocalDateTime local, string timeZoneId)
    {
        var zone = DateTimeZoneProviders.Tzdb[timeZoneId];
        var mapping = zone.MapLocal(local);

        return mapping.Count switch
        {
            0 => zone.AtLeniently(local),        // Gap: shift forward to valid time
            1 => mapping.Single(),                // Normal: exactly one mapping
            2 => mapping.First(),                 // Overlap: pick earlier occurrence
            _ => throw new InvalidOperationException()
        };
    }
}

For more control (e.g., asking the user to choose during overlaps):

public ZonedDateTime ResolveWithUserChoice(
    LocalDateTime local, 
    string timeZoneId,
    Func<ZonedDateTime, ZonedDateTime, ZonedDateTime> overlapResolver)
{
    var zone = DateTimeZoneProviders.Tzdb[timeZoneId];
    var mapping = zone.MapLocal(local);

    return mapping.Count switch
    {
        0 => zone.AtLeniently(local),
        1 => mapping.Single(),
        2 => overlapResolver(mapping.First(), mapping.Last()),
        _ => throw new InvalidOperationException()
    };
}

Converting from DateTime

If you have existing code using DateTime, here's how to convert:

// DateTime (UTC) to Instant
DateTime dtUtc = DateTime.UtcNow;
Instant instant = Instant.FromDateTimeUtc(dtUtc);

// DateTime (unspecified) to LocalDateTime
DateTime dt = new DateTime(2026, 6, 5, 10, 0, 0);
LocalDateTime local = LocalDateTime.FromDateTime(dt);

// Instant to DateTime (UTC)
DateTime backToUtc = instant.ToDateTimeUtc();

// LocalDateTime to DateTime (unspecified)
DateTime backToUnspecified = local.ToDateTimeUnspecified();

Testing Time-Dependent Code

Code that calls SystemClock.Instance.GetCurrentInstant() directly is hard to test. You can't control "now".

NodaTime solves this with IClock:

// Production: inject the real clock
public class AppointmentService(IClock clock)
{
    public bool IsUpcoming(Appointment appointment)
    {
        var now = clock.GetCurrentInstant();
        return appointment.InstantUtc > now;
    }
}

// In production
var service = new AppointmentService(SystemClock.Instance);

// In tests: use a fake clock
var fakeNow = Instant.FromUtc(2026, 6, 5, 8, 0);
var fakeClock = new FakeClock(fakeNow);  // FakeClock comes from the NodaTime.Testing package
var service = new AppointmentService(fakeClock);

// Now you can test time-dependent logic deterministically

Rule: Never call SystemClock.Instance directly in business logic. Inject IClock instead. Your tests will thank you.

Key Takeaways

  • Use NodaTime for anything beyond simple logging — it gives you the right types for the right concepts
  • Instant for physical moments (logs, events, tokens)
  • LocalDate for calendar dates (birthdays, holidays)
  • LocalDateTime + DateTimeZone for human-scheduled times (meetings, deadlines)
  • Store intent: LocalDateTime + TimeZoneId as your source of truth
  • Derive instant: compute InstantUtc for queries and sorting
  • Handle DST explicitly: use InZoneLeniently or check MapLocal for edge cases
  • EF Core: use Npgsql.EntityFrameworkCore.PostgreSQL.NodaTime for PostgreSQL, or value converters for other databases

Next up: PostgreSQL – Storing Time Without Lying to Yourself — the database side of the equation.