2026-01-23 06:17:26
The previous posts in this series covered the interesting technical challenges — Tauri architecture, mock mode, registry surgery, Microsoft Store publishing, E2E testing. But there's a part of building WSL-UI that I haven't talked about: the sheer amount of time spent on polish.
As someone who spent years as a backend developer, this was an eye-opener.
In backend development, you deal with well-defined contracts. An API either returns the right data or it doesn't. A function either handles the edge case or throws an exception. Tests pass or fail. There's a certain... cleanliness to it.
Request comes in. Process it. Response goes out. Done.
UI development is different. The "correct" behavior is subjective. Does this button look clickable enough? Is this spacing consistent with that spacing? What happens when the window is resized to 800px wide? 600px? What about 4K displays?
I spent more time on polish than on actual features. Some examples:
"The cards don't quite line up." Okay, let me check the margins. And the padding. And the gap between items. And whether flex or grid makes more sense here. Oh, and the icon is 2 pixels off from the text baseline.
In backend code, if two values are functionally equivalent, it doesn't matter which you use. In UI, 15px vs 16px of margin is visible. Users notice, even if they can't articulate what's wrong.
A distribution can be:
Each state needs distinct visual treatment. Status badges, action buttons that enable/disable appropriately, progress indicators. And they all need to look like they belong to the same app.
I ended up creating a state machine diagram just to track which UI elements should be visible in which states. Something I'd never needed for a REST API.
Backend edge cases are usually about data: null values, empty arrays, strings that are too long. UI edge cases are about everything:
Each edge case needs thought. Empty states need messaging. Long names need truncation. Slow operations need spinners. Failed operations need error messages that are actually helpful.
Once I published to the Microsoft Store, real users found things I'd never considered:
If I had to estimate, I'd say:
That ratio surprised me. Backend development isn't like this. You build the feature, write tests, and move on. UI development has this long tail of refinement that never really ends.
Once WSL-UI was on the Microsoft Store, I wanted some visibility into how people were actually using it. But I had constraints:
I found Aptabase, which is designed exactly for this use case — privacy-first analytics for desktop and mobile apps.
Aptabase collects minimal data:
No user IDs, no tracking cookies, no personal information. The data is anonymised and aggregated.
Adding Aptabase to a Tauri app was straightforward:
// In your Tauri setup
use aptabase_rs::Aptabase;

fn main() {
    let aptabase = Aptabase::new("YOUR_APP_KEY");
    tauri::Builder::default()
        .manage(aptabase)
        .run(tauri::generate_context!())
        .expect("error while running application");
}
Then tracking events:
#[tauri::command]
pub async fn start_distribution(
    name: String,
    aptabase: State<'_, Aptabase>
) -> Result<(), String> {
    // Do the actual work
    wsl_start(&name)?;

    // Track the event (if analytics enabled)
    if analytics_enabled() {
        aptabase.track_event("distribution_started", None);
    }

    Ok(())
}
This is the important part. Analytics is opt-in. When you first install WSL-UI, no data is sent anywhere.
In the Settings page, there's a toggle:
When a user enables analytics, they're explicitly choosing to share usage data. The setting is stored locally and checked before any tracking calls.
The Aptabase dashboard shows:
Key insights I've gained:
What I don't see:
As a solo developer, I have limited time. Knowing that 80% of users use feature X and 5% use feature Y helps prioritise. If I'm going to spend time polishing something, it should be the things people actually use.
Without analytics, you're guessing. With privacy-invasive analytics, you're betraying user trust. Aptabase hits a sweet spot — enough signal to make informed decisions, without crossing ethical lines.
Building WSL-UI taught me things I wouldn't have learned from backend work:
UI polish is real work — It's not "just making things pretty." It's dealing with an exponentially larger state space than backend code.
Edge cases multiply — Every UI state, combined with every data state, combined with every screen size, creates an explosion of cases to handle.
Users are unpredictable — They'll find workflows you never imagined. Build with flexibility.
Analytics can be ethical — You don't need to choose between "no data" and "track everything." Privacy-first tools exist.
The last 10% takes 50% of the time — Getting from "it works" to "it feels polished" is a longer journey than building the initial features.
That's the complete WSL-UI journey. From a Christmas project to learn Tauri, through mock modes and registry surgery, to the Microsoft Store and beyond.
If you're a backend developer considering a desktop app project, here's my honest take: it's more work than you expect, but also more rewarding. There's something satisfying about building something you can click on, something that sits in your taskbar and solves a problem you have every day.
WSL-UI is open source and available on:
Thanks for reading the series!
Originally published at https://wsl-ui.octasoft.co.uk/blog/building-wsl-ui-polish-and-analytics
2026-01-23 06:10:57
When building Terraform providers that handle sensitive data like passwords, API tokens, or secret keys, you'll eventually encounter the need for write-only parameters. These are values that should be sent to an API but never stored in Terraform state files.
In this post, I'll walk through implementing write-only parameters in a Terraform provider, the challenges involved, and how to balance automation with user control.
Write-only parameters (introduced in Terraform 1.11) are resource arguments that:
This is crucial for security. Without write-only parameters, sensitive values like passwords would be stored in plaintext in your state files.
Here's an example from my fork of the Event-Driven Ansible (EDA) provider:
resource "aap_eda_credential" "example" {
  name               = "my-api-credential"
  credential_type_id = 1

  # Write-only: sent to API but NEVER stored in state
  inputs_wo = jsonencode({
    username  = "service-account"
    api_token = var.api_token
  })
}
The _version Argument

Since Terraform's state file does not manage the value of the secret, Terraform needs a way to know when the secret must be updated. The standard solution for detecting changes in write-only parameters is to add a companion _version field that users must manually increment:
resource "aap_eda_credential" "manual" {
  name = "my-credential"

  inputs_wo = jsonencode({
    password = var.new_password
  })

  # User must increment this to trigger an update
  inputs_wo_version = 2 # Changed from 1
}
How it works:
- `inputs_wo` is sent to the API but never written to state.
- `inputs_wo_version` is stored in state, so Terraform can diff it; when the user increments it, the provider re-sends `inputs_wo`.
This works, but it's not user-friendly. Users have to remember to:
- change the secret value, and
- increment `inputs_wo_version` every time — if they forget, the new secret is never sent.
Ideally, users shouldn't need to manage version numbers. They should just update the secret value and Terraform should detect the change automatically. But here's the problem:
If inputs_wo is write-only and not in state, how do we detect when it changes?
The answer: store a deterministic hash in private state.
Terraform's plugin framework provides private state - a place to store provider-managed data that's:
- persisted alongside the resource, but
- never shown to users (it doesn't appear in plan output or `terraform show`)

Here's the implementation approach:
// Create operation
func (r *EDACredentialResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
    var data EDACredentialResourceModel
    req.Plan.Get(ctx, &data)

    // Calculate SHA-256 hash of the inputs
    inputsHash := calculateInputsHash(data.InputsWO.ValueString())

    // Store hash in private state (JSON-wrapped for validity)
    hashJSON := fmt.Sprintf(`{"hash":"%s"}`, inputsHash)
    resp.Private.SetKey(ctx, "inputs_wo_hash", []byte(hashJSON))

    // Set auto-managed version (types = terraform-plugin-framework/types)
    data.InputsWOVersion = types.Int64Value(1)

    // ... send to API and save state ...
}

func calculateInputsHash(inputs string) string {
    h := sha256.New()
    h.Write([]byte(inputs))
    return hex.EncodeToString(h.Sum(nil))
}
In the Update operation, we compare hashes:
func (r *EDACredentialResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
    var data, state EDACredentialResourceModel
    req.Plan.Get(ctx, &data)
    req.State.Get(ctx, &state)

    // Get stored hash from private state
    oldHashBytes, _ := req.Private.GetKey(ctx, "inputs_wo_hash")
    var hashWrapper struct {
        Hash string `json:"hash"`
    }
    json.Unmarshal(oldHashBytes, &hashWrapper)
    oldHash := hashWrapper.Hash

    // Calculate new hash
    newHash := calculateInputsHash(data.InputsWO.ValueString())

    if newHash != oldHash {
        // Inputs changed! Auto-increment version
        data.InputsWOVersion = types.Int64Value(state.InputsWOVersion.ValueInt64() + 1)

        // Update stored hash
        hashJSON := fmt.Sprintf(`{"hash":"%s"}`, newHash)
        resp.Private.SetKey(ctx, "inputs_wo_hash", []byte(hashJSON))
    } else {
        // No change, keep version as-is
        data.InputsWOVersion = state.InputsWOVersion
    }

    // ... send to API and save state ...
}
Now users can just update the secret and Terraform automatically detects it:
resource "aap_eda_credential" "auto" {
  name = "my-credential"

  inputs_wo = jsonencode({
    password = var.new_password # Just change this!
  })

  # inputs_wo_version auto-increments (computed field)
}
While automatic detection is convenient, some users may not want a hash of their secrets stored in state - even if it's SHA-256 and in private state. The hash is still derived from the secret value.
Solution: Support both modes
func (r *EDACredentialResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
    var data EDACredentialResourceModel
    req.Plan.Get(ctx, &data)

    var versionToSet types.Int64
    if data.InputsWOVersion.IsNull() || data.InputsWOVersion.IsUnknown() {
        // Auto-managed mode: store hash
        inputsHash := calculateInputsHash(data.InputsWO.ValueString())
        hashJSON := fmt.Sprintf(`{"hash":"%s"}`, inputsHash)
        resp.Private.SetKey(ctx, "inputs_wo_hash", []byte(hashJSON))
        versionToSet = types.Int64Value(1)
    } else {
        // Manual mode: user provided version, don't store hash
        versionToSet = data.InputsWOVersion
    }

    // ... rest of create logic ...
    data.InputsWOVersion = versionToSet
}
Schema definition:
"inputs_wo_version": schema.Int64Attribute{
    Optional: true,
    Computed: true,
    PlanModifiers: []planmodifier.Int64{
        int64planmodifier.UseStateForUnknown(),
    },
    Description: "Version number for managing credential updates. " +
        "If not set, the provider will automatically detect changes to inputs_wo " +
        "using a SHA-256 hash stored in private state. If set manually, you " +
        "control when the credential is updated by incrementing this value yourself.",
},
Users can choose their preferred mode:
# Auto-managed (default)
resource "aap_eda_credential" "auto" {
  name      = "auto-credential"
  inputs_wo = jsonencode({ password = var.pwd })
  # version auto-increments
}

# Manual control
resource "aap_eda_credential" "manual" {
  name              = "manual-credential"
  inputs_wo         = jsonencode({ password = var.pwd })
  inputs_wo_version = 1 # I control this
}
You might think: "What if a user starts with auto-mode and then wants to switch to manual mode?"
Attempting to handle this creates a mess:
- If a user suddenly sets `inputs_wo_version`, do we remove the hash? What if they're just trying to force an update?

Better approach: Prevent mode switching
func (r *EDACredentialResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
    // Determine current mode from private state
    oldHashBytes, _ := req.Private.GetKey(ctx, "inputs_wo_hash")
    wasAutoManaged := oldHashBytes != nil

    // Determine desired mode from config
    var configModel EDACredentialResourceModel
    req.Config.Get(ctx, &configModel)
    isNowManual := !configModel.InputsWOVersion.IsNull() && !configModel.InputsWOVersion.IsUnknown()

    var state EDACredentialResourceModel
    req.State.Get(ctx, &state)
    wasManual := !wasAutoManaged && !state.InputsWOVersion.IsNull()

    // Prevent mode switching
    if wasAutoManaged && isNowManual {
        resp.Diagnostics.AddError(
            "Cannot switch from auto-managed to manual version management",
            "The inputs_wo_version field was previously auto-managed. Once auto-managed, "+
                "it cannot be switched to manual mode. If you need to manually control the version, "+
                "you must recreate the resource with inputs_wo_version set from the start.",
        )
        return
    }

    if wasManual && !isNowManual {
        resp.Diagnostics.AddError(
            "Cannot switch from manual to auto-managed version management",
            "The inputs_wo_version field was previously manually managed. Once manually managed, "+
                "it cannot be switched to auto mode. If you need auto-managed version control, "+
                "you must recreate the resource without setting inputs_wo_version.",
        )
        return
    }

    // ... rest of update logic for the chosen mode ...
}
This gives users a clear error message:
Error: Cannot switch from auto-managed to manual version management
The inputs_wo_version field was previously auto-managed. Once auto-managed,
it cannot be switched to manual mode. If you need to manually control the
version, you must recreate the resource with inputs_wo_version set from the start.
When implementing write-only parameters in Terraform providers:
The complete implementation provides a great user experience while respecting privacy concerns:
# Most users: auto-managed, no version to track
resource "aap_eda_credential" "api" {
  name      = "api-credential"
  inputs_wo = jsonencode({ token = var.token })
}

# Privacy-conscious users: manual control, no hash stored
resource "aap_eda_credential" "secure" {
  name              = "secure-credential"
  inputs_wo         = jsonencode({ token = var.token })
  inputs_wo_version = 1
}
Both approaches are valid. Both are clear. And switching between them requires a resource recreation, which is easy to communicate and reason about.
Have you implemented write-only parameters in your Terraform provider? What challenges did you face? Let me know in the comments!
2026-01-23 06:10:07
Part 8 of 8 in the series Time in Software, Done Right
You've modeled time correctly on the backend. You've stored it properly in the database. Now you need to handle it in the browser — where users pick dates and times, and where your API sends data back and forth.
JavaScript's Date object has been the source of countless bugs. The Temporal API finally gives us proper types. But even with good types, you still need to think about what your DateTimePicker is actually asking users to select, and how to send that data across the wire.
This article covers the Temporal API, API contract design, and the principles behind DateTimePickers that don't mislead users.
The Date object has been with us since 1995. It has... issues:
// Months are 0-indexed (January = 0)
new Date(2026, 5, 5) // June 5th, not May 5th
// Parsing is inconsistent across browsers
new Date("2026-06-05") // Midnight UTC? Midnight local? Depends on browser.
// No timezone support beyond local and UTC
const d = new Date();
d.getTimezoneOffset(); // Minutes offset, but which zone? You don't know.
// Mutable (a constant footgun)
const d = new Date();
d.setMonth(11); // Mutates in place
There's no LocalDate, no LocalDateTime, no ZonedDateTime. Just one type that tries to do everything and does none of it well.
The Temporal API is the modern replacement for Date. It's currently at Stage 3 — the final candidate stage before standardization — and requires a polyfill in most browsers (e.g., @js-temporal/polyfill). Browser support is coming, but for now, plan on using the polyfill.
| Concept | Temporal Type | NodaTime Equivalent |
|---|---|---|
| Physical moment | `Temporal.Instant` | `Instant` |
| Calendar date | `Temporal.PlainDate` | `LocalDate` |
| Wall clock time | `Temporal.PlainTime` | `LocalTime` |
| Date + time (no zone) | `Temporal.PlainDateTime` | `LocalDateTime` |
| IANA timezone | `Temporal.TimeZone` | `DateTimeZone` |
| Full context | `Temporal.ZonedDateTime` | `ZonedDateTime` |
The names differ (Plain vs Local), but the concepts are identical. If you've understood NodaTime, you already understand Temporal.
// Just a date
const date = Temporal.PlainDate.from("2026-06-05");
const date2 = new Temporal.PlainDate(2026, 6, 5); // Months are 1-indexed!
// Just a time
const time = Temporal.PlainTime.from("10:00");
// Date and time (no zone)
const local = Temporal.PlainDateTime.from("2026-06-05T10:00");
// With timezone
const zone = Temporal.TimeZone.from("Europe/Vienna");
const zoned = local.toZonedDateTime(zone);
// Get the instant
const instant = zoned.toInstant();
// ZonedDateTime → components
const zoned = Temporal.ZonedDateTime.from("2026-06-05T10:00[Europe/Vienna]");
const local = zoned.toPlainDateTime(); // PlainDateTime
const instant = zoned.toInstant(); // Instant
const tzId = zoned.timeZoneId; // "Europe/Vienna"
// Instant → ZonedDateTime (for display)
const instant = Temporal.Instant.from("2026-06-05T08:00:00Z");
const inVienna = instant.toZonedDateTimeISO("Europe/Vienna");
const inLondon = instant.toZonedDateTimeISO("Europe/London");
Temporal handles DST ambiguities explicitly:
// Time that doesn't exist (spring forward gap)
const gapLocal = Temporal.PlainDateTime.from("2026-03-29T02:30");
const zone = Temporal.TimeZone.from("Europe/Vienna");

// Default ("compatible"): shifts forward for gaps, picks earlier occurrence for overlaps
const resolved = gapLocal.toZonedDateTime(zone, { disambiguation: "compatible" });
// gapLocal.toZonedDateTime(zone, { disambiguation: "reject" }); // Throws

// Time that exists twice (fall back overlap)
const overlapLocal = Temporal.PlainDateTime.from("2026-10-25T02:30");
const first = overlapLocal.toZonedDateTime(zone, { disambiguation: "earlier" }); // First occurrence
const second = overlapLocal.toZonedDateTime(zone, { disambiguation: "later" }); // Second occurrence
When your frontend talks to your backend, you need a clear contract for time values. There are several approaches.
For instants, use ISO 8601 with Z suffix:
{
"createdAt": "2026-06-05T08:00:00Z"
}
Unambiguous. Both sides parse it the same way.
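Even the legacy Date object handles this case deterministically — a Z-suffixed ISO 8601 string is pinned to UTC, so it round-trips identically in every runtime. A small sketch:

```typescript
// An ISO 8601 string with a 'Z' suffix is unambiguous: every runtime
// parses it to the same instant, regardless of the machine's timezone.
const iso = "2026-06-05T08:00:00Z";
const millis = Date.parse(iso);                   // milliseconds since the Unix epoch
const roundTripped = new Date(millis).toISOString();
// roundTripped is "2026-06-05T08:00:00.000Z" on any machine
```

This is exactly why the Z format is the safe default for instants in API payloads.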
For human-scheduled times, send the components:
{
  "appointment": {
    "localStart": "2026-06-05T10:00:00",
    "timeZoneId": "Europe/Vienna"
  }
}
The backend receives exactly what the user chose. It can:
Temporal and some APIs support bracketed timezone notation:
{
"startsAt": "2026-06-05T10:00:00[Europe/Vienna]"
}
This is compact and unambiguous, but not all JSON parsers handle it natively. You'll need custom parsing.
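If you can use Temporal, `Temporal.ZonedDateTime.from()` understands this bracketed format directly. For code paths that can't, a minimal hand-rolled splitter might look like this — `parseBracketed` is a hypothetical helper, not part of any library:

```typescript
// Split "2026-06-05T10:00:00[Europe/Vienna]" into its local part and zone ID.
function parseBracketed(value: string): { local: string; timeZoneId: string } {
  const match = /^([^[\]]+)\[([^[\]]+)\]$/.exec(value);
  if (!match) {
    throw new Error(`Not a bracketed timestamp: ${value}`);
  }
  return { local: match[1], timeZoneId: match[2] };
}

// parseBracketed("2026-06-05T10:00:00[Europe/Vienna]")
// → { local: "2026-06-05T10:00:00", timeZoneId: "Europe/Vienna" }
```

The output maps directly onto the `{ local, timeZoneId }` shape used elsewhere in this article.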
// DON'T: Ambiguous local time
{ "startsAt": "2026-06-05T10:00:00" } // What timezone?
// DON'T: Offset without timezone ID
{ "startsAt": "2026-06-05T10:00:00+02:00" } // Which +02:00 zone?
// DON'T: Unix timestamp for user-scheduled events
{ "startsAt": 1780758000 } // Lost the user's intent
Define clear types for your API:
// For instants (logs, events, timestamps)
interface AuditEvent {
  occurredAt: string; // ISO 8601 with Z: "2026-06-05T08:00:00Z"
}

// For user-scheduled times
interface ScheduledTime {
  local: string;      // ISO 8601 without offset: "2026-06-05T10:00:00"
  timeZoneId: string; // IANA zone: "Europe/Vienna"
}

interface Appointment {
  title: string;
  start: ScheduledTime;
  end: ScheduledTime;
}

// For date-only values
interface Person {
  name: string;
  dateOfBirth: string; // ISO 8601 date: "1990-03-15"
}
A DateTimePicker is a UI component that lets users select a date, time, or both. But "picking a time" isn't as simple as it sounds.
Before building (or choosing) a picker, decide:
| What does the user select? | What do you send to the backend? |
|---|---|
| Just a date | `PlainDate` → `"2026-06-05"` |
| Just a time | `PlainTime` → `"10:00"` |
| Date and time | `PlainDateTime` → `"2026-06-05T10:00"` |
| Date, time, and timezone | `ZonedDateTime` → `{ local, timeZoneId }` |
Most pickers handle the first three. The fourth requires explicit timezone UI.
Show the timezone when:
Hide the timezone when:
How to show it:
When the user picks a time that falls in a DST transition:
Gap (time doesn't exist):
Overlap (time exists twice):
The right choice depends on your domain. A medical appointment might need explicit handling; a casual reminder might not.
If your backend stores local + timeZoneId, your picker should collect exactly that. Don't:
The picker's display should match what gets stored.
When users edit an existing time:
The frontend picker should:
The backend should:
For a DateTimePicker that collects a scheduled time:
Input (initial value):
interface DateTimePickerValue {
  local: string;      // "2026-06-05T10:00"
  timeZoneId: string; // "Europe/Vienna"
}
Output (on change):
interface DateTimePickerValue {
  local: string;      // "2026-06-05T14:30"
  timeZoneId: string; // "Europe/Vienna"
}
The picker:
The parent component:
Here's the full flow:
1. User picks a time in a DateTimePicker
UI shows: June 5, 2026 at 10:00 AM (Vienna)
2. Frontend sends to API
POST /appointments
{
  "title": "Team Standup",
  "start": {
    "local": "2026-06-05T10:00:00",
    "timeZoneId": "Europe/Vienna"
  }
}
3. Backend (NodaTime) processes
var local = LocalDateTime.FromDateTime(DateTime.Parse(dto.Start.Local));
var zone = DateTimeZoneProviders.Tzdb[dto.Start.TimeZoneId];
var instant = local.InZoneLeniently(zone).ToInstant();

// Store all three
var appointment = new Appointment
{
    LocalStart = local,
    TimeZoneId = dto.Start.TimeZoneId,
    InstantUtc = instant
};
4. Database stores
INSERT INTO appointments (local_start, time_zone_id, instant_utc)
VALUES ('2026-06-05 10:00:00', 'Europe/Vienna', '2026-06-05 08:00:00+00');
5. Later: API returns to frontend
{
  "title": "Team Standup",
  "start": {
    "local": "2026-06-05T10:00:00",
    "timeZoneId": "Europe/Vienna"
  },
  "instantUtc": "2026-06-05T08:00:00Z"
}
6. Frontend displays
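A display step might convert the returned `instantUtc` into the viewer's timezone with Intl — a sketch, where the London viewer and the formatting options are assumptions:

```typescript
// Convert the API's instantUtc into the viewer's timezone for display.
const instantUtc = "2026-06-05T08:00:00Z";  // from the API response above
const viewerZone = "Europe/London";         // e.g. from Intl.DateTimeFormat().resolvedOptions().timeZone
const display = new Intl.DateTimeFormat("en-GB", {
  timeZone: viewerZone,
  dateStyle: "medium",
  timeStyle: "short",
}).format(new Date(instantUtc));
// 08:00 UTC in June is 09:00 British Summer Time
```

The same instant renders as 10:00 for a Vienna viewer and 09:00 for a London viewer, with no extra state on the frontend.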
We've covered storing and displaying times with timezones. But how do you know what timezone the user is in?
const userZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
// "Europe/Vienna", "Europe/London", etc.
This returns an IANA timezone ID — exactly what you need.
Use browser detection as a default, but let users confirm or change it:
Don't silently assume the detected timezone is correct. A user in Vienna might be scheduling a meeting for their London office.
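One way to wire that up is a tiny resolver that prefers an explicit user setting and only falls back to browser detection — `resolveTimeZone` is a hypothetical helper, not a standard API:

```typescript
// Prefer the timezone the user explicitly chose; otherwise fall back to
// what the browser reports. Never infer it from language or locale.
function resolveTimeZone(userSetting: string | null): string {
  return userSetting ?? Intl.DateTimeFormat().resolvedOptions().timeZone;
}

resolveTimeZone("Europe/Vienna"); // explicit choice wins
resolveTimeZone(null);            // falls back to the detected IANA zone
```

Persist the explicit choice once the user confirms it, and the detected value never silently overrides them again.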
Timezone, language, and locale are often treated as one setting — but they are three independent concerns.
Language (i18n) controls text:
Locale (l10n) controls formatting:
- 1,000.00 vs 1.000,00
- MM/DD/YYYY vs DD.MM.YYYY

Timezone controls when things happen:
- Europe/Vienna
- America/New_York
- Asia/Tokyo

They often change together — but they are not coupled.
A French-speaking user in New York expects French UI, French date formatting, and New York time. Inferring Europe/Paris from fr is wrong.
A DateTimePicker should not assume timezone based on language or locale.
Timezone must come from explicit user choice, browser detection, or application context.
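Intl makes the separation concrete: the locale argument and the `timeZone` option are independent inputs. A sketch of the French speaker in New York — the sample instant is arbitrary:

```typescript
// French formatting, New York clock: locale and timezone vary independently.
const instant = new Date(Date.UTC(2026, 5, 5, 8, 0)); // 08:00 UTC
const formatted = new Intl.DateTimeFormat("fr-FR", {
  timeZone: "America/New_York", // 08:00 UTC is 04:00 in New York (EDT)
  dateStyle: "long",
  timeStyle: "short",
}).format(instant);
// French month name, New York local time
```

Swapping the locale changes the words and number formats; swapping the timezone changes the clock. Neither implies the other.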
Use the right Temporal type for the job: PlainDate, PlainDateTime, ZonedDateTime, Instant.
This concludes the series "Time in Software, Done Right." You now have a complete mental model for handling time — from concepts to code, from backend to database to frontend.
The next time someone says "just store it as UTC," you'll know when that's right, and when it's a trap.
2026-01-23 06:07:41
Ever wonder about the guy behind that epic 90s ballad? Edwin McCain spills the beans in a candid chat, tracing his journey from rough-and-tumble bar performances to battling record labels just to get his voice heard.
Get the inside scoop on his lightning-fast rise to fame and the real struggles of life out on the road.
Watch on YouTube
2026-01-23 06:07:23
Part 7 of 8 in the series Time in Software, Done Right
You've modeled time correctly in your application. You have LocalDateTime, TimeZoneId, and Instant. Now you need to persist it.
PostgreSQL has excellent time support — but its type names are misleading, and the default behaviors can surprise you. This article explains what PostgreSQL actually does, which types to use, and how to avoid the common traps.
PostgreSQL has two timestamp types:
- `timestamp without time zone` (or just `timestamp`)
- `timestamp with time zone` (or `timestamptz`)

The names suggest one stores a timezone and one doesn't. That's not quite right.
This stores exactly what you give it — a date and time with no timezone context.
INSERT INTO test (ts) VALUES ('2026-06-05 10:00:00');
SELECT ts FROM test;
-- Result: 2026-06-05 10:00:00
No conversion happens. No timezone is stored. It's just a calendar value.
Use for: LocalDateTime — when you're storing what the user said, not when it happened globally.
This is where the name lies. PostgreSQL does not store a timezone. It converts the input to UTC and stores UTC internally. On retrieval, it converts back to the session's timezone.
SET timezone = 'Europe/Vienna';
INSERT INTO test (tstz) VALUES ('2026-06-05 10:00:00');
SET timezone = 'Europe/London';
SELECT tstz FROM test;
-- Result: 2026-06-05 09:00:00+01
Same row, different display — because PostgreSQL stored UTC internally and converted on output.
Use for: Instant — when you're storing a global moment.
Let's be precise:
| Type | What's Stored | What Happens on Insert | What Happens on Select |
|---|---|---|---|
| `timestamp` | Raw datetime | Nothing | Nothing |
| `timestamptz` | Instant (normalized to UTC) | Converts input to UTC | Converts UTC to session timezone |
The critical insight: timestamptz stores UTC, not a timezone. The "with time zone" means "timezone-aware" — it participates in timezone conversions. It doesn't mean "includes a timezone."
With timestamptz, PostgreSQL uses your session's timezone for conversions:
SET timezone = 'UTC';
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00');
-- Stored as: 2026-06-05 10:00:00 UTC
SET timezone = 'Europe/Vienna';
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00');
-- Stored as: 2026-06-05 08:00:00 UTC (Vienna is UTC+2 in summer)
Same literal, different stored value — because PostgreSQL assumed the input was in the session timezone.
Best practice: Always use explicit UTC or offsets when inserting into timestamptz:
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00+00'); -- Explicit UTC
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00Z'); -- Also UTC
INSERT INTO events (instant_utc) VALUES ('2026-06-05 12:00:00+02'); -- Explicit offset
Or set your application's session timezone to UTC and keep it there.
For human-scheduled events (meetings, deadlines, appointments), use the pattern from earlier articles:
CREATE TABLE appointments (
    id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    title text NOT NULL,

    -- Source of truth: what the user chose
    local_start timestamp NOT NULL,
    time_zone_id text NOT NULL,

    -- Derived: for global queries and sorting
    instant_utc timestamptz NOT NULL
);
Why three columns?
- `local_start` — The user said "10:00". That's the intent.
- `time_zone_id` — The user meant "Vienna". That's the context.
- `instant_utc` — For queries like "what's happening now?" and for sorting.

If timezone rules change, you recalculate `instant_utc` from `local_start` + `time_zone_id`.
| Concept | PostgreSQL Type | Example |
|---|---|---|
| Instant / UTC moment | `timestamptz` | Log timestamp, `created_at` |
| Local datetime (user intent) | `timestamp` | Meeting time, deadline |
| Date only | `date` | Birthday, holiday |
| Time only | `time` | Opening hours |
| IANA timezone ID | `text` | `'Europe/Vienna'` |
Query by local date + timezone:
SELECT *
FROM appointments
WHERE time_zone_id = 'Europe/Vienna'
AND local_start >= '2026-06-05'
AND local_start < '2026-06-06';
This finds all appointments that display as June 5th in Vienna, regardless of the global instant.
Query by instant:
SELECT *
FROM appointments
WHERE instant_utc >= NOW()
AND instant_utc < NOW() + INTERVAL '1 hour';
This finds all appointments happening in the next hour, regardless of their local calendars.
Query by local time (rare but sometimes needed):
SELECT *
FROM appointments
WHERE local_start::time = '10:00:00';
CREATE INDEX idx_appointments_instant ON appointments (instant_utc);
This covers "what's happening now" and range queries across timezones.
CREATE INDEX idx_appointments_local ON appointments (time_zone_id, local_start);
This covers "what's on the calendar for this timezone" queries. The timezone comes first because you'll almost always filter by it.
If you query both ways heavily, create both indexes. The storage cost is usually worth it.
-- DON'T: Storing a meeting time as timestamptz
INSERT INTO meetings (starts_at) VALUES ('2026-06-05 10:00:00');
You've lost the "10:00" intent. If timezone rules change, you can't recover it.
-- DON'T: Storing a log timestamp without timezone
INSERT INTO logs (occurred_at) VALUES ('2026-06-05 10:00:00');
Is that UTC? Server time? You don't know. Use timestamptz and be explicit.
-- DON'T: Assuming session timezone is what you expect
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00');
What timezone was the session in? Be explicit: '2026-06-05 10:00:00+00'.
-- DON'T: Thinking this stores "Vienna"
INSERT INTO events (instant_utc) VALUES ('2026-06-05 10:00:00 Europe/Vienna');
PostgreSQL converts to UTC immediately. The string 'Europe/Vienna' is gone. If you need the timezone, store it separately.
PostgreSQL's AT TIME ZONE converts between timestamps and timezones:
-- timestamptz → timestamp in a specific zone
SELECT instant_utc AT TIME ZONE 'Europe/Vienna' AS local_vienna
FROM appointments;
-- timestamp → timestamptz (interpreting as a specific zone)
SELECT local_start AT TIME ZONE time_zone_id AS instant
FROM appointments;
This is useful for display and for reconstructing the instant from stored local + timezone.
Gotcha: The behavior differs based on the input type:
- `timestamptz AT TIME ZONE 'X'` → returns `timestamp` (strips timezone, shows in X)
- `timestamp AT TIME ZONE 'X'` → returns `timestamptz` (interprets as X, converts to UTC)

When IANA updates timezone rules:
Past events: Nothing to do. PostgreSQL's timestamptz already stores UTC. Historical conversions use historical rules (if your system's tzdata is updated).
Future events: Recalculate instant_utc from local_start + time_zone_id.
-- Recalculate instant_utc for future Vienna appointments
UPDATE appointments
SET instant_utc = local_start AT TIME ZONE time_zone_id
WHERE time_zone_id = 'Europe/Vienna'
AND instant_utc > NOW();
This is why storing local_start + time_zone_id matters — you have everything needed to recalculate.
If you're using .NET with Npgsql, the mapping is straightforward:
// In your DbContext
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseNpgsql(connectionString, o => o.UseNodaTime());
}
With UseNodaTime():
- `Instant` ↔ `timestamptz`
- `LocalDateTime` ↔ `timestamp`
- `LocalDate` ↔ `date`
- `LocalTime` ↔ `time`
The types align naturally with our model.
-- gen_random_uuid() requires this extension
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- Create table
CREATE TABLE appointments (
    id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    title text NOT NULL,
    local_start timestamp NOT NULL,
    time_zone_id text NOT NULL,
    instant_utc timestamptz NOT NULL
);

-- Create indexes
CREATE INDEX idx_appointments_instant ON appointments (instant_utc);
CREATE INDEX idx_appointments_local ON appointments (time_zone_id, local_start);

-- Insert an appointment (10:00 Vienna = 08:00 UTC in summer)
INSERT INTO appointments (title, local_start, time_zone_id, instant_utc)
VALUES (
    'Team Standup',
    '2026-06-05 10:00:00',
    'Europe/Vienna',
    timestamp '2026-06-05 10:00:00' AT TIME ZONE 'Europe/Vienna'
);
-- Query: What's on the Vienna calendar for June 5th?
SELECT title, local_start
FROM appointments
WHERE time_zone_id = 'Europe/Vienna'
AND local_start >= '2026-06-05'
AND local_start < '2026-06-06';
-- Query: What's happening globally in the next 2 hours?
SELECT title, instant_utc, time_zone_id
FROM appointments
WHERE instant_utc >= NOW()
AND instant_utc < NOW() + INTERVAL '2 hours';
-- Display in viewer's timezone
SELECT
title,
instant_utc AT TIME ZONE 'Europe/London' AS starts_at_london
FROM appointments;
The PostgreSQL model described here — storing local_start, time_zone_id, and a derived instant_utc — is independent of how you access the database.
Instant (occurred_at) plus domain-specific local values when needed. The key idea is not the ORM — it's the data model.
If you store human intent (local + timezone) separately from physical moments (instant), the approach works across tools, frameworks, and architectural styles.
- timestamp stores raw datetime — use for LocalDateTime (user intent)
- timestamptz converts to/from UTC — use for Instant (global moments)
- timestamptz does not store a timezone — it stores UTC and converts on read
- Store all three: local_start + time_zone_id + instant_utc
- Index instant_utc for global queries, index (time_zone_id, local_start) for calendar queries
- When timezone rules change, recalculate instant_utc from the stored local + timezone

Next up: Frontend – Temporal, APIs, and DateTimePickers That Don't Lie — bringing it all together in the browser.
2026-01-23 06:05:54
Part 6 of 8 in the series Time in Software, Done Right
You've made it through the conceptual articles. You understand the difference between instants and local times, between global and local events, between storing intent and storing math.
Now let's make it real in .NET.
The BCL has improved significantly — DateOnly and TimeOnly (since .NET 6) are solid types for dates and times. But for timezone-aware scheduling — meetings, deadlines, appointments that need to survive DST changes — you'll want NodaTime. It gives you the types the BCL is still missing.
This article shows you how to use NodaTime to model time correctly, store it properly, and avoid the traps we've discussed throughout this series.
DateTime in .NET is a single type that tries to represent multiple concepts:
var a = DateTime.Now; // Local time on this machine
var b = DateTime.UtcNow; // UTC instant
var c = new DateTime(2026, 6, 5, 10, 0, 0); // Is this local? UTC? Unspecified?
The problem is the Kind property:
- DateTimeKind.Local — local to this machine (not a specific timezone)
- DateTimeKind.Utc — a UTC instant
- DateTimeKind.Unspecified — could be anything

When you create new DateTime(2026, 6, 5, 10, 0, 0), the Kind is Unspecified. Is that 10:00 in Vienna? 10:00 in London? 10:00 UTC? The type doesn't know, and neither does your code.
.NET 6 added two types that address part of this problem:
DateOnly birthday = new DateOnly(1990, 3, 15); // Just a date, no time confusion
TimeOnly openingTime = new TimeOnly(9, 0); // Just a time, no date confusion
These are great! If you just need a date or just a time, use them. They're in the BCL, well-supported by EF Core, and do exactly what they say.
But for the full picture — especially timezone-aware scheduling — the BCL still falls short:
- No Instant type (you use DateTime with Kind.Utc or DateTimeOffset)
- No LocalDateTime with proper semantics (you use DateTime with Kind.Unspecified)
- No ZonedDateTime that combines local time with a timezone
- No first-class IANA timezone support (TimeZoneInfo uses Windows zones by default)

DateTimeOffset is better than DateTime — it includes an offset — but as we discussed in Article 4, an offset is a snapshot, not a meaning. +02:00 could be Vienna in summer, Berlin in summer, Cairo, or Johannesburg. You can't tell.
For simple cases: DateOnly, TimeOnly, DateTime, and DateTimeOffset are fine.
For timezone-aware scheduling: NodaTime gives you the right types for the right concepts.
Here's how NodaTime maps to the concepts we've covered (and their BCL equivalents where they exist):
| Concept | NodaTime Type | BCL Equivalent | Example |
|---|---|---|---|
| Physical moment | Instant | DateTime (UTC) / DateTimeOffset | Log timestamp, token expiry |
| Calendar date | LocalDate | DateOnly ✓ | Birthday, holiday |
| Wall clock time | LocalTime | TimeOnly ✓ | "Opens at 09:00" |
| Date + time (no zone) | LocalDateTime | DateTime (Unspecified) | User's chosen meeting time |
| IANA timezone | DateTimeZone | TimeZoneInfo (partial) | Europe/Vienna |
| Full context | ZonedDateTime | ❌ None | Meeting at 10:00 Vienna |
| Snapshot with offset | OffsetDateTime | DateTimeOffset | What the clock showed at a moment |
The ✓ marks where the BCL type is a good choice. For DateOnly and TimeOnly, you can often skip NodaTime entirely.
The gap is ZonedDateTime — the combination of a local time and an IANA timezone that lets you handle DST correctly. That's where NodaTime shines.
Let's see each in action.
Use Instant when you're recording when something happened — independent of any human's calendar.
// Current moment
Instant now = SystemClock.Instance.GetCurrentInstant();
// From a Unix timestamp
Instant fromUnix = Instant.FromUnixTimeSeconds(1735689600);
// For logs, audits, event sourcing
public class AuditEntry
{
public Instant OccurredAt { get; init; }
public string Action { get; init; }
}
Instant is unambiguous. There's no timezone to confuse, no Kind property to check. It's just a point on the timeline.
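Because an Instant is just a point on the timeline, arithmetic with it is mechanical — a quick sketch (the token-expiry scenario is illustrative, not from the series):

```csharp
using NodaTime;

Instant issuedAt = Instant.FromUtc(2026, 6, 5, 8, 0);
Duration lifetime = Duration.FromHours(1);

// Instant + Duration yields a new Instant
Instant expiresAt = issuedAt + lifetime;

// Instant - Instant yields a Duration
Duration age = Instant.FromUtc(2026, 6, 5, 8, 30) - issuedAt;

Console.WriteLine(expiresAt); // 2026-06-05T09:00:00Z
```

No timezone or Kind check is needed at any step — that's the point of the type.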
These types represent calendar and clock values without a timezone attached.
// Just a date (NodaTime)
LocalDate birthday = new LocalDate(1990, 3, 15);
// Just a date (BCL - equally good!)
DateOnly birthdayBcl = new DateOnly(1990, 3, 15);
// Just a time (NodaTime)
LocalTime openingTime = new LocalTime(9, 0);
// Just a time (BCL - equally good!)
TimeOnly openingTimeBcl = new TimeOnly(9, 0);
// Date and time together (NodaTime)
LocalDateTime meetingTime = new LocalDateTime(2026, 6, 5, 10, 0);
For dates and times alone, use whichever you prefer — DateOnly/TimeOnly are in the BCL and work great with EF Core.
For date+time combinations that you'll later combine with a timezone, NodaTime's LocalDateTime is clearer because it's part of a coherent type system that includes ZonedDateTime.
A LocalDateTime of 2026-06-05T10:00 means "June 5th at 10:00" — but it doesn't yet specify where. That's intentional. You'll combine it with a timezone to get the full picture.
A DateTimeZone represents an IANA timezone — not just an offset, but the complete ruleset including DST transitions and historical changes.
// Get a timezone by IANA ID
DateTimeZone vienna = DateTimeZoneProviders.Tzdb["Europe/Vienna"];
DateTimeZone london = DateTimeZoneProviders.Tzdb["Europe/London"];
// The provider gives you access to all IANA zones
IDateTimeZoneProvider tzdb = DateTimeZoneProviders.Tzdb;
DateTimeZoneProviders.Tzdb uses the IANA tz database, which is updated regularly with new rules. When you update NodaTime's tzdb data, your code automatically handles new DST rules.
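If you need to check which tzdb release your application carries — useful when diagnosing stale-rules bugs — the provider exposes it directly. A minimal sketch:

```csharp
using NodaTime;
using System.Linq;

IDateTimeZoneProvider tzdb = DateTimeZoneProviders.Tzdb;

// VersionId names the embedded tz database release (e.g. "TZDB: 2025a ...")
Console.WriteLine(tzdb.VersionId);

// Ids enumerates every IANA zone the provider knows about
Console.WriteLine(tzdb.Ids.Count());
```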
ZonedDateTime combines a LocalDateTime with a DateTimeZone — giving you everything you need.
LocalDateTime local = new LocalDateTime(2026, 6, 5, 10, 0);
DateTimeZone zone = DateTimeZoneProviders.Tzdb["Europe/Vienna"];
// Combine them
ZonedDateTime zoned = local.InZoneLeniently(zone);
// Now you can get the instant
Instant instant = zoned.ToInstant();
// Or display in different zones
ZonedDateTime inLondon = instant.InZone(DateTimeZoneProviders.Tzdb["Europe/London"]);
Console.WriteLine(inLondon.ToString("uuuu-MM-dd HH:mm o<g>", CultureInfo.InvariantCulture));
// Output: 2026-06-05 09:00 +01
// (Requires: using System.Globalization;)
The InZoneLeniently method handles DST edge cases automatically: a skipped time (spring-forward gap) is shifted forward past the gap, and an ambiguous time (fall-back overlap) resolves to the earlier occurrence.
For explicit control, NodaTime offers several options:
// Called on LocalDateTime
ZonedDateTime zoned = local.InZoneLeniently(zone); // Auto-resolve gaps/overlaps
ZonedDateTime zoned = local.InZoneStrictly(zone); // Throws if ambiguous
// Called on DateTimeZone (same behavior, different syntax)
ZonedDateTime zoned = zone.AtLeniently(local);
ZonedDateTime zoned = zone.AtStrictly(local);
// With custom resolver
ZonedDateTime zoned = local.InZone(zone, Resolvers.LenientResolver);
Here's the core pattern from Article 4, implemented in NodaTime:
public class Appointment
{
// Source of truth: what the user chose
public LocalDateTime LocalStart { get; init; }
public string TimeZoneId { get; init; }
// Derived: for queries and sorting
public Instant InstantUtc { get; private set; }
public void RecalculateInstant()
{
var zone = DateTimeZoneProviders.Tzdb[TimeZoneId];
var zoned = LocalStart.InZoneLeniently(zone);
InstantUtc = zoned.ToInstant();
}
}
When timezone rules change, you call RecalculateInstant() on future appointments. Past appointments stay correct because IANA contains historical rules.
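What that maintenance step might look like after a tzdata update — a sketch that builds on the Appointment class above (the helper itself and its name are assumptions, not from the series):

```csharp
using System.Collections.Generic;
using NodaTime;

public static class AppointmentMaintenance
{
    // Hypothetical helper: refresh derived instants after a tzdb update.
    // Only RecalculateInstant() comes from the Appointment class above.
    public static void RecalculateFutureInstants(
        IEnumerable<Appointment> appointments, IClock clock)
    {
        Instant now = clock.GetCurrentInstant();

        foreach (var appointment in appointments)
        {
            // Past appointments stay as-is — historical rules don't change
            if (appointment.InstantUtc > now)
            {
                appointment.RecalculateInstant();
            }
        }
    }
}
```

Running it as part of a deployment that ships new tzdb data keeps stored instants consistent with stored intent.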
public class LogEntry
{
public Instant Timestamp { get; init; }
public string Level { get; init; }
public string Message { get; init; }
public static LogEntry Create(string level, string message)
{
return new LogEntry
{
Timestamp = SystemClock.Instance.GetCurrentInstant(),
Level = level,
Message = message
};
}
}
public class Person
{
public string Name { get; init; }
public LocalDate DateOfBirth { get; init; }
public int GetAge(LocalDate today)
{
return Period.Between(DateOfBirth, today, PeriodUnits.Years).Years;
}
}
No timezone needed — birthdays are calendar concepts.
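One subtlety: "today" itself does depend on a timezone — a calendar date only exists relative to somewhere. A sketch of deriving it before calling GetAge (the zone choice here is an assumption):

```csharp
using NodaTime;

var person = new Person { Name = "Alice", DateOfBirth = new LocalDate(1990, 3, 15) };

// "Today" is timezone-dependent: derive it from an Instant plus a zone
Instant now = SystemClock.Instance.GetCurrentInstant();
DateTimeZone zone = DateTimeZoneProviders.Tzdb["Europe/Vienna"];
LocalDate today = now.InZone(zone).Date;

int age = person.GetAge(today);
```

The birthday stays zone-free; only the question "what day is it right now?" needs a zone.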
public class Meeting
{
public string Title { get; init; }
public LocalDateTime LocalStart { get; init; }
public string TimeZoneId { get; init; }
public Instant InstantUtc { get; init; }
public static Meeting Create(string title, LocalDateTime localStart, string timeZoneId)
{
var zone = DateTimeZoneProviders.Tzdb[timeZoneId];
var instant = localStart.InZoneLeniently(zone).ToInstant();
return new Meeting
{
Title = title,
LocalStart = localStart,
TimeZoneId = timeZoneId,
InstantUtc = instant
};
}
// Display in any timezone
public string GetDisplayTime(DateTimeZone viewerZone)
{
var inViewerZone = InstantUtc.InZone(viewerZone);
// Note: uuuu is NodaTime's recommended year specifier (absolute year)
return inViewerZone.ToString("uuuu-MM-dd HH:mm", CultureInfo.InvariantCulture);
}
}
public class Deadline
{
public LocalDateTime LocalDeadline { get; init; }
public string TimeZoneId { get; init; }
public Instant InstantUtc { get; init; }
public bool IsPastDeadline(Instant now)
{
return now > InstantUtc;
}
public Duration TimeRemaining(Instant now)
{
return InstantUtc - now;
}
}
NodaTime doesn't map to SQL types out of the box, but there are excellent packages for this.
// In your DbContext configuration
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
optionsBuilder.UseNpgsql(connectionString, o => o.UseNodaTime());
}
This maps:
Instant → timestamp with time zone
LocalDateTime → timestamp without time zone
LocalDate → date
LocalTime → time
What about ZonedDateTime? There's no single PostgreSQL type for it — that's the whole point of our pattern. You decompose it into separate columns:
- LocalDateTime → timestamp without time zone
- TimeZoneId → text
- Instant → timestamp with time zone (for queries)

Here's how to extract the parts from a ZonedDateTime:
ZonedDateTime zoned = local.InZoneLeniently(zone);
// Decompose for storage
LocalDateTime localPart = zoned.LocalDateTime;
string timeZoneId = zoned.Zone.Id; // e.g. "Europe/Vienna"
Instant instantPart = zoned.ToInstant();
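Going the other way — rebuilding the full ZonedDateTime from the stored columns — is just as mechanical:

```csharp
using NodaTime;

// Values as they come back from the database columns
LocalDateTime localPart = new LocalDateTime(2026, 6, 5, 10, 0);
string timeZoneId = "Europe/Vienna";

// Reattach the zone; use the same resolver you used when writing
DateTimeZone zone = DateTimeZoneProviders.Tzdb[timeZoneId];
ZonedDateTime rebuilt = localPart.InZoneLeniently(zone);
```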
public class AppointmentConfiguration : IEntityTypeConfiguration<Appointment>
{
public void Configure(EntityTypeBuilder<Appointment> builder)
{
builder.Property(a => a.LocalStart)
.HasConversion(
v => v.ToDateTimeUnspecified(),
v => LocalDateTime.FromDateTime(v));
builder.Property(a => a.InstantUtc)
.HasConversion(
v => v.ToDateTimeUtc(),
v => Instant.FromDateTimeUtc(v));
builder.Property(a => a.TimeZoneId)
.HasMaxLength(64);
}
}
public class Appointment
{
public Guid Id { get; init; }
public string Title { get; init; }
// Stored as timestamp without time zone
public LocalDateTime LocalStart { get; init; }
// Stored as text/varchar
public string TimeZoneId { get; init; }
// Stored as timestamp with time zone (for queries)
public Instant InstantUtc { get; init; }
}
When creating appointments that might fall in DST gaps or overlaps, be explicit:
public class AppointmentService
{
public ZonedDateTime ResolveLocalTime(LocalDateTime local, string timeZoneId)
{
var zone = DateTimeZoneProviders.Tzdb[timeZoneId];
var mapping = zone.MapLocal(local);
return mapping.Count switch
{
0 => zone.AtLeniently(local), // Gap: shift forward to valid time
1 => mapping.Single(), // Normal: exactly one mapping
2 => mapping.First(), // Overlap: pick earlier occurrence
_ => throw new InvalidOperationException()
};
}
}
For more control (e.g., asking the user to choose during overlaps):
public ZonedDateTime ResolveWithUserChoice(
LocalDateTime local,
string timeZoneId,
Func<ZonedDateTime, ZonedDateTime, ZonedDateTime> overlapResolver)
{
var zone = DateTimeZoneProviders.Tzdb[timeZoneId];
var mapping = zone.MapLocal(local);
return mapping.Count switch
{
0 => zone.AtLeniently(local),
1 => mapping.Single(),
2 => overlapResolver(mapping.First(), mapping.Last()),
_ => throw new InvalidOperationException()
};
}
If you have existing code using DateTime, here's how to convert:
// DateTime (UTC) to Instant
DateTime dtUtc = DateTime.UtcNow;
Instant instant = Instant.FromDateTimeUtc(dtUtc);
// DateTime (unspecified) to LocalDateTime
DateTime dt = new DateTime(2026, 6, 5, 10, 0, 0);
LocalDateTime local = LocalDateTime.FromDateTime(dt);
// Instant to DateTime (UTC)
DateTime backToUtc = instant.ToDateTimeUtc();
// LocalDateTime to DateTime (unspecified)
DateTime backToUnspecified = local.ToDateTimeUnspecified();
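DateTimeOffset has direct counterparts too, which matters when migrating API boundaries that already use it — a short sketch:

```csharp
using System;
using NodaTime;

DateTimeOffset dto = new DateTimeOffset(2026, 6, 5, 10, 0, 0, TimeSpan.FromHours(2));

// DateTimeOffset to Instant (keeps the moment, discards the offset)
Instant instant = Instant.FromDateTimeOffset(dto);

// DateTimeOffset to OffsetDateTime (keeps the offset snapshot)
OffsetDateTime odt = OffsetDateTime.FromDateTimeOffset(dto);

// Instant back to DateTimeOffset (always offset zero / UTC)
DateTimeOffset roundTripped = instant.ToDateTimeOffset();
```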
Code that calls SystemClock.Instance.GetCurrentInstant() directly is hard to test. You can't control "now".
NodaTime solves this with IClock:
// Production: inject the real clock
public class AppointmentService(IClock clock)
{
public bool IsUpcoming(Appointment appointment)
{
var now = clock.GetCurrentInstant();
return appointment.InstantUtc > now;
}
}
// In production
var service = new AppointmentService(SystemClock.Instance);
// In tests: use a fake clock (FakeClock lives in the NodaTime.Testing package)
var fakeNow = Instant.FromUtc(2026, 6, 5, 8, 0);
var fakeClock = new FakeClock(fakeNow);
var service = new AppointmentService(fakeClock);
// Now you can test time-dependent logic deterministically
Rule: Never call SystemClock.Instance directly in business logic. Inject IClock instead. Your tests will thank you.
- Use Instant for physical moments (logs, events, tokens)
- Use LocalDate for calendar dates (birthdays, holidays)
- Use LocalDateTime + DateTimeZone for human-scheduled times (meetings, deadlines)
- Store LocalDateTime + TimeZoneId as your source of truth
- Store a derived InstantUtc for queries and sorting
- Use InZoneLeniently or check MapLocal for edge cases
- Use Npgsql.EntityFrameworkCore.PostgreSQL.NodaTime for PostgreSQL, or value converters for other databases

Next up: PostgreSQL – Storing Time Without Lying to Yourself — the database side of the equation.