2025-12-04 23:50:39
In the first part of this series, I covered the high-level architecture and the tools I chose for building my personal website. Now let's dive deeper into the technical implementation, starting with the Terraform modules.
To deploy the minimal infrastructure with everything needed on Hetzner Cloud, we need to configure the following components: a private network, a firewall, an SSH key, and the server itself.
I created Terraform modules for each of these components. Let's go through them one by one.
The first piece of infrastructure we need is a private network. For this, I created the terraform-hcloud-network module.
This module provides comprehensive network management for Hetzner Cloud: it creates the network itself plus any number of subnets of each supported type (server, cloud, or vswitch).
Here's my network configuration:
module "network" {
  source  = "danylomikula/network/hcloud"
  version = "1.0.0"

  create_network = true
  name           = local.project_slug
  ip_range       = "10.100.0.0/16"
  labels         = local.common_labels

  subnets = {
    web = {
      type         = "cloud"
      network_zone = "eu-central"
      ip_range     = "10.100.1.0/24"
    }
  }
}
I chose the eu-central network zone because it offers the best pricing. This configuration creates a network with a /16 CIDR block (10.100.0.0/16) and a single subnet with a /24 block (10.100.1.0/24). For a single server, this is more than enough address space.
Next, we need to set up a firewall to restrict external traffic. As I mentioned in the first part, I only allow HTTP/HTTPS traffic from Cloudflare IP addresses and SSH access from my home IP.
For this, I created the terraform-hcloud-firewall module. It supports managing multiple firewalls at once, each with its own set of rules and labels.
Here's my firewall configuration:
module "firewall" {
  source  = "danylomikula/firewall/hcloud"
  version = "1.0.0"

  firewalls = {
    "${local.resource_names.website}" = {
      rules = [
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "22"
          source_ips  = [var.my_homelab_ip]
          description = "allow ssh"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "80"
          source_ips  = local.cloudflare_all_ips
          description = "allow http from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "443"
          source_ips  = local.cloudflare_all_ips
          description = "allow https from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "icmp"
          source_ips  = ["0.0.0.0/0", "::/0"]
          description = "allow ping"
        }
      ]
      labels = {
        service = "firewall"
      }
    }
  }

  common_labels = local.common_labels
}
Cloudflare publishes their IP ranges publicly, so I fetch them dynamically using Terraform's http data source:
data "http" "cloudflare_ips_v4" {
  url = "https://www.cloudflare.com/ips-v4"
}

data "http" "cloudflare_ips_v6" {
  url = "https://www.cloudflare.com/ips-v6"
}

locals {
  cloudflare_ipv4_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v4.response_body))
  cloudflare_ipv6_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v6.response_body))
  cloudflare_all_ips    = concat(local.cloudflare_ipv4_cidrs, local.cloudflare_ipv6_cidrs)
}
This approach ensures that whenever Cloudflare updates their IP ranges, a simple terraform apply will update the firewall rules automatically.
Important: For this setup to work, you need to enable the Proxy toggle on your A and AAAA records in Cloudflare DNS settings.
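If you also manage DNS with Terraform, the proxy requirement can be enforced in code. A minimal sketch, assuming the v4-style cloudflare_record resource and a hypothetical IP output from the server module:

resource "cloudflare_record" "website_a" {
  zone_id = var.cloudflare_zone_id
  name    = "@"
  type    = "A"
  value   = module.servers.server_ipv4 # hypothetical output name
  proxied = true # only Cloudflare's edge IPs will ever reach the firewall
  ttl     = 1    # "automatic" TTL, required for proxied records
}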
Before creating the server, we need an SSH key for authentication. I created the terraform-hcloud-ssh-key module for this purpose.
This module is quite flexible: it can generate a new key pair or upload an existing public key, and it can optionally save the generated keys locally.
Here's my configuration:
module "ssh_key" {
  source  = "danylomikula/ssh-key/hcloud"
  version = "1.0.0"

  create_key               = true
  name                     = local.project_slug
  save_private_key_locally = true
  local_key_directory      = path.module
  labels                   = local.common_labels
}
This generates an ED25519 key pair (the default and recommended algorithm) and saves both the private and public keys locally for easy access.
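For the curious, the core of such a module can be expressed in a few resources. A minimal sketch of the idea (not necessarily the module's actual internals), using the hashicorp/tls and hashicorp/local providers:

# Generate an ED25519 key pair in Terraform state.
resource "tls_private_key" "this" {
  algorithm = "ED25519"
}

# Register the public key with Hetzner Cloud.
resource "hcloud_ssh_key" "this" {
  name       = var.name
  public_key = tls_private_key.this.public_key_openssh
}

# Optionally persist the private key next to the configuration.
resource "local_sensitive_file" "private_key" {
  content         = tls_private_key.this.private_key_openssh
  filename        = "${var.local_key_directory}/${var.name}"
  file_permission = "0600"
}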
Finally, let's create the server itself using the terraform-hcloud-server module. Like the others, it's designed to be flexible: it manages multiple servers at once and exposes most hcloud_server resource attributes.
Here's my server configuration:
module "servers" {
  source  = "danylomikula/server/hcloud"
  version = "1.0.0"

  servers = {
    "${local.resource_names.website}" = {
      server_type  = "cx23"
      location     = "hel1"
      image        = data.hcloud_image.rocky.name
      user_data    = local.cloud_init_config
      ssh_keys     = [module.ssh_key.ssh_key_id]
      firewall_ids = [module.firewall.firewall_ids[local.resource_names.website]]
      networks = [{
        network_id = module.network.network_id
        ip         = "10.100.1.10"
      }]
      labels = {
        service = "website"
      }
    }
  }

  common_labels = local.common_labels
}
I chose the cx23 server type as it's the cheapest option available and costs me less than $5 per month in the Helsinki (hel1) region. Its specifications are more than enough for a static website.
Notice how I'm passing variables from previous modules dynamically — the SSH key ID, firewall ID, and network ID are all referenced from their respective module outputs. This eliminates manual configuration and reduces the chance of errors.
Here's the full Terraform configuration with all the pieces together:
locals {
  project_slug = "mikula-dev"

  common_labels = {
    environment = "production"
    project     = local.project_slug
    managed_by  = "terraform"
  }

  resource_names = {
    website = "${local.project_slug}-web"
  }

  cloud_init_config = templatefile("${path.module}/cloud-init.tpl", {
    ansible_ssh_public_key = var.ansible_user_ssh_public_key
  })

  cloudflare_ipv4_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v4.response_body))
  cloudflare_ipv6_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v6.response_body))
  cloudflare_all_ips    = concat(local.cloudflare_ipv4_cidrs, local.cloudflare_ipv6_cidrs)
}

# Fetch Cloudflare IP ranges for firewall rules
data "http" "cloudflare_ips_v4" {
  url = "https://www.cloudflare.com/ips-v4"
}

data "http" "cloudflare_ips_v6" {
  url = "https://www.cloudflare.com/ips-v6"
}

module "network" {
  source  = "danylomikula/network/hcloud"
  version = "1.0.0"

  create_network = true
  name           = local.project_slug
  ip_range       = "10.100.0.0/16"
  labels         = local.common_labels

  subnets = {
    web = {
      type         = "cloud"
      network_zone = "eu-central"
      ip_range     = "10.100.1.0/24"
    }
  }
}

module "ssh_key" {
  source  = "danylomikula/ssh-key/hcloud"
  version = "1.0.0"

  create_key               = true
  name                     = local.project_slug
  save_private_key_locally = true
  local_key_directory      = path.module
  labels                   = local.common_labels
}

module "firewall" {
  source  = "danylomikula/firewall/hcloud"
  version = "1.0.0"

  firewalls = {
    "${local.resource_names.website}" = {
      rules = [
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "22"
          source_ips  = [var.my_homelab_ip]
          description = "allow ssh"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "80"
          source_ips  = local.cloudflare_all_ips
          description = "allow http from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "443"
          source_ips  = local.cloudflare_all_ips
          description = "allow https from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "icmp"
          source_ips  = ["0.0.0.0/0", "::/0"]
          description = "allow ping"
        }
      ]
      labels = {
        service = "firewall"
      }
    }
  }

  common_labels = local.common_labels
}

module "servers" {
  source  = "danylomikula/server/hcloud"
  version = "1.0.0"

  servers = {
    "${local.resource_names.website}" = {
      server_type  = "cx23"
      location     = "hel1"
      image        = data.hcloud_image.rocky.name
      user_data    = local.cloud_init_config
      ssh_keys     = [module.ssh_key.ssh_key_id]
      firewall_ids = [module.firewall.firewall_ids[local.resource_names.website]]
      networks = [{
        network_id = module.network.network_id
        ip         = "10.100.1.10"
      }]
      labels = {
        service = "website"
      }
    }
  }

  common_labels = local.common_labels
}
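One piece referenced in the server module but not shown above is the data.hcloud_image.rocky lookup. A minimal sketch of what it could look like, assuming the image is selected by name (the exact image name is an assumption):

data "hcloud_image" "rocky" {
  name              = "rocky-10" # assumed Hetzner image name for Rocky Linux 10
  with_architecture = "x86"
}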
With this configuration, running terraform apply provisions the complete infrastructure in just a few minutes.
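To hand the freshly provisioned server over to Ansible, all you really need is its public IP. A minimal sketch, assuming the server module exposes per-server IPs (the output name below is hypothetical; check the module's actual outputs):

# Hypothetical output name, for illustration only.
output "website_ipv4" {
  value = module.servers.server_ipv4_addresses[local.resource_names.website]
}

With that in place, terraform output -raw website_ipv4 can feed an inventory file or an ssh command directly.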
Now let's look at bootstrapping the actual website. For this, I'm using an Ansible collection that I also created and published publicly: ansible-hugo-deploy.
For the operating system, I chose Rocky Linux 10. For the web server — Caddy.
The Ansible collection handles the complete deployment pipeline: building a custom Caddy binary with the required modules, generating a deploy key and cloning the website repository, building the site with Hugo, configuring the OS firewall, and setting up the systemd rebuild timer.
Here's my complete configuration:
---
# Domain configuration.
domain: "mikula.dev"
admin_email: "admin@{{ domain }}"

# Git repository for website source.
website_repo_url: "git@github.com:danylomikula/mikula.dev.git"
website_repo_branch: "master"

# Web content paths.
website_root: "/var/www/{{ domain }}"
caddy_log_path: "/var/log/caddy"
website_public_dir: "{{ website_root }}/public"

# Deploy SSH key configuration.
deploy_ssh_key_user: "caddy"
deploy_ssh_key_group: "{{ deploy_ssh_key_user }}"
deploy_ssh_key_dir: "/var/lib/{{ deploy_ssh_key_user }}/.ssh"
deploy_ssh_key_path: "{{ deploy_ssh_key_dir }}/deploy_key"
deploy_ssh_key_type: "ed25519"
deploy_ssh_key_comment: "{{ domain }}-deploy-key"

# Website rebuild configuration.
webrebuild_schedule: "*-*-* 04:00:00"
webrebuild_boot_delay: "180"
webrebuild_service_user: "caddy"
webrebuild_service_group: "caddy"
webrebuild_commands:
  - "git pull origin {{ website_repo_branch }}"
  - "hugo --gc --minify"

hugo_version: "0.152.2"

# Caddy configuration.
caddy_version: "2.10.2"
caddy_go_version: "1.25.4"
caddy_modules:
  - github.com/mholt/caddy-ratelimit
  - github.com/caddy-dns/cloudflare
caddy_rate_limit:
  enabled: true
  events: 60
  window: "1m"
caddy_compression_formats:
  - gzip
  - zstd

# DNS / ACME configuration.
cloudflare_api_token: "{{ vault_cloudflare_api_token }}"
caddy_acme_ca: "https://acme-v02.api.letsencrypt.org/directory"

# Firewall configuration.
firewall_zone: "public"
firewall_allowed_services:
  - ssh
  - http
  - https
firewall_allowed_ports: []
firewall_allowed_icmp: true
firewall_allowed_icmp_types:
  - echo-request
Since I'm using Cloudflare with proxy enabled, the standard Caddy build isn't enough for automatic certificate provisioning. I need the caddy-dns/cloudflare module to pass the DNS-01 ACME challenge for certificate verification.
Since I'm already building a custom Caddy binary, I decided to add another useful module — caddy-ratelimit for rate limiting protection against bots and scanners.
The configuration for these modules is available in my Ansible playbook. If you don't want to use one of them or want to add additional modules, you can easily customize the caddy_modules list.
We can now deploy the website, but one problem remains: how do we update the content automatically without manually logging into the server? I want to simply push to Git and have the website update itself after some time.
To solve this, I'm using GitHub deploy keys. These keys are read-only, meaning all they can do is read the content of the Git repository — nothing more.
The Ansible playbook generates this key automatically, outputs the public part to the console, and waits for your confirmation while you configure your GitHub repository. After confirmation, it clones the content, builds it with Hugo, and Caddy starts serving the generated site.
For periodic content updates, I use a simple systemd timer that runs every morning and updates the website with new content.
webrebuild.service:
[Unit]
Description=Rebuild website from Git repository
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
User={{ webrebuild_service_user }}
Group={{ webrebuild_service_group }}
WorkingDirectory={{ website_root }}
Environment=PATH={{ caddy_webserver_rebuild_path }}
{% for command in webrebuild_commands %}
ExecStart=/usr/bin/env bash -c "{{ command }}"
{% endfor %}
StandardOutput=journal
StandardError=journal
webrebuild.timer:
[Unit]
Description=Rebuild website daily
RefuseManualStart=no
RefuseManualStop=no
[Timer]
# Run {{ webrebuild_boot_delay }} seconds after boot for the first time.
OnBootSec={{ webrebuild_boot_delay }}
# Run daily at scheduled time.
OnCalendar={{ webrebuild_schedule }}
Unit=webrebuild.service
[Install]
WantedBy=timers.target
With this setup, every morning at 4:00 AM the timer triggers, pulls the latest changes from the repository, and rebuilds the site with Hugo. If I need an immediate update, I can always trigger it manually with sudo systemctl start webrebuild.service.
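Both units are easy to inspect with standard systemd tooling, for example:

# When will the next rebuild run?
systemctl list-timers webrebuild.timer

# What did the last rebuild actually do?
journalctl -u webrebuild.service --since today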
The Caddy server is configured using a config file called Caddyfile. Here's the complete template:
# Caddyfile for {{ domain }}
# Managed by Ansible - do not edit manually.

{% if (cloudflare_api_token | length > 0) or (caddy_acme_ca | length > 0) %}
{
{% if caddy_acme_ca | length > 0 %}
    acme_ca {{ caddy_acme_ca }}
{% endif %}
{% if cloudflare_api_token | length > 0 %}
    acme_dns cloudflare {env.CLOUDFLARE_API_TOKEN}
{% endif %}
}
{% endif %}

www.{{ domain }} {
    # Redirect www to non-www domain.
    redir https://{{ domain }}{uri} permanent
}

{{ domain }} {
    # Root directory for static files.
    root * {{ website_public_dir }}

    # Enable static file server.
    file_server

{% if caddy_rate_limit.enabled | default(false) %}
    # Basic rate limiting per client IP to slow down bots/scanners.
    rate_limit {
        zone per_client {
            key {remote_ip}
            events {{ caddy_rate_limit.events }}
            window {{ caddy_rate_limit.window }}
        }
    }
{% endif %}

    # Enable compression.
    encode {% for format in caddy_compression_formats %}{{ format }} {% endfor %}

    # TLS configuration with admin email.
    tls {{ admin_email }}

    # Access logging.
    log {
        output file {{ caddy_log_path }}/access.log {
            roll_size 100MiB
            roll_local_time
            roll_keep_for 15d
        }
    }

    # Security headers.
    header {
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://www.googletagmanager.com https://static.cloudflareinsights.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: https://static.cloudflareinsights.com; font-src 'self' data:; frame-ancestors 'none'; object-src 'none'; base-uri 'self'; form-action 'self'; connect-src 'self' https://www.google-analytics.com https://www.googletagmanager.com https://static.cloudflareinsights.com https://cloudflareinsights.com"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
This configuration includes:
- a permanent redirect from www.mikula.dev to mikula.dev
- a static file server rooted at the Hugo public directory
- per-client rate limiting (when enabled)
- gzip and zstd compression
- TLS with the admin email for ACME registration
- access logs with rotation
- security headers (HSTS, CSP, and friends)
That's it! With this setup, I can deploy a fully functional, secure, and automated website infrastructure in about 15 minutes. The entire workflow is:
1. Run terraform apply to provision the infrastructure.
2. Run the Ansible playbook to bootstrap Caddy and deploy the website.
From that point on, the website updates itself automatically every day.
I hope this guide helps you set up your own website even faster than I did. Feel free to use my ready-made configurations as a starting point.
All the code is open source: the Terraform modules (terraform-hcloud-network, terraform-hcloud-firewall, terraform-hcloud-ssh-key, terraform-hcloud-server) and the ansible-hugo-deploy Ansible collection.
Have questions or suggestions? Feel free to reach out or open an issue on GitHub.
2025-12-04 23:49:45
Beyond UI Generation — where humans and AI communicate through meaning, not pixels
Most AI agents interact with web apps through Black Box methods: consuming DOM dumps or screenshots, then guessing what to click.
But HTML was never designed for machines. From the AI's perspective, the DOM is noise where business logic is faintly buried.
This essay argues for a White Box approach:
Instead of making agents reverse-engineer the UI, expose a Semantic State Layer that reveals the application's structure, rules, state, and valid transitions directly.
This is not about replacing UI. It's about giving AI agents a proper interface — what I call an Intelligence Interface (II) — alongside the traditional User Interface.
This post introduces Manifesto, an open-source engine that implements this philosophy with a concrete protocol: @manifesto-io/engine.
Here's how most teams "add AI" to their web apps today: they hand the model a DOM dump or a screenshot and let it guess what to click.
This is the Black Box approach. The agent sees only the rendered surface and must infer everything else.
Consider a typical Material UI form field:
<div class="MuiFormControl-root css-1u3bzj6">
  <label class="MuiInputLabel-root">Product Name</label>
  <div class="MuiInputBase-root">
    <input aria-invalid="false" type="text" class="MuiInputBase-input" value="" />
  </div>
  <p class="MuiFormHelperText-root">This field is required.</p>
</div>
From an agent's perspective:
| Problem | Impact |
|---|---|
| Token waste | 90% of tokens are class names and wrappers |
| Missing constraints | Is it required? What's the max length? |
| No dependencies | Does this field depend on others? |
| No causality | Submit is disabled — but why? |
The agent is forced to guess. A CSS refactor breaks everything. A layout change confuses the model. The logic was never exposed — only its visual projection.
Signal < 10%. Noise > 90%.
The alternative is a White Box protocol.
Instead of showing HTML, the engine exposes a Semantic Snapshot — a structured representation of the application's internal state that agents can read directly.
{
  "topology": {
    "viewId": "product-create",
    "mode": "create",
    "sections": [
      { "id": "basic", "title": "Basic Info", "fields": ["name", "productType"] },
      { "id": "shipping", "title": "Shipping", "fields": ["shippingWeight"] }
    ]
  },
  "state": {
    "form": { "isValid": false, "isDirty": false },
    "fields": {
      "name": {
        "value": "",
        "meta": { "valid": false, "hidden": false, "disabled": false, "errors": ["Required"] }
      },
      "productType": {
        "value": "PHYSICAL",
        "meta": { "valid": true, "hidden": false, "disabled": false, "errors": [] }
      },
      "shippingWeight": {
        "value": null,
        "meta": { "valid": true, "hidden": false, "disabled": false, "errors": [] }
      }
    }
  },
  "constraints": {
    "name": { "required": true, "minLength": 2, "maxLength": 100 },
    "shippingWeight": { "min": 0, "max": 2000, "dependsOn": ["productType"] }
  },
  "interactions": [
    { "id": "updateField:name", "intent": "updateField", "target": "name", "available": true },
    { "id": "updateField:productType", "intent": "updateField", "target": "productType", "available": true },
    { "id": "submit", "intent": "submit", "available": false, "reason": "Name is required" }
  ]
}
Now the agent has the view's topology, the live state of every field, the declared constraints, and an explicit list of valid interactions, each with a reason when unavailable.
No guessing. No inference. The agent reads the application's brain directly.
🎮 See it in action: Manifesto Playground — try changing field values and watch the semantic state update in real-time.
Here's a scenario from a complex SaaS scheduling interface:
User: "I see a date picker, but where do I select which week?"
AI Chatbot: "The week selector only appears when you set frequency to 'Weekly'. Right now it's set to 'Daily'. Should I change it for you?"
For this to work, the AI needs to know three things: that a field called weekSelector exists, that it only becomes visible when frequency === 'WEEKLY', and that frequency is currently set to 'DAILY'.
No amount of DOM parsing gives you this reliably. But a Semantic Snapshot does:
{
  "fields": {
    "frequency": {
      "value": "DAILY",
      "meta": { "hidden": false }
    },
    "weekSelector": {
      "value": null,
      "meta": { "hidden": true },
      "visibleWhen": "frequency === 'WEEKLY'"
    }
  }
}
The AI reads this and knows — without inference — exactly why the field is hidden and what would make it appear.
Manifesto implements a continuous feedback loop between the engine and AI agents:
┌─────────────────────────────────────────────────────────────────────┐
│ │
│ [Context Injection] → [Reasoning] → [Action Dispatch] → [Delta] │
│ ▲ │ │
│ └─────────────── Continuous Snapshots ─────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
1. Context Injection: Engine exports a Semantic Snapshot
2. Reasoning: Agent plans its next action based on the snapshot
3. Action Dispatch: Agent calls abstract intents (updateField, submit, reset, validate), not DOM events
4. Delta Feedback: Engine returns exactly what changed
5. The loop continues with the updated snapshot
This is fundamentally different from "click and hope." The agent operates on structured meaning with predictable feedback.
Manifesto exposes this protocol through @manifesto-io/ai:
import { createInteroperabilitySession } from '@manifesto-io/ai'

const session = createInteroperabilitySession({
  runtime,      // FormRuntime instance
  viewSchema,   // View definition
  entitySchema, // Entity definition
})

// Get current semantic snapshot
const snapshot = session.snapshot()

// snapshot.interactions tells the agent:
// - submit: available=false, reason="Name is required"
// - updateField:name: available=true
// - updateField:productType: available=true
The agent now knows the current state and exactly what actions are valid.
const result = session.dispatch({
  type: 'updateField',
  fieldId: 'productType',
  value: 'DIGITAL',
})

if (result._tag === 'Ok') {
  const { snapshot, delta } = result.value
  // delta shows exactly what changed:
  // {
  //   fields: {
  //     productType: { value: 'DIGITAL' },
  //     shippingWeight: { hidden: true },
  //     fulfillmentType: { hidden: true }
  //   },
  //   interactions: {
  //     'updateField:shippingWeight': { available: false, reason: 'Field is hidden' }
  //   }
  // }
}
The agent doesn't just get "success." It gets a delta showing the causal chain: changing productType to DIGITAL caused shippingWeight to become hidden.
Convert the snapshot into OpenAI/Claude-compatible function definitions:
import { toToolDefinitions } from '@manifesto-io/ai'

const tools = toToolDefinitions(snapshot, { omitUnavailable: true })

// Returns JSON-Schema tool definitions:
// - updateField (with enum of available fields)
// - submit (if form is valid)
// - reset
// - validate
This enables agents to interact with forms through standard function-calling interfaces.
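Here's a rough sketch of that wiring with the OpenAI Node SDK; the shape of the generated definitions and the mapping from tool calls back to dispatch payloads are assumptions:

import OpenAI from 'openai'
import { toToolDefinitions } from '@manifesto-io/ai'

const client = new OpenAI()

// Derive the currently valid tools from the semantic snapshot.
const tools = toToolDefinitions(session.snapshot(), { omitUnavailable: true })

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Make this a digital product' }],
  tools,
})

// Map each tool call back onto an intent dispatch. This assumes the tool
// name is the intent and its arguments mirror the dispatch payload.
for (const call of response.choices[0].message.tool_calls ?? []) {
  session.dispatch({
    type: call.function.name,
    ...JSON.parse(call.function.arguments),
  })
}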
When an agent manipulates the DOM directly, nothing checks its actions: it can type into the wrong field, submit invalid data, or act on state that no longer exists. Manifesto prevents this with three guarantees.
Hallucination Firewall: Every agent action is validated before execution.
const result = session.dispatch({
  type: 'updateField',
  fieldId: 'nonexistent', // ❌ Unknown field
  value: 'test',
})

// result._tag === 'Err'
// result.error === 'Field not found: nonexistent'
// State unchanged — no side effects
What gets rejected with an Err: dispatches against unknown fields or intents, values that violate the declared constraints, actions on hidden or disabled fields, and interactions the snapshot marks as unavailable.
Atomic Rollback: On any failure, the previous snapshot remains intact. No partial mutations.
Deterministic Contracts: Same input + same state = same output. Agents can plan reliably.
This is capability-based access control for AI. The agent only sees and can only act on what's explicitly permitted.
The Semantic Snapshot is derived from a declarative schema. Here's how it looks:
import { entity, field, enumValue } from '@manifesto-io/schema'

export const productTypes = [
  enumValue('PHYSICAL', 'Physical Product'),
  enumValue('DIGITAL', 'Digital Product'),
] as const

export const productEntity = entity('product', 'Product', '1.0.0')
  .fields(
    field.string('name', 'Product Name')
      .required('Product name is required')
      .min(2).max(100)
      .build(),
    field.enum('productType', 'Product Type', productTypes)
      .required()
      .defaultValue('PHYSICAL')
      .build(),
    field.number('shippingWeight', 'Shipping Weight (kg)')
      .min(0).max(2000)
      .build(),
  )
  .build()
import {
  view, section, layout, viewField,
  on, actions, $, fieldEquals,
} from '@manifesto-io/schema'

export const productCreateView = view('product-create', 'Create Product', '1.0.0')
  .entityRef('product')
  .mode('create')
  .sections(
    section('basic')
      .title('Basic Information')
      .layout(layout.grid(2, '1rem'))
      .fields(
        viewField.textInput('name', 'name')
          .label('Product Name')
          .build(),
        viewField.select('productType', 'productType')
          .label('Product Type')
          .reaction(
            on.change()
              .when(fieldEquals('productType', 'DIGITAL'))
              .do(
                actions.updateProp('shippingWeight', 'hidden', true)
              )
          )
          .reaction(
            on.change()
              .when(['!=', $.state('productType'), 'DIGITAL'])
              .do(
                actions.updateProp('shippingWeight', 'hidden', false)
              )
          )
          .build(),
        viewField.numberInput('shippingWeight', 'shippingWeight')
          .label('Shipping Weight (kg)')
          .dependsOn('productType')
          .props({ min: 0, max: 2000 })
          .build(),
      )
      .build(),
  )
  .build()
The schema captures validation constraints, explicit field dependencies (.dependsOn('productType')), and reactive visibility rules (on.change().when(...).do(...)).
All of this is introspectable. The engine reads the schema, builds a DAG of dependencies, and exports the current state as a Semantic Snapshot.
| Tool | Strength | Gap |
|---|---|---|
| XState | State machines | No UI semantics, no agent protocol |
| Zod | Validation | No field dependencies, no visibility rules |
| React Hook Form | Form state | Business logic buried in components |
| MCP | Tool invocation | No UI domain logic, no snapshot protocol |
The missing piece is a layer that captures field dependencies, visibility and availability rules, and valid state transitions. This is UI domain logic. None of the above expose it in a machine-readable protocol.
Manifesto fills that gap.
For decades we've built User Interfaces: pixels, clicks, and visual feedback, designed for human eyes and hands.
That still matters. But it's no longer enough.
Software now needs both a UI for humans and an II — Intelligence Interface — for agents.
| Layer | Consumer | Content |
|---|---|---|
| UI | Humans | Pixels, clicks, visual feedback |
| II | Agents | Semantic Snapshot, intent dispatch, delta feedback |
Manifesto's architecture:
┌─────────────────────────────────────────────────────────────┐
│ Schema Layer │
│ ┌─────────────┬─────────────┬─────────────────────────┐ │
│ │ Entity │ View │ Reactions & Rules │ │
│ └─────────────┴─────────────┴─────────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Engine (DAG Runtime) │
├───────────────────────┬─────────────────────────────────────┤
│ UI Renderer │ AI Protocol (@manifesto/ai) │
│ (React/Vue/etc) │ Snapshot + Dispatch + Delta │
└───────────────────────┴─────────────────────────────────────┘
↓ ↓
Humans Agents
Define the schema once. Generate both UI and II from it.
To me, an AI-native application has these properties:
1. White Box, not Black Box — The engine exposes semantic state, not just rendered output
2. UI is a projection — A visual representation of state, not the source of truth
3. Agents interact with meaning — Through structured snapshots and intent dispatch
4. Protocol over DOM — Actions are validated, deterministic, and return deltas
5. Safety by design — Hallucination firewall, atomic rollback, capability-based access
This doesn't mean abandoning UI. It means recognizing that UI alone is insufficient when your users include both humans and machines.
We're at an inflection point.
For decades, software was built for human consumption. UI was the interface, and it was enough.
Now, AI agents are becoming first-class users. They don't need pixels and click events. They need structured state, valid actions, and causal feedback.
The teams that build for this will have AI integrations that are robust, deterministic, and cheap to maintain.
The teams that don't will find their AI integrations perpetually fragile — dependent on screenshot parsing and prompt hacks that break with every redesign.
HTML is a great language for humans.
For AI, it's a noisy encoding of things it shouldn't have to reverse-engineer.
AI doesn't need your pixels. It needs your meaning.
That meaning should be exposed as a Semantic State Layer — a White Box protocol where agents can read state, dispatch intents, and receive causal feedback.
Manifesto is my attempt to build that layer.
GitHub: github.com/eggplantiny/manifesto-io
Playground: manifesto-io-playground.vercel.app
Package: @manifesto-io/engine — The interoperability protocol for agents
Don't feed HTML to your agents.
Give them a White Box: state, intent, and semantics.
2025-12-04 23:48:39
Happy Tuesday 👋 and welcome to another special edition of Tech Talks Weekly!
This edition includes the top 15 most-watched talks this year so far grouped by programming language.
These come with short summaries, so you can quickly decide whether a talk is worth watching. I put them together with a little help from AI. Hope you like them!
Get ready for a bit of scrolling, but it’s worth it.
I promise.
With that said, expect your watchlist to grow!
This post is an excerpt from Tech Talks Weekly, a free weekly email covering all the recently published Software Engineering podcasts and conference talks. It's currently read by 7,200+ Software Engineers who stopped scrolling through messy YT subscriptions/RSS feeds and reduced FOMO. Subscribe here: https://www.techtalksweekly.io/
JavaScript land is still wild but the patterns are getting clearer. These talks cover modern React patterns, TanStack, Node and TypeScript, performance tricks from game engines, and Angular and Vue updates.
tldw: Learn how to tackle messy async operations in React with modern patterns that keep your UI smooth and reliable, from concurrent rendering to optimized state management.
tldw: Composition is the secret sauce for scaling React codebases, as it helps avoid the chaos of conditional props and makes your components cleaner and easier to work with, both for humans and AI.
tldw: This talk uncovers the dark side of React Query, showing its drawbacks and the scenarios where it might not work for you, giving you a clearer view of its place in your toolkit.
tldw: TanStack is way more than React Query and this talk walks through TanStack Start, the router, forms, real time DBs, and a demo that shows how it actually competes with Next.js so give it a watch.
tldw: Refs are more than an escape hatch; this talk uses real examples to show how they can clean up code, cut renders and stop UI flicker while explaining safe usage and what’s changing in React 19.
tldw: Running TypeScript files directly in Node.js is finally within reach; watch this talk for a pragmatic, code-first tour of type stripping, syntax detection, and the migration tradeoffs that actually matter.
tldw: Node.js isn’t dead; this talk debunks the doom narratives, shows recent performance wins, modern JS features, npm and governance tradeoffs, and makes a practical case for using it in server side and cloud native apps. Worth watching if you ship backend code.
tldw: Ten years of React lessons in rapid-fire: eight state strategies, common problems, performance tricks, TypeScript tips, reusable patterns, devtools, file structures, and workflows you can copy; worth watching if you ship React apps.
tldw: Node will use all available memory and that’s OK, this talk explains why V8 and the GC behave that way, what it means for production workloads, and useful tuning tips, so watch it if you run Node at scale.
“State of Vite and Vue 2025 by Creator Evan You”
Conference ⸱ +3k views ⸱ Jun 03, 2025 ⸱ 00h 42m 18s
tldw: The creator of Vite and Vue presents the State of Vue and Vite in 2025, covering Alien Signals, Devtools v7, Vapor Mode, Rolldown Vite, Vue Plus and why the massive usage numbers actually matter. This is a must watch for everyone!
“ng-conf 2025 LIVE Angular Team Keynote with Mark Thompson, Alex Rickabaugh, Minko Gechev”
Conference ⸱ +3k views ⸱ Oct 17, 2025 ⸱ 01h 10m 19s
tldw: The Angular team’s ng-conf 2025 keynote presents the roadmap, new features, and practical migration tips, less marketing and more real-world lessons, worth watching if you build or maintain Angular apps.
“Unlocking Observability with React & Node.js | Mohit Menghnani | Conf42 SRE 2025”
Conference ⸱ +3k views ⸱ Jul 04, 2025 ⸱ 00h 17m 07s
tldw: Learn how to tie React frontends to Node.js backends for unified observability, with practical tracing, KPI correlation, tooling, and real-world scenarios.
“JavaScript Blazingly FAST! Lessons from a Game Engine - Erik Onarheim - NDC Oslo 2025”
Conference ⸱ +2k views ⸱ Aug 06, 2025 ⸱ 00h 55m 28s
tldw: Watch this talk to get a game engine veteran’s decade of JavaScript performance tricks turned into a punchy guide on browser profiling, stopping memory leaks, caching hot paths, and when to reach for WebWorkers, WASM, WebGPU or TypedArrays so your web apps run blazingly fast.
“Using AI with JavaScript: good idea? - Wes Bos - dotJS 2025”
Conference ⸱ +1k views ⸱ Apr 09, 2025 ⸱ 00h 19m 45s
tldw: Practical, skeptical take on using AI with JavaScript that cuts through the hype, shows where it helps and where it breaks, and offers concrete patterns worth stealing.
“Angular Unit Tests Distilled | Rainer Hahnekamp”
Conference ⸱ +1k views ⸱ Apr 29, 2025 ⸱ 01h 02m 22s
tldw: If you write Angular tests, watch this compact workshop that distills unit testing into practical techniques for taming async code and mocking dependencies. It covers both zone-based and zone-less approaches in Jasmine and Jest, and where tests actually belong, so you walk away with concrete ways to make your tests stable and far less brittle.
Any suggestions? Leave a comment.
Java is having a big year with the JDK 25 release. These talks cover Java 21–25 features, future plans (Project Amber & Valhalla), GC performance improvements, database and JSON support, along with the latest best practices for building and debugging production systems. You'll also learn how Netflix is using Java.
tldw: In this talk, you’ll learn how Netflix runs 3,000+ services with Spring Boot, DGS/GraphQL, gRPC, new garbage collectors and virtual threads, plus real-world lessons on dependency management and native images. Worth watching if you run Java at scale.
tldw: This talk presents five recent JDK performance improvements, from the MergeStore JIT array merge to GC and library tweaks, showing how unchanged Java can get faster.
tldw: Explore the latest updates and future plans for Java, including exciting features from Project Amber and Project Valhalla that are shaping the language’s future.
tldw: Java 24 launches live at JavaOne with opening keynote demos, GC performance insights, AOT caching, and stream gatherers.
tldw: A deep, interactive demo of strange Java behaviors that still trip up experienced devs with hands-on code examples.
tldw: Java 22 and 23 cram in a bunch of tiny but important changes, from unnamed and primitive patterns and the foreign-function and memory API to module imports, stream gatherers, Markdown in JavaDoc, and improved GC, and this deep dive teases out what’s final, what’s preview, and why it actually matters for real projects.
tldw: Watch this talk to see how Java’s existing and upcoming features like the Foreign Function and Memory API, the Vector API, Project Valhalla and Project Babylon could let Java compete in AI and get concrete ideas for libraries and apps.
tldw: Practical Java 21+ practices to remove bloat, structure monoliths and microservices, improve testing, use data oriented and decoupling patterns, automate with pure Java, and rethink design for LLM assistants.
tldw: Hands-on, beginner-friendly guide for Java developers that compares Java UI frameworks to React/Angular/Vue, lays out modern frontend techniques and tradeoffs.
“The New Java Best Practices by Stephen Colebourne”
Conference ⸱ +31k views ⸱ Oct 09, 2025 ⸱ 00h 47m 36s
tldw: Java’s best practices have moved on since 8; this talk tears into records versus beans, pattern matching, Optional versus null, and data oriented programming with sharp, opinionated guidance worth watching.
“AI/ML Introduction for Java Developers”
Conference ⸱ +28k views ⸱ Jun 02, 2025 ⸱ 00h 51m 31s
tldw: A practical intro for Java devs to ML and GenAI vs PredAI, prompt strategies, LLM APIs like Langchain4J, vector databases and RAG with code demos that show where GenAI helps and where to avoid it.
“Growing the Java Language #JVMLS”
Conference ⸱ +27k views ⸱ Aug 21, 2025 ⸱ 01h 20m 21s
tldw: This talk covers how Java can grow without breaking compatibility, digging into design trade-offs, JVM constraints, and the practical paths to ship language features, and it’s worth watching if you care about where mainstream languages go next.
“Java 24 - Better Language, Better APIs, Better Runtime”
Conference ⸱ +26k views ⸱ Mar 01, 2025 ⸱ 00h 50m 33s
tldw: Java 24 quietly stacks useful language, API, and runtime improvements, from AOT class loading and stream gatherers to the class-file API and generational ZGC, so watch this talk to see which changes actually matter in real apps.
“SQL, JSON, and Java”
Conference ⸱ +24k views ⸱ Apr 14, 2025 ⸱ 00h 50m 16s
tldw: Modern multi model databases are starting to outpace MongoDB by marrying SQL and JSON, and this JavaOne talk walks through JSON versus relational trade offs, ISO SQL for JSON, binary JSON for low latency JDBC storage, Jackson and Jakarta integrations, and how Java 21 record patterns make schema-less JSON storage practical.
“Garbage Collection in Java - The progress since JDK 8”
Conference ⸱ +24k views ⸱ Feb 15, 2025 ⸱ 00h 49m 32s
tldw: Java’s garbage collection has come a long way since JDK 8; this talk walks through the different collectors, practical tradeoffs, and why upgrading your JDK can actually speed up your apps. Worth watching.
Any suggestions? Leave a comment.
Rust is moving from niche systems tool to a full-stack and company-wide language. These talks cover Rust web stacks, OS and desktop projects, AI-assisted Rust coding, C and C++ interop, language theory work like MiniRust and how teams keep large Rust codebases clean.
tldw: Rust web frameworks are finally close to JS parity and often better on server performance. This talk walks through Leptos, Dioxus, SSR, bundle splitting, and lazy loading to make the case for end to end Rust web apps.
tldw: Microsoft is sharing its journey of adopting Rust, highlighting both the successes and challenges faced along the way.
tldw: Ten years of Redox OS and Rust unpack how you actually build a real OS in Rust, with stories about tradeoffs, tooling, and where systems programming goes next, definitely worth the watch.
tldw: Johan argues Rust can be a truly high-level app platform and shows how Dioxus tackles ergonomics with linker-based asset bundling, cross-platform deployment, and sub-second hot reload, so go watch it.
tldw: New work on high-performance 2D vector path and text rendering introduces sparse strips plus CPU, GPU and hybrid modes to make rendering much faster and far easier to integrate, definitely worth watching if you build graphics or UI engines.
tldw: This talk demos an open-source Rust Coder that gets LLMs to generate, compile, run, and iterate full Cargo projects with real compiler and test feedback, showing how to make AI actually produce reliable Rust code.
tldw: C++ and Rust interop is messy but solvable, and this talk walks through manual versus CXX generated bindings, wiring CMake to Cargo, and handling transitive C++ deps with Conan so you can actually ship hybrid code.
tldw: A hands-on comparison of modern C++ features and their Rust counterparts, with code examples that expose practical trade-offs and show where Rust actually changes how you design systems, definitely worth a watch.
tldw: This talk presents MiniRust, a precise, executable core language that pins down Rust’s undefined behavior with a Rust-to-MiniRust lowering and a reference interpreter you can test against, watch it if you care about making your unsafe code less mysterious.
“From Rust to C and Back Again — by Jack O’Connor — Seattle Rust User Group, April 2025”
Conference ⸱ +4k views ⸱ Apr 27, 2025 ⸱ 00h 48m 38s
tldw: A no nonsense hands on tour of calling C from Rust using the cc crate and bindgen, with build and link demos, common gotchas, and linked code.
“Rust under the Hood — by Sandeep Ahluwalia — Seattle Rust User Group, January 2025”
Conference ⸱ +4k views ⸱ Mar 03, 2025 ⸱ 00h 42m 52s
tldw: This talk dives into ownership, the borrow checker, lifetimes and performance tradeoffs to give a practical, no-fluff look at what actually makes Rust safe and fast.
“Rust for Web Apps? What Amazon’s Carl Lerche Knows”
Conference ⸱ +3k views ⸱ Jul 21, 2025 ⸱ 00h 43m 25s
tldw: Check out this talk from an Amazon Tokio core maintainer arguing Rust can be a killer choice for web apps, sharing some good tips on async, tooling, ergonomics, and deployment tradeoffs.
“Are We Desktop Yet? - Victoria Brekenfeld | EuroRust 2025”
Conference ⸱ +2k views ⸱ Nov 04, 2025 ⸱ 00h 36m 16s
tldw: Building a whole Linux desktop in Rust sounds crazy, and this talk follows System76’s COSMIC journey, covering ecosystem gaps, a bespoke Rust GUI toolkit and compositor, plus hard-won engineering lessons worth watching.
“Building and Maintaining Rust at Scale - Jacob Pratt | EuroRust 2025”
Conference ⸱ +2k views ⸱ Nov 05, 2025 ⸱ 00h 31m 56s
tldw: Discover how to make your Rust code exemplary and maintainable at scale with insights on design patterns, idioms, and practical tips for structuring large codebases.
“Rust Traits In C++ - Eduardo Madrid - C++ on Sea 2025”
Conference ⸱ +1k views ⸱ Oct 26, 2025 ⸱ 00h 57m 52s
tldw: This talk shows how Rust-style traits can be reproduced in C++ with type erasure to give non-intrusive, often faster runtime polymorphism, and it’s worth watching if you hack on C++ and care about clean, fast abstractions.
Any suggestions? Leave a comment.
Go this year is about runtime and tooling improvements while staying very production focused. These talks cover Go 1.25 features, coding agents, the build toolchain, observability, security, testing, performance work and what it takes to run high-availability systems in Go.
tldw: Go 1.25 packs a bunch of language, toolchain, and stdlib changes like iterators, PGO, and FIPS; this talk gives a technical tour of what’s been released in early August and why it matters for Go users.
tldw: A live-coding introduction to building a practical coding agent with Ollama and gpt-oss that shows how to list, read, and edit files while explaining tool-calling and reasoning, so you can steal the patterns and actually build one yourself.
tldw: A long-time Go developer distills the idioms, naming and package patterns they keep repeating, including main.run and guard clause style, organizing files and packages, using helpers and generic algos instead of duplicated logic, and argues the Sapir-Whorf hypothesis actually shapes Go code, worth watching.
tldw: The toolexec flag in Go lets you turn every build into a programmable pipeline, and this talk shows how to inject custom analysis, codegen, and observability hooks at compile time with real projects, trade offs, and practical tips to keep builds fast so you can start experimenting right away.
tldw: Watch this practical walkthrough of instrumenting Go services with OpenTelemetry and the LGTM stack, showing when to use traces, metrics, or logs, why context.Context matters, and pragmatic best practices for scalable telemetry, no PhD required.
tldw: Go programmers tend to distrust abstractions, and this talk presents a practical framework for deciding which ones earn their place while unpacking trade-offs and real-world angles you can actually use.
tldw: Go’s security history is oddly quiet for a 15-year-old language, and this talk is a must-watch deep dive into past mistakes, present fixes, and what’s coming to make Go safer.
tldw: Swiss Maps in Go shows how Go 1.24’s reworked map uses clever bit twiddling and new SIMD compiler tricks to squeeze real-world performance out of your CPU while spelling out the gotchas you actually need to know.
tldw: This talk shows how Just Eat’s microservice toolkit bootstraps Go services, wires event-driven workflows, and auto-generates infra-as-code and CI/CD so you can realistically deploy complex systems in minutes.
“Climbing the Testing Pyramid: From Real Service to Interface Mocks in Go - Naveen Ramanathan”
Conference ⸱ +1k views ⸱ Sep 17, 2025 ⸱ 00h 48m 12s
tldw: Testing Go services that hit S3, this talk walks through practical strategies from testing against real S3 and Toxiproxy-driven network chaos to LocalStack, httptest/httpmock and interface-based mock generation so you can actually test failure modes.
“When Failure Is Not an Option: Surviving Cloud Outages in Go - Kevin Holditch”
Conference ⸱ +1k views ⸱ Sep 18, 2025 ⸱ 00h 30m 54s
tldw: This talk shows how a real-time payments team moved from single-cloud Java to a cloud-agnostic active/active/active Go platform on Kubernetes, CockroachDB and NATS to meet bank level SLAs and run 24 hour provider kill tests in production, so watch if you want to see multi-cloud actually work.
“Hello, MCP World! - Daniela Petruzalek”
Conference ⸱ +1k views ⸱ Sep 17, 2025 ⸱ 00h 30m 18s
tldw: Model Context Protocol aims to standardize how apps talk to LLMs, and this talk breaks down the client/server building blocks, transports, tools, prompts and resources while showing practical Go examples that make AI-assisted coding and writing actually usable and worth watching.
“Deep dive into a go binary - Jesús Espino”
Conference ⸱ +1k views ⸱ Sep 18, 2025 ⸱ 00h 49m 33s
tldw: A deep dive into Go binaries unpacks ELF sections, runtime metadata, and the linking tricks hidden inside your executable, perfect for devs who love poking under the hood.
“The Quest for Speed: Journey to 50% Better P99 Times with Go - David Vella”
Conference ⸱ +1k views ⸱ Sep 17, 2025 ⸱ 00h 50m 43s
tldw: A gritty engineering postmortem on halving P99 in Go that walks through profiling, real monitoring, and the common Go anti patterns that were killing latency, and is packed with practical fixes.
“A Gopher’s Guide to Vibe Coding - Daniela Petruzalek”
Conference ⸱ +900 views ⸱ Sep 18, 2025 ⸱ 01h 01m 20s
tldw: Vibe coding is the new buzz in Go and this talk walks through building testquery to evaluate whether vibe coding holds up for production across speed, quality, idiomatic code, maintainability and testability, offering pragmatic lessons not hype.
Any suggestions? Leave a comment.
This year, the Python community is doubling down on community building and mentoring, virtual env tooling, AST-powered libraries, large data handling, new chips, RAG patterns, and modern data workflows across Polars, DuckDB and friends. Also, check out Python: The Documentary, if you missed it!
tldw: This talk shows how to break free from the cycle of endless tutorials and actually start developing your own projects, with helpful tips on design, structure, and using AI tools, applicable to any programming language.
tldw: How Big Tech rigs the internet and what developers can actually do to take back control.
tldw: Learn how to create a cross-platform GUI for your Python projects, and discover how to deploy your app seamlessly across desktops and mobile devices without changing any code.
tldw: See how helping others teaches you as much as being mentored, with simple, practical habits to beat impostor syndrome and actually get better as a developer.
tldw: Discover how Structured RAG outperforms traditional RAG in answering complex queries over vast datasets and get a sneak peek at a new Python library designed to enhance long-term AI memory.
tldw: Virtual environments are moving from something you manually manage to something that happens for you. This talk looks at their history, the shift from imperative to declarative management, and practical UX with tools like uv while flagging current limitations.
tldw: Your tools and processes quietly push your code into shapes you didn’t intend, so watch this talk to see what “design pressure” looks like and learn how to spot and steer it.
tldw: AST parsing can supercharge Python libraries by rewriting code before it reaches C extensions. This talk shows practical AST transforms from an open-source computation graph library and assumes familiarity with the ast module and the Python C API.
tldw: Large JSON files in Python can quietly eat all your memory and either slow things to a crawl or crash your jobs. This talk shows how to measure memory, why JSON is so wasteful, and practical fixes like lean in-memory formats, loading only what you need, streaming parsers, and using JSON Lines.
“Marco Gorelli - How Narwhals brings Polars, DuckDB, PyArrow, & pandas together | PyData London 25”
Conference ⸱ +4k views ⸱ Jul 03, 2025 ⸱ 00h 36m 02s
tldw: Narwhals shows how to make Polars, DuckDB, PyArrow, pandas, and cuDF work together so your feature engineering tool can accept any dataframe backend.
“Taming file zoos: Data science with DuckDB database files - Alex Monahan”
Conference ⸱ +3k views ⸱ Jun 02, 2025 ⸱ 00h 28m 32s
tldw: DuckDB makes taming messy file zoos in Python actually straightforward, letting you query CSVs, Parquet, Excel and Google Sheets, persist many tables in one fast ACID columnar file, and run analyses larger than memory. Watch the talk for hands on demos in notebooks and scripts.
“The Hidden Potential of Python’s Dunder Methods | Eti Ijeoma | Conf42 Python 2025”
Conference ⸱ +3k views ⸱ Feb 06, 2025 ⸱ 00h 37m 31s
tldw: Watch this talk to learn how Python’s dunder methods really work with practical and advanced examples plus clear best practices for writing cleaner, more powerful classes.
“PyXL: A Chip That Runs Python at Turbo Speeds - Ron Livne”
Conference ⸱ +3k views ⸱ Jun 10, 2025 ⸱ 00h 26m 45s
tldw: A custom processor runs Python in hardware and outperforms high-end CPUs like the M1 Pro on per-cycle efficiency. Watch to see how your existing Python code could get dramatic speedups with no code changes.
“Keynote: “Python: the Documentary” Q&A — Paul Everitt, Armin Ronacher, Brett Cannon”
Conference ⸱ +3k views ⸱ Aug 28, 2025 ⸱ 00h 55m 27s
tldw: Veteran Python contributors trace how the language went from an Amsterdam side project to powering AI at the biggest companies. Less marketing, more real stories; watch the screening and Q&A.
“Joris Bekkers - Cutting Edge Football Analytics in Python | PyData London 25”
Conference ⸱ +3k views ⸱ Jun 28, 2025 ⸱ 00h 31m 34s
tldw: Polars, Keras, and Spektral get used to turn millions of player coordinates into pro level football metrics and real time prediction models. Watch to see practical code and GNN methods applied to public tracking data.
Any suggestions? Leave a comment.
Kotlin community this year is pushing hard on multiplatform, AI and better ergonomics on top of the JVM. These talks cover Java-to-Kotlin migrations, Compose on mobile and web, Kotlin WASM, Spring Boot 4 support, AI agents, better error handling and running full-stack apps with just Kotlin.
tldw: See how Uber migrated millions of lines of Java to Kotlin while building tooling and AI powered pipelines to generate migration datasets, manage risk and handle thousands of PRs.
tldw: This talk explains what MCP is and how apps provide context to LLMs, and shows a Kotlin library that connects models to tooling like IDEs. Worth watching if you build AI tooling or want a practical demo of it in action.
tldw: Swift 6’s strict concurrency and actors explained for Kotlin developers, showing how they prevent data races and what Kotlin might borrow. Worth watching if you use Kotlin coroutines.
tldw: This talk shows how to break free from the usual mobile UI design by using Compose to create visually stunning interfaces inspired by video games like Persona 5.
tldw: Compose Multiplatform is stable on iOS now, so you can build shared UIs for iOS and Android. Watch this talk to see the API changes and new tooling for IntelliJ IDEA and Android Studio so you can ship production apps.
tldw: KotlinConf’25 Keynote covers the language roadmap, multiplatform progress, server-side work and how Kotlin is being used with AI. Less hype, more concrete takeaways, worth watching if you build backend, mobile or cross-platform apps.
tldw: Kotlin’s nullability model gets extended into restricted union types to model errors more explicitly. Watch to see how nullable types, exceptions, sealed classes and Result-like classes compare and what a richer error model looks like in practice.
tldw: Kotlin and Spring show how a modern language on a mature ecosystem can make server side development feel cleaner and faster, with real app code and a candid account of moving from Java and using structured RAG.
tldw: See how to get a full stack side project running entirely in Kotlin, from Postgres CRUD and coroutines to a high-fidelity Vaadin UI without writing any HTML CSS or JavaScript.
“State of Kotlin Wasm and Compose Multiplatform for Web on Modern Browsers | Pamela Hill”
Conference ⸱ +12k views ⸱ Jul 18, 2025 ⸱ 00h 13m 45s
tldw: Kotlin Wasm and Compose Multiplatform show how Kotlin multiplatform runs in modern browsers and what to expect from the upcoming Beta, worth a watch if you build web or Kotlin apps.
“Building AI Agents in Kotlin with Koog | Vadim Briliantov”
Conference ⸱ +11k views ⸱ Jul 05, 2025 ⸱ 00h 46m 28s
tldw: Koog brings AI agents to Kotlin and shows how LLMs can take charge of dynamic workflows. Less theory, more concrete building blocks and basic workflows you can use in real Kotlin projects.
“Next level Kotlin support in Spring Boot 4 | Sébastien Deleuze”
Conference ⸱ +9k views ⸱ Jul 11, 2025 ⸱ 00h 39m 42s
tldw: Spring Boot 4 finally brings a Kotlin 2 baseline, JSpecify null-safety, GraalVM native image support and new DSLs so watch this talk if you build Kotlin on Spring and want faster runtimes and nicer APIs.
“Kotlin and Compose Multiplatform Patterns for iOS Interop | John O’Reilly”
Conference ⸱ +9k views ⸱ Jul 09, 2025 ⸱ 00h 37m 06s
tldw: Kotlin Multiplatform and Compose Multiplatform show how to handle iOS interop with real world UI and non-UI patterns from five years of experience. Worth watching if you’re sharing UI between Android and iOS.
“Compose Prototyping in Kotlin Notebooks | Christian Melchior”
Conference ⸱ +9k views ⸱ Jul 30, 2025 ⸱ 00h 15m 23s
tldw: Kotlin Notebooks mix markdown and REPL style code and now support Compose and Swing for live UI prototyping. This lightning talk shows how to hook up existing UI code or build new views, run them inside the notebook and publish the results as docs on GitHub or Gists.
“Build Websites in Kotlin & Compose HTML with Kobweb | David Herman”
Conference ⸱ +7k views ⸱ Aug 21, 2025 ⸱ 00h 41m 16s
tldw: Kobweb turns Kotlin and Compose HTML into a real web stack you can actually use, with live coding that builds and exports a full site so you can see how Kotlin handles HTML CSS and the DOM.
Any suggestions? Leave a comment.
There are plenty of talks this year about upgrading legacy C++ projects. The most-watched C++ talks cover migration to C++20, new C++26 features, memory safety efforts like Fil-C, lock-free queues, contracts, better polymorphism, graphics work with SDL3, and practical Rust and JavaScript interop.
Join 7,200+ Software Engineers & Engineering Leads who receive a free weekly email with all the recently published podcasts & conference talks. Stop scrolling through messy YT subscriptions. Stop FOMO. Easy to unsubscribe. No spam, ever.
This issue is free for everyone, so feel free to share it or forward it.
Enjoy ☀️
2025-12-04 23:48:07
In my web development career, I've so far worked with three quote-unquote "10X" (or "cracked") developers across three different projects.
They are amazing at work, complete more tickets per sprint than the rest of the team combined, work 12+ hours a day, skip weekends, rarely take any leave, and most importantly, they can get you a POC in less than a week or two (a bond they might be cursed with forever).
A 10X developer’s relationship with POCs often begins with an ambitious feature request.
The lead proposes that we first make a POC, do some feasibility analysis, and run user acceptance tests. "But we don't have much time", says the product manager. They look at the 10X developer and ask, "We can do this in two weeks, right?"
The 10X developer grinds for two weeks and delivers a prototype that's “good enough.” It goes into testing and passes with flying colours. The prototype is merged into the main project. Success.
A new feature request comes in. The POC is assigned to the 10X developer. This time, the prototype doesn't pass testing, and the feature is marked as not feasible, saving months of development time. Success again.
Over time (sprint after sprint), the 10X developer gets assigned more and more POC work. Eventually, they stop doing any bug fixing, enhancements, upgrades, or maintenance. All they do now is create POCs.
Every big and small feature now needs a POC, and who handles that? Correct: the 10X developer.
As more and more POC work gets assigned, they get less and less time to actually work on it.
The prototype quality drops from good enough to barely functional, and they start writing clever but garbage code that's hard to understand and a nightmare to maintain. Once the prototype is merged, they don't have time to continue working on that feature, as they have five more POCs to work on. That work is assigned to someone else in the team (yes, sometimes it's you).
You realise the prototype that was “supposedly” built to handle edge cases barely works even on the happy path. Product believes the prototype accounts for the future specs (it was developed by the 10X developer, after all), but only you know that it barely covers the current ones.
They grow out of touch with actual, real-world project work. They now live in the POC world, where everything is a green field project. Their system never breaks; it's always someone else's faulty code. Anyways, they don't have time to fix it. They’ve already got ten more POCs to work on. And now it’s your responsibility to fix it.
They start shipping more features in a quarter than the rest of the team combined in a year. Meanwhile, you get less and less time to stabilise the prototype and get it into production. The 10X developer assures you it's already “production-ready.” You just need to iron out the cloth; the only issue is that the cloth is full of holes.
Eventually, everyone on the team starts resenting having to pick up the pieces from the 10X developer's prototypes. No one wants to work with them anymore.
What was once an outlier becomes the baseline. People start burning out, start leaving the project, and with each departure, the project slips from bearable to broken.
What was once a thriving codebase and a colourful team becomes a barren, forgotten land.
2025-12-04 23:42:41
This is Lesson 1 of a free 8-lesson Git course on Nemorize. Continue learning with spaced repetition quizzes.
If you've ever felt lost using Git, copy-pasting commands from Stack Overflow without understanding what they do, you're not alone. The problem isn't you — it's that most Git tutorials teach you commands before teaching you concepts.
Imagine learning to drive by memorizing "turn the wheel 37 degrees left, press pedal 2 inches" without understanding that the wheel controls direction and the pedal controls speed. That's how most people learn Git!
This lesson builds the mental model that makes Git intuitive. Once you understand how Git thinks, the commands will make perfect sense. 🧠
Let's start with the big reveal: Git is not primarily a version control system. At its core, Git is a content-addressable filesystem with a version control interface built on top.
What does "content-addressable" mean? It means Git stores data (files, directories, commits) and gives each piece a unique address based on its content. This address is a SHA-1 hash — a 40-character string like a3f5c9b2e8d1f6a7b4c3e2d1f0a9b8c7d6e5f4a3.
🤔 Did you know? The same file content always produces the same hash, no matter where or when you save it. This makes Git incredibly efficient — if two branches have the same file, Git only stores it once!
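You don't have to take this on faith: Git exposes its hashing through the low-level hash-object command. Hash the same content twice and you get the same address both times (the hash below is what Git produces for the string "test content" plus the trailing newline that echo adds):
$ echo 'test content' | git hash-object --stdin
d670460b4b4aece5915caf5c68d12f560a9fe3e4
$ echo 'test content' | git hash-object --stdin
d670460b4b4aece5915caf5c68d12f560a9fe3e4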
Here's the fundamental concept that changes everything:
Git does not store changes (diffs). Git stores complete snapshots of your project.
Most other version control systems store the original file and then a series of changes:
Version 1: "Hello world"
Version 2: Change line 1 to "Hello Git"
Version 3: Add line 2 "Welcome"
Git doesn't work this way. Instead, it takes a complete snapshot of your entire project at each commit:
Commit 1: [Complete snapshot of all files]
Commit 2: [Complete snapshot of all files]
Commit 3: [Complete snapshot of all files]
💡 Efficiency tip: Don't worry about storage! If a file hasn't changed between commits, Git doesn't store it again — it just creates a link to the previous version. Git is incredibly space-efficient.
This snapshot model is why Git is so powerful for branching and merging. Each commit is a complete, self-contained snapshot, not dependent on a chain of changes.
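If you'd like to verify the snapshot model yourself, the cat-file plumbing command can walk from a commit to the tree (snapshot) it points to. Here's a sketch, with placeholder hashes standing in for whatever your repository actually contains:
$ git cat-file -p HEAD        # print the latest commit object
tree 8f2e1a...                # pointer to this commit's snapshot (a "tree")
parent f2e4d8...              # pointer to the previous commit
author Jane Developer <jane@example.com> 1705312200 +0000
$ git cat-file -p 8f2e1a      # print the tree: the complete file listing
100644 blob d3a2bc...    index.html
100644 blob b7f19e...    style.css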
Git organizes your work into three distinct areas. Understanding these is crucial:
+------------------+ +------------------+ +------------------+
| WORKING | | STAGING | | REPOSITORY |
| DIRECTORY | ----> | AREA | ----> | (.git folder) |
| | git | (INDEX) | git | |
| Your actual | add | Prepared | commit| Permanent |
| files you edit | | snapshot | | history |
+------------------+ +------------------+ +------------------+
The working directory is your playground — the actual files and folders you see and edit. When you open a file in your text editor, you're working in the working directory.
Think of it as your drafting table where you make changes freely. Nothing here affects Git's history until you explicitly tell Git about it.
The staging area is Git's unique feature that confuses many beginners but becomes incredibly powerful once you understand it.
Think of the staging area as a photography studio where you arrange exactly what you want in your snapshot before taking the picture. You can:
- stage some files while leaving others out
- stage only parts of a file (individual hunks)
- review and adjust what's staged before you commit
The command git add moves changes from the working directory to the staging area.
💡 Why have a staging area? It lets you create clean, logical commits. You might have changed 10 files while working, but you can stage and commit related changes separately, making your history clear and meaningful.
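One practical consequence of having three areas: Git can show you the difference across each boundary. These two commands are enough to see exactly what's where:
$ git diff            # working directory vs. staging area (unstaged changes)
$ git diff --staged   # staging area vs. last commit (what the next commit will contain)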
The repository is Git's permanent database — the .git folder in your project. This is where Git stores:
- every commit you have ever made
- the snapshots (trees and blobs) those commits point to
- branches, tags, and other references
- your repository's configuration
When you run git commit, Git takes everything from the staging area and creates a permanent snapshot in the repository.
⚠️ Important: Once something is committed to the repository, it's nearly impossible to lose. Even if you "delete" a commit, Git usually keeps it for weeks. This is why Git is so safe!
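The safety net behind this claim is the reflog, a local journal of everywhere HEAD has pointed. If a commit ever seems lost, you can usually get it back like this (the hashes shown are placeholders):
$ git reflog                 # every position HEAD has recently occupied
a3f5c9b HEAD@{0}: commit: Add login feature
f2e4d8a HEAD@{1}: commit (initial): Initial commit
$ git checkout a3f5c9b       # jump back to a "lost" commit by its hash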
A commit is Git's fundamental unit. Let's demystify what a commit really is.
A commit contains:
+-----------------------------------+
| Commit: a3f5c9b |
|-----------------------------------|
| Author: Jane Developer |
| Date: 2024-01-15 10:30 |
| Message: "Add login feature" |
|-----------------------------------|
| Parent: f2e4d8a |
|-----------------------------------|
| Snapshot: |
| - index.html [hash: d3a2...]|
| - style.css [hash: b7f1...]|
| - app.js [hash: e8c4...]|
+-----------------------------------+
|
v
+-----------------------------------+
| Commit: f2e4d8a (parent) |
|-----------------------------------|
| Author: John Coder |
| Date: 2024-01-14 15:20 |
| Message: "Initial commit" |
+-----------------------------------+
Each commit points to its parent, creating a chain (actually a directed acyclic graph, but we'll get to that later). This chain is your project's history.
🧠 Mnemonic: Think of commits as linked snapshots — each one is complete but knows where it came from.
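You can make the parent pointers visible in any repository with a log format string (%h is the commit's abbreviated hash, %p its parent's, %s the message); the output below is illustrative:
$ git log --format='%h (parent: %p) %s'
a3f5c9b (parent: f2e4d8a) Add login feature
f2e4d8a (parent: ) Initial commit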
Let's walk through creating a commit step by step, tracking what happens in each area:
Starting state:
Working Directory: (empty new project)
Staging Area: (empty)
Repository: (empty)
Step 1: Create a file
echo "Hello Git" > readme.txt
Working Directory: readme.txt (modified/new)
Staging Area: (empty)
Repository: (empty)
Git doesn't know about this file yet. It's only in your working directory.
Step 2: Stage the file
git add readme.txt
Working Directory: readme.txt
Staging Area: readme.txt (staged for commit)
Repository: (empty)
Git has prepared a snapshot that includes this file. The staging area now holds a copy.
Step 3: Commit the snapshot
git commit -m "Add readme file"
Working Directory: readme.txt
Staging Area: (clean - matches repository)
Repository: Commit a3f5c9b: "Add readme file"
↳ Contains: readme.txt
Git has created a permanent snapshot in the repository! The staging area is now clean because it matches what's in the repository.
This example shows why the staging area is powerful:
Starting state: You have a committed project with index.html and styles.css.
You make changes:
# Fix a bug in index.html
# Add a new feature in index.html
# Update colors in styles.css
Working Directory: index.html (modified)
styles.css (modified)
Staging Area: (clean)
Repository: Last commit
You want two separate commits (one for the bugfix, one for the feature + styling). You can stage selectively:
# Stage only the bugfix lines from index.html
git add -p index.html
# (select only the bugfix hunks)
git commit -m "Fix navigation bug"
Now commit 1 contains only the bugfix. Then:
# Stage the rest
git add index.html styles.css
git commit -m "Add color theme feature"
Now commit 2 contains the feature and styling changes. Your history is clean and logical! 🎉
A file in your project can be in one of several states:
+----------------+ +----------------+ +----------------+
| UNTRACKED | | UNMODIFIED | | MODIFIED |
| (new file, |---->| (committed, |---->| (changed since |
| git doesn't | git | hasn't been | edit| last commit) |
| know about it)| add | changed) | | |
+----------------+ +----------------+ +----------------+
| |
v v
+----------------+ +----------------+
| REMOVED | | STAGED |
| (deleted, | | (ready to be |
| staged for | | committed) |
| deletion) | | |
+----------------+ +----------------+
You can check file states with git status:
$ git status
On branch main
Changes to be committed: # STAGED
(use "git restore --staged <file>..." to unstage)
modified: index.html
Changes not staged for commit: # MODIFIED
(use "git add <file>..." to stage)
modified: styles.css
Untracked files: # UNTRACKED
(use "git add <file>..." to include)
new-feature.js
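Once the long output feels familiar, git status -s compresses the same information into two columns: the left column is the staging area, the right column the working directory:
$ git status -s
M  index.html        # staged (left column)
 M styles.css        # modified but not staged (right column)
?? new-feature.js    # untracked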
Commits form a graph structure. Here's a simple linear history:
A <-- B <-- C <-- D (main)
^
HEAD
Each letter is a commit. Arrows point to parent commits. main is a branch (just a pointer to a commit). HEAD points to where you are now.
When you create a new commit:
git commit -m "Add new feature"
Git creates commit E:
A <-- B <-- C <-- D <-- E (main)
^
HEAD
The main branch moves forward automatically, and HEAD follows it.
💡 Key insight: Branches are just moveable pointers to commits. Creating a branch doesn't copy any files — it just creates a new pointer!
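You can verify this on disk. Assuming your repository stores refs as loose files (the default in most setups), a branch is literally a tiny file containing one commit hash; the hash here is a placeholder:
$ cat .git/refs/heads/main
d5e7f8a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7
$ git branch experiment      # "creating a branch" just writes one more pointer
$ cat .git/refs/heads/experiment
d5e7f8a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7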
Once you understand that:
- Git stores snapshots, not diffs
- your work flows through three areas (working directory → staging area → repository)
- commits are linked snapshots
- branches and HEAD are just pointers
...then all Git commands become logical:
- git add = "Move this to staging"
- git commit = "Create a snapshot from staging"
- git checkout = "Move HEAD to a different commit"
- git branch = "Create a new pointer"
- git merge = "Combine two commit histories"
- git rebase = "Replay commits on a different base"

No more memorizing! Each command is just manipulating these fundamental structures.
❌ Wrong thinking: "Git saves the changes I made to each file."
✅ Correct thinking: "Git saves complete snapshots. It can show me diffs by comparing snapshots."
Why it matters: This misconception makes branching and merging seem scary. If you think Git stores diffs, you imagine complex chains of changes that might conflict. But since Git stores complete snapshots, branches are just different snapshot sequences!
❌ Wrong: "If I delete a file, it's gone from Git forever."
✅ Correct: "If I delete a file from my working directory, it's still in the repository until I commit that deletion."
Example:
rm important.txt # Deleted from working directory only!
git status # Shows "deleted: important.txt"
git restore important.txt # Brings it back from repository!
The file is safe in the repository until you git add the deletion and commit it.
❌ Wrong pattern:
git add .
git commit -m "Made changes"
This stages everything indiscriminately, leading to messy commits that mix unrelated changes.
✅ Better pattern:
git add file1.js file2.js # Only related changes
git commit -m "Add user authentication"
git add file3.css
git commit -m "Update button styles"
Each commit has a clear, single purpose.
❌ Wrong: "I'll wait until everything is perfect before committing."
✅ Correct: "I'll commit small, logical units of work frequently."
Why: Frequent commits give you more save points to return to. If you break something, you can easily go back. Commits are cheap — Git is designed for many small commits, not few giant ones.
❌ Wrong: "HEAD is magical and confusing."
✅ Correct: "HEAD is just a pointer showing where I am right now (usually pointing to a branch, which points to a commit)."
HEAD --> main --> commit D
When you checkout a different branch, HEAD moves:
HEAD --> feature --> commit F
That's it! No magic.
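And HEAD itself is just a one-line file inside .git, which you can read directly:
$ cat .git/HEAD
ref: refs/heads/main             # HEAD points at a branch...
$ git log -1 --format=%H main    # ...and the branch points at a commit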
To recap:
- Git is a content-addressable filesystem that stores data by content hash
- Git stores snapshots, not diffs — each commit is a complete picture of your project
- Three areas exist:
  - Working directory (your actual files)
  - Staging area (the prepared snapshot)
  - Repository (the permanent history in the .git folder)
- Commits are linked snapshots containing:
  - a complete snapshot of all files
  - author, date, and message
  - pointer(s) to parent commit(s)
  - a unique SHA-1 hash
- The staging area lets you craft clean, logical commits by selecting exactly what to snapshot
- Branches are just moveable pointers to commits — cheap and easy to create
- HEAD is a pointer showing where you are (usually points to a branch)
- This mental model makes all Git commands logical — they're just manipulating these structures
🔧 Try this: After this lesson, run git status in one of your projects. You'll now understand exactly what each section means: which area each file is in, and what Git is telling you about the state of your project!
Further reading:
- Git Mastery — Version Control Without Fear (Nemorize): https://nemorize.com/preview/019ae981-1807-79b3-b117-358d2262f429
- Git Internals - Plumbing and Porcelain (Official Git Book): https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain (a deep dive into how Git stores objects and creates snapshots)
- A Visual Git Reference: https://marklodato.github.io/visual-git-guide/index-en.html (excellent diagrams showing how Git commands affect the three areas)
- Git from the Bottom Up: https://jwiegley.github.io/git-from-the-bottom-up/ (builds Git understanding from the object database up)
+------------------------------------------------------------------+
| GIT MENTAL MODEL CHEAT SHEET |
+------------------------------------------------------------------+
| CORE CONCEPT |
| Git stores SNAPSHOTS, not changes |
| Each commit = complete picture of your project |
+------------------------------------------------------------------+
| THREE AREAS |
| Working Directory → Staging Area → Repository |
| (your files) (prepared) (permanent) |
+------------------------------------------------------------------+
| KEY COMMANDS |
| git add <file> Move working → staging |
| git commit -m "..." Move staging → repository (create snapshot)|
| git status See state of all three areas |
| git log View commit history |
+------------------------------------------------------------------+
| WHAT IS A COMMIT? |
| ✓ Complete snapshot of all files |
| ✓ Author, date, message |
| ✓ Parent commit pointer(s) |
| ✓ Unique SHA-1 hash (address) |
+------------------------------------------------------------------+
| FILE STATES |
| Untracked → git doesn't know about it |
| Unmodified → committed, hasn't changed |
| Modified → changed since last commit |
| Staged → ready to be committed |
+------------------------------------------------------------------+
| REMEMBER |
| • Commits are CHEAP → commit often! |
| • Staging area = craft logical commits |
| • Branches = just pointers to commits |
| • HEAD = where you are now |
+------------------------------------------------------------------+
You now have the foundational mental model that makes Git make sense! In the next lessons, we'll build on this foundation to master branching, merging, rebasing, and fixing mistakes — all of which will be easy now that you understand how Git really works. 🚀
2025-12-04 23:40:54
Tired of juggling multiple React packages in a messy monorepo? Meet Phyre – a lightweight React SSR framework designed to bring order, simplicity, and scalability to your projects.
Here’s why it’s different:
⚡ Fast SSR with React Router 7
📁 Automatic file-based routing (including dynamic routes)
🗂️ Monorepo support out-of-the-box – share packages and components effortlessly
🔄 Hot Module Reload for instant feedback
🎨 Built-in Tailwind CSS integration – style without headaches
🔒 Type-safe environment variables & routing
Phyre was built to scale with your team while keeping things neat, predictable, and fast. Whether you’re managing multiple packages, micro-frontends, or just love clean structure, Phyre has your back.
Curious? Dive into the docs and examples here: Phyre Documentation
Let’s talk about monorepos, SSR, and clean architecture – I’d love to hear how you handle scale in React apps!