The Practical Developer
A constructive and inclusive social network for software developers.

Determine High-Performing Database Solutions

2026-02-08 13:42:07

Exam Guide: Solutions Architect - Associate
⚡ Domain 3: Design High-Performing Architectures
📘 Task Statement 3.3

🎯 Determining High-Performing Database Solutions is about picking and designing databases that meet:

1 Performance goals
2 Scale requirements
3 Availability expectations
4 Operational constraints

Start with the data model and access pattern (relational vs key-value vs document), then choose the service, then add performance boosters: read replicas, caching, and connection pooling.

Knowledge

1 | AWS Global Infrastructure

Availability Zones And Regions

  • Multi-AZ deployments improve availability and reduce the impact of failures on performance.
  • Multi-region designs support disaster recovery and global performance.

“Must survive AZ outage” → Multi-AZ
“Global users with low latency” → global DB patterns

2 | Caching Strategies And Services

Amazon ElastiCache

Caching reduces database load and improves latency.

  • ElastiCache for Redis: caching + sessions + pub/sub + sorted sets
  • ElastiCache for Memcached: simple, distributed cache, no persistence

“Reduce read load / hot keys / repeated queries” → ElastiCache.
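
To make the cache-aside pattern concrete, here is a minimal sketch with the redis-py client against an ElastiCache for Redis endpoint; the endpoint, key scheme, and query_database helper are placeholders, not anything AWS-specific:

import json
import redis  # redis-py works against ElastiCache for Redis endpoints

cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)  # placeholder endpoint

def query_database(product_id):
    # placeholder for the real relational query (e.g., via your ORM)
    return {"id": product_id, "name": "example"}

def get_product(product_id, ttl_seconds=300):
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:                 # cache hit: the database is never touched
        return json.loads(cached)
    row = query_database(product_id)       # cache miss: hit the database once
    cache.set(key, json.dumps(row), ex=ttl_seconds)  # then store the result with a TTL
    return row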

3 | Data Access Patterns

Read-Intensive vs Write-Intensive

This is one of the most important drivers of database design:

1 Read-heavy → add caching, read replicas, or purpose-built read scaling
2 Write-heavy → consider partitioning/sharding patterns, or DynamoDB if it fits
3 Spiky traffic → serverless options or buffering with queues

4 | Database Capacity Planning

Capacity Units, Instance Types And Provisioned IOPS

1 RDS/Aurora performance depends on instance size, storage type, and sometimes Provisioned IOPS
2 DynamoDB uses RCUs/WCUs (or on-demand) and partition design affects performance
3 High-performance workloads often need correct sizing plus monitoring
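
For a feel of the provisioned-capacity math, a quick worked example with made-up traffic numbers: one RCU covers one strongly consistent read per second for an item up to 4 KB (eventually consistent reads cost half), and one WCU covers one write per second for an item up to 1 KB.

import math

item_size_kb = 6          # hypothetical average item size
reads_per_sec = 100       # strongly consistent reads
writes_per_sec = 20

rcu = reads_per_sec * math.ceil(item_size_kb / 4)   # 100 * 2 = 200 RCUs
wcu = writes_per_sec * math.ceil(item_size_kb / 1)  # 20 * 6 = 120 WCUs
print(rcu, wcu)  # 200 120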

5 | Database Connections And Proxies

Connection limits are a common real-world and exam bottleneck.

Amazon RDS Proxy pools connections, smooths spiky connection patterns (especially from Lambda), and reduces failover impact and connection storms.

“Serverless app is exhausting DB connections” → RDS Proxy.
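
A minimal sketch of a Lambda handler connecting through an RDS Proxy endpoint with PyMySQL; the endpoint, credentials, and schema are placeholders, and a real function would pull credentials from Secrets Manager or use IAM authentication:

import pymysql

# Connect to the proxy endpoint, not the database endpoint; the proxy pools
# connections so many concurrent Lambda invocations don't exhaust the database.
connection = pymysql.connect(
    host="my-app-proxy.proxy-xxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    user="app_user",
    password="from-secrets-manager-in-real-life",  # placeholder
    database="appdb",
    connect_timeout=5,
)

def handler(event, context):
    # Reusing the module-level connection across warm invocations keeps
    # per-invocation cost low; the proxy absorbs cold-start connection spikes.
    with connection.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")
        (count,) = cur.fetchone()
    return {"orders": count}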

6 | Database Engines With Appropriate Use Cases

Homogeneous vs Heterogeneous Migrations

  • Homogeneous migration: same engine to same engine (e.g., MySQL → MySQL)
  • Heterogeneous migration: different engines (e.g., Oracle → PostgreSQL)

AWS DMS is commonly used for migrations (especially when minimal downtime is required).

7 | Database Replication

Read Replicas

Read replicas are mainly for:
1 Scaling reads
2 Offloading reporting/analytics queries
3 Cross-region read performance (depending on engine)

Reminder:

  • Read replicas are usually asynchronous
  • Multi-AZ is for availability, not for read scaling
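
Creating a replica is a single API call; a hedged boto3 sketch, with placeholder identifiers:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an asynchronous read replica of an existing instance (identifiers are placeholders).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
    AvailabilityZone="us-east-1b",  # optional: place the replica in another AZ
)

The application then sends read-only traffic (reports, dashboards) to the replica endpoint, while writes continue to go to the primary, and replication lag is watched via the ReplicaLag CloudWatch metric.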

8 | Database Types And Services

Relational (SQL)

Amazon RDS: MySQL, PostgreSQL, MariaDB, Oracle, SQL Server
Amazon Aurora: MySQL/PostgreSQL-compatible, high performance, managed

Non-relational (NoSQL)

Amazon DynamoDB: key-value/document, massive scale, low latency

In-memory

ElastiCache: Redis/Memcached (cache, sessions)

Serverless Database Patterns

Aurora Serverless v2: elastic relational capacity

Skills

A | Configure Read Replicas To Meet Business Requirements

You Should Know When And Why

1 Add replicas to scale reads and isolate reporting workloads
2 Place replicas in other AZs or Regions if needed (engine-dependent)
3 Monitor replication lag and route read traffic appropriately

B | Design Database Architectures

Typical high-performing patterns:
1 App → (optional cache) → DB
2 Multi-AZ for HA
3 Read replicas for scaling reads
4 Shard/partition when required (more advanced, usually not primary SAA topic)
5 Offload analytics to separate systems when needed

C | Determine An Appropriate Database Engine

MySQL vs PostgreSQL, etc.

Expectation: pick based on compatibility/features/organization standards rather than arguing favorites.

1 Choose MySQL/Aurora MySQL when compatibility with MySQL ecosystem is needed.
2 Choose PostgreSQL/Aurora PostgreSQL when advanced SQL features/extensions are needed.
3 Choose commercial engines (Oracle/SQL Server) when required by licensing/app constraints.

D | Determine An Appropriate Database Type

Aurora vs DynamoDB

Fast rules:
1 Need joins/transactions/relational schema → RDS/Aurora
2 Need massive scale + low latency key-value/document → DynamoDB
3 Need sub-millisecond repeated reads → add ElastiCache

DynamoDB vs RDS is a frequent exam decision point.
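
On the DynamoDB side of that decision, access usually looks like direct key lookups rather than joins; a small boto3 sketch with a made-up table and key:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Sessions")  # hypothetical table with partition key "session_id"

# Key-value access pattern: fetch one item directly by its key, no joins or scans.
item = table.get_item(Key={"session_id": "abc-123"}).get("Item")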

E | Integrate Caching To Meet Business Requirements

Caching Options

  • ElastiCache for app-side caching of hot data
  • DAX (DynamoDB Accelerator) for DynamoDB read caching (in-memory, managed)

“Microsecond reads for DynamoDB queries” → DAX (if DynamoDB is the DB).
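
DAX exposes a DynamoDB-compatible interface, so adopting it is mostly a client swap. A sketch using the amazondax Python package; the cluster endpoint and table are placeholders, and the exact constructor options should be checked against the current DAX client docs:

import amazondax

# The DAX client mirrors the boto3 DynamoDB resource interface, so existing
# get_item/query calls work unchanged; only the client construction differs.
dax = amazondax.AmazonDaxClient.resource(
    endpoint_url="daxs://my-cluster.xxxxxx.dax-clusters.us-east-1.amazonaws.com"  # placeholder
)
table = dax.Table("Sessions")  # hypothetical table
item = table.get_item(Key={"session_id": "abc-123"}).get("Item")  # served from DAX's in-memory cache when hot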

Cheat Sheet

Requirement → Database
Relational, transactions, joins → RDS or Aurora
High performance managed relational → Aurora
Key-value/document, massive scale → DynamoDB
Read-heavy workload → Read replicas + caching
Repeated hot reads / lower latency → ElastiCache (or DAX for DynamoDB)
Lambda too many DB connections → RDS Proxy
Global low-latency reads + DR → Aurora Global Database / DynamoDB Global Tables (if mentioned)
Migrate DB with minimal downtime → AWS DMS

Recap Checklist ✅

1. [ ] Database choice matches data model (relational vs non-relational)

2. [ ] Read-heavy workloads use read scaling (read replicas) and/or caching

3. [ ] Write scaling is considered (correct service + partition design if DynamoDB)

4. [ ] Connection spikes are handled (RDS Proxy when appropriate)

5. [ ] Capacity planning is understood at a high level (instance types, IOPS, RCUs/WCUs)

6. [ ] Multi-AZ is used for availability; read replicas are used for read scaling

7. [ ] Caching is integrated appropriately (ElastiCache/DAX)

AWS Whitepapers and Official Documentation

These are the primary AWS documents behind Task Statement 3.3.

You do not need to memorize them; use them to understand how to design high-performing database solutions.

Core database services

1. Amazon RDS

2. Amazon Aurora

3. Aurora Serverless v2

4. Amazon DynamoDB

Read scaling, HA, and connections

1. RDS Read Replicas

2. RDS Multi-AZ (concepts)

3. Amazon RDS Proxy

Caching

1. ElastiCache (Redis/Memcached)

2. DynamoDB Accelerator (DAX)

Migration

AWS Database Migration Service (DMS)

Capacity planning references

1. DynamoDB Capacity Modes

2. RDS storage options

🚀

Lupine.js - The Lightweight Frontend & Efficient Backend Framework

2026-02-08 13:38:48

Introducing Lupine.js: The "Unreasonably" Efficient Web Framework

In a world dominated by massive meta-frameworks and complex build chains, Lupine.js asks a simple question: What if we could have the power of a modern full-stack framework without the bloat?

Lupine.js is a lightweight (7kb gzipped), full-stack web framework that combines a React-like frontend with an Express-like backend. It is designed from scratch for speed, simplicity, and efficiency.

Why Lupine.js?

1. 🪶 Extremely Lightweight Frontend

The lupine.web frontend package is tiny—just 7kb gzipped. Yet, it retains the developer experience you know and love: TSX syntax (React JSX), components, and hooks. There is no heavy runtime to download, meaning your pages load instantly even on slow connections.

2. ⚡ Built-in Server-Side Rendering (SSR)

Most frameworks treat SSR as an add-on. In Lupine, SSR is a first-class citizen. The lupine.api backend is optimized to render your frontend pages on the server automatically.

  • No FOUC: Critical CSS is injected server-side.
  • Zero-Config SEO: Meta tags (og:image, description) are calculated before the page leaves the server.
  • Sharing Ready: Your links look great on Twitter/Facebook out of the box.

3. 🎨 Native CSS-in-JS Engine

Say goodbye to configuring PostCSS, Tailwind, or styled-components. Lupine includes a powerful CSS-in-JS engine built right in.

  • Scoped Styles: Styles are automatically scoped to your component.
  • Nesting: Support for .parent & syntax.
  • Performance: Styles are extracted and injected efficiently during SSR.

const Button = () => {
  const css = {
    backgroundColor: '#0ac92a',
    '&:hover': {
      backgroundColor: '#08a823',
    },
  };
  return <button css={css}>Click Me</button>;
};

4. 🚀 Full-Stack in One Place

Lupine isn't just a frontend library; it's a complete app solution.

  • Backend (lupine.api): An efficient, minimalist Node.js framework similar to Express.
  • Frontend (lupine.web): A reactive UI library.
  • Dev Experience: Run npm run dev and debug both frontend and backend in a single VS Code session.

Quick Start

Ready to give it a try? You can scaffold a new project in seconds.

Step 1: Create a Project

Use our CLI tool to create a new app.

npx create-lupine@latest my-awesome-app

Step 2: Run it

Enter the directory and start the dev server.

cd my-awesome-app
npm install
npm run dev

Visit http://localhost:11080 and you'll see your first Lupine app running!

Code Frequency & Activity

Lupine is actively developed. You can check our code frequency and contributions directly on GitHub:
👉 https://github.com/uuware/lupine.js

Conclusion

Lupine.js is perfect for developers who want:

  • Control: Understand every part of your stack.
  • Speed: Deliver the fastest possible experience to users.
  • Simplicity: No hidden magic, just clean code.

Give Lupine.js a star on GitHub and try it for your next project!

pdftk is GOAT

2026-02-08 13:37:34

I am not aware of a FOSS PDF editor for Linux (I really need one), and I had a free subscription for Foxit Editor, so I used to boot Windows just to edit PDF bookmarks (my dumb uni can't seem to add them), even though I daily-drive Linux.

I did a quick search, and I found pdftk, an absolute masterpiece.

  1. Dump PDF metadata out
# input the pdf
pdftk random.pdf dump_data_utf8 output output.txt
  2. Edit them

In output.txt, grep for BookmarkBegin and start editing; if it is missing, just append a block at the end of the file (one block per bookmark)

BookmarkBegin
BookmarkTitle: Bookmark Title
BookmarkLevel: 1
BookmarkPageNumber: 1

save your file

  3. Apply the new metadata
pdftk random.pdf update_info_utf8 output.txt output output.pdf
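
If you have a long table of contents, typing the blocks by hand gets old fast; here is a tiny Python sketch (the toc list is placeholder data) that appends bookmark blocks to the dumped metadata before you apply it:

toc = [("Chapter 1", 1), ("Chapter 2", 15)]  # (title, page) pairs; placeholder data

with open("output.txt", "a", encoding="utf-8") as f:
    for title, page in toc:
        f.write("BookmarkBegin\n")
        f.write(f"BookmarkTitle: {title}\n")
        f.write("BookmarkLevel: 1\n")
        f.write(f"BookmarkPageNumber: {page}\n")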

this is so much faster than my old workflow tbh

3 cool things about the `create-vite` CLI you might not have known

2026-02-08 13:35:47

I wanted to learn how to make a pleasant and interactive CLI like the create-vite project scaffolding CLI, so yesterday I took a quick dive into the code of create-vite (note: not vite itself) to study what magic sauce they have and hopefully learn some cool new techniques along the way.

At a glance

Everything starts in the init function of the src/index.ts file

At a glance, we can see that the CLI progresses through 6 stages:

  1. Get (or ask for) the project name and target directory
  2. Handle the target directory if it exists and is not empty
  3. Get the package name (if the project name from step 1 is an invalid NPM package name)
  4. Ask the user to choose a framework and variant
  5. Ask the user if immediate install is desired
  6. Finally, start scaffolding folders and files

The main "stars" of this elegant experience includes:

  • mri library for working with CLI arguments
  • @clack/prompts library for displaying pretty interactive prompts
  • And picocolors for adding colors to the console log

Overall, the create-vite CLI is a pretty straightforward and simple tool. Diving into the code, I learned some interesting details.

Cool thing 1 - All the different CLI flags

In the README, you might see that the create-vite CLI supports the --template flag, but that's not the only one. Here are some more:

  • --overwrite / --no-overwrite: Whether to overwrite a non-empty directory that already exists at your target location
  • --immediate / --no-immediate: Marks your preference for step (5), i.e. whether to install dependencies and start the dev server immediately after scaffolding
  • --interactive (-i) / --no-interactive: Should Vite prompt you for answers, or assume default values? By default, the template is vanilla-ts if none is provided, and overwrite and immediate are false. The no-interactive mode is useful when running create-vite as part of an unmonitored CI/CD pipeline, or when an AI agent is running the command.

Additionally, here is the full list of template names that you can pass to the --template argument:

// vanilla
"vanilla-ts",
"vanilla",

// vue
"vue-ts",
"vue",
"custom-create-vue",
"custom-nuxt",
"custom-vike-vue",

// react
"react-ts",
"react-compiler-ts",
"react-swc-ts",
"react",
"react-compiler",
"react-swc",
"rsc",
"custom-react-router",
"custom-tanstack-router-react",
"redwoodsdk-standard",
"custom-vike-react",

// preact
"preact-ts",
"preact",
"custom-create-preact",

// lit
"lit-ts",
"lit",

// svelte
"svelte-ts",
"svelte",
"custom-svelte-kit",

// solid
"solid-ts",
"solid",
"custom-tanstack-router-solid",
"custom-vike-solid",

// ember
"ember-app-ts",
"ember-app",

// qwik
"qwik-ts",
"qwik",
"custom-qwik-city",

// angular
"custom-angular",
"custom-analog",

// marko
"marko-run",

// others
"create-vite-extra",
"create-electron-vite"

Cool thing 2 - "Support" for AI agent

This is a fun little detail: create-vite uses @vercel/detect-agent to determine if an agent is running the CLI. If isAgent is true and interactive mode is enabled, the CLI will log a helpful message:

To create in one go, run: create-vite --no-interactive --template

Cool thing 3 - Some coding techniques

Here are some cool programming techniques I thought were very interesting:

  1. Determining the package manager used via the npm_config_user_agent environment variable

Ever wondered how a CLI can determine which package manager you used, so it can keep using that one in subsequent commands? It's all thanks to the npm_config_user_agent environment variable. Each package manager sets the variable accordingly (like how pnpm does here).

Example: You can run pnpm config get user-agent to get the full agent string:

pnpm/10.20.0 npm/? node/v20.11.1 linux x64

Then you can split by space and then by slash to get the package manager name.
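
Since the package manager exports that variable to every process it spawns, the same parsing works from any language; a quick Python sketch of the split-by-space-then-slash idea (create-vite itself does this in TypeScript):

import os

# Set by npm/pnpm/yarn/bun for every process they spawn, e.g.
# "pnpm/10.20.0 npm/? node/v20.11.1 linux x64"
user_agent = os.environ.get("npm_config_user_agent", "")

if user_agent:
    name, _, version = user_agent.split(" ")[0].partition("/")
    print(name, version)  # -> pnpm 10.20.0
else:
    print("not launched via a package manager")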

  2. Detect whether standard input is connected to a terminal or not via process.stdin.isTTY.

The terminal input can also be piped in (like cat data.txt | xargs pnpm create-vite), in which case interactivity won't be possible. As a result, the CLI only enables interactive mode if isTTY is true.

  3. Handling Control-C gracefully

After every prompt, I noticed that there is always a check to see if the user has cancelled the command, so that we can gracefully display the message "Operation Cancelled".

if (prompts.isCancel(projectName)) return cancel()

This technique feels so obvious in retrospect (and the @clack/prompts creator also recommends it), but seeing how it is employed in a production-ready code base really cements the idea of handling user cancellation gracefully in CLIs for me.

Wired Django, Nextcloud, Grafana, Loki & Prometheus into a secure observability mesh over Tailnet (metrics & logs, dashboards).

2026-02-08 13:10:03

Building an Observability Mesh with Grafana, Loki, and Prometheus
When multiple backend services start running in isolation, debugging becomes guesswork. My recent sprint was about turning that guesswork into clarity — by wiring up full observability across Django, Nextcloud, Grafana, Loki, and Prometheus.

Goal
Unify logs and metrics across services in a distributed setup — all communicating over Caddy TLS and my Tailnet domain.
I wanted one dashboard that could tell me everything about my system’s health without SSH-ing into individual servers.

Architecture
Here’s the high-level design:

Architecture flow diagram

Stack Overview

  • Prometheus → scrapes metrics from Django and Nextcloud API endpoints

  • Loki → ingests logs from both services

  • Grafana → visualizes metrics and logs together

  • Caddy → reverse proxy with trusted TLS for all endpoints

  • Tailnet (Tailscale) → private network with identity-based access

Everything talks securely — no exposed ports, no unencrypted traffic.

Challenges

1. Grafana showed logs but no metrics
Root cause: Prometheus targets weren’t reachable after moving from localhost to tailnet hostnames.

2. TLS verification issues in Prometheus
Solved by updating Caddy’s certificates and confirming Prometheus scrape configs pointed to HTTPS endpoints.

3. Cross-service routing
Caddy needed to handle routes like /metrics, /api/schema, and /api/* correctly between Django and Nextcloud.

Config Highlights

Here’s a simplified Prometheus scrape config example:

scrape_configs:
  - job_name: "django"
    metrics_path: /metrics
    static_configs:
      - targets: ["X.tail.ts.net:8000"]

  - job_name: "nextcloud"
    metrics_path: /metrics
    static_configs:
      - targets: ["X.tail.ts.net:8080"]

Both routes sit behind Caddy, which handles TLS termination using trusted Tailnet certificates.

Results
Once Prometheus started scraping successfully, Grafana dashboards came alive.

grafana example dashboard

Now I can:

  • Correlate logs and metrics per request

  • Track uptime and performance trends

  • Visualize distributed system behavior across all nodes

It feels like operating my own mini control plane — distributed, secure, and explainable.

Next Steps

  • Add distributed tracing (OpenTelemetry)

  • Define Prometheus alert rules for critical endpoints

  • Automate observability config rollout via CI/CD

Key Takeaway
Observability isn’t an add-on — it’s the nervous system of your infrastructure.
When your servers start talking, you start listening differently.

rut: A Python Test Runner That Skips Unaffected Tests

2026-02-08 13:05:13

Nothing bugs me more than waiting for the computer to do something I already know is pointless. Changing one file and watching 500 unrelated tests run? That's wasted time I'm not getting back.

In 2008, I created doit — a build tool that tracks file dependencies and only rebuilds what changed. Same idea as make, but for Python workflows.

Then I built pytest-incremental — applying the same principle to tests. If you change utils.py, only run tests that depend on it. Skip the rest.

Now there's rut.

Why another test runner?

pytest-incremental requires pytest. pytest's plugins are great individually, but combining multiple plugins into a consistent experience is hard — they step on each other, configuration gets fragile, and debugging interactions is painful.

Codebases have grown orders of magnitude, and AI-assisted workflows are accelerating that further. We need new test infrastructure to keep up. Parallelization helps, but fast turnaround is still king — skipping what doesn't need to run beats running it faster.

rut is simple:

pip install rut

rut              # run all tests, build dependency graph
rut --changed    # run only affected tests

How it works

rut analyzes your import graph. If api.py imports models.py which imports utils.py, and you change utils.py, rut knows to run tests for all three.

Tests for modules that don't depend on utils.py? Skipped.

For well-structured codebases, this typically means 50-80% fewer tests on incremental runs.
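
rut's own graph construction is more involved, but the core idea can be sketched with the standard library's ast module; everything below (file layout, module names) is illustrative, not rut's actual implementation:

import ast
from pathlib import Path

def local_imports(path: Path, project_modules: set[str]) -> set[str]:
    """Return the project-local modules that a file imports (top-level names only)."""
    tree = ast.parse(path.read_text())
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found |= {alias.name.split(".")[0] for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & project_modules

# Build a reverse dependency map: module -> modules that import it.
files = {p.stem: p for p in Path(".").glob("*.py")}
importers: dict[str, set[str]] = {name: set() for name in files}
for name, path in files.items():
    for dep in local_imports(path, set(files)):
        importers[dep].add(name)

# Everything that (transitively) depends on utils.py must be re-tested.
affected, stack = set(), ["utils"]
while stack:
    mod = stack.pop()
    if mod not in affected:
        affected.add(mod)
        stack.extend(importers.get(mod, ()))
print(affected)  # e.g. {"utils", "models", "api"}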

Read more: Dependency Ordering | Incremental Testing

Features

  • Dependency-aware ordering: foundational tests run first, so failures point to root causes
  • Async support: built-in, no plugins needed
  • --dry-run: see what would run without running it
  • unittest compatible: drop-in replacement for python -m unittest

Try it

pip install rut
rut

GitHub