2026-02-08 13:42:07
Exam Guide: Solutions Architect - Associate
⚡ Domain 3: Design High-Performing Architectures
📘 Task Statement 3.3
1 Performance goals
2 Scale requirements
3 Availability expectations
4 Operational constraints
Start with the data model and access pattern (relational vs key-value vs document), then choose the service, then add performance boosters: read replicas, caching, connection pooling.
“Must survive AZ outage” → Multi-AZ
“Global users with low latency” → global DB patterns
Caching reduces database load and improves latency.
“Reduce read load / hot keys / repeated queries” → ElastiCache.
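The cache-aside (lazy loading) pattern behind that rule can be sketched in a few lines. This is a minimal illustration, not AWS code: a plain dict stands in for ElastiCache/Redis, and `query_database` is a hypothetical stand-in for an expensive query against the primary database.

```python
cache = {}

def query_database(key):
    # Stand-in for an expensive SQL query against the primary DB.
    return f"row-for-{key}"

def get_item(key):
    if key in cache:                 # cache hit: no database round trip
        return cache[key]
    value = query_database(key)      # cache miss: hit the database once
    cache[key] = value               # populate the cache for later readers
    return value

print(get_item("user#42"))  # miss -> queries the database
print(get_item("user#42"))  # hit  -> served from the cache
```

In a real deployment you would use a Redis client against the ElastiCache endpoint and set a TTL on each key so stale entries expire.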
This is one of the most important drivers of database design:
1 Read-heavy → add caching, read replicas, or purpose-built read scaling
2 Write-heavy → consider partitioning/sharding patterns, or DynamoDB if it fits
3 Spiky traffic → serverless options or buffering with queues
1 RDS/Aurora performance depends on instance size, storage type, and sometimes Provisioned IOPS
2 DynamoDB uses RCUs/WCUs (or on-demand) and partition design affects performance
3 High-performance workloads often need correct sizing plus monitoring
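For DynamoDB, the capacity math is worth internalizing. A back-of-the-envelope sketch using the documented unit sizes (1 RCU = one strongly consistent read/sec of up to 4 KB, half that for eventually consistent; 1 WCU = one write/sec of up to 1 KB):

```python
import math

def required_rcus(reads_per_sec, item_kb, strongly_consistent=True):
    per_read = math.ceil(item_kb / 4)            # reads are billed in 4 KB units
    rcus = reads_per_sec * per_read
    return rcus if strongly_consistent else math.ceil(rcus / 2)

def required_wcus(writes_per_sec, item_kb):
    return writes_per_sec * math.ceil(item_kb / 1)  # writes are billed in 1 KB units

# 500 eventually consistent reads/sec of 6 KB items:
print(required_rcus(500, 6, strongly_consistent=False))  # -> 500
# 100 writes/sec of 2 KB items:
print(required_wcus(100, 2))                             # -> 200
```

On-demand mode removes this planning step, but the same unit sizes still drive cost, and partition key design still drives whether you hit hot-partition throttling.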
Connection limits are a common real-world and exam bottleneck.
Amazon RDS Proxy pools connections and helps with spiky connection patterns (especially Lambda) and helps reduce failover impact and connection storms.
“Serverless app is exhausting DB connections” → RDS Proxy.
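Why pooling fixes that scenario: many short-lived callers share a small set of long-lived connections instead of each opening their own. This is a conceptual sketch of the idea RDS Proxy implements, not its API; `FakeConnection` is a hypothetical stand-in for a real database driver.

```python
class FakeConnection:
    opened = 0  # counts how many "physical" connections were ever created

    def __init__(self):
        FakeConnection.opened += 1

class Pool:
    def __init__(self):
        self._idle = []

    def acquire(self):
        # Reuse an idle connection when possible; open a new one otherwise.
        return self._idle.pop() if self._idle else FakeConnection()

    def release(self, conn):
        self._idle.append(conn)  # returned to the pool, not closed

pool = Pool()
for _ in range(100):             # 100 sequential "Lambda invocations"
    conn = pool.acquire()
    pool.release(conn)

print(FakeConnection.opened)     # -> 1: one physical connection served all 100
```

Without the pool, 100 invocations would have meant 100 connections — exactly the exhaustion pattern the exam scenario describes.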
Read replicas are mainly for:
1 Scaling reads
2 Offloading reporting/analytics queries
3 Cross-region read performance (depending on engine)
Reminder:
Amazon RDS: MySQL, PostgreSQL, MariaDB, Oracle, SQL Server
Amazon Aurora: MySQL/PostgreSQL-compatible, high performance, managed
Amazon DynamoDB: key-value/document, massive scale, low latency
ElastiCache: Redis/Memcached (cache, sessions)
Aurora Serverless v2: elastic relational capacity
1 Add replicas to scale reads and isolate reporting workloads
2 Place replicas in other AZs or Regions if needed (engine-dependent)
3 Monitor replication lag and route read traffic appropriately
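The routing rule from the list above can be sketched as a small decision function. The endpoint hostnames and lag threshold here are hypothetical, for illustration only:

```python
PRIMARY = "mydb.cluster-xyz.us-east-1.rds.amazonaws.com"     # writer endpoint
REPLICA = "mydb-ro.cluster-xyz.us-east-1.rds.amazonaws.com"  # reader endpoint

MAX_LAG_SECONDS = 5  # tolerance for replica staleness

def pick_endpoint(is_write, needs_fresh_data=False, replica_lag=0.0):
    # Writes and read-after-write paths must go to the primary.
    if is_write or needs_fresh_data:
        return PRIMARY
    # If replication lag (e.g. the ReplicaLag metric) is too high, fall back.
    if replica_lag > MAX_LAG_SECONDS:
        return PRIMARY
    return REPLICA

print(pick_endpoint(is_write=False))                    # routine read -> replica
print(pick_endpoint(is_write=False, replica_lag=30.0))  # lagging -> primary
```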
Typical high-performing patterns:
1 App → (optional cache) → DB
2 Multi-AZ for HA
3 Read replicas for scaling reads
4 Shard/partition when required (more advanced, usually not primary SAA topic)
5 Offload analytics to separate systems when needed
Expectation: pick based on compatibility/features/organization standards rather than arguing favorites.
1 Choose MySQL/Aurora MySQL when compatibility with MySQL ecosystem is needed.
2 Choose PostgreSQL/Aurora PostgreSQL when advanced SQL features/extensions are needed.
3 Choose commercial engines (Oracle/SQL Server) when required by licensing/app constraints.
Fast rules:
1 Need joins/transactions/relational schema → RDS/Aurora
2 Need massive scale + low latency key-value/document → DynamoDB
3 Need sub-millisecond repeated reads → add ElastiCache
DynamoDB vs RDS is a frequent exam decision point.
“Microsecond reads for DynamoDB queries” → DAX (if DynamoDB is the DB).
| Requirement | Database |
|---|---|
| Relational, transactions, joins | RDS or Aurora |
| High performance managed relational | Aurora |
| Key-value/document, massive scale | DynamoDB |
| Read-heavy workload | Read replicas + caching |
| Repeated hot reads / lower latency | ElastiCache (or DAX for DynamoDB) |
| Lambda too many DB connections | RDS Proxy |
| Global low-latency reads + DR | Aurora Global Database / DynamoDB Global Tables (if mentioned) |
| Migrate DB with minimal downtime | AWS DMS |
1. [ ] Database choice matches data model (relational vs non-relational)
2. [ ] Read-heavy workloads use read scaling (read replicas) and/or caching
3. [ ] Write scaling is considered (correct service + partition design if DynamoDB)
4. [ ] Connection spikes are handled (RDS Proxy when appropriate)
5. [ ] Capacity planning is understood at a high level (instance types, IOPS, RCUs/WCUs)
6. [ ] Multi-AZ is used for availability; read replicas are used for read scaling
7. [ ] Caching is integrated appropriately (ElastiCache/DAX)
These are the primary AWS documents behind Task Statement 3.3.
You do not need to memorize them; use them to understand how to design high-performing database solutions.
1. Amazon RDS
2. Amazon Aurora
3. Aurora Serverless v2
4. Amazon DynamoDB
1. RDS Read Replicas
2. RDS Multi-AZ (concepts)
3. Amazon RDS Proxy
1. ElastiCache (Redis/Memcached)
2. DynamoDB Accelerator (DAX)
AWS Database Migration Service (DMS)
1. DynamoDB Capacity Modes
2. RDS storage options
🚀
2026-02-08 13:38:48
In a world dominated by massive meta-frameworks and complex build chains, Lupine.js asks a simple question: What if we could have the power of a modern full-stack framework without the bloat?
Lupine.js is a lightweight (7kb gzipped), full-stack web framework that combines a React-like frontend with an Express-like backend. It is designed from scratch for speed, simplicity, and efficiency.
The lupine.web frontend package is tiny—just 7kb gzipped. Yet, it retains the developer experience you know and love: TSX syntax (React JSX), components, and hooks. There is no heavy runtime to download, meaning your pages load instantly even on slow connections.
Most frameworks treat SSR as an add-on. In Lupine, SSR is a first-class citizen. The lupine.api backend is optimized to render your frontend pages on the server automatically.
Meta tags (og:image, description) are calculated before the page leaves the server.

Say goodbye to configuring PostCSS, Tailwind, or styled-components. Lupine includes a powerful CSS-in-JS engine built right in. It supports nested selectors with the .parent & syntax:

const Button = () => {
const css = {
backgroundColor: '#0ac92a',
'&:hover': {
backgroundColor: '#08a823',
},
};
return <button css={css}>Click Me</button>;
};
Lupine isn't just a frontend library; it's a complete app solution.
- Backend (lupine.api): an efficient, minimalist Node.js framework similar to Express.
- Frontend (lupine.web): a reactive UI library.
- Tooling: run npm run dev and debug both frontend and backend in a single VS Code session.

Ready to give it a try? You can scaffold a new project in seconds.
Use our CLI tool to create a new app.
npx create-lupine@latest my-awesome-app
Enter the directory and start the dev server.
cd my-awesome-app
npm install
npm run dev
Visit http://localhost:11080 and you'll see your first Lupine app running!
Lupine is actively developed. You can check our code frequency and contributions directly on GitHub:
👉 https://github.com/uuware/lupine.js
Lupine.js is perfect for developers who want:
Give Lupine.js a star on GitHub and try it for your next project!
2026-02-08 13:37:34
I am not aware of a FOSS PDF editor for Linux (I really need one), and I had a free subscription for Foxit Editor, so I used to boot Windows just to edit PDF bookmarks (my dumb uni can't seem to add them), even though I daily-drive Linux.
I did a quick search and found pdftk, an absolute masterpiece.
# dump the pdf's metadata (including bookmarks) to a text file
pdftk random.pdf dump_data_utf8 output output.txt
In output.txt, grep for BookmarkBegin and start editing. If there are no bookmarks yet, just append them at the end of the file (one block per bookmark):
BookmarkBegin
BookmarkTitle: Bookmark Title
BookmarkLevel: 1
BookmarkPageNumber: 1
Save the file, then write the updated metadata back into the PDF:
pdftk random.pdf update_info_utf8 output.txt output output.pdf
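If you have a lot of bookmarks, typing the blocks by hand gets old. Here's a small helper sketch (my own, not part of pdftk) that turns a list of (title, level, page) tuples into the blocks pdftk expects, ready to paste into output.txt:

```python
def bookmark_blocks(bookmarks):
    # Each bookmark becomes the 4-line block pdftk's update_info format uses.
    lines = []
    for title, level, page in bookmarks:
        lines += [
            "BookmarkBegin",
            f"BookmarkTitle: {title}",
            f"BookmarkLevel: {level}",
            f"BookmarkPageNumber: {page}",
        ]
    return "\n".join(lines)

print(bookmark_blocks([("Chapter 1", 1, 1), ("Section 1.1", 2, 3)]))
```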
This is so much faster than my old workflow, tbh.
2026-02-08 13:35:47
I wanted to learn how to make a pleasant, interactive CLI like the create-vite project-scaffolding CLI, so yesterday I took a quick dive into the code of create-vite (note: not Vite itself) to study what magic sauce they have and hopefully learn some cool new techniques along the way.
Everything starts in the init function of the src/index.ts file
At a glance, we can see that the CLI progresses through 6 stages:
The main "stars" of this elegant experience include:

- the mri library for working with CLI arguments
- the @clack/prompts library for displaying pretty interactive prompts
- picocolors for adding colors to the console log

Overall, the create-vite CLI is a pretty straightforward and simple tool. Diving into the code, I learned some interesting details.
In the README, you might see that the create-vite CLI supports the --template flag, but that's not the only one. Here are some more:

- --overwrite / --no-overwrite: whether to overwrite if a non-empty directory already exists at your target location
- --immediate / --no-immediate: marks your preference for step (5), i.e. whether you want to immediately install dependencies and start the dev server after scaffolding
- --interactive (-i) / --no-interactive: should Vite prompt you for answers, or assume default values?

By default, the template is vanilla-ts if none is provided, and overwrite and immediate are false. The no-interactive mode is useful when running create-vite as part of an unmonitored CI/CD pipeline, or when an AI agent is running the command.

Additionally, here is the full list of template names that you can pass to the --template argument:
// vanilla
"vanilla-ts",
"vanilla",
// vue
"vue-ts",
"vue",
"custom-create-vue",
"custom-nuxt",
"custom-vike-vue",
// react
"react-ts",
"react-compiler-ts",
"react-swc-ts",
"react",
"react-compiler",
"react-swc",
"rsc",
"custom-react-router",
"custom-tanstack-router-react",
"redwoodsdk-standard",
"custom-vike-react",
// preact
"preact-ts",
"preact",
"custom-create-preact",
// lit
"lit-ts",
"lit",
// svelte
"svelte-ts",
"svelte",
"custom-svelte-kit",
// solid
"solid-ts",
"solid",
"custom-tanstack-router-solid",
"custom-vike-solid",
// ember
"ember-app-ts",
"ember-app",
// qwik
"qwik-ts",
"qwik",
"custom-qwik-city",
// angular
"custom-angular",
"custom-analog",
// marko
"marko-run",
// others
"create-vite-extra",
"create-electron-vite"
This is a fun little detail: create-vite uses @vercel/detect-agent to determine if an agent is running the CLI. If isAgent is true and interactive mode is enabled, the CLI logs a helpful message:
To create in one go, run: create-vite --no-interactive --template
Here are some cool programming techniques I thought were very interesting:

The npm_config_user_agent ENV

Ever wondered how a CLI can determine what package manager you used, so it can continue using that in subsequent commands? It's all thanks to the npm_config_user_agent environment variable. Each package manager sets the variable accordingly (like how pnpm does here).
Example: You can run pnpm config get user-agent to get the full agent string:
pnpm/10.20.0 npm/? node/v20.11.1 linux x64
Then you can split by space and then by slash to get the package manager name.
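The parsing is just string splitting, so it looks the same in any language. Here's a quick sketch in Python of that split-by-space-then-slash idea (the "npm" fallback when the variable is unset is my own assumption, not create-vite's logic):

```python
import os

def package_manager(user_agent):
    # user_agent looks like "pnpm/10.20.0 npm/? node/v20.11.1 linux x64":
    # take the first space-separated token, then the part before the slash.
    if not user_agent:
        return "npm"  # assumed default when the variable is unset
    return user_agent.split(" ")[0].split("/")[0]

print(package_manager("pnpm/10.20.0 npm/? node/v20.11.1 linux x64"))  # -> pnpm
print(package_manager(os.environ.get("npm_config_user_agent")))
```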
Checking process.stdin.isTTY

The terminal input can also be piped in (like cat data.txt | xargs pnpm create-vite), in which case interactivity won't be possible. As a result, the CLI only enables interactive mode if isTTY is true.
After every prompt, I noticed that there is always a check to see if the user has cancelled the command, so that the CLI can gracefully display the message "Operation Cancelled".
if (prompts.isCancel(projectName)) return cancel()
This technique feels so obvious in retrospect (and the @clack/prompts creator also recommends it), but seeing it employed in a production-ready codebase really cements the idea of handling user cancellation gracefully in CLIs for me.
2026-02-08 13:10:03
Building an Observability Mesh with Grafana, Loki, and Prometheus
When multiple backend services start running in isolation, debugging becomes guesswork. My recent sprint was about turning that guesswork into clarity — by wiring up full observability across Django, Nextcloud, Grafana, Loki, and Prometheus.
Goal
Unify logs and metrics across services in a distributed setup — all communicating over Caddy TLS and my Tailnet domain.
I wanted one dashboard that could tell me everything about my system’s health without SSH-ing into individual servers.
Architecture
Here’s the high-level design:
Stack Overview
Prometheus → scrapes metrics from Django and Nextcloud API endpoints
Loki → ingests logs from both services
Grafana → visualizes metrics and logs together
Caddy → reverse proxy with trusted TLS for all endpoints
Tailnet (Tailscale) → private network with identity-based access
Everything talks securely — no exposed ports, no unencrypted traffic.
Challenges
1. Grafana showed logs but no metrics
Root cause: Prometheus targets weren’t reachable after moving from localhost to tailnet hostnames.
2. TLS verification issues in Prometheus
Solved by updating Caddy’s certificates and confirming Prometheus scrape configs pointed to HTTPS endpoints.
3. Cross-service routing
Caddy needed to handle routes like /metrics, /api/schema, and /api/* correctly between Django and Nextcloud.
Config Highlights
Here’s a simplified Prometheus scrape config example:
scrape_configs:
  - job_name: "django"
    metrics_path: /metrics
    static_configs:
      - targets: ["X.tail.ts.net:8000"]
  - job_name: "nextcloud"
    metrics_path: /metrics
    static_configs:
      - targets: ["X.tail.ts.net:8080"]
Both routes sit behind Caddy, which handles TLS termination using trusted Tailnet certificates.
Results
Once Prometheus started scraping successfully, Grafana dashboards came alive.
Now I can:
Correlate logs and metrics per request
Track uptime and performance trends
Visualize distributed system behavior across all nodes
It feels like operating my own mini control plane — distributed, secure, and explainable.
Next Steps
Add distributed tracing (OpenTelemetry)
Define Prometheus alert rules for critical endpoints
Automate observability config rollout via CI/CD
Key Takeaway
Observability isn’t an add-on — it’s the nervous system of your infrastructure.
When your servers start talking, you start listening differently.
2026-02-08 13:05:13
Nothing bugs me more than waiting for the computer to do something I already know is pointless. Changed one file and watching 500 unrelated tests run? That's wasted time I'm not getting back.
In 2008, I created doit — a build tool that tracks file dependencies and only rebuilds what changed. Same idea as make, but for Python workflows.
Then I built pytest-incremental — applying the same principle to tests. If you change utils.py, only run tests that depend on it. Skip the rest.
Now there's rut.
pytest-incremental requires pytest. pytest plugins are great individually, but combining multiple plugins into a consistent experience is hard: they step on each other, configuration gets fragile, and debugging interactions is painful.
Codebases have grown orders of magnitude, and AI-assisted workflows are accelerating that further. We need new test infrastructure to keep up. Parallelization helps, but fast turnaround is still king — skipping what doesn't need to run beats running it faster.
rut is simple:
pip install rut
rut # run all tests, build dependency graph
rut --changed # run only affected tests
rut analyzes your import graph. If api.py imports models.py which imports utils.py, and you change utils.py, rut knows to run tests for all three.
Tests for modules that don't depend on utils.py? Skipped.
For well-structured codebases, this typically means 50-80% fewer tests on incremental runs.
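The core idea is reverse reachability over the import graph. Here's a sketch of that technique (my own illustration, not rut's actual code), with a hypothetical four-module graph:

```python
# imports[X] = modules that X imports
imports = {
    "api": ["models"],
    "models": ["utils"],
    "utils": [],
    "billing": [],
}

def affected(changed, imports):
    # Invert the graph: for each module, who imports it?
    importers = {m: set() for m in imports}
    for mod, deps in imports.items():
        for dep in deps:
            importers[dep].add(mod)
    # Walk upward from the changed module, collecting transitive importers.
    result, stack = set(), [changed]
    while stack:
        mod = stack.pop()
        if mod not in result:
            result.add(mod)
            stack.extend(importers[mod])
    return result

print(sorted(affected("utils", imports)))  # -> ['api', 'models', 'utils']
```

Note that "billing" never appears: it doesn't depend on utils.py, so its tests are safely skipped.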
Read more: Dependency Ordering | Incremental Testing
--dry-run: see what would run without running it

python -m unittest
pip install rut
rut