The Practical Developer

A constructive and inclusive social network for software developers.

Pointers Store Addresses — So Why Do They Have Types?

2025-12-23 13:38:27

We’ve all heard it: "A pointer is just a variable that stores a memory address."

That statement is technically true — but incomplete. If pointers were only addresses, every pointer would be exactly the same (usually 4 or 8 bytes of data). But in C/C++, we have char*, int*, double*, pointers to structs, and so on.

If the address is just a number, why does the type matter?

To understand pointer types, we have to look at how memory actually works. Physical memory doesn’t know the difference between an integer, a character, or a floating-point number. It is simply a sequence of bytes.

The pointer type acts as a template. It tells the compiler two critical things:

  1. Width: How many bytes to read starting from that address?
  2. Interpretation: Once we have those bytes, how do we treat them (e.g., as a signed integer, an IEEE 754 float, or an ASCII character)?

[Figure: memory visualization showing char* vs int* pointer behavior]

A Tale of Two Pointers: Scenario A vs. Scenario B

Look at the visualization above. We have 4 bytes of memory starting at address 0x20000000.

Address        Value
0x20000000     0x01
0x20000001     0x02
0x20000002     0x03
0x20000003     0x04

Scenario A: char* pc

When we dereference a char*, the compiler thinks: "Okay, start at 0x20000000, read exactly 1 byte, and treat it as a character."
Result: 0x01

Scenario B: int* pi

When we dereference an int* at the exact same address, the compiler thinks: "Start at 0x20000000, read the next 4 bytes, and combine them into an integer."
On a little-endian system, it reads 01, 02, 03, 04 and interprets it as:
Result: 0x04030201

The address stayed the same. The data in memory stayed the same.
Only the pointer type changed, and with it the interpretation.
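Here is a minimal C sketch of the same experiment. The address 0x20000000 above is illustrative; this version just lays out the same four bytes in a local array and reads them back two ways (memcpy is used for the 4-byte read to stay clear of alignment and strict-aliasing issues):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    unsigned char bytes[4] = {0x01, 0x02, 0x03, 0x04};

    /* Scenario A: read 1 byte at the start address and treat it as a character */
    unsigned char *pc = bytes;
    printf("1-byte read: 0x%02X\n", (unsigned)*pc);        /* 0x01 */

    /* Scenario B: read 4 bytes from the same address and combine them into an integer */
    uint32_t as_int;
    memcpy(&as_int, bytes, sizeof as_int);
    printf("4-byte read: 0x%08X\n", (unsigned)as_int);     /* 0x04030201 on little-endian */

    return 0;
}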

The facts that matter more than any definition

Everything discussed so far reduces to a few fundamental truths:

  • Memory is a linear address space of byte-sized storage units
  • The address itself is only a starting point
  • Pointer types tell the compiler how to interpret those bytes

What pointer types tell the compiler

Because memory itself has no notion of types, the pointer type becomes critical.

A pointer type tells the compiler:

  • How many bytes to read or write
  • How to interpret those bytes
  • How pointer arithmetic should behave

Pointer arithmetic is scaled by the size of the pointed-to type, which is why this works the way it does:

int *p;
p + 1;   // advances by sizeof(int) bytes (typically 4), not by 1
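A quick way to see that scaling is to print the byte distance covered by the same + 1 through two different pointer types; on a typical platform where sizeof(int) is 4, this prints 4 and 1:

#include <stdio.h>

int main(void) {
    int arr[2] = {10, 20};
    int  *pi = arr;
    char *pc = (char *)arr;

    /* +1 advances by sizeof(*pointer) bytes, so the step depends on the type */
    printf("int*  + 1 moves %td bytes\n", (char *)(pi + 1) - (char *)pi);  /* 4 on most platforms */
    printf("char* + 1 moves %td bytes\n", (pc + 1) - pc);                  /* always 1 */

    return 0;
}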

Conclusion

If you only know that “a pointer stores an address,”
you haven’t understood pointers — you’ve memorized a sentence.

Understanding pointer types means understanding how software actually talks to memory.

Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models

2025-12-23 13:36:37

Why this paper was selected

Each criterion is graded on a scale from S (high) down to D (low).

  • Alignment (B): Resolving the exploration-exploitation trade-off through LLM-based state-value evaluation addresses a strong business need.
  • Reliability (S): Accepted to the Proceedings of Machine Learning Research 2024; the authors are former DeepMind researchers.
  • Soundness (S): The theoretical design (introducing MCTS, using LM-based evaluation, integrating reflection) is well organized and has a clear algorithmic structure.
  • Generality (A): Implementation examples exist in langgraph, so generality is high, but sensitivity to hyperparameters and running cost remain open issues.
  • Extensibility (A): Many extensions are possible, but the approach is limited to tree structures and is hard to apply to tasks whose states cannot be clearly defined.

Paper: https://arxiv.org/abs/2310.04406
Code: N/A

Executive summary

By managing the LLM's thoughts in a tree structure, the method makes lookahead possible. With it, the LLM can reach the correct conclusion with less trial and error.

Overview

[Societal problem]
Current LLMs are mostly single-shot question-answering systems that respond immediately to a single input, which makes them ill-suited to complex decision-making and multi-step tasks.

[Technical problem]
Complex decision-making and multi-step tasks require lookahead (planning). However, existing methods (CoT, ReAct, Reflexion, etc.) add multi-step reasoning, acting, and reflection to the LLM but cannot plan. As a result, action selection tends to optimize for short-term goals, and task success rates drop.

[Proposal]
The paper proposes LATS (Language Agent Tree Search), a framework in which the LLM performs multi-step reasoning, acting, and planning in an integrated way. LATS uses Monte Carlo Tree Search (MCTS) to explore multiple candidate actions, and the LLM evaluates values and reflects on outcomes, enabling longer-horizon, more consistent decision-making.

[Table 1]

As Table 1 shows, LATS is also the first approach to include all of the components: reasoning, acting, planning, reflection, and memory.

[Effect]
Without any gradient-based training such as fine-tuning, the LLM autonomously chains multi-step reasoning, plans, and acts interactively with the environment. In experiments, it outperforms prior methods on programming, reaching 92.7% pass@1; it exceeds existing reinforcement-learning-based methods on web navigation with a 75.9% success rate; and it improves multi-hop reasoning accuracy by roughly 8 points.

Language Agent Tree Search (LATS)

[Figure 2]

LATS is an algorithm that applies ideas from reinforcement learning, namely MCTS and Bellman backups, to inference-time search with an LLM. The bootstrapping performed in LATS's Evaluation and Simulation steps predicts an arbitrary number of steps ahead, but because it relies on the LLM's approximate predictions of the environment, biases (from selection, the model, and bootstrapping) tend to compound, making the method prone to sensitivity to initial values. On the other hand, because LLMs can capture long-range structure and semantic consistency, they work as useful heuristics even without an exact environment model.

Figure 2 can be read as a backup diagram of the Bellman equation approximately evaluated over a belief space: nodes are states (histories) and edges are action choices.

The overall algorithm is shown below.

[Algorithm 1]
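To make the loop concrete, here is a minimal Python sketch of one LATS iteration as described in the subsections below (selection by UCB, expansion, LLM evaluation, simulated rollout, and backpropagation). The llm_propose, llm_evaluate, llm_rollout, and env_step callables are hypothetical stand-ins for the prompt-driven components and the environment; this is not the authors' implementation.

import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.value, self.visits = [], 0.0, 0

def select(node, c=1.0):
    """Walk down the tree, picking the child with the highest UCB score."""
    while node.children:
        node = max(
            node.children,
            key=lambda ch: ch.value
            + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1)),
        )
    return node

def lats_iteration(root, llm_propose, llm_evaluate, llm_rollout, env_step, n=5):
    leaf = select(root)                                  # Selection
    for action in llm_propose(leaf.state, n):            # Expansion: sample n candidate actions
        child = Node(env_step(leaf.state, action), parent=leaf)
        child.value = llm_evaluate(child.state)          # Evaluation: LM value (+ self-consistency)
        leaf.children.append(child)
    best = max(leaf.children, key=lambda ch: ch.value)
    estimated_return = llm_rollout(best.state)           # Simulation: LLM-estimated return
    node = best
    while node is not None:                              # Backpropagation: running mean + visit count
        node.value = (node.visits * node.value + estimated_return) / (node.visits + 1)
        node.visits += 1
        node = node.parent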

Finally, comparing LATS with reinforcement-learning algorithms:

Aspect | LATS | Monte Carlo (MC) | TD(0) | SARSA
Category | Inference-time search | RL (value estimation) | RL (value estimation) | RL (control)
Main goal | Optimize reasoning and acting | Learn a value function | Learn a value function | Learn policy and value jointly
State/action space | Natural language (thought/action) | Discrete/continuous | Discrete/continuous | Discrete/continuous
Search structure | Tree (MCTS) | None | None | None
Rollouts | LLM rollouts | Real episodes | Real transitions | Real transitions
Evaluation basis | LM evaluation + self-consistency | Real rewards | Real rewards + estimated values | Real rewards + estimated values
Bootstrapping | Yes | No | Yes | Yes
Update target | Search-tree statistics | Value-function parameters | Value-function parameters | Action-value function (Q)
Learning (weight updates) | No | Yes | Yes | Yes
Relation to policy | Implicit, decided by the search | Fixed or arbitrary | Fixed or arbitrary | On-policy
Use of failures | Reflection (natural language) | Sample averages | TD error | TD error (action-dependent)

Selection

From all nodes, the node to expand next is chosen using a UCB (Upper Confidence Bound) score. The action a_t leading to the most promising node is selected by balancing the state value V(s_t) against the visit count N(s_{t+1}):

a_t = \arg\max_{a_t} \left[ V(s_t) + c \sqrt{\frac{\log N(s_t)}{N(s_{t+1})}} \right]
N(s_{t+1}) = N(s_t) + 1

The observation o_t is then obtained from memory. Unlike experience replay, which reuses past experience for re-training, this is a mechanism for reusing the observations stored in the search tree as-is so the same search path can be retraced.

Expansion

From the selected node s_t, n child nodes are generated by sampling from a model with parameters \theta. The observation o_t is then obtained from the real environment.

a_t^{(i)} \sim p_\theta(s_t), \quad s_{t+1} = \mathrm{Env}(s_t, a_t)

Evaluation

The state value (a scalar) of each newly expanded node is evaluated by the LLM:

V(s) = \lambda \cdot \mathrm{LM}(s) + (1 - \lambda) \cdot \mathrm{SC}(s)

Here SC(s) denotes self-consistency. Whereas the earlier ToT [Yao2023] evaluated only the plausibility of the thoughts themselves, LATS evaluates after obtaining observations from the external environment. This allows more accurate value judgments grounded in, for example, code execution errors or web search results.

Simulation

Everything up to the observation obtained from the real environment during Expansion is fact; beyond that point no environment observations are made. Instead, the language model itself reasons about how good the eventual outcome is likely to be if the agent proceeds from this state. This estimate is an approximation rather than an actually received reward, and it is used in the subsequent Backpropagation step to guide decisions across the whole search tree.

\hat{R}(h_t) \approx \sum_{k=0}^{K} \gamma^k \hat{r}_{t+k}

Backpropagation (updating the tree statistics)

Backpropagation propagates the future-value estimate obtained in Simulation back up the search tree and reflects it in each node. This accumulates information about which thoughts and actions were promising, so better branches are more likely to be selected in the next round of search.

V(h_t) \leftarrow \frac{N(h_t)\, V(h_t) + \hat{R}(h_t)}{N(h_t) + 1}
N(h_t) \leftarrow N(h_t) + 1

Reflection

Reflection is the stage where the language model looks back over its thoughts and actions so far, points out its own mistakes and possible improvements, and revises the search policy.
Going beyond mere value updates, it improves the quality of the reasoning itself, making it possible to search without repeating the same failures.

Summary of benefits

  • Generality: No explicit environment model or reward design is required; the language model's generation and evaluation abilities are used directly for search and value estimation, so the method applies broadly, from reasoning tasks to interactive tasks.

  • Search efficiency: Interaction with the real environment is kept to a minimum, while tree search and value backups concentrate compute on the most promising reasoning paths.

  • Flexibility: By designing the states and the tree structure, the method can be adapted to a wide range of environments.

Experiments

Experiments on programming datasets

[Tables 4 and 5]

As Tables 4 and 5 show, LATS achieves state-of-the-art results on both datasets.

Understanding Git in Simple way - Part 2

2025-12-23 13:36:20

Hello, I'm Ganesh. I'm working on FreeDevTools online, currently building a single platform for all development tools, cheat codes, and TL;DRs — a free, open-source hub where developers can quickly find and use tools without the hassle of searching the internet.

Previously, we understood what Git is and what a Git commit is.
In this blog, we will understand more about Git.

Why Does Git Use a DAG?

[Figure: Git DAG]

Each commit holds a snapshot of your changes, metadata, and the parent commit's ID.
Each commit points to its parent.

Hence, there is no way the commit history can form a cycle or loop.

This gives the history a Directed Acyclic Graph (DAG) structure.

The resulting graph holds the history of every commit, and every decision the developer made is captured in this structure.

This is what makes Git powerful: we can jump to any point in the commit history.
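For example, you can visualize the DAG and jump to any commit in it (<commit-id> is a placeholder for a hash taken from the log):

git log --oneline --graph --all   # draw the commit DAG
git checkout <commit-id>          # jump to that point in history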

How Does Branching Work?

We usually assume that a branch keeps a full copy of the parent commit and that we commit changes on top of it. That is a wrong assumption.

A branch is just a lightweight pointer to a commit; that's it. Commits made on the branch record their parent ID, just like any other commit.
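You can see this for yourself: a branch resolves to nothing more than a commit hash (the hash shown here is just an illustration):

git rev-parse master
ce8b1c6f2a9d...   # the commit that master currently points to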

Let's understand this with an example, starting from the master branch.

Create New Branch

git checkout -b iss53
Switched to a new branch "iss53"

This single command is shorthand for:

git branch iss53
git checkout iss53

Commit changes

If you want to change anything on the master branch, check out that branch first:

git checkout master
Switched to branch 'master'

Then make your changes and commit them.
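A minimal example of committing on the checked-out branch (the file name and message are placeholders):

git add index.html
git commit -m "Fix broken link on the landing page"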

Conclusion

The DAG is your project's history, which has every branch, every merge, and every decision the developers made in the project.

We understood how branches work and how a branch is created.
In the next blog, we will look at how Git knows where we are working and how branches are merged through pull requests.

Avoiding Hallucinations When Building Angular Apps with Gemini CLI

2025-12-23 13:28:12

We are seeing new models released every five or six months. That does not mean they are trained with the latest information. For example, a model may not include the most recent Angular documentation and can easily generate outdated code that does not follow current best practices.

Gemini CLI is a great tool for building faster with AI, but it is not immune to hallucinations. Below are a few techniques you can use to get more accurate output and better-aligned Angular code.

Prompt with best practices

You can define a shared prompt in your GEMINI.md file. This helps guide the model to generate code that follows modern Angular patterns and current best practices.

Another option is to define a custom slash command and trigger it with every request. That way, you make sure your instructions are always included.

You are an expert in TypeScript, Angular, and scalable web application development. You write functional, maintainable, performant, and accessible code following Angular and TypeScript best practices.

## TypeScript Best Practices
- Use strict type checking
- Prefer type inference when the type is obvious
- Avoid the `any` type; use `unknown` when the type is uncertain

## Angular Best Practices
- Always use standalone components over NgModules
- Must NOT set `standalone: true` inside Angular decorators. It is the default in Angular v20+
- Use signals for state management
- Implement lazy loading for feature routes
- Do NOT use `@HostBinding` or `@HostListener`. Use the `host` object in the `@Component` or `@Directive` decorator instead
- Use `NgOptimizedImage` for all static images
  - `NgOptimizedImage` does not work with inline base64 images

## Accessibility Requirements
- Must pass all AXE checks
- Must follow WCAG AA minimums, including focus management, color contrast, and proper ARIA attributes

## Components
- Keep components small and focused on a single responsibility
- Use `input()` and `output()` functions instead of decorators
- Use `computed()` for derived state
- Set `changeDetection: ChangeDetectionStrategy.OnPush` in the `@Component` decorator
- Prefer inline templates for small components
- Prefer reactive forms over template-driven forms
- Do NOT use `ngClass`; use `class` bindings instead
- Do NOT use `ngStyle`; use `style` bindings instead
- When using external templates or styles, use paths relative to the component TypeScript file

## State Management
- Use signals for local component state
- Use `computed()` for derived state
- Keep state transformations pure and predictable
- Do NOT use `mutate` on signals; use `update` or `set` instead

## Templates
- Keep templates simple and avoid complex logic
- Use native control flow (`@if`, `@for`, `@switch`) instead of `*ngIf`, `*ngFor`, `*ngSwitch`
- Use the async pipe to handle observables
- Do not assume globals like `new Date()` are available
- Do not write arrow functions in templates

## Services
- Design services around a single responsibility
- Use `providedIn: 'root'` for singleton services
- Use the `inject()` function instead of constructor injection

This approach gives you full control over the rules you want the model to follow. The downside is that this file becomes another artifact you need to keep up to date as Angular evolves.

llms.txt

Another approach is exposing an llms.txt file. The idea is for websites to provide a single entry point that language models can use to discover official and structured documentation. Conceptually, it is similar to a RAG setup, but without the infrastructure overhead.

Angular currently exposes two versions:

  • llms.txt: a lightweight file that links to key resources
  • llms-full.txt: a comprehensive set of resources that explains how Angular works and how to build Angular applications

This solves part of the maintenance problem because you no longer need to keep your own prompt updated. The tradeoff is that you now depend on an external source being available and accurate.

Angular MCP

The Angular CLI now includes an MCP server that Gemini CLI can consume to access up-to-date documentation and best practices. This is the most robust option and requires very little setup.

In .gemini/settings.json, add the following:

{
  "mcpServers": {
    "angular-cli": {
      "command": "npx",
      "args": ["-y", "@angular/cli", "mcp"]
    }
  }
}

This gives Gemini CLI access to several Angular-specific tools:

  • ai_tutor: launches an interactive Angular tutor. Recommended for new Angular v20+ projects
  • find_examples: finds authoritative and up-to-date code examples based on official Angular sources
  • get_best_practices: retrieves the Angular Best Practices Guide, covering standalone components, typed forms, and modern control flow
  • list_projects: lists all applications and libraries defined in an Angular workspace by reading angular.json
  • onpush_zoneless_migration: analyzes your codebase and provides a step-by-step plan to migrate to OnPush change detection, a requirement for zoneless applications
  • search_documentation: searches the official Angular documentation at angular.dev for APIs, guides, and best practices

If you have not tried Gemini CLI yet, it is worth a look. It is simple to use and surprisingly powerful when paired with the right context.

--

Don't forget to like and share! ❤️

CSS Combinators: How to Write Half the CSS With Twice the Clarity

2025-12-23 13:13:14

The difference between messy CSS and elegant CSS isn’t what you think. It’s not about knowing the latest framework. It’s not about memorizing every property. It’s not even about understanding flexbox vs grid (though that helps). It’s about those tiny symbols (CSS Combinators) between your selectors. The space. The >. The +. The ~.

Most developers treat them like punctuation, just syntax that connects one selector to another. But they’re so much more than that. They’re relationships, structure, and the actual language of CSS.

Once you learn how to use CSS Combinators, you will start writing half the CSS with twice the clarity. And your HTML will stay clean as well.

Ignore them, and you’ll spend your entire frontend career drowning in utility classes and fighting !important declarations.

As always, here are the working examples of CSS Combinators:

So yeah, let’s talk about combinators.

What Are CSS Combinators?

CSS Combinators aren’t some advanced CSS feature. They’re literally just the symbols (or spaces) that connect your selectors; that’s it.

When you write div p, that space? That's a combinator. When you write article > h2, that > symbol? Also a combinator.

They define relationships between elements, parent-child, sibling-to-sibling, ancestor-descendant. Once you learn these combinators, you’ll write better CSS with fewer classes and less specificity mess.

The Four CSS Combinators You Need to Know

There are technically six combinators in the CSS spec, but only four matter right now. The other two (column combinator and namespace separator) are either experimental or super niche.

1. Descendant Combinator (The Space)

Syntax: A B

This is the one everyone uses without thinking about it:

article p {
  line-height: 1.8;
}

It selects ALL p elements inside article, no matter how deeply nested. Grandchildren, great-grandchildren, doesn't matter.

Real-world example:

.blog-post p {
  margin-bottom: 1.5rem;
  color: #333;
}
.blog-post h2 + p {
  font-size: 1.1em;
  color: #555;
}

This gives all paragraphs in blog posts consistent spacing, but makes the first paragraph after any heading slightly bigger and lighter. Clean, semantic, no extra classes needed.

When to use it? When you don’t care about nesting levels: content areas, articles, basically anywhere your HTML structure isn’t guaranteed.

Performance note: People used to say descendant selectors were slow. They’re not; modern browsers are insanely optimized for these. I’ve profiled this stuff, so don’t worry about it.

2. Child Combinator (>)

Syntax: A > B

This is the “strict parent” selector. It only matches direct children:

.card > img {
  width: 100%;
  border-radius: 8px 8px 0 0;
}

That img has to be a direct child of .card. If it's wrapped in a div, no match.

Here’s where this gets more creative:

.navigation > ul {
  display: flex;
  gap: 2rem;
}
.navigation > ul > li {
  position: relative;
}
.navigation > ul > li > a {
  color: white;
  padding: 1rem;
}
/* Nested dropdowns don't get the same styles */
.navigation > ul > li > ul {
  position: absolute;
  top: 100%;
  background: #333;
}
.navigation > ul > li > ul > li > a {
  padding: 0.5rem 1rem;
  display: block;
}

We have complete control over each level of the navigation without classes. The top level gets flex layout, dropdowns get absolute positioning, clean separation.

When to use it? Component boundaries, when you want to prevent styles from leaking into nested structures, or when your HTML is predictable.

3. Next-Sibling Combinator (+)

Syntax: A + B

This selects an element that immediately follows another. Same parent, right after.

h2 + p {
  font-size: 1.2em;
  font-weight: 500;
}

Only the paragraph directly after h2 gets styled. Perfect for lead paragraphs.

But here’s my favorite use case (and this is some pro-level and creative stuff):

/* The "lobotomized owl" - adds spacing between elements */
.stack > * + * {
  margin-top: 1.5rem;
}

This adds top margin to every element except the first one. No matter what elements you throw in there. Headings, paragraphs, images, whatever. Consistent spacing, zero extra classes.

Another killer pattern:

/* Remove margin between specific heading combinations */
h2 + h3,
h3 + h4 {
  margin-top: 0.5rem;
}
/* But keep normal spacing for other elements */
* + h2,
* + h3,
* + h4 {
  margin-top: 2rem;
}

Now your headings have tight spacing when stacked, but normal spacing otherwise.

When to use it?
Spacing patterns, typography, anywhere the “next thing” is special.

4. General Sibling Combinator (~)

Syntax: A ~ B

This selects ALL siblings that come after an element:

h2 ~ p {
  color: #666;
}

Every p after an h2 (at the same level) gets styled.

Here’s where this actually shines:

/* Style all content after a "read more" toggle */
.article .toggle:checked ~ .full-content {
  display: block;
}
.article .toggle:checked ~ .preview {
  display: none;
}

One checkbox, and you can show/hide entire sections of content. No JavaScript needed.
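For that pattern to work, the checkbox has to come before the content it controls, as a sibling. A rough markup sketch (the class names match the CSS above; the rest of the structure is an assumption, and .full-content is hidden by default elsewhere in the stylesheet):

<article class="article">
  <input type="checkbox" id="read-more" class="toggle" hidden>
  <p class="preview">Short teaser text…</p>
  <div class="full-content">The full article body…</div>
  <label for="read-more">Read more</label>
</article>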

Form validation pattern I use constantly:

input:invalid:not(:placeholder-shown) ~ .error-message {
  display: block;
  color: #d32f2f;
}
input:valid:not(:placeholder-shown) ~ .success-message {
  display: block;
  color: #4caf50;
}

The form tells users what’s wrong as they type. The ~ combinator makes this possible because the error message comes after the input in the HTML.

:has() Solves the Parent Selector Problem

Technically not a combinator, but it works WITH combinators and it’s pretty amazing. It’s the parent selector we’ve been begging for since 2005.

/* Select articles that contain an h2 */
article:has(h2) {
  border-left: 4px solid blue;
}
/* Select labels whose input is invalid */
label:has(+ input:invalid) {
  color: red;
}

Browser support is excellent now (Safari 15.4+, Chrome 105+, Firefox 121+).

Real-world example:

/* Different card layouts based on content */
.card:has(img) {
  display: grid;
  grid-template-columns: 200px 1fr;
}
.card:not(:has(img)) {
  padding: 2rem;
  border: 2px solid #e0e0e0;
}
/* Forms with errors get highlighted */
.form-group:has(input:invalid) {
  background: #ffebee;
  border-left: 4px solid #d32f2f;
}

No JavaScript, no extra classes, just smart CSS.

Specificity: The Part Everyone Gets Wrong

Here’s what you need to know: combinators add ZERO specificity.

That space in div p? Zero specificity. The > in div > p? Also zero.

Only the selectors matter:

/* Specificity: 0-0-2 (two elements) */
article p { }
/* Also 0-0-2 (combinator doesn't count) */
article > p { }
/* 0-1-1 (one class, one element) */
.post p { }

This catches people all the time:

/* Both have same specificity */
article p { color: blue; }
article > p { color: red; }
/* Red wins because it's last */

If you want the second one to win, increase its specificity:

article p { color: blue; }
article.featured > p { color: red; }
/* Now red wins (0-1-2 beats 0-0-2) */

Real-World Patterns From My Production Code

The Card Component

This is how I build cards now. Notice how structure drives styling:

.card {
  background: white;
  border-radius: 12px;
  box-shadow: 0 2px 8px rgba(0,0,0,0.1);
  overflow: hidden;
}
/* Full-width images at the top */
.card > img:first-child {
  width: 100%;
  height: 200px;
  object-fit: cover;
}
/* Content area */
.card > .content {
  padding: 1.5rem;
}
/* First element in content (usually h3) has no top margin */
.card > .content > :first-child {
  margin-top: 0;
}
/* All paragraphs get consistent spacing */
.card > .content p {
  color: #666;
  line-height: 1.6;
}
/* Last element has no bottom margin */
.card > .content > :last-child {
  margin-bottom: 0;
}
/* Footer area */
.card > .footer {
  padding: 1rem 1.5rem;
  background: #f8f9fa;
  border-top: 1px solid #e0e0e0;
}
/* Buttons in footer are auto-styled */
.card > .footer button {
  padding: 0.5rem 1rem;
  background: #2196f3;
  color: white;
  border: none;
  border-radius: 6px;
  cursor: pointer;
}
/* Cards with images get tighter content padding */
.card:has(> img) > .content {
  padding: 1rem 1.5rem;
}

Clean. Predictable. No class soup.
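For context, this is roughly the markup that CSS expects. The content is placeholder, but the structure (img first, then .content, then .footer, all direct children of .card) is exactly what the child combinators key off:

<div class="card">
  <img src="photo.jpg" alt="Cover photo">
  <div class="content">
    <h3>Card title</h3>
    <p>Some supporting copy for the card.</p>
  </div>
  <div class="footer">
    <button>Read more</button>
  </div>
</div>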

Article Typography System

This is my go-to for blog posts and long-form content:

article {
  max-width: 70ch;
  margin: 0 auto;
  padding: 2rem;
  font-size: 1.125rem;
  line-height: 1.7;
}
/* All paragraphs */
article p {
  margin-bottom: 1.5rem;
  color: #333;
}
/* Lead paragraph (first after any heading) */
article h1 + p,
article h2 + p,
article h3 + p {
  font-size: 1.25em;
  color: #555;
  line-height: 1.6;
}
/* Blockquotes */
article blockquote {
  border-left: 4px solid #2196f3;
  padding-left: 1.5rem;
  margin: 2rem 0;
  font-style: italic;
  color: #555;
}
/* Paragraphs after blockquotes get extra space */
article blockquote + p {
  margin-top: 2rem;
}
/* Lists */
article ul,
article ol {
  margin-bottom: 1.5rem;
  padding-left: 2rem;
}
/* List items get spacing too */
article li + li {
  margin-top: 0.5rem;
}
/* Code blocks */
article pre {
  background: #1e1e1e;
  color: #d4d4d4;
  padding: 1.5rem;
  border-radius: 8px;
  overflow-x: auto;
  margin: 2rem 0;
}
/* Inline code in paragraphs */
article p code {
  background: #f5f5f5;
  padding: 0.2em 0.4em;
  border-radius: 4px;
  font-size: 0.9em;
  color: #e53935;
}
/* Images and figures */
article figure {
  margin: 3rem 0;
}
article figure > img {
  width: 100%;
  border-radius: 8px;
}
article figure > figcaption {
  margin-top: 0.75rem;
  text-align: center;
  font-size: 0.9em;
  color: #666;
  font-style: italic;
}
/* Heading stack optimization */
article h2 + h3,
article h3 + h4 {
  margin-top: 0.5rem;
}

This gives you beautiful typography with zero classes in your HTML. Just write semantic markup and it looks great.

Form Validation That Doesn’t Suck

.form-group {
  margin-bottom: 1.5rem;
}
/* Labels */
.form-group > label {
  display: block;
  font-weight: 600;
  margin-bottom: 0.5rem;
  color: #333;
  transition: color 0.3s;
}
/* Show required asterisk */
.form-group > label:has(+ :required)::after {
  content: " *";
  color: #d32f2f;
}
/* Input fields */
.form-group > input,
.form-group > textarea {
  width: 100%;
  padding: 0.75rem;
  border: 2px solid #e0e0e0;
  border-radius: 6px;
  font-size: 1rem;
  transition: border-color 0.3s;
}
/* Focus state */
.form-group > input:focus,
.form-group > textarea:focus {
  outline: none;
  border-color: #2196f3;
}
/* Invalid state (only show when user has typed) */
.form-group > input:invalid:not(:placeholder-shown) {
  border-color: #d32f2f;
}
/* Valid state */
.form-group > input:valid:not(:placeholder-shown) {
  border-color: #4caf50;
}
/* Label changes color based on input state */
.form-group:has(input:invalid:not(:placeholder-shown)) > label {
  color: #d32f2f;
}
.form-group:has(input:valid:not(:placeholder-shown)) > label {
  color: #4caf50;
}
/* Error messages */
.form-group > .error {
  display: none;
  color: #d32f2f;
  font-size: 0.875rem;
  margin-top: 0.5rem;
}
/* Show error when invalid */
.form-group:has(input:invalid:not(:placeholder-shown)) > .error {
  display: block;
}
/* Help text */
.form-group > .help-text {
  font-size: 0.875rem;
  color: #666;
  margin-top: 0.5rem;
}
/* Hide help text when there's an error */
.form-group:has(input:invalid:not(:placeholder-shown)) > .help-text {
  display: none;
}

Users get real-time feedback. Labels change color. Error messages show up. All CSS, no JavaScript.
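For reference, here is the kind of markup that CSS assumes. The labels, IDs, and copy are placeholders, but the order (label, then input, then .error, then .help-text, all direct children of .form-group) is what the sibling and :has() selectors rely on:

<div class="form-group">
  <label for="email">Email</label>
  <input id="email" type="email" placeholder="you@example.com" required>
  <p class="error">Please enter a valid email address.</p>
  <p class="help-text">We'll never share your email.</p>
</div>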

Navigation With Dropdowns

.nav {
  background: #2c3e50;
}
/* Main list */
.nav > ul {
  display: flex;
  list-style: none;
  margin: 0;
  padding: 0;
}
/* Top level items */
.nav > ul > li {
  position: relative;
}
/* Top level links */
.nav > ul > li > a {
  display: block;
  padding: 1rem 1.5rem;
  color: white;
  text-decoration: none;
  transition: background 0.2s;
}
.nav > ul > li > a:hover {
  background: rgba(255, 255, 255, 0.1);
}
/* Add dropdown arrow */
.nav > ul > li:has(> ul) > a::after {
  content: " ▾";
  font-size: 0.8em;
}
/* Dropdown menu */
.nav > ul > li > ul {
  display: none;
  position: absolute;
  top: 100%;
  left: 0;
  min-width: 220px;
  background: #34495e;
  list-style: none;
  margin: 0;
  padding: 0;
  box-shadow: 0 4px 12px rgba(0,0,0,0.2);
}
/* Show on hover */
.nav > ul > li:hover > ul {
  display: block;
}
/* Dropdown links */
.nav > ul > li > ul > li > a {
  display: block;
  padding: 0.75rem 1.5rem;
  color: white;
  text-decoration: none;
  transition: background 0.2s;
}
.nav > ul > li > ul > li > a:hover {
  background: rgba(255, 255, 255, 0.15);
}
/* Dividers between dropdown items */
.nav > ul > li > ul > li + li {
  border-top: 1px solid rgba(255, 255, 255, 0.1);
}
/* Active page indicator */
.nav a[aria-current="page"] {
  background: rgba(255, 255, 255, 0.2);
  font-weight: 600;
}

Fully functional dropdown navigation. Hover states, active indicators, the works. Structure is styling.

Common Mistakes (And How I’ve Learned From Them)

Mistake 1: Chaining Too Deep

/* Bad - this will break when HTML changes */
body > div > main > section > article > div > p {
  color: blue;
}
/* Good - resilient and clear */
.article-text {
  color: blue;
}

I used to do this. Don’t be like young me.

Mistake 2: Not Understanding Sibling Direction

/* This only affects elements AFTER h2 */
h2 ~ p {
  color: gray;
}

Siblings only go forward, never backward. This confused me for months when I started.

Mistake 3: Overusing Descendant When Child Would Work

/* Overkill - affects nested lists too */
.menu ul {
  list-style: none;
}
/* Better - only direct children */
.menu > ul {
  list-style: none;
}

Be as specific as needed, not more.

Mistake 4: Fighting Specificity

/* You'll lose this fight */
div div div p {
  color: blue;
}
/* Then you'll do this (don't) */
div div div div p {
  color: red !important;
}
/* Just do this instead */
.text-primary {
  color: blue;
}
.text-secondary {
  color: red;
}

Specificity wars are unwinnable. Keep selectors simple.

Performance

People still ask me if descendant selectors are slow. The answer is: not anymore.

Modern browsers use bloom filters, ancestor caching, and fast-path optimizations. I’ve profiled production apps with thousands of selectors. Style recalculation is rarely the bottleneck.

Focus on:

  • Keeping your stylesheet under 100KB
  • Avoiding unnecessary reflows
  • Writing maintainable code

Don’t worry about:

  • Descendant vs child combinator speed
  • Number of combinators in a selector
  • Selector matching performance

Unless you’re building something truly massive, it won’t matter.

When to Use Classes Instead

Combinators are great, but they’re not always the answer:

Use classes when:

  • The element appears in multiple contexts
  • You need maximum reusability
  • The HTML structure is unpredictable
  • You’re building a component library

Use combinators when:

  • Structure is stable and meaningful
  • You don’t control the HTML (CMS content)
  • You want to reduce HTML clutter
  • The relationship is semantically important

I usually use a mix. My components have classes. My content areas use combinators.

The Modern Approach

Here’s how I think about CSS architecture now:

  1. Components get classes: .button, .card, .modal
  2. Component internals use combinators: .card > img, .modal > .header
  3. Content areas use combinators: article p, article h2 + p
  4. Layout uses classes: .grid, .flex, .container
  5. State uses pseudo-classes and :has(): .card:hover, .form:has(:invalid)

This gives you the best of both worlds. Reusable components with clean, semantic internals.

Browser Support

All four main combinators work everywhere. Even IE11 (if you still care).

:has() is the new kid:

  • Safari 15.4+
  • Chrome 105+
  • Edge 105+
  • Firefox 121+

For production use today? I’d add a fallback:

/* Works everywhere */
.card.has-image {
  padding: 0;
}
/* Enhanced for modern browsers */
.card:has(> img) {
  padding: 0;
}

Final Thoughts

After writing CSS for more than half a decade, CSS Combinators are still my favorite feature. They’re simple, powerful, and they make your code better.

Master these four patterns:

  1. Descendant for content areas
  2. Child for component boundaries
  3. Next-sibling for spacing
  4. General sibling for state

Add :has() to the mix, and you can build almost anything without JavaScript.

The key is understanding relationships. CSS isn’t just about making things pretty. It’s about expressing structure, hierarchy, and meaning. Combinators are how you do that.

Stop fighting specificity. Stop adding classes to everything. Learn the combinators, understand the relationships, and let your HTML structure do the work.

Your future self will thank you.

Quick Reference:

  • A B - Any B inside A (descendant)
  • A > B - B directly inside A (child)
  • A + B - B immediately after A (next-sibling)
  • A ~ B - Any B after A (general sibling)
  • A:has(B) - A that contains B (relational)

Now go write some better CSS.

— — — — — — — — — — — — — — — — — — — — — — —

Did you learn something good today?
Then show some love.
©Usman Writes
WordPress Developer | Website Strategist | SEO Specialist
Don’t forget to subscribe to Developer’s Journey to show your support.

Cursor’s debug mode enforces what good debugging looks like

2025-12-23 13:11:30

Debugging with AI usually means copying logs into chat. I paste, it guesses, I paste more. The AI never sees the full picture — just the fragments I decide to share.

Cursor's debug mode works differently. It sets up instrumentation, captures logs itself, and iterates until the fix is proven. The interesting part isn't the AI — it's the process it enforces.

The bug

Pagination on an external API integration. Code looked right. Tokens being sent, parsed, passed back. But every request returned the same first page.

Hypotheses before fixes

I described the bug. Instead of jumping to a fix, it generated hypotheses:

  • The JSON field name might be incorrect
  • The query might be missing required parameters
  • The tokens might not be getting passed through correctly

Then it added instrumentation to test each one and asked me to reproduce the bug.

Instrumentation

Cursor added debug logs like this to capture what it needed to test each hypothesis:

f, _ := os.OpenFile(debugLogPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
f.WriteString(fmt.Sprintf(`{"location":"search.go:142","message":"request_body","data":%s}`, requestJSON))
f.Close()

Then asked me to reproduce the bug and click proceed when ready.

Narrowing down

The first round of hypotheses was rejected entirely. So it generated new hypotheses, added more instrumentation, and ran again. Still rejected.

This went on for a few rounds. Each rejection narrowed the search space. The logs kept exposing more of what was actually happening at runtime.

The actual issue

Based on what the logs revealed, Cursor asked to check how the API expects pagination state to be passed for this specific operation.

Turns out the API had two different pagination mechanisms — one for regular queries, another for aggregations. We were using the right tokens but passing them through the wrong channel. The aggregation subsystem had its own contract for how continuation state gets passed back in.

Same data, different subsystem, different expectations. The logs showed the tokens being sent. The docs explained why they were being ignored.

Once the mismatch was clear, the fix was straightforward.

Wait for proof

Cursor didn't remove instrumentation until the fix was verified:

Before: page 1, page 1, page 1
After: page 1, page 2, page 3

No "it should work now." Actual proof.

What makes it different

The value isn't that AI is debugging for me — it's that the tool enforces discipline I already know I should follow:

  • Hypotheses before fixes
  • Instrumentation to capture evidence
  • Iteration until logs prove the fix
  • Documentation checks before assumptions

This is what good debugging looks like. Debug mode just makes it the default.