MoreRSS

Jacky Wong

Born in Yangzhou in 1997; front-end developer and music producer. Author of CWGI, 只言, R2 Uploader, and other projects.

Jacky Wong's RSS Preview

Problems with Vibe Coding pt.2

2026-04-18 08:00:00

So, last time I said vibe coding makes me stop thinking. Guess what? In certain cases, things are even worse.

## There is no flow state

Normally, when I go "cave mode", a programming session looks like this:

![A flow diagram showing a rough mental model leading to code, refine, and ship, with a sharper mental model feedback loop returning from refine toward the earlier stage.]()

One context, one head, uninterrupted flow. Now, with vibe coding, it looks like this:

![A flow diagram showing write prompt leading to wait, then review, with a loop through argue or re-prompt before moving on to ship.]()

In cave mode, building the mental model doesn't really stop at the rough mental model. I start with one, sure, and when I get stuck I go back to it. But I never leave the context. Every line I write tests an assumption, every edge case sharpens the mental model a bit more. Building the mental model and coding are the same thread.

Vibe coding cuts that thread. I'm not the one typing, so the only window where my mental model can grow is the plan. After that the agent goes off and writes in my name, and my understanding just stops while the codebase doesn't.

## The plan can't see what matters

On small, contained work, AI really is faster. But on anything inside a big monolith, the speedup vanishes and the work just changes shape: the part where I build the mental model is gone, replaced by review, argue, and frankly, cursing the shit out of it. Devs know 🤷‍♂️.

Like I said in pt.1, plan mode isn't upfront thinking. On the surface it looks like it is: the agent reads some files, gives me a plan, I approve, then I grab a coffee or something and let it cook. Chill as fuck, right? But that plan was built from whatever fit in the context window, and even today's LLM context windows are still severely limited. They can't hold a whole codebase, and some coding agents tend to load even less.
So the plan can't see the invariants in my head, the reasoning buried in commit messages, or the constraints discussed in Lark threads months ago. Once I approve, a wrong premise can quietly become the foundation.

## And review won't save you

```
src/api/handlers/orders.ts        | 892 +++++++++++++++++++++++-----------
src/services/inventory.ts         | 421 ++++++++++++++++++++--------------
src/db/migrations/0042_orders.sql | 287 ++++++++++++++++++++
src/components/Dashboard.tsx      | 248 +++++++++++++++--------
...
138 files changed, 4577 insertions(+), 3212 deletions(-)
```

That's what every argue cycle later is trying to correct, and the cost is higher than it looks, because I can't build the mental model at the same time. Prompt-writing and problem-modeling fight for the same brain space. Writing a prompt pulls what's in my head out into words, while building a mental model pulls the problem in. I can't do both at once, and every prompt-wait-output cycle wipes out a bit of the mental model I was holding.

And even if the mental model is somewhat clear, on a large codebase or a huge monolith I probably won't review the code line by line. In most cases I won't even look at a single line, yet I still hope everything works as planned. Then I deliver it to the test team, and there goes another nightmare.

## AI is a junior, and stays a junior

It clicked when I started thinking of AI as a junior. Except this junior writes way more lines than any human, never asks when in doubt, makes things up confidently, doesn't remember what I taught yesterday, and doesn't reflect on mistakes.

Teaching a junior is tiring, but at least it goes somewhere. They grow up, and after a year they're part of the system rather than a load on it. Teaching AI goes nowhere: every session starts cold, every fix gets re-broken next week, and the review load that would normally spread across ten juniors all lands on one senior.
## The team ceiling becomes the AI ceiling

If the senior is the only one still holding the mental model, and the senior gets ground down by argue-loops, the team's correction capacity drops to whatever the AI itself can do. Which is: surface-level pattern matching on whatever happens to be in the context window.

AI doesn't see invariants. It can't see why a try/catch was added after an incident two years ago, or why a field is non-nullable because a downstream pipeline crashes if it isn't, or why two endpoints exist for what looks like the same data because each serves a different client. These invariants live in heads, in Lark, in commit messages, in the people who left last quarter. AI plans look reasonable and miss them anyway. Then the plan gets approved, shipped, and rewritten three months later by a different agent that also doesn't see them.

![A screenshot of a large system diagram with many service nodes and connections, representing the kind of real-world complexity a plan is supposed to account for.]()

I've watched modules go through three "definitive" rewrites in a year for this exact reason. Every rewrite was locally correct, every rewrite was globally wrong, and none of them talked to each other.

---

## We're trapped

The cost of vibe coding isn't the bad code it produces; bad code is recoverable. The cost is what happens to me while supervising it. Reflection time gets eaten, mental models stop forming, and six months in, the senior is busier and producing shallower work, while the team's actual ceiling is silently dropping toward the coding agent's.

I know there's a balance to strike with this stuff, and I just need time to find it.

Problems with Vibe Coding

2026-03-25 08:00:00

The more I use coding agents at work, the more I notice one thing: they make you stop thinking too early. I am not saying that we should avoid using coding agents. I use these mfs a lot for bug fixes, rushed features, and dirty work. The speedup is real, and kinda insane, but something feels off.

## "Code first, think later"

In traditional development, the order is pretty clear. You think about the interface, the data structure, boundary cases, and tests first, and then you write code. Slower, definitely, but at least you have a whole picture in your head and a much better chance of knowing why the thing works.

Now it is the opposite. You ask a coding agent to build on top of an existing project, and let me guess, the prompt is gonna be something like:

> "Hey my boss just told me to do this, implement it."

Then you drop in some markdown or whatever docs look like a feature request, wait for a while, and boom, the page renders, the API responds. It works.

Once it works, you probably will not step back and look at the whole thing the coding agent just generated. Then these so-called feature requirements or bug fixes keep coming one by one, again and again, and congrats, now you have a "shit mountain".

| Code base size | Human-led AI coding | Pure vibe coding |
| --- | --- | --- |
| Very small | 12 | 7 |
| Small | 22 | 12 |
| Medium | 32 | 24 |
| Large | 42 | 56 |
| Very large | 52 | 94 |

## Plan mode != Upfront thinking

Upfront thinking is not just writing a todo list or turning on plan mode in a coding agent. Real upfront thinking is modeling. What is the core object? Which values are real state and which are derived? Where is the module boundary? If we get three similar requirements next month, can this shape absorb them? At which layer should the tests exist? And at which point will this system break?

This part is not sexy. It kills your brain cells like hell, and it will not show up in a weekly report. But it is what makes engineering hold up over time.
## Technical debt, still paid by humans

Current coding agents are pretty much all LLM-based. They all have context limits, which means they forget things. Today you patch a module and push one commit that looks like `+18914 / -7986`. Tomorrow you say, "we can clean it up later." But cleanup rarely happens, especially when you are working in a team. The same module might get reused by other humans or coding agents in no time. And nobody will know why something exists or why a simple refactor breaks everything after just a few weeks.

Then you start using a coding agent to debug it. It reads the git history, sees the commit author, and says, "huh, it was you!" 🤡 Nah fuck off.

## Old-school still teaches

I keep seeing people online claiming that zero basics and a few weeks of vibe coding are enough to build production-ready apps. Some even have the nerve to walk into interviews like that 🤷. I do not deny that people learn fast, but getting something to run and building a system are two different things.

Interface design, data modeling, complexity control, and technical-debt intuition haven't become outdated. If anything, they matter even more now. Implementation is cheaper, so the decisions that keep a system clean are more valuable. Coding agents can write a lot of code, but they do not know what to keep simple, what not to do, and what will be painful to change later.

## Final thoughts

I will keep using these tools. They are too useful to ignore. But the easier it becomes to say "just make it work first", the more I need to remind myself not to skip the thinking part.

Possibly the Last Time I Change Blog Engines

2026-03-18 08:00:00

The timeline is worth recording:

- 2017: PHP
- 2018: Jekyll
- 2019: Hexo
- 2024: Astro
- 2026: Self-Built

This didn't happen suddenly. Over the past few months, if you could see this blog repo's commit history, you'd notice I've been deleting things the whole time: unnecessary styles, unnecessary dependencies, unnecessary middle layers. Last week I even ripped out Tailwind along with the rest (supporting the layoffs 🤡). By the end of all that deleting, I realized the biggest layer was still there: the framework itself. Having taken subtraction this far, patching around the framework no longer made sense, so I got rid of the framework too.

So the blog moved from Astro to a self-built engine, with Bun underneath.

## Performance

Running the same build on the same machine, the Astro version versus the current engine:

| Metric | Astro | Self-Built | Delta |
| --- | --- | --- | --- |
| node_modules | 461 MB | 243 MB | -47% |
| Build Time | 12.1s | 702ms | -94% |
| Build Output | 1.9 MB | 1.8 MB | -4% |
| Homepage Size (brotli) | 17.9 KB | 9.7 KB | -46% |
| Homepage Files | 5 | 3 | -40% |

The local build used to take 1.6 seconds. Then I removed HTML minification entirely: the built HTML grew by 3.5%, but the full build time dropped from 1.6 seconds to around 700 milliseconds. If I also pulled Shiki out completely, the build would take about 110ms (a 99% reduction in build time compared to Astro), but then the blog would lose its nice code highlighting, and 700ms is perfectly acceptable (tsundere face). On Cloudflare Workers, the Astro version's build stage took about 28 seconds; this self-built engine takes only 2.

The dependency list in `package.json` also shrank a lot. `dependencies + devDependencies` went from `25` entries to `11`; counting only runtime `dependencies`, there are just 3, two of which aren't even front-end related.

```json
{
  "dependencies": {
    "@upstash/qstash": "^2.9.0",
    "@upstash/redis": "^1.36.1",
    "pangu": "^7.2.0"
  },
  "devDependencies": {
    "@biomejs/biome": "^2.3.13",
    "@types/bun": "^1.3.5",
    "chalk": "^5.6.2",
    "dotenv": "^17.2.3",
    "enquirer": "^2.4.1",
    "markdown-it": "^14.1.0",
    "shiki": "^4.0.2",
    "wrangler": "^4.70.0"
  }
}
```

These numbers say something very direct: on my blog, Astro was doing a lot of work I simply don't need.

The new engine is actually simple: `markdown-it` parses the Markdown, `shiki` handles code highlighting, template functions assemble the pages, `Bun.serve()` powers local development, and a build script emits the static files. No Vite, no Rollup, no hydration, no extra content system.

Another very practical change is that the build output finally became predictable. The homepage used to look like a single page on the surface, but behind it sat an island runtime, renderer chunks, and shared chunks, so the real size was never obvious. Now it's much more direct: the homepage is plainly three files, HTML, CSS, and JS, with no other runtime hiding behind them.

Along the way this also fixed something that had always been awkward in the Astro era: `atom.xml`. MDX content couldn't flow naturally into the feed, so the feed was always a separately maintained side branch: custom components had to be converted by hand, HTML needed extra sanitizing, URLs needed extra patching. Now the content itself is Markdown, the feed consumes Markdown directly, and only custom content blocks degrade to a Markdown-friendly version. How a page renders and how the feed degrades are decided by the same interpreter, instead of one set of logic for the content and a second one quietly growing inside the RSS.

## Motivation

I can feel it more and more clearly: AI is already changing the cost structure of abstraction layers. In a low-complexity scenario like a blog, many of the engineering benefits frameworks provide are no longer the bargain they used to be.

This rewrite itself was mostly done by Codex. It took about 3 hours to rewrite the whole site from scratch. At the source level, the change added roughly `6888` lines of code and deleted `6344`. That made me rethink the value of frameworks. The trade used to make sense: give up a little performance for an engineering structure that's easier to maintain. Template systems, component models, routing conventions, content schemas: these things essentially exist to help humans understand and modify code more reliably.

But AI coding breaks that balance. To a coding agent like Codex, a hand-written HTML template function is no harder to understand than an Astro component. It can read straight down through the Markdown, templates, styles, and scripts, then change the specific piece. Many abstractions that existed to "lower the human maintenance cost" are less necessary in this scenario than they used to be.

That doesn't mean frameworks are useless. Quite the opposite: with AI around, I feel the problems a framework should actually solve have become clearer. Not inventing an even fancier template syntax, but nailing down boundaries, constraints, validation, caching, and output organization. AI can learn syntax sugar quickly; unclear boundaries, unpredictable build output, and degradation handled entirely by patches are the real problems. For complex applications, multi-person collaboration, and long-lifecycle products, frameworks are still worth it.

## Finally

So this time I probably really won't have to switch again. The system is now simple enough that it isn't worth fiddling with anymore, and what actually deserves attention next is the blog's content.

Finally, please enjoy this graceful build output ⚡:

![video]()
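As an aside, the "template functions" mentioned above are literally just functions from strings to strings. Here is a minimal sketch of the idea (hypothetical names like `layout` and `renderPost`, not the blog's actual source):

```typescript
// Minimal sketch of a template-function page builder.
// `layout` and `renderPost` are hypothetical names, not the real API.

type Post = { title: string; date: string; bodyHtml: string };

// One plain function per layout: string in, string out, nothing else.
function layout(title: string, body: string): string {
  return `<!doctype html><html><head><title>${title}</title></head><body>${body}</body></html>`;
}

function renderPost(post: Post): string {
  const article = `<article><h1>${post.title}</h1><time>${post.date}</time>${post.bodyHtml}</article>`;
  return layout(post.title, article);
}

const html = renderPost({ title: "Hello", date: "2026-03-18", bodyHtml: "<p>hi</p>" });
console.log(html.includes("<h1>Hello</h1>")); // true
```

The appeal is that there is nothing to configure and nothing to hydrate: a build script can map every Markdown file through a renderer like this and write the result straight to disk.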

Desktop notifications for Codex CLI and Claude Code

2026-03-10 08:00:00

## Context

This setup was tested on my own machine with:

- `Codex CLI 0.113.0`
- `Claude Code 2.1.72`
- `macOS 26.3.1 (25D2128)`
- `arm64` Apple Silicon

![A macOS notification from Codex CLI with the subtitle Notification setup]()

---

## Start with Claude Code's official setup

`Claude Code` already documents the two parts you need:

- terminal notifications and terminal integration
- hooks for `Notification` and `Stop`
- [Hooks reference](https://code.claude.com/docs/en/hooks)
- [Hooks guide](https://code.claude.com/docs/en/hooks-guide)
- [Terminal config](https://code.claude.com/docs/en/terminal-config)

That is the right place to start. On macOS, the most obvious first implementation is also the simplest one: a tiny `osascript` wrapper.

File: `$HOME/.claude/notify-osascript.sh`

```bash
#!/bin/bash
set -euo pipefail

MESSAGE="${1:-Claude Code needs your attention}"
osascript -e "display notification \"$MESSAGE\" with title \"Claude Code\"" >/dev/null 2>&1 || true
```

And wire it into Claude's hooks:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify-osascript.sh 'Task completed'" }
        ]
      }
    ],
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify-osascript.sh 'Claude Code needs your attention'" }
        ]
      }
    ]
  }
}
```

This worked, but only technically.

- Clicking the notification did not cleanly bring me back to the terminal app.
- There was no grouping, so notifications piled up.
- Once terminal-native notifications entered the picture, especially in `Ghostty`, duplicate alerts got annoying.

That was the point where `terminal-notifier` became the better base layer.
## Why I switched to terminal-notifier

The official repo is here:

- [julienXX/terminal-notifier](https://github.com/julienXX/terminal-notifier)

Install it with Homebrew:

```bash
brew install terminal-notifier
```

Then verify it:

```bash
which terminal-notifier
terminal-notifier -help | head
```

The three features that made it worth switching:

- `-activate`, so clicking the notification can bring my terminal app to the front
- `-group`, so I can keep one live notification per project instead of stacking old ones
- better control over subtitle, sound, and macOS notification-center behavior

---

## A shared notification helper

Before touching either tool, create one shared helper:

```bash
mkdir -p "$HOME/.local/bin"
```

File: `$HOME/.local/bin/mac-notify.sh`

```bash
#!/bin/bash
set -euo pipefail

TITLE="${1:?title is required}"
MESSAGE="${2:-}"
SUBTITLE="${3:-}"
GROUP="${4:-}"
SOUND="${5:-Submarine}"

# Map the current terminal to its macOS bundle id for click-to-focus.
case "${TERM_PROGRAM:-}" in
  ghostty) BUNDLE_ID="com.mitchellh.ghostty" ;;
  iTerm.app) BUNDLE_ID="com.googlecode.iterm2" ;;
  Apple_Terminal) BUNDLE_ID="com.apple.Terminal" ;;
  vscode) BUNDLE_ID="com.microsoft.VSCode" ;;
  cursor) BUNDLE_ID="com.todesktop.230313mzl4w4u92" ;;
  zed) BUNDLE_ID="dev.zed.Zed" ;;
  *) BUNDLE_ID="" ;;
esac

# Prefer terminal-notifier if it is installed.
TERMINAL_NOTIFIER=""
if [ -x /opt/homebrew/bin/terminal-notifier ]; then
  TERMINAL_NOTIFIER="/opt/homebrew/bin/terminal-notifier"
elif command -v terminal-notifier >/dev/null 2>&1; then
  TERMINAL_NOTIFIER="$(command -v terminal-notifier)"
fi

if [ -n "$TERMINAL_NOTIFIER" ]; then
  ARGS=( -title "$TITLE" -message "$MESSAGE" -sound "$SOUND" )
  if [ -n "$SUBTITLE" ]; then
    ARGS+=(-subtitle "$SUBTITLE")
  fi
  if [ -n "$GROUP" ]; then
    ARGS+=(-group "$GROUP")
  fi
  if [ -n "$BUNDLE_ID" ]; then
    ARGS+=(-activate "$BUNDLE_ID")
  fi
  "$TERMINAL_NOTIFIER" "${ARGS[@]}"
  exit 0
fi

# Fallback: escape backslashes and quotes, then use osascript.
SAFE_MESSAGE="${MESSAGE//\\/\\\\}"
SAFE_MESSAGE="${SAFE_MESSAGE//\"/\\\"}"
SAFE_SUBTITLE="${SUBTITLE//\\/\\\\}"
SAFE_SUBTITLE="${SAFE_SUBTITLE//\"/\\\"}"
osascript -e "display notification \"$SAFE_MESSAGE\" with title \"$TITLE\" subtitle \"$SAFE_SUBTITLE\" sound name \"$SOUND\"" >/dev/null 2>&1 || true
```

Make it executable:

```bash
chmod +x "$HOME/.local/bin/mac-notify.sh"
```

I scope notification groups by tool and project, not by message. That gives me one live `Claude Code` notification and one live `Codex CLI` notification per repo instead of a growing stack.

### How click-to-focus works

The key line is:

```bash
-activate "$BUNDLE_ID"
```

`terminal-notifier` accepts a macOS bundle id and activates that app when the notification is clicked. I map the common values from `TERM_PROGRAM`:

- `com.mitchellh.ghostty`
- `com.googlecode.iterm2`
- `com.apple.Terminal`
- `com.microsoft.VSCode`
- `com.todesktop.230313mzl4w4u92` for Cursor
- `dev.zed.Zed`

This does not target one exact split or tab. It just brings the app to the front, which is good enough for this workflow.

---

## Claude Code: attention notifications and completion notifications

I split notifications into two categories:

- `Notification`: Claude needs me to do something, like approve a permission request or answer a prompt
- `Stop`: the main agent finished responding

### Claude notification script

File: `$HOME/.claude/notify.sh`

```bash
#!/bin/bash
set -euo pipefail

MESSAGE="${1:-Claude Code needs your attention}"
PROJECT_DIR="${PWD:-$HOME}"
PROJECT_NAME="$(basename "$PROJECT_DIR")"
[ "$PROJECT_NAME" = "/" ] && PROJECT_NAME="Home"

# Stable per-project group id derived from the project path.
PROJECT_HASH="$(printf '%s' "$PROJECT_DIR" | shasum -a 1 | awk '{print $1}' | cut -c1-12)"
GROUP="claude-code:${PROJECT_HASH}"

"$HOME/.local/bin/mac-notify.sh" "Claude Code" "$MESSAGE" "$PROJECT_NAME" "$GROUP"
```

```bash
chmod +x "$HOME/.claude/notify.sh"
```

### Claude hooks configuration

File: `$HOME/.claude/settings.json`

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Task completed'" }
        ]
      }
    ],
    "Notification": [
      {
        "matcher": "permission_prompt",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Permission needed'" }
        ]
      },
      {
        "matcher": "idle_prompt",
        "hooks": [
          { "type": "command", "command": "$HOME/.claude/notify.sh 'Waiting for your input'" }
        ]
      }
    ]
  }
}
```

If you do not care about different notification types, an empty matcher `""` is enough.

One detail worth remembering: Claude snapshots hooks at startup. If changes do not seem to apply, restart the session. Also check macOS notification permissions if nothing shows up.

---

## Codex CLI: completion notifications

For `Codex CLI`, the mechanism is not `hooks`. It is `notify`. Official docs:

- [Advanced Configuration](https://developers.openai.com/codex/config-advanced)
- [Configuration Reference](https://developers.openai.com/codex/config-reference)

As of `2026-03-10`, Codex documents external `notify` for supported events like `agent-turn-complete`. So in practice:

- completion notifications: yes
- Claude-style permission notifications through the same external script: no

Approval reminders in Codex are a separate `tui.notifications` problem.
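Before looking at the full notify script in the next section, its decision step can be reduced to one pure function. This is only a sketch: `shouldNotify` is a hypothetical name, and the payload fields mirror what the script reads, not a documented public schema.

```typescript
// Sketch of the skip logic the Codex notify script applies.
// Field names follow what the script reads from the notify payload;
// this is NOT a documented public schema.

type NotifyPayload = {
  type?: string;
  client?: string;
  "thread-id"?: string;
  cwd?: string;
};

function shouldNotify(p: NotifyPayload, originator = ""): boolean {
  // Only completion events produce a notification.
  if (p.type !== "agent-turn-complete") return false;
  // Fast path: skip obvious app-like clients.
  const client = (p.client ?? "").trim().toLowerCase();
  if (client && (client.includes("app") || client === "appserver")) return false;
  // Fallback: skip sessions that originated from Codex Desktop.
  if (originator === "Codex Desktop") return false;
  return true;
}

console.log(shouldNotify({ type: "agent-turn-complete", cwd: "/tmp/demo" })); // true
```

Everything that passes this gate gets forwarded to the shared `mac-notify.sh` helper.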
### Codex notify script

File: `$HOME/.codex/notify.sh`

```bash
#!/bin/bash
set -euo pipefail

PAYLOAD="${1:-}"
[ -n "$PAYLOAD" ] || exit 0

python3 - "$PAYLOAD" <<'PY'
import json
import pathlib
import sqlite3
import subprocess
import sys
import zlib
from datetime import datetime, timezone

CODEX_HOME = pathlib.Path.home() / '.codex'


def log_skip(reason: str, payload: dict, **extra: object) -> None:
    # Append one JSON line per skipped event for later debugging.
    log_path = CODEX_HOME / 'notify-filter.log'
    data = {
        'ts': datetime.now(timezone.utc).isoformat(),
        'reason': reason,
        'client': payload.get('client'),
        'thread-id': payload.get('thread-id'),
        'cwd': payload.get('cwd'),
    }
    data.update(extra)
    with log_path.open('a', encoding='utf-8') as fh:
        fh.write(json.dumps(data, ensure_ascii=True) + '\n')


def get_thread_originator(thread_id: str) -> tuple[str, str]:
    # Look up the session's originator from local Codex state.
    db_path = CODEX_HOME / 'state_5.sqlite'
    if not db_path.exists():
        return '', ''
    try:
        with sqlite3.connect(db_path) as conn:
            cur = conn.cursor()
            cur.execute('select rollout_path, source from threads where id = ?', (thread_id,))
            row = cur.fetchone()
    except Exception:
        return '', ''
    if not row:
        return '', ''
    rollout_path, source = row
    if not rollout_path:
        return '', source or ''
    try:
        first_line = pathlib.Path(rollout_path).read_text(encoding='utf-8', errors='ignore').splitlines()[0]
        payload = json.loads(first_line).get('payload', {})
    except Exception:
        return '', source or ''
    return (payload.get('originator') or '').strip(), source or ''


try:
    payload = json.loads(sys.argv[1])
except Exception:
    raise SystemExit(0)

if payload.get('type') != 'agent-turn-complete':
    raise SystemExit(0)

# Fast path: skip obvious app-like clients.
client = (payload.get('client') or '').strip().lower()
if client and ('app' in client or client == 'appserver'):
    log_skip('skip-app-client', payload)
    raise SystemExit(0)

# Fallback: skip sessions that originate from Codex Desktop.
thread_id = (payload.get('thread-id') or '').strip()
if thread_id:
    originator, source = get_thread_originator(thread_id)
    if originator == 'Codex Desktop':
        log_skip('skip-desktop-originator', payload, originator=originator, source=source)
        raise SystemExit(0)

cwd = payload.get('cwd') or ''
subtitle = pathlib.Path(cwd).name if cwd else 'Task completed'
message = (payload.get('last-assistant-message') or 'Task completed').replace('\n', ' ').strip()
if not message:
    message = 'Task completed'

# Stable per-project group id derived from the working directory.
if cwd:
    group = 'codex-cli:' + format(zlib.crc32(cwd.encode('utf-8')) & 0xFFFFFFFF, '08x')
else:
    group = 'codex-cli:' + (payload.get('thread-id') or 'default')

subprocess.run(
    [
        str(pathlib.Path.home() / '.local' / 'bin' / 'mac-notify.sh'),
        'Codex CLI',
        message[:180],
        subtitle,
        group,
    ],
    check=False,
)
PY
```

```bash
chmod +x "$HOME/.codex/notify.sh"
```

### Codex config

File: `$HOME/.codex/config.toml`

```toml
notify = ["/Users/you/.codex/notify.sh"]
```

Use any absolute path you want. I keep the script under `~/.codex/`.

---

## If you use Ghostty, disable terminal-native desktop notifications

I hit one more annoying edge case in `Ghostty`: duplicate notifications. What happened was:

- my script sent a notification through `terminal-notifier`
- `Ghostty` also surfaced a terminal-native desktop notification

That produced two macOS notifications for one event. On my machine, the clean fix was to keep `terminal-notifier` as the only notification channel and disable Ghostty's terminal-native desktop notifications:

File: `~/Library/Application Support/com.mitchellh.ghostty/config`

```plaintext
desktop-notifications = false
```

Why I prefer this setup:

- `terminal-notifier` gives me `-activate`, so click-to-focus still works
- `terminal-notifier` gives me `-group`, so notifications stay scoped per project
- both `Claude Code` and `Codex CLI` behave the same way

Ghostty's config docs describe `desktop-notifications` as the switch that lets terminal apps show desktop notifications via escape sequences such as `OSC 9` and `OSC 777`. Turning it off avoids the extra notification layer.

---

## If you also use Codex App

This is the part that bit me. At first I assumed filtering by the `client` field would be enough. It was not.
On my machine, some sessions started from `Codex App` looked like this in local session metadata:

```json
{
  "originator": "Codex Desktop",
  "source": "vscode"
}
```

That creates a duplicate-notification problem:

- Codex App shows its own notification
- the local CLI `notify` script can still fire
- I get duplicate notifications for the same task

So the script does two things:

1. fast path: skip obvious app-like `client` values
2. fallback: read `thread-id` from the `notify` payload, query `~/.codex/state_5.sqlite`, load the first `session_meta` line, and skip if `originator == "Codex Desktop"`

That is why the script above checks local thread metadata instead of trusting only `client`. I also log skipped events to:

```text
~/.codex/notify-filter.log
```

That makes debugging much easier if Codex changes its session metadata format later.

> This part is based on observed local behavior, not on a stable public contract from the docs. If OpenAI changes how Codex App identifies local sessions in future versions, the filter may need a small update.

---

## References

- [OpenAI Codex Advanced Configuration](https://developers.openai.com/codex/config-advanced)
- [OpenAI Codex Configuration Reference](https://developers.openai.com/codex/config-reference)
- [Anthropic Claude Code Hooks Reference](https://code.claude.com/docs/en/hooks)
- [Anthropic Claude Code Hooks Guide](https://code.claude.com/docs/en/hooks-guide)
- [Anthropic Claude Code Terminal Configuration](https://code.claude.com/docs/en/terminal-config)
- [terminal-notifier](https://github.com/julienXX/terminal-notifier)

Dating App Sucks Pt.2

2026-03-03 08:00:00

Ok here we go again. I think I've finally figured out the scariest thing about dating apps. They actually turn finding love into a fucking job search.

> Every date feels like a business meeting or something, no sparks, pure cringe.

Think about it: we fill out our "resumes" with our best photos and wittiest bios. We list our "desired positions" in the filters. We swipe through "candidates" hoping to get a "decent offer". The whole thing is an HR pipeline with better lighting.

But love is the exact opposite of a job search, which follows logic. Love? Personally, I think there is no logic in love. Love is a bias, a fucking tyranny. The bias is that you only want one specific person to do the things literally anyone could do. The tyranny is that you pour all your emotions, irrationally, recklessly, entirely onto another human being.

And dating apps have always given me this weird feeling: love obtained through this process feels so bland it's almost offensive. If I were a planet, this whole approach would be like some engineer calculating the perfect speed, angle, and mass, then launching another planet at precisely the right time so we'd form a nice, stable binary system. How romantic. How efficient, how abso-fucking-lutely dead inside.

What I want is a rogue planet hurtling toward me at full speed out of nowhere in the middle of the void. The moment we touch, atoms from two entirely separate worlds are forced into lattices they were never meant to share. Molecular bonds snap, shatter, and reform into something unrecognizable. The pressure breeds temperatures that fuse nuclei into heavy, unnamed elements no periodic table has ever seen, existing for a few picoseconds before decaying into something else entirely. Oceans of molten rock erupt outward, entire crusts peeled off like skin, shockwaves rippling through mantles at speeds no device could ever measure. What used to be two worlds is now a single, blinding wound in space.
Some debris escapes into strange new orbits. The rest? It fuses together so tightly that nothing, not time, not entropy, can pull it apart, until our one last atom is annihilated in the heat death of the universe.

I'm not saying dating apps are pure evil. You could still meet someone real on there; the odds exist. But what's truly terrifying about these things is that they teach you how NOT to invest. Everyone on there wants low-risk love, a guaranteed return with minimal downside. But since when has that ever been how love works?

I've seen people around me become professional swipers. Always chatting, always with girls around them. And then what? This one's family background isn't great. That one's not pretty enough. Another one said something weird at dinner that gave them the "ick". Next. Next. Next. Bro, stop cosplaying a fucking conveyor belt.

Being overly rational in love is a slow way to lose everything. The second anything feels slightly off, they're gone. No friction allowed. But no friction means no sparks either. They end up like the man in the wheat field parable attributed to Socrates: walking through the field, always convinced a bigger stalk is just ahead, waiting, but never actually picking one. And the field does end. It very much does end.

Uninstalled.