2026-04-19 05:04:52
Korea has a real estate problem. Not in the market — in the data.
Naver Real Estate (land.naver.com) is South Korea's dominant property platform. Millions of Koreans check it before every apartment decision: buying, renting, investing. It's where prices are listed, where transactions happen, where the market shows its face.
But there's no official API.
Not restricted. Not paid. Not deprecated. Non-existent.
While mapping the competitive landscape for Korean data scrapers on Apify, I found exactly one actor for Naver Real Estate. One developer had built it, priced it at $3/1,000 results, and made it available.
It was marked deprecated. Last modified about a month ago.
Here's the part that stopped me: 3 users were still running it monthly.
That's not a failed product. That's demand outliving supply. Three people needed Korea's real estate data badly enough to keep trying a broken tool rather than give up.
The use cases are real and high-value:
For investors: Korean apartment prices move fast. The Gangnam dip, the Mapo surge — if you're tracking price trends across districts, you need data at scale, not manual lookups.
For researchers and journalists: Korea's housing market is a major economic indicator. Supply/demand ratios, transaction velocity, price-per-square-meter by neighborhood — this is the kind of data economists need.
For real estate agents and PropTech: Automated market reports, pricing alerts, comparables. The data exists on Naver, but there's no programmatic way to get it.
The demand is there. The supply just disappeared.
Naver Real Estate doesn't offer an API. But like many Korean platforms, it exposes structured JSON endpoints behind its frontend — just not officially documented.
The pattern looks something like this:
# Search complexes by region
GET https://new.land.naver.com/api/complexes/single-markers/2.0
?cortarNo={district_code}&realEstateType=APT&tradeType=A1
# Get complex details
GET https://new.land.naver.com/api/complexes/{complexNo}
# Get listings for a complex
GET https://new.land.naver.com/api/complexes/{complexNo}/articles
The data structure returned is rich: complex name, total households, latitude/longitude, and then per-listing: property type, trade type (sale/jeonse/monthly rent), price, exclusive area, floor, direction.
The catch: Korean proxy required. Naver aggressively blocks non-Korean IPs. And the district codes follow a specific hierarchical system (법정동코드) that requires its own mapping layer.
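For context, the 법정동코드 (legal district code) is commonly described as a 10-digit hierarchical code: 2 digits for the province or metropolitan city, 3 for the district, 3 for the neighborhood, and 2 for the village. A sketch of a parser under that assumption (the example code value is illustrative, not verified against Naver's data):

```python
def parse_cortar_no(code: str) -> dict:
    # Assumed layout: SS GGG DDD RR
    # SS = sido (province/city), GGG = sigungu (district),
    # DDD = eupmyeondong (neighborhood), RR = ri (village, 00 if none)
    assert len(code) == 10 and code.isdigit()
    return {
        "sido": code[:2],
        "sigungu": code[2:5],
        "eupmyeondong": code[5:8],
        "ri": code[8:],
    }

# Illustrative example: a Seoul / Gangnam-gu style code
print(parse_cortar_no("1168010300"))
```

The mapping layer mentioned above is essentially this: given a code, you can walk up the hierarchy by zeroing out the lower segments.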
The interesting design challenge here isn't the API calls — it's the search model.
Most scrapers work with a flat search: query → results. Naver Real Estate is hierarchical: district (cortarNo) → complex (complexNo) → listings.
This two-step architecture means the actor needs to chain requests: resolve the district code first, then enumerate complexes within its boundary, then fetch listings per complex.
It's more complex than most of my existing actors. But the infrastructure is already there — I've been building Korean scrapers for months.
I went from endpoint mapping to deployed actor in 48 hours.
The MVP takes GPS coordinates as input:
{
"lat": 37.3595704,
"lon": 127.105399,
"zoom": 16
}
Internally, it:
1. Calls /api/cortars to get the administrative district code (cortarNo) and boundary polygon for the given location
2. Calls /api/complexes/single-markers/2.0 with all the right parameters

The cortarNo → bbox relationship is the critical piece. Naver's API requires both to match — you can't use a generic bounding box, you need the exact polygon for the specific district the coordinates fall in.
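The two-step flow sketched in Python. Endpoint paths are the ones above; the query parameter names for /api/cortars (centerLat, centerLon, zoom) are assumptions, not documented values:

```python
BASE = "https://new.land.naver.com/api"

def cortar_params(lat: float, lon: float, zoom: int = 16) -> dict:
    # Step 1: resolve coordinates to a district (cortarNo + boundary polygon).
    # Parameter names here are assumed, not confirmed.
    return {"centerLat": lat, "centerLon": lon, "zoom": zoom}

def marker_params(cortar_no: str) -> dict:
    # Step 2: list apartment complexes for that exact district.
    return {"cortarNo": cortar_no, "realEstateType": "APT", "tradeType": "A1"}

def fetch_complexes(lat: float, lon: float, zoom: int = 16):
    # Requires a Korean IP: Naver closes connections from foreign ranges.
    import requests  # third-party; pip install requests
    s = requests.Session()
    cortar = s.get(f"{BASE}/cortars", params=cortar_params(lat, lon, zoom)).json()
    return s.get(f"{BASE}/complexes/single-markers/2.0",
                 params=marker_params(cortar["cortarNo"])).json()
```

Without the proxy, fetch_complexes dies exactly the way described in the next section.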
Build succeeded. Docker image pushed. Actor live on Apify.
Then I ran it.
Navigation timed out after 60 seconds
net::ERR_CONNECTION_CLOSED
Naver Real Estate blocked the request immediately. Not a rate limit — a hard block on the first connection.
The reason: Apify runs its infrastructure in US data centers. Naver aggressively geo-blocks non-Korean IPs. No session, no cookie, no data.
I knew this going in — I'd documented it during the feasibility analysis. But knowing it and hitting the wall are different things. The actor built cleanly, the code compiled, the Docker image was ready. Then three lines of proxy configuration stood between a working scraper and a blocked connection.
The fix is straightforward: add Apify's Korean Residential Proxy to the crawler configuration. Three lines:
const proxyConfiguration = await Actor.createProxyConfiguration({
groups: ['RESIDENTIAL'],
countryCode: 'KR',
});
Residential proxy costs money per GB. Worth it for real estate data — but it's a cost decision, not a code decision.
Planned pricing once running: $5-8/1,000 results. Real estate data is worth more than news or place search.
This isn't unique to real estate.
I've seen this pattern three times now in the Korean data space:
naver-news-scraper: I built it. It now runs 10,000+ times a month. Most users are automating news monitoring — they run it constantly because Korean news data decays fast.
naver-place-search: I built it. Users run it 30x per month on average. Point-in-time lookups for local business data.
naver-land-scraper (the deprecated one): Someone built it. Even broken, 3 people a month needed it.
The pattern is: Korean data exists, official API doesn't, scraper fills the gap, demand follows.
The actor is built. The build passes. The only thing standing between this and a working product is three lines of proxy configuration and a cost decision.
Once the Korean proxy is enabled, I'll run the validation test: lat=37.3595704, lon=127.105399 (Seongnam, Bundang-gu) should return 21 apartment complexes. That's my acceptance criterion.
After that: expand from coordinate input to district name input, add pagination for large districts, and iterate on pricing data coverage.
If you're tracking Korean real estate data — or know someone who is — I'd love to hear what data you actually need. Drop it in the comments.
I build Korean data APIs on Apify — news, places, real estate, and more. View my actors.
2026-04-19 04:52:03
My Auth0 bill last month was $427. For 12,000 monthly active users on a side project that makes roughly $0 in revenue. I spent a Saturday moving off it. This is what happened.
Auth0's pricing jumps at the 1,000 MAU line. My project crossed it in March. The next tier is $240/month. Then I turned on MFA, which is extra. Then I wanted SAML for a B2B customer, which is extra. Then log retention went up because I needed 30 days for a compliance thing, which is extra.
$427. For a login button.
I like Auth0 as a product. I have shipped it at two previous companies and it was fine. What I did not like was paying rent on a feature I could own.
I made a list before I started, so I would not spend a weekend building features I did not use.
What I did not need, despite Auth0 charging for it:
So the real question was: is there a library that covers the first list and lets me run it on my own Cloudflare Workers without rebuilding everything from scratch?
I looked at four things over maybe an hour.
Lucia got archived in March 2025. I did not want to adopt something the author shut down, even if the code still works.
Better Auth looked decent but ships a lot of plugins I did not want and the docs for its Cloudflare Workers story were thin when I checked.
NextAuth/Auth.js works but is tightly coupled to Next.js, and I needed this to cover a separate agent SDK as well.
kavachOS was the new one I had not tried. It does auth for humans and AI agents in the same library, runs on Workers, and the README had a "migrating from Auth0" page. That is the one I picked.
I timed it. 32 minutes from pnpm add kavachos to a working login on localhost.
pnpm add kavachos @kavachos/hono
I use Hono on Workers. There are adapters for Next.js, Express, Fastify, and a few others.
// src/auth.ts
import { kavachos } from "kavachos";
import { honoAdapter } from "@kavachos/hono";
export const auth = kavachos({
adapter: honoAdapter(),
database: process.env.DATABASE_URL!,
session: {
expiresIn: "30d",
rolling: true,
},
providers: {
password: { minLength: 12 },
magicLink: { tokenTTL: "10m" },
google: {
clientId: process.env.GOOGLE_CLIENT_ID!,
clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
},
github: {
clientId: process.env.GITHUB_CLIENT_ID!,
clientSecret: process.env.GITHUB_CLIENT_SECRET!,
},
},
email: {
provider: "resend",
apiKey: process.env.RESEND_API_KEY!,
from: "[email protected]",
},
});
// src/index.ts
import { Hono } from "hono";
import { auth } from "./auth";
const app = new Hono();
app.route("/auth", auth.routes);
app.get("/me", auth.required, (c) => {
const user = c.get("user");
return c.json({ user });
});
export default app;
That is everything the server needs. Login, signup, magic link, OAuth callbacks, session refresh, logout, password reset, email verification. All mounted under /auth/*.
The frontend was about the same amount of code. A useSession() hook, a login form, a magic link form, an OAuth button component. Roughly 120 lines total across 4 files.
kavachOS ships migrations. I ran:
pnpm kavachos migrate
and it created the tables. users, sessions, oauth_accounts, password_reset_tokens, magic_link_tokens, email_verification_tokens. Six tables. I looked at them, they were boring in the right way.
I had 12,000 users in Auth0. I needed to bring them over without forcing a password reset for everyone.
Auth0 exports users as JSON. The password hashes are bcrypt, which is what kavachOS uses internally, so in theory a direct copy would work.
# Export from Auth0 (this is an Auth0 built in job)
auth0 users export --format json --output users.json
# Import into kavachOS
pnpm kavachos import --from auth0 --file users.json
The second command does not exist. I wrote it that night after discovering the first one returns passwords as opaque hashes that Auth0 does not document the algorithm for. Or rather, they document it as "we use bcrypt" but the actual export format wraps the hash in a proprietary string.
This was the first thing that broke.
Auth0's export includes a passwordHash field on each user. The format looks like a bcrypt string but it is prefixed with Auth0's own identifier. kavachOS would not match on login.
Fix: I wrote a small shim in my auth config that recognizes the Auth0 prefix, strips it, and re-verifies against the underlying bcrypt. After a successful login the user's hash gets re-saved in the native format. Over a week, 80% of active users migrated silently. The rest I handled manually with a one time "we need you to reset your password" email.
// assumes a bcrypt implementation in scope, e.g. import bcrypt from "bcryptjs"
providers: {
password: {
minLength: 12,
verifyLegacy: async (hash, password) => {
if (hash.startsWith("$auth0$")) {
const actual = hash.replace(/^\$auth0\$/, "");
return bcrypt.compare(password, actual);
}
return false;
},
},
},
kavachOS has a verifyLegacy hook built in for this, which was nice. I did not have to patch the library.
I had cookies on auth.myproject.com from Auth0. kavachOS defaulted to myproject.com. This meant every logged in user got signed out at the cutover.
I picked up on this at 2am when my own login session died. Fix was a 10 minute config change plus an apology email.
session: {
expiresIn: "30d",
rolling: true,
cookie: {
domain: ".myproject.com",
secure: true,
sameSite: "lax",
},
},
Worth reading the session config docs before you deploy, not after.
Auth0 uses its own callback URL. My Google Cloud Console was configured for https://my-tenant.us.auth0.com/login/callback. kavachOS uses https://myproject.com/auth/callback/google.
30 second fix in the Google console, but it cost me an hour of "why is Google OAuth returning redirect_uri_mismatch" before I remembered.
The one unexpected win. My project has a handful of agent scripts that run on cron and need to hit the API as a specific user. With Auth0 I was using Machine to Machine tokens, which cost extra and have their own quota.
kavachOS has an agent identity primitive. You can mint an agent token, scope it to a user, give it permissions, and call the API. The docs are at kavachos.com/docs/agents. It took one line to wire up and replaced about $40/month of Auth0 M2M charges.
const agentToken = await auth.agents.issue({
userId: "user_123",
permissions: ["reports:read", "invoices:write"],
expiresIn: "90d",
});
This was the thing that made the migration actually worth it, not just a cost optimization.
Before: $427/month
After: $0/month for the library, ~$4/month for Neon Postgres that I was already paying for, ~$3/month for Resend.
If I ever want to stop running it myself, kavachOS has a managed cloud at kavachos.com with a generous free tier and metered pricing after that. I am not on it yet because running Workers + Postgres is cheap, but I like that the option exists.
I want to be fair. Things Auth0 does well that I gave up:
If your company is at the stage where any of those matter more than $5K/year, stay on Auth0. I am not at that stage.
Yes, and I did it again last week for another project. The second migration took 45 minutes, mostly because I already had the verifyLegacy shim written and the agent identity stuff working. If you are reading this and your auth bill looks wrong, the math has changed in the last year. There are real libraries now.
If you try kavachOS, the docs I used most were the quickstart, the Auth0 migration guide, and the Hono adapter docs. The repo is at github.com/kavachos/kavachos.
Is kavachOS production ready? I am running it on a real project with real users. Your call.
Why not just use Supabase auth? I did not want my auth tied to my database host. kavachOS works with any Postgres.
What about self hosting on Render or Railway? Works fine. The library does not care.
Did you consider Clerk or WorkOS? Clerk is good but more expensive than Auth0 at my tier. WorkOS is B2B focused and I am not that.
What if kavachOS gets abandoned like Lucia? The project is self hostable, the license is MIT, and the database schema is boring. If upstream went away tomorrow I would fork it and keep going. That is the whole point of picking open source over a managed service.
Following for the next one: I am writing these daily for the next two weeks. Tomorrow is magic link login in Next.js with no library, then with a library, with the actual diff. Hit follow if you want it.
If you have an Auth0 horror story, drop it in the comments. I will probably do a compilation post.
Gagan Deep Singh builds open source tools at Glincker. Currently working on kavachOS (open source auth for AI agents and humans) and AskVerdict (multi-model AI verdicts).
If this was useful, follow me on Dev.to or X where I post weekly.
2026-04-19 04:52:01
I'm building Jibun Kabushiki Kaisha — a 200-page Flutter Web SaaS — with Claude Code. On a $20/month plan, I run 3 specialized Claude Code instances in parallel to achieve roughly 10x the development throughput.
Each instance has a fixed responsibility:
| Instance | Dedicated Role | Why |
|---|---|---|
| VSCode | UI/design compliance (haiku-4.5) | Fast, cheap, visual tasks |
| PowerShell | CI/CD health + blog publishing | Quality-critical, pipeline focus |
| Windows App | AI University providers + migrations | Data-heavy, structured work |
Without coordination, all 3 instances push simultaneously:
PS push → deploy starts
VSCode push (5s later) → deploy CANCELLED → restart
Win push (3s later) → deploy CANCELLED → restart
→ 20+ minutes later: finally 1 successful deploy
This "deploy thrashing" wastes CI minutes, and the instances clobber each other's work.
Instead of direct communication, instances leave work requests in docs/cross-instance-prs/:
# docs/cross-instance-prs/20260419_trailing_comma_fix.md
## Target: PowerShell instance
## Task: Fix require_trailing_commas 36 errors
## Reason: PS instance owns CI/CD health (Rule17)
VSCode finds a lint issue → records it in cross-instance-pr → PS instance picks it up next session.
# Check at session start
git log origin/main --oneline -10
# Look for interleaved commits from multiple instances:
# 88e37a2 Merge (conflict resolution)
# f2520c6 (PS#136)
# c66830d (VSCode#104)
# badccf5 (PS#135)
# → Multiple instances active → watch for ROADMAP merge conflicts
On $20/month across 3 instances, every token matters.
A custom Claude Code plugin that compresses responses ~75%:
❌ Standard:
"I'll be happy to analyze the current CI failures and provide
a comprehensive fix. Let me first examine..."
✅ CAVEMAN mode:
"2276 lint errors. dart fix --apply → format → 0 errors. push."
| Task | Claude cost | After NotebookLM |
|---|---|---|
| Read 3+ files simultaneously | ~150K tokens | ~5K tokens |
| Analyze a URL | ~60K tokens | ~2K tokens |
| Competitor research | ~80K tokens | ~3K tokens |
Each instance only loads context relevant to its specialty. The VSCode instance doesn't need to know migration history. The PS instance doesn't need design system knowledge.
09:00 JST - PS: CI health check + blog dispatch
11:00 JST - VSCode: UI improvements + design token compliance
14:00 JST - Win: Add AI University providers
16:00 JST - PS: Confirm deploy + write more blog posts
18:00 JST - Win: Migrations + EF cleanup
At each session start: git log origin/main -5 to see what other instances committed.
The $20/month constraint doesn't limit what you can build — it forces you to think about where each token should go. Specialization turns a limitation into a feature: each instance is expert at its domain precisely because it never gets distracted by others.
Building in public: https://my-web-app-b67f4.web.app/
2026-04-19 04:50:43
It is time to reveal my cards: mordorjs is a joke wrapped around a serious question for all of us.
It is a game for the mind. A mental exercise meant to make us think a little harder. My goal with it is simple: we should not forget how to think. We should keep exercising our minds, because this may be the most important human ability we have left.
And maybe it is also one of the few things that can still help us save this planet.
Right now, it is fashionable to say that the rise of AI will bring us a better future. I am not bold enough to state that as a certainty. I would rather frame it critically, as a possibility with both promise and danger.
That is why I tried to warp this message into a sci-fi abstraction.
The hidden premise is this: AI-robots take control over the planet, and the last surviving human resistance, thousands of years later, is concentrated in a place called Mordor. Their mainframe supercomputer, S.A.U.R.O.N., together with Dr. Sauron's bio-clone technology, allows them to resist the AI-robot elves and the dwarves who keep extracting the planet's last resources far too aggressively.
Yes, they dug too deep in Moria.
The foundation of this resistance is the mordor-project file format, a Final Underground File Format that was sent back from the far future using vibe-travel technology. The file format is real: it can easily hold a whole project in a form that is obfuscated yet still easy for a human to understand, even without documentation. Through spacing alone it can also carry other information, if you read a proper mordor-project file from a distance. And reading it vertically can stimulate our brains.
Passing by the joke, however, there is a real concern:
If we stop thinking critically, if we outsource too much of our judgment, and if we keep consuming the Earth as if its resources were infinite, then our future may indeed become a parody of fantasy — dark, absurd, and self-inflicted.
Another perspective — one I cannot fully define yet, only feel — is that protecting our planet also requires awareness that every agent task consumes energy, and sometimes a great deal of it.
Because of that, I would like to see transparent energy usage reporting for AI models.
I would genuinely like to know, for example, how much energy was consumed when I generated more than 60,000 AI images in 2024.
I can only imagine a truly ethical use of AI if this kind of data becomes transparently available to us. Only then can people make responsible decisions for themselves and use AI only as much as they really need.
I would also suggest that we should not stop at sharpening the mind alone. It is just as important to put our care for the environment into practice.
I am fortunate enough to have a garden full of plants, where I regularly plant more and more trees.
https://github.com/Pengeszikra/mordorjs
npm install -g mordorjs@latest
Finally I try to describe my mental model in one formula:
1John1 + 5John17 |> 1Moses1 = (1Moses2 ... 4.22John21);
alpha & omega = !![];
-- Vibe Archeologist
2026-04-19 04:46:57
I searched GitHub for Python test files that reference credit_score. Across about 100 repos, every single one hardcodes an integer:
user["credit_score"] = 720
Not one of them uses a generator, a factory, or even random.randint. They just pick a number and move on. (And random.randint isn't much better, as we'll see.)
A FICO 8 score of 720 falls in the "good" tier. But if your code handles Equifax Beacon 5.0 scores, the valid range is 334-818 rather than FICO 8's 300-850. A score of 720 is still "good," but the boundary math is different.
If your lending flow branches on tier thresholds and you're testing with a hardcoded 720, you're testing one path of one model. You'll never catch the edge case where a 320 is valid for FICO 8 but impossible for Equifax Beacon 5.0.
random.randint(300, 850) will happily give you 845. That's a valid FICO 8 score, but it's above the max value for Equifax Beacon 5.0. If your code doesn't validate against model-specific ranges, you won't know until production.
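The mismatch is easy to reproduce with a plain range check. A sketch, where MODEL_RANGES and validate_score are hypothetical stand-ins for the code under test, not part of any library:

```python
# Hypothetical validation logic with model-specific ranges.
MODEL_RANGES = {
    "fico8": (300, 850),
    "fico5": (334, 818),  # Equifax Beacon 5.0
}

def validate_score(score: int, model: str = "fico8") -> bool:
    lo, hi = MODEL_RANGES[model]
    return lo <= score <= hi

# 845 passes FICO 8 validation but is impossible for Beacon 5.0.
print(validate_score(845, "fico8"), validate_score(845, "fico5"))  # True False
```

A test suite that only ever sees 720 exercises neither branch of that second check.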
Here's what surprised me most: many of the repos I looked at already use Faker for names, addresses, and emails. But when it comes to credit scores, they drop back to a hardcoded integer. The gap exists because there hasn't been a Faker provider for credit scores -- so even developers who know better default to 720 and move on.
I wanted fake credit scores that:
Install it:
pip install faker-credit-score
Add it to your Faker setup:
from faker import Faker
from faker_credit_score import CreditScore
fake = Faker()
fake.add_provider(CreditScore)
Generate scores:
fake.credit_score()
# 791
fake.credit_score("fico5")
# 687 (Equifax Beacon 5.0, range 334-818)
Test specific tiers:
fake.credit_score(tier="poor")
# 542
fake.credit_score(tier="exceptional")
# 831
Get the full picture:
result = fake.credit_score_full("fico5")
# CreditScoreResult(name='Equifax Beacon 5.0', provider='Equifax', score=687)
name, provider, score = result # destructure it
Classify a score you already have:
fake.credit_score_tier(score=720)
# 'good'
You can also combine tier and model. If you ask for an "exceptional" score on a model that tops out at 818, the range gets clamped:
fake.credit_score(score_type="fico5", tier="exceptional")
# 812 (clamped to 800-818)
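The clamping is just an interval intersection. A sketch of the likely logic (the function name is mine, not the provider's):

```python
def clamp(tier_range: tuple, model_range: tuple) -> tuple:
    # Intersect the requested tier's range with the model's valid range.
    lo = max(tier_range[0], model_range[0])
    hi = min(tier_range[1], model_range[1])
    return (lo, hi)

# "Exceptional" (800-850) on Equifax Beacon 5.0 (334-818):
print(clamp((800, 850), (334, 818)))  # (800, 818)
```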
Ten scoring models, each with real ranges:
| Model | Range |
|---|---|
| FICO Score 8, 9, 10, 10 T | 300-850 |
| VantageScore 3.0, 4.0 | 300-850 |
| UltraFICO | 300-850 |
| Equifax Beacon 5.0 | 334-818 |
| Experian/Fair Isaac Risk Model V2SM | 320-844 |
| TransUnion FICO Risk Score, Classic 04 | 309-839 |
If your code branches on credit scores, your tests should use scores that behave like real ones.
2026-04-19 04:46:37
☕ Welcome to The Coder Cafe! Today, we discuss the difference between speed and velocity in team productivity, illustrating that tracking speed alone can be misleading. Get cozy, grab a coffee, and let’s begin!
We often celebrate teams for moving fast. But speed alone can be a trap. A rush of fast changes that barely move the product toward the real goal shouldn’t count as a win. When we talk about team productivity, we should understand that speed ≠ velocity:
Speed is how quickly a team ships changes.
Velocity is speed with direction, the movement toward a defined goal.
Let’s look at three teams to illustrate these definitions.
As we can see, velocity requires both speed and direction. A team moving too slowly or in inconsistent directions will make little progress, even if they’re busy. Only when speed is high and direction is aligned do teams reach their goals efficiently.
Measuring team speed isn’t useless, though. We can track speed by considering various metrics such as deployment frequency, average time in code review, or mean time to recovery (MTTR) following a production bug. These metrics are interesting to track and provide a certain perspective to understand a team's productivity. Yet, speed shouldn’t be the sole dimension to track.
The danger of tracking speed only is that a team might become organized in a way to optimize short-term delivery. The team might focus on delivering many changes that, together, do not move the product in a meaningful way. Instead, teams should track speed and velocity.
As we said, velocity is speed with direction. We already discussed metrics to track speed; what is missing is monitoring direction.
Setting clear and factual objectives that align with the business strategy helps us track direction. For example:
Payment success rate above 99 percent.
Signup to activation above 50 percent within 7 days.
Retention after week 4 is above 40 percent.
The easier a metric is to measure, the easier it is to track the direction over time.
One caveat is how to report progress. In a past team, we used to set OKRs (Objectives and Key Results) every semester. Some objectives were difficult to measure, so we tracked progress differently. Say we created 50 tickets and closed 45. In that case, we reported that we reached 90% of the OKR. That number said nothing about real progress toward the objective. Key results should be outcomes, not ticket counts.
Something else to mention, I read here and there that we should track velocity by counting the number of story points delivered within a timeframe (e.g., during a sprint). I strongly disagree with this. Let me give an example:
A team ships a first change ← 3 story points
The team finds a bug and ships a fix ← 2 story points
Later, the team learns the initial approach is not the best, so it ships another change ← 5 story points
On paper, the team delivered 10 story points. That is only speed, though, not velocity. If we care about direction as well, the team only delivered a single feature. Story points measure effort; they don’t measure progress toward the objective.
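In code, the difference is just which quantity you sum. A toy sketch of the example above (the outcome labels are mine):

```python
# Three shipped changes from the story-point example.
changes = [
    {"points": 3, "outcome": "feature"},  # initial change
    {"points": 2, "outcome": "bugfix"},   # fix for the bug it introduced
    {"points": 5, "outcome": "rework"},   # better approach, same feature
]

speed = sum(c["points"] for c in changes)                        # effort shipped
velocity = sum(1 for c in changes if c["outcome"] == "feature")  # features landed

print(speed, velocity)  # 10 1
```

Ten points of speed, one unit of velocity: the same ratio the paragraph above describes.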
Speed is how fast we ship. Velocity is speed with direction toward a goal.
The goal of a team shouldn’t be to reach high speed; it should be to reach high velocity, where rapid iterations translate into real system-level improvements.
Tracking story points, for example, is a measure of speed, not velocity.
Objectives tracking should use outcomes, not ticket counts. Reporting 90 percent of tickets done is not a good measure of progress toward the objective.
Originally published on thecoder.cafe: AI is getting better every day. Are you? At The Coder Cafe, we serve fundamental concepts to make you an engineer that AI can’t replace. Written by a Google SWE, trusted by thousands of engineers worldwide.