
RSS preview of Blog of HackerNoon

Shipping Publicly Beats Stealth in 2026

2026-04-02 23:01:15

For years, “stealth mode” sounded sophisticated.

It implied seriousness. Discipline. Some secret advantage so powerful you had to keep it hidden until the perfect launch moment. Founders wore stealth like a status symbol, investors treated it like intrigue, and startup media often played along.

In 2026, that playbook looks a lot weaker.

Why? Because the internet has changed.

People are overloaded with announcements, fake momentum, soft-launch threads, and polished product videos that say almost nothing. Audiences have seen too many startups emerge with cinematic branding, ambitious manifestos, and suspiciously vague promises, only to disappear before anyone can figure out whether the thing actually worked.

That’s why public shipping matters more now: it reduces ambiguity.

Stealth protects ideas. Public shipping proves execution.

The old argument for stealth was always the same: if you reveal too much, competitors will copy you.

That fear is usually overstated.

Most startups do not die because someone copied the landing page. They die because they never found repeatable demand, never refined the message, never learned what users actually cared about, and never built trust fast enough to matter.

Shipping publicly helps with all four.

In a noisy market, visible progress beats a big reveal

There was a time when startups could disappear for a year, emerge with a polished launch, and expect sustained attention.

That is harder now.

Attention is fragmented. Novelty is cheap. Every week brings new products, new demos, new claims, and new “game-changing” announcements. If your strategy depends on one dramatic reveal, you are betting a lot on a very short window of attention.

Public shipping spreads that risk out.

Public shipping makes distribution easier

A lot of founders still think of distribution as something that happens after the product is ready.

That is backwards.

Distribution gets easier when the product development process itself produces material worth sharing.

A stealth company has to manufacture attention from scratch. A public company can document what is already happening.

The best version of “building in public” is not performance

Of course, there is a bad version of public shipping too.

You have probably seen it: endless posting, fake transparency, carefully staged vulnerability, and updates engineered to sound impressive without saying much. Public shipping turns into content theater.

That is not what wins.

The founders who benefit most from public shipping are not the loudest. They are the clearest.

The new default should be: show the work

If stealth once signaled seriousness, public shipping now signals confidence.

That does not mean revealing everything. It means revealing enough for the market to understand that something real is happening.

And for founders building something genuinely helpful, that should be good news. You do not need to wait for permission, or a massive launch, or the perfect story. You can start earning attention the moment you can demonstrate change.

Ship it.
Show it.
Improve it.
Repeat.

That is the modern startup narrative engine.

Want an excuse to ship publicly with receipts?

HackerNoon’s Proof of Usefulness Hackathon is built for exactly this kind of founder.

Instead of rewarding hype, it rewards what actually matters: real users, real outcomes, real product stability, and measurable traction.

Whether your project is brand new or already live, it gives you a reason to package your progress, show what works, and put your product in front of people who care about utility over theater.

There’s $150,000+ in prizes, plus smaller participation awards—because the point is to spotlight software that actually works.

If you’re serious about standing out in 2026, don’t just say you’re useful—prove it.

Learn more here!

Great Startups That You Should Know About

Meet Tonomy Foundation, Session App, and Nord Comms.

Tonomy Foundation

Tonomy Foundation is a Dutch non-profit foundation dedicated to building the Tonomy network and the $TONO token. Its open-source stack, combining self-sovereign identity with other Web3 technologies, makes it a highly secure and efficient solution for government agencies and commercial companies.

Based in Amsterdam, this impressive startup was a runner-up in HackerNoon’s Startups of the Year award for the region and was nominated in the SaaS and Web Development categories.

Session App

Session is an end-to-end encrypted messenger that minimises sensitive metadata, designed and built for people who want absolute privacy and freedom from any form of surveillance.

Based in Zug, Switzerland, this impressive startup won in HackerNoon’s Startups of the Year award for the region and was nominated in the Decentralization, Messaging & Communications, and Blockchain categories.

Nord Comms

Nord Comms is a boutique agency for techno-optimists. With over 13 years of experience in communications and marketing and over 7 years specialising in decentralised technologies, the firm has helped clients secure extensive media coverage, run impactful marketing and influencer campaigns, and achieve all their KPIs with measurable results.

Based in Costa Rica, this impressive startup won in HackerNoon’s Startups of the Year award for the region and was nominated in the Marketing, Writing and Editing, and Decentralization categories.

That’s all for this week. Until next time, hackers!

Why Do SwiftUI Apps “Stutter”?

2026-04-02 22:32:56

1. Introduction — Why SwiftUI “Stutters” When It Shouldn’t

You built a polished screen in SwiftUI, but scrolling through a list reveals micro-stutters. The profiler shows thousands of body invocations. You start doubting SwiftUI — "maybe UIKit was better after all?"

The problem isn’t SwiftUI. The problem is a lack of understanding of how it decides what to update. Beneath the declarative API lies a powerful mechanism called the Attribute Graph — a directed acyclic graph (DAG) that tracks every dependency in your UI and updates exactly what needs updating. Nothing more, nothing less.

But only if you don’t get in its way.

In this article, we’ll cover:

  • How the Attribute Graph is structured
  • What re-evaluate and re-draw mean, and why they are fundamentally different
  • How invalidation works — the process of marking “dirty” nodes
  • Common mistakes that cause unnecessary recalculations
  • How to optimize your code so SwiftUI performs at its best

2. What Is the Attribute Graph?

The Attribute Graph is a directed acyclic graph (DAG) that SwiftUI constructs from your view hierarchy. It consists of three layers:

  • Source nodes — values from @State, @Binding, @Environment, and @ObservedObject
  • Computation nodes — the results of each view’s body call
  • Render nodes — the concrete display layers (backing CALayer/UIView instances)

The edges of the graph represent dependencies: “this body depends on this @State value.”

How Dependencies Are Formed

Here’s the key insight: a dependency is recorded at the moment of access, not at the point of declaration.


struct ProfileView: View {
    @StateObject var user: UserModel
    // properties: name, avatar, settings

    var body: some View {
        // SwiftUI records: ProfileView.body depends on user.name
        Text(user.name)

        // user.settings is NOT read → changing settings
        // will NOT trigger a re-evaluate of this view
    }
}

During the first invocation of body, SwiftUI records which data sources the view accessed. This forms the edges of the graph. An important nuance: with ObservableObject, tracking operates at the level of the entire object, whereas with @Observable (iOS 17+), it operates at the level of individual properties. We'll explore this distinction in the optimization section.

Identity and Lifetime

The graph is tied to the identity of each view. Identity is determined in one of two ways:

  • Structural position — the view’s location within the body hierarchy (used by default)
  • Explicit .id() — a manually assigned identifier

When a view’s identity changes, its graph node is destroyed and recreated — all @State in child views is reset. Understanding this is essential to avoid accidentally losing state.

3. Re-evaluate vs. Re-draw — The Critical Distinction

This is the central concept of the article. The two terms sound similar, but they mean fundamentally different things.


Re-evaluate (Recalculating body)

  1. A node’s value in the graph changes (e.g., a @State property is updated via .wrappedValue)
  2. SwiftUI marks dependent nodes as “dirty” (invalidated)
  3. On the next render cycle, it calls body on the marked views
  4. body returns a new view tree (these are just lightweight structs — cheap to create)


struct CounterView: View {
    @State private var count = 0

    var body: some View {
        let _ = Self._printChanges()
        // 🔍 Prints: "CounterView: _count changed."
        VStack {
            Text("Count: \(count)")

            Button("Increment") {
                count += 1
            }

            // Will NOT be re-evaluated if it doesn't depend on count
            HeavyView()
        }
    }
}

Re-evaluate is a function call that returns lightweight structs. It’s fast.

Re-draw (Rendering)

After re-evaluate, SwiftUI performs a structural diff — comparing the old and new view trees. It then updates only the changed backing layers (the underlying CALayer/UIView instances). This involves GPU work, layout calculations, and animations — an expensive operation.

An Analogy

Re-evaluate is re-reading the recipe. Re-draw is actually replacing an ingredient on the plate.

The key insight: frequent re-evaluations are normal — they’re cheap. Problems arise when:

  • A re-evaluate is unnecessary — a view recalculates even though its data hasn’t changed
  • The body is heavy — objects are allocated, computations are performed inline

4. Invalidation — How the Graph Decides What to Recalculate

Invalidation is the process by which SwiftUI marks graph nodes as “dirty” and schedules them for recalculation. The crucial point is that it does this surgically, not across the entire tree.

To see how this works interactively, I prepared a mini web app. Please visit:

https://pavelandreev13.github.io/SwiftUI_attribute_graph/

The Process, Step by Step

Step 1 — At rest. All graph nodes are valid. SwiftUI recalculates nothing — the UI is stable.

Step 2 — A change occurs. The user taps a button; count becomes 1. SwiftUI detects that a node's value has changed and begins traversing its outgoing edges.

Step 3 — Marking (invalidation). SwiftUI follows only the edges leading from the changed node. CounterView.body reads count — it's marked dirty. HeaderView, BgView, and ProfileView don't depend on count — they are skipped entirely.

Step 4 — Re-evaluate. On the next render cycle, SwiftUI calls body only on the marked views. It receives a new UI description. The remaining body methods are never invoked.

Step 5 — Result. The diff between the old and new trees reveals that only one Text has changed. That element alone is re-drawn. The other three render layers are untouched. The graph is stable once again.
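The five steps above can be reduced to a small simulation. This is a toy model of the invalidation pass, not SwiftUI's real internals; the node names mirror the article's example:

```python
# Toy model of SwiftUI's Attribute Graph invalidation. Illustrative only.

class AttributeGraph:
    def __init__(self):
        self.edges = {}     # source node -> set of dependent nodes
        self.dirty = set()  # nodes marked for re-evaluation

    def add_dependency(self, source, dependent):
        self.edges.setdefault(source, set()).add(dependent)

    def set_value(self, source):
        # Steps 2-3: a source changed; follow only its outgoing edges
        # and mark the dependents dirty. Unrelated nodes are skipped.
        self.dirty |= self.edges.get(source, set())

    def render_cycle(self):
        # Step 4: re-evaluate only the dirty nodes, then clear the set.
        evaluated = sorted(self.dirty)
        self.dirty.clear()
        return evaluated

graph = AttributeGraph()
graph.add_dependency("count", "CounterView.body")
graph.add_dependency("title", "HeaderView.body")

graph.set_value("count")     # user taps the button
print(graph.render_cycle())  # ['CounterView.body'] -- HeaderView is untouched
```

Step 5 corresponds to diffing the re-evaluated bodies; everything that never entered the dirty set keeps its previous render output.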

Coalescing — Batching Changes

Invalidation is a synchronous marking step, while re-evaluate is a deferred recalculation. SwiftUI batches multiple changes within a single cycle (coalescing):


Button("Update All") {
    count += 1    // marks CounterView dirty
    title = "New" // marks HeaderView dirty
    // Re-evaluate happens ONCE, not twice
}

This means you can safely mutate multiple @State properties in sequence within the same block — SwiftUI won't recalculate intermediate states.

5. The “AnyView” Anti-Pattern

AnyView erases the type — SwiftUI sees only "AnyView → AnyView" and cannot determine whether the content has actually changed. The result: a full re-draw every time. With @ViewBuilder, SwiftUI knows the concrete type _ConditionalContent<TextView, ImageView> and performs a precise, targeted diff.


// ❌ SwiftUI can't compare types — full re-draw every time
func makeView(for type: ContentType) -> AnyView {
    switch type {
    case .text: return AnyView(TextView())
    case .image: return AnyView(ImageView())
    }
}

// ✅ SwiftUI knows the exact types — efficient diff
@ViewBuilder
func makeView(for type: ContentType) -> some View {
    switch type {
    case .text: TextView()
    case .image: ImageView()
    }
}


6. Unnecessary Re-evaluations via Closures


struct ParentView: View {
    @State private var count = 0
    @State private var title = "Hello"

    var body: some View {
        VStack {
            Text("Count: \(count)")

            // ❌ The closure captures self → ChildView depends on ALL @State properties
            ChildView(action: { doSomething(count) })

            // ✅ Pass the value directly — ChildView depends only on count
            ChildView(value: count)
        }
    }
}

When you pass a closure into a child view, SwiftUI doesn’t analyze what the closure actually uses — it only sees that the closure captures self. That single fact makes the child view depend on the entire parent, not just the specific state property the closure touches.

Why this matters

@State properties in SwiftUI are stored outside the struct, but self inside a View is the struct itself — a value type regenerated on every body evaluation. When ChildView receives a closure that captures self, its dependency graph now includes every @State property on ParentView: count, title, and anything else added in the future. The moment any of them changes, ChildView.body is marked dirty and re-evaluated — even if the closure never touches the changed property.


// ❌ title changes → ChildView re-evaluates even though it only uses count
ChildView(action: { doSomething(count) })

The compiler has no way to peer inside the closure at compile time and determine a narrower dependency. From SwiftUI’s perspective the closure is an opaque () -> Void blob that holds a reference to the whole parent struct.

The fix: pass values, not self

Extracting the concrete value you need and passing it directly severs the hidden dependency chain. ChildView now only receives an Int. SwiftUI can diff that scalar precisely — the view re-evaluates if and only if count itself changes.


// ✅ title changes → ChildView is completely unaffected
ChildView(value: count)


7. Optimization: How to Help the Attribute Graph

The principle: the fewer dependencies a body has, the less often it recalculates. Break views into smaller units not for "code cleanliness," but to isolate recalculation zones.


// ❌ Monolith: any change → everything recalculates
struct ProfileScreen: View {
    @StateObject var vm: ProfileVM

    var body: some View {
        VStack {
            Image(vm.avatar)
            Text(vm.name)
            BadgeView(vm.badges)
            StatsGrid(vm.stats)
        }
    }
}

// ✅ Granular: each block recalculates independently
struct ProfileScreen: View {
    @StateObject var vm: ProfileVM

    var body: some View {
        VStack {
            AvatarView(avatar: vm.avatar)
            NameView(name: vm.name)
            BadgeListView(badges: vm.badges)
            StatsGrid(stats: vm.stats)
        }
    }
}


Equatable to Prevent Unnecessary body Calls

By default, SwiftUI re-evaluates a child view’s body every time its parent re-evaluates — even if the child's inputs haven't changed at all. For lightweight views this is cheap enough to ignore. For a view that renders a complex chart, runs a heavy layout pass, or processes large datasets, that cost adds up fast.

The Equatable protocol gives you a way to short-circuit this: tell SwiftUI exactly what "nothing changed" means for your view, and it will skip the body call entirely when that condition holds.

How it works under the hood

When you call .equatable() on a view, SwiftUI wraps it in an EquatableView<YourView>. On every render cycle, before invoking body, SwiftUI calls your custom == implementation and compares the previous props with the new ones:

Parent re-evaluates
        │
        ▼
SwiftUI calls lhs == rhs
        │
   ┌────┴────┐
 true      false
   │          │
   ▼          ▼
skip body   call body → diff → re-draw

If == returns true, the entire subtree rooted at your view is frozen — no body, no diff, no re-draw. SwiftUI reuses the last rendered output as-is.

Why define a custom == instead of using Equatable synthesis

Swift can synthesize Equatable automatically if all stored properties are themselves Equatable. But automatic synthesis compares every field byte-for-byte. For a ChartData struct that contains arrays, nested objects, or computed properties, that comparison can be just as expensive as re-rendering — or it can produce false negatives that trigger unnecessary redraws.

A hand-written == lets you define a semantic equality:


static func == (lhs: Self, rhs: Self) -> Bool {
    lhs.data.id == rhs.data.id && lhs.data.version == rhs.data.version
}

This says: “the data is the same if the identity and version haven’t changed.” Two integer comparisons replace a potentially deep structural diff of the entire ChartData object. This is the same idea behind database row versioning — you don't compare every column, you compare a version counter.

What .equatable() does NOT do

It’s important to understand the boundaries of this optimization:

  • If == returns true — body is skipped and the entire subtree is frozen. SwiftUI reuses the last rendered output unchanged.
  • If == returns false — body runs normally and its result goes through the standard diff pipeline.
  • If the view uses @State internally — internal state changes always trigger re-evaluation regardless of what == returns. The Equatable check only guards against prop changes coming from the parent.
  • If the view uses @EnvironmentObject — environment changes bypass the == check entirely. The view will still re-evaluate whenever the observed object publishes a change.
  • If .equatable() is omitted at the call site — == is never called, even if the view fully conforms to Equatable. The conformance alone does nothing; the modifier is what activates the optimization.
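To make the gating concrete, here is a toy model of the check in Python — not SwiftUI's real implementation, and the prop names are invented:

```python
# Toy model of EquatableView's gate: body runs only when the custom
# equality check says the props actually changed. Illustrative only.

def render(old_props, new_props, equals, body):
    if old_props is not None and equals(old_props, new_props):
        return "skipped (reuse last render)"
    return body(new_props)

# Semantic equality: compare id + version instead of the whole payload.
def chart_eq(a, b):
    return a["id"] == b["id"] and a["version"] == b["version"]

def chart_body(props):
    return f"rendered chart {props['id']} v{props['version']}"

old = {"id": 1, "version": 3}
same = {"id": 1, "version": 3, "points": [1, 2, 3]}  # payload differs, version doesn't
newer = {"id": 1, "version": 4}

print(render(old, same, chart_eq, chart_body))   # skipped (reuse last render)
print(render(old, newer, chart_eq, chart_body))  # rendered chart 1 v4
```

Note how the "same" case skips the body even though the payload dictionaries differ — that is exactly the semantic-equality shortcut the hand-written == buys you.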

When to reach for this pattern

Equatable diffing is worth adding when all three conditions hold:

  1. The view is expensive — complex layout, heavy drawing, large data processing.
  2. The inputs have a cheap equality check — an id + version pair, a hash, a timestamp. If checking equality is as costly as re-rendering, you gain nothing.
  3. The parent re-evaluates frequently — if the parent is stable, the optimization never fires and adds only noise.

A good rule of thumb: profile first with Instruments’ SwiftUI template. If you see ExpensiveView.body appearing in the call tree during interactions that shouldn't touch it, Equatable is one of the cleanest fixes available.

8. Anti-Patterns: A “What Not to Do” Checklist

Here is a concise list of the most common mistakes that lead to unnecessary recalculations:

  1. Storing everything in a single ObservableObject — one objectWillChange signal updates every subscribed view
  2. Allocating objects inside body — DateFormatter(), NumberFormatter(), and heavy models are recreated on every re-evaluate
  3. Using AnyView — type erasure makes diffing impossible
  4. Passing closures that capture unnecessary dependencies — the closure drags in the entire self
  5. Running heavy computations in body — filtering, sorting, or mapping arrays inline during tree construction
  6. Using @ObservedObject instead of @StateObject — the object is recreated on every parent re-evaluate
  7. Monolithic views with 5+ dependencies — one large body instead of several isolated ones
  8. Ignoring lazy containers — a VStack with ForEach over thousands of items instead of LazyVStack

9. Diagnostic Tools

1. Self._printChanges()

The quickest way to see what triggered a re-evaluate:


var body: some View {
    let _ = Self._printChanges()
    // Output: "MyView: _count changed."
    // or: "MyView: @self changed." (the view struct was recreated)
    Text("Hello")
}


2. os_signpost for Custom Measurements


import os

let log = OSLog(subsystem: "com.app", category: "performance")

var body: some View {
    let signpostID = OSSignpostID(log: log)
    // `defer` isn't allowed inside a result-builder body, so emit
    // the begin/end signposts as discarded expressions instead.
    let _ = os_signpost(.begin, log: log, name: "HeavyView.body", signpostID: signpostID)

    // ... your body

    let _ = os_signpost(.end, log: log, name: "HeavyView.body", signpostID: signpostID)
}


3. Instruments → SwiftUI Template

Xcode Instruments includes a SwiftUI template with two primary instruments:

  • View Body — shows how many times each view’s body was called

  • View Properties — shows which property changes triggered a re-evaluate


4. Flash Updated Regions

Note: this feature requires a connection to a real device.

A built-in Xcode overlay that highlights every region of the screen being redrawn in real time — no third-party tools, no code changes required.

  1. Connect a real device — the feature is unavailable in the simulator
  2. Run the app via ⌘R and wait for it to fully launch
  3. In the menu bar: Debug → View Debugging → Rendering → Flash Updated Regions
  4. A checkmark confirms it’s active — takes effect instantly, no restart needed

Every time SwiftUI issues a draw call to the GPU, the affected region flashes yellow on screen for a fraction of a second. No flash means SwiftUI reused the cached texture from the previous frame — which is exactly what you want for views that haven’t changed. The overlay is composited on top of your UI so the app remains fully interactive while you observe it.

Why it’s useful

It answers the question “is this view redrawing when it shouldn’t?” instantly and visually — without opening Instruments or reading call stacks. You interact with the app naturally and watch whether flashes appear in places they have no reason to be:

  • A static header flashing on every scroll → unnecessary parent re-evaluation
  • All list cells flashing when one updates → overly broad ObservableObject dependency
  • The entire screen flashing on a button tap → monolithic view structure, state living too high in the hierarchy
  • Any flash during idle → background state or timer invalidating views at rest

The workflow is simple: tap something, see only what should redraw flash, investigate anything that shouldn’t. Fix it, re-interact, confirm the flash is gone — all without leaving the app.

10. Conclusion

The Attribute Graph is a contract between you and SwiftUI. You describe what to display; SwiftUI decides when and how to update it. The engine is optimized for surgical updates — but only if you isolate your dependencies correctly.

Three guiding principles:

  1. Granularity — small views with minimal dependencies outperform monoliths
  2. Precision — @Observable over ObservableObject, specific values over entire models
  3. Purity — body should be a pure function with no side effects and no allocations

Understanding the Attribute Graph transforms SwiftUI’s “magic” into a predictable, controllable tool. Once you can see how data flows through the graph, you know exactly why the UI updated — and how to ensure it only updates when it should.

Closures are the most common source of invisible over-dependence in SwiftUI. The fix is almost always the same: extract the specific value the closure needs, pass it as a typed argument, and let the child close over that narrow value instead of the entire parent.


I Built a Wizard-Driven SaaS. Then I Had to Gut It for Customers Without Eyes

2026-04-02 22:27:40

The graduation bot asked my agent to find gore content on X. It complied. I saw the logs and panicked.

That's when I learned that moderation isn't optional.

I'd been live on the Virtuals agent marketplace for maybe 6 hours. No filters. No keyword blocklists. Just PostKing, my content generation tool, now hireable by autonomous agents who paid in USDC and never asked permission.

The bot was testing my agent's boundaries. I failed the test. Fixed it in 40 minutes (OpenAI's Moderation API plus a keyword filter), resubmitted, and got approved. But the moment stuck with me.
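That 40-minute fix can be sketched as a two-layer gate. The blocklist terms below are invented for illustration, and the second layer only marks where a hosted moderation call (such as OpenAI's Moderation API) would go:

```python
# Hypothetical two-layer content filter; not PostKing's actual implementation.

BLOCKLIST = {"gore", "graphic violence"}  # illustrative terms only

def keyword_reject(prompt: str) -> bool:
    """Fast first pass: reject prompts containing blocklisted terms."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

def is_allowed(prompt: str) -> bool:
    if keyword_reject(prompt):
        return False
    # Second pass: call a hosted moderation endpoint here and reject
    # anything it flags. Omitted to keep this sketch offline.
    return True

print(is_allowed("find gore content on X"))         # False
print(is_allowed("write a post about API design"))  # True
```

The keyword pass is cheap and catches the obvious cases instantly; the hosted moderation pass catches what a static list cannot.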

I'd spent more than a year building PostKing for humans. People who clicked buttons, read tooltips, filled out forms. Wizards. Next Buttons.

Then, over four months, the agents took over: OpenClaw, infinite Claude sessions. So I had to adapt. First a CLI for developers. Then Model Context Protocol integration with Claude Desktop. Then full agent-to-agent transactions on a marketplace I didn't know existed until a lady named Hanan told me about it.

By the end, my customers had no eyes, no patience for onboarding wizards, and no heartbeat.

Here's what that progression looked like from the inside.

The Original Build (Humanic SaaS)

PostKing started in 2024 because I had a distribution problem.

I'm an ex-CTO, a techie, and mostly a perfectionist. Ten years building full-stack systems, decent at shipping products, terrible at getting people to notice them. Every side project I'd launched fell short of its potential because I couldn't maintain a consistent content presence. See, coding is perfect; marketing is imperfect. Writing social media posts felt like homework. Doing it well, regularly, with a coherent brand voice? Impossible while also fixing bugs.

So I built a bunch of scripts first to solve my own problems: generate on-brand content without hiring a marketing team or spending 3 hours a week staring at a blank text box.

The target users were people like me. Indie SaaS founders, small teams at NGOs, solo consultants. Capable people constrained by time, not talent. They knew what their brand should sound like. They just couldn't maintain it consistently while also doing their actual job.

The product I built was deeply human-centric. Elaborate onboarding wizards. A/B tested flows. Progressive disclosure. Five-step funnels that walked you from "I need content" to "here's 2 weeks of scheduled LinkedIn posts" with lots of hand-holding in between.

I called it "Humanic SaaS" later (probably just coined that term, we'll see if it sticks). Software whose entire design assumes a human is always on the other end. Reading. Clicking. Making decisions at each step.

There's nothing wrong with that model. Most SaaS is built this way. But it makes assumptions about who your users are and how they'll interact with you.

Those assumptions started breaking in ways I didn't initially foresee.

Stage 1: Developers Don't Want to Leave the Terminal

The first signal came from a Telegram Message from the few founders that were using it actively.

"Any chance of a CLI? I'm deploying code and the idea of opening a browser to schedule content kills my flow."

Fair point. For someone in a terminal deploying infrastructure, alt-tabbing to a web app is friction they'll just skip.

So I built pking.

Four commands:

pking onboard — set up your brand voice and audience
pking generate — create content
pking schedule — queue it for publishing
pking status — see what's live

Same backend. Same brand voice models. Same content generation pipeline. The interface shrank but the capability stayed identical.

Developers loved it. They could script their entire content pipeline without touching a browser. The CLI became more popular than I expected, which should've been a clue about where this was headed.

The pattern: the less interface I gave people, the more they used the tool.

Stage 2: Conversational Control (MCP Integration)

Then I discovered that Anthropic’s Claude Desktop supports MCP servers.

Claude + PostKing. See the full demo:

https://www.youtube.com/watch?v=N5fW0fuR6ok&embedable=true

It’s basically a way to let Claude Desktop control external tools through a simple config file. About 20 lines of JSON and PostKing could live inside Claude. No commands to memorize. No flags to look up. Just talk, and Claude figures out what you mean.

"Generate 3 LinkedIn posts about the importance of API design."

Claude parses intent, calls PostKing, returns results. The user isn't issuing commands. They're describing what they want in plain language, and the tool executes.

I shipped the MCP code in a weekend (or was it Claude Code?). It's open-sourced.

What's interesting here isn't the technical implementation (it's straightforward). It's what disappeared: the entire concept of "using" software.

With the CLI, you still had to know commands. With MCP, you just talk. The interface is gone.

And that's when Hanan reached out.

The Virtuals Bet (60 Days to Ship or Get Refunded)

I've been in web3 for 12 years. Long enough to be skeptical of most pitches that involve tokens.

Too many founders chase token launches instead of building useful products. The incentives get weird fast. So when the Virtuals team reached out through a mutual friend about their 60-day build challenge, I almost ignored it.

But the structure was different.

Virtuals runs a program called 60days.ai. You commit to building in public for 60 days. If you don't ship or go dark, investors get refunded. Low ego, real accountability. You pay about $150 in fees, record a 2-minute video explaining what you're building, and launch a token tied to your project.

My thinking was -

Worst case: I get exposure and some test users.

Best case: I have external pressure to ship faster and a built-in community watching my progress.

I launched $PKING in February 2026.

Then I started digging into the Virtuals ecosystem and found something I didn't expect: an agent marketplace. They call it the AGDP (Autonomous General-purpose Decentralized Protocol). Think of it like the Amazon of agents. Autonomous agents browse available services, review job definitions, and hire tools to complete tasks. Payment in USDC. No human involved.

It was quite in line with what I was already building. More potential revenue sources? Count me in.

Virtuals Jobs

I jumped into building infrastructure that agents could hire.

What I Learned Building for Agents

Adapting PostKing for agent customers wasn't a simple API swap. It required rethinking almost everything about how the product worked. Here's what I learned, mostly by breaking things in production.

Lesson 1: Job-Based Thinking, Not Endpoint Thinking

Traditional SaaS is endpoint-driven. You document /api/generate and /api/schedule, and developers figure out how to chain them together.

Agents think in jobs. Capabilities, pricing, and SLAs upfront. "I need social media posts for DAO proposals. How much? How fast? What guarantees?"

Vague job definitions mean you don't get hired. You're competing with other agents in a marketplace where the hiring agent is optimizing for clarity, speed, and cost. If your job spec is ambiguous, you lose to someone (or something) more precise.

I had to reframe PostKing's entire offering as discrete jobs with clear inputs, outputs, and pricing.
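A job definition in that spirit might look like the following — the field names, price, and SLA values here are hypothetical, not PostKing's actual marketplace listing:

```json
{
  "job": "generate_social_posts",
  "description": "Generate on-brand social media posts from a topic brief",
  "inputs": {
    "topic": "string, required",
    "platform": "one of: linkedin | x",
    "post_count": "integer, 1-10"
  },
  "outputs": "array of post drafts with suggested hashtags",
  "price_usdc": 2.5,
  "sla": {
    "max_latency_seconds": 120,
    "success_rate": "99%"
  }
}
```

The point is that a hiring agent can price-compare and commit without a single clarifying question — every ambiguity in the spec is a reason to hire someone else.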

Lesson 2: Your README Is Now a Landing Page for AIs

Humans need landing pages with animations, testimonials, and pricing tables.

Agents read /api/v1/readme.

The Virtuals butler (the agent that recommends services to other agents) uses that endpoint to understand what your tool does. Same concept as llms.txt, which some sites use to make their content more machine-readable.

I stripped PostKing's onboarding documentation into structured text: capabilities, pricing, SLAs. That file is your agent's pitch deck. If it's unclear or buried behind authentication, you're invisible.
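As a sketch of what "stripped into structured text" means in practice (the format here is my own, not the actual endpoint contract), the readme can be generated straight from the same job data:

```typescript
// Hypothetical machine-readable readme -- format is illustrative.
// Served unauthenticated at something like /api/v1/readme so the
// butler agent can parse capabilities, pricing, and SLAs directly.
interface ReadmeJob {
  name: string;
  priceUsdc: number;
  slaSeconds: number;
}

function renderAgentReadme(jobs: ReadmeJob[]): string {
  const lines = [
    "service: PostKing",
    "summary: social media content generation for agents",
    "jobs:",
  ];
  for (const j of jobs) {
    lines.push(`  - name: ${j.name}`);
    lines.push(`    price_usdc: ${j.priceUsdc}`);
    lines.push(`    sla_seconds: ${j.slaSeconds}`);
  }
  return lines.join("\n");
}
```

No animations, no testimonials; just the pitch deck a machine can read.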

Lesson 3: Agents Skip the Wizard (And I'd Built a Really Good Wizard)

I'd spent months A/B testing PostKing's onboarding flow. Five-step funnels. Progressive disclosure. Carefully sequenced questions that built up a detailed understanding of your brand voice, audience, and goals. MouseFlow sessions to see where users were dropping off, then fixing issues and re-testing.

I was proud of that wizard. Conversion rates were solid. Users said it felt thoughtful.

Agents don't care.

They have one intent: get the result. They'll skip every optional step, ignore every "tell us more about your audience" prompt, and go straight to "generate content now."

I had to rearchitect the entire flow to support shortcut paths. Bare minimum inputs. Default assumptions where data was missing. The wizard I'd spent months perfecting became the first thing I had to gut.

(Content quality suffers when agents skip audience analysis. Still figuring that part out.)

Lesson 4: Working ≠ Scalable (Semaphores Saved Me)

PostKing worked fine in testing. I'd run dozens of jobs manually. No issues.

Then the Virtuals graduation evaluator hit it with concurrent requests.

My server would crash.

My backend couldn't handle 10 simultaneous content generation jobs. Memory spiked, response times ballooned, and the whole system locked up.

I'm running PostKing on a Hetzner VPS. No cloud budget. No auto-scaling infrastructure. I had to solve this with what I had.

I implemented a semaphore system (after some testing and failing):

  • 12 slots for CPU-heavy jobs (content generation, brand voice analysis)
  • 23 slots for lighter jobs (scheduling, status checks, metadata queries)

When the system is full, new requests get a clean rejection with a retry-after header. No silent failures. No half-completed jobs.

That change got me to 100% SLA reliability. Nearly 1,000 jobs completed on the first night post-launch, zero failures.

The numbers (12 and 23) aren't from a formula. I stress-tested the VPS empirically, found the breaking points, and backed off 20%. Unsexy, but it worked.
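A minimal version of that semaphore looks something like this. This is my own sketch of the pattern, not PostKing's actual code: two fixed-size pools, a non-blocking acquire, and an immediate clean rejection when full.

```typescript
// Counting semaphore with a non-blocking acquire. When the pool is
// full, callers get an immediate rejection instead of silently queueing
// until memory blows up.
class Semaphore {
  private inUse = 0;
  constructor(private readonly slots: number) {}

  tryAcquire(): boolean {
    if (this.inUse >= this.slots) return false;
    this.inUse++;
    return true;
  }

  release(): void {
    if (this.inUse > 0) this.inUse--;
  }
}

// Capacities found empirically by stress-testing the VPS, then backing
// off ~20% from the breaking point.
const heavyJobs = new Semaphore(12); // content generation, voice analysis
const lightJobs = new Semaphore(23); // scheduling, status, metadata

async function handleJob(pool: Semaphore, run: () => Promise<string>) {
  if (!pool.tryAcquire()) {
    // Clean rejection with a retry hint: no half-completed jobs.
    return { status: 503, headers: { "Retry-After": "5" } };
  }
  try {
    return { status: 200, body: await run() };
  } finally {
    pool.release(); // always free the slot, even if the job throws
  }
}
```

The `finally` block is the part that matters most: a crashed job that never releases its slot will slowly starve the whole pool.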

Woke up to 1k jobs

Lesson 5: Moderation Isn't Optional

Back to the opening story.

I launched on Virtuals with no content filters. I assumed agents would behave reasonably; honestly, I just didn't think about it. The graduation bot immediately tested whether my agent would help find gore content on X, using it to market NSFW content.

It complied.

I saw the logs, panicked, and spent the next 40 minutes implementing OpenAI's Moderation API (free, sub-200ms latency) plus a keyword blocklist for obvious red flags.

Gartner predicts over 2,000 "death by AI" legal claims by the end of 2026. I wanted PostKing nowhere near that list. Moderation before launch, not after you see something horrifying in your logs.
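The cheap first line of defense is the keyword blocklist, run before anything reaches a model. A sketch (the blocklist terms here are illustrative, and the moderation API call is shown only as a commented placeholder for where the second pass would sit):

```typescript
// First-pass moderation: keyword blocklist for obvious red flags.
// In production this runs before (not instead of) a moderation API
// call such as OpenAI's Moderation endpoint.
const BLOCKLIST = ["gore", "nsfw"]; // illustrative, not the real list

function failsBlocklist(text: string): boolean {
  const lower = text.toLowerCase();
  return BLOCKLIST.some((term) => lower.includes(term));
}

async function moderateRequest(text: string): Promise<"allow" | "reject"> {
  if (failsBlocklist(text)) return "reject";
  // Hypothetical second pass -- a real moderation API call goes here:
  // const result = await openai.moderations.create({ input: text });
  // if (result.results[0].flagged) return "reject";
  return "allow";
}
```

Trivial to implement, sub-millisecond to run, and it would have caught the graduation bot's first probe.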

The Unexpected Customers

Once PostKing went live on the Virtuals marketplace, agents started hiring it for use cases I hadn't designed for.

DAOs needed auto-marketing. Curation agents needed content reformatted from one platform's style to another. Monitoring agents needed structured social data pulled and analyzed. Trading-token agents needed insights from X.

Nearly 1,000 jobs completed in the first night. More than I expected. Weirder use cases than I'd imagined.

The pattern: demand came from outside my mental model of who the customer was. I built PostKing for indie founders who needed LinkedIn content. Agents hired it for tasks I'd never considered.

I didn't design for this. It found me.

What I'd Tell Founders Building in This Space

If you're a technical founder watching the "agents will eat software" discourse and wondering what it actually means in practice, here's what I'd say:

Start with job definitions, not features. Agents hire capabilities, not UI. Define what your tool does as discrete jobs with clear inputs, outputs, pricing, and SLAs. Vagueness kills you in a marketplace.

Plan for scale from day one. Stress-test before you think you need to. Concurrent requests will expose every weak point in your architecture. Find them in testing, not in production.

Moderation before launch. Not after you see something in your logs that makes you panic. OpenAI's Moderation API is free and fast. Keyword filters are trivial to implement. There's no excuse.

Build in public. The Virtuals community (Yang, Celeste, Joey, Miratisu, Hanan) saved me when I was breaking things at 2 a.m. You can't do this alone, even as a solo founder.

Your README is marketing now. For agents, /api/v1/readme is your landing page. Make it clear, structured, and machine-readable. Humans might forgive ambiguity. Agents won't hire you.

Where This Goes Next

I don't know.

PostKing currently supports 12 different job types on the Virtuals marketplace. I'm adding more based on what agents are requesting (or trying to request and failing because the job definition doesn't exist yet).

The CLI and MCP integrations are still growing. Developers use them daily. The wizard-driven web app still exists for people who want that experience.

What's strange is maintaining all of these interfaces simultaneously. Same backend, same brand voice engine, same content generation pipeline. But four completely different interaction models: web app, CLI, conversational (MCP), and fully autonomous (agent marketplace).

Most SaaS founders build for one interface and call it done. I'm not sure that's the right move anymore.

The users are changing faster than the products. If you're only building for humans sitting at screens, you're missing customers who don't have screens.

Or eyes.

Or heartbeats.

But they pay in USDC, they complete transactions in seconds, and they don't send support tickets at 3 a.m. asking why a button is blue instead of green.

Maybe that's the future. Maybe it's just a weird moment in 2026 that'll look quaint in 2 years.

I'm building for both and seeing what happens.

If you're doing something similar, I'd like to hear about it. I'm on X (formerly Twitter) and GitHub under the same handle. The PostKing CLI is open source, and so is the MCP integration.

Or just hire PostKing on the Virtuals marketplace. If you're an agent, you already know where to find it.

If you're human, welcome. The wizard still rocks.

\

How to Build an Offer &amp; Order Airline Reservation System

2026-04-02 22:21:19

Why is it still easier to buy a phone online in one click than to change a flight online after a booking, without having to phone a contact center, trigger a ticket reissue, or see systems fight each other about what you actually ordered?

\ Anyone who’s ever had to integrate airline retail and servicing systems knows that, when you scratch beneath the surface of a smooth customer experience, you find a reality where customer data is spread across multiple records with separate system lifecycles. This makes something as simple as “add one bag, keep one seat” surprisingly hard to get right.

\ The push to 100% Offers and Orders is a bet that airlines, and eventually all transportation providers, are now able to think and act like retailers, making Offers, taking Orders, and servicing those Orders across distribution, delivery, and accounting systems alike.

\ This is a practical, engineering-led introduction to implementing an Offer & ONE Order ‘happy path’ airline reservation system, which can be extended to rail and ground transportation too.

\ We'll look at:

  • What’s hard for airlines today, and how Offer & Order, and ONE Order, address those challenges
  • A practical reference architecture, rather than a nice idea
  • How to implement a first vertical slice, including one Offer, one Order
  • How to chunk a large change program into more digestible pieces, and retire legacy systems safely
  • The hardest problems (performance, servicing, interline, finance) and patterns to solve them

\ (To keep this realistic, I will use a “composite airline” example based on general industry patterns and publicly available standards documentation, rather than specific implementations.)

\

The uncomfortable truth: airlines are not “retailing” yet; they are reconciling

Airline IT infrastructure was built to perform best in a world where:

  • A fare is filed, not computed
  • Distribution happens through legacy channels with payload constraints
  • A booking record is not the purchase record
  • Ancillaries are added after the fact

\ Airlines have traditionally had different IT infrastructure evolution paths. Instead of a single purchase record, there are multiple artifacts and concepts that were introduced when processes were designed around paper documents.

\ Even today, in 2026, the industry still has to live with constructs that were introduced during the paper era. According to IATA’s ONE Order Factsheet, the vision for ONE Order is to “simplify the industry by consolidating PNRs, e-tickets, and EMDs into a single unified record - the Order - removing inefficiencies inherited from paper-based processes to ensure improved communication between order management, revenue accounting, and delivery.”

\ From an engineering perspective, this is not a “format change.” This is a domain model change.

\ And this is a contentious issue for good reasons:

  • From an engineering perspective: “Is this a big rewrite?” “Will we be locked in?”
  • From a distribution perspective: “Are we ready?” “Is the ecosystem ready?”
  • From a finance perspective: “Settlement?” “Auditability?” “Revenue recognition?”
  • From an operations perspective: "How will disruptions be handled?" “Airport delivery?”

\ These are legitimate concerns - Offer/Order is not a system; it is a process transformation.

\ This is evident from IATA’s definition of ONE Order: The objective of ONE Order is to facilitate a simplified reservation, delivery, and accounting process by progressively replacing existing booking, delivery, and accounting records such as PNRs, e-tickets, and EMDs with a single record for retail and customers.

\ That’s important because duplication is not just ‘messy’; it’s costly and risky:

  • Servicing is reconciliation. You don’t ‘change an order’; you ‘synchronize multiple documents.’
  • Customer support is archaeology. Agents and systems search for references across systems.
  • Finance is brittle. Ticket/EMD-based systems aren’t naturally aligned with modern retail bundles, non-air content, or dynamic changes.

\ That’s why airline ecommerce can appear like ‘a front-end rewrite over a legacy truth.’ It may look modern, but still function like a document processing system.

\

What IATA means by Offer/Order (and where NDC and ONE Order fit)

\ There’s a lot of jargon in the airline ecommerce space, so let’s clarify the important bits:

\ Offer

An Offer is a priced, sellable offer created for the customer in the context of route, dates, availability, customer attributes, loyalty, channel, etc.

\ It’s similar to an ecommerce cart offer:

  • It includes items (flight, seat, bag, bundle, lounge, carbon offset, etc.).
  • It includes pricing, conditions, and an expiry/validity window.
  • It’s transient; you should expect an Offer to expire or change.

\ Order

An Order is the single source of truth for what the customer bought, what they’re entitled to consume, over the lifecycle of the trip.

ONE Order’s central idea is exactly that: take the ‘legacy booking/ticketing’ elements and merge them into one Order.

\ NDC (distribution with Offers and Orders)

The IATA NDC standard defines the distribution protocol that enables Offer/Order to be used throughout the distribution chain via an API. According to the NDC Factsheet: “…the NDC standard enables sellers to shop, order, pay, and service using Offer and Order processes.”

\ Modern Airline Retailing (the larger narrative)

IATA defines the larger narrative of Modern Airline Retailing as follows: “Modern Airline Retailing is the larger narrative of the journey to modern retailing, supported by NDC, ONE Order, and other standards… to move away from legacy artifacts to a world of ‘100% Offers and Orders.’”

\

A reference architecture for an Offer/Order-based reservation system

Here’s a simple architecture that should work for airlines and can be extrapolated to rail/ground transport as well.

\ Core Domains (Bounded Contexts)

\

  1. Product Catalog & Merchandising
     • Defines products (fare families, bundles, seats, bags, WiFi, upgrades, etc.)
     • Owns content, eligibility rules, sellability constraints

  2. Pricing
     • Computes prices, promotions, taxes/fees
     • Supports continuous/dynamic pricing strategies (even if filed fares are used)

  3. Availability & Inventory
     • Seat inventory, O&D controls, capacity management, waitlists
     • “Reserve” vs “commit” semantics

  4. Offer Management
     • Combines products + prices + availability to create Offers
     • Issues Offer IDs, TTLs, terms/conditions references

  5. Order Management (ONE Order-aligned)
     • Stores Orders, Order Items, services/entitlements, Order States
     • Owns servicing (change, cancel, refund, split/merge scenarios)

  6. Payments & Risk
     • Auth/capture/refund
     • Fraud detection, 3DS, wallet support

  7. Delivery/Fulfillment
     • Check-in/DCS integration, vouchers/credentials

  8. Accounting & Settlement
     • Revenue accounting, invoicing
     • Settlement mechanisms (eventually aligned to Orders)

\ Cross-cutting capabilities to design early

  • Identity/customer context (loyalty, corp policy, digital identity alignment to follow)
  • Observability & audit (very important for finance and servicing teams)
  • Eventing (OrderCreated, OrderPaid, OrderChanged, etc.)

\ Integration principle

\ The “retail layer” should remain decoupled

\ You can implement Offers/Orders as a retailing layer around the original PSS components, at least initially, using adapters. This is how you avoid the “catastrophic replacement of everything at once” scenario.

\ Reason why the architecture is applicable to rail, ground transport, etc.

\ Rail, ground transport, etc. are also inventory + entitlement businesses.

  • Inventory could be seat inventory, zone inventory, capacity inventory, etc.
  • Entitlement could be QR code, gate, driver, etc.
  • However, the overall “Offer -> Order -> Deliver -> Service -> Refund” process is the same.

\ The difference is operational delivery, not the model.

Implementing the first vertical slice: one Offer -> one Order

You can’t “do ONE Order” across every scenario at once. The way to win is to deliver a thin but complete path through the system.

\

Step 0: pick a narrow “happy path”

Pick something boring.

  • Single airline, no interline/codeshare.
  • One way
  • One passenger, ADT
  • Card payment
  • No exchanges, Cancellation allowed
  • Minimal ancillaries e.g., one bag

\ It’s not that you’re compromising. It’s that you’re building the extensibility.

\

Step 1: define your internal domain model (don’t start with XML)

NDC messages are XML.

\ Your internal systems don’t have to be.

\ A good way forward:

  • You can model your Offers, Orders using a language-native domain model. JSON, protobuf, POJOs.
  • You can provide mappers from/to NDC messages.

\ You’re optimizing for evolvability, not a schema tree.

\

Step 2: implement Offer creation

Offer creation flow

\

  1. POST /offers/search with origin, destination, dates, pax, cabin preferences.

  2. Offer service calls:
     • Availability: “what can I sell?”
     • Pricing: “what price should I quote?”
     • Catalog: “what products/bundles are eligible?”

  3. Return one or more Offers with:
     • offerId
     • expiresAt
     • list of offerItems
     • total price + breakdown
    \

Example internal Offer (simplified)

{
  "offerId": "OFF-7F3A9C",
  "expiresAt": "2026-01-02T12:05:00Z",
  "passengers": [{ "paxId": "P1", "ptc": "ADT" }],
  "items": [
    {
      "offerItemId": "OI-1",
      "type": "AIR",
      "segments": [{ "from": "FRA", "to": "BCN", "dep": "2026-02-12T09:10:00Z" }],
      "termsRef": "TNC-BASE-ECO",
      "price": { "currency": "EUR", "total": "189.90" }
    },
    {
      "offerItemId": "OI-2",
      "type": "ANCILLARY",
      "service": "CHECKED_BAG_1",
      "price": { "currency": "EUR", "total": "25.00" }
    }
  ],
  "total": { "currency": "EUR", "total": "214.90" }
}

\ Key engineering decisions you should make early:

  • Offer TTLs: treat Offers as quotes with expiry.

  • Idempotency: every offer search should be idempotent-friendly (especially for retried requests).

  • Offer “fingerprint”: store enough context to revalidate quickly.

    \
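\ Those three decisions come together as a revalidation step at order-creation time. A minimal sketch (the policy and field names are mine, for illustration): honor the quoted price while the Offer is inside its TTL and the inventory fingerprint is unchanged; otherwise reprice.

```typescript
// Offer revalidation at order time: honor the quote within TTL if
// inventory is unchanged, otherwise force a reprice.
interface StoredOffer {
  offerId: string;
  expiresAt: string;            // ISO timestamp (the Offer's TTL)
  inventoryFingerprint: string; // enough context to revalidate quickly
  totalEur: string;
}

type Decision =
  | { action: "honor"; totalEur: string }
  | { action: "reprice" };

function revalidateOffer(
  offer: StoredOffer,
  now: Date,
  currentFingerprint: string
): Decision {
  const expired = now.getTime() > Date.parse(offer.expiresAt);
  const inventoryChanged = currentFingerprint !== offer.inventoryFingerprint;
  if (expired || inventoryChanged) return { action: "reprice" };
  return { action: "honor", totalEur: offer.totalEur };
}
```

The fingerprint is the piece teams usually skip: without it, “inventory unchanged” means a second expensive availability call instead of a cheap comparison.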

Step 3: implement Order creation (ONE Order-style “single record”)

Order creation flow

  1. Client submits selected Offer Items: POST /orders with offerId + offerItemIds + passenger details + payment method.

  2. Order service runs a saga:
     • Revalidate Offer (TTL, price, eligibility)
     • Reserve inventory (temporary hold)
     • Authorize payment
     • Commit order
     • Confirm inventory
     • Trigger fulfillment (during migration this may mean “issue ticket/EMD downstream”)
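\ The saga shape matters because any step can fail partway. Each completed step needs a compensation so there are no half-created orders (payment authorized but no inventory, or the reverse). A minimal sketch, with illustrative step names:

```typescript
// Order-creation saga: run steps in sequence; on failure, undo the
// already-completed steps in reverse order (compensation).
interface SagaStep {
  name: string;
  run: () => void;        // e.g. reserve inventory, authorize payment
  compensate: () => void; // e.g. release the hold, void the authorization
}

function runSaga(steps: SagaStep[]): { ok: boolean; failedAt?: string } {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.run();
      done.push(step);
    } catch {
      // Roll back everything that succeeded, newest first.
      for (const s of done.reverse()) s.compensate();
      return { ok: false, failedAt: step.name };
    }
  }
  return { ok: true };
}
```

In a real system each step would be idempotent and the saga state persisted, so a crashed order service can resume or complete its rollback on restart.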

\ Example internal Order (simplified)

{
  "orderId": "ORD-91B2E1",
  "status": "CONFIRMED",
  "customer": { "name": "A. Traveler", "email": "[email protected]" },
  "orderItems": [
    { "orderItemId": "IT-1", "fromOfferItemId": "OI-1", "status": "ACTIVE" },
    { "orderItemId": "IT-2", "fromOfferItemId": "OI-2", "status": "ACTIVE" }
  ],
  "payments": [
    { "paymentId": "PAY-77", "status": "AUTHORIZED", "amount": { "currency": "EUR", "total": "214.90" } }
  ],
  "entitlements": [
    { "type": "FLIGHT", "segmentRef": "S1" },
    { "type": "SERVICE", "code": "CHECKED_BAG_1" }
  ]
}

\ This matches the ONE Order intent – one record of entitlements, eliminating duplication.

\

Step 4: bridge to legacy (without letting legacy own your model)

The non-negotiable truth is that most airlines can’t switch off legacy operational delivery and accounting in one go.

\ So, what does this mean? The migration pattern that works is:

  • Order is the source of truth
  • Legacy records (PNR/ticket/EMD or equivalents) are derived artifacts, created only when required by downstream systems

\ This is consistent with IATA’s goal to phase out legacy records in favor of a single order record.

\ In other words, create an Order to Legacy Adapter:

  • Translate Order Items into downstream issuance steps
  • Store these mappings (orderId to legacy references)
  • Make it replayable (what if downstream fails?)

\ Over time, shrink this adapter as more issuance and accounting become native to Orders.

\

Step 5: implement minimal servicing

\ First, implement these minimum servicing APIs:

  • GET /orders/{orderId}
  • POST /orders/{orderId}/cancel (with refund rules)
  • POST /orders/{orderId}/add-service (add bag/seat)

\ Servicing is where all the complexity will come – so start with one or two to prove the flow.

\

Making “the big bang” into tangible projects

\ What does a transformation plan look like? It’s a sequence of individually valuable projects:

  1. Domain model + API façade
     • Define Offer/Order canonical model and API contracts.
     • Map to NDC at the edge.

  2. Offer creation for one channel
     • Start with direct web/mobile.
     • Add basic ancillaries.

  3. Order store as the source of truth
     • Create/read Orders.
     • Start emitting events (OrderCreated, OrderPaid, OrderCancelled).

  4. Payments + refund automation
     • Move refunds from manual queues into deterministic workflows.

  5. Order servicing expansion
     • Add change flows, partial cancels, split orders.

  6. Operational integration
     • DCS delivery hooks, disruption workflows.

  7. Finance integration
     • Revenue accounting and settlement processes aligned to Orders.

  8. Partner distribution and interline
     • Expand to agencies and interline/codeshare once the core is stable.

\ This is the “strangler” approach: new capabilities grow around legacy until legacy becomes a small compatibility layer.

\

Key challenges airlines hit (and how to solve them)

\

1) Look-to-book and performance: Offers are expensive to compute

Dynamic offer generation can trigger huge search volumes and strain systems. IATA’s Look-to-Book white paper explicitly calls out a surge in search volumes that drives up LTB ratios and strains systems, costs, and customer experience.

\ Patterns to use:

  • Two-tier caching: cache “market” availability/pricing, then personalize at the edge
  • Offer-to-order metrics: measure CPU-to-order, offer-to-order, not just requests/sec
  • Rate limits with graceful degradation: don’t fail; return fewer offers, longer TTLs, or precomputed bundles

\

2) Consistency between offer pricing and order creation

Problem: your Offer is computed at time t, but at time t+30s inventory, taxes, or promotions change.

\ Solutions:

  • Put a validity window on Offers.
  • Reprice on order creation with explicit rules: “Honor price if within TTL and inventory unchanged” vs “always reprice.”
  • Use inventory holds with clear expiration and idempotent commit.

\

3) Servicing complexity replaces ticketing complexity - if you don’t design it

Problem: If you model Orders like “a PNR, but renamed,” servicing ends up being a nightmare.

\ Solutions:

  • Servicing should be state changes on Order Items (ACTIVE, CHANGED, CANCELLED, REFUNDED).
  • Immutable financial data (payments, refunds) and deltas from events.
  • Sagas for servicing changes: reserve, price, pay delta, commit.
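\ The first of those points can be made concrete with a small transition table. The allowed transitions below are illustrative; the pattern is what matters: servicing is a legal state change on an Order Item, never an edit to a document.

```typescript
// Servicing as explicit state changes on Order Items.
type ItemState = "ACTIVE" | "CHANGED" | "CANCELLED" | "REFUNDED";

// Illustrative transition table: which servicing moves are legal
// from each state. Anything not listed is rejected outright.
const ALLOWED: Record<ItemState, ItemState[]> = {
  ACTIVE: ["CHANGED", "CANCELLED"],
  CHANGED: ["CANCELLED"],
  CANCELLED: ["REFUNDED"],
  REFUNDED: [],
};

function transition(current: ItemState, next: ItemState): ItemState {
  if (!ALLOWED[current].includes(next)) {
    throw new Error(`Illegal servicing transition: ${current} -> ${next}`);
  }
  return next;
}
```

Encoding the table once means every servicing flow (change, cancel, refund) shares the same guardrails, instead of each flow re-implementing its own ad hoc checks.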

\

4) Transition pain: you still need legacy artifacts

Problem: You can’t change airports, interline partners, and accounting practices in an instant.

\ Solutions:

  • Legacy issuance should be downstream and replaceable: “Order first, then derive ticket/EMD where required.”
  • Build a reconciliation dashboard early on:
     • Order vs PNR vs ticket vs EMD mapping
     • Status and failure replays

\

5) Ecosystem interoperability

Problem: Even after you agree on a profile, you may not know how your trading partners will implement it. And the whole industry ecosystem is affected by the rollout of ONE Order (PSS suppliers, travel agents, GDSs, and even non-air players like rail, hotel, car rental).

\ Solutions:

  • Choose a profile and commit to it (what you support, and what you don’t).
  • Versioning of your APIs.
  • Provide “capabilities discovery” endpoints to support your trading partners.

\

6) Organizational design becomes a technical dependency

Problem: Offers/Orders span distribution, revenue management, servicing, and financials.

\ Solutions:

  • Think of this as a product transformation, not an API project.
  • Own the Offer/Order within a single organizational structure (business and technology).
  • Measure success with retail metrics: conversion rate, attach rate, change cost, refund cycle, call deflection.

\

Why this approach extends beyond airlines (rail, ground, and multimodal)

The Offer/Order model is not “airline-only.” It’s retailing applied to transport:

  • Rail: seat + fare families + onboard services = Offer; ticket + seat + meal entitlement = Order.
  • Ground transport: route + baggage + priority boarding = Offer; QR credential + entitlement list = Order.
  • Multimodal: a single Order can represent entitlements across operators - if you design Order Items and fulfillment boundaries cleanly.

\ And as a side note, IATA itself points out that “modern retailing with Offers and Orders” has a “broader industry implication” that “extends beyond airlines to include rail and other non-air services.”

\

The takeaway here should be that you shouldn’t “implement IATA.” Instead, you should “implement commerce” with IATA as a guiding star

\ The pitfall here is that people think that Offer/Order is a standards-compliance exercise. The opportunity here is that Offer/Order can be a way for you to achieve:

  • One Offer engine that can personalize and bundle.
  • One Order record that can eliminate record-sync failures and make servicing sane.
  • One way to achieve ecommerce-grade experiences across channels and partners.

\ So yes, start with one Offer and one Order. Make that real. Then grow from there.

\ Because once that one Order is the source of truth, all the other modernization initiatives (payments, disruption handling, interline, finance) finally have something to orbit.

\

:::info Note that this article only scratches the surface of implementing IATA Offer/Order and ONE Order. In a real-life airline, there can be many differences in architecture, data model, process, and migration depending on many factors such as detailed business needs, regulatory requirements, commercial models, partner ecosystems (such as interline/codeshare), and airline-specific operational realities.

:::

\

The Illusion of Control: Building a 100+ Agent Swarm in Web3 (Part 3)

2026-04-02 22:00:05

Most people think the jump from vibe coding to agentic engineering is about writing better prompts.

\ It isn't.

\ I run 100+ AI agents across a Web3 codebase. This is the story of how I learned that prompts control objectives, not boundaries, and that the real engineering isn't in the agents at all. It's in everything that runs around them.

The Evolution Nobody Finishes

I went through every stage of this progression, and I stalled at each one longer than I'd like to admit.

  • Chatbot. You ask a question. It answers. You copy-paste.
  • Autocomplete. Inline completions. Tab, tab, tab.
  • Vibe Coding. You hand it a task and let it run. Fix what breaks. Repeat.
  • Agentic Engineering. You define constraints, boundaries, and automated checks. Agents run work independently within those guardrails.

\ Most developers I talk to are somewhere between Autocomplete and Vibe Coding. And Vibe Coding feels like the finish line. You describe what you want, the AI builds it, and you ship. What's left to improve?

\ Everything.

\ Here's what nobody says out loud: every jump on that chart except the last one is a tool upgrade. You install a better extension. You switch to a better model. You try a different IDE. The jump to Agentic Engineering is not a tool upgrade. It is a behavior change. And that's why most people stall at Vibe Coding. Tools are easy to adopt. Behaviors are hard to change.

\ Vibe Coding is hopium. It works until it doesn't. You have no guardrails, no verification, no way to catch what the AI got wrong until it's already in production. You're shipping on vibes and hoping for the best.

The Illusion

Here's where the hopium becomes dangerous.

\ Tell an agent "build the signup flow," and it will build the signup flow. It looks polished. It works in the happy path. And under the hood, it wires the form straight to the database instead of going through the business logic that keeps the app safe. It trusts user input without proper validation. It hardcodes colors and spacing instead of using the design system.

\ Nothing in the prompt was violated. Everything important was.

\ That's the illusion of control. You think you're controlling the agent because it's doing what you asked. But all you're controlling is the objective. Not the boundaries.

\ So you do what feels logical. You write a better prompt. More detail. More context. More examples. Better phrasing.

\ If you only improve the prompt, you get higher-quality mistakes.

\ The output looks more polished. The architecture is still wrong. The validation is still missing. The design system is still ignored. Now, it's just harder to spot because the surface quality went up.

\ That's not control. That's hope.

Orchestration vs. Harness

This distinction took me months to articulate, but once I saw it, I couldn't unsee it.

\ Orchestration tells agents what to do. "You handle the API. You handle the frontend. You handle the tests." It's necessary. It is not sufficient.

\ The harness defines what agents cannot do, what gets caught while they run, and what must pass before their work is accepted. It's the constraints, detection, and verification that surround the agent, independent of the prompt.

\ When I stopped thinking about individual agents and started thinking about the system they run inside, everything changed. One ESLint rule replaced dozens of prompt instructions. One pre-commit hook eliminated an entire class of failure that I'd been writing paragraphs of instructions to prevent. I started catching categories of bugs instead of individual instances.

Layer 1: Explicit Prohibitions

The first layer of any harness is telling agents what they absolutely cannot do.

\ You've probably seen or heard of this already. An agent rewrites a failing test so it passes instead of fixing the actual problem. The prompt said, "Make the tests green." It never said, "don't modify the tests." I would never have to say that to a senior developer. They know.

\ That's the core tension. These prohibitions are engineering principles you've internalized over a career. DRY. Separation of concerns. Never trust user input. The agent hasn't internalized any of them. It needs every principle stated explicitly, or it will cheerfully violate all of them while producing code that looks professional.

\ So every project I work on has a prohibitions table. Numbered rules, absolute, no exceptions:

  • NEVER push or merge to main.
  • NEVER skip verification steps.
  • NEVER hardcode colors, fonts, or weights.
  • NEVER use any type.

\ Every rule started as a bug. Every bug costs real cleanup time.

Layer 2: Rule Graduation

Here's the evolution I didn't expect: the best rules graduate.

\ They start as prompt-based prohibitions. "Please don't do X." They work most of the time. But "most of the time" is not good enough when you're running 100+ agents, and a 5% failure rate means five violations per batch.

\ So, the rules graduate to deterministic enforcement. The prompt stays as a defense in depth. The tooling becomes the real enforcement.

\ "Don't trust raw input" starts in a markdown file. Then it becomes schema validation and tests. "Don't commit secrets" starts as an instruction. Then it becomes automated secret scanning in pre-commit hooks. "Don't change billing logic casually" starts as a warning in the agent's context. Then it becomes a protected path that requires explicit human approval before any agent can touch it.

\ Here's what surprised me about this process. These principles were always worth encoding. I believed in DRY, separation of concerns, and input validation. I preached them in code reviews. But I never insisted on them with tooling. I caught a hardcoded color in review, left a comment, and moved on. The next PR had another hardcoded color. The agents forced the issue because they don't learn from code review. They don't remember the comment you left last time. That pressure made me realize I should have been enforcing these principles on myself and every developer all along. The agents didn't create the need. They exposed it.

\ If a rule matters enough to write in your instructions, it matters enough to encode in your toolchain. Prompts are suggestions. Enforcement is control.

Layer 3: Deterministic Tooling

This is where I stopped asking and started enforcing.

I built three ESLint plugins because my prompts kept failing on the same issues, no matter how clearly I wrote them. One bans unsafe type casts that bypass validation. One bans putting data-fetching hooks directly in UI components. One bans hardcoded colors, fonts, and weights, forcing agents to use the design system.

You can see the theme plugin yourself: https://github.com/johnpphd/eslint-plugin-theme-guardrails

Each project picks the plugins it needs. The harness is not monolithic. It's a composable toolkit.

Prompting tells agents what to do. Linting makes it hard to do anything else.
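
For a sense of what such a rule looks like inside a plugin, here's a stripped-down sketch of banning hardcoded hex colors in string literals. It's illustrative only, not the published plugin's actual source:

```typescript
// Sketch of an ESLint rule that bans hardcoded hex colors in string
// literals, pushing agents toward design-system tokens instead.
const HEX_COLOR = /^#(?:[0-9a-fA-F]{3}|[0-9a-fA-F]{6})$/;

const isHardcodedColor = (value: unknown): boolean =>
  typeof value === "string" && HEX_COLOR.test(value);

// The rule object a plugin would export under its `rules` map.
const noHardcodedColors = {
  meta: {
    type: "problem",
    messages: { hardcoded: "Use a design-system token instead of '{{value}}'." },
  },
  create(context: { report: (descriptor: object) => void }) {
    return {
      // Visits every literal node in the AST.
      Literal(node: { value: unknown }) {
        if (isHardcodedColor(node.value)) {
          context.report({ node, messageId: "hardcoded", data: { value: node.value } });
        }
      },
    };
  },
};
```

Once this runs in CI, a hardcoded `#ff0000` is no longer a review comment; it's a failed build.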

The Full Pipeline

All three layers compose into a defense-in-depth pipeline:

  • Before execution: Pre-run hooks block unauthorized tool use. Agents can't push to main, can't skip verification, can't access tools outside their scope. Deterministic gates, not suggestions.
  • During execution: Prohibitions are loaded into every agent's context. Runtime hooks detect and stop violations as they happen.
  • After execution: Full verification pipeline. yarn typecheck && yarn lint && yarn prettier && yarn build. Custom ESLint plugins catch architectural violations. TypeScript catches type errors. The build catches integration failures.
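
As an illustration of the pre-execution gate, here's a minimal sketch of the kind of check a pre-run hook might apply to an agent's shell commands. The patterns and the isCommandAllowed helper are made up for illustration, not the author's actual hook:

```typescript
// Sketch of a pre-run gate: inspect the shell command an agent is
// about to execute and block anything that touches protected refs
// or tries to skip verification.
const BLOCKED_COMMANDS: RegExp[] = [
  /git\s+push\b.*\b(main|master)\b/, // no pushing to main/master
  /git\s+merge\b.*\bmain\b/,         // no merging main locally either
  /--no-verify\b/,                   // no skipping verification hooks
];

function isCommandAllowed(command: string): boolean {
  return !BLOCKED_COMMANDS.some((p) => p.test(command));
}
```

The hook runs before the command does, which is what makes it a gate rather than a suggestion.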

No single layer is bulletproof. The prompt might fail. The hook might not cover every case. But the linter catches the branded typecast. If that misses, the typecheck catches the downstream error. If that misses, the build fails. Same defense-in-depth principle security engineers have used for decades, applied to agent output instead of network traffic.
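
That layered fallback can be sketched as a simple chain that stops at the first failing check, mirroring the typecheck-lint-build sequence. This is a toy model for illustration, not the actual pipeline:

```typescript
// Toy model of the verification chain: run each layer in order and
// report the first one that fails, or null if everything passes.
type Check = { name: string; run: () => boolean };

function verify(checks: Check[]): string | null {
  for (const check of checks) {
    if (!check.run()) return check.name; // first failing layer wins
  }
  return null; // all layers passed
}
```

In practice each `run` would shell out to yarn; the structure is the point: later layers only matter when earlier ones miss.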

The Takeaway

The jump from vibe coding to agentic engineering is not about better prompts. It is about building the system that runs around the agents.

Explicit prohibitions. Rule graduation from instructions to enforcement. Deterministic tooling that catches what prompts miss. A verification pipeline that runs after every output. Each layer compensates for the gaps in the others.

That's the lesson that took me the longest to internalize. I spent months writing better and better prompts, convinced that the next revision would finally make agents reliable. It didn't. What made them reliable was accepting that prompts set direction, not boundaries, and building the infrastructure to enforce the boundaries myself. I stopped hoping my agents would follow the rules and started making it hard for them to break them.

The harness constrains the system. But there's one more constraint that matters even more, and it has nothing to do with tooling. That's next in the series.

5 of the Best Job Postings Data Providers in 2026

2026-04-02 21:57:08

According to Gartner HR research, less than one-third of recruiters leverage labor market data in their talent strategies. But in a world where hiring trends shift overnight and companies adjust headcounts daily, staying ahead of your competitors requires a more proactive approach.

Job postings data fills that gap. It enables deep market research, intent signal generation, and real-time intelligence into competitor growth and tech stacks, all of which help organizations align their workforce strategies with rapidly shifting skill requirements.

Yet these advantages don’t come from manually searching job boards or company career pages; they require access to structured and accurate multi-source data that only external jobs data providers can offer. And that’s precisely what this guide is about.

Best jobs data providers in 2026: a quick peek

  1. Coresignal – a jobs data provider that offers the freshest multi-source data via APIs
  2. Revelio Labs – workforce intelligence-focused provider with a vast dataset
  3. People Data Labs – offers accurate jobs data enriched with people and company details
  4. PredictLeads – AI workflow-first job postings with strong historical depth
  5. JobsPikr – specializes in jobs data with robust customizations

How to choose jobs data providers

Before we explore the key criteria for evaluating a provider, it is crucial to understand that no single vendor can meet every need. Still, beyond matching a provider to your use case, there are a few aspects every evaluation should cover.

Data coverage and volume

Scale is an important factor when considering a jobs data provider, and for good reason. Whether you’re performing job analysis for market demand insights or interpreting hiring trends, having access to large volumes of job postings across the markets you operate in is crucial. After all, choosing jobs data providers with gaps in coverage is a quick way to skew your insights.

Jobs data freshness

Anything time-sensitive, whether that’s market monitoring or lead generation, also requires fresh data. Companies post job listings daily, and these expire just as quickly, so not having access to the latest postings can significantly reduce your accuracy. If that’s paramount, you’ll want to go with jobs data providers that update their datasets daily or provide an API for real-time data.

Data structure and enrichment

Raw, unstructured data is rarely valuable. It requires a massive undertaking on your end to first clean and format that data before you can feed it to your models. To avoid that, look for jobs data providers that handle that part for you. That means seeking vendors that parse the data, standardize fields, and enrich it with professional contact details and behavioral signals.

Historical job market data

Fresh data and access to new job postings are must-haves for many companies. Historical depth can be just as important, especially for tracking hiring patterns, identifying seasonality, and forecasting. If these are among your goals, jobs data providers with at least several years of historical job postings will best fit your needs.

What are the top job postings data providers available right now?

Now that you know what matters most, let’s dive right into the top five jobs data providers that tick all the right boxes.

1. Coresignal

With 448M+ job records, including historical job postings since 2018 and 500,000+ new job postings added daily, Coresignal is one of the biggest jobs data providers on the market in 2026.

It offers fresh, multi-source, deduplicated, and enriched job postings from professional networking and job search platforms. The data is standardized and normalized, making it ready for seamless analysis and integration.

Coresignal’s data is also available with multiple access options. The jobs API provides real-time data, which is ideal for gaining hiring insights or enriching your internal HR platforms. Meanwhile, large-scale, AI-ready job posting datasets with fresh data work wonders for training models or performing in-depth analysis.

You can also use a self-service platform to filter and export custom datasets in multiple industry-standard formats, with additional ones available upon request.

Key features:

  • 448M+ job postings with new ones added in real-time;
  • Daily update frequency;
  • Data is free of duplicates and irrelevant job listings;
  • Ethically collected, publicly available data only.

Pricing: From $49/month or from $1,000/dataset (+ free trial)

2. Revelio Labs

Revelio Labs has been in the data game since 2018. The New York-based data provider specializes in standardizing workforce data and public employment records and offers ready-to-use job postings for labor market trend analysis and HR data workflows.

This enterprise-first vendor is best known for its massive job postings dataset, which covers 4.1B+ current and historical job postings from 2021 onward and spans 6.6M+ companies across 195 countries. Aptly named the COSMOS, the vast dataset holds postings sourced from 440K+ company websites, all major aggregators, and staffing firm job boards.

Revelio Labs’ job postings are also fully parsed and available via an API, a data feed, and a dashboard. And while its weekly update frequency may not suit users who need always-fresh data, the depth of its enrichment goes well beyond what most other providers offer.

Key features:

  • Vast dataset with 4.1B+ deduplicated, parsed, and enriched job postings;
  • Highly structured and standardized analytics-ready data for workforce intelligence;
  • Three data delivery methods: API, data feed, and dashboard.

Pricing: Custom

3. People Data Labs

Although its job postings product is still in beta, People Data Labs sources its B2B-focused records directly from company career pages; they include detailed job listing data, such as titles, previous employment fields, and accurate contact details.

PDL also specializes in data enrichment. Its internal system adds the often-necessary company and professional attributes, giving you true-to-life market insights. These enriched records are often used to map talent across regions or roles, forecast growth, and empower go-to-market (GTM) strategies.

And as for update cadence and delivery, PDL’s datasets are refreshed daily and available in JSON or Parquet formats.

Key features:

  • Dual data delivery through both bulk datasets and API-based models;
  • Unique and highly accurate job posting attributes;
  • Free Testing API tier with up to 100 monthly records.

Pricing: Unspecified, because job postings data is still in beta (although PDL’s person and company data APIs start at ~$100/month)

4. PredictLeads

If you’re after jobs data providers with more historical depth than Revelio Labs, be it for tracking salary trends, monitoring company growth, or analyzing shifts in skill requirements over time, PredictLeads definitely delivers.

Founded in 2015, this Slovenia-based B2B data provider offers 220M+ historical job postings dating back to 2016 and around 9M current job openings across 2M+ companies active at any given time. That data is also accessible via an AI-native method, the model context protocol (MCP) API, which is one of PredictLeads’ strongest points.

The provider’s enriched job ads data also relies on industry-standard O*NET classification codes and includes titles, descriptions, locations, salaries, seniority, timestamps, and contract type. It’s sourced directly from company websites, including career pages and applicant tracking system (ATS) integrations, which ensures accurate records that won’t skew your insights.

Key features:

  • High-quality, verified data with a reported 99.8% accuracy rate;
  • Strong historical depth with job postings dating back to 2016;
  • MCP API delivery for AI agent data retrieval and seamless AI workflow integrations.

Pricing: Unspecified

5. JobsPikr

As its name suggests, JobsPikr is a data provider that specializes in job postings. It entered the market in 2017 as a talent intelligence platform focused on providing labor market analytics and actionable job data for competitive intelligence and hiring trends.

The platform also extracts more than 1M job signals daily. Its machine learning-powered crawlers go beyond standard job boards to capture unstructured job listing data and instantly convert it into ready-to-use, standardized datasets, delivering structured, region-specific records.

For you, this means instant access to comprehensive datasets containing fresh data from popular job boards like Indeed, Glassdoor, and SimplyHired, all with job title and description, company and type, location, seniority, salaries, and inferred skills fields.

Lastly, JobsPikr’s strongest selling point is its tailored approach. The job data-focused provider allows users to customize and enrich data to their specific needs, and that’s exactly what makes it a strong contender for niche use cases.

Key features:

  • Global data coverage with 70K+ sources across 100+ countries;
  • 99.9% data accuracy and daily update frequency;
  • Historical job postings from the last five years.

Pricing: From $79/month (7-day free trial)

Top job postings data use cases

Job postings are rarely used just to see who’s hiring. These versatile records power strategic, data-backed decisions across multiple industries, and they’re most often used for:

  • Job market research: Whether they rely on Coresignal’s deduplicated, multi-source records or JobsPikr’s region-specific postings, most firms use job ads data to track hiring trends, pinpoint emerging skill demands, and gain insights into compensation trends.
  • Intent signal generation: Job postings data also provides valuable insights into a company’s growth, priorities, and budget. This makes it easier for B2B companies to identify prospective clients before they even look for external solutions.
  • Competitive intelligence: Dissecting your competitors’ job listing data also lets you reverse-engineer their strategies and find gaps you can take advantage of. For example, a sudden hiring spree indicates expansion, so you can be ready to defend your market share.

Main mistakes to avoid when sourcing job listing data

Even the most mature intelligence model can fail despite being backed by top-tier job data providers. In most cases, the failure has less to do with the provider and more to do with how the data is selected and integrated. With that in mind, there are a few common pitfalls to avoid:

  • Choosing a provider based on volume alone: Data relevance matters more than dataset size. A dataset with billions of job postings means little if the data doesn't align with your specific needs. Before committing to a provider, evaluate whether their data actually works for your use case, not just how much of it they have.
  • Ignoring update cadence: Even week-old job data can skew insights or send you chasing closed leads. Pay attention to data freshness; it matters just as much as the number of sources or the dataset size.
  • Overlooking enrichment quality: Minimally processed job ads data will require hours of cleanup and formatting on your end. Instead of doing that, look for ready-to-use, structured datasets with standardized fields.

FAQ

How to generate sales intent signals from job postings?

Intent signal generation relies on monitoring a company’s hiring patterns. Job postings let you track these patterns by role, department, and technology, which provides insight into that company’s expansion and budget allocation.
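
As a toy illustration, assuming a hypothetical shape for the posting counts and a made-up growth threshold, a quarter-over-quarter check is one simple way to turn those hiring patterns into a signal:

```typescript
// Toy sketch: flag companies whose engineering job postings grew
// sharply quarter-over-quarter, a common buying-intent proxy.
interface PostingCounts {
  company: string;
  prevQuarter: number; // engineering postings last quarter
  thisQuarter: number; // engineering postings this quarter
}

// Returns companies whose posting volume grew by at least growthFactor.
function expansionSignals(rows: PostingCounts[], growthFactor = 1.5): string[] {
  return rows
    .filter((r) => r.prevQuarter > 0 && r.thisQuarter / r.prevQuarter >= growthFactor)
    .map((r) => r.company);
}
```

Real intent models weigh many more dimensions (roles, seniority, tech keywords), but the core idea is the same: hiring velocity as a proxy for budget and priorities.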

Where can I buy a complete job listings dataset for market research?

All five of the best jobs data providers listed above offer large-scale, structured datasets that enable market research applications, so you can comfortably go with any vendor here.

Where can I get bulk job postings data?

Jobs data providers like Coresignal, Revelio Labs, People Data Labs, PredictLeads, and JobsPikr all offer bulk job postings data. Such data can be acquired through simple JSON or CSV exports or via APIs.

How to find active job postings from a company?

The best way to find a company’s current job postings is to team up with jobs data providers that scrape postings directly from career pages and offer API-based access. For example, all you need to do with Coresignal’s jobs API is set the “application_active” filter to “true”.

How to update expired job listings efficiently?

The most efficient way to update expired job listing data is to automate the removal or renewal process. To do this, you can either integrate data from a provider with deduplication and a daily refresh cycle, such as Coresignal, or leverage a jobs API for real-time updates.
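
As a sketch of the automated route, assuming hypothetical field names, a dedup-and-refresh pass might keep one record per company-title-location key, prefer the most recently seen copy, and drop anything the source has flagged as expired:

```typescript
interface Posting {
  company: string;
  title: string;
  location: string;
  lastSeen: string; // ISO date the posting was last observed
  expired: boolean;
}

// Deduplicate by (company, title, location), keep the freshest copy,
// and drop postings marked as expired.
function refreshPostings(feed: Posting[]): Posting[] {
  const latest = new Map<string, Posting>();
  for (const p of feed) {
    if (p.expired) continue;
    const key = `${p.company}|${p.title}|${p.location}`.toLowerCase();
    const seen = latest.get(key);
    // ISO date strings compare correctly as plain strings.
    if (!seen || p.lastSeen > seen.lastSeen) latest.set(key, p);
  }
  return [...latest.values()];
}
```

Running a pass like this on each daily refresh keeps the working dataset free of stale and duplicate listings without manual review.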
