
RSS preview of Blog of The Practical Developer

Where LangChain Starts to Bend: The Signals That Tell You It’s Time for LangGraph

2026-04-02 15:43:14


Most teams do not outgrow LangChain because they added more tools.

They outgrow it when execution itself becomes something they need to design, inspect, recover, and govern. LangChain’s current agent APIs run on LangGraph under the hood, while LangGraph is positioned as the lower-level orchestration runtime for persistence, streaming, debugging, and deployment-oriented workflows and agents.

That is the transition this article is about.

Not syntax.

Not diagrams.

Not “graphs are more advanced.”

Not “real systems need more complexity.”

This is a playbook for a narrower and much more useful question:

How do you know your AI app is no longer just an application problem, but a runtime problem?

That is the real boundary between staying comfortably in LangChain and moving into LangGraph.

And that boundary matters, because teams get this wrong in both directions.

Some teams move too early. They introduce explicit state, branching graphs, checkpointing, and recovery logic before the product has earned any of that complexity.

Other teams move too late. They keep stacking prompts, middleware, tool logic, and ad hoc retries onto a higher-level abstraction even after the runtime has clearly become the main engineering concern.

Both mistakes are expensive.

The first creates architecture debt in the name of seriousness.

The second creates system fragility in the name of speed.

The goal is not to start simple forever.

The goal is to know when simple stops being honest.

The wrong reasons to move to LangGraph

Before we talk about the real signals, it helps to clear out the fake ones.

A lot of teams decide they need LangGraph for reasons that sound plausible but are not actually sufficient.

“Our app uses tools”

That is not enough.

LangChain is already built for tool-using agents and applications. Its current agent stack includes tools, middleware, structured output, and a graph-based runtime under the hood. Tool usage by itself does not imply you need to own orchestration directly.

“Our app is important”

Also not enough.

An app can matter to the business and still be well served by a higher-level abstraction. Importance is not the trigger. Runtime complexity is the trigger.

“Our app has multiple steps”

Still not enough.

A multi-step system can often remain a straightforward application problem if the steps are predictable, the branching is light, and failures do not require custom recovery semantics.

“Our app is an agent”

This is probably the most misleading one.

The LangGraph docs draw a very useful distinction here: workflows have predetermined code paths, while agents dynamically define their process and tool usage at runtime. A lot of systems people call “agents” are really workflows with a language model inside them.

“We want a more serious architecture”

This one is rarely said out loud, but it drives a lot of technical decisions.

A lower-level runtime is not automatically more correct.

It simply gives you more responsibility.

That responsibility only pays off when the product truly needs it.

The real trigger: runtime behavior becomes the product problem

The cleanest way to decide is this:

Move to LangGraph when your main engineering problem stops being application behavior and starts becoming runtime behavior.

That sounds abstract, so let us make it concrete.

If your day-to-day engineering work is still mostly about:

  • better prompts,
  • better tools,
  • better retrieval,
  • better output schemas,
  • better middleware,
  • better UX,
  • better response quality,

you are probably still in LangChain territory.

But if your hardest problems increasingly sound like:

  • “Why did it take that path?”
  • “How do we resume from step 7 after failure?”
  • “How do we pause for approval and continue later?”
  • “How do we branch differently based on this intermediate state?”
  • “How do we guarantee completed work is not repeated?”
  • “Where exactly should state live between steps?”

then you are no longer just shaping an AI application.

You are shaping a runtime.

That is precisely the space LangGraph is built for: long-running, stateful workflows or agents with durable execution, human-in-the-loop support, persistence, and debugging/deployment support.

Signal #1: Branching is no longer incidental

The first major signal is that branching stops being a small detail and starts becoming core system behavior.

At first, branching looks harmless:

  • if tool A fails, try tool B
  • if confidence is low, ask a follow-up
  • if the user asks for export, generate a file

That is still manageable in a higher-level app.

But eventually branching stops being occasional and becomes structural:

  • different request classes take materially different paths
  • some paths require tools, others require retrieval, others require approval
  • some paths loop back into evaluation or refinement
  • downstream steps depend on explicit intermediate results
  • execution paths become important to inspect and reason about

Once that happens, “do the next reasonable thing” is no longer enough.

You need the path itself to become an object you can think about.

This is exactly why the LangGraph docs emphasize workflows and agents as execution patterns rather than just model calls. Workflows operate in a designed order; agents dynamically choose their process; LangGraph exists to support those execution patterns with persistence and debugging.

A good litmus test:

If different classes of requests now require materially different execution paths, and those paths matter operationally, branching is no longer incidental.

That is LangGraph pressure.
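To make "branching becomes structural" concrete, here is a minimal plain-Python sketch (not the LangGraph API; all names and request classes are hypothetical). The point is that the execution path becomes an explicit value you can log, inspect, and test, instead of a side effect of prompt logic:

```python
# Hypothetical sketch: different request classes map to materially
# different execution paths, and the chosen path is inspectable data.

def classify(request: str) -> str:
    # Stand-in for a model-based classifier.
    if "export" in request:
        return "export"
    if "refund" in request:
        return "approval_required"
    return "qa"

PATHS = {
    "qa": ["retrieve", "draft", "respond"],
    "export": ["retrieve", "generate_file", "respond"],
    "approval_required": ["retrieve", "draft", "await_approval", "send"],
}

def plan(request: str) -> list:
    # The path itself is now an object you can reason about.
    return PATHS[classify(request)]

print(plan("please process my refund"))
```

Once the path is data like this, questions such as "why did it take that path?" have an answerable form.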

Signal #2: Conversation history is no longer an honest state model

A lot of AI apps start with implicit state:

  • the prior messages,
  • maybe some middleware context,
  • maybe a few inferred variables.

That works surprisingly well for a while.

But then the system grows, and conversation history starts doing jobs it was never meant to do:

  • storing workflow progress,
  • representing durable task state,
  • carrying partially completed work,
  • standing in for approval status,
  • acting as the only memory of what happened three steps ago,
  • encoding branch decisions implicitly rather than explicitly.

At that point, the transcript is no longer just context. It has become a bad database.

This is where LangGraph starts to matter because it treats state as a first-class runtime concern. Its persistence layer saves graph state as checkpoints at every step of execution, organized into threads, which then powers things like human-in-the-loop flows, conversational memory, time-travel debugging, and fault-tolerant execution.

That is a fundamentally different posture from “we will reconstruct what happened from the message list.”

A useful rule here is:

If your team is repeatedly asking what the state really is between steps, you probably need a runtime that models state explicitly.

That does not mean you need to model every variable in a graph tomorrow.

It means the abstraction boundary is starting to show strain.
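As a plain-Python illustration (field names are hypothetical; this is not LangGraph's actual state schema), the difference is between reconstructing facts from a transcript and reading them from an explicit state object:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: explicit runtime state instead of a transcript
# doing double duty as a database.

@dataclass
class RunState:
    step: str = "draft"                  # where execution currently is
    approved: bool = False               # approval status as data, not prose
    completed: list = field(default_factory=list)  # work already finished

state = RunState(step="review", completed=["classify", "draft"])

# No transcript parsing: "what is the state?" is a field lookup.
resumable = "draft" in state.completed and not state.approved
```

The transcript can still exist, but it stops being the only record of what happened.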

Signal #3: Resumability matters

This is one of the clearest signals of all.

A simple AI application can often get away with failure meaning “run it again.”

But a more serious system cannot always do that.

Once your system has to:

  • run for a long time,
  • perform expensive steps,
  • coordinate multiple stages,
  • survive service interruptions,
  • wait for external input,
  • or continue later without recomputing everything,

resumability becomes a product requirement, not an implementation luxury.

This is exactly where LangGraph’s durable execution story becomes important. The docs describe durable execution as preserving completed work so a process can resume without reprocessing earlier steps, even after a significant delay. They also describe persistence as the foundation for resuming from the last recorded state after system failures or human-in-the-loop pauses.

That changes how you design the system.

The question is no longer:
“Can the model do the task?”

The question becomes:
“Can the process survive interruption without becoming wasteful, duplicate-prone, or fragile?”

If the answer increasingly needs to be yes, LangGraph starts to make sense.

A clean signal is this:

If rerunning from scratch is no longer acceptable, resumability is now architecture.

And that is a LangGraph concern.
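The core idea of durable execution can be sketched in a few lines of plain Python (this is an illustration of the pattern, not LangGraph's persistence API; step names and the checkpoint format are hypothetical): completed work is checkpointed after every step, so a rerun continues instead of recomputing.

```python
import json, os, tempfile

STEPS = ["fetch", "summarize", "score", "publish"]

def run(checkpoint_path: str) -> list:
    # Load completed steps from the checkpoint, if one exists.
    done = []
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)["done"]
    executed = []
    for step in STEPS:
        if step in done:
            continue                      # completed work is never repeated
        executed.append(step)             # stand-in for real work
        done.append(step)
        with open(checkpoint_path, "w") as f:
            json.dump({"done": done}, f)  # checkpoint after every step
        if step == "score":
            break                         # simulate a crash mid-run
    return executed

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
first = run(path)    # runs fetch, summarize, score, then "crashes"
second = run(path)   # resumes: only publish remains
```

The expensive steps survive the interruption; only the remaining work runs.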

Signal #4: Human approval is now first-class

There is a big difference between:

  • asking the user a follow-up question in chat,

and:

  • pausing execution at a specific step,
  • preserving system state,
  • waiting for external approval,
  • then resuming the exact run later from the saved point.

Those are not the same thing.

Many teams blur them together at first because both involve “human input.” But operationally they are very different.

The LangGraph interrupts docs are very explicit here: interrupts pause graph execution at specific points, save graph state via the persistence layer, and wait indefinitely until execution is resumed with external input. This is positioned as a direct fit for human-in-the-loop patterns.

That matters for workflows like:

  • approval before sending an email,
  • legal or compliance review before an external action,
  • manager approval before a destructive operation,
  • analyst validation before the system proceeds to the next stage.

If those are now first-class parts of your product, then “just ask another message” is often not an honest representation of the system anymore.

A strong decision rule:

If a human approval point needs to be part of execution state, not just conversation flow, you are in LangGraph territory.
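The operational difference can be sketched in plain Python (a conceptual illustration of the interrupt pattern, not LangGraph's API; all names are hypothetical): the run pauses at a specific step, its state is preserved, and the same run resumes later once a decision arrives.

```python
# Hypothetical sketch of a first-class approval gate.

def run(store, approval=None):
    if store.get("paused"):
        if approval is None:
            return "still waiting"        # nothing resumes without input
        store["paused"] = False
        return "sent" if approval else "rejected"
    store["draft"] = "email body"         # work done before the gate
    store["paused"] = True                # pause here; state survives
    return "awaiting approval"

state = {}
first = run(state)                  # pauses at the approval gate
second = run(state, approval=True)  # resumes the same run and sends
```

Contrast this with "ask another message in chat": here the approval is part of execution state, not conversation flow.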

Signal #5: Failure recovery must become deliberate

At the application layer, failure handling often starts out as:

  • retry,
  • fallback,
  • return a graceful error,
  • ask the user to try again.

That is fine when failure is mostly local.

But there is a very different class of system where failure handling has to become explicit and differentiated:

  • tool timeout means retry,
  • validation failure means route to repair,
  • approval rejection means terminate or rework,
  • service outage means suspend and resume later,
  • partial completion means continue from checkpoint,
  • inconsistent intermediate state means branch into recovery logic.

Once failures have different meanings and demand different execution responses, the runtime itself is no longer invisible.

You need to decide not just whether the request failed, but where it failed, what state survived, and what path should follow.

That is one of the clearest signs that higher-level convenience is giving way to orchestration needs.

LangGraph’s docs do not present this as abstract theory. Its persistence, durable execution, and debugging model are specifically framed around surviving interruptions, fault tolerance, and resuming from saved state.

A practical heuristic:

If “error handling” now means designing recovery paths rather than adding retries, you are feeling the edge of LangChain abstraction.
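A plain-Python sketch of what "deliberate recovery" means in practice (illustrative only; the failure classes and responses are hypothetical): each failure class routes to a different execution response, instead of a blanket retry.

```python
# Hypothetical sketch: failures carry different meanings, so each
# class maps to a distinct recovery path.

class ToolTimeout(Exception): pass
class ValidationFailure(Exception): pass
class ApprovalRejected(Exception): pass

RECOVERY = {
    ToolTimeout: "retry",
    ValidationFailure: "route_to_repair",
    ApprovalRejected: "terminate",
}

def recover(error: Exception) -> str:
    # Unknown failures suspend the run for inspection rather than retrying.
    return RECOVERY.get(type(error), "suspend_and_resume_later")
```

The moment this table exists, the runtime is no longer invisible: you are designing recovery paths, not adding retries.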

Signal #6: “Why did it do that?” becomes a daily engineering question

This may be the strongest and most painful signal.

At first, debugging is simple enough:

  • the prompt was bad,
  • the tool schema was wrong,
  • retrieval fetched poor context,
  • the output parser failed,
  • a middleware rule misfired.

Those are still application-layer problems.

But in more complex systems, the hardest debugging question becomes:

Why did the system take that path?

Not:

  • why did it hallucinate,
  • why did this tool fail,

but:

  • why did it branch there,
  • why did it loop again,
  • why did it skip review,
  • why did it call the tool twice,
  • why did it stop early,
  • why did it resume from this point,
  • why did it carry this state forward?

That is an execution-trace question.

And once that becomes common, runtime design has entered the center of engineering work.

LangGraph is explicitly positioned with support for debugging and deployment for workflows and agents, and its persistence model supports checkpoint inspection and time-travel-style debugging.

That is not just a convenience feature.

It is a recognition that at some level of complexity, execution itself becomes the thing you need to debug.

A sharp rule of thumb:

If your postmortems increasingly focus on execution paths rather than individual model outputs, LangGraph is probably no longer optional.

Signal #7: You need stronger workflow honesty than “agent” gives you

One of the most useful ideas in the LangGraph docs is the distinction between workflows and agents:

  • workflows have predetermined code paths,
  • agents define their own process dynamically at runtime.

Why is this a signal?

Because many teams call something an “agent” when what they actually need is:

  • a mostly known path,
  • explicit checkpoints,
  • deterministic transitions,
  • bounded decision points,
  • clearly owned side effects.

In other words, a workflow.

If you are increasingly realizing that your “agent” is really:

  • classify → retrieve → draft → validate → approve → send,
  • or research → summarize → score → review → publish,

then the issue is not that the system got larger.

The issue is that the system deserves a more honest execution model.

LangGraph becomes valuable here because it lets you represent workflows and agents explicitly rather than pretending everything is one generalized loop.

That honesty is often where reliability starts.
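The honest version of such an "agent" can be as small as an explicit, ordered pipeline (plain-Python sketch; the step names are hypothetical stand-ins for real work):

```python
# Hypothetical sketch: the "agent" is really a workflow, i.e. a
# predetermined path with deterministic, visible transitions.

def classify(s): return {**s, "kind": "report"}
def retrieve(s): return {**s, "context": ["doc-1"]}
def draft(s):    return {**s, "draft": "draft text"}
def validate(s): return {**s, "valid": True}

PIPELINE = [classify, retrieve, draft, validate]

def run_workflow(request):
    state = {"request": request}
    for step in PIPELINE:
        state = step(state)   # each transition is inspectable and testable
    return state

result = run_workflow("quarterly summary")
```

Nothing here pretends to be a generalized loop; the structure states exactly what the system does.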

The shift in mindset: from app logic to runtime design

The deepest transition here is not technical. It is conceptual.

At the LangChain layer, you are mostly asking:

  • What should the model do?
  • What tools should it have?
  • What outputs do I need?
  • What retrieval context helps?
  • What middleware improves safety and quality?

At the LangGraph layer, you start asking a different class of question:

  • What are the steps?
  • What state moves between them?
  • What transitions are allowed?
  • What gets persisted?
  • Where can the process pause?
  • What resumes from where?
  • What happens after partial failure?
  • How do we inspect a run as a process rather than a transcript?

That is not “more code for the same thing.”

That is a different layer of ownership.

And the official LangChain docs describe the stack in exactly this layered way: LangChain as the higher-level framework, LangGraph as the low-level orchestration runtime for long-running, stateful agents, with LangChain agents built on LangGraph primitives when deeper customization is needed.

Once you feel that shift, the decision becomes easier.

You are not moving because graphs are fashionable.

You are moving because the runtime has become part of the product.

A practical decision framework

If you want the shortest possible decision framework, use this one.

Stay in LangChain if:

  • your process is still evolving quickly,
  • tool calling and retrieval are the main concerns,
  • failures are mostly local,
  • branching is light,
  • implicit state is still honest enough,
  • rerunning from scratch is acceptable,
  • human interaction mostly lives in the normal chat flow,
  • your main problems are still product-quality problems.

Move toward LangGraph if:

  • branching paths matter operationally,
  • state must be explicit across steps,
  • resumability is a product requirement,
  • approval checkpoints are first-class,
  • failure recovery needs multiple distinct paths,
  • execution debugging is now a serious engineering problem,
  • your “agent” is increasingly a workflow that deserves explicit structure.

This is the line that matters.

Not importance.

Not hype.

Not number of tools.

Not how advanced your architecture diagram looks.

Just this:

Has execution itself become something we need to design and govern?

If yes, LangGraph is no longer a power-user option.

It is becoming the right tool.

What this means for the rest of the stack

This transition also clarifies the broader LangChain story.

LangChain is where you stay when the application layer is still the honest center of gravity.

LangGraph is where you go when runtime behavior becomes the hard part.

And only after that, when work becomes longer-horizon, decomposable, artifact-heavy, and context-complex, does it make sense to look seriously at Deep Agents as a harness on top of LangGraph. LangChain’s product docs frame these as different layers: high-level frameworks on top of runtimes, with LangGraph as the low-level orchestration layer and Deep Agents as a harness for more complex agent behavior.

That sequencing matters.

Because it keeps teams from skipping the architectural question that actually determines success.

Final thought

You do not move to LangGraph because your app got bigger.

You move when the abstraction stops being honest.

When branching matters.

When state matters.

When resumability matters.

When approval matters.

When recovery matters.

When debugging the path matters.

That is the moment LangChain starts to bend.

And that is exactly the moment LangGraph starts to make sense.

Incremental Backup in PostgreSQL 17: A Practical Guide

2026-04-02 15:42:24

Introduction

PostgreSQL 17 introduced native incremental backup support, a major leap forward in database backup strategy. Rather than duplicating the entire dataset every time, incremental backup captures only the data blocks that have changed since the last backup (full or incremental). This drastically reduces backup time, storage consumption, and system overhead. Prior to PostgreSQL 17, achieving this required third-party tools such as pgBackRest or Barman, which added configuration and maintenance overhead. With native support now built into PostgreSQL, the process has become significantly more streamlined.

What Is Incremental Backup?

An incremental backup records only the changes made since the previous backup — whether that was a full backup or an earlier incremental one. Compared to full backups that copy all data regardless of what has changed, incremental backups are leaner, faster, and more storage-efficient.

Key Features in PostgreSQL 17

Native Integration - Incremental backup is now part of PostgreSQL's core, removing the need for external tools for this functionality.
Storage Efficiency - Only modified data pages are backed up, keeping storage usage minimal.
Faster Backups and Recovery - Since less data is processed each time, backup creation is quicker and recovery is streamlined by applying only the required changes on top of the full backup.

How It Works: Step-by-Step

Step 1 - Enable WAL Summarization In the postgresql.conf file, enable the summarize_wal parameter by setting it to on. This activates the WAL summarizer process, which tracks which data blocks have been modified. It can be enabled on either a primary or a standby server. It is set to off by default.
Step 2 - Take a Full Backup Use pg_basebackup to create the initial full backup. This serves as the foundation for all subsequent incremental backups.
Step 3 - Take the First Incremental Backup After inserting or modifying data, run pg_basebackup again with the --incremental flag, pointing to the backup_manifest file from the full backup. This tells PostgreSQL what the baseline was and allows it to capture only the changes since then.
Step 4 - Take Additional Incremental Backups After further data changes, take another incremental backup — this time referencing the backup_manifest from the first incremental backup. Each incremental backup chains to the previous one using its manifest file.
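Assuming backup directories under /backups (all paths here are illustrative), the four steps above might look like this on the command line:

```shell
# Step 1: enable WAL summarization (takes effect on configuration reload)
psql -c "ALTER SYSTEM SET summarize_wal = on;" -c "SELECT pg_reload_conf();"

# Step 2: initial full backup, the foundation of the chain
pg_basebackup -D /backups/full --checkpoint=fast

# Step 3: first incremental, referencing the full backup's manifest
pg_basebackup -D /backups/incr1 \
  --incremental=/backups/full/backup_manifest

# Step 4: the next incremental chains to the previous incremental's manifest
pg_basebackup -D /backups/incr2 \
  --incremental=/backups/incr1/backup_manifest
```

Each backup directory contains its own backup_manifest, which is what the next incremental points at.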

Restoring the Backups

Restoration is handled by pg_combinebackup, a new utility introduced in PostgreSQL 17. It merges the full backup and all incremental backups into a single, usable backup directory. The backups must be provided in chronological order — starting from the full backup, followed by each incremental in sequence. After combining, you adjust the port in the restored directory's postgresql.conf and start the database server using that data directory. Upon verification, all records from the full backup and every incremental backup are present and intact.
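In command form, the restore described above might look like this (paths and the port are illustrative):

```shell
# Combine the chain, oldest first, into a single restorable data directory
pg_combinebackup /backups/full /backups/incr1 /backups/incr2 \
  -o /restore/data

# Adjust the port in /restore/data/postgresql.conf (or override it here),
# then start the server from the combined directory
pg_ctl -D /restore/data -o "-p 5444" start
```

If any backup in the chain is missing or out of order, pg_combinebackup refuses to produce a result rather than building an inconsistent directory.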

What Is pg_combinebackup?

pg_combinebackup is the companion utility that reconstructs a complete, restorable backup from the chain of incremental backups. It automates the merging process and validates the backup chain for consistency, eliminating the need for manual intervention during restoration.

Advantages of Incremental Backup

Cost Savings - Reduced storage usage means lower costs, whether on cloud or on-premises infrastructure.
Improved Performance - Less data transfer reduces system load, making it particularly valuable during peak operational hours.
Scalability - Well-suited for large databases or environments with frequent data changes where full backups would be impractical.

Limitations to Be Aware Of

summarize_wal must be enabled for this feature to work.
Incremental backups can only be taken with pg_basebackup, and the server must already have summarize_wal enabled (on a primary or a standby, as noted in Step 1) before the reference backup is taken.
Restoration depends on a complete, unbroken backup chain. If any backup in the chain is missing, recovery fails.
Backups operate at the cluster level, with no support for per-table backups.
Proper retention of WAL and summary files is required for the feature to function correctly.

Conclusion

Native incremental backup in PostgreSQL 17 addresses two longstanding pain points, storage waste and slow backup windows, while laying a stronger foundation for disaster recovery. The combination of pg_basebackup (with the --incremental flag) and pg_combinebackup makes the entire backup-and-restore workflow cleaner and more efficient, especially for large-scale, high-transaction environments.

PetStore: Pure Java for UI and Safe SQL Mapping

2026-04-02 15:41:35

I would like to introduce the PetStore sample application, which demonstrates a pure "Java-first" approach to web interface development. The entire project is built on the idea of maximum type safety and clarity, achieved through two modules of the lightweight Ujorm 3 library. These effectively eliminate common abstraction layers that often complicate development and debugging.

Screenshot

🛠️ Two Pillars of Ujorm3

1. UI Creation without Templates (ujo-web)

We have replaced traditional engines like Thymeleaf or JSP with pure Java code.

  • Type-safe rendering: HTML is generated using the HtmlElement builder and try-with-resources blocks. This approach allows writing Java code in a natural tree structure that faithfully mirrors the HTML structure.
  • Refactoring with full IDE support: Since the UI is defined in Java, everything you are used to works – autocomplete (IntelliSense), instant refactoring (e.g., extracting a table into a renderTable() method), and correctness checking while writing.
  • No more parameter errors: The HttpParameter interface uses enums to define web parameters. This practically eliminates typos in form field names, which in standard solutions only manifest at runtime.

2. Modern Database Handling (ujo-orm)

Forget about complex XML mapping or runtime errors in SQL queries.

  • Using Java Records: Standard Java records serve as domain objects (Pet, Category). They are naturally immutable, clean, and fully compatible with @Table and @Column annotations.
  • Type-Safe SQL Builder: An annotation processor generates metamodels (e.g., MetaPet) during compilation. The compiler catches an error in a column name, not an application crash in production.
  • SQL under control: No unexpected LazyInitializationException or hidden N+1 problems. You have absolute control over every SqlQuery. Moreover, you can easily map results from native SQL back to records using the label() method.

📁 Code Sample (PetServlet)

The project is designed with an emphasis on straightforwardness.
The following example from a stateless servlet demonstrates how elegantly logic, parameters, and HTML generation can be connected:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
    var ctx = HttpContext.ofServlet(req, resp);
    var contextPath = req.getContextPath();
    var action = ctx.parameter(ACTION, Action::paramValueOf);
    var petId = ctx.parameter(PET_ID, Long::parseLong);

    var pets = services.getPets();
    var categories = services.getCategories();
    var petToEdit = (Action.EDIT.equals(action) && petId != null)
            ? services.getPetById(petId).orElse(null)
            : null;

    try (var html = HtmlElement.of(ctx, BOOTSTRAP_CSS)) {
        try (var body = html.addBody(Css.container, Css.mt5)) {
            renderHeader(body, contextPath);
            renderTable(body, pets);
            renderForm(body, petToEdit, categories);
        }
    }
}

Here is what a native SQL query looks like in pure Java:

static final EntityManager<Pet, Long> PET_EM = 
             EntityManager.of(Pet.class);

public List<Pet> findAll() {
    var sql = """
        SELECT p.id AS ${p.id}
        , p.name    AS ${p.name}
        , p.status  AS ${p.status}
        , c.id      AS ${c.id}
        , c.name    AS ${c.name}
        FROM pet p
        LEFT JOIN category c ON c.id = p.category_id
        WHERE p.id >= :id
        ORDER BY p.id
        """;

    return SqlQuery.run(connection.get(), query -> query
        .sql(sql)
        .label("p.id", MetaPet.id)
        .label("p.name", MetaPet.name)
        .label("p.status", MetaPet.status)
        .label("c.id", MetaPet.category, MetaCategory.id)
        .label("c.name", MetaPet.category, MetaCategory.name)
        .bind("id", 1L)
        .streamMap(PET_EM.mapper())
        .toList());
}

💡 Why Choose This Approach?

This architecture represents an interesting alternative for developers who are tired of heavy JPA frameworks or bloated frontend technologies.

Where Ujorm PetStore shines most:

  • B2B and administrative applications: Where development speed and long-term maintainability are important.
  • Microservices: Thanks to minimal overhead and fast startup.
  • Projects with HTMX: It perfectly complements modern trends of returning to server-side rendering.

The "Java-First" philosophy drastically reduces context switching between Java, SQL, XML, and various templating languages.
Everything you need is under the protection of the compiler.

🚀 Try It Locally

The application utilizes the best of the current ecosystem:

  • Java 25
  • Spring Boot 3.5.0
  • H2 Database (In-memory)

All you need is JDK 25 and Maven installed, then just run:

mvn spring-boot:run

The application will start at http://localhost:8080.

Resources and Links:

  • PetServlet.java – A stateless Servlet acting as both Controller and View. It handles HTTP communication and builds the HTML.
  • Dao.java – Data access layer integrating Spring JDBC with Ujorm EntityManager.
  • Ujorm 3 Library on GitHub – Official library repository.
  • ORM Benchmarks – How this approach compares to the competition.

Does it make sense to you to have the UI and DB layers so tightly coupled with the compiler? I will be glad for any technical feedback!

We crammed a 24GB AI 3D-generation pipeline into a completely offline desktop app (and the Demo is live)

2026-04-02 15:39:26

If you are an indie game developer right now, you know the pain of 3D asset generation.

The current landscape of AI 3D tools is a nightmare of expensive monthly SaaS subscriptions, API paywalls, and cloud-based platforms that claim ownership over the meshes you generate. We got tired of it.

At Odyssey Game Studios, we decided to build the anti-SaaS solution: Jupetar. It’s a completely offline, local-compute 2D-to-3D asset generation pipeline. You pay once, you own the software, and it runs entirely on your local GPU.

After weeks of engineering (and battling the Steam review queue), the official Jupetar Demo is now live on Steam. Here is a look under the hood at how we built it, and how you can test it right now.

The "No-Ping" Architecture
The biggest challenge in building local AI wrappers is preventing the underlying scripts from constantly trying to phone home.

Jupetar relies on a massive 24GB local models folder containing the Hunyuan DiT (geometry) and Paint VAE (textures) weights. Standard HuggingFace implementations constantly try to ping the cloud to check for version updates or auto-heal missing files, which Valve (and privacy-conscious devs) understandably flag.

We surgically killed the huggingface_hub auto-heal scripts and forced strict offline environment variables directly into the pipeline:

import os

# Force Hugging Face libraries to use only local files:
# no update checks, no auto-heal downloads.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["DIFFUSERS_OFFLINE"] = "1"

The Ethernet Test: The ultimate proof of our architecture. You can literally unplug your ethernet cable, drop concept art into the UI, and Jupetar will still generate a fully textured .glb file.

Solving Local VRAM Fragmentation
Loading a massive Diffusion Transformer, XAtlas for UV unwrapping, and an FP32 rasterizer into a single localized PyTorch instance usually results in catastrophic memory leaks and Out-of-Memory (OOM) crashes on standard consumer GPUs.

To make this run stably on an 8GB-12GB VRAM card (our baseline is an RTX 3080 10GB), we had to force PyTorch expandable segments to mitigate memory fragmentation during the heaviest phase: the high-res 4K texture upscaling.
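That allocator tweak is typically applied through PyTorch's allocator configuration environment variable, set before the first CUDA allocation (the snippet below is a minimal sketch of that setting):

```python
import os

# Must be set before CUDA is initialized; expandable segments let the
# allocator grow existing blocks instead of fragmenting VRAM.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```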

Our overarching C# wrapper handles the UI and hardware telemetry, acting as an orchestrator that spins up the Python environment, executes the generation, and actively flushes the VRAM at each pipeline stage.

What the Pipeline Actually Does
When you drop an image into Jupetar, it doesn't just spit out a messy point cloud. It executes a full, game-ready pipeline:

Geometry: Generates the raw mesh via Hunyuan3D.

Optimization: Runs an adaptive decimation pass (FaceReducer) to crunch meshes down to target poly-counts (e.g., 25k for weapons, 80k for humanoid creatures).

UV Mapping: Integrates XAtlas for automated, engine-ready UV unwrapping.

Texturing & PBR: Since native DiT vertex colors are blurry, the engine runs a VRAM-safe 4K tiled upscaler and synthesizes a procedural PBR normal map locally using Sobel derivatives.

Export: Packages everything into a standard .glb ready for Unity, Unreal, or Godot.

The Demo is Live
We just pushed the Demo build live on Steam.

Because we don't have a centralized server to authenticate accounts, we built a localized trial logic directly into the app. The Demo gives you 2 completely free generations to test the pipeline, benchmark your GPU's VRAM, and inspect the final 3D topology for yourself.

No credit card, no account creation, no cloud processing.

🎮 https://store.steampowered.com/app/4346660/Jupetar/?l=english

If you test it out, let me know how it handles on your specific GPU. We are continuing to optimize the C# orchestrator and VRAM management leading up to the V1.0 launch, and I'd love to hear feedback from the community!

npm, March 31: RAT in Axios and Half a Million Lines of Claude Code on GitHub

2026-04-02 15:35:47

I wake up in the morning, open my feed — and right away, two incidents. Both about npm. Both serious. And both happened on the same day.

The first one — in Axios (yes, the one that's everywhere) — spread a RAT trojan for three hours. The second — Anthropic accidentally published the full source code of Claude Code in a public npm package. Half a million lines with prompts and architecture.

Good morning, indeed :)

Axios: 3 hours was more than enough

What happened

Someone hijacked the npm account of Jason Saayman (jasonsaayman) — the main maintainer of Axios. They changed the linked email and manually published two malicious versions: 1.14.1 on the latest branch and 0.30.4 on the legacy branch.

The versions were live in the public registry from about 00:21 to 03:15 UTC on March 31. Three hours. For a package with over 100 million weekly downloads, that's more than enough.

How the attack worked

The nastiest part: the Axios code itself wasn't touched. Not a single line. Open the sources — everything looks clean. The trick was in package.json.

They added a dependency: plain-crypto-js. The package was created the day before, on March 30. The name looks innocent — just some crypto utility, who would look twice? It's never imported anywhere in the Axios code. Not once.

So why was it there?

  1. npm install pulls in all dependencies from package.json
  2. plain-crypto-js contains a postinstall script
  3. The script downloads the second-stage payload from a C2 server
  4. It deploys a cross-platform RAT trojan (for macOS, Windows, Linux)
  5. After installation, the script cleans up after itself — it replaces package.json with a clean version

That last point is especially nasty. The trojan is already running, but when you check package.json, everything looks normal. No trace of plain-crypto-js.

Phantom dependency

This is called a phantom dependency — a ghost dependency. It's not used in the code, not imported, and exists only for the side effect during installation. Normal code review won't catch it because the .js files are clean.

You scan the sources for suspicious code? Good. But do you check package.json for new dependencies? Or postinstall scripts in transitive dependencies?
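One concrete way to audit for this class of attack is to list every installed package that declares an install-time lifecycle hook. Here is a stdlib-only sketch; it is a first pass, not a replacement for a real supply-chain scanner.

```python
import json
import pathlib

def find_install_scripts(node_modules: str = "node_modules") -> list:
    """List installed packages that declare install-time lifecycle hooks.

    A phantom dependency like plain-crypto-js does its damage purely via
    these hooks, so auditing them catches what .js source review misses.
    """
    hooks = ("preinstall", "install", "postinstall")
    hits = []
    for pkg_json in pathlib.Path(node_modules).glob("**/package.json"):
        try:
            meta = json.loads(pkg_json.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # malformed or unreadable manifest; skip it
        scripts = meta.get("scripts") or {}
        found = {h: scripts[h] for h in hooks if h in scripts}
        if found:
            hits.append((meta.get("name", pkg_json.parent.name), found))
    return hits
```

Running `npm install --ignore-scripts` (or setting `ignore-scripts=true` in .npmrc) blocks these hooks entirely, at the cost of breaking packages that legitimately need them.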

What to do right now

Safe versions:

Branch   Malicious   Safe
latest   1.14.1      1.14.0
legacy   0.30.4      0.30.3

If your project installed exactly those versions in the window from 00:21 to 03:15 UTC on March 31 — treat the system as compromised. Not "possibly". Compromised.

You need to:

  • Check package-lock.json / yarn.lock for axios versions 1.14.1 or 0.30.4
  • Search for "plain-crypto-js" in the dependency tree
  • If you find it — the machine where npm install ran is infected
  • Rotate all keys, secrets, and tokens — the full set
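The lockfile check can be scripted. This naive substring scan is only a first pass: lockfile formats pin versions in different shapes, so a clean result here does not by itself prove you are safe.

```python
import pathlib

def scan_lockfile(path: str, indicators=("plain-crypto-js",)) -> list:
    """Crude first-pass scan of a lockfile for compromise indicators.

    Substring matching is deliberately naive. Also verify the resolved
    axios version by hand (malicious: 1.14.1 on latest, 0.30.4 on legacy).
    """
    text = pathlib.Path(path).read_text(encoding="utf-8", errors="replace")
    return [marker for marker in indicators if marker in text]
```

Any hit for plain-crypto-js in package-lock.json or yarn.lock means the machine that ran npm install should be treated as compromised.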

StepSecurity, Socket, Endor Labs, Aikido, and Huntress have confirmed the details and published IOCs.

Who was behind the attack

Several sources — including Google and Reuters — point to the North Korean group UNC1069 / Lazarus. Supply-chain attacks via maintainer account takeover are their classic playbook.

Important note: the npm registry itself wasn't hacked. The infrastructure wasn't affected. The attackers simply logged in under a real account and ran npm publish. From the system's point of view, it was completely legitimate.

Claude Code: when someone forgot about .npmignore

What happened

On the same day, Anthropic released a new version of their CLI agent, @anthropic-ai/claude-code, to npm. A routine release. But the package included a cli.js.map file weighing 59.8 MB.

That's a sourcemap. And through it, you can restore the entire original source code of the project.
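Recovering sources from such a map is trivial when the build embeds `sourcesContent`: a sourcemap is plain JSON. A minimal sketch follows; the path sanitization is deliberately simplistic, and this is an illustration of the mechanism, not a claim about how the researchers actually worked.

```python
import json
import pathlib

def extract_sources(map_path: str, out_dir: str) -> int:
    """Dump every original file embedded in a sourcemap's sourcesContent.

    When sourcesContent is present, the entire pre-bundle source tree can
    be written back out file by file. Returns the number of files written.
    """
    sm = json.loads(pathlib.Path(map_path).read_text(encoding="utf-8"))
    written = 0
    for rel, src in zip(sm.get("sources", []), sm.get("sourcesContent") or []):
        if src is None:
            continue  # this entry has a path but no embedded content
        # Strip URL-ish prefixes and directory traversal before writing.
        rel = rel.split("://", 1)[-1].replace("..", "").lstrip("/")
        dest = pathlib.Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(src, encoding="utf-8")
        written += 1
    return written
```

With ~1,900 embedded files, one loop like this is all it takes to go from a single .map artifact to a browsable source tree.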

The scale

From that one file they recovered:

  • ~512,000 lines of TypeScript
  • ~1,900 files
  • The agent's internal logic
  • System prompts
  • Memory and planning mechanisms, tool handling
  • Unannounced features

The sourcemap also pointed to a ZIP archive in Anthropic's public R2 bucket. Security researcher Chaofan Shou was the first to post about it on X. The code was mirrored on GitHub almost instantly.

The cause

Anthropic confirmed it: a mistake in the build process. They forgot to exclude the source maps via .npmignore. That's it.

No hack. No user data leak. Just a missing line in the build config.
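An allowlist guards against this class of mistake better than a blocklist: npm's `files` field ships only what is listed, regardless of what else the build directory contains. A hypothetical manifest fragment (the package name is invented):

```json
{
  "name": "some-cli",
  "files": [
    "cli.js",
    "bin/"
  ]
}
```

With `files` set, a stray cli.js.map never enters the tarball, and `npm pack --dry-run` previews exactly what would be published before you run npm publish.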

The version was quickly removed and a fix was released. But the code had already spread.

A Korean developer at 4 AM and 50,000 stars in two hours

While Anthropic was sending DMCA takedowns to mirrors of the leaked code, one person in Korea took a different route.

Sigrid Jin (GitHub: instructkr) — a well-known power user of Claude Code. How well-known? According to the Wall Street Journal, in the past year he generated more than 25 billion tokens through the tool. Twenty-five billion. The guy clearly knew the architecture inside out.

On the morning of April 1, around 4 AM local time, Jin woke up to notifications about the leak. He saw Anthropic taking down mirrors of the original code and made a decision: don't copy — rewrite.

Clean room

Jin didn't fork the leaked TypeScript. That would have been taken down by DMCA in a day. Instead, he did a clean-room reimplementation — rewriting the key patterns and architecture from scratch, this time in Python:

  • Agent harness
  • Tools
  • Memory and planning
  • Swarms of sub-agents

To speed things up, he used the AI tool oh-my-codex (OmX). The repository claw-code was live before sunrise.

Different language, different code, no copy-paste — legally, this is a new creative work. Gergely Orosz of The Pragmatic Engineer and other lawyers and developers agreed: such a "clean rewrite" is legally solid.

Anthropic couldn't take the repo down via DMCA. It's still alive.

GitHub record

Then the madness began.

Time after publication   Stars
~2 hours                 50,000
~24 hours                100,000+

Forks in the first day: 50,000–58,000

50,000 stars in two hours. According to the author and several media outlets — the fastest-growing repository in GitHub history.

Later, Jin started porting the same architecture to Rust — that version also quickly gained tens of thousands of stars.

In essence, the community turned a corporate leak into a fully open clone of an AI agent in just a few hours. Jin himself later described the project's goal simply: "Better Harness Tools" — better tools that actually get things done.

You can argue whether it's ethical to build on someone else's leak. But legally — it's clean. And 100,000 stars in a day show that demand for an open alternative was huge long before March 31.

POSSE:

Na Derevo

DoBu — Documentation Builder for the Ascoos Ecosystem

2026-04-02 15:33:57

A JML‑inspired Documentation DSL for multilingual docblocks

DoBu (DOcumentation BUilder) is a Documentation DSL designed for the Ascoos OS ecosystem.

It is not PHPDoc.

It is not Doxygen.

It is not MkDocs or Docusaurus.

DoBu is a semantic documentation layer that lives inside simple docblocks such as:

/* ... */

and transforms documentation structure into:

  • structured metadata
  • multilingual documentation
  • AST‑friendly nodes
  • exportable formats (Markdown, HTML, JSON, etc.)
  • documentation suitable for IDEs, tools, and Ascoos OS subsystems

DoBu can generate documentation text for any programming language that supports block comments.

Why DoBu Was Created

Ascoos OS is a kernel containing:

  • hundreds of encrypted classes
  • a DSL/AST macro engine
  • CiC interpreters
  • JML markup
  • AI/NLP subsystems
  • IoT handlers
  • mathematical and scientific libraries
  • proprietary security layers

No existing documentation tool could:

  • support multilingual metadata
  • embed mathematical formulas (LaTeX, MathML) with analysis
  • describe numerical behavior
  • include performance metrics
  • support cross‑references
  • generate documentation without exposing source code

Thus, DoBu was created as the semantic documentation layer of Ascoos OS.

What DoBu Is

  • Documentation DSL
  • Semantic metadata language
  • AST‑friendly docblock interpreter
  • Multilingual documentation engine
  • Extensible schema system
  • JML‑inspired syntax
  • Language‑agnostic
  • Kernel‑level documentation layer

Basic Syntax

/*
dobu {
    class:id(`tmyclass`),name(`TMyClass`),extends(`TObject`),namespace(`ASCOOS\OS\Kernel\MyClass`) {
        summary:langs {
            en {`Creating a new Ascoos OS class.`}
            el {`Δημιουργία μιας νέας Ascoos OS κλάσης.`}
        }
    }
}
*/

Class Documentation Example

/*
dobu {
    class:id(`tmyclass`),name(`TMyClass`),extends(`TObject`),version(`0.0.1`) {
        summary:langs {
            en {`Creating a new Ascoos OS class.`}
            el {`Δημιουργία μιας νέας Ascoos OS κλάσης.`}
        }
    }
}
*/

Method Example with Math & Behavioral Metadata

DoBu supports:

  • mathematical formulas (LaTeX, MathML)
  • numerical behavior
  • performance metrics
  • cross‑references
  • verification cases
  • multilingual descriptions
/*
dobu {
    method:id(`blackscholesputdividend`),name(`blackScholesPutDividend`),return(`float`) {
        summary:langs {
            en {`Prices a European put option with continuous dividend yield.`}
            el {`Αποτιμά ένα ευρωπαϊκό put option με συνεχή μερισματική απόδοση.`}
        },
        formula:type(`latex`),value(`\[ P = K e^{-rT} N(-d_2) - S_0 e^{-qT} N(-d_1) \]`)
    }
}
*/

Full Class Example

See the create-dobu-class.php file for a full demonstration inside a PHP class.

Multilingual Documentation

langs {
    en {`English text`}
    el {`Ελληνικό κείμενο`}
}
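As a toy illustration of how tooling might consume such a block, here is a naive extractor. This is not the actual Ascoos/DoBu parser; it assumes each entry has the shape `code {`text`}` with no nested braces or backticks inside the quoted text.

```python
import re

# Matches one `code {`text`}` entry in a DoBu langs body.
LANG_ENTRY = re.compile(r"(\w+)\s*\{`([^`]*)`\}", re.DOTALL)

def parse_langs(block: str) -> dict:
    """Extract {language code: text} pairs from a DoBu langs body."""
    return {code: text.strip() for code, text in LANG_ENTRY.findall(block)}
```

A real implementation would work on the AST rather than raw text, but the mapping it produces is the same: one documentation string per language code.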

Cross‑References

see:langs {
    all {`
       • blackScholesCallDividend() 
       • binomialPutEuropean()`
    }
}

Export Formats

DoBu can export documentation to:

  • HTML
  • INI
  • JSON
  • Markdown
  • TOML
  • YAML
  • XML
  • and many more formats

Ideal for Proprietary Kernels

DoBu works even when:

  • source code is not available (stubs only)
  • documentation must not reveal internal logic
  • structured, machine‑readable documentation is required

Relationship with Ascoos OS

DoBu is built using Ascoos OS classes and is used for:

  • documenting Ascoos OS itself
  • documenting generated code
  • producing documentation for other languages