2026-04-02 15:43:14
Most teams do not outgrow LangChain because they added more tools.
They outgrow it when execution itself becomes something they need to design, inspect, recover, and govern. LangChain’s current agent APIs run on LangGraph under the hood, while LangGraph is positioned as the lower-level orchestration runtime for persistence, streaming, debugging, and deployment-oriented workflows and agents.
That is the transition this article is about.
Not syntax.
Not diagrams.
Not “graphs are more advanced.”
Not “real systems need more complexity.”
This is a playbook for a narrower and much more useful question:
How do you know your AI app is no longer just an application problem, but a runtime problem?
That is the real boundary between staying comfortably in LangChain and moving into LangGraph.
And that boundary matters, because teams get this wrong in both directions.
Some teams move too early. They introduce explicit state, branching graphs, checkpointing, and recovery logic before the product has earned any of that complexity.
Other teams move too late. They keep stacking prompts, middleware, tool logic, and ad hoc retries onto a higher-level abstraction even after the runtime has clearly become the main engineering concern.
Both mistakes are expensive.
The first creates architecture debt in the name of seriousness.
The second creates system fragility in the name of speed.
The goal is not to start simple forever.
The goal is to know when simple stops being honest.
Before we talk about the real signals, it helps to clear out the fake ones.
A lot of teams decide they need LangGraph for reasons that sound plausible but are not actually sufficient.
“We use a lot of tools.” That is not enough.
LangChain is already built for tool-using agents and applications. Its current agent stack includes tools, middleware, structured output, and a graph-based runtime under the hood. Tool usage by itself does not imply you need to own orchestration directly.
“This app is critical to the business.” Also not enough.
An app can matter to the business and still be well served by a higher-level abstraction. Importance is not the trigger. Runtime complexity is the trigger.
“Our system has multiple steps.” Still not enough.
A multi-step system can often remain a straightforward application problem if the steps are predictable, the branching is light, and failures do not require custom recovery semantics.
“We are building an agent.” This is probably the most misleading one.
The LangGraph docs draw a very useful distinction here: workflows have predetermined code paths, while agents dynamically define their process and tool usage at runtime. A lot of systems people call “agents” are really workflows with a language model inside them.
“A lower-level framework feels more serious.” This one is rarely said out loud, but it drives a lot of technical decisions.
A lower-level runtime is not automatically more correct.
It simply gives you more responsibility.
That responsibility only pays off when the product truly needs it.
The cleanest way to decide is this:
Move to LangGraph when your main engineering problem stops being application behavior and starts becoming runtime behavior.
That sounds abstract, so let us make it concrete.
If your day-to-day engineering work is still mostly about application behavior, prompts, tool definitions, middleware, and structured output, you are probably still in LangChain territory.
But if your hardest problems increasingly involve branching, state, resumability, recovery, and debugging execution paths, then you are no longer just shaping an AI application.
You are shaping a runtime.
That is precisely the space LangGraph is built for: long-running, stateful workflows or agents with durable execution, human-in-the-loop support, persistence, and debugging/deployment support.
The first major signal is that branching stops being a small detail and starts becoming core system behavior.
At first, branching looks harmless: a conditional here, a fallback there.
That is still manageable in a higher-level app.
But eventually branching stops being occasional and becomes structural: different classes of requests demand materially different execution paths.
Once that happens, “do the next reasonable thing” is no longer enough.
You need the path itself to become an object you can think about.
This is exactly why the LangGraph docs emphasize workflows and agents as execution patterns rather than just model calls. Workflows operate in a designed order; agents dynamically choose their process; LangGraph exists to support those execution patterns with persistence and debugging.
A good litmus test:
If different classes of requests now require materially different execution paths, and those paths matter operationally, branching is no longer incidental.
That is LangGraph pressure.
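To make the idea concrete, here is a minimal framework-free Python sketch (not the LangGraph API; every name here is invented for illustration) of what it means for the execution path to become an object you can inspect:

```python
# Toy sketch: the executed path is recorded as data, so it can be
# logged, asserted on in tests, and compared across request classes.
def classify(request: dict) -> str:
    # hypothetical routing rule
    return "review_path" if request.get("needs_review") else "fast_path"

def run(request: dict) -> list[str]:
    path = ["start"]
    path.append(classify(request))
    # ... each branch would do its own work here ...
    path.append("done")
    return path

print(run({"needs_review": True}))   # ['start', 'review_path', 'done']
print(run({}))                       # ['start', 'fast_path', 'done']
```

Once the path is data rather than an accident of control flow, questions like "which requests took the review path last week" become answerable.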
A lot of AI apps start with implicit state: the conversation history is the state.
That works surprisingly well for a while.
But then the system grows, and conversation history starts doing jobs it was never meant to do: carrying intermediate results, recording decisions, and standing in as the record of what happened between steps.
At that point, the transcript is no longer just context. It has become a bad database.
This is where LangGraph starts to matter because it treats state as a first-class runtime concern. Its persistence layer saves graph state as checkpoints at every step of execution, organized into threads, which then powers things like human-in-the-loop flows, conversational memory, time-travel debugging, and fault-tolerant execution.
That is a fundamentally different posture from “we will reconstruct what happened from the message list.”
A useful rule here is:
If your team is repeatedly asking what the state really is between steps, you probably need a runtime that models state explicitly.
That does not mean you need to model every variable in a graph tomorrow.
It means the abstraction boundary is starting to show strain.
This is one of the clearest signals of all.
A simple AI application can often get away with failure meaning “run it again.”
But a more serious system cannot always do that.
Once your system has to pause, wait on external events, or survive interruption partway through,
resumability becomes a product requirement, not an implementation luxury.
This is exactly where LangGraph’s durable execution story becomes important. The docs describe durable execution as preserving completed work so a process can resume without reprocessing earlier steps, even after a significant delay. They also describe persistence as the foundation for resuming from the last recorded state after system failures or human-in-the-loop pauses.
That changes how you design the system.
The question is no longer:
“Can the model do the task?”
The question becomes:
“Can the process survive interruption without becoming wasteful, duplicate-prone, or fragile?”
If the answer increasingly needs to be yes, LangGraph starts to make sense.
A clean signal is this:
If rerunning from scratch is no longer acceptable, resumability is now architecture.
And that is a LangGraph concern.
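A minimal sketch of the idea in plain Python (this is not LangGraph's durable execution API; the step names and the in-memory checkpoint dict are stand-ins for a durable store): completed steps are checkpointed, so a resume skips work that already succeeded.

```python
# Toy sketch of checkpoint-based resumability: each step's result is
# persisted before moving on, so a rerun resumes instead of repeating.
calls = []

def fetch():
    calls.append("fetch")
    return "raw data"

def transform():
    calls.append("transform")
    return "clean data"

def run_pipeline(steps, checkpoints: dict) -> dict:
    for name, fn in steps:
        if name in checkpoints:      # already completed: skip on resume
            continue
        checkpoints[name] = fn()     # a real system writes this durably
    return checkpoints

steps = [("fetch", fetch), ("transform", transform)]
ckpt = {"fetch": "raw data"}         # simulate a crash after the first step
run_pipeline(steps, ckpt)
print(calls)                         # ['transform'] -- fetch was not rerun
```

The whole design question of resumability reduces to: where does that checkpoint dict actually live, and who guarantees it survives a crash?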
There is a big difference between asking the user another question in the middle of a conversation, and pausing execution until a human explicitly approves the next step.
Those are not the same thing.
Many teams blur them together at first because both involve “human input.” But operationally they are very different.
The LangGraph interrupts docs are very explicit here: interrupts pause graph execution at specific points, save graph state via the persistence layer, and wait indefinitely until execution is resumed with external input. This is positioned as a direct fit for human-in-the-loop patterns.
That matters for workflows like approvals, reviews, and sign-offs that must happen before execution continues.
If those are now first-class parts of your product, then “just ask another message” is often not an honest representation of the system anymore.
A strong decision rule:
If a human approval point needs to be part of execution state, not just conversation flow, you are in LangGraph territory.
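Here is a deliberately tiny sketch of that difference (pure Python, not LangGraph's `interrupt()` mechanism; the state keys are invented): execution pauses, state is saved, and a later call resumes with the human's decision attached.

```python
# Toy sketch of an approval interrupt: the approval is part of execution
# state, not just another message in a conversation.
def run(state: dict) -> dict:
    if "approval" not in state:
        state["status"] = "paused_for_approval"   # would be persisted durably
        return state
    state["status"] = "completed" if state["approval"] else "rejected"
    return state

s = run({"request": "issue_refund"})
print(s["status"])             # paused_for_approval
s["approval"] = True           # a human resumes execution with input
print(run(s)["status"])        # completed
```

The pause can last minutes or weeks; nothing about the conversation has to keep the process alive, because the process is the saved state.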
At the application layer, failure handling often starts out as catch the error and retry.
That is fine when failure is mostly local.
But there is a very different class of system where failure handling has to become explicit and differentiated.
Once failures have different meanings and demand different execution responses, the runtime itself is no longer invisible.
You need to decide not just whether the request failed, but where it failed, what state survived, and what path should follow.
That is one of the clearest signs that higher-level convenience is giving way to orchestration needs.
LangGraph’s docs do not present this as abstract theory. Its persistence, durable execution, and debugging model are specifically framed around surviving interruptions, fault tolerance, and resuming from saved state.
A practical heuristic:
If “error handling” now means designing recovery paths rather than adding retries, you are feeling the edge of LangChain abstraction.
This may be the strongest and most painful signal.
At first, debugging is simple enough: look at the output, adjust the prompt, try again.
Those are still application-layer problems.
But in more complex systems, the hardest debugging question becomes:
Why did the system take that path?
Not “why was this output wrong?”
but “why did execution follow this path instead of that one?”
That is an execution-trace question.
And once that becomes common, runtime design has entered the center of engineering work.
LangGraph is explicitly positioned with support for debugging and deployment for workflows and agents, and its persistence model supports checkpoint inspection and time-travel-style debugging.
That is not just a convenience feature.
It is a recognition that at some level of complexity, execution itself becomes the thing you need to debug.
A sharp rule of thumb:
If your postmortems increasingly focus on execution paths rather than individual model outputs, LangGraph is probably no longer optional.
One of the most useful ideas in the LangGraph docs is the distinction between workflows and agents: workflows follow predetermined code paths, while agents dynamically decide their own process and tool usage at runtime.
Why is this a signal?
Because many teams call something an “agent” when what they actually need is a designed sequence of steps with a model inside it.
In other words, a workflow.
If you are increasingly realizing that your “agent” is really a workflow with a predetermined path and a language model in the loop,
then the issue is not that the system got larger.
The issue is that the system deserves a more honest execution model.
LangGraph becomes valuable here because it lets you represent workflows and agents explicitly rather than pretending everything is one generalized loop.
That honesty is often where reliability starts.
The deepest transition here is not technical. It is conceptual.
At the LangChain layer, you are mostly asking whether the application behaves well: can the model do the task, are the tools right, is the output shaped correctly?
At the LangGraph layer, you start asking a different class of question: how is execution designed, inspected, recovered, and governed?
That is not “more code for the same thing.”
That is a different layer of ownership.
And the official LangChain docs describe the stack in exactly this layered way: LangChain as the higher-level framework, LangGraph as the low-level orchestration runtime for long-running, stateful agents, with LangChain agents built on LangGraph primitives when deeper customization is needed.
Once you feel that shift, the decision becomes easier.
You are not moving because graphs are fashionable.
You are moving because the runtime has become part of the product.
If you want the shortest possible decision framework, use this one.
This is the line that matters.
Not importance.
Not hype.
Not number of tools.
Not how advanced your architecture diagram looks.
Just this:
Has execution itself become something we need to design and govern?
If yes, LangGraph is no longer a power-user option.
It is becoming the right tool.
This transition also clarifies the broader LangChain ecosystem story.
LangChain is where you stay when the application layer is still the honest center of gravity.
LangGraph is where you go when runtime behavior becomes the hard part.
And only after that, when work becomes longer-horizon, decomposable, artifact-heavy, and context-complex, does it make sense to look seriously at Deep Agents as a harness on top of LangGraph. LangChain’s product docs frame these as different layers: high-level frameworks on top of runtimes, with LangGraph as the low-level orchestration layer and Deep Agents as a harness for more complex agent behavior.
That sequencing matters.
Because it keeps teams from skipping the architectural question that actually determines success.
You do not move to LangGraph because your app got bigger.
You move when the abstraction stops being honest.
When branching matters.
When state matters.
When resumability matters.
When approval matters.
When recovery matters.
When debugging the path matters.
That is the moment LangChain starts to bend.
And that is exactly the moment LangGraph starts to make sense.
2026-04-02 15:42:24
PostgreSQL 17 introduced native incremental backup support, a major leap forward in database backup strategy. Rather than duplicating the entire dataset every time, incremental backup captures only the data blocks that have changed since the last backup (full or incremental). This drastically reduces backup time, storage consumption, and system overhead. Prior to PostgreSQL 17, achieving this required third-party tools such as pgBackRest or Barman, which added configuration and maintenance overhead. With native support now built into PostgreSQL, the process has become significantly more streamlined.
An incremental backup records only the changes made since the previous backup — whether that was a full backup or an earlier incremental one. Compared to full backups that copy all data regardless of what has changed, incremental backups are leaner, faster, and more storage-efficient.
Native Integration - Incremental backup is now part of PostgreSQL's core, removing the need for external tools for this functionality.
Storage Efficiency - Only modified data pages are backed up, keeping storage usage minimal.
Faster Backups and Recovery - Since less data is processed each time, backup creation is quicker and recovery is streamlined by applying only the required changes on top of the full backup.
Step 1 - Enable WAL Summarization In the postgresql.conf file, enable the summarize_wal parameter by setting it to on. This activates the WAL summarizer process, which tracks which data blocks have been modified. It can be enabled on either a primary or a standby server. It is set to off by default.
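As a sketch, assuming a local cluster and working psql access (connection options will differ in your environment), this can also be done without hand-editing postgresql.conf:

```shell
# Enable the WAL summarizer (PostgreSQL 17+, off by default)
psql -c "ALTER SYSTEM SET summarize_wal = on;"
psql -c "SELECT pg_reload_conf();"   # picked up on reload, no restart needed
psql -c "SHOW summarize_wal;"        # should now report 'on'
```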
Step 2 - Take a Full Backup Use pg_basebackup to create the initial full backup. This serves as the foundation for all subsequent incremental backups.
Step 3 - Take the First Incremental Backup After inserting or modifying data, run pg_basebackup again with the --incremental flag, pointing to the backup_manifest file from the full backup. This tells PostgreSQL what the baseline was and allows it to capture only the changes since then.
Step 4 - Take Additional Incremental Backups After further data changes, take another incremental backup — this time referencing the backup_manifest from the first incremental backup. Each incremental backup chains to the previous one using its manifest file.
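Steps 2 through 4 sketched as pg_basebackup invocations (the directory paths are illustrative; add -h/-U/-p connection options as needed):

```shell
# Step 2: full backup as the baseline
pg_basebackup -D /backups/full

# Step 3: first incremental, referencing the full backup's manifest
pg_basebackup -D /backups/incr1 --incremental=/backups/full/backup_manifest

# Step 4: the next incremental chains to the previous incremental's manifest
pg_basebackup -D /backups/incr2 --incremental=/backups/incr1/backup_manifest
```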
Restoration is handled by pg_combinebackup, a new utility introduced in PostgreSQL 17. It merges the full backup and all incremental backups into a single, usable backup directory. The backups must be provided in chronological order — starting from the full backup, followed by each incremental in sequence. After combining, you adjust the port in the restored directory's postgresql.conf and start the database server using that data directory. Upon verification, all records from the full backup and every incremental backup are present and intact.
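Continuing the illustrative paths from above, the restore step looks roughly like this:

```shell
# Merge the chain, oldest first, into a fresh data directory
pg_combinebackup /backups/full /backups/incr1 /backups/incr2 -o /restore/data

# After adjusting the port in /restore/data/postgresql.conf:
pg_ctl -D /restore/data start
```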
pg_combinebackup is the companion utility that reconstructs a complete, restorable backup from the chain of incremental backups. It automates the merging process and validates the backup chain for consistency, eliminating the need for manual intervention during restoration.
Cost Savings - Reduced storage usage means lower costs, whether on cloud or on-premises infrastructure.
Improved Performance - Less data transfer reduces system load, making it particularly valuable during peak operational hours.
Scalability - Well-suited for large databases or environments with frequent data changes where full backups would be impractical.
summarize_wal must be enabled for this feature to work.
Incremental backups only function with pg_basebackup and cannot be taken from a standby server; they must be run on the primary instance.
Restoration depends on a complete, unbroken backup chain. If any backup in the chain is missing, recovery fails.
Backups operate at the cluster level, with no support for per-table backups.
Proper retention of WAL and summary files is required for the feature to function correctly.
Native incremental backup in PostgreSQL 17 addresses two longstanding pain points, storage waste and slow backup windows, while laying a stronger foundation for disaster recovery. The combination of pg_basebackup (with the --incremental flag) and pg_combinebackup makes the entire backup-and-restore workflow cleaner and more efficient, especially for large-scale, high-transaction environments.
2026-04-02 15:41:35
I would like to introduce the PetStore sample application, which demonstrates a pure "Java-first" approach to web interface development. The entire project is built on the idea of maximum type safety and clarity, achieved through two modules of the Ujorm 3 library. These effectively eliminate common abstraction layers that often complicate development and debugging. The PetStore is built on the lightweight Ujorm3 framework.
We have replaced traditional engines like Thymeleaf or JSP with pure Java code.
The HTML layer is built from the HtmlElement builder and try-with-resources blocks.
This approach allows writing Java code in a natural tree structure that faithfully mirrors the HTML structure (see the renderTable() method), with correctness checked as you write. The HttpParameter interface uses enums to define web parameters.
This practically eliminates typos in form field names, which in standard solutions only manifest at runtime. Forget about complex XML mapping or runtime errors in SQL queries.
Entities are plain Java records (Pet, Category).
They are naturally immutable, clean, and fully compatible with @Table and @Column annotations. A metamodel (MetaPet) is generated during compilation.
The compiler catches an error in a column name, not an application crash in production. There is no LazyInitializationException and there are no hidden N+1 problems.
You have absolute control over every SqlQuery.
Moreover, you can easily map results from native SQL back to records using the label() method. The project is designed with an emphasis on straightforwardness.
The following example from a stateless servlet demonstrates how elegantly logic, parameters, and HTML generation can be connected:
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
var ctx = HttpContext.ofServlet(req, resp);
var contextPath = req.getContextPath();
var action = ctx.parameter(ACTION, Action::paramValueOf);
var petId = ctx.parameter(PET_ID, Long::parseLong);
var pets = services.getPets();
var categories = services.getCategories();
var petToEdit = (Action.EDIT.equals(action) && petId != null)
? services.getPetById(petId).orElse(null)
: null;
try (var html = HtmlElement.of(ctx, BOOTSTRAP_CSS)) {
try (var body = html.addBody(Css.container, Css.mt5)) {
renderHeader(body, contextPath);
renderTable(body, pets);
renderForm(body, petToEdit, categories);
}
}
}
Here is what a native SQL query looks like in pure Java:
static final EntityManager<Pet, Long> PET_EM =
EntityManager.of(Pet.class);
public List<Pet> findAll() {
var sql = """
SELECT p.id AS ${p.id}
, p.name AS ${p.name}
, p.status AS ${p.status}
, c.id AS ${c.id}
, c.name AS ${c.name}
FROM pet p
LEFT JOIN category c ON c.id = p.category_id
WHERE p.id >= :id
ORDER BY p.id
""";
return SqlQuery.run(connection.get(), query -> query
.sql(sql)
.label("p.id", MetaPet.id)
.label("p.name", MetaPet.name)
.label("p.status", MetaPet.status)
.label("c.id", MetaPet.category, MetaCategory.id)
.label("c.name", MetaPet.category, MetaCategory.name)
.bind("id", 1L)
.streamMap(PET_EM.mapper())
.toList());
}
This architecture represents an interesting alternative for developers who are tired of heavy JPA frameworks or bloated frontend technologies.
Where Ujorm PetStore shines most:
The "Java-First" philosophy drastically reduces context switching between Java, SQL, XML, and various templating languages.
Everything you need is under the protection of the compiler.
The application utilizes the best of the current ecosystem:
All you need is JDK 25 and Maven installed, then just run:
mvn spring-boot:run
The application will start at http://localhost:8080.
Resources and Links:
Does it make sense to you to have the UI and DB layers so tightly coupled with the compiler? I will be glad for any technical feedback!
2026-04-02 15:39:26
If you are an indie game developer right now, you know the pain of 3D asset generation.
The current landscape of AI 3D tools is a nightmare of expensive monthly SaaS subscriptions, API paywalls, and cloud-based platforms that claim ownership over the meshes you generate. We got tired of it.
At Odyssey Game Studios, we decided to build the anti-SaaS solution: Jupetar. It’s a completely offline, local-compute 2D-to-3D asset generation pipeline. You pay once, you own the software, and it runs entirely on your local GPU.
After weeks of engineering (and battling the Steam review queue), the official Jupetar Demo is now live on Steam. Here is a look under the hood at how we built it, and how you can test it right now.
The "No-Ping" Architecture
The biggest challenge in building local AI wrappers is preventing the underlying scripts from constantly trying to phone home.
Jupetar relies on a massive 24GB local models folder containing the Hunyuan DiT (geometry) and Paint VAE (textures) weights. Standard HuggingFace implementations constantly try to ping the cloud to check for version updates or auto-heal missing files, which Valve (and privacy-conscious devs) understandably flag.
We surgically killed the huggingface_hub auto-heal scripts and forced strict offline environment variables directly into the pipeline:
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["DIFFUSERS_OFFLINE"] = "1"
The Ethernet Test: The ultimate proof of our architecture. You can literally unplug your ethernet cable, drop concept art into the UI, and Jupetar will still generate a fully textured .glb file.
Solving Local VRAM Fragmentation
Loading a massive Diffusion Transformer, XAtlas for UV unwrapping, and an FP32 rasterizer into a single localized PyTorch instance usually results in catastrophic memory leaks and Out-of-Memory (OOM) crashes on standard consumer GPUs.
To make this run stably on an 8GB-12GB VRAM card (our baseline is an RTX 3080 10GB), we had to force PyTorch expandable segments to mitigate memory fragmentation during the heaviest phase: the high-res 4K texture upscaling.
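We aren't reproducing Jupetar's wrapper code here, but the standard PyTorch switch for this looks like the following (it must be set before the first CUDA allocation happens, i.e. before importing and using torch):

```python
import os

# Ask PyTorch's CUDA caching allocator to use expandable segments,
# which reduces fragmentation for workloads whose tensor sizes vary
# wildly between pipeline stages (geometry vs. 4K texture upscaling).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# ... import torch and run the generation pipeline after this point ...
```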
Our overarching C# wrapper handles the UI and hardware telemetry, acting as an orchestrator that spins up the Python environment, executes the generation, and actively flushes the VRAM at each pipeline stage.
What the Pipeline Actually Does
When you drop an image into Jupetar, it doesn't just spit out a messy point cloud. It executes a full, game-ready pipeline:
Geometry: Generates the raw mesh via Hunyuan3D.
Optimization: Runs an adaptive decimation pass (FaceReducer) to crunch meshes down to target poly-counts (e.g., 25k for weapons, 80k for humanoid creatures).
UV Mapping: Integrates XAtlas for automated, engine-ready UV unwrapping.
Texturing & PBR: Since native DiT vertex colors are blurry, the engine runs a VRAM-safe 4K tiled upscaler and synthesizes a procedural PBR normal map locally using Sobel derivatives.
Export: Packages everything into a standard .glb ready for Unity, Unreal, or Godot.
The Demo is Live
We just pushed the Demo build live on Steam.
Because we don't have a centralized server to authenticate accounts, we built a localized trial logic directly into the app. The Demo gives you 2 completely free generations to test the pipeline, benchmark your GPU's VRAM, and inspect the final 3D topology for yourself.
No credit card, no account creation, no cloud processing.
🎮 https://store.steampowered.com/app/4346660/Jupetar/?l=english
If you test it out, let me know how it handles on your specific GPU. We are continuing to optimize the C# orchestrator and VRAM management leading up to the V1.0 launch, and I'd love to hear feedback from the community!
2026-04-02 15:35:47
I wake up in the morning, open my feed — and right away, two incidents. Both about npm. Both serious. And both happened on the same day.
The first one — in Axios (yes, the one that's everywhere) — spread a RAT trojan for three hours. The second — Anthropic accidentally published the full source code of Claude Code in a public npm package. Half a million lines with prompts and architecture.
Good morning, indeed :)
Someone hijacked the npm account of Jason Saayman (jasonsaayman) — the main maintainer of Axios. They changed the linked email and manually published two versions:
The versions were live in the public registry from about 00:21 to 03:15 UTC on March 31. Three hours. For a package with over 100 million weekly downloads, that's more than enough.
The nastiest part: the Axios code itself wasn't touched. Not a single line. Open the sources — everything looks clean. The trick was in package.json.
They added a dependency: plain-crypto-js. The package was created the day before, on March 30. The name looks innocent — just some crypto utility, who would look twice? It's never imported anywhere in the Axios code. Not once.
So why was it there?
Here is how it worked:
npm install pulls in all dependencies from package.json.
plain-crypto-js contains a postinstall script that executes automatically at install time.
The attackers then published a follow-up release with a clean package.json.
That last point is especially nasty. The trojan is already running, but when you check package.json, everything looks normal. No trace of plain-crypto-js.
This is called a phantom dependency — a ghost dependency. It's not used in the code, not imported, and exists only for the side effect during installation. Normal code review won't catch it because the .js files are clean.
You scan the sources for suspicious code? Good. But do you check package.json for new dependencies? Or postinstall scripts in transitive dependencies?
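Two quick defenses, sketched for a typical Node project (the grep pass is a blunt heuristic for spotting lifecycle hooks, not a real scanner):

```shell
# List installed packages that declare a postinstall hook
grep -rl '"postinstall"' node_modules --include=package.json

# Or refuse to run lifecycle scripts at all during install
npm install --ignore-scripts
```

Note that --ignore-scripts also skips legitimate postinstall steps (native builds, etc.), so some projects pair it with an explicit allowlist.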
Safe versions:
| Branch | Malicious | Safe |
|---|---|---|
| latest | 1.14.1 | 1.14.0 |
| legacy | 0.30.4 | 0.30.3 |
If your project installed exactly those versions in the window from 00:21 to 03:15 UTC on March 31 — treat the system as compromised. Not "possibly". Compromised.
You need to:
Check package-lock.json / yarn.lock for "axios@1.14.1" or "axios@0.30.4".
Assume any machine where npm install ran during that window is infected.
StepSecurity, Socket, Endor Labs, Aikido, and Huntress have confirmed the details and published IOCs.
Several sources — including Google and Reuters — point to the North Korean group UNC1069 / Lazarus. Supply-chain attacks via maintainer account takeover are their classic playbook.
Important note: the npm registry itself wasn't hacked. The infrastructure wasn't affected. The attackers simply logged in under a real account and ran npm publish. From the system's point of view, it was completely legitimate.
On the same day, Anthropic released a new version of their CLI agent, @anthropic-ai/claude-code, to npm. A routine release. But the package included a cli.js.map file weighing 59.8 MB.
That's a sourcemap. And through it, you can restore the entire original source code of the project.
From that one file they recovered:
The sourcemap also pointed to a ZIP archive in Anthropic's public R2 bucket. Security researcher Chaofan Shou was the first to post about it on X. The code was mirrored on GitHub almost instantly.
Anthropic confirmed it: a mistake in the build process. They forgot to exclude the source maps via .npmignore. That's it.
No hack. No user data leak. Just a missing line in the build config.
The version was quickly removed and a fix was released. But the code had already spread.
While Anthropic was sending DMCA takedowns to mirrors of the leaked code, one person in Korea took a different route.
Sigrid Jin (GitHub: instructkr) is a well-known power user of Claude Code. How well-known? According to the Wall Street Journal, in the past year he generated more than 25 billion tokens through the tool. Twenty-five billion. The guy clearly knew the architecture inside out.
On the morning of April 1, around 4 AM local time, Jin woke up to notifications about the leak. He saw Anthropic taking down mirrors of the original code and made a decision: don't copy — rewrite.
Jin didn't fork the leaked TypeScript. That would have been taken down by DMCA in a day. Instead, he did a clean-room reimplementation — rewriting the key patterns and architecture from scratch, this time in Python:
To speed things up, he used the AI tool oh-my-codex (OmX). The repository claw-code was live before sunrise.
Different language, different code, no copy-paste — legally, this is a new creative work. Gergely Orosz from Pragmatic Engineer and other lawyers/developers confirmed: such a "clean rewrite" is legally solid.
Anthropic couldn't take the repo down via DMCA. It's still alive.
Then the madness began.
| Time after publication | Stars |
|---|---|
| ~2 hours | 50,000 |
| ~24 hours | 100,000+ |
| Forks in the first day | 50,000 - 58,000 |
50,000 stars in two hours. According to the author and several media outlets — the fastest-growing repository in GitHub history.
Later, Jin started porting the same architecture to Rust — that version also quickly gained tens of thousands of stars.
In essence, the community turned a corporate leak into a fully open clone of an AI agent in just a few hours. Jin himself later described the project's goal simply: "Better Harness Tools" — better tools that actually get things done.
You can argue whether it's ethical to build on someone else's leak. But legally — it's clean. And 100,000 stars in a day show that demand for an open alternative was huge long before March 31.
2026-04-02 15:33:57
DoBu (DOcumentation BUilder) is a Documentation DSL designed for the Ascoos OS ecosystem.
It is not PHPDoc.
It is not Doxygen.
It is not MkDocs or Docusaurus.
DoBu is a semantic documentation layer that lives inside simple docblocks such as:
/* ... */
and transforms documentation structure into:
DoBu can generate documentation text for any programming language that supports block comments.
Ascoos OS is a kernel containing:
No existing documentation tool could:
Thus, DoBu was created as the semantic documentation layer of Ascoos OS.
/*
dobu {
class:id(`tmyclass`),name(`TMyClass`),extends(`TObject`),namespace(`ASCOOS\OS\Kernel\MyClass`) {
summary:langs {
en {`Creating a new Ascoos OS class.`}
el {`Δημιουργία μιας νέας Ascoos OS κλάσης.`}
}
}
}
*/
/*
dobu {
class:id(`tmyclass`),name(`TMyClass`),extends(`TObject`),version(`0.0.1`) {
summary:langs {
en {`Creating a new Ascoos OS class.`}
el {`Δημιουργία μιας νέας Ascoos OS κλάσης.`}
}
}
}
*/
DoBu supports:
/*
dobu {
method:id(`blackscholesputdividend`),name(`blackScholesPutDividend`),return(`float`) {
summary:langs {
en {`Prices a European put option with continuous dividend yield.`}
el {`Αποτιμά ένα ευρωπαϊκό put option με συνεχή μερισματική απόδοση.`}
},
formula:type(`latex`),value(`\[ P = K e^{-rT} N(-d_2) - S_0 e^{-qT} N(-d_1) \]`)
}
}
*/
See the create-dobu-class.php file for a full demonstration inside a PHP class.
langs {
en {`English text`}
el {`Ελληνικό κείμενο`}
}
see:langs {
all {`
• blackScholesCallDividend()
• binomialPutEuropean()`
}
}
DoBu can export documentation to:
DoBu works even when:
DoBu is built using Ascoos OS classes and is used for: