2026-03-23 17:42:16
Hi!
I recently came across a similar discussion on Reddit, but I’ve reworked the idea here to present the project better.
I’m curious what you think: is it actually possible to combine a functional programming style with ECS in Rust—and even build a game framework around it using Macroquad and Bevy ECS?
At first, it felt unlikely (even though Bevy systems are function-based). But after experimenting, I managed to make it work—and honestly, it surprised me.
Result: 1,300+ entities running at ~28% CPU on a 2013 laptop.
That said, there are still some draw call bottlenecks I’ll get into later in the post.
Why did I build a new framework with Macroquad and Bevy ECS instead of using full Bevy?
For the following reasons:
Honestly, it was frustrating. Learning Bevy itself was easy for me (15 times easier than OOP!), but when I tried to run it, everything broke. Back then I thought I would never be able to make games in Rust.
So I made Light Acorn. It is more than a framework; it is a manifesto for the "Lord of the Code".
Light Acorn is an open-source project (MIT/MPL) designed for old hardware like my 2013 X550CC laptop (i3-3217U, GT 720M) running antiX.
The Core: Instead of a complex scheduler, I use a self-written Kernel based on the Macroquad async loop.
Zones & Locations architecture: You can manually define the order of functions in containers.
Runtime Flexibility: One of the core advantages of this approach is how much flexibility you get at runtime.
You can add, remove, or reorder systems while the game is running—without relying on:
unsafe blocks
smart pointers
heavy macro magic
or external scripting layers like Python or Lua (the REACORN-style approach)
The entire system stays simple and predictable.
Under the hood: it’s just vectors, plus a bit of lightweight syntactic sugar from macros.
DOD: no Arc<Mutex<>> or complex lifetimes for the user, because the entire framework skeleton is built exclusively on vectors and loops.
Low entry threshold: you don't need to fight lifetimes ('a and 'b), learn complex macros, or master smart pointers to begin creating your games.
At any given moment, only one function executes, which eliminates data races. And multiple Bevy queries can be run inside a single function.
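The "vectors and loops" skeleton can be sketched in plain Rust, without Macroquad or Bevy. All names below are hypothetical stand-ins, not Light Acorn's actual API:

```rust
// A "system" is just a plain function pointer operating on shared state.
// (Hypothetical sketch, not Light Acorn's real types.)
type AcornFn = fn(&mut u32);

fn greet(state: &mut u32) {
    *state += 1; // stand-in for printing a greeting
}

fn draw(state: &mut u32) {
    *state += 10; // stand-in for a draw call
}

// A Location is a vector of functions; a Zone is a vector of Locations.
struct Location {
    fns: Vec<AcornFn>,
}

struct Zone {
    locations: Vec<Location>,
}

impl Zone {
    // One "frame": execute every function of every Location, in order.
    fn run(&self, state: &mut u32) {
        for location in &self.locations {
            for f in &location.fns {
                f(state);
            }
        }
    }
}

fn main() {
    let mut zone = Zone {
        locations: vec![Location { fns: vec![greet, draw] }],
    };
    let mut state = 0u32;

    zone.run(&mut state);
    assert_eq!(state, 11); // greet (+1) then draw (+10), in order

    // Runtime flexibility: removing or reordering systems is plain
    // vector manipulation between frames; no unsafe, no Arc<Mutex<>>.
    zone.locations[0].fns.retain(|&f| f != draw as AcornFn);
    zone.run(&mut state);
    assert_eq!(state, 12); // only greet remains
}
```

Because the containers are ordinary `Vec`s, adding, removing, or reordering systems at runtime is just list manipulation between frames.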
Bevy ECS is included but optional. That means functions are not required to change state.
For example, the function here accepts arguments but is not required to use them:
fn acorn_example_draw_circle(_world: &mut World, _context: &mut AcornContext) {
    draw_circle(
        screen_width() / 2.0,
        screen_height() / 2.0,
        60.0,
        BLUE,
    );
}
Also…
Zone & Location is a unique concept: The grouping of functions and their order is the basis of the engine. The developer controls the order of code, grouping functions by Zone (group of Locations) and Location (group of functions). A developer can create their own Zones or Locations in Kernel: custom Zones with custom execution order.
The idea is very simple: functions are executed in order in an infinite loop. Zones and locations are containers for said functions.
In short: a Zone is when, a Location is where, and a Function is the time-marker.
Bearing this in mind, Light Acorn is essentially Macroquad with architecture, where Zones and Locations are convenient lists of functions that can be easily modified through the Light Acorn API.
In the code, it looks like this:
:::info Note: This is not a crate, it's a template for your projects.
:::
Why?
Because:
And that's really ALL! This explains why everything compiles so quickly.
For example, let’s say you want to draw a blue circle.
Create an Acorn function and add a simple Macroquad function:
fn acorn_example_draw_circle(_world: &mut World, _context: &mut AcornContext) {
    draw_circle(
        screen_width() / 2.0,
        screen_height() / 2.0,
        60.0,
        BLUE,
    );
}
Add the function to a Zone, preferably one that runs after set_default_camera(), because that call turns on 2D rendering:
let after_2d_zone = Zone::default()
    .with_locations(vec![
        Location::from_fn_vec(vec![
            acorn_example_draw_circle,
            // add your own functions here, separated by commas
        ]),
        // add your own Locations here, separated by commas
    ]);
See the result:
That’s ALL! Your function will now run every frame. Sometimes it seems to me that it’s even EASIER THAN REACT (although, to be fair, React doesn't redraw frames that haven't changed).
Actually, in Light Acorn, functions are not like those in Haskell. For example, acorn_example_draw_circle doesn't use arguments, but changes the rendering state.
Light Acorn is a Functional-Style, Data-Oriented Framework, or a Functional-Driven ECS.
You might argue that storing functions in vectors isn't scalable, and that changing the order of functions at runtime can lead to chaos if used incorrectly.
What if a team of 30+ people needs 200-300+ functions? Do we really have to write every single one by hand and let each Zone bloat into a huge codebase?
So, the architecture is flexible but weak.
But show me a single programming paradigm or architectural approach that doesn't require human discipline to scale software?
If you propose something "better" than the Acorn approach, you will soon realize that you are just proposing the approach you are already used to, wielded everywhere like a golden hammer.
Everyone says that Unreal Engine and similar products are scalable for AAA, but not because this is really true, but because everyone is used to it and can’t change their mindset.
The entire history of IT has been an attempt to hide from hardware, hiding behind the complexity of the concept. Now we're at a point where IT is returning to hardware.
And I'm not just talking empty words; I'm proposing a solution in Acorn: the Lord-Minor architecture for controlling the order of REACORN's runtime changes.
A Lord-Function controls its Location and can change the order of functions there, remove them, or add them.
Here’s what the code looks like:
let before_2d_zone = Zone::default()
    .with_locations(vec![
        // Lord-Location.
        Location::from_fn_vec(vec![
            // (press TAB to delete functions in the Minor-Location)
            acorn_example_delete_function,
        ]),
        // Minor-Location
        Location::from_fn_vec(vec![
            acorn_example_greeting,
            acorn_game_draw_3d_assets,
        ]),
    ]);
In other words, each Lord-Function has its own territory. This is a hierarchy, and it requires discipline. So…
I invented a tool, not a developer discipline.
Also…
YOU are not required to use Lord-Minor architecture. You are Lord of your ideas.
Despite everything working well so far, Light Acorn still has a major bottleneck: draw calls.
In the first image of the post, you can see the game running at 26 FPS, even though CPU usage is only 28%. The issue is that the CPU is issuing separate draw calls for each 3D acorn model—and my GPU simply can’t keep up.
My GT 720M is choking on the volume of draw calls.
To give you some context on the hardware:
Even most modern integrated graphics outperform this setup.
And yes, I’ve been using this machine for about 13 years.
Trying to Solve It
I reached out on Discord to the QUADS community (Macroquad/Miniquad) for help with implementing instancing.
The feedback I got was that it would likely require going deeper into Miniquad—and that it wouldn’t be easy.
Still, I really appreciate the two people who took the time to respond and engage, instead of brushing it off.
And to whoever gave Light Acorn its first star despite my rough presentation—thank you. It genuinely means a lot.
Light Acorn is fully open source, and I’d love help tackling the GPU instancing problem.
The project already includes documentation and code comments, so you won’t need to reverse-engineer everything from scratch. Just clone the repo, open main.rs, and follow along with the docs.
Proposed Direction
To address the draw call bottleneck, I’m exploring a few options:
Even if it means:
I’m open to pushing this as far as needed to make it work.
Here’s the main issue with the current implementation:
pub fn acorn_game_draw_3d_assets(world: &mut World, context: &mut AcornContext) {
    let gl = acorn_get_gl_contex();
    let mut query = world.query::<(&Entity3DTransform, &Entity3DModel)>();
    for (transform, mesh) in query.iter(world) {
        let model_matrix = acorn_generate_matrix(&transform);
        gl.push_model_matrix(model_matrix);
        /*
        You can switch to if/else branching for safety,
        but I use performance mode:
        if let Some(mesh) = context.assets_3d.meshes.get(mesh.mesh_id) {
            draw_mesh(mesh);
        } else {
            println!("oops...")
        }
        */
        draw_mesh(&context.assets_3d.meshes[mesh.mesh_id]);
        gl.pop_model_matrix();
    }
}
The draw_mesh function issues a new draw call for each mesh.
1,000 meshes = 1,000 draw calls = 1,000 sufferings for the GT 720M.
About me
You might not believe it… but I'm 18 years old, and I live in Kyrgyzstan. I started learning Rust 8 months ago, simply because it's hard.
Fun fact: I quit learning C++ because it wouldn't let me declare a function after void main() (also, I don't like OOP due to unjustified complexity).
Thanks for making it this far. Even if the project doesn't solve your problems, I'll still be pleased to know you connected to this in some way. For those who are willing to help or want to make simple 3D games in Rust, I've attached a link: https://github.com/Veyyr3/Light_Acorn
And perhaps soon I will add Taffy to the stack so that this framework is not only for games, but also for applications.
2026-03-23 17:17:24
When you debug software, you usually get some clues.
Logs. Error messages. Stack traces.
When you debug a toddler, the only message you get is:
"AAAAAAAAAA!"
Somewhere between production incidents and raising two boys under two - I realized something surprising:
Parenting and software engineering have a lot in common.
Both involve managing complex systems. Both break in unpredictable ways. And in both cases, the documentation is either outdated… or completely missing.
The only difference is that in parenting, the system occasionally throws snacks on the floor and refuses to sleep.
Here are a few unexpected parallels I discovered somewhere between a production bug and a toddler meltdown.
In software engineering, debugging is straightforward in theory.
You have logs, metrics, stack traces, monitoring dashboards.
In parenting? None of that exists.
One evening, my younger one started crying uncontrollably. No obvious reason. No visible error message. Just pure system failure.
So I began running my debugging sequence.
After ten minutes of investigation, the root cause finally surfaced.
He wanted the banana that his older brother was holding.
Not a banana. That banana. And his older brother wasn't even really eating it!
As any engineer knows, sometimes the problem isn't resource availability. It's resource contention.
Every engineer learns that tiny mistakes can create massive outages.
A missing null check. An off-by-one error. A race condition.
In parenting, the equivalent is missing the nap window.
One Saturday afternoon, my 2-year-old was happily building a tower with his magnet tiles. Nap time was approaching, but he seemed perfectly fine.
So I made the classic mistake:
"Maybe we can delay the nap today."
Thirty minutes later, the system entered a failure state.
The tower collapsed. The wrong snack was served. His brother touched his toy truck.
Suddenly we had a full-scale meltdown.
If you've ever pushed code to production thinking "This small change won't matter," you already know how this story ends.
Before kids, my life was basically a single-threaded process.
One schedule. One sleep cycle. One human to manage.
Now I run a distributed system with multiple independent agents:
Agent 1: 3-year-old
Agent 2: 20-month-old
Each node has different energy levels, different snack requirements, different emotional states.
Synchronization failures occur frequently.
One night, I finally got my younger son to sleep after a long bedtime battle. Silence. Peace.
I slowly closed the bedroom door like someone defusing a bomb.
And at that exact moment, my 3-year-old walked down the hallway and loudly announced:
"MOMMY, WHERE IS MY DINOSAUR?"
The baby woke up instantly.
Engineers call this a cascading failure in a distributed system. One node goes down, and suddenly the entire system is unstable again.
Living with two toddlers also means managing resource conflicts.
For example, yesterday both children wanted the same toy truck.
The available solutions were:
Instead, the system chose option four: Both children cry simultaneously.
Some distributed systems problems simply have no elegant solution.
Engineering teaches us something important. You rarely get things right on the first attempt.
You ship. You observe. You fix. You improve.
Parenting works exactly the same way.
You try a strategy - a bedtime routine, a new snack schedule, a different way to handle tantrums.
Sometimes it works. Sometimes it fails spectacularly.
But over time you learn something powerful:
Parenting isn't about perfect decisions. It's about constant iteration.
One of the hardest parts of engineering is maintaining systems that constantly evolve.
Just when you understand the architecture… it changes.
Parenting is exactly like that.
You finally figure out their sleep schedule, their favorite foods, how to calm them down.
And then suddenly… they grow. The system changes again.
New behaviors. New challenges. New bugs.
Just when you think you've mastered version 1.0, version 2.0 ships without documentation.
Becoming a mother didn't make me a worse engineer. If anything, it made me better.
Parenting teaches skills engineers value deeply: patience, adaptability, problem-solving under uncertainty, prioritizing what actually matters.
And perhaps most importantly:
Not every problem has a clean technical solution.
Sometimes the system just needs time, empathy, and snacks.
Preferably snacks.
Software systems are complex. But humans are infinitely more complex.
And raising two tiny humans has reminded me of something engineers sometimes forget:
The most interesting systems in the world aren't the ones we build. They're the ones we grow.
Debugging distributed systems is hard.
But debugging two toddlers who skipped their nap might be the most complex system I've ever worked on.
2026-03-23 17:05:49
The alert fired at 3 PM on a Wednesday. Our Java microservices detected unusual access patterns from a service account that had been dormant for six months. The credentials were valid. The IAM policy allowed the actions. Everything looked legitimate according to AWS CloudTrail. Yet something felt wrong. The access pattern did not match historical behavior for that identity.
We faced an insider threat in progress. The attacker had compromised a service account with proper permissions. Traditional IAM controls could not stop them because the credentials were not stolen. They were misused by someone who already had access. Our security team spent the next 72 hours tracing how this happened. We realized that static IAM policies were not enough. We needed behavioral analysis. We needed machine learning to understand what normal access looked like.
In this article, I will share how we built machine learning-driven access control for our Java applications on AWS. I will explain the architecture we designed. I will detail the ML models we trained. I will provide code examples showing how to implement behavioral IAM analysis. This is not theoretical research. This is a practical account of defending against credential misuse in production.
AWS IAM policies define what actions an identity can perform. They do not define when or how those actions should occur. A service account with S3 read permissions can access any bucket at any time. This creates a security gap. Compromised credentials with valid permissions bypass all controls.
Consider a Java application that accesses DynamoDB. The IAM role grants full table access. An attacker who compromises that role can read or delete any item. Traditional controls cannot distinguish between legitimate access and malicious activity. The API calls look identical. The credentials are valid. The policy allows the action.
We learned this when a developer account was compromised. The attacker accessed customer data using valid credentials. CloudTrail logged every action. Nothing triggered alerts because everything was permitted. We needed a different approach. We needed to analyze behavior, not just permissions.
The key to detecting credential misuse is understanding normal access patterns. If you know what normal looks like, you can spot anomalies. Machine learning excels at this task. It can analyze millions of API calls and learn patterns that humans would miss.
We started by instrumenting our Java applications to emit detailed access telemetry. Every AWS SDK call generates logs containing the identity, action, resource, time, and source IP. We streamed this data to Amazon Kinesis for real-time processing.
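The code sample originally here did not survive, so as a rough, hypothetical sketch: the per-call telemetry record might look like the class below. The class and field names are mine, not the authors', and the real implementation would serialize each record and put it onto a Kinesis stream rather than just build a JSON string.

```java
import java.time.Instant;

// Hypothetical telemetry record for a single AWS SDK call.
// A production version would push each serialized record to Kinesis;
// here we only build the JSON payload itself.
public class AccessTelemetry {
    private final String identity;
    private final String action;
    private final String resource;
    private final String sourceIp;
    private final Instant timestamp;

    public AccessTelemetry(String identity, String action, String resource,
                           String sourceIp, Instant timestamp) {
        this.identity = identity;
        this.action = action;
        this.resource = resource;
        this.sourceIp = sourceIp;
        this.timestamp = timestamp;
    }

    // Minimal hand-rolled JSON; a real implementation would use Jackson or Gson.
    public String toJson() {
        return String.format(
            "{\"identity\":\"%s\",\"action\":\"%s\",\"resource\":\"%s\","
            + "\"sourceIp\":\"%s\",\"timestamp\":\"%s\"}",
            identity, action, resource, sourceIp, timestamp);
    }

    public static void main(String[] args) {
        AccessTelemetry event = new AccessTelemetry(
            "svc-reporting", "dynamodb:GetItem", "Orders",
            "10.0.0.5", Instant.parse("2026-03-23T15:00:00Z"));
        System.out.println(event.toJson());
    }
}
```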
This telemetry flows to Kinesis, where Lambda functions process it in real time. We built baselines from this data over a two-week learning period. The system learned normal access frequencies, typical resource patterns, and usual time-of-day behavior for each identity.
We evaluated several approaches for anomaly detection. Statistical methods worked for simple cases. They failed for complex patterns. We needed something more sophisticated. We chose AWS GuardDuty and custom models trained with SageMaker.
The model analyzed multiple features simultaneously. Access frequency per identity. Resource access patterns. Time-of-day consistency. Geographic location. API call sequences. When the model detected a deviation from baseline, it generated a risk score.
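The scoring idea can be illustrated without any AWS dependencies. Below is a deliberately simplified sketch with made-up fixed weights; the production system described here used a trained SageMaker model, not a linear formula:

```java
// Hypothetical linear combination of behavioral features into a risk score.
// Weights are illustrative only; the real system used a trained model.
public class RiskScorer {
    // Each feature is assumed to be normalized to [0, 1] before scoring.
    public static double score(double frequencyDeviation,
                               double resourceEntropy,
                               double timeOfDayAnomaly,
                               double geoAnomaly) {
        double raw = 0.30 * frequencyDeviation
                   + 0.30 * resourceEntropy
                   + 0.20 * timeOfDayAnomaly
                   + 0.20 * geoAnomaly;
        // Clamp to [0, 1] so thresholds are easy to reason about.
        return Math.min(1.0, Math.max(0.0, raw));
    }

    public static void main(String[] args) {
        // A dormant account suddenly scanning many resources scores high.
        double suspicious = score(0.9, 0.95, 0.8, 0.7);
        // A service hitting its usual table at its usual hour scores low.
        double normal = score(0.1, 0.05, 0.1, 0.0);
        System.out.println(suspicious > 0.85); // crosses the alert threshold
        System.out.println(normal < 0.2);
    }
}
```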

The risk-assessment Lambda runs synchronously for high-risk operations. It adds minimal latency, typically under 50 milliseconds. The model scores each access request in real time, and high scores trigger immediate alerts.
Detection alone is not enough. You must respond quickly. We built an automated response system using Java Spring Security and AWS Lambda. When the risk score exceeded our threshold, the system took action.
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;
import org.springframework.security.access.AccessDeniedException;
import org.springframework.security.core.Authentication;
import org.springframework.stereotype.Component;

@Component
public class AdaptiveAccessManager {

    private final AWSLambda lambdaClient;
    private final AmazonDynamoDB dynamoDB;
    private final String riskAssessmentFunction;
    private final String blockedIdentitiesTable;

    public AdaptiveAccessManager() {
        this.lambdaClient = AWSLambdaClientBuilder.defaultClient();
        this.dynamoDB = AmazonDynamoDBClientBuilder.defaultClient();
        this.riskAssessmentFunction = System.getenv("RISK_ASSESSMENT_FUNCTION");
        this.blockedIdentitiesTable = System.getenv("BLOCKED_IDENTITIES_TABLE");
    }

    public void checkAccess(Authentication authentication, String resource) {
        RiskAssessmentRequest request = new RiskAssessmentRequest(
                authentication.getName(),
                resource,
                getClientIp()
        );
        RiskAssessmentResponse response = assessRisk(request);
        if (response.getRiskScore() > 0.95) {
            blockIdentity(authentication.getName(), response.getReason());
            throw new AccessDeniedException("High risk access detected");
        } else if (response.getRiskScore() > 0.85) {
            applyStepUpAuthentication(authentication);
            logForInvestigation(response);
        }
    }

    private RiskAssessmentResponse assessRisk(RiskAssessmentRequest request) {
        // Synchronous invocation: invoke() blocks until the Lambda returns.
        InvokeRequest invokeRequest = new InvokeRequest()
                .withFunctionName(riskAssessmentFunction)
                .withPayload(toJson(request));
        InvokeResult result = lambdaClient.invoke(invokeRequest);
        return fromJson(result.getPayload(), RiskAssessmentResponse.class);
    }
}
This manager integrates with Spring Security through custom access decision voters. Every protected resource access passes through the risk assessment system. High-risk identities receive immediate denial. Medium-risk identities face step-up authentication. The system adapts based on confidence levels.
The quality of your features determines the quality of detection. We engineered features specifically for identifying credential misuse in AWS environments.
Access Pattern Entropy: Normal identities access predictable resources. Attackers explore broadly. We calculated the entropy of resource access patterns. Legitimate users show low entropy. Compromised credentials show high entropy.
Temporal Consistency: Humans and services have natural rhythms. They operate during expected hours. Attackers do not follow these patterns. We analyzed access timing consistency.
Action Rarity Scoring: Some API actions are rarely used by normal operations. Delete operations, permission changes, and cross-account access fall into this category. We tracked action frequency per identity.
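For illustration, the access-pattern entropy feature might be computed as below. This is a sketch of the idea, not the authors' pipeline (which ran over Kinesis records):

```java
import java.util.Map;

// Shannon entropy (in bits) of an identity's resource-access distribution.
// Low entropy = predictable access; high entropy = broad exploration,
// which is typical of compromised credentials probing resources.
public class AccessEntropy {
    public static double entropy(Map<String, Integer> accessCounts) {
        double total = accessCounts.values().stream()
                .mapToInt(Integer::intValue).sum();
        double h = 0.0;
        for (int count : accessCounts.values()) {
            if (count == 0) continue;
            double p = count / total;
            h -= p * (Math.log(p) / Math.log(2)); // convert ln to log2
        }
        return h;
    }

    public static void main(String[] args) {
        // A service that only ever reads one table: entropy near 0.
        System.out.println(entropy(Map.of("orders-table", 500)));
        // Credentials probing four resources equally: about 2 bits of entropy.
        System.out.println(entropy(Map.of("a", 10, "b", 10, "c", 10, "d", 10)));
    }
}
```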

Feature extraction runs before the request reaches AWS services, adding minimal overhead while providing valuable signals.
We trained our model using a combination of normal access logs and simulated attacks. The normal data came from two weeks of production CloudTrail logs. The attack data came from security researchers who performed red team exercises.
We used SageMaker's built-in algorithms initially. Random Cut Forest worked well for unsupervised anomaly detection. It identified outliers without requiring labeled attack data. We later supplemented this with supervised models trained on known attack patterns.
The training pipeline ran weekly. It ingested new telemetry data. It retrained models with fresh patterns. It deployed updated models to the endpoint with zero downtime. This ensured the system adapted to changing behavior.

The model achieved 92 percent accuracy on known attacks. More importantly, it detected 81 percent of credential misuse attempts in our test suite. This was far better than static IAM policies, which detected zero percent.
Our architecture integrates with existing AWS identity services. We used Cognito for user authentication, IAM roles for service identities, and STS for temporary credentials.
The flow worked like this: a request arrived at our Java application, which extracted identity context and called the risk assessment Lambda. If the risk score was acceptable, the request proceeded; if not, it was blocked or challenged. The entire process added 50 to 100 milliseconds of latency.

This infrastructure was deployed automatically through CI/CD. We tested changes in staging before production, and rollback took minutes if issues arose.
Building this system taught us valuable lessons about ML-driven access control.
Static IAM policies cannot stop credential misuse. Behavioral analysis powered by machine learning can. We built a system that detects anomalous access in real time using AWS Lambda and SageMaker. It integrates with Java applications through Spring Security. It adds minimal latency while providing significant protection.
The key insight is that you do not need to know what an attack looks like. You need to know what normal looks like. Anything that deviates significantly deserves scrutiny. Machine learning excels at learning normal patterns. It spots deviations that humans would miss.
If you are running Java applications on AWS consider implementing behavioral access control. Start with telemetry collection. Build baselines. Train models. Deploy gradually. Monitor false positives closely. Tune thresholds carefully. The effort is worth it. Credential misuse is inevitable. Your defense should be too.
2026-03-23 17:00:48
:::info The Murder of Roger Ackroyd - The Goldfish Pond, by Agatha Christie, is part of HackerNoon’s Book Blog Post series. You can jump to any chapter in this book here.

By Agatha Christie
:::
We walked back to the house together. There was no sign of the inspector. Poirot paused on the terrace and stood with his back to the house, slowly turning his head from side to side.
“Une belle propriété,” he said at last appreciatively. “Who inherits it?”
His words gave me almost a shock. It is an odd thing, but until that moment the question of inheritance had never come into my head. Poirot watched me keenly.
“It is a new idea to you, that,” he said at last. “You had not thought of it before—eh?”
“No,” I said truthfully. “I wish I had.”
He looked at me again curiously.
“I wonder just what you mean by that,” he said thoughtfully. “Ah! no,” as I was about to speak. “Inutile! You would not tell me your real thought.”
“Every one has something to hide,” I quoted, smiling.
“Exactly.”
“You still believe that?”
“More than ever, my friend. But it is not easy to hide things from Hercule Poirot. He has a knack of finding out.”
He descended the steps of the Dutch garden as he spoke.
“Let us walk a little,” he said over his shoulder. “The air is pleasant to-day.”
I followed him. He led me down a path to the left enclosed in yew hedges. A walk led down the middle, bordered each side with formal flower beds, and at the end was a round paved recess with a seat and a pond of goldfish. Instead of pursuing the path to the end, Poirot took another which wound up the side of a wooded slope. In one spot the trees had been cleared away, and a seat had been put. Sitting there one had a splendid view over the countryside, and one looked right down on the paved recess and the goldfish pond.
“England is very beautiful,” said Poirot, his eyes straying over the prospect. Then he smiled. “And so are English girls,” he said in a lower tone. “Hush, my friend, and look at the pretty picture below us.”
It was then that I saw Flora. She was moving along the path we had just left and she was humming a little snatch of song. Her step was more dancing than walking, and in spite of her black dress, there was nothing but joy in her whole attitude. She gave a sudden pirouette on her toes, and her black draperies swung out. At the same time she flung her head back and laughed outright.
As she did so a man stepped out from the trees. It was Hector Blunt.
The girl started. Her expression changed a little.
“How you startled me—I didn’t see you.”
Blunt said nothing, but stood looking at her for a minute or two in silence.
“What I like about you,” said Flora, with a touch of malice, “is your cheery conversation.”
I fancy that at that Blunt reddened under his tan. His voice, when he spoke, sounded different—it had a curious sort of humility in it.
“Never was much of a fellow for talking. Not even when I was young.”
“That was a very long time ago, I suppose,” said Flora gravely.
I caught the undercurrent of laughter in her voice, but I don’t think Blunt did.
“Yes,” he said simply, “it was.”
“How does it feel to be Methuselah?” asked Flora.
This time the laughter was more apparent, but Blunt was following out an idea of his own.
“Remember the Johnny who sold his soul to the devil? In return for being made young again? There’s an opera about it.”
“Faust, you mean?”
“That’s the beggar. Rum story. Some of us would do it if we could.”
“Any one would think you were creaking at the joints to hear you talk,” cried Flora, half vexed, half amused.
Blunt said nothing for a minute or two. Then he looked away from Flora into the middle distance and observed to an adjacent tree trunk that it was about time he got back to Africa.
“Are you going on another expedition—shooting things?”
“Expect so. Usually do, you know—shoot things, I mean.”
“You shot that head in the hall, didn’t you?”
Blunt nodded. Then he jerked out, going rather red, as he did so:—
“Care for some decent skins any time? If so, I could get ’em for you.”
“Oh! please do,” cried Flora. “Will you really? You won’t forget?”
“I shan’t forget,” said Hector Blunt.
He added, in a sudden burst of communicativeness:—
“Time I went. I’m no good in this sort of life. Haven’t got the manners for it. I’m a rough fellow, no use in society. Never remember the things one’s expected to say. Yes, time I went.”
“But you’re not going at once,” cried Flora. “Not—not while we’re in all this trouble. Oh! please. If you go——”
She turned away a little.
“You want me to stay?” asked Blunt.
He spoke deliberately but quite simply.
“We all——”
“I meant you personally,” said Blunt, with directness.
Flora turned slowly back again and met his eyes.
“I want you to stay,” she said, “if—if that makes any difference.”
“It makes all the difference,” said Blunt.
There was a moment’s silence. They sat down on the stone seat by the goldfish pond. It seemed as though neither of them knew quite what to say next.
“It—it’s such a lovely morning,” said Flora at last. “You know, I can’t help feeling happy, in spite—in spite of everything. That’s awful, I suppose?”
“Quite natural,” said Blunt. “Never saw your uncle until two years ago, did you? Can’t be expected to grieve very much. Much better to have no humbug about it.”
“There’s something awfully consoling about you,” said Flora. “You make things so simple.”
“Things are simple as a rule,” said the big game hunter.
“Not always,” said Flora.
Her voice had lowered itself, and I saw Blunt turn and look at her, bringing his eyes back from (apparently) the coast of Africa to do so. He evidently put his own construction on her change of tone, for he said, after a minute or two, in rather an abrupt manner:—
“I say, you know, you mustn’t worry. About that young chap, I mean. Inspector’s an ass. Everybody knows—utterly absurd to think he could have done it. Man from outside. Burglar chap. That’s the only possible solution.”
Flora turned to look at him.
“You really think so?”
“Don’t you?” said Blunt quickly.
“I—oh, yes, of course.”
Another silence, and then Flora burst out:—
“I’m—I’ll tell you why I felt so happy this morning. However heartless you think me, I’d rather tell you. It’s because the lawyer has been—Mr. Hammond. He told us about the will. Uncle Roger has left me twenty thousand pounds. Think of it—twenty thousand beautiful pounds.”
Blunt looked surprised.
“Does it mean so much to you?”
“Mean much to me? Why, it’s everything. Freedom—life—no more scheming and scraping and lying——”
“Lying?” said Blunt, sharply interrupting.
Flora seemed taken aback for a minute.
“You know what I mean,” she said uncertainly. “Pretending to be thankful for all the nasty castoff things rich relations give you. Last year’s coats and skirts and hats.”
“Don’t know much about ladies’ clothes; should have said you were always very well turned out.”
“It’s cost me something, though,” said Flora in a low voice. “Don’t let’s talk of horrid things. I’m so happy. I’m free. Free to do what I like. Free not to——”
She stopped suddenly.
“Not to what?” asked Blunt quickly.
“I forget now. Nothing important.”
Blunt had a stick in his hand, and he thrust it into the pond, poking at something.
“What are you doing, Major Blunt?”
“There’s something bright down there. Wondered what it was—looks like a gold brooch. Now I’ve stirred up the mud and it’s gone.”
“Perhaps it’s a crown,” suggested Flora. “Like the one Mélisande saw in the water.”
“Mélisande,” said Blunt reflectively—“she’s in an opera, isn’t she?”
“Yes, you seem to know a lot about operas.”
“People take me sometimes,” said Blunt sadly. “Funny idea of pleasure—worse racket than the natives make with their tom-toms.”
Flora laughed.
“I remember Mélisande,” continued Blunt, “married an old chap old enough to be her father.”
He threw a small piece of flint into the goldfish pond. Then, with a change of manner, he turned to Flora.
“Miss Ackroyd, can I do anything? About Paton, I mean. I know how dreadfully anxious you must be.”
“Thank you,” said Flora in a cold voice. “There is really nothing to be done. Ralph will be all right. I’ve got hold of the most wonderful detective in the world, and he’s going to find out all about it.”
For some time I had felt uneasy as to our position. We were not exactly eavesdropping, since the two in the garden below had only to lift their heads to see us. Nevertheless, I should have drawn attention to our presence before now, had not my companion put a warning pressure on my arm. Clearly he wished me to remain silent.
But now he rose briskly to his feet, clearing his throat.
“I demand pardon,” he cried. “I cannot allow mademoiselle thus extravagantly to compliment me, and not draw attention to my presence. They say the listener hears no good of himself, but that is not the case this time. To spare my blushes, I must join you and apologize.”
He hurried down the path with me close behind him, and joined the others by the pond.
“This is M. Hercule Poirot,” said Flora. “I expect you’ve heard of him.”
Poirot bowed.
“I know Major Blunt by reputation,” he said politely. “I am glad to have encountered you, monsieur. I am in need of some information that you can give me.”
Blunt looked at him inquiringly.
“When did you last see M. Ackroyd alive?”
“At dinner.”
“And you neither saw nor heard anything of him after that?”
“Didn’t see him. Heard his voice.”
“How was that?”
“I strolled out on the terrace——”
“Pardon me, what time was this?”
“About half-past nine. I was walking up and down smoking in front of the drawing-room window. I heard Ackroyd talking in his study——”
Poirot stooped and removed a microscopic weed.
“Surely you couldn’t hear voices in the study from that part of the terrace,” he murmured.
He was not looking at Blunt, but I was, and to my intense surprise, I saw the latter flush.
“Went as far as the corner,” he explained unwillingly.
“Ah! indeed?” said Poirot.
In the mildest manner he conveyed an impression that more was wanted.
“Thought I saw—a woman disappearing into the bushes. Just a gleam of white, you know. Must have been mistaken. It was while I was standing at the corner of the terrace that I heard Ackroyd’s voice speaking to that secretary of his.”
“Speaking to Mr. Geoffrey Raymond?”
“Yes—that’s what I supposed at the time. Seems I was wrong.”
“Mr. Ackroyd didn’t address him by name?”
“Oh, no.”
“Then, if I may ask, why did you think——?”
Blunt explained laboriously.
“Took it for granted that it would be Raymond, because he had said just before I came out that he was taking some papers to Ackroyd. Never thought of it being anybody else.”
“Can you remember what the words you heard were?”
“Afraid I can’t. Something quite ordinary and unimportant. Only caught a scrap of it. I was thinking of something else at the time.”
“It is of no importance,” murmured Poirot. “Did you move a chair back against the wall when you went into the study after the body was discovered?”
“Chair? No—why should I?”
Poirot shrugged his shoulders but did not answer. He turned to Flora.
“There is one thing I should like to know from you, mademoiselle. When you were examining the things in the silver table with Dr. Sheppard, was the dagger in its place, or was it not?”
Flora’s chin shot up.
“Inspector Raglan has been asking me that,” she said resentfully. “I’ve told him, and I’ll tell you. I’m perfectly certain the dagger was not there. He thinks it was and that Ralph sneaked it later in the evening. And—and he doesn’t believe me. He thinks I’m saying it to—to shield Ralph.”
“And aren’t you?” I asked gravely.
Flora stamped her foot.
“You, too, Dr. Sheppard! Oh! it’s too bad.”
Poirot tactfully made a diversion.
“It is true what I heard you say, Major Blunt. There is something that glitters in this pond. Let us see if I can reach it.”
He knelt down by the pond, baring his arm to the elbow, and lowered it in very slowly, so as not to disturb the bottom of the pond. But in spite of all his precautions the mud eddied and swirled, and he was forced to draw his arm out again empty-handed.
He gazed ruefully at the mud upon his arm. I offered him my handkerchief, which he accepted with fervent protestations of thanks. Blunt looked at his watch.
“Nearly lunch time,” he said. “We’d better be getting back to the house.”
“You will lunch with us, M. Poirot?” asked Flora. “I should like you to meet my mother. She is—very fond of Ralph.”
The little man bowed.
“I shall be delighted, mademoiselle.”
“And you will stay, too, won’t you, Dr. Sheppard?”
I hesitated.
“Oh, do!”
I wanted to, so I accepted the invitation without further ceremony.
We set out towards the house, Flora and Blunt walking ahead.
“What hair,” said Poirot to me in a low tone, nodding towards Flora. “The real gold! They will make a pretty couple. She and the dark, handsome Captain Paton. Will they not?”
I looked at him inquiringly, but he began to fuss about a few microscopic drops of water on his coat sleeve. The man reminded me in some ways of a cat. His green eyes and his finicking habits.
“And all for nothing, too,” I said sympathetically. “I wonder what it was in the pond?”
“Would you like to see?” asked Poirot.
I stared at him. He nodded.
“My good friend,” he said gently and reproachfully, “Hercule Poirot does not run the risk of disarranging his costume without being sure of attaining his object. To do so would be ridiculous and absurd. I am never ridiculous.”
“But you brought your hand out empty,” I objected.
“There are times when it is necessary to have discretion. Do you tell your patients everything—everything, doctor? I think not. Nor do you tell your excellent sister everything either, is it not so? Before showing my empty hand, I dropped what it contained into my other hand. You shall see what that was.”
He held out his left hand, palm open. On it lay a little circlet of gold. A woman’s wedding ring.
I took it from him.
“Look inside,” commanded Poirot.
I did so. Inside was an inscription in fine writing:—
From R., March 13th.
I looked at Poirot, but he was busy inspecting his appearance in a tiny pocket glass. He paid particular attention to his mustaches, and none at all to me. I saw that he did not intend to be communicative.
\ \
:::info About HackerNoon Book Series: We bring you the most important technical, scientific, and insightful public domain books.
This book is part of the public domain. Astounding Stories. (2008). ASTOUNDING STORIES OF SUPER-SCIENCE, JULY 2008. USA. Project Gutenberg. Release date: OCTOBER 2, 2008, from https://www.gutenberg.org/cache/epub/69087/pg69087-images.html
This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org, located at https://www.gutenberg.org/policy/license.html.
:::
\
2026-03-23 16:39:29
Most AI failures in production aren’t technical—they’re organizational. Teams invest in accuracy and trust but ignore accountability: who owns the system, detects drift, and acts on it. Without a named individual, defined operating boundaries, and real escalation paths, AI systems fail silently until outcomes force attention. The solution isn’t better models or dashboards—it’s designing ownership into the system before deployment.
2026-03-23 00:00:30
Modern software development moves fast. Teams deploy code many times a day. New environments appear and disappear constantly. In this world, manual infrastructure setup simply does not scale.
\ For years, developers logged into dashboards, clicked through forms, and configured servers by hand. This worked for small projects, but it quickly became fragile. Every manual step increased the chance of mistakes. Environments drifted apart. Reproducing the same setup became difficult.
\ Infrastructure as Code (IaC) solves this problem. Instead of clicking through interfaces, developers define infrastructure using code. This approach makes infrastructure predictable, repeatable, and easy to automate.
\ In recent years, another approach has become popular alongside traditional IaC tools: using cloud APIs directly to create and manage infrastructure. This gives developers full control over how resources are provisioned and integrated into workflows.
\ This article explains what Infrastructure as Code means, why APIs are a powerful way to implement it, and how developers can automate cloud resources using simple scripts.
Infrastructure as Code means managing infrastructure using code instead of manual processes.
\ Instead of setting up servers, databases, and networks by hand, you define them in scripts or configuration files. These files describe the desired state of your infrastructure. A tool or script then creates and maintains that state automatically.
\ For example, instead of manually creating a database, you might define it in code like this:
database:
name: app_db
engine: postgres
version: 16
\ Once the code runs, the database is created automatically.
\ This approach provides several key benefits.
\ First, it improves consistency. Every environment is created from the same definition. Development, staging, and production environments stay aligned.
\ Second, it improves repeatability. If infrastructure fails, it can be recreated from code in minutes.
\ Third, it improves version control. Infrastructure definitions live in the same repositories as application code. Teams can review, track, and roll back changes.
\ Finally, it enables automation. Infrastructure can be created during deployments, tests, or CI/CD pipelines.
Before IaC became common, infrastructure management relied heavily on dashboards and manual configuration.
\ A developer would open a cloud console and perform steps by hand: create a server, attach storage, configure networking and firewall rules, install dependencies, and set environment variables.
\ These steps worked, but they introduced problems.
\ Manual configuration is hard to document. Even if teams write guides, small details are often missed. Over time, environments drift apart.
\ Manual processes also slow down development. Spinning up a new environment may take hours instead of seconds.
\ Even worse, manual infrastructure cannot easily be tested. If something breaks, reproducing the same conditions becomes difficult.
\ Infrastructure as Code removes these problems by turning infrastructure into something that can be scripted, tested, and automated.
Many people associate Infrastructure as Code with tools like Terraform or CloudFormation. These tools are powerful, but they are not the only option.
\ Every modern cloud platform exposes an API. That API allows developers to create resources programmatically.
\ This means infrastructure can be controlled directly from code using HTTP requests or command-line interfaces.
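\ As a sketch, creating a resource with a raw HTTP call might look like the script below. The endpoint, token variable, and payload fields are invented for illustration, and by default the script only prints the request (DRY_RUN=1) instead of calling a real provider.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical endpoint and payload -- not a real provider.
API_URL="https://api.example-cloud.test/v1"
PAYLOAD='{"name":"app_db","engine":"postgres","version":16}'

if [ "${DRY_RUN:-1}" = "1" ]; then
  # Dry run: show the request that would be sent.
  echo "POST ${API_URL}/databases"
  echo "payload: ${PAYLOAD}"
else
  # Real run: send the request with a bearer token.
  curl -fsS -X POST "${API_URL}/databases" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "${PAYLOAD}"
fi
```

\ Because the request is ordinary code, it can live in version control next to the application, which is the core of the IaC idea.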
\ Using APIs for IaC has several advantages.
\ First, it offers maximum flexibility. Developers can integrate infrastructure creation directly into applications, deployment scripts, or internal tools.
\ Second, it reduces tooling complexity. Instead of learning a specialized IaC language, teams can use languages they already know, such as Python, JavaScript, or Bash.
\ Third, it enables dynamic infrastructure. Scripts can create resources only when needed, scale them automatically, and remove them when work is complete.
\ For example, a test suite could automatically create a database, run tests, and delete the database afterwards. This keeps environments clean and reduces costs.
\ APIs essentially turn the cloud into a programmable platform.
Using APIs for infrastructure automation usually follows a simple workflow.
\ First, a script authenticates with the cloud platform using an API token or credentials.
\ Second, the script sends requests to create or modify resources such as applications, databases, or storage.
\ Third, the script captures identifiers or configuration values from the response.
\ Finally, those values are used in later steps, such as deployments or integrations.
\ Because these steps run in code, they can easily be included in CI/CD pipelines.
\ A typical pipeline might check out the code, provision or update the required resources, run the test suite against them, deploy the new version, and tear down any temporary environments.
\ This approach ensures every deployment follows the same process.
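\ The four-step workflow above can be sketched in a few lines of Bash. The create_resource function here is a stand-in for a real API call (for example a curl POST), and the JSON it returns is invented for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Step 1: authenticate -- real scripts read a token from the environment.
API_TOKEN="${API_TOKEN:-dummy-token}"

# Stand-in for a real API call such as: curl -X POST "$API_URL/databases" ...
# It returns JSON the way a cloud API would.
create_resource() {
  echo '{"id":"db-123","status":"creating"}'
}

# Step 2: send the request that creates the resource.
response="$(create_resource)"

# Step 3: capture the identifier from the response.
resource_id="$(echo "$response" | jq -r '.id')"

# Step 4: reuse the identifier in a later step, e.g. an env var for deploys.
echo "DATABASE_ID=${resource_id}"
```

\ Swapping the stub for a real request is the only change needed to run this inside a pipeline.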
A practical way to apply Infrastructure as Code through APIs is to use a command-line interface that directly interacts with a cloud platform’s API. This allows developers to automate infrastructure creation using scripts rather than dashboards.
\ One example is the Sevalla CLI, which exposes infrastructure operations as terminal commands that can be executed manually or inside automation pipelines.
\ Sevalla is a developer-centric PaaS designed to simplify your workflow, providing high-performance application hosting, managed databases, object storage, and static sites in one unified platform. Alternatives such as AWS and Azure offer a far broader catalog of services, but they come with complex CLI tools and heavier DevOps overhead; Sevalla aims instead for Heroku-like simplicity and ease of use.
\ You can install the CLI using the following shell command.
bash <(curl -fsSL https://raw.githubusercontent.com/sevalla-hosting/cli/main/install.sh)
\
Once installed, you can view the list of all available commands using the help command.
\ The first step is authentication. Make sure you have an account on Sevalla before using the CLI.
sevalla login
\ For automated environments such as CI/CD pipelines, authentication can be done with an API token. The token is stored in an environment variable so scripts can run without user interaction.
export SEVALLA_API_TOKEN="your-api-token"
\
Once authenticated, you can quickly view a list of your apps with the sevalla apps list command.
\ Your infrastructure can now be created directly from the command line. For example, a developer might start by creating an application service that will run the backend code.
sevalla apps create --name myapp --source privateGit --cluster <id>
\ This command provisions a new application resource on the platform. Instead of navigating through a web interface and filling out forms, the entire setup is performed through a single command.
\ Because the command can be stored in scripts or configuration files, it becomes part of the project’s infrastructure definition.
\ After creating the application, developers often need a database. That can also be provisioned programmatically.
sevalla databases create \
--name mydb \
--type postgresql \
--db-version 16 \
--cluster <id> \
--resource-type <id> \
--db-name mydb \
--db-password secret
\ This creates a PostgreSQL database with a defined version and credentials. In an automated workflow, the database creation step could run during environment setup for staging or testing.
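\ For example, the two provisioning commands shown above could be collected into a single setup script. This is only a sketch: the cluster and resource-type IDs are placeholders, the myapp-staging and mydb-staging names are invented, and the script skips provisioning unless an API token is present.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder IDs -- substitute values from your own account.
CLUSTER_ID="${CLUSTER_ID:-<cluster-id>}"
RESOURCE_TYPE="${RESOURCE_TYPE:-<resource-type-id>}"

provision_staging() {
  sevalla apps create --name myapp-staging --source privateGit --cluster "$CLUSTER_ID"
  sevalla databases create \
    --name mydb-staging \
    --type postgresql \
    --db-version 16 \
    --cluster "$CLUSTER_ID" \
    --resource-type "$RESOURCE_TYPE" \
    --db-name mydb \
    --db-password "$DB_PASSWORD"
}

# Only provision when credentials are available (e.g. inside CI).
if [ -n "${SEVALLA_API_TOKEN:-}" ]; then
  provision_staging
else
  echo "SEVALLA_API_TOKEN not set; skipping provisioning"
fi
```

\ Checked into the repository, this script becomes the environment's infrastructure definition.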
\ Once the application and database exist, the next step might be configuring environment variables so the application can connect to the database.
sevalla apps env-vars create <app-id> --key DATABASE_URL --value "postgres://..."
\ These configuration values can be injected during deployments, ensuring the application always receives the correct settings.
\ Deployment automation is another key part of Infrastructure as Code. Instead of manually triggering deployments, a script can deploy new code whenever a repository is updated.
sevalla apps deployments trigger <app-id> --branch main
\ This allows CI/CD systems to deploy new versions of the application automatically after tests pass.
\ Infrastructure automation also includes scaling and monitoring. For example, if an application needs more instances to handle traffic, the number of running processes can be updated programmatically.
sevalla apps processes update <process-id> --app-id <app-id> --instances 3
\ Metrics can also be retrieved through the CLI. This allows monitoring tools or scripts to analyze system performance.
sevalla apps processes metrics cpu-usage <app-id> <process-id>
\ Similarly, application metrics such as response time or request rates can be queried to detect performance issues.
\ Another common step in infrastructure automation is configuring domains. Instead of manually linking domains to applications, a script can add them during environment setup.
sevalla apps domains add <app-id> --name example.com
\ With these commands combined in scripts or pipelines, developers can fully automate the lifecycle of their infrastructure. A CI pipeline could create an application, provision a database, configure environment variables, deploy code, attach a domain, and monitor performance — all without human intervention.
\ Because every command supports JSON output, scripts can also capture values returned by the platform and reuse them in later steps. For example:
APP_ID=$(sevalla apps list --json | jq -r '.[0].id')
\ This ability to chain commands together makes it easy to build powerful automation workflows.
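\ To make that chaining concrete, here is the same jq pattern with a captured sample response standing in for sevalla apps list --json (the JSON shape is illustrative, not the platform's exact schema):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sample response standing in for: sevalla apps list --json
# (field names are illustrative, not the platform's exact schema)
apps_json='[{"id":"app-42","name":"myapp"},{"id":"app-43","name":"otherapp"}]'

# Extract the first app's id, exactly as the one-liner above does.
APP_ID="$(echo "$apps_json" | jq -r '.[0].id')"

# The captured value can then feed a later command, for example:
#   sevalla apps env-vars create "$APP_ID" --key DATABASE_URL --value "postgres://..."
echo "Using app: ${APP_ID}"
```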
\ In practice, teams often place these commands inside deployment scripts or pipeline steps. Whenever code is pushed to a repository, the pipeline automatically provisions or updates the infrastructure needed to run the application.
\ This approach demonstrates how APIs and automation tools can turn infrastructure into something developers manage the same way they manage application code, through scripts, version control, and automated workflows.
One of the biggest benefits of Infrastructure as Code is developer productivity.
\ Developers no longer need to wait for infrastructure changes or manually configure environments.
\ Instead, infrastructure becomes part of the development workflow.
\ When a new feature requires a service, the developer simply adds the infrastructure definition to the repository. The pipeline then creates it automatically.
\ This reduces delays and keeps development moving quickly.
\ It also makes onboarding easier. New team members can spin up a full environment with a single command.
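\ That single command could be a small bootstrap script composing the CLI commands from this article. Everything here is a sketch: the app name, branch, and environment-variable values are placeholders, and the script does nothing unless an API token is set.

```shell
#!/usr/bin/env bash
set -euo pipefail

# bootstrap.sh -- hypothetical one-command setup for a new team member.
# App name, branch, and env-var values below are placeholders.
bootstrap() {
  sevalla apps create --name myapp-dev --source privateGit --cluster "$CLUSTER_ID"
  APP_ID="$(sevalla apps list --json | jq -r '.[0].id')"
  sevalla apps env-vars create "$APP_ID" --key DATABASE_URL --value "$DATABASE_URL"
  sevalla apps deployments trigger "$APP_ID" --branch main
}

if [ -n "${SEVALLA_API_TOKEN:-}" ]; then
  bootstrap
else
  echo "Set SEVALLA_API_TOKEN first, then re-run this script."
fi
```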
Cloud infrastructure continues to evolve toward automation and programmability.
\ Platforms increasingly expose APIs that allow every resource to be created, configured, and monitored through code.
\ This trend aligns naturally with the way developers already work.
\ Applications are built with code. Deployments are automated with code. It makes sense that infrastructure should also be defined with code.
\ Infrastructure as Code with APIs takes this idea even further. It allows infrastructure to be embedded directly into development workflows, pipelines, and internal tools.
\ The result is faster development, fewer configuration errors, and more reliable systems.
Infrastructure as Code has transformed how teams manage cloud environments.
\ By replacing manual configuration with code, organizations gain consistency, automation, and repeatability.
\ Using APIs to control infrastructure adds another level of flexibility. Developers can integrate infrastructure directly into scripts, pipelines, and applications.
\ This approach turns the cloud into a programmable platform.
\ As systems grow more complex and deployment cycles accelerate, the ability to automate infrastructure will only become more important.
\ For modern development teams, treating infrastructure as code is no longer optional. It is the foundation of reliable and scalable software delivery.
\ Hope you enjoyed this article. Learn more about me by visiting my LinkedIn.
\