2026-04-13 20:11:01
I have a habit. Every time I sit down to code I end up googling the same things: how to center a div, the Rust test module boilerplate, `docker run` flags, ffmpeg commands. Every. Single. Time.
So I built sinbo, a CLI snippet manager that lets you store code once and retrieve it anywhere.
```bash
sinbo add docker-run -t docker   # save it once
sinbo get docker-run             # get it anywhere
sinbo copy docker-run            # or copy to clipboard
```
Snippets are stored locally as plain files in your config directory. No cloud, no account, no sync. Just files you own.
You can tag snippets, add descriptions, and filter by tag:
```bash
sinbo add nginx-conf -t infra server -d "default nginx config"
sinbo list -t infra
sinbo search "nginx"
```
The feature I am most proud of: sensitive snippets like API keys and tokens can be encrypted at rest using AES-256-GCM with Argon2id key derivation.
```bash
sinbo add github-token --encrypt
sinbo get github-token           # prompts for password
```
The plaintext never touches disk. Only the .enc file is stored.
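Conceptually, the flow is: derive a key from the password with a memory-hard KDF, then encrypt with AES-256-GCM under a fresh nonce. Here is a rough Python sketch of the key-derivation step; it uses the stdlib's `scrypt` purely as a stand-in for Argon2id (sinbo itself uses the Rust `argon2` and `aes-gcm` crates, and the parameters below are illustrative):

```python
import hashlib
import os

def derive_key(password: bytes, salt: bytes) -> bytes:
    # A memory-hard KDF turns a password into a 32-byte AES-256 key.
    # (sinbo uses Argon2id; scrypt stands in here because it ships
    # with Python's standard library.)
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)            # stored alongside the .enc file
key = derive_key(b"hunter2", salt)
assert len(key) == 32            # ready for AES-256-GCM with a fresh nonce
```

The salt is not secret and can live next to the ciphertext; it only ensures two users with the same password get different keys.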
Snippets can contain placeholders using a custom SINBO:var: syntax:
```bash
# snippet content:
docker run -p SINBO:port: -it SINBO:name:

# usage:
sinbo get docker-run -a port=8080 name=myapp
# output: docker run -p 8080 -it myapp
```
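The substitution itself is simple. A rough Python sketch of how `SINBO:var:` placeholders might be resolved (the real sinbo implementation is in Rust and may differ in details):

```python
import re

def render(snippet: str, args: dict) -> str:
    # Replace each SINBO:name: placeholder with the supplied value;
    # unknown names are left intact so the user can spot them.
    return re.sub(
        r"SINBO:(\w+):",
        lambda m: args.get(m.group(1), m.group(0)),
        snippet,
    )

print(render("docker run -p SINBO:port: -it SINBO:name:",
             {"port": "8080", "name": "myapp"}))
# → docker run -p 8080 -it myapp
```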
Since sinbo get prints to stdout it composes naturally with other tools:
```bash
sinbo get deploy-script | sh
sinbo get query | psql mydb
sinbo get docker-run -a port=8080 | sh
```
Tab completion for snippet names works out of the box for bash, zsh, fish and powershell:
```bash
echo 'eval "$(sinbo completions bash)"' >> ~/.bashrc && source ~/.bashrc
```
Then sinbo get [TAB] shows your actual snippet names.
This was my first time implementing real encryption in Rust. The aes-gcm and argon2 crates made it approachable but I still had to understand the underlying concepts to use them correctly. Building sinbo pushed me to actually learn AES-256-GCM, why nonces matter, and what Argon2id does.
```bash
cargo install sinbo
```
github: https://github.com/opmr0/sinbo
Would love any feedback or ideas❤️.
2026-04-13 20:10:43
On April 13, 2026, the Hyperbridge ISMP (Interoperability State Machine Protocol) gateway on Ethereum was exploited. An attacker forged an ISMP PostRequest by exploiting a critical edge-case bug in the Merkle Mountain Range (MMR) proof verification logic combined with missing proof-to-request binding and weak authorization checks in the TokenGateway contract.
The result: 1,000,000,000 bridged DOT tokens were minted in a single atomic transaction, which were immediately swapped for approximately 108.2 ETH ($237,000 – $242,000). Native Polkadot remained completely unaffected. The EthereumHost contract has since been frozen.
Exploit Transaction: 0x240aeb9a8b2aabf64ed8e1e480d3e7be140cf530dc1e5606cb16671029401109
Attacker EOA: 0xC513E4f5D7a93A1Dd5B7C4D9f6cC2F52d2F1F8E7
Master Contract: 0x518AB393c3F42613D010b54A9dcBe211E3d48f26
Helper Contract: 0x31a165a956842aB783098641dB25C7a9067ca9AB
Target Token: 0x8d010bf9C26881788b4e6bf5Fd1bdC358c8F90b8 (Bridged DOT – ERC-6160)
Profit: ~108.2 ETH
Gas Cost: ~0.000339 ETH
The exploit was made possible by a dangerous combination of three vulnerabilities:
1. MMR Library Edge-Case Bug
The Merkle Mountain Range library contained a boundary-condition flaw in leavesForSubtree() and CalculateRoot(). When leafCount == 1, supplying an out-of-range leaf_index (e.g., 1) caused the function to silently drop the forged leaf. The verifier then promoted the next element in the proof array, a stale but legitimate historical root, directly to the computed root position.
This allowed the system to accept a completely forged payload while still passing proof verification.
Fix:
```solidity
function leavesForSubtree(uint256 leafCount, uint256 leafIndex) internal pure returns (uint256) {
    if (leafIndex >= leafCount) {
        revert InvalidLeafIndex(leafIndex, leafCount);
    }
    // existing logic
}

function CalculateRoot(...) public pure returns (bytes32) {
    require(leafIndex < leafCount, "MMR: leaf index out of bounds");
    // ...
}
```
Recommendation: Always add strict bounds checking and unit tests for edge cases where leafCount == 0 or leafCount == 1.
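To make the failure mode concrete, here is a deliberately simplified Python model of the bug; it is a toy (plain SHA-256, not the real MMR library), but it shows the shape of the flaw: with no bounds check, an out-of-range index lets the attacker's proof element become the "computed" root, so any payload verifies.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def calculate_root_buggy(leaf, leaf_index, leaf_count, proof):
    # Toy model of the flaw: for a single-leaf tree, an in-range index
    # hashes the leaf, but an out-of-range index silently drops the
    # leaf and promotes the first proof element to the computed root.
    if leaf_count == 1 and leaf_index == 0:
        return h(leaf)
    return proof[0]  # forged leaf never enters the computation

def calculate_root_fixed(leaf, leaf_index, leaf_count, proof):
    # The fix: strict bounds check before any computation.
    if leaf_index >= leaf_count:
        raise ValueError("MMR: leaf index out of bounds")
    return calculate_root_buggy(leaf, leaf_index, leaf_count, proof)

known_root = h(b"legitimate historical leaf")
forged_leaf = b"mint 1,000,000,000 DOT"

# Attacker claims leaf_index=1 in a tree with leaf_count=1 and places
# the known historical root in the proof array: verification "passes".
assert calculate_root_buggy(forged_leaf, 1, 1, [known_root]) == known_root
```

With the bounds check in place, the same call raises instead of verifying.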
2. Missing Proof-to-Request Binding
HandlerV1 only checked that the request commitment hash (request.hash()) had not been consumed before. However, the proof verification did not cryptographically bind the submitted request payload to the validated MMR proof.
As a result, an attacker could pair any valid historical proof with a completely different malicious request body.
Fix:
```solidity
function handlePostRequests(PostRequest calldata request, bytes[] calldata proof) external {
    // Bind proof to the exact request
    bytes32 commitment = keccak256(abi.encode(request, proof));
    require(!consumed[commitment], "ISMP: already consumed");
    require(verifyProof(request, proof), "ISMP: invalid proof");
    // ...
}
```
3. Weak Authorization in TokenGateway
Governance actions in TokenGateway used only a shallow source field check instead of the full authenticate(request) modifier:
```solidity
function handleChangeAssetAdmin(PostRequest calldata request) internal {
    if (!request.source.equals(IIsmpHost(_params.host).hyperbridge()))
        revert UnauthorizedAction();
    // CRITICAL: authenticate(request) was missing
    IERC6160Ext20(erc6160Address).changeAdmin(newAdmin);
}
```
Additionally, the challengePeriod was set to 0, removing any delay-based safety window.
Fix:
```solidity
function handleChangeAssetAdmin(PostRequest calldata request) internal {
    authenticate(request); // Full authentication
    require(challengePeriod > 0, "Challenge period must be enabled");
    IERC6160Ext20(erc6160Address).changeAdmin(newAdmin);
}
```
4. Dangerous ERC-6160 Privilege Model
The ERC-6160 token standard granted the new admin immediate and unrestricted MINTER_ROLE and BURNER_ROLE upon calling changeAdmin(). There was no time-lock, multi-signature requirement, or secondary confirmation.
Once the attacker’s helper contract became the admin, it could mint 1 billion DOT tokens in a single call.
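The standard mitigation is to put privileged changes behind a time-lock. A Python sketch of the shape of that mechanism (the delay value is illustrative; on-chain this would be a timelock controller contract, not application code):

```python
class TimelockedAdmin:
    DELAY = 24 * 3600  # proposed admin changes take effect after 24 hours

    def __init__(self, admin: str):
        self.admin = admin
        self.pending = None  # (new_admin, eta)

    def propose(self, new_admin: str, now: float) -> None:
        self.pending = (new_admin, now + self.DELAY)

    def execute(self, now: float) -> None:
        if self.pending is None:
            raise RuntimeError("nothing proposed")
        new_admin, eta = self.pending
        if now < eta:
            # the safety window: watchers can react to a malicious proposal
            raise RuntimeError("timelock not elapsed")
        self.admin, self.pending = new_admin, None

gov = TimelockedAdmin("hyperbridge")
gov.propose("helper-contract", now=0)
try:
    gov.execute(now=60)      # one minute later: still locked
except RuntimeError:
    pass
gov.execute(now=24 * 3600)   # only after the full delay
```

A 24-hour window would have turned an instant, atomic mint into a publicly visible pending change.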
Fix Recommendations:

| # | Risk | Vulnerability | Recommended Fix | Priority |
|---|---|---|---|---|
| 1 | MMR Proof | Edge case when `leafCount == 1` | Enforce strict validation (`leafIndex < leafCount`) and add thorough tests | Critical |
| 2 | Proof Handling | No binding between proof and request | Use a cryptographic commitment over (request + proof) | Critical |
| 3 | Authorization | Only a shallow source check is performed | Implement the full `authenticate(request)` modifier | Critical |
| 4 | Governance | `challengePeriod = 0` | Enforce a minimum delay of 1 hour | High |
| 5 | Token Admin | Instant minting rights | Add a time-lock and separate role management | High |
| 6 | Architecture | Single gateway handling all assets | Split into a separate TokenGateway per asset | Medium |
- Implement real-time monitoring for large mints from the zero address.
- Conduct thorough audits of light clients and consensus integrations.
- Run a generous bug bounty program focused on proof verification paths.
- Always test boundary conditions in Merkle-based verification logic.
This exploit highlights a critical truth in cross-chain bridge security: proof verification and authorization must both be bulletproof. A single weakness in either layer can lead to catastrophic consequences.
For developers building bridges, light clients, or token gateways:
- Never skip strict bounds checking.
- Always cryptographically bind proofs to their payloads.
- Treat governance actions with the same (or higher) security standards as asset transfers.
2026-04-13 20:10:00
This guide explains how to create a virtual machine for OpenMediaVault using Oracle VirtualBox. It is intended for beginners who want to build a NAS environment in a virtualized setup on a Windows host.
Although I was already somewhat familiar with VirtualBox, I had to learn OpenMediaVault in 2025 for a networking internship, an experience that was both rewarding and frustrating. So I thought I'd compile what I learned to help others.
The end goal is to set up Docker Compose and SMB/CIFS on this machine, so this is the first article in a series that forms a complete tutorial.
Let us dive into the Virtualverse...
Prerequisites:
Latest version of Oracle VirtualBox available on the official website: https://download.virtualbox.org/virtualbox/7.1.10/VirtualBox-7.1.10-169112-Win.exe;
Latest version of an image from Open Media Vault, on the official website: https://www.openmediavault.org/download.html;
A free partition on your Hard Drive, which you will let OMV manage (optional).
Note: always use a stable version of OMV, and download only from official sites. Beyond that, use your own judgment about which sources you trust.
VirtualBox installs like any basic application on Windows. After selecting where you want VirtualBox to install, you will need to select the add-on components for your virtual machines:
USB Support: A package containing special drivers for your Windows host machine, required by VirtualBox to recognize USB devices in your virtual machines.
Networking: A package containing additional network drivers for the host machine, required by VirtualBox to support Bridged Networking. This mode will be explained in the configuration;
Python support: This package contains support for Python scripts for VirtualBox. We won't need this, so free-will to you if you know what you're doing.
After making your selection, continue with the installation, paying attention to any prompts that appear while it runs.
Once VirtualBox is installed, launch the application from the desktop shortcut named "Oracle VirtualBox".
After it is executed, you will be presented with a new window:
And from there, you click on the "New" icon to dive into the Virtualverse.
(You can always zoom in, huh)
Now take my hand and walk with me:
In "Name": You will name your new Virtual Machine (e.g. OpenMediaVault);
In "Folder": You will specify the host location of your Virtual Machine(s) (if you delete the folder where it is installed, the VM goes with it);
In "ISO Image", you will select the Open Media Vault ISO file that you have downloaded;
Next, you will select the System Type in "Type". OpenMediaVault runs on Linux;
In "Subtype", you will select "Debian";
In "Version", you will select the 64-bit option, since the image you downloaded is an "amd64" image.
"Unattended Installation" is a process that allows VirtualBox to manage the installation of a Guest System without requiring user intervention.
The catch? This process does not work with OpenMediaVault, which is a Debian-based Linux distribution customized with a web interface and its own services.
So...huh...You can skip that section entirely.
This part covers the memory and processing power allocated to the Virtual Machine: the more OpenMediaVault services you run, the more resources you should allocate.
Since we only have two services to install for this tutorial, you can limit the Processors to 2 cores and the Base Memory to 2 GB (2048 MB), which is, to be honest, the bare minimum for anything nowadays.
Now about the Enable EFI option...
This option lets you choose whether to enable UEFI mode for your virtual machine, so it boots via UEFI rather than BIOS. Although OMV 7 supports UEFI, it is generally recommended to leave this option disabled unless your ISO supports this mode, your host disk boots this way, or you specifically want your VM to mimic a UEFI environment.
It is not necessary for a standard installation, which is what we're doing.
This section is about configuring the (virtual) hard drive that will store the system and the data you create on it. It lives in the folder you specified in the "Folder" option of the "Name and Operating System" section.
Systems created by VirtualBox run on ".vdi" files (VirtualBox Disk Image, although you can choose a different format in this section's options) that replicate real physical hard drives, so capacity can be allocated to them. Even while the disk is empty, the space you allocate is the capacity the guest system will see, carved out of your host disk's capacity.
"Pre-allocate Full Size": A capacity allocation option, which specifies whether the Virtual Disk should be created with its maximum capacity from the outset or whether it will gradually increase in size until it reaches the maximum limit predefined in the "Hard Disk File Location and Size" option bar;
"Use an existing Virtual Hard Disk File": This option is available when you have an available Virtual Hard Disk. In your case, unless you have one that you want to use, don't check it;
"Do not add a Virtual Hard Disk": Allows you not to create a Virtual Hard Disk for the installation of your virtual machine. In the case of this installation, it is better to avoid checking it since OMV will not be able to detect a Hard Drive during its installation... unless you add a Hard Drive later in the VirtualBox options.
Click finish after selecting the appropriate options, as in the example above.
And after all that configuration, your new VM will be up and ready to go. All you have to do is run it from the "Start" option.
Which will take you to the Open Media Vault setup home page:
Since we'll be configuring Docker Compose and SMB/CIFS, we'll come back to this.
But for now, create a checkpoint, save the game, close the Virtual Machine, as we will need to configure other things related to the interaction between this VM and the Host System, first.
2026-04-13 20:09:25
Machine learning (ML) is the subset of artificial intelligence (AI) focused on algorithms that can learn the patterns of training data and, subsequently, make accurate inferences about new data. This pattern recognition ability enables machine learning models to make decisions or predictions without explicit, hard-coded instructions.
Examples of Machine Learning.
1. Personal assistants and voice assistants.
ML powers popular virtual assistants like Amazon Alexa and Apple Siri. It enables speech recognition, natural language processing (NLP), and text-to-speech conversion. When you ask a question, ML not only understands your intent but also searches for relevant answers or recalls similar past interactions for more personalized responses.
2. Email Filtering and Management.
ML algorithms in Gmail automatically categorize emails into Primary, Social, and Promotions tabs while detecting and moving spam to the spam folder. Beyond basic rules, ML tools classify incoming emails, route them to the right team members, extract attachments, and enable automated personalized replies.
3. Transportation and Navigation.
Machine Learning has transformed modern transportation in several ways:
Google Maps uses ML to analyze real-time traffic conditions, calculate the fastest routes, suggest nearby places to explore, and provide accurate arrival time predictions.
Ride-sharing apps like Uber and Bolt apply ML to match riders with drivers, dynamically set pricing (surge pricing), optimize routes based on live traffic, and predict accurate ETAs.
Self-driving cars (e.g., Tesla) rely heavily on computer vision and deep learning models. These systems process data from cameras and sensors in real time to understand their surroundings and make instant driving decisions.
Machine Learning generally falls into two main learning paradigms: Supervised Learning and Unsupervised Learning. These differ based on the type of data they use and the objective they aim to achieve.
1. Supervised Learning
Supervised learning trains a model using labeled data, where every input example is paired with the correct output (label). The goal is to learn the mapping between inputs and outputs so the model can accurately predict outcomes on new, unseen data.
Common Tasks:
Classification — Predict discrete categories (e.g., spam/not spam, cat/dog, approve/reject loan)
Regression — Predict continuous values (e.g., house price, temperature, sales forecast)
How it works:
In supervised learning, the model learns from examples where the answers are already known. It is given inputs (features) together with the correct outputs (labels), and over time it identifies patterns in the data. As it trains, it continuously adjusts itself to reduce the difference between its predictions and the actual answers.
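That adjustment loop can be shown in a few lines of plain Python: a toy model fits y = w*x by repeatedly nudging w to shrink the squared error between predictions and labels (the data and learning rate are invented for illustration):

```python
# Toy supervised learning: labeled examples where y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]   # features (inputs)
ys = [2.0, 4.0, 6.0, 8.0]   # labels (correct answers)

w, lr = 0.0, 0.01           # start with a wrong guess for w
for _ in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad          # adjust w to reduce the error

print(round(w, 3))  # → 2.0
```

Every mainstream training algorithm, from linear regression to deep networks, is a more sophisticated version of this same "predict, measure error, adjust" loop.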
Real-world examples: spam detection (classifying an email as spam or not spam) and house price prediction (predicting a price from size, bedrooms, and location).
Analogy:
Think of a student learning with a teacher. The teacher shows examples and clearly labels them — “this is a cat,” “this is a dog.” Over time, the student begins to recognize the differences and can correctly identify new animals on their own.
2. Unsupervised Learning
Unsupervised learning works with unlabeled data. The model must discover hidden patterns, structures, or groupings on its own — without any “correct answers” provided.
Common tasks:
Clustering — grouping similar data points together (e.g., customer segmentation)
Association — finding relationships in data (e.g., people who buy X also buy Y)
Dimensionality reduction — simplifying data while keeping the most important information
Real-world examples:
Customer segmentation in retail (grouping shoppers based on buying habits),
Fraud detection in mobile money or banking (flagging unusual transactions),
Product recommendations on e-commerce sites (suggesting items similar to what you’ve viewed),
Music or movie suggestions based on what you like (Spotify, Prime Video).
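Clustering can be demonstrated with a tiny pure-Python k-means on one-dimensional data. Note that no labels are supplied; the algorithm discovers the two groups on its own (the data points are invented):

```python
# Toy unsupervised clustering: 1-D k-means with k=2.
points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.7]
centroids = [points[0], points[3]]  # naive initialization

for _ in range(10):
    clusters = [[], []]
    for p in points:
        # assign each point to its nearest centroid
        i = min((abs(p - c), i) for i, c in enumerate(centroids))[1]
        clusters[i].append(p)
    # move each centroid to the mean of its cluster
    centroids = [sum(c) / len(c) for c in clusters]

print([round(c, 2) for c in centroids])  # → [1.0, 10.07]
```

The two centroids settle on the two natural groups in the data, exactly the behavior behind customer segmentation.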
| Aspect | Supervised Learning | Unsupervised Learning |
|---|---|---|
| Data used | Labeled (features + answers) | Unlabeled (just features, no answers) |
| Goal | Predict an output / category | Find hidden patterns or groupings |
| Task types | Classification & regression | Clustering, association, dimensionality reduction |
| How hard to evaluate | Easy – you have ground truth to compare | Trickier – no "right answer" to check against |
| Real‑world examples | Spam detection, price prediction | Customer segments, fraud detection |
| Complexity | Generally simpler | More complex (no teacher to guide) |
Key Takeaway:
Use Supervised Learning when you have labeled historical data and want to make predictions.
Use Unsupervised Learning when you have lots of raw data and want to discover insights or patterns you didn’t already know.
Modern systems often combine both. For example, many Large Language Models (LLMs) use self‑supervised learning during pre‑training, followed by supervised fine‑tuning and RLHF (reinforcement learning from human feedback).
If you're doing supervised learning, you'll run into two terms constantly: features and labels. Here's what they actually mean.
What is a Feature?
A feature is any piece of information you feed the model, a clue that helps it make a prediction. Features are also called independent variables, predictors, or attributes.
Examples of Features:
Features can be numerical (age, price), categorical (gender, color), or text-based. In the house-price example, the features are size, bedrooms, and location.
What is a Label?
A label is the answer the model tries to guess, the output or correct answer. It is also called the target or dependent variable.
Examples of Labels:
In the house-price example, the label is the price tag; in spam detection, it is spam/not spam. Labels are only available in supervised learning because they represent the ground truth.
| Aspect | Features (the inputs) | Label (the answer) |
|---|---|---|
| What it is | What the model uses to learn | What the model tries to guess |
| Other names | Independent variables, predictors | Target variable, dependent variable |
| Do you always have it? | Yes – in any dataset | Only in supervised learning |
| House price example | Size, bedrooms, location | The price tag |
Key Takeaway:
In code, you will usually see features and labels written as X (features) and y (label).
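The house-price example in that shape (the numbers are invented for illustration):

```python
# Features (X): what the model uses to learn
X = [
    [120, 3, "suburb"],   # size in m², bedrooms, location
    [85,  2, "city"],
    [200, 4, "rural"],
]

# Label (y): the answer the model tries to guess, i.e. the price tag
y = [250_000, 310_000, 180_000]

assert len(X) == len(y)   # one label per row of features
```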
2026-04-13 20:07:17
Most web apps are optimized for the happy path: fast internet, reliable servers, users sitting at their desks. But the real world looks different. Field technicians in basements. Property managers in underground parking garages. Service workers on remote sites. For these users, "your connection was lost — please try again" is not an acceptable error message.
This is why we built smallstack on a local-first architecture. Here's what that actually means in practice.
"Offline mode" is what most apps call it when they let you view cached content without an internet connection. You can browse — but the moment you try to save something, you hit a wall.
Local-first is different in a fundamental way: the local device is the primary storage location. The app reads from and writes to local storage first. The server is a sync partner, not the single source of truth.
The practical difference: a local-first app lets you create and edit data while offline, not just browse cached content, and your changes are saved the moment you make them.
The challenge with local-first isn't the offline part — that's straightforward. The hard part is sync: how do you reconcile changes made by multiple users who were offline at the same time?
smallstack uses SignalDB for incremental sync. The key design decision: instead of syncing entire collections, we sync only the deltas — changes since the last known sync point. Each change carries a timestamp. On reconnect, the sync protocol exchanges only what changed since the last successful sync.
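In pseudocode-ish Python, the delta selection is just a filter on the last sync point (the field names here are illustrative, not SignalDB's actual API):

```python
def changes_since(records, last_sync):
    # Incremental sync: exchange only records modified after the last
    # successful sync point, never the whole collection.
    return [r for r in records if r["updated_at"] > last_sync]

records = [
    {"id": "a", "updated_at": 100.0},
    {"id": "b", "updated_at": 205.0},   # changed while offline
]
print(changes_since(records, last_sync=200.0))  # only "b" is pushed
```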
This keeps sync payloads small and reconnects fast, even over slow or flaky connections.
When two users edit the same field while offline, you have a conflict. Our approach for most cases is last-write-wins with timestamp ordering. The most recent change (by device clock) wins.
This covers the majority of real-world cases. For more complex scenarios — like numeric counters that both users incremented — the data model can define merge strategies. The goal is predictable behavior that teams can reason about, not invisible magic.
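A minimal sketch of the last-write-wins merge, assuming each field value carries the timestamp of its last change (this is a simplification of any real sync engine):

```python
def merge_lww(local: dict, remote: dict) -> dict:
    # Field-level last-write-wins: keep whichever write has the later
    # timestamp; fields only one side knows about are simply kept.
    merged = dict(local)
    for field, (value, ts) in remote.items():
        if field not in merged or ts > merged[field][1]:
            merged[field] = (value, ts)
    return merged

local  = {"status": ("done", 1712998000)}
remote = {"status": ("in_progress", 1712997000), "note": ("checked", 1712999000)}
merged = merge_lww(local, remote)
print(merged["status"][0], merged["note"][0])  # → done checked
```

"status" keeps the later local write; "note" is added from the remote side. Because the merge is a pure function of the two states, every device converges to the same result.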
Here's a concrete example of how this plays out:
A service technician arrives at an industrial site with no mobile coverage. They open the smallstack PWA (installed from browser — no app store needed). The app loads with all their assigned tasks already cached from the last sync.
They complete three inspections, attach photos, add notes. All changes are written to local IndexedDB immediately — no spinners, no "saving..." indicators. The UI is instant.
An hour later, back in the parking lot with signal, the app reconnects. SignalDB runs the incremental sync: the three new inspection records and their attached file references are pushed to the server. Within seconds, the office team can see the completed inspections on their dashboard.
No manual "sync now" button. No lost data. No double-entry.
For consumer apps, offline support is a nice-to-have. For business-critical workflows, it's different: if your team can't record data, they either lose it or work around your tools. Both outcomes are bad.
Local-first removes this failure mode entirely. The app works regardless of connection state. This makes it genuinely suitable for field service, property management, and other on-site workflows where connectivity is unreliable.
Offline capability is the headline, but local-first has a performance side effect that benefits everyone — including users who are always online.
When data lives locally, read operations are instant. Filtering a list of 5,000 records? No network round-trip. The UI is as fast as the local machine, not as fast as the network.
This changes how you can design UIs. You can afford live filtering, real-time search, and reactive dashboards without debouncing every keystroke or showing skeleton loaders everywhere.
smallstack ships this local-first architecture as part of its no-code platform — you get offline capability and real-time sync without writing a line of sync code yourself. The widget-based app builder sits on top of this foundation.
If you want to dig deeper into the technical side, check out the smallstack documentation or try building a simple app on the free tier to see the sync behavior in action on smallstack.com
Building something local-first? Have questions about the sync approach? Drop a comment — happy to get into the details.
2026-04-13 19:59:48
Most fintech startups don’t fail because of bad ideas.
They fail because their backend can’t handle trust, scale, or compliance when it matters most.
If your system breaks during your first spike in transactions—or worse, during a regulatory audit—you’re not just losing users. You’re losing credibility.
Founders often treat backend architecture as a “later” problem.
They start with:
- A quick MVP
- Basic APIs
- Minimal security layers
It works… until:
- Transactions increase
- Payment failures start happening
- Compliance requirements kick in
- Data consistency becomes a nightmare
Real-world mistake:
We’ve seen fintech startups rebuild their entire backend within 8–12 months because they didn’t plan for ledger accuracy or audit trails. That’s expensive, risky, and avoidable.
Design your fintech backend like a financial system—not just another app.
That means building in ledger accuracy, auditability, and strict consistency from day one.
You don't need over-engineering.
You need intentional architecture.
1. Ledger-First Data Model
Your backend should revolve around a double-entry ledger system.
Why? Because a double-entry ledger makes every balance derivable from its full transaction history, so money can never silently appear or disappear. Think bank statements and audit trails, not a mutable balance column.
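A minimal double-entry sketch in Python, just to show the core invariant (account names and amounts are invented): every transaction is a set of postings that must sum to zero, so money can move but never appear or vanish.

```python
from dataclasses import dataclass

@dataclass
class Posting:
    account: str
    amount_cents: int  # positive = debit, negative = credit

def post_transaction(ledger: list, postings: list) -> None:
    # The double-entry invariant: postings in a transaction sum to zero.
    if sum(p.amount_cents for p in postings) != 0:
        raise ValueError("unbalanced transaction")
    ledger.extend(postings)

def balance(ledger: list, account: str) -> int:
    # Balances are derived from history, never stored and mutated.
    return sum(p.amount_cents for p in ledger if p.account == account)

ledger = []
# $10.00 payment from a user wallet to a merchant
post_transaction(ledger, [Posting("user:alice", -1000),
                          Posting("merchant:acme", 1000)])
print(balance(ledger, "merchant:acme"))  # → 1000
```

Because the ledger is append-only, it doubles as the audit trail regulators will ask for.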
2. Microservices (But Not Too Early)
Split your backend into core services: payments, ledger, notifications, and fraud checks.
Start modular, then evolve into microservices when needed.
3. Event-Driven Architecture
Use tools like Kafka, RabbitMQ, or a managed queue (e.g., SQS).
The benefit: services stay decoupled, and every event can be processed, retried, and audited independently.
Example:
A payment triggers:
→ Ledger update
→ Notification
→ Fraud check
All independently.
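A toy in-process version of that fan-out (a real system would use a broker like Kafka; the event and handler names are illustrative):

```python
# Minimal publish/subscribe: one event, several independent handlers.
subscribers = {}

def subscribe(event: str, handler) -> None:
    subscribers.setdefault(event, []).append(handler)

def publish(event: str, payload: dict) -> None:
    # Each handler runs on its own; none knows about the others.
    for handler in subscribers.get(event, []):
        handler(payload)

log = []
subscribe("payment.settled", lambda p: log.append(("ledger", p["id"])))
subscribe("payment.settled", lambda p: log.append(("notify", p["id"])))
subscribe("payment.settled", lambda p: log.append(("fraud",  p["id"])))

publish("payment.settled", {"id": "tx_42"})
print(log)  # all three handlers reacted to the one event
```

With a real broker, each consumer also gets its own retry and dead-letter behavior, which is exactly what payment flows need.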
4. Security & Compliance Layer
Non-negotiable in fintech: encryption in transit and at rest, strong authentication, and complete audit logging.
Also plan for regulatory requirements such as PCI DSS and KYC/AML from day one.
5. API Gateway & Rate Limiting
Your APIs are your product.
Ensure versioning, authentication, rate limiting, and idempotency on every endpoint.
6. Database Strategy
Use a hybrid approach: a relational, ACID-compliant database (e.g., PostgreSQL) for ledger and transaction data, with caches or document stores for everything non-critical.
Never compromise on ACID compliance for financial data.
7. Observability & Monitoring
If you can’t monitor it, you can’t scale it.
Include structured logging, metrics, alerting, and distributed tracing.
- Treating fintech like a regular SaaS app → financial systems require stricter consistency
- Ignoring audit trails → you'll regret this during compliance checks
- Overusing microservices too early → complexity kills speed in early stages
- No fallback for payment failures → always design retry + reconciliation mechanisms
- Weak security thinking → one breach = game over
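The retry side of "retry + reconciliation" can be sketched in a few lines. The key detail is the idempotency key: every retry reuses it, so the payment provider deduplicates and a retried charge can never double-bill (the `charge_fn` callable and backoff values here are illustrative):

```python
import time

def charge_with_retry(charge_fn, idempotency_key: str, attempts: int = 3):
    # Retries reuse the same idempotency key so duplicates are harmless;
    # anything still failing afterwards goes to a reconciliation queue.
    for attempt in range(attempts):
        try:
            return charge_fn(idempotency_key)
        except ConnectionError:
            time.sleep(0.5 * 2 ** attempt)  # exponential backoff
    raise RuntimeError("payment failed: queue for reconciliation")

calls = []
def flaky_provider(key):
    calls.append(key)
    if len(calls) < 2:          # first attempt fails, second succeeds
        raise ConnectionError
    return "charged"

assert charge_with_retry(flaky_provider, "order-42") == "charged"
assert calls == ["order-42", "order-42"]  # same key on every attempt
```

A periodic reconciliation job then compares your ledger against the provider's settlement report and flags anything the retries missed.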
MVP (3–4 months)
Cost: $15,000 – $30,000
Scalable Architecture (6–9 months)
Event-driven system
Advanced security
Compliance-ready backend
Cost: $40,000 – $90,000
Enterprise-Level (12+ months)
Multi-region infra
High availability
Advanced fraud detection
Cost: $100,000+
👉 Try a more detailed estimate here:
https://devqautersinv.com/free-software-development-cost-estimator-tool/
Fintech is not forgiving.
You don’t get multiple chances to fix your backend once money is involved.
The smartest founders invest early in ledger accuracy, security, compliance, and observability.
That’s not a cost.
That’s your competitive advantage.