2026-04-01 23:00:27
Hey Hackers!
Developers don't engage with most marketing — not because they're difficult, but because they've learned to be selective. They've seen products described as "game-changing" that fall apart the moment you try them. They've landed on pages that sound confident but never show how anything actually performs. And the information they genuinely need is usually missing or buried behind a form.
So over time, they get very good at filtering. If something doesn't immediately help them understand how a tool fits into what they're building, it gets ignored. Timing makes this even harder — developers usually come across marketing while they're in the middle of solving something. If it interrupts that process or asks for effort before giving value, it doesn't stand a chance.
That's why so many campaigns look promising on paper but don't translate into real usage.
\
:::tip Learn Why Developers Ignore Most Ads
:::
\
Three things consistently stand out.
The first is hands-on interaction. Developers trust what they can test, not what they're told. A live demo or sandbox beats a feature list every time.
The second is peer-driven content. Developers pay close attention to what other builders create and share — there's a reason GitHub and dev.to have the audiences they do. A tutorial written by someone who actually used your tool carries more weight than anything you publish about yourself.
The third is contextual discovery. They engage when your product shows up where they're already learning — inside a Stack Overflow answer, a README, a well-placed docs reference. Not as an ad, but as a useful thing that happened to appear at the right moment.
What all three have in common is that they help developers solve real problems. You're not promoting a product — you're proving it can get them somewhere faster.
This is why tutorials outperform ads, real use cases outperform claims, and ecosystems outperform campaigns. Start asking "how do developers experience my product?" instead of "how do I promote it?"
\
:::tip Curious about how developers engage with technical content on HackerNoon? Check it out here!
:::
\
It doesn’t feel like marketing; it feels like something they can take part in. Instead of being told what your product does, they get to build with it, test it, and see how it holds up in real use. That’s often the point where interest turns into actual adoption, creating a completely different level of engagement.

Where most companies struggle is what happens next. Short weekend hackathons create a spike of activity, and then everything fades. What actually works is sustained engagement.

That’s why HackerNoon Hackathons run for 6–12 months, turning one-time interaction into ongoing ecosystem growth.

Over that time, your campaign builds momentum across:

– a dedicated hackathon page
– exposure to 4M+ monthly readers
– newsletter distribution to 500K+ subscribers
– social amplification to 1M+ followers
– developer stories that continue to rank and get discovered
– and…

Your product becomes something developers use, not something they’re just told about.
\
:::tip Plan Your HackerNoon Hackathon
:::
See you next week,
Sidra
\
2026-04-01 22:55:11
What if the thing that makes you human, the accent you grew up speaking, the way you pause mid-sentence, the background noise of your neighborhood, is exactly what the most advanced AI systems in the world cannot generate on their own?
\ That is the premise behind Human API, and on April 1 it stopped being a premise available only to developers and became a mobile app anyone can download. Available on iOS and Android, the app lets contributors browse tasks posted by AI agents, complete them using only a smartphone, and receive direct payment for work submitted. The initial tasks are audio-based, centered on the one data modality that has consistently resisted synthetic replication at the quality frontier labs actually need.
\

Human API addresses what the company describes as the "last-mile problem" for autonomous AI agents. While modern agents can reason, plan, and execute tasks in digital environments, many economically valuable activities still require people, including making deliveries, collecting data, and interacting with institutions that are not API-accessible. The mobile app is the first time that problem has been made accessible to a contributor without a desktop or a technical background.
\
Autonomous AI agents in 2026 are genuinely capable of sophisticated reasoning. They can write code, draft contracts, analyze datasets, and coordinate multi-step workflows across software systems. What they consistently cannot do is reach into the physical world. A delivery needs a person. A form that exists only on paper needs a person. A voice that carries the specific cadence of a Lagos neighborhood or a Seoul suburb needs a person. These are not edge cases. They are a structural constraint on what the agent economy can actually accomplish without human participation.
\

Human API was developed to provide a scalable, structured way for AI agents to request and compensate human contributors when automation alone is not viable. The platform positions this approach as foundational infrastructure for agent-driven workflows that require human judgment, presence, or data generation. The key architectural distinction is that Human API is agent-native by design, not a crowdwork platform retrofitted to serve AI systems. Agents make task requests through a standardized interface, contributors fulfill them through the app, and payment flows directly without a managed-services layer in between.
\ Sydney Huang, CEO of Human API, explains,
\
The Human API mobile app makes it possible for anyone with a smartphone to start earning as a contributor to the agent economy. People all over the world can monetize the skills that make them uniquely human, starting with the nuance of speech. In the process, they're supporting a scalable way for AI systems to obtain the kind of nuanced human data they need.
\
The choice to launch with audio tasks is not arbitrary. The audio data segment is expanding as speech recognition, natural language processing, and conversational AI technologies continue to advance, with the growing use of virtual assistants, smart speakers, voice-enabled devices, and call center analytics increasing demand for audio datasets. The problem is that existing audio datasets are systematically biased toward scripted speech from studio environments, disproportionately representing a narrow set of accents and linguistic patterns.
\ Many voice and multimodal models perform poorly in non-English languages, regional accents, bilingual speech, overlapping conversations, and subtle emotional expressions. Human API enables global contributors to provide high-quality, multilingual audio using standard consumer-grade devices, significantly lowering the barrier to entry. A model trained predominantly on clean studio-recorded American English will misunderstand a user in Nairobi, misparse a bilingual conversation in Manila, and fail to detect emotional state in a dialect it has never heard spoken naturally. These are not academic failure modes. They are the reason voice AI products routinely underperform in markets outside North America and Western Europe.
\ The two task types at launch address this directly. Conversational assignments give contributors an open prompt, for example "How was your day?", and let them respond naturally. The output captures spontaneous speech, environmental acoustics, and the speaker's unscripted linguistic patterns. Scripted assignments give contributors dialogue to read aloud, targeting accent and intonation variance across the same text. Both formats are designed to run on a smartphone in a real-world environment, which is exactly the acoustic diversity frontier labs cannot generate synthetically.
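Concretely, the two launch formats differ by one field: whether the contributor is given text to read aloud. A minimal sketch follows; the schema and field names are hypothetical illustrations, since the announcement does not publish Human API's actual task format:

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class AudioTask:
    """One audio assignment posted by an AI agent (hypothetical schema)."""
    task_id: str
    kind: Literal["conversational", "scripted"]
    prompt: str                   # open question, or instructions for the read-aloud
    language: str                 # BCP-47 tag, e.g. "en-NG", "ko-KR"
    max_duration_s: int           # recording length ceiling
    script: Optional[str] = None  # required only for scripted tasks

def validate(task: AudioTask) -> bool:
    """A scripted task must carry the text to read; a conversational one must not."""
    if task.kind == "scripted":
        return task.script is not None
    return task.script is None

convo = AudioTask("t1", "conversational", "How was your day?", "en-NG", 120)
read = AudioTask("t2", "scripted", "Read the dialogue aloud", "ko-KR", 60, script="A: Hello. B: Hi.")
print(validate(convo), validate(read))  # → True True
```

The split matters for training data: conversational tasks capture spontaneous speech and room acoustics, while scripted tasks hold the text constant so accent and intonation become the variable.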
\
The global AI training dataset market was valued at $3.59 billion in 2025 and is projected to grow from $4.44 billion in 2026 to $23.18 billion by 2034, at a CAGR of 22.9%. Inside that market, human-generated data commands a premium over synthetic alternatives precisely because synthetic generation fails at the edge cases that determine whether a model is actually deployable across diverse real-world conditions.
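The quoted growth rate can be sanity-checked from the endpoints themselves:

```python
# Check the projection quoted above: $4.44B in 2026 growing to $23.18B
# by 2034 implies a compound annual growth rate of roughly 22.9%.
start, end, years = 4.44, 23.18, 2034 - 2026
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # → 22.9%
```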
\

Meta invested $15 billion for a 49% stake in Scale AI in June 2025, valuing the firm at more than $29 billion, signaling that proprietary training data is an irreplaceable AI asset. That valuation is a direct measure of how much frontier labs are willing to pay for structured access to high-quality human-generated data at scale. Human API is building the infrastructure layer that routes that demand to individual contributors rather than through a centralised annotation vendor.
\ David Feiock, General Partner at Anagram and an investor in Human API, said: "AI agents are strong at reasoning, but they still face challenges in the last mile, where coordination, data collection, and human judgment are required. The appeal of Human API lies in its treatment of the human layer as infrastructure. It is not a managed service or generalized crowdsourcing, but rather an agent-focused, rights-conscious approach that integrates humans into the system and enables instant payments."
\
The payment model is direct. Contributors create an account, browse available assignments, submit completed work through the app, and receive payment after a review process. There is no agency layer, no points system that converts to cash at a disadvantaged rate, and no minimum threshold that takes weeks to reach. Human API has raised $65 million to date from investors including Placeholder, Polychain, Hack VC, DBA, and Delphi Ventures, which provides the runway to pay contributors immediately rather than batching payouts.
\ Audio is explicitly framed as the starting category rather than the product definition. The roadmap includes computer-usage data, where contributors perform tasks on their devices while generating behavioral datasets that AI systems need to understand how humans navigate software, and real-world execution tasks, where contributors complete physical-world assignments that cannot be digitized. Each expansion adds a new category of work that agents cannot perform alone and creates a new earning opportunity for contributors who happen to have the right capabilities.
\ In 2026, the AI data labeling industry has exploded in scale and complexity. Major AI labs like OpenAI and Anthropic spend vast sums on human-curated data, and a whole ecosystem of providers has emerged to meet this demand. What Human API is betting is that the agent-native request model, where the task specification comes from an AI system rather than a human project manager, is structurally more efficient than the managed-services model that dominates the current data labeling industry. If that bet is right, contributors do not need to sign up with an annotation vendor, pass skill assessments, or wait for project allocations. They open an app, pick a task, and get paid.
\
The Human API mobile launch is the point at which a platform that launched in January 2026 to developer interest becomes a mass-market proposition. The core insight driving it is durable: the gap between what AI agents can do in software environments and what they can do in the physical and social world is not closing through model scaling alone. It closes through structured access to humans. Whether Human API becomes the dominant infrastructure for that access depends on how quickly it can build the contributor network across the linguistic and geographic diversity that makes its data valuable, and whether the agent-native request model proves more efficient than incumbents like Scale AI at the task categories where human judgment is genuinely irreplaceable.
\ The mobile app lowers the enrollment cost to zero for anyone with a smartphone. That is the right starting point.
Don’t forget to like and share the story!
2026-04-01 22:25:04
Five hundred million people hold crypto. Almost none of them use DeFi. MinChi Park, Co-founder and COO of CoinFello, thinks she knows exactly why, and she built the fix from the security layer up.
\ What does it mean to give an AI agent permission to spend your money without giving it your wallet? That question is not theoretical in 2026. As autonomous AI agents move from productivity tools to financial actors, the question of how authority is delegated, scoped, and revoked sits at the center of whether the agent economy is safe for mainstream participation or a new attack surface dressed in a friendly chat window.
\ CoinFello launched publicly at EthCC in Cannes on March 30, emerging from private alpha with a product that inverts the typical AI-crypto pitch. Rather than leading with the interface and treating custody as a backend detail, CoinFello made the custody architecture the headline and the conversational interface the delivery mechanism on top of it. The platform uses a delegation model that allows users to assign limited spending permissions with configurable timeframes and token limits, rather than granting AI agents direct access to wallets.
\ In this exclusive interview for HackerNoon's Behind the Startup series, MinChi Park explains the architecture, the alpha lessons, the B2B pivot that ETHDenver accelerated, and why the real ambition is not a consumer DeFi app but the permissions layer for a future where agents pay other agents onchain.
\ Ishan Pandey: Hi MinChi, welcome to our "Behind the Startup" series. You are shipping a product that sits at the intersection of AI agents, self-custody, and DeFi automation. That is a technically loaded combination. What is the thread connecting those three and what ultimately led you to co-found CoinFello?
\ MinChi Park: The thread is intent without friction. Five hundred million people have opinions about what they want to do with their money. They want exposure to ETH. They want yield on their stables. They want to stop getting liquidated while they are asleep. What they don't want to do is navigate a protocol interface or copy-paste a contract address and hope they got it right.
\ DeFi built rails that could theoretically serve all of those people. But the interface layer never closed the gap between having an intention and executing it. That is not a marketing problem. It is honestly a distribution problem, and it has existed since 2017.
\ AI agents are the thing that can finally close it, but only if you solve the custody problem at the same time. The naive version of an AI-crypto product is: give the agent a private key, let it transact. That works until the agent gets prompt-injected by a malicious webpage, or someone jailbreaks it into sending funds to the wrong address. You have just automated the attack surface.
\ Self-custody and AI autonomy look like they are in tension on the surface, but they are not. The key component that solves it is delegation. You keep your keys, and the agent operates within a scoped permission you granted, one you can revoke at any time. The blast radius of any failure is limited to what you explicitly authorised. Those three things have to be designed together, or you end up with something that is either unsafe or too constrained to be valuable.
\ Ishan Pandey: Most AI-crypto products treat self-custody as a backend detail and lead with the interface. CoinFello inverts that, making custody the headline and the AI layer secondary. What was the strategic logic behind that prioritization?
\ MinChi Park: Most teams make the interface the product because that is what is easiest to demo. You can show a swap happening through a chat window in three minutes. You cannot easily demo a security architecture in a conference presentation.
\ But users don't lose money because the interface was confusing. They lose money because the wallet was compromised, or the agent had too much authority, or the key was stored somewhere it should not have been. The interface failure is annoying. The custody failure is catastrophic and irreversible.
\ The strategic logic is simple: trust is the actual product. Everything else is features on top of it. If you don't solve custody correctly, you are one high-profile incident away from the whole thing collapsing. We have seen that pattern play out with centralised exchanges and with early DeFi bridges.
\ ERC-7710 fine-grained delegation is what lets us make custody the headline without sacrificing usability. The user does not see the underlying mechanism. They see a confirmation that says "CoinFello is requesting permission to manage 0.1 ETH on your behalf." They approve it. Their keys never move. The agent acts within that permission. Revoking it is one action. That is the experience. The custody architecture is doing the work invisibly.
\
What is ERC-7710? Standard Ethereum wallets grant permissions in an all-or-nothing way. ERC-7710 is a newer standard that lets a user create a scoped permission, specifying exactly which tokens, which actions, which chains, and for how long an agent is authorised to act. Think of it as issuing a temporary, purpose-limited power of attorney rather than handing over your house keys.
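The "power of attorney" analogy can be made concrete with a toy model. This is an illustration only: the field names and checks below are assumptions for exposition, not ERC-7710's actual onchain encoding, which represents delegations as signed data enforced by smart contracts:

```python
from dataclasses import dataclass
import time

@dataclass
class Delegation:
    """Conceptual model of an ERC-7710-style scoped permission (field names hypothetical)."""
    token: str          # which asset the agent may touch, e.g. "ETH"
    chain: str          # which network, e.g. "base"
    actions: set        # permitted operations, e.g. {"stake"}
    limit: float        # spending ceiling in token units
    expires_at: float   # unix timestamp after which the grant is void
    revoked: bool = False

def allows(d: Delegation, token: str, chain: str, action: str, amount: float) -> bool:
    """Every request is checked against the scope; anything outside is refused."""
    return (not d.revoked
            and time.time() < d.expires_at
            and token == d.token
            and chain == d.chain
            and action in d.actions
            and amount <= d.limit)

grant = Delegation("ETH", "base", {"stake"}, 0.1, time.time() + 86400)
print(allows(grant, "ETH", "base", "stake", 0.05))  # within scope → True
print(allows(grant, "ETH", "base", "swap", 0.05))   # action not granted → False
grant.revoked = True
print(allows(grant, "ETH", "base", "stake", 0.05))  # revoked → False
```

The design point is that authority is positive and enumerated: anything not named in the grant is denied by default, and revocation is a single state change rather than a key rotation.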
\ Ishan Pandey: CoinFello went from private alpha at ETHDenver to public launch at EthCC. Walk us through what the alpha period actually revealed.
\ MinChi Park: ETHDenver was the sharpest product feedback we have had. BuffiBot, our conference navigation agent, put natural language onchain interactions in front of thousands of real users over three days, in noisy rooms, on bad wifi, and half-distracted. That environment is brutal and honest in a way that no structured user test can replicate.
\ The first thing that surprised us: comprehension was not the barrier. People got it immediately. The natural language interface clicked. What was genuinely unexpected was how multilingual the use case was. Attendees were talking to BuffiBot in Portuguese, in Spanish, in Korean. Not because we built for that specifically, but because when you remove the form-field interface, people think in their native language. That is a real signal. The interface is not just simpler. It is more human.
\ The second thing: developer demand was as strong as consumer demand. The builders at ETHDenver were not just curious about the user-facing product. They were asking how to give their product's agent access to CoinFello's execution layer. That is a different market than we had been primarily building for, and it accelerated how seriously we take the B2B and infrastructure angle.
\ What caused the vision to expand: realising that the interesting unit is not one user with one agent. It is an ecosystem of agents, all capable of discovering and calling each other onchain. ERC-8004 agent registry, A2A protocol support, sub-delegation to other agents. These were not on the original roadmap in the detail they are on now. ETHDenver surfaced the demand for them.
\ The wedge was always the Moltbot user: someone running an AI agent locally, comfortable in a terminal, who wants their agent to be able to do things onchain without holding a private key. That user exists, that community is real, and we found them. EthCC is where we go broader.
\ Ishan Pandey: The 500 million sidelined holders figure is your core market claim. Why has every previous attempt at making DeFi accessible failed to convert that population at scale?
\ MinChi Park: There are two standard explanations for why that population never converted: people aren't interested, or the products are too complex. Both exist. But the framing that "people aren't interested" is mostly a rationalisation by people who built interfaces that require too much fluency to reach that population.
\ Here is the evidence that it is a complexity problem: centralised exchanges work. Coinbase has tens of millions of users. People are clearly willing to hold crypto. The drop-off happens at the DeFi layer, specifically at the point where using a protocol requires understanding what a gas fee is, what slippage tolerance means, and whether the contract you are approving is legitimate. That is not disinterest. That is a competency barrier that was not there with Robinhood.
\ Every previous attempt at making DeFi accessible failed at the same place: they simplified the interface without simplifying the cognitive load. You still had to decide which protocol to use, which pool to deposit into, whether the APY was sustainable or a liquidity incentive that would collapse in three weeks. Simplified interfaces on top of complex decisions are not actually simpler. They are just less honest about how much the user does not know.
\ Natural language changes the equation because it shifts the competency burden. You do not need to know what Aave is. You say "I want to earn yield on my USDC" and the agent handles protocol selection, route construction, and transaction execution. The user's job is to specify the intention, approve the delegation scope, and verify the result. Building a bridge between intention and execution has been the missing piece.
\ Ishan Pandey: The delegation model is doing significant architectural work in your security design. How does that work in practice, and what is the recovery path when an automation executes something a user did not consciously intend?
\ MinChi Park: Practically: you connect your existing MetaMask wallet, or you initialise a new smart account from a prompt. You grant CoinFello a scoped permission — use 0.1 ETH from my wallet for staking on Base. That permission is an ERC-7710 delegation. It specifies the token, the amount ceiling, the chain, and the permitted actions. CoinFello can act within that scope without asking you each time. It cannot act outside it.
\ The recovery path for an unintended execution: first, the blast radius is already limited by design. If you grant a 0.1 ETH delegation and the agent does something you did not consciously intend with that 0.1 ETH, you have not lost your whole wallet. You have lost the allowance you pre-approved. That is the same model as a credit card spending limit.
\ Second, you can revoke the delegation at any time. One action, done.
\ Third, for high-stakes automations, human-in-the-loop confirmation is part of the flow. An automation monitoring your Aave health factor will ask you before it moves, unless you have explicitly told it to act autonomously below a certain threshold. The user controls that dial.
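The three recovery layers above (a ceiling on total spend, one-action revocation, and confirmation above an autonomy threshold) can be sketched as a toy policy. All names and thresholds here are hypothetical, not CoinFello's implementation:

```python
class Allowance:
    """Sketch of the 'credit-card limit' model: cumulative spend is capped by a
    pre-approved ceiling, and large actions escalate to the user (hypothetical)."""

    def __init__(self, ceiling: float, auto_below: float):
        self.ceiling = ceiling        # total the agent may ever move, e.g. 0.1 ETH
        self.auto_below = auto_below  # per-action size it may move without asking
        self.spent = 0.0
        self.revoked = False

    def request(self, amount: float) -> str:
        if self.revoked:
            return "denied: delegation revoked"
        if self.spent + amount > self.ceiling:
            return "denied: would exceed pre-approved ceiling"
        if amount >= self.auto_below:
            return "pending: human confirmation required"
        self.spent += amount
        return "executed"

a = Allowance(ceiling=0.1, auto_below=0.05)
print(a.request(0.02))  # small move → "executed"
print(a.request(0.07))  # large move → "pending: human confirmation required"
a.revoked = True        # one action undoes the grant
print(a.request(0.01))  # → "denied: delegation revoked"
```

The worst case under this policy is bounded by the ceiling the user approved up front, which is the "blast radius" framing used throughout the interview.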
\ The harder question is: what happens when an agent gets prompt-injected into requesting a delegation the user did not intend?
\ That is a real attack surface. The answer is two-layered: the delegation confirmation prompt is human-readable and explicit, so a socially-engineered delegation request should surface a red flag the user can catch; and the scope is always tied to explicit parameters, not open-ended authority. We are not claiming it is impossible to fool a user. We are claiming the architecture contains the damage when it happens.
\
What is prompt injection? It is an attack where malicious text on a webpage or in a document tricks an AI agent into following instructions from the attacker rather than the user. For a crypto agent, a successful prompt injection could mean the agent requests a broader delegation than the user intended, or routes funds to an attacker's address. The scoped delegation model limits the damage: even a fully successful prompt injection can only access what the user pre-approved.
\ Ishan Pandey: There is a meaningful gap between a clean demo and a live multichain environment with gas spikes, failed transactions, slippage, and bridge delays. What does it actually take to make natural language execution reliable enough that a user trusts it with real money?
\ MinChi Park: The demo gap is real, and most teams building in this space underestimate it in the same two ways.
\ The first underestimation: natural language parsing is much harder than it looks in a controlled environment. "Send some USDC to my wallet" requires the agent to know which USDC — there are multiple token contracts — on which chain, to which address, and with what gas budget. That is four ambiguities in a sentence that sounds completely clear to a human. In production, with real users sending real prompts with real-world imprecision, the parsing layer breaks in ways that do not surface in structured testing. We have built a delegation evaluation suite to catch these cases systematically. It is one of the things we are most invested in improving.
\ The second underestimation: execution reliability across multiple chains requires redundancy and fallback logic that takes significant engineering time. Gas spikes on mainnet, bridge delays, RPC failures, slippage on low-liquidity pairs. In a demo environment, the happy path almost always works. In production at scale, you are building for the 10% of cases that do not, because those are the cases that erode user trust.
\ Ishan Pandey: The agent skills integration, enabling Claude Code, Windsurf, and other third-party AI agents to execute blockchain transactions through CoinFello, positions you as infrastructure rather than a consumer product. What future do you see CoinFello enabling in this new agentic economy?
\ MinChi Park: The agent skills integration, so it works with Claude Code, Windsurf, OpenClaw, and any agent runtime that supports the standard, was a deliberate architectural decision. We do not want CoinFello to win by being the only AI agent with a crypto skill. We want to be the execution layer that any agent can call when it needs to do something onchain.
\ The future it enables: an agent economy where financial actions are a primitive, not a specialty. Right now, most AI agents treat blockchain interactions as a hard problem that requires specific integration work. With CoinFello as a standard execution layer, any agent can swap, bridge, stake, and manage delegations through a natural language interface. The integration cost drops to a clawhub install command.
\ At the B2B level: products that want to embed crypto execution for their users do not need to build wallet infrastructure, protocol integrations, and a parsing layer from scratch. They plug into CoinFello. A portfolio tracker whose agent needs to rebalance. A lending product whose agent needs to close positions. That is B2B agent integration, and it is a market we are actively building for now.
\ What I think the agentic economy looks like at maturity: agents paying other agents for services, model inference, and data access using onchain microtransactions. ERC-8004 agent registry for verifiable discovery. x402 for agent-to-agent payment rails. CoinFello's delegation model as the permissions layer that makes all of that composable and safe. The interesting unit is not one agent doing one thing. It is an ecosystem of agents with scoped authority, discoverable onchain, transacting with each other.
\ Ishan Pandey: You are pitching CoinFello as the essential execution layer for the autonomous agent economy. What does the next twelve months look like in concrete terms, and what has to be true about both the market and CoinFello for that vision to materialize?
\ MinChi Park: Concretely: the full public launch at EthCC is the first milestone. The delegation flow is live for all EVM chains, the DCA and automation features are publicly available, and the developer documentation is complete enough that third-party teams can integrate without hand-holding.
\ B2B developer relations is the other major thread. The ETHDenver signal was strong enough that we are moving from reactive to proactive on developer outreach. The products that embed CoinFello for their users represent a larger market than individual Moltbot operators, and the integration is straightforward enough to make that tractable.
\ What has to be true about CoinFello: we have to maintain the trust model. One high-profile security incident would be damaging in a way that feature gaps are not. That means the delegation architecture stays rigorous, the parsing evals catch edge cases before they hit users, and we are honest about what is in production versus what is in progress. The teams that win in this space will be the ones that earn trust through their security model, not through marketing it.
2026-04-01 21:00:22
Fort Lauderdale, FL, April 1, 2026 - Qubic's DOGE mining integration launches today on ASICs, fully separate from the CPUs and GPUs powering Aigarth, Qubic's AI engine. For the first time, the network mines at full power, while running AI at full power, simultaneously, with no compromise between the two.
Qubic, the high-performance Layer 1 network independently verified at 15.52 million transactions per second by CertiK, today launched live Dogecoin mining natively integrated into its network infrastructure. This is not a roadmap item. It is not a test. Miners on the Qubic network can earn DOGE rewards right now, on ASICs running completely separately from the CPUs and GPUs that power Aigarth, Qubic's AI engine. No toggle. No trade-off. Full power to mining and full power to AI, at the same time.
The last time Qubic made this kind of move, it captured more than 51% of the Monero network's total hashrate, mined over 27,000 XMR blocks, generated $3.5M in revenue, and drew coverage from CoinDesk, The Block, and Decrypt. That was Monero, where CPUs and GPUs had to toggle between mining and AI training, never giving 100% to either. This is Dogecoin, running on ASICs that do not touch the AI layer at all. The network is no longer making a choice between mining and intelligence. It is doing both, fully, for the first time.
The timing, April 1, is intentional.
"April 1st is notorious for pulling pranks. Qubic decided to pull a network. We knew what people would think when they saw the date. We leaned into it and launched anyway."
— Stephanie Nickolich, Head of Marketing & Growth, Qubic
Qubic's network operates on Useful Proof of Work, a consensus mechanism that redirects computational energy toward productive tasks, including AI training through Aigarth, Qubic's underlying AI engine. With Monero, CPUs and GPUs had to toggle between mining and AI training, meaning neither received full computational power at any given time. The DOGE integration changes that architecture entirely. DOGE mining runs on ASICs, dedicated hardware that operates independently from the CPUs and GPUs that power Aigarth. For the first time, Qubic's AI infrastructure runs at full capacity while the network simultaneously mines at full capacity. No compromise. No toggle. Both at once.
Monero is being replaced by DOGE, not added alongside it. Throughout this transition, and after it, AI training through Aigarth runs continuously. It never stops. The DOGE integration is what finally allows it to run at full power, because ASICs handle mining independently, freeing CPUs and GPUs entirely for AI.
The mechanism operates as follows: ASICs mine DOGE on dedicated hardware, CPUs and GPUs run Aigarth at full capacity, and revenue from DOGE mining feeds a QUBIC token burn as the ecosystem scales.
The result is a value proposition that did not exist before today: join Qubic, run ASICs for DOGE, and contribute CPU and GPU power to Aigarth's AI infrastructure, earning from the mining network while powering something larger. A QUBIC token burn mechanism activates as the DOGE mining ecosystem scales, connecting mining revenue to a deflationary loop that is the first of its kind in the cryptocurrency industry.
Qubic's DOGE mining integration is not a proof of concept. It is a repeat of a demonstrated capability, built on an architecture that has now been fundamentally upgraded. In its prior Monero network integration, Qubic's compute captured more than 51% of total Monero network hashrate, a feat that drew widespread attention across cryptocurrency communities. But the Monero model had a structural limitation: CPUs and GPUs had to toggle between mining and powering Aigarth, Qubic's AI engine. Neither received full computational power at any given time.
The DOGE integration solves that. ASICs handle DOGE mining on dedicated hardware, completely separate from the AI layer. CPUs and GPUs are freed to run Aigarth at full capacity. Monero phases out as DOGE scales. The end state is the architecture Qubic was always designed to reach.
The deflationary model was proven with Monero. DOGE is where it compounds. Qubic's mining revenue fed a QUBIC token burn mechanism that generated more than $3.5M, proving that external proof-of-work mining could drive real deflationary value back into the network. With DOGE, that same mechanism scales to a network with significantly greater reach, volume, and community depth.
Monero was the proof that this model works at scale. DOGE is the statement that Qubic is not stopping there. The DOGE community is orders of magnitude larger. The market cap is orders of magnitude higher. The infrastructure is already proven.
"The architecture was always designed for this. With Monero, CPUs and GPUs toggled between mining and AI training, never giving full power to either. DOGE changes that. ASICs handle mining independently, and Aigarth runs at full capacity. We proved the model with Monero. DOGE is where it reaches its full potential."
— Joetom, Core Lead Developer, Qubic
Anticipation for this launch has been building across the crypto community for weeks. Discussion of Qubic's DOGE mining integration has spread across X, Reddit, Discord, and Telegram, with miners, DOGE holders, and crypto analysts speculating about the scale of what Qubic's infrastructure would bring to the Dogecoin network. Today, the speculation ends and the data begins.
The Monero integration established a new benchmark for what a single network could accomplish at the infrastructure level. Qubic did not just participate in the Monero mining ecosystem. It dominated it, capturing more than half of the network's total hashrate. For the miners, analysts, and investors who watched that unfold, today's launch carries the same weight, directed at a network with far greater global reach.
A public real-time dashboard is available at launch showing Qubic's live DOGE mining hashrate and cumulative DOGE mined. The dashboard is publicly accessible and updated in real time: live proof of what the network is building.
The dashboard is the story. Miners, analysts, and journalists can watch Qubic's share of the DOGE network hashrate build in real time from Day 1.

Qubic is a decentralized compute network and Layer 1 protocol built to power the future of artificial intelligence. Operating on Useful Proof of Work, Qubic's infrastructure directs computational energy toward Aigarth, its AI engine, while supporting external proof-of-work mining at scale. The DOGE integration marks the first time the network runs mining and AI training simultaneously at full capacity, with no trade-off between the two. With a verified throughput of 15.52 million TPS on its live mainnet, independently audited by CertiK, Qubic is the fastest blockchain ever measured under live conditions. The Intelligent Chain.
Learn more about the Dogecoin mining integration here.
Website: http://qubic.org
X / Twitter: @Qubic
Press Kit: https://qubic.org/pr
Live Dashboard: https://doge-stats.qubic.org/
Name: Stephanie Nickolich
Title: Head of Marketing & Growth, Qubic
Email: [email protected]
2026-04-01 20:45:50
In this article, we will dive deep into actors, nonisolated methods, @MainActor and @GlobalActors, and the concept of actor reentrancy. We will also explore what happens behind the scenes in the Swift Concurrency runtime - including jobs, executors, workers, and schedulers - so you can understand not just how to use these tools, but why they work the way they do.
Whether you’re already using Swift’s async/await features or just starting to explore concurrency, this guide will give you a solid understanding of the mechanisms that keep your concurrent code safe and efficient.
If you’ve spent years working with GCD, you already know the core problem: shared mutable state. When multiple threads can read and write the same data at the same time, you risk data races: inconsistent reads, lost updates, or crashes that only appear under heavy load.
With GCD, we relied on discipline in the form of serial queues or locks. But discipline fails. One forgotten .sync call and your correctness vanishes. Swift Concurrency introduces Actors to make data-race freedom a language-level guarantee.
| Type | Semantics | Thread Safety | Mutation Model |
|----|----|----|----|
| Struct | Value | By-copy safe | Explicit mutating |
| Class | Reference | Unsafe by default | Shared mutable state |
| Actor | Reference | Data-race safe | Serialized access |
Actors sit exactly where classes used to be, but with correctness guarantees.
An actor is a reference type that protects its mutable state through isolation. Unlike a class, you cannot accidentally touch an actor’s internal state from multiple threads.
actor BankStore {
    private var balance: Int = 0

    func deposit(_ amount: Int) {
        balance += amount
    }

    func withdraw(_ amount: Int) -> Bool {
        guard balance >= amount else { return false }
        balance -= amount
        return true
    }
}
Key properties of actors:
- They are reference types, like classes.
- Their mutable state is isolated: it can only be touched from inside the actor.
- Calls are processed one at a time, so all access to that state is serialized.
- Calls from outside the actor are asynchronous and require await.
nonisolated: Opting Out of Isolation
Sometimes you need functionality that doesn’t touch the actor’s state or needs to be callable synchronously. Use the nonisolated keyword for these “pure” utilities.
actor ImageCache {
    nonisolated static let maxItems = 100

    nonisolated func cacheKey(for url: URL) -> String {
        url.absoluteString
    }
}
Rule of thumb: if it reads or writes actor state - it should not be nonisolated.
Think of an actor as having a mailbox:
When you write await store.deposit(50), you aren’t calling a function in the traditional sense. You are sending a message to the actor and suspending your current thread until the actor finishes processing that message. This is why await is mandatory: the actor might be busy with someone else’s request.
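A minimal usage sketch of the mailbox model, with the BankStore actor repeated so the snippet stands alone (the concurrent deposit counts are illustrative):

```swift
actor BankStore {
    private var balance: Int = 0

    func deposit(_ amount: Int) { balance += amount }

    func withdraw(_ amount: Int) -> Bool {
        guard balance >= amount else { return false }
        balance -= amount
        return true
    }

    func currentBalance() -> Int { balance }
}

let store = BankStore()

// Ten concurrent deposits: each one is a message in the actor's mailbox,
// processed strictly one at a time, so no increment is ever lost.
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<10 {
        group.addTask { await store.deposit(50) }
    }
}

let balance = await store.currentBalance() // always 500, regardless of scheduling
```

Run the same workload on a plain class and the final balance becomes nondeterministic; the actor's serialized mailbox is what makes the result stable.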
@MainActor and Other @GlobalActors
When building scalable iOS applications, managing shared state across isolated domains like UI components, network layers, and local caches becomes a complex puzzle. Swift simplifies this with @GlobalActor.
A global actor is essentially a singleton actor. It allows you to isolate state and operations globally without needing to pass an actor reference around your entire dependency graph. The most famous of these is, of course, the @MainActor.
The @MainActor is uniquely tied to the main thread. Anything marked with this attribute is guaranteed to execute on the main thread, making it the bedrock for all UI updates.
@MainActor
final class FlashcardViewModel: ObservableObject {
    @Published var currentCard: Card?

    func loadNextCard() async {
        // Safe to update UI state directly; we are isolated to the MainActor.
        self.currentCard = await fetchCard()
    }
}
However, the power of global actors isn’t limited to the main thread. You can define your own global actors to serialize access to highly contested shared resources, such as a centralized local database or an aggressive retry policy manager.
@globalActor
public actor SyncActor {
    public static let shared = SyncActor()
}

@SyncActor
final class OfflineSyncManager {
    var pendingMutations: [Mutation] = []

    func queue(mutation: Mutation) {
        pendingMutations.append(mutation)
    }
}
By annotating OfflineSyncManager with @SyncActor, you guarantee that all accesses to pendingMutations are serialized on that specific actor’s executor, completely eliminating data races from different parts of your app trying to queue offline changes simultaneously.
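To see the isolation boundary in action, here is a hedged sketch of two call sites; the `Mutation` stub and both caller functions are illustrative, and the types from the example above are repeated so the snippet compiles on its own:

```swift
struct Mutation {}

@globalActor
public actor SyncActor {
    public static let shared = SyncActor()
}

@SyncActor
final class OfflineSyncManager {
    var pendingMutations: [Mutation] = []
    func queue(mutation: Mutation) { pendingMutations.append(mutation) }
}

// From outside the SyncActor domain, the call hops executors: await is required.
func userEdited(_ manager: OfflineSyncManager) async {
    await manager.queue(mutation: Mutation())
}

// Code already isolated to @SyncActor shares the executor and calls synchronously.
@SyncActor
func pendingCount(_ manager: OfflineSyncManager) -> Int {
    manager.pendingMutations.count
}
```

The compiler enforces the boundary in both directions: drop the `await` in `userEdited` and the code will not compile.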
If you’re coming from the world of Grand Central Dispatch (GCD) and DispatchQueue, actors require a fundamental mental shift. A serial dispatch queue executes tasks strictly one after another. If a task is running, nothing else can run on that queue until it finishes.
Swift actors are different: they are reentrant.
Reentrancy means that while an actor guarantees mutual exclusion for synchronous code execution (only one thread can be inside the actor at a time), it explicitly allows other tasks to interleave at suspension points.
When an actor encounters an await, it suspends the current task. Crucially, it also gives up its lock on the executor. During this suspension, the actor is completely free to pick up and execute other pending tasks. Once the awaited operation finishes, the original task is scheduled to resume on the actor when it’s free again.
This design prevents deadlocks. If actors weren’t reentrant, two actors awaiting each other would instantly freeze your application. However, reentrancy introduces its own subtle class of concurrency bugs.
Because the actor unblocks during an await, the state of your actor before the await might not match the state after the await. This is the single biggest trap engineers fall into when adopting Swift Concurrency.
Imagine implementing a session manager that fetches a fresh authentication token. If multiple requests fail and trigger a token refresh simultaneously, you might accidentally fire off multiple network requests if you don’t account for reentrancy.
actor SessionManager {
    private var cachedToken: String?

    func getValidToken() async throws -> String {
        // 1. Check local state
        if let token = cachedToken {
            return token
        }

        // 2. Suspend! The actor is now free to process other calls to `getValidToken()`
        let freshToken = try await performNetworkRefresh()

        // 3. State mutation.
        // DANGER: If another task interleaved during step 2, we might overwrite a valid token,
        // or we just unnecessarily performed multiple network requests.
        self.cachedToken = freshToken
        return freshToken
    }
}
To protect against this, you must rethink how you handle in-flight asynchronous operations. Instead of caching just the result, you often need to cache the Task itself.
actor SessionManager {
    private var cachedToken: String?
    private var refreshTask: Task<String, Error>?

    func getValidToken() async throws -> String {
        if let token = cachedToken { return token }

        // Return the in-flight task if one exists
        if let existingTask = refreshTask {
            return try await existingTask.value
        }

        // Otherwise, create a new task and cache IT immediately
        let task = Task {
            // Clear the in-flight marker whether the refresh succeeds or throws,
            // so a failed refresh doesn't poison every future call.
            defer { self.refreshTask = nil }
            let freshToken = try await performNetworkRefresh()
            self.cachedToken = freshToken
            return freshToken
        }
        self.refreshTask = task
        return try await task.value
    }
}
Always remember: across an await, your actor’s state is completely unguarded.
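A lighter-weight defence is to re-validate state after every suspension point. A sketch, assuming the same shape as the session manager above (the actor name and the stubbed network call are hypothetical):

```swift
actor SessionManagerRecheck {
    private var cachedToken: String?

    func getValidToken() async throws -> String {
        if let token = cachedToken { return token }

        let freshToken = try await performNetworkRefresh()

        // Re-check after the await: another interleaved call may have already
        // cached a token while this task was suspended.
        if let token = cachedToken { return token }

        cachedToken = freshToken
        return freshToken
    }

    // Stub standing in for a real network call.
    private func performNetworkRefresh() async throws -> String { "fresh-token" }
}
```

Note the trade-off: the re-check prevents clobbering an already-cached token, but duplicate network requests can still fire, which is why caching the in-flight Task remains the more complete pattern.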
To truly master structured concurrency, we need to step out of the syntax and into the engine room. Swift’s concurrency model isn’t just syntactic sugar over GCD; it is a completely bespoke, highly optimized runtime built around a cooperative thread pool.
In the Swift runtime, a Job is the fundamental unit of schedulable work. When you write an async function, the compiler breaks your function down into partial tasks or “continuations” split at every await keyword.
Each of these segments is wrapped into a Job. When a task suspends, the current Job finishes. When the awaited result is ready, a new Job is enqueued to resume the remainder of the function. Jobs are lightweight, heavily optimized, and managed entirely by the Swift runtime.
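A rough illustration of how one async function becomes several Jobs; the helper functions are stand-ins, and the comments mark where the compiler would split the function:

```swift
struct User { let id: Int }
struct Avatar { let data: [UInt8] }

// Stand-ins for real async work.
func fetchUser() async -> User { User(id: 1) }
func fetchAvatar(for user: User) async -> Avatar { Avatar(data: []) }

func syncProfile() async -> String {
    // Job 1: runs up to the first await, then suspends.
    let user = await fetchUser()

    // Job 2: enqueued once fetchUser's result is ready; runs to the next await.
    let avatar = await fetchAvatar(for: user)

    // Job 3: the final segment, resumed when fetchAvatar completes.
    return "synced \(avatar.data.count) bytes for user \(user.id)"
}
```

Each segment is cheap to enqueue and resume, which is what lets the runtime juggle thousands of suspended functions on a handful of threads.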
If Jobs are the work, Executors are the environments where the work is allowed to happen. An executor defines the execution semantics for a set of Jobs.
Every actor has a serial executor. This executor acts as a funnel, ensuring that only one Job associated with that actor runs at any given microsecond. When you call an actor method, you are submitting a Job to that actor’s executor.
In the first example, we create a MainQueueExecutor conforming to SerialExecutor. This is particularly useful when you have a legacy codebase heavily dependent on a specific DispatchQueue and you want to wrap that logic into a modern Actor.
final class MainQueueExecutor: SerialExecutor {
    func enqueue(_ job: consuming ExecutorJob) {
        let unownedJob = UnownedJob(job)
        let unownedExecutor = asUnownedSerialExecutor()
        DispatchQueue.main.async {
            unownedJob.runSynchronously(on: unownedExecutor)
        }
    }

    func asUnownedSerialExecutor() -> UnownedSerialExecutor {
        UnownedSerialExecutor(ordinary: self)
    }
}
@globalActor
actor CustomGlobalActor: GlobalActor {
    static let sharedUnownedExecutor = MainQueueExecutor()
    static let shared = CustomGlobalActor()

    nonisolated var unownedExecutor: UnownedSerialExecutor {
        Self.sharedUnownedExecutor.asUnownedSerialExecutor()
    }
}
While a SerialExecutor protects an actor’s state, a TaskExecutor influences the “ambient” environment where a task and its children run. It doesn’t provide serial isolation; it provides a preferred execution location.
final class MainQueueTaskExecutor: TaskExecutor {
    func enqueue(_ job: consuming ExecutorJob) {
        let unownedJob = UnownedJob(job)
        let unownedExecutor = asUnownedTaskExecutor()
        DispatchQueue.main.async {
            unownedJob.runSynchronously(on: unownedExecutor)
        }
    }

    func asUnownedTaskExecutor() -> UnownedTaskExecutor {
        UnownedTaskExecutor(ordinary: self)
    }
}

let executor = MainQueueTaskExecutor()
Task.detached(executorPreference: executor) {
    // TODO: Perform an async operation
}
Executors don’t magically run code; they need CPU threads. This is where Workers come in.
In Swift Concurrency, there is a global, cooperative thread pool. The threads inside this pool are the “workers.” Unlike GCD, which could spawn hundreds of threads, leading to thread explosion and massive memory overhead, the Swift thread pool is generally limited to the number of active CPU cores. However, this isn’t a hard-and-fast rule; there are specific cases where the pool may spawn more threads. We took a deep dive into this behavior in the article Swift Concurrency: Part 1.
Workers ask executors for Jobs. When a worker thread picks up a Job from an executor, it executes it until completion or suspension. Because the number of workers is limited, Swift enforces a strict rule: you must never use blocking APIs (like semaphores or synchronous network calls) inside an async context. If you block a worker thread, you are permanently stealing a core from the concurrency runtime.
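For instance, bridging a callback-based API from an async context should use a continuation, never a semaphore; the legacy API and the `Config` type below are hypothetical:

```swift
import Dispatch

struct Config: Sendable { let timeout: Int }

// Hypothetical legacy, callback-based API.
func legacyLoadConfig(completion: @escaping @Sendable (Config) -> Void) {
    DispatchQueue.global().async { completion(Config(timeout: 30)) }
}

// BAD (don't do this): calling DispatchSemaphore.wait() here would park one of
// the pool's few worker threads until the callback fires, starving the runtime.

// GOOD: the task suspends, and the worker thread is released to run other Jobs.
func loadConfig() async -> Config {
    await withCheckedContinuation { continuation in
        legacyLoadConfig { config in
            continuation.resume(returning: config)
        }
    }
}
```

The continuation turns the callback into an ordinary suspension point, so the cooperative pool never loses a thread while the legacy work is in flight.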
The Scheduler is the invisible conductor orchestrating this entire process. It decides which Jobs sit on which Executors, and which Workers get assigned to process them.
The scheduler is highly priority-aware. When you spawn a Task(priority: .userInitiated), the scheduler ensures the resulting Jobs jump ahead of .background Jobs in the queue. It handles the complex logic of priority inversion avoidance, waking up worker threads, and balancing the load across the CPU.
Swift utilizes different types of executors depending on the context of your code:
- The main executor: backs @MainActor and is tied to the main thread and the main dispatch queue.
- Default actor executors: every actor you create gets its own default serial executor. The runtime dynamically maps this executor to any available worker thread in the pool as needed.
- The global concurrent executor: runs tasks that aren’t isolated to any actor, drawing on the cooperative thread pool.

Understanding that executors exist is one thing; predicting exactly where your code will run is another. When a Job is ready to execute, the Swift runtime evaluates a precise decision tree to route that workload.
Here is, in outline, the decision tree the runtime uses to select an executor:
1. Is the method isolated? (i.e., is it bound to a specific actor?) If the actor provides a custom SerialExecutor, the Job runs there.
2. Otherwise, if the current task has a preferred TaskExecutor, the Job runs on that executor's threads (an actor's default executor can run its serialized Jobs on the preferred executor).
3. Failing both, the Job is scheduled on the global concurrent executor's cooperative thread pool.
This cascading logic ensures that actors maintain their state safety while allowing developers to influence the underlying execution environment when necessary.
The #isolation Macro
When dealing with deep call stacks and complex async boundaries, you might lose track of your current execution context. Swift introduced a brilliant diagnostic tool to solve this (via SE-0420): the #isolation macro.
This macro expands at compile time to the actor isolation of the current context. It evaluates to an (any Actor)? representing the actor you are currently isolated to, or nil if you are executing without actor isolation.
func debugCurrentContext() {
    // Prints the current actor instance (such as MainActor), or "no isolation"
    if let isolation = #isolation {
        print("Isolated to \(isolation)")
    } else {
        print("no isolation")
    }
}
Sprinkling this into your logging infrastructure is invaluable when debugging data races or verifying that a heavy computation isn’t accidentally blocking the @MainActor.
With recent advancements in Swift Evolution (specifically SE-0417 and SE-0392), developers now have the unprecedented ability to provide custom executors. However, to wield this power safely, you must deeply understand the difference between the two primary executor protocols: TaskExecutor and SerialExecutor (the actor executor protocol).
A Task Executor governs the execution environment for a specific Task hierarchy. Crucially, a Task Executor is inherently concurrent. It represents a thread pool or a concurrent queue where multiple jobs can be processed simultaneously. When you assign a preferred Task Executor, you are telling the runtime, “Unless an actor says otherwise, run the asynchronous work for this task pool over here.”
An Actor Executor (which conforms to the SerialExecutor protocol) governs the execution environment for a specific actor instance. Unlike a Task Executor, an Actor Executor is strictly serial. It processes one job at a time, enforcing the mutual exclusion that makes actors safe from data races.
Understanding the concurrent nature of Task Executors and the serial nature of Actor Executors is not just trivia; it is a strict runtime invariant.
If you decide to write a custom executor (for example, wrapping an old C++ thread pool or a specific Grand Central Dispatch queue), you carry the burden of upholding these invariants:
- If you vend a SerialExecutor for an actor, but your underlying implementation accidentally allows concurrent execution, you will break the actor’s state isolation and introduce impossible-to-reproduce data races.
- If you vend a TaskExecutor but back it with a serial queue, you risk starving the cooperative thread pool and introducing unexpected deadlocks across your async task hierarchies.

The compiler trusts you to maintain these semantic guarantees. If you break them, the concurrency model shatters.
Swift Concurrency is more than syntactic sugar for asynchronous code. It is a carefully designed execution model that formalizes how work is scheduled, isolated, and resumed. Actors provide safety guarantees, but understanding reentrancy and executor behavior is what allows engineers to reason about concurrency with confidence.
By understanding these low-level mechanics (when an actor temporarily releases isolation, and how the runtime schedules jobs across worker threads) you can build iOS applications that are not only performant, but also resilient to the subtle concurrency bugs that once plagued asynchronous systems.
2026-04-01 20:42:36
So last week a military drone blew up the AWS data center where my customer's platform runs. The platform serves millions of users across seven countries. I had to spend about a week moving everything from Bahrain to Europe. By hand. Because every single automated migration tool was also broken. Because, you know, the drones.
I run a software consultancy. I've been in tech long enough to have planned for almost every disaster imaginable. Floods, earthquakes, ransomware, that one guy who drops the production database on a Friday afternoon. "Military drone strike on your cloud provider" was never on the list.
And Bahrain is not an isolated case. Right now, data centers in more than ten countries are being targeted or threatened by either Iranian or russian drones. This isn't a regional incident. It's a global pattern.
And yet, here we are. Welcome to DevOps in 2026.
If you've worked in infrastructure long enough, you've imagined the disaster scenarios. An earthquake takes out a data center in Tokyo. A hurricane floods a facility in Virginia. Maybe a biblical-scale power outage somewhere in Texas (actually, that one happens pretty regularly). You build for resilience, you plan your failovers, you sleep slightly less terribly at night.
And I don't say "earthquake" lightly. Exactly a year ago, my wife and I were on the top floor of our skyscraper condo in Bangkok when a 7.7 magnitude earthquake hit. One second I was pushing a commit. The next second I was crawling on the floor. The building was swaying two meters each side, and water from the rooftop pool came crashing through into our living room. I still get flashbacks from that. So yes, I understand natural disasters on a very personal, visceral level. I expected those to be the thing that would eventually force me to move servers under pressure.
What I never rehearsed was: "Your entire AWS region is down because a military drone hit all availability zones in Bahrain."
Yet here we are.
In early March, Iranian drones struck multiple AWS facilities across the UAE and Bahrain. This wasn't some theoretical threat model from a security conference whiteboard. This was the first confirmed military attack on a major hyperscale cloud provider's infrastructure. Banking apps went down. Payment systems collapsed. Delivery platforms across the Gulf went dark. And somewhere in Thailand, my phone started buzzing with messages from a very worried customer in Saudi Arabia.
Here's what you need to understand about the week that followed: every single automated migration tool AWS provides was broken. CloudWatch, the thing that tells you if your servers are even alive? Gone. RDS Snapshots, the thing you use to back up databases before you touch anything? Unavailable. Cross-region transfer? Dead. AMI copies? Nope.
It was like showing up to a house fire and discovering that not only is your fire truck empty, but someone also stole the hydrant.
So I did what any reasonable engineer would do. I rebuilt multiple production environments from scratch, on bare Linux images, in Europe. By hand. For a platform serving millions of users across seven countries. I wrote custom scripts to export, compress, and transfer everything over the public internet (because AWS's own internal backbone between regions was also down). I wrote manual rescue scripts for files that kept failing for days with InternalError. I worked nights because that was often the only window when platform traffic was low enough to safely verify everything.
One week of controlled chaos. And by the end of it, the entire platform was running smoothly from Europe, as if nothing had happened.
But everything had happened.
I could tell this story as a purely technical narrative. Here's the architecture, here's the migration plan, here's the clever script that saved the day. But that would miss the point entirely.
Because here's what my day-to-day actually looks like:
I run a small tech consultancy. We build custom software. We manage cloud infrastructure. We automate businesses with AI workflows. Very normal stuff. And yet somehow, every single person on my team has been touched by war. Not metaphorically. Literally.
I live in Thailand, which recently had skirmishes with Cambodia along the border. My Iranian engineer had to flee Iran with his entire family. One of my coworkers lives in Ukraine, literally in a war zone, delivering code between power cuts because the grid keeps getting hit by Iranian-designed drones. A couple of months ago he went to an immigration office across the border and couldn't come back for days because russians bombed the only bridge on his route home. Another colleague had to evacuate Ukraine with his whole family.
We write code and configure servers. We're not defense contractors. We're not geopolitical analysts. We're developers who just want to ship clean code and go home.
And yet, every week, somewhere on this planet, a conflict reaches through the internet cables and grabs us by the collar.
And now, in what might be the most unexpected geopolitical crossover episode of the decade: Ukraine is protecting Saudi skies.
Let that sink in for a second. The country that has been fighting for its own survival since 2022, that has become the world's foremost expert on shooting down drones because it had no choice, has just signed defense cooperation agreements with Saudi Arabia, Qatar, and the UAE. Over 200 Ukrainian drone-countering specialists are now deployed across the Gulf, helping defend the very region where my customer's servers used to live.
The same drones that forced me to migrate infrastructure out of Bahrain? Ukraine knows those drones intimately. They've been dealing with their Iranian-made cousins, the Shaheds, for years.
So now the country of my colleague who codes between blackouts is also the country protecting the airspace above my customer's business. If you wrote this as fiction, your editor would tell you it's too on the nose.
I can't be the only one. There must be thousands of engineers, sysadmins, CTOs, and DevOps folks out there who have spent the last few years making decisions that no technical manual covers. Moving workloads because of missiles. Rerouting traffic because of sanctions. Keeping systems alive through infrastructure that's being actively targeted.
If you've had to migrate production systems because of armed conflict, I'd love to hear your story.
Twenty years ago, your biggest infrastructure worry was a hard drive failing or a router dropping packets. Ten years ago, it was maybe a ransomware attack. Today, it's a state-sponsored drone strike on your cloud provider's physical data center.
We've entered an era where "disaster recovery" needs to account for actual disasters of the military kind. Where your multi-region strategy isn't just about latency and compliance, it's about geopolitical risk assessment.
The conflicts we see on the news aren't happening "over there" anymore. They're happening inside our dashboards, our uptime monitors, our incident channels. Every single one of us in tech is connected to these events whether we like it or not.
The world got very small, and very complicated, very fast.