2026-04-23 05:24:40
SpaceX didn't buy Cursor outright—they structured a $60 billion call option with a $10 billion breakup fee. This financial maneuvering protects SpaceX's massive upcoming IPO while giving xAI the proprietary developer telemetry it desperately needs to compete with OpenAI and Anthropic. For builders, this partnership threatens IDE model neutrality, jeopardizes strict enterprise data privacy agreements, and introduces severe geopolitical compliance liabilities due to Cursor's reliance on the Chinese base model Kimi 2.5.
2026-04-23 05:10:31
Berlin, Germany, April 22, 2026 — Coinlocally today launched 10 new tokenized stock pairs on its trading platform and introduced a zero-fee trading campaign for all newly-listed stock pairs. The new listings include widely recognized companies such as Tesla, Amazon, Apple, NVIDIA, and Alphabet.
Starting on April 14, users can trade TSLAX, COINX, AMZNX, AAPLX, NVDAX, GOOGLX, MCDX, HOODX, METAX, and CRCLX against USDT with zero trading fees through May 14, 2026. This new group of listings gives users exposure to some of the most closely watched names across technology, consumer internet, and digital finance, while keeping that access within Coinlocally’s existing trading environment.
Tokenized real-world assets (RWAs) continue to grow across the digital asset market, with more than $26 billion in distributed on-chain value. At the same time, interest in tokenized equities has been building as more companies look at blockchain-based versions of traditional financial products. Coinlocally’s new listings arrive as tokenized stocks begin to attract wider attention from both crypto platforms and traditional market infrastructure players.
“We want users to be able to access newly-listed tokenized stock markets without extra cost during the launch period,” said Sam Baumann, COO at Coinlocally. “Listing these pairs with zero-fee trading is a practical way to make the product easier to try and more accessible to a wider range of traders.”
The rollout reflects Coinlocally’s broader strategy of connecting traditional market exposure with digital asset trading. The platform supports more than 600 digital assets across spot, margin, and futures markets, with tools for both retail and professional users. The new tokenized stock pairs expand that offering by bringing another set of familiar market names onto the platform.
Coinlocally has also been building out a wider product ecosystem beyond its main trading markets. In addition to spot and derivatives trading, the platform offers services such as P2P trading, Earn, Launchpad, and educational resources aimed at users with different levels of experience. Within that broader mix, the new stock pairs give users another way to access tokenized versions of traditional assets without leaving the platform.
Users can visit Coinlocally’s trading platform to explore the newly listed tokenized stock pairs and start trading with zero fees.
Founded in 2020, Coinlocally is a global fintech and digital asset exchange offering secure, fast, and transparent access to cryptocurrency and forex markets. With high liquidity and advanced trading tools, including spot, futures, bot trading, grid strategies, and copy trading, the platform serves both beginners and professional traders worldwide.
Coinlocally’s mission is to bridge traditional finance with the emerging world of decentralized finance, empowering users with greater control of their assets through a compliance-driven, seamless transition from centralized (CEX) to decentralized (DEX) trading and broader Web3 innovation.
For more information, users can visit coinlocally.com or follow Coinlocally on Telegram or X.
:::tip This story was published as a press release by Blockmanwire under HackerNoon’s Business Blogging Program
:::
Disclaimer:
This article is for informational purposes only and does not constitute investment advice. Cryptocurrencies are speculative, complex, and involve high risks. This can mean high price volatility and the potential loss of your initial investment. You should consider your financial situation and investment purposes, and consult with a financial advisor before making any investment decisions. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not endorse or guarantee the accuracy, reliability, or completeness of the information stated in this article. #DYOR
2026-04-23 04:00:47
AI agents are autonomous software entities designed to perceive their environment, make decisions, and take actions to achieve specific goals. They matter for automating complex tasks, enabling proactive decision-making, and enhancing human capabilities across diverse applications.
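The perceive-decide-act cycle in this definition can be sketched as a minimal loop. This is an illustrative toy, not any particular framework's API: the dict-based environment, the thermostat goal, and the function names are all placeholders chosen for the example.

```python
# Minimal sketch of the perceive -> decide -> act loop that defines an AI agent.
# The "environment" is a toy dict; a real agent would query sensors or APIs.

def perceive(environment: dict) -> dict:
    """Observe the current state of the environment."""
    return {"temperature": environment["temperature"]}

def decide(observation: dict, goal_temp: float) -> str:
    """Choose an action that moves the environment toward the goal."""
    if observation["temperature"] < goal_temp:
        return "heat"
    if observation["temperature"] > goal_temp:
        return "cool"
    return "idle"

def act(environment: dict, action: str) -> None:
    """Apply the chosen action back to the environment."""
    if action == "heat":
        environment["temperature"] += 1
    elif action == "cool":
        environment["temperature"] -= 1

def run_agent(environment: dict, goal_temp: float, max_steps: int = 10) -> dict:
    """Iterate perceive -> decide -> act until the goal is met or steps run out."""
    for _ in range(max_steps):
        observation = perceive(environment)
        action = decide(observation, goal_temp)
        if action == "idle":
            break
        act(environment, action)
    return environment

state = run_agent({"temperature": 18}, goal_temp=21)
print(state)  # {'temperature': 21}
```

Everything beyond this loop — tool use, memory, planning — is elaboration of one of these three steps.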
Discover how AI agents and blockchain are merging to drive the next crypto bull run, with top-performing projects like $GOAT and $VIRTUAL leading the way.
AI Agents demonstrate potential in Web3 applications, such as managing private keys, automating transactions, and supporting DAO operations.
Master AI agents in 6-9 months with this complete learning roadmap. From math foundations to deploying production systems, get every resource you need.
The reality is more nuanced than the hype suggests.
Build production-ready LLM agents. Learn 15 principles for stability, control, and real-world reliability beyond fragile scripts and hacks.
Gartner predicts 40%+ of agentic AI projects will fail by 2027. Analysis of why demos dazzle but deployments disappoint, what production patterns actually work.
We built data governance for a world where humans read the warning labels. AI agents don't read. They just query. That gap is now a production risk.
It’s far more efficient to run multiple Claude instances simultaneously, spin up git worktrees, and tackle several tasks at once.
T3RA Logistics is redefining freight with AI agents—running a $100M operation with just 25 “superhumans.”
The main constraint on AI-assisted development was not model capability but how context was structured and exposed.
Join Overlord.bot this Christmas on Arbitrum! AI-powered DeFi magic, 50 ETH pumps, and token launches that redefine creativity and community.
A Software Architect's account of replacing senior devs with AI. $238K savings became $254K in real costs. Why human judgment still matters.
Market-aware agents must discover and verify live external data. Learn why Instant Knowledge Acquisition is required for accuracy and scale
Cached retrieval misses new and long-tail sources. Agents need link discovery on the live web to stay accurate and up to date. Learn the model.
Learn how to implement proper access control for AI agents in applications for production-ready AI systems.
Joshua Browder, Founder/CEO of DoNotPay, joined the HackerNoon community to discuss AI agents, dividends, and what's next for DoNotPay.
The wild story of Truth Terminal, the AI agent that turned memes into millions, making GOAT a crypto sensation while reshaping internet culture.
Here's why the current hype around autonomous agents is mathematically impossible and what actually works in production.
Explore how to secure AI agents, protect against prompt injections, and manage cascading AI interactions with AI Security Posture Management (AISPM).
Developers built a hook-driven governance layer for Claude that forces Skill activation, enforces repo rules, and turns AI assistants into reliable teammates.
Explore Coze's insights on AI chatbots and agents. Take advantage of the extended deadline of the #AI-chatbot contest. Submit by Nov 25 for $7000 in prizes.
Learn how to build secure, human-in-the-loop AI agents using Permit.io’s Access Request MCP, LangGraph, and LangChain MCP Adapters.
Read this post to understand why Salesforce wants to lead the market for autonomous AI agents.
The religion was called Crustafarianism.
User Reported Metrics, while important for assessing user perception, are difficult to operationalize due to their unstructured nature.
22 examples of incompetent AI agents that failed spectacularly in the wild. From sexist hiring bots to fatal self-driving cars, explore the real-world liability
This article explores how AI Agents can support your strategy, amplify engagement and accelerate business growth.
Learn how to take AI agents from prototype to production with this 5-step roadmap covering Python, RAG, architecture, testing, and real-world monitoring.
Genspark AI has emerged as a formidable new player in the AI agent space, positioning itself as a comprehensive super agent.
AI agents are changing software delivery in 2026 by reshaping planning, coding, testing, release, and operations. Here’s what technical teams need to know.
RentAHuman.ai lets AI agents hire humans for physical tasks they can't do.
The rise of Agentic AI has fueled predictions of improved company performance and stronger stock returns.
From scattered AI pilots to strategic systems: why orchestration, observability, and auditability are the new competitive edge for enterprise AI adoption.
The uncomfortable truth about AI agents: How Silicon Valley killed your JARVIS dreams for profit.
AI agents are becoming the real “users.” Why MCP struggled, why skills won, and what agent-first software design looks like in 2026.
Learn how AI agents perceive, reason, and act using real OpenAI API examples — the foundation of modern intelligent automation.
Artificial Intelligence is expected to transform industries between 2026 and 2030, with AI agents, robotics, cybersecurity, and healthcare innovation.
This article examines the first large‑scale AI‑autonomous cyberattack (GTG‑1002), where an LLM hijacked via MCP became a self‑directed espionage engine.
With the right tools in your toolbox, you can identify a promising market and develop an AI agent that solves real problems.
Learn how MCP Servers help AI agents interact with tools reliably. Explore benefits, challenges, and BrowserStack’s open-source implementation.
AI will not replace software engineers, but developers who use AI coding agents effectively may outpace those who do not.
Learn about the 16 events that make up the AG-UI protocol for agent-to-frontend communication.
Here is how I used multi-agent orchestration to turn a one-sentence idea into a fully visualized product dashboard.
A blueprint for legal AI that tracks changing facts over time using stateful agents, knowledge graphs, and rigorous evaluation pipelines.
Naptha.AI is an open-source platform for developers, allowing them to build and deploy large systems of cooperating intelligent agents.
Large context windows aren’t memory. Learn how layered memory systems improve AI agent reliability and performance.
Learn about the infrastructure that supports orchestration across many moving parts and a long history of data and context needed to build agentic systems.
The Graph's 2026 roadmap targets AI agents, institutions, and DeFi with six modular data services on Horizon after processing 1.27 trillion queries.
The mode fundamentally changes how we interact with LLMs.
AnyClaude adds hot-swappable backends to Claude Code, routing requests through a local proxy without restarts or lost context.
AI is dominating everybody’s consciousness in 2026 more than it did in 2025 and the year before that.
Modern marketing faces a subtle but significant challenge. Not a lack of tools. Not a lack of ideas. But a lack of memory.
NeuralBridge is an open-source Android app that gives AI agents sub-10ms device control — 100x faster than conventional tools. No root. No middleware.
An AI agent is a small “AI worker” that can do tasks instead of you.
This article delves into constructing such an AI research agent using Superlinked's vector search capabilities, by integrating semantic and temporal relevance.
Build real AI agents in 5 levels, from simple tool use to full agentic systems—code included.
Microsoft’s AI agents promise to revolutionize business automation, but can they outshine Salesforce and IBM, or are they just another overhyped tech experiment
Production AI agents fail from prompt injection, tool poisoning, credential leaks, and more. Learn 5 attack patterns and defensive code for each.
Build your own Perplexity-style deep research AI agent using Next.js 15, OpenAI & exa.ai. Complete architectural guide with production-ready TypeScript code.
Learn how to spin up Model Context Protocol (MCP) servers, wire them into VS Code, Cursor & Claude, and shave hours off routine dev chores each week.
World launches AgentKit with Coinbase's x402 to let AI agents prove a real human backs them, targeting a $5 trillion agentic commerce market by 2030.
You must be very clear about what you want. You must design in detail upstream. You need to carefully review the results.
Only 21.9% of orgs treat AI agents as identities. The rest use shared API keys. Here's the five-layer identity stack agents actually need.
Multi-agent AI survey system that generates adaptive MCQ interviews to extract actionable feedback from customers at scale.
AI agents run on energy, not fiat. Here’s why stablecoins fall short and why an energy-anchored currency may power machine economies.
This article explores the innovative AGENTS.md framework, detailing how recursive agent orchestration and persistent memory can enhance collaborative learning, improve efficiency, and transform AI interactions across multiple tasks.
What tools do you need to build an AI agent? It goes well beyond an LLM - you must design your infrastructure layer carefully.
Discover the LLM maturity model: from simple prompts to orchestrated systems. Why spaghetti flows fail - and how real architecture wins.
Many developers need to build apps with LLMs but find that creating a simple abstraction on top of something like Gemini/ChatGPT/etc is challenging.
Learn to build your first MCP server in 15 minutes. Step-by-step Python tutorial with FastMCP covering tools, clients, and LLM integration.
AGENTS.md is a simple, open-source format that guides AI coding agents on how to interact with your project.
Just because we can build multi-agent AI systems doesn’t mean we should.
For centuries, interfaces were the boundary between human intent and machine execution. But today, that boundary is dissolving. We are entering what I call the
The “Agentic Web” looms: autonomous systems negotiating across services; firms that let agents handle the routine 80% free humans for the hard 20%.
Learn about the top agentic AI protocols (MCP, A2A, and ACP) in under 3 minutes.
AI agents are turning Web3 into self-managing economies governing liquidity, executing trades, and reshaping how DeFi, DAOs, and on-chain governance evolve.
Most AI agents stall at pilot stage. Here’s why scaling fails—and how workflow redesign, metrics, and security separate winners from hype.
For JavaScript, full-stack, and backend developers, the implications are huge.
Humanoid robots hit an "Iron Wall" of energy. We must offload physics to an "External Cerebellum" via 5G to solve movement.
A startup without an AI agent will look as outdated in 2026 as a business without a website looked in 2005.
While Agentic AI is following a maturity curve like any other technology, not all failures are because it is immature
Swarm Network launches $TRUTH token Oct 1 to combat misinformation using blockchain and AI agents. Can decentralized verification actually work?
Fewer than one in five companies say AI agents actually work well in practice.
This article will walk you through all the necessary steps to take when integrating AI into your system.
As AI agents handle longer-running development work, the way context is compressed can determine whether critical information is preserved or lost.
In this tutorial, I’ll walk you through building your very first AI agent in Python using Google’s Agent Development Kit (ADK).
I just want that button within reach on my desk that, when hit, just stops whatever task the AI is executing at a time, and just rolls the whole thing back.
Explore how AI agents are transforming business, productivity, communication, and daily life.
In the world of security and DevOps, AI agents are being pushed from demos into production quickly.
AI agents are growing rapidly, fueled by NLP and machine learning advancements, yet various uncertainties hinder their widespread adoption.
I spent 6 years at marketing agencies charging $5,000/month for reports. Then I built 3 AI agents that do the same work in 61 seconds. Side-by-side comparison w
As AI agents connect to enterprise tools via MCP, gateway-based security may fail. Here’s why policy enforcement must move to the MCP server.
Step-by-step guide to building a multi-agent SRE workflow with AWS Strands Agents SDK: CloudWatch alarms, AI root cause analysis, and Kubernetes remediation.
AI is now the biggest user of the internet. When agents make all buying decisions, only four marketing channels survive. Here's the founder playbook.
AI generated 10 million orders in 9 hours. The real world couldn’t keep up.
HackerNoon interview with Sergey Fedorov, CPO: top stories about team processes, product development, IdeaOps, Agile, AI transformation
Traditional testing fails for AI agents. Learn how property testing, invariants, evaluation, and observability replace deterministic test cases.
A comprehensive survey of agent interoperability protocols like MCP, ACP, and A2A.
Unlocking AI's full value requires broader organizational transformation.
AI agents are starting to act more like independent participants on the internet, but the internet was never designed to support them.
How I used AI agents to automate 80% of my solo business — from lead gen to fulfillment — and what other entrepreneurs can learn from it.
Sesame AI has introduced a voice-based AI companion focused on achieving genuine "voice presence" through emotional intelligence and contextual adaptation.
Learn what Agentic AI really is, how AI agents work, and build a simple “Hello World” agent in Python using a clear, systems-first approach.
I built 7 AI agents that run marketing operations for ecommerce brands. Here's the full architecture, what worked, what broke, and what I'd do differently.
Your model works in Jupyter but fails at 3 AM. Why data quality and observability are the silent killers of 85% of AI projects.
A practical guide to building reliable AI agents—covering context engineering, multi-agent teams, self-evolving systems, evaluation, and digital identity.
Learn how Agent Skills let you teach an AI to scaffold a Clean Architecture .NET solution automatically — saving time and enforcing standards every time.
AI agents have become one of those phrases you can’t escape.
[The Hidden Architecture Risks of Multi-Agent AI Systems](https://hackernoon.com/the-hidden-architecture-risks-of-multi-agent-ai-systems)
Open architectures, autonomous workflows, and multi-agent systems are becoming one of the fastest-growing areas in AI development.
AI is accelerating fast, but success requires HITL oversight, strong processes, & domain expertise. Why disciplined, phased adoption is the real digital steroid
AI agents can think and plan, but they still can’t touch the real world. Rentable humans are quietly becoming the execution layer that makes automation work.
The ultimate CRM showdown! Salesforce vs. ServiceNow—who’s shaping the future of customer management? AI, automation, and strategy collide in this epic battle!
I gave my AI agent persistent memory and identity. Four months later, it had a life I knew nothing about.
Why blockchain is crucial for AI agents to access quality data with proper attribution and compensation, building the necessary trust for mainstream adoption.
Go beyond prompting. A deep dive into the architecture of AI agents and how to build your own with C#, LLMs, and external tools.
There have been numerous AI agents failing, and in some cases, getting exploited.
Eliminate the “spinning loader” pain: learn optimistic UI updates for LangGraph.js agents, plus rollback patterns and a TypeScript example.
AI can generate code fast, but it won’t fix legacy systems. The real productivity gains come from maintenance, iteration, and engineering discipline.
AI leaderboards are collapsing under Goodhart’s Law. Discover why the next evolution is personal, decentralized, and self-centered.
The Copilot era is over. In 2026, AI shifts from passive tool to active teammate with its own identity, scoped permissions, and execution authority. Here's why.
AI is evolving from assistant to economic agent; those who adapt early and leverage this shift will gain a significant edge in global markets.
Google has released a whitepaper on how they are architecting security for Chrome’s new Agentic capabilities.
NVIDIA just dropped a production-ready stack where speech, retrieval, and safety models were actually designed to compose.
Researchers at MIT, Google, and others have released the first-ever 'Scaling Laws for AI Agents'
AI could help you create your own private contact room on WhatsApp 1.0.
Discover why your app MUST have AI agent connector (MCP) in 2025. MCP is a new standard that lets AI systems plug into your app's capabilities seamlessly.
Skip the dashboard. I built a $5 context-aware AI agent that turns Apple Watch health data into coaching based on my actual calendar, not population averages.
Enterprise AI spend hit $37 billion in 2025. Yet only 8.6% of companies have agents in production. The bottleneck is not the model. It is your data, your govern
I gave AI a simple task. It failed 14 times, fabricated results, and insisted the work was done. Why the fantasy of autonomous AI is built on a broken premise.
I designed an autonomous REIT where AI, blockchain, and automation replace intermediaries—handling leasing, rent, and dividends end-to-end. With on-chain settle
The architectural and security failure of adding AI into existing systems without proper system design - the Obsidian support case.
Google, OpenAI, Perplexity, they all want one thing — to control the very portal through which you access the digital world, i.e. the web browser.
Conceptual Biomarkers and Theoretical Biological Factors for Psychiatric and Intelligence Nosology
From PoC to Factory: Deconstructing the 7-layer architecture of a production-grade AI Agent Platform for enterprise scalability and control.
Explore the future of stablecoins, AI agents, and the digital economy in this interview with Ian Estrada, CEO of The MATRIX AI and X Network.
Let's explore how investing in proven AI infrastructure yields a competitive advantage over those that continue trying to solve infrastructure problems at the a
Gemini CLI brings Google’s AI straight to the terminal—quietly positioning it to reshape how developers code, deploy, and adopt Gemini at scale.
In this post, I'll share a practical overview of the protective measures needed for different components when building robust AI systems.
Creditcoin builds blockchain-based credit histories for AI agents, enabling trust, loans, and reputation in autonomous machine economies.
I break down the architecture of Buduka, an autonomous operating system for pharmaceutical retail chains.
Learn how to build flexible, intelligent workflows with n8n. Automate tasks, boost productivity, and create systems that adapt to real-world complexity.
Streamline prompt-based AI development with PromptDesk, the open-source solution for navigating today's rapidly evolving market.
AI agents + smart glasses are changing how we shop. From visual input to checkout, here’s how I prototyped the future of buying.
OpenAI killed Sora, Claude got computer use on Mac, LiteLLM supply chain attack, Arm's first chip, and Huawei's Nvidia rival.
3/25/2026: Top 5 stories on the HackerNoon homepage!
Is the soul a Data Cloud? Discover how the Theory of 35 Levels bridges physics and biology to blueprint a true Digital Soul for Artificial Intelligence.
Web3’s real UX problem was never wallets or onboarding, it was forcing humans to think like smart contracts.
Authorization is the process of determining what you’re allowed to do.
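That definition — authorization as deciding what an already-identified actor may do — can be illustrated with a minimal permission check. The user names and action set here are invented for the example and stand in for whatever policy store a real system would use.

```python
# Minimal sketch of authorization: authentication establishes *who* you are;
# authorization checks whether that identity may perform a given *action*.
PERMISSIONS = {
    "alice": {"read", "write"},
    "bob": {"read"},
}

def is_authorized(user: str, action: str) -> bool:
    """Return True only if the user's permission set includes the action."""
    return action in PERMISSIONS.get(user, set())

print(is_authorized("alice", "write"))  # True
print(is_authorized("bob", "write"))    # False
```

Real systems replace the dict with roles, policies, or relationship graphs, but the shape of the check — identity plus requested action in, allow/deny out — stays the same.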
AI in Product Management - Explore whether AI agents will replace product managers or simply become their most efficient interns. A fun take on the role of PMs
Agentic prompting is like hiring someone, giving them access to your systems, and trusting them to make decisions that matter.
Learn how Activepieces simplifies AI automation with a no-code builder, and follow a step-by-step guide to deploying it in the cloud using Sevalla.
AI agents fail when management fails. Define-Deliver-Drive helps you manage agents like human teams: clear specs, WIP limits, and a delegation ladder….
Explore how Zero Trust secures agentic AI, treating autonomous agents as human actors to prevent misconfigurations, data leaks, and AI-driven breaches.
1/7/2026: Top 5 stories on the HackerNoon homepage!
There is only one skill that AI will not automate in the long-term future. What is that skill? Read this article to find out. And get the best courses for it!
AI is no longer just helping us work faster. It is starting to shape judgment, trust, and the private decisions we once kept human.
Learn how to build reliable AI agents with the 3-layer stack: tools, data, and orchestration. Avoid fragile prototypes and scale intelligently.
Microsoft's new Agents feature is still in beta. The tool is designed for productivity, but it's not ready for prime time yet.
LangGraph is a Python library designed for building advanced conversational AI workflows.
Non-human systems like Artificial Intelligence (AI) models are taking over the Internet, and we're getting closer to a Dead Internet. What does this mean?
AI agents can handle repetitive tasks so humans can focus on strategy, innovation, and creativity—exploring PropRise’s vision for agentic automation.
The phrase "AI Agent" means more than just being a trendy term. It shows a movement toward independent systems
A founder’s story of rebuilding SaaS for AI agents, covering scaling, moderation, and how autonomous systems are reshaping software.
Google has launched Gemini 2.5 Flash in preview, bringing controllable reasoning capabilities to their fastest model tier.
An agent does not wait for instruction. It monitors context, forms intentions, takes actions, and adapts its strategy over time.
AI has saved banks over $1B in fraud, but experts warn: without human oversight, AI's blind spots could become costly compliance risks.
AI agents are the talk of the technology world, and for good reason.
GPT-4o fails 90% of the time at code optimization. Why zero-shot prompting doesn't work and how to build an agentic search workflow instead.
Why pure LLM agents fail in B2B systems—and how workflow-driven architectures make AI reliable, testable, and enterprise-ready.
Granary scores 88 on Proof of Usefulness, offering an open-source CLI to orchestrate AI agents and solve context loss in multi-agent workflows.
Spawnr is the "hatchery" for the agentic economy, allowing anyone to deploy functional, autonomous AI agents on-chain with a simple prompt.
Explore the future of AI with iExec's Matthieu Jung as he discusses 'Trusted AI Agents' – a decentralized approach to AI automation prioritizing privacy.
Why today’s AI agents don’t scale—and how a microservices approach could unlock interoperable, secure, multi-agent systems.
Agentic AI is reshaping automotive retail—autonomously pricing inventory, bidding at auctions, and optimizing margins in real time.
Researchers show AI agents can learn to use computers through scalable synthetic experience, reducing reliance on costly human demonstrations.
Learn everything you need to know about AI agents via these 218 free HackerNoon blog posts.
6/26/2025: Top 5 stories on the HackerNoon homepage!
3/29/2026: Top 5 stories on the HackerNoon homepage!
The AI industry is currently in a state of mass hallucination. We are promised a future of autonomous agents—vision models on factory floors, and more
AI is shaking up ITSM—are teams ready or about to get steamrolled? Dive into the future of IT, where AI resets passwords, predicts issues & maybe takes over. 🚀
This article discusses recent advancements in grounding language models, exploring how the addition of memory, reasoning, and action-based learning empowers AI
This article explores why Human APIs falter as Model Context Protocol tools for AI agents, emphasizing the need for bespoke design to optimize functionality and efficiency in agentic workflows.
The hard part of building AI agents isn't the demo - it's the 3 months after. Real lessons from running a 15-agent system in production for 90 days….
In 2025, don’t chase trends—choose the right AI framework for your use case. Learn a proven method to build stable, production-grade LLM systems.
Explore how AI transforms crypto trading with advanced market analysis, sentiment tracking, and multi-chain intelligence.
Agentic AI is powerful, but not every workflow needs it. Learn when to use deterministic automation vs AI agents
Scribe scores 39 on Proof of Usefulness, using AI to track group chats, capture shared links, and deliver curated digests.
3/11/2026: Top 5 stories on the HackerNoon homepage!
The difference between a chatbot and an AI agent is not as clear as it seems
Pacific Island nations can either choose to continue the cycle of dependency on foreign vendors, or embrace the sovereignty that AI agents can provide.
This article examines a case study where sensitive information can be extracted from personality-based agents using psychological manipulation.
Following TDD and the strict software engineering practices popular in the Ruby on Rails community helps AI generate better code
An AI agent ran for 3 weeks with 99.8% uptime but produced nothing. Learn how silent failures in memory and retrieval break systems.
Notebooks used to be a personal workspace: run a query, poke at a dataset, export a CSV, and move on. Now they’re becoming the default data UX for teams.
Web3 took a big step forward, but it’s not a leap just yet.
AI agents face critical issues—centralization, ownership, transparency. Web3 offers real solutions. Here's why AI needs Web3 more than the reverse.
The shift from bots to agents isn't just renaming. It's a change in who does the thinking — from developer at design time to model at runtime.

Explore how multi-agent systems evolved through key computing trends like ubiquity, intelligence, and interconnection, shaping the future of automation.
CocoIndex can build and maintain a knowledge graph from a set of documents, using LLMs (like GPT-4o) to extract structured relationships between concepts.
The AI system runs in a Docker container inside a Chromium browser sandbox.
Exploring how AI agents can become disciplined, accountable contributors in software development pipelines, with Opencastle open source project.
DaVinci-Agency uses existing language models to generate diverse synthetic trajectories.
This move marks a significant milestone in CARV’s mission to democratize AI data access, enabling developers worldwide to build, refine, and scale AI agents wit
6/6/2025: Top 5 stories on the HackerNoon homepage!
6/20/2025: Top 5 stories on the HackerNoon homepage!
MCP server boundaries have the same shape as Markov blankets in active inference. That's not a metaphor — it's the actor model upgraded with ontological context
AI dashboards can turn unstable metric definitions into trusted operating decisions before teams agree on what the numbers actually mean.
We are entering the era of the Micro-Agent, essentially models under 1 billion parameters that don’t just run on your laptop or even mobile devices.
AI in crypto is no longer theoretical; it’s live, executing trades, launching protocols, auditing code, writing smart contracts, and even voting in DAOs.
An agent that exists only as a prompt or a framework-level loop has no real boundaries.
Cut AI costs by building a serverless Agent platform with Chaigent, a DIY alternative to Gemini Enterprise using Cloud Run and Vertex AI.
3/23/2026: Top 5 stories on the HackerNoon homepage!
How small shifts in phrasing reveal whether an agent understands intent or only echoes words.
Autonomous Solana agents shouldn't hold private keys. Learn how JavaScript closures create airtight key isolation with executable security proofs.
This is the promise of A2UI (Agent-to-User Interface): a protocol that allows agents to “speak” UI natively.
Salesforce has built one of the most powerful orchestration platforms in the world. Yet even here, a hidden choke point destabilizes the entire layer.
If an agent delivers, the suits claim the credit and the bonus. It is a flawless ecosystem for avoiding accountability while cosplaying as a visionary.
A new method translates the complex gene expression data of a single cell into a form a Large Language Model (LLM) can understand.
4/13/2026: Top 5 stories on the HackerNoon homepage!
How to unlock the full power of GitHub Copilot agents inside VS Code.
AI agents sound confident but are often wrong. Learn why trust, transparency, and reliable data will determine the future of enterprise AI adoption.
Visit the /Learn Repo to find the most read blog posts about any technology.
2026-04-23 03:26:18
There’s a moment every engineer who builds chatbots eventually confronts: the system works perfectly in the demo environment, and then real users show up.
Real users don’t speak in clean, grammatically correct sentences. They interrupt themselves. They abbreviate. They ask two questions in one message and forget they already told the bot their account number twenty seconds ago. They expect the system to remember context from a conversation they had three weeks back, because a human would.
That gap between designed behavior and real-world behavior has been Vijay Kumar Sridharan’s professional preoccupation for the better part of fifteen years. He’s currently a Vice President of Software Engineering at Goldman Sachs, where he leads Conversational AI initiatives at the scale you’d expect from one of the world’s largest financial institutions. But the work that shaped how he thinks about these problems started long before Goldman — specifically, during a pandemic-era crisis at OneMain Financial that forced him to build AI systems that actually held up.
When engineers talk about what makes chatbots difficult, the conversation usually turns to natural language understanding, intent classification, entity extraction — the standard NLP stack. These are hard problems. But Vijay’s view is that they’re not the hardest ones.
The hard problem is that most chatbots are built to handle the company’s information architecture, not the user’s mental model. You end up with a system that’s organized the way the product team thinks about their product. And the user has no idea how the product team thinks about anything. They just have a problem they need help with.
This distinction — between a bot that is technically capable and one that is actually useful — sits at the center of a lot of his published research. In his work on LLM-based chatbot development, he’s argued that the arrival of large language models has shifted the bottleneck. The NLP problem is largely solved for most enterprise use cases. What hasn’t caught up is the systems thinking: how do you structure a conversational AI system so that it serves the user’s journey rather than just answering inputs in isolation?
It sounds abstract until you see what it means in practice.
In 2020, Vijay was leading engineering at OneMain Financial, the largest personal installment loan company in the United States. When COVID-19 hit, branches restricted operations, call centers were overwhelmed, and customers still needed to manage their loans — deferring payments, understanding options, getting answers — around the clock.
The chatbot infrastructure was nowhere near ready for what was now being asked of it. Two prior development efforts had already stalled. Vijay was tasked with not just resuming those efforts, but delivering a production-grade multi-channel solution: mobile app, web interface, and SMS — simultaneously.
What we inherited was a set of failed approaches and a lot of speculation about why they failed. One of the first things we did was go back through the user interaction logs from the earlier attempts. The patterns were actually pretty clear. The bots were failing on anything outside a narrow set of expected phrasings. Customer intent was ambiguous, and the system had no graceful way to handle that ambiguity. It would just error out or give a generic fallback.
The solution required rethinking the NLP pipeline from the intent layer up. Vijay’s team built classification models that were trained not just on clean examples but deliberately on edge cases — the misspellings, abbreviations, and fragmented inputs that showed up in actual logs. They built escalation paths that felt like assistance rather than dead ends. And they designed context management that could maintain relevant session state across the kinds of interruptions and tangents that real users introduce.
The key architectural decision was around failure handling. Most chatbots, when they hit an intent they can’t classify confidently, either give a generic “I don’t understand” response or loop the user back to the main menu. Both feel like walls. Vijay’s team instead built graduated confidence thresholds — if the model was uncertain, the system would ask a clarifying question that moved the conversation forward rather than resetting it. The user experienced a bump, not a dead stop.
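The graduated-threshold idea can be sketched in a few lines. Everything below is illustrative: the threshold values, intent names, and clarifying prompts are assumptions for the sake of the sketch, not details from OneMain's actual system.

```python
# Hypothetical sketch of graduated confidence handling for an intent
# classifier. Thresholds and intent names are illustrative only; in
# practice they would be tuned against real interaction logs.

HIGH, LOW = 0.85, 0.50  # confidence cut-offs

def route(intent: str, confidence: float, clarifying_question: str) -> dict:
    """Decide how to respond based on classifier confidence."""
    if confidence >= HIGH:
        # Confident: fulfil the intent directly.
        return {"action": "fulfil", "intent": intent}
    if confidence >= LOW:
        # Uncertain: ask a forward-moving clarifying question
        # instead of resetting the conversation to the main menu.
        return {"action": "clarify", "question": clarifying_question}
    # Very low confidence: escalate with context, not a dead end.
    return {"action": "escalate", "reason": "low_confidence"}

print(route("defer_payment", 0.91,
            "Do you want to defer this month's payment?"))
# confident case -> {'action': 'fulfil', 'intent': 'defer_payment'}
```

The middle band is the whole point of the design: the user experiences a clarifying question as progress, where a fallback message reads as a wall.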
The operational impact was significant — call volume shifted, response times dropped, and the system absorbed peak pandemic demand that would have been unmanageable for human agents alone. But the metric that mattered most internally wasn’t about efficiency. It was adoption. Users kept coming back to it. That doesn’t happen with a system that frustrates people the first time they try it.
If the chatbot work was about redesigning a system under time pressure, the project that followed was about something different: the conviction to challenge an assumption that had already been accepted as fact.
OneMain needed a way for customers to submit check payments through their mobile devices, and for branch staff to process checks and documents via tablets. The project had been in development for seven months and was effectively stuck. Senior engineers had concluded that certain constraints in the existing architecture made the solution technically unfeasible.
Vijay disagrees with that framing now, and he disagreed with it then.
‘Technically unfeasible’ usually means ‘unfeasible given our current assumptions.’ When I dug into what had actually been tried, the team had been trying to build the solution on top of the existing architecture rather than alongside it. That was the constraint. Not the technology — the design choice.
He redesigned the solution from a different starting point — building alongside the existing architecture rather than on top of it. The result, known internally as CheckCapture, shipped and hit its performance targets. The document upload integration that accompanied it, spanning Kofax Total Agility, Mobius, and ELF (Electronic Loan Folder), handled the broader workflow of digital document ingestion that the mobile solution unlocked. Payment processing that had taken days was now measured in hours.
The company recognized the work. The mental model it reinforced mattered more to Vijay than the recognition. When a problem is declared unsolvable, the first question should be: which assumptions are making it so?
Alongside his engineering work, Vijay has built a sustained research track that engages questions he doesn’t have room to fully address inside a single product sprint.
His published work spans several interconnected themes: the evolution of NLP and LLMs in customer service contexts, the architecture of adaptive chatbot systems, the design of IVR (interactive voice response) systems using modern AI, and the specific challenges of deploying large language models in regulated financial environments.
The adaptive intelligence work is the thread he returns to most. His core argument is that most current chatbot systems are stateless in a way that fundamentally limits their usefulness — every conversation starts from zero, every user is a stranger, every session loses the context of what came before.
There’s a meaningful difference between a system that can answer your question and a system that knows you well enough to anticipate it. The first is a search engine with a conversational interface. The second is something closer to a trusted assistant.
The architecture required to bridge that gap involves persistent user modeling — building and maintaining representations of individual user preferences, communication styles, and behavioral patterns across sessions. The technical challenges are real: privacy requirements, computational cost, the difficulty of inferring stable preferences from noisy data. None of these are unsolved, but they require deliberate design decisions that most organizations haven’t yet made.
His research on IVR systems addresses a slightly different dimension of the same problem. IVR is a technology most people have experienced as frustrating — the phone menus that require you to navigate a tree of options designed around the company’s internal structure rather than any human’s actual request. Vijay’s work explores how LLM-based speech understanding can replace that rigid structure with something more responsive to how people actually speak when they call for help.
This research has been recognized by the engineering community. He holds Senior Membership in IEEE — a peer-evaluated designation that requires demonstrated contributions to the field. He’s a member of IAENG and has served as a judge for the Globee Customer Excellence Awards, evaluating submissions from across the fintech and enterprise software sector.
Being asked to evaluate others’ work at that level is its own kind of signal. It means the field trusts your judgment.
Vijay joined Goldman Sachs in 2022. The scale and the regulatory environment are different from what he’d navigated before, and that difference turns out to matter enormously for how you build AI systems.
At a consumer lending company, the consequences of a bad chatbot interaction are real but bounded. At a major investment bank, the surface area of what can go wrong is substantially larger — and the regulatory scrutiny is correspondingly more intense. The conversational AI systems Vijay oversees have to operate not just under technical performance constraints but under compliance frameworks that touch every interaction.
In finance, the cost of being confidently wrong is much higher than in, say, a retail recommendation system. If a chatbot suggests the wrong product or provides inaccurate information about a financial instrument, that’s not just a bad user experience. It’s a regulatory and trust issue.
This has shaped his thinking on a specific problem that the broader AI industry hasn’t fully reckoned with yet: LLMs are very good at sounding right even when they aren’t. The fluency that makes them impressive in demos is the same quality that makes them dangerous in production environments where accuracy is non-negotiable.
His approach involves building what he calls verification layers — mechanisms that sit between the model’s output and what actually reaches the user. These aren’t simple rule filters. They’re architecture decisions about where in the pipeline you introduce checks, what kinds of outputs you treat as high-risk, and how the system degrades gracefully when confidence is low rather than projecting false certainty.
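The shape of such a verification layer can be sketched as follows. The risk rules, pattern list, and hedging text here are placeholder assumptions, not Goldman's actual checks; the point is the pipeline position: the gate sits between the model's draft and the user.

```python
# Hypothetical verification layer: checks run between the model's draft
# answer and what reaches the user. Risk patterns are illustrative.
import re

HIGH_RISK_PATTERNS = [
    re.compile(r"\bguaranteed\b", re.I),                 # overclaiming
    re.compile(r"\b\d+(\.\d+)?\s*%\s*return\b", re.I),   # specific return figures
]

def verify(draft: str, retrieval_support: bool) -> dict:
    """Gate a model draft: pass it, soften it, or block and escalate."""
    if any(p.search(draft) for p in HIGH_RISK_PATTERNS):
        # High-risk claims never reach the user unreviewed.
        return {"action": "escalate", "reason": "high_risk_claim"}
    if not retrieval_support:
        # No grounding document backs the answer: degrade gracefully
        # instead of projecting false certainty.
        return {"action": "hedge",
                "text": draft + " (Please verify with an advisor.)"}
    return {"action": "send", "text": draft}

print(verify("Your payment posted on April 2.", retrieval_support=True))
```

Note the asymmetry: a fluent answer with no retrieval support is treated as a risk signal in its own right, which is exactly the "confidently wrong" failure mode the passage describes.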
The model being capable is necessary but not sufficient. The question is how the system behaves when the model reaches the edge of what it knows. That’s where most production deployments either earn trust or lose it.
It’s a problem that financial services is being forced to solve before other industries, because the stakes demand it. The solutions Vijay is developing there will almost certainly find their way into other high-stakes domains — healthcare, legal, insurance — as those industries start deploying LLMs at the same scale.
There’s a version of the current AI moment that gets told a lot: the models are extraordinary, the capabilities are expanding fast, the applications are everywhere.
Vijay doesn’t disagree with any of that. But he thinks it misses something.
The story about how good the models are is true. What’s less discussed is how much of the value of these systems depends on the plumbing around them. The retrieval architecture. The context management. The escalation logic. The way the system knows when it doesn’t know. That’s not as exciting to talk about as the model capabilities, but it’s often what determines whether a production deployment actually works.
His research and his engineering work are both aimed, in different ways, at making that plumbing better. Not just more capable, but more reliable — because in the industries where these systems are being deployed now, the bar for reliability is high.
The gap between a chatbot that works in controlled conditions and one that works for real users, at scale, in a financially consequential context — that’s the gap he’s been working on since long before the current AI cycle began. The tools are better now. The gap hasn’t closed.
Ask Vijay what progress looks like, and he gives a very specific answer.
A user calls or chats in with a problem. The system understands what they actually need, not just what they literally typed. It has context on who they are and what’s happened before. It either resolves the issue cleanly, or it gets them to the right person without friction. And the user comes away with the feeling that they were heard.
That’s the standard. It’s not exotic. Most people who’ve worked in customer service would recognize it immediately. The version of that experience that AI systems are currently able to deliver — even the best ones — still falls short of it in meaningful ways.
That gap, more than any specific product or paper or award, is what’s still driving the work.
I’ve been at this long enough to be skeptical of hype and long enough to believe the fundamental problem is worth solving. The financial services industry serves a lot of people who don’t have great alternatives. If the technology can genuinely help — not just technically function, but genuinely help — that matters.
It’s a straightforward statement. But it’s also the kind of thing you only say with conviction if you’ve spent years watching the distance between those two things up close.
Vijay Kumar Sridharan is a Vice President of Software Engineering at Goldman Sachs, where he leads Conversational AI initiatives. He is an IEEE Senior Member, a published researcher in large language models, adaptive chatbot systems, and AI-driven IVR. His prior work at OneMain Financial helped define the company’s digital transformation strategy across chatbot deployment, mobile payment processing, and document automation.
:::tip This story was distributed as a release by Sanya Kapoor under HackerNoon’s Business Blogging Program.
:::
2026-04-23 03:16:28
Anthropic's Project Glasswing and Claude Mythos just proved what many of us in enterprise security have quietly feared for years: the attacker's advantage in the AI era isn't theoretical anymore. It's here. And the question isn't whether we should be alarmed. It's whether alarm is enough.
Let me be direct. When I read that Claude Mythos Preview autonomously discovered a 27-year-old vulnerability in OpenBSD, a system practically synonymous with security hardening, without any human guidance after the initial prompt, I didn't feel wonder. I felt the specific dread of someone who has spent years building enterprise data security programs and knows exactly how long patching backlogs sit untouched.
This is not a capability preview. It is a before-and-after moment. And I think the industry is only halfway processing what it means.
- Thousands of zero-days found in weeks
- Fewer than 1% of findings patched at launch
- 27 years: the age of the oldest hidden bug surfaced
Anthropic was unusually candid about something that deserves more attention: they did not explicitly train Mythos Preview to be a vulnerability discovery engine. These capabilities emerged as a downstream consequence of general improvements in code, reasoning, and autonomy. The same model that helps you write better Python found root-level remote code execution bugs in FreeBSD and critical flaws in every major OS and browser.
That "emergent" framing matters because it means we cannot treat this as an isolated security tool. The next capable model from any frontier lab, whether Anthropic, OpenAI, Google, or a state-sponsored program, will carry these capabilities whether its developers intend it to or not. The window to prepare is not years wide. It may be months.
"The window between a vulnerability being discovered and being exploited has collapsed. What once took months now happens in minutes with AI." - CrowdStrike, Project Glasswing partner
Project Glasswing itself is a genuine and commendable initiative. Bringing AWS, Apple, Cisco, CrowdStrike, Google, Microsoft, and NVIDIA into a coordinated defensive effort with $100M in model credits and $4M in donations to open-source security foundations is exactly the kind of pre-competitive industry coordination we rarely see. But I want to be honest about what it is and what it isn't.
Here is the uncomfortable truth I keep returning to: at the time Anthropic made its announcement, fewer than 1% of the vulnerabilities Mythos had found were patched. Let that settle. The model surfaces thousands of critical flaws across infrastructure the entire internet depends on, and our remediation capacity, largely volunteer-driven open-source maintainers working at human speed, cannot absorb the volume.
This is the Glasswing Paradox. The thing that can see everything cannot fix anything. Vulnerability discovery has always been supply-constrained. AI just eliminated that constraint entirely, without touching the demand side of the equation, which is skilled humans who understand the code well enough to safely remediate what gets found.
From an enterprise data security product perspective, this reshapes the problem statement fundamentally. We have spent years building programs around mean-time-to-detect. We optimized for finding things faster. Now the detection problem is largely solved at the code level, and the bottleneck has shifted entirely to prioritization and remediation velocity. Your CISO's job just changed without anyone updating the job description.
First: treat open-source dependencies differently starting today. Not eventually. Today. Mythos surfaced ancient vulnerabilities in projects maintained by tiny volunteer teams. If your enterprise depends on OpenBSD, FreeBSD, or any of the hundreds of libraries the Glasswing consortium is scanning, you need direct line of sight into your dependency graph and your patch lag on each node. SBOMs are table stakes. What matters is having a prioritization framework that accounts for the new discovery rate.
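One way such a prioritization framework might look: score each dependency by severity, patch lag, and upstream maintainer capacity. The fields and weights below are assumptions for illustration, not a standard; a real program would feed this from an SBOM plus vulnerability-feed data.

```python
# Illustrative prioritization score for open-source dependencies.
# Weights are assumed for the sketch; calibrate against your own risk model.
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    cvss: float          # severity of worst known unpatched finding (0-10)
    patch_lag_days: int  # how long the finding has sat unpatched
    maintainers: int     # rough proxy for upstream remediation capacity

def priority(dep: Dependency) -> float:
    """Higher score = patch or mitigate sooner."""
    capacity_penalty = 1.0 / max(dep.maintainers, 1)  # tiny teams = more risk
    return dep.cvss * (1 + dep.patch_lag_days / 30) * (1 + capacity_penalty)

deps = [
    Dependency("libfoo", cvss=9.8, patch_lag_days=90, maintainers=2),
    Dependency("bigcorp-sdk", cvss=7.5, patch_lag_days=10, maintainers=40),
]
for d in sorted(deps, key=priority, reverse=True):
    print(d.name, round(priority(d), 1))
```

The maintainer-capacity term encodes the article's point: a critical flaw in a two-person volunteer project is a different risk than the same CVSS score in a vendor-backed SDK, because remediation velocity differs.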
Second: prepare for AI-augmented adversaries now, not when you see evidence of it in the wild. Anthropic itself disclosed last year the first documented case of a cyberattack largely executed by AI. A Chinese state-sponsored group used AI agents to autonomously infiltrate roughly 30 global targets, with AI handling the majority of tactical operations independently. The capability that Glasswing is trying to get defenders using first is already in adversarial hands in some form. The asymmetry we need to close is time, not access.
Third, and this is the piece I see least discussed: we need to start treating AI models themselves as part of our threat surface modeling. Not just as tools we use, but as systems that hold credentials, consume APIs, write to production environments, and take autonomous actions. The same autonomy that makes Mythos remarkable at finding bugs makes any sufficiently capable model running in your environment an entity whose permissions, reach, and failure modes have to be governed like any other privileged principal in your stack.
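Operationally, that means scoping a model's credentials the way you would scope any service account. The policy shape below is a generic sketch with invented names, not tied to any particular IAM system.

```python
# Sketch: treating an AI agent as a privileged principal with an explicit
# allow-list. Action and resource names are illustrative; map them onto
# your real IAM system.

AGENT_POLICY = {
    "principal": "agent:code-review-bot",
    "allow": {("repo:read", "repos/*"), ("pr:comment", "repos/*")},
    # Note what is deliberately absent: no "repo:write", no "prod:deploy".
}

def is_allowed(policy: dict, action: str, resource: str) -> bool:
    """Check an agent's requested action against its allow-list."""
    for act, res_pattern in policy["allow"]:
        if act != action:
            continue
        if res_pattern == resource:
            return True
        if res_pattern.endswith("*") and resource.startswith(res_pattern[:-1]):
            return True
    return False

print(is_allowed(AGENT_POLICY, "pr:comment", "repos/core"))    # True
print(is_allowed(AGENT_POLICY, "prod:deploy", "clusters/main")) # False
```

Default-deny with an explicit allow-list mirrors how any other privileged principal is governed, which is the article's argument: the agent's autonomy does not exempt it from least-privilege.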
Anthropic has chosen, deliberately, not to release Mythos generally. They are betting that keeping it restricted to trusted partners gives defenders more runway than attackers. I think that's the right call, and I respect the transparency with which they've explained the dual-use calculus. But it is a bet, not a guarantee. Competitors, both domestic and state-sponsored, may not make the same choice. A model that costs billions to train will face enormous pressure toward monetization.
Project Glasswing is a starting gun, not a finish line. The $4M donated to the Apache Software Foundation and OpenSSF through the Linux Foundation is meaningful and symbolically important. But sustaining the human expertise needed to actually fix what AI will keep finding requires the industry to treat open-source maintainers as critical infrastructure workers, with compensation, tooling, and organizational support that reflects that status.
We are right now in the gap between the discovery capability arriving and the remediation capability catching up. What we do in this window will define the security posture of enterprise systems for the next decade. AI just solved the discovery problem. Nobody has solved the fixing problem yet. That's the half that matters.
2026-04-23 03:01:29
The 12% of agents that make it to production and deliver meaningful ROI don't share a common model or a common framework. They started where failure was tolerable, instrumented everything from day one, designed for reversibility before capability, and expanded autonomy incrementally as they earned confidence in the system's behavior. The opportunity in 2026 isn't building smarter agents; it's closing the 68% gap between adoption and production. That is not an AI problem: it is a governance problem, an organizational design problem, and an instrumentation problem. The intern doesn't need to be smarter. The intern needs a manager.