2026-04-22 23:00:59
I run a startup as the technical lead with my partner, who manages the finances. I enjoy being involved in all aspects of the business, be it technical strategy, finance, marketing or public relations. As a hybrid, I get to wear a lot of hats. I have learnt to be a jack of all trades and master of some.
Being so many things all at once is difficult but rewarding. It requires you to think at a high ‘helicopter’ level, but also to bridge the gap between high-level goals and an operational point of view. This can feel like ‘bungee jumping’ down to ground level to see the nuts and bolts of how things must work to achieve those higher-level strategies.
A strong vein running through this process is technology. Becoming conversant with multiple programming languages, tech stacks and frameworks takes time and effort. Being a doer as much as a sayer means balancing communication with action. This is hard to do, but well worth the effort.
\
I guess I started in tech when I first set eyes on the ZX80 microcomputer at school. I never had full access to try programming myself until I was given a ZX Spectrum and struggled with Basic and typing machine code from magazines and thermal printouts.
I started college training to be a graphic designer and became a potter, only to run out of money and need a job. The business I worked for needed someone to do basic admin for their small Apricot PC network. It used a bus network based on coaxial cable, which I learnt to repair daily due to nefarious employees ‘tugging’ on the terminators for the fun of it. I wanted to learn how to ‘program the network’. Those were my words back then, meaning I wanted to learn how to program how systems worked in isolation, and how they could also communicate and cooperate together.
I bought my first business-grade PC through a loan from my employers and used this on a night course I attended at college to study for a ‘Higher National Certificate’ in computing studies.
Later, my partner, who is the fellow founder of our startup today, suggested I go on to University, which I did to study for a degree in Technology Management - a hybrid degree and groundbreaking to this day.
After working at the university from which I graduated, I went on to a number of roles that led to working for the then ICL (International Computers Limited), a UK-based IT company, in their Internet Managed Services division. We played a part in the creation of the first white-label Internet Service Provider, which was floated on the stock exchange and later sold on to Tiscali.
I worked with some of the best in the early days of e-Commerce and what we now remember as Web 2.0. I learnt to build from hardware up to programming, configuration and management of large systems.
I went on to work with and for small and large enterprises, businesses and organisations and some in government ‘restricted’ environments.
This has left me with a few tales to tell, but mostly the backstory and lived experience needed to run our own startup.
\
I don’t need to think long about that one; it is blue.
Blue is the colour we associate with clear skies and vistas. It is also associated with calm and collected thinking.
It is receding as opposed to standing out. In works of art, it can be used to portray a backdrop to the active foreground, in which it is the foundation and anchor point of our view.
This brings continuity and permanence, predictability and stability, rationality and purposefulness.
Calm and reflective thought wins out over harsh and angry responses that can be ill-thought-out and emotional.
It is easy to say something, very difficult to then unsay that thing, particularly when this causes harm and hurt.
\
The internet has been redesigned over the last 20 years. This has been engineered by those who control our access to its resources. This started out with search engines and later social media applications, and then entire ecosystems owned from the ground up, from hardware to apps through to services.
The resulting walled gardens and what has become a ‘splinter web’ is now the status quo of what was once a brave new world of internet freedom.
The only way to be heard, read or even seen right now is to ‘borrow other people’s bandwidth’, that is to say, online authority. We do this by posting or cross-posting to platforms whose authority is recognised by the gatekeepers of the internet.
Hackernoon, similar to other platforms, offers a place to publish and be heard on merit over notoriety. It offers a possibility of your content being read if you do not have influencer status.
I would like to think that Hackernoon continues to reward writing of relevance and merit so as to give us ‘nobodies’ a chance to be heard above the clatter of the algorithms.
\
My first formally trained programming language was Pascal, then COBOL, Assembler, C and Ada.
Some time later, I was a Perl monger in the early days of the internet, when Perl was the duct tape used to make the web what it has become today.
Now I wrangle Python, Go, JavaScript/TypeScript, PHP and Bash, and am currently looking at Clojure.
I have embraced AI and prompting in my coding and workflows for several years now.
I am committed to increasing my own and my clients’ and colleagues’ digital sovereignty - to owning your presence online. I am working on both as part of https://headshed.dev, a place, a ‘shed’ or ‘shelter’, in which I can build and store stuff.
MVK https://mvk.headshed.dev is a reduction of the 'train track' I use to build our infrastructure. The solutions, the 'trains' run on it.
Share-LT is the first ‘train’ application built for MVK to be published publicly as open source. It has already been used to deliver content for headshed.dev, the parent project; https://www.canalsidecoach.app, my partner Anne’s business; and a growing list of clients.
I plan to release Sched-LT, a calendar time-booking app; Shop-LT, a Stripe integration that creates a shop from the products in your Stripe account; and Filter-LT, a personal cloud-based DNS filtering appliance to help manage access to social and other content, with a view to staying safe online, focused and healthy.
\
Despite big tech trying to turn the internet into their own ‘portal’ - a goal I saw being chased in the noughties - the internet remains, to this day, a place to find knowledge and information that can help us in many ways. My primary interest is programming and technical architecture.
Open-source projects range from command-line utilities to TUIs, GUIs and complete frameworks, like Laravel, that let you build your own full-stack applications, apps, APIs and more. They enable anyone with the will and determination to build their own sovereign internet.
There has never been a time when such a vast repository of knowledge and learning has been available to humanity. In our past we had written text in differing forms - tablets, scrolls, codices - but these were expensive and rare, afforded only to the few with the means to own and access them.
The printing press, only in relatively recent centuries, reduced the cost of distributing knowledge and ideas, but it has been in our lifetime that we have seen a true emancipation of the printed word to become electronic, hyperlinked, indexed and searchable.
We can now ‘talk’ to our documents, having trained large language models on all that is currently available. That is, we at least have a means of talking with a sometimes unreliable, sometimes hallucinating witness to that data.
\
The situation you refer to is of such totality that I believe most, if not all, of the technology we have right now would inevitably perish. This is due to obsolescence being built into its design, and to a co-dependence upon other technologies being ‘always on’ and ‘ever present’. Any one piece of technology, be it an iPhone, a tablet, a laptop or a smartwatch, would stop working within days of the ‘event’.
Supposing the ‘event’ were not so cataclysmic, say, and that some civilisation and organisation were to persist, then reusable, recyclable technology might survive.
I guess then, my personal stash of IT bits and bobs that I used to build a home server could become personal wealth and things that are highly valued.
\
An obsession with growth.
Just a few days ago, we again saw the Earth from space, through the eyes of the four astronauts of Artemis II, and were reminded that the world we live on is finite.
To appreciate this, we need to work together and in cooperation, not constantly competing for first place.
The survival of our species is not all to do with survival of the fittest, as this leads to conflict and ultimate harm to others.
Being realistic about the outcomes of your personal achievements, wealth and acquisitions in life means realising not just that we can’t all be winners and billionaires, but that this course of over-consumption and unrealistic goal-seeking is unproductive.
Neither, though, should all things be ‘free’, as this fails to account for the value of work.
To expect all things to be free is a lie that has been programmed into us by big tech, only for them to own our very existence in the form of our actions, likes, dislikes and to have these monetised and sold so that we have become the product of their corporate surveillance.
\
I’d carry on with what I’m doing now, but better funded, and keep teaching others what I have learned: to build an ever better train track, and trains to run on all our own sovereign infrastructures.
The cost of running infrastructure, particularly in cloud, has decreased over the years as technology has advanced. Only AI forcing up the price of memory in recent months has changed that, but for the most part, the cost of hardware has fallen.
You can cram more CPU, memory and disk into less space in a 42U rack than when I started racking servers 20 years ago, such that a 2U or even 1U rack-mount server can now provide what a 20U device, taking up half the rack, could muster back in 2005.
Not all of these cost savings and economies of scale have been passed down from the hyperscale providers to the end user. The consumer may find that cloud bills have increased, inexorably in some cases.
We are sold services we once self-hosted and are convinced somehow that it is something we can no longer manage ourselves. Everything has to ‘scale’ to that of ‘Google’, else not be in any way fit for purpose.
Enabling ourselves and others to build sovereign infrastructure is the start of democratising the internet.
The term ‘third world’ is something we have grown up with in the first world, but that does not make it right. The fact that one part of the world may prey on others’ resources, leaving them in poverty and deficit, is an anathema we should all be ashamed of.
\
Prompt engineering and spec-driven coding have enabled me to increase my productivity and to see goals I had expected to take years fall to months, sometimes weeks.
This field is ever-changing, and goal posts are changed by tech giants without our consent or involvement. This means we need to be realistic, but it does not mean we cannot be optimistic.
The assertion that we are being trained to use AI and to lose the ability to think for ourselves is plausible, but I am finding the opposite to be true.
I am now energised to revive skills I learnt the hard way long ago, programming in entirely new languages I thought I’d never have the bandwidth for.
This is not limited to languages but also to disciplines in architecture, security, design and more as necessity requires.
Learning and growing new skills is greatly enhanced, and opportunities never before dreamt of can be realised.
\
The past is done. The future is not yet fully determined, but I hope for a better future than the past.
So I would travel to the future. Ten years would not be enough for me, though, rather 1000 or 2000 years.
We have failed to learn from our past and continue to do so. Humans are at an inflexion point at which they must stop doing the same thing over and over again, only to fail. This, I believe, is the definition of insanity.
The only way for us to survive the next century, perhaps decade, is to stop over-consuming, start cooperating and stop competing over finite resources. Rather, we need to start looking to the benefit of future generations over that of short-term profit and gain.
I believe this must happen, one way or another and that it will take time. I’d like to fast forward to the good bits right now.
\
I’ve been contemplating AI since the 90s, when ‘expert systems’ were being suggested as a way to train large ‘models’ to do complex work. The idea of, say, training an ‘expert system’ to give advice to medical staff was no longer science fiction. But the mechanisms for this were crude and labour-intensive.
We talk a lot about Large Language Models, the product of neural networks, but AI has been with us for far longer than its recent arrival in the form of chat interfaces.
Google Search, I believe, became the precursor to much of the technology we refer to as AI today, and Google has been at work for many years building its internal systems without us knowing much of what was really going on.
The days of ‘Search Engine Optimisation’ that led to ‘keyword packing’ and setting up junk links online to trick search engines into ranking websites are gone, thankfully.
This is due in part at least to AI and Google’s ability to assess writing and content directly, no longer referring to metadata. This has rendered the majority of old-school SEO irrelevant.
In some respects, we would be happy to find ourselves in a better, fairer search engine self-optimised world wide web where quality trumps clever positioning of text to get clicks.
But we find, though, that the tools have been used by Google, Meta, X, Apple, Microsoft et al. for their own gain, not ours.
Websites that once saw healthy traffic and e-Commerce transactions tanked overnight, and businesses collapsed.
The internet was re-plumbed to send user traffic to platforms owned by the gatekeepers.
Now, search engines own the content, the traffic and the attention of their users who think of this as free will.
Addictive behaviour is encouraged and amplified by AI-driven algorithms.
The EU is now announcing ‘age verification’ in the form of an application it is open-sourcing in April 2026, with non-compliant social media and search platforms likely to face financial sanctions at a governmental level.
But whilst waiting for government wheels to turn, we as individuals can choose to use our own sovereign infrastructure and to learn and to teach others how to do so for themselves.
\
2026-04-22 23:00:54
HackerNoon readers are builders, not casual browsers. They spend around 6.4 minutes on technical content — well above the industry average of 2.1 minutes — and 72% read through most of what they open. This is an audience that reads with intent and evaluates what it sees.
\
6M+ monthly active readers, 85% aged 25–44, with roughly 25% holding director-level roles or above. These are people with both context and responsibility — they've built things, worked with tools, and developed opinions. They're not looking for introductions; they're looking for clarity, relevance, and practical value.

\
60% of readers come from the US, EU, and India. US and European readers tend to engage with architectural content, vendor comparisons, and security topics tied to longer decision cycles. In India, engagement skews toward tutorials and implementation content, reflecting earlier-stage exploration.

\
67% are active GitHub contributors, and 108K are published technical writers on the platform. Most readers are building, testing, and applying what they learn, not just consuming content passively. Many are also capable of producing similar content themselves, which shapes how critically they engage with what they read.

\
40% use privacy-first browsers like Brave, Firefox, or Tor, which limits traditional ad reach. Content becomes one of the few reliable ways to reach this segment of the audience.
\
Readers make a quick decision at the start, looking for signs that content is credible and worth continuing. When those signals are present, they follow the argument through most of the article, comparing what they read against their own experience. At some point, reading shifts into evaluation: is this useful, is the tool worth trying, does this fit my context? That's where content begins to influence real decisions.
Scroll depth reflects this: around 72% of readers go through more than three quarters of an article. The behavior is linear and focused, closer to how people read documentation or research than how they scan a social feed.
Articles that hold attention tend to be grounded in experience, structured around clear reasoning, and transparent about trade-offs. Content that relies on abstract frameworks without evidence loses momentum quickly — this audience can tell.
\
:::tip Ready to reach them? Book a meeting with us to learn how 4000+ companies solve their marketing challenges with HackerNoon.
:::
\
2026-04-22 22:19:44
Hey Hackers!
Welcome back to our 3 Tech Polls Newsletter. If you’re new here, strap in, you’re in for a treat. Every week, we spotlight community-driven conversations shaping tech and its adjacent industries. It features HackerNoon’s latest Poll of the Week, plus two timely polls from around the web, keeping you plugged into the conversations moving our global community as they unfold.
This week, we’re looking at Bitcoin’s elusive creator, Satoshi Nakamoto, and the long-running obsession with uncovering his identity that seems to have been reignited—from a controversial 2024 HBO documentary pointing to Peter Todd, to a recent New York Times investigation building a case around Adam Back. Both have denied the claims. But the question hasn’t gone away.
So we decided to ask our community: do we still even care who Satoshi Nakamoto is?
\
Do we still even care who Satoshi Nakamoto is anymore?

\ At a glance, the answer from the HackerNoon community is… not really.
A combined 73% of respondents either said Bitcoin has outgrown its creator or admitted they never cared in the first place. It’s a strong signal that, for most people, the mythology around Satoshi has taken a backseat to Bitcoin’s actual performance as a system. The network works, blocks are mined, transactions settle—that’s what matters.
Even the smaller slices of the poll don’t suggest overwhelming urgency. Just 14% believe Satoshi’s identity still matters for Bitcoin’s future, while another 12% take a more pragmatic stance: they’d only care if it directly impacts their holdings.
That last group, though small, is where things get interesting.
Because while Bitcoin may have outgrown its creator culturally, it hasn’t outgrown him economically. Wallets attributed to Satoshi are estimated to hold around 1.1 million BTC—roughly 5% of the total supply.
Which reframes the question slightly.
It’s not just who Satoshi is—it’s what happens if he acts. And for that minority watching their HODL, the concern isn’t identity for identity’s sake. It’s about visibility, accountability, and the kind of market shock a sudden move could trigger.
This reframed question has people around the web speculating about a potential move from the inventor of the $1.4 trillion benchmark cryptocurrency.
\
:::tip Weigh in on the poll results here!
:::
\
Will Satoshi move any Bitcoin in 2026?

\ On Polymarket, the conversation strips things down to the only question that really matters: will anything actually happen?
And right now, traders aren’t betting on it. The odds lean heavily toward Satoshi’s wallets staying exactly as they’ve been for years—untouched, silent, almost myth-like in their consistency.
But the fact that this is even a market says a lot about where people’s heads are at. Potential consequences, identity (for the sake of knowing) be damned.
\
Will Satoshi move any Bitcoin by next year?

\ Kalshi is essentially asking the same question, just on a slightly different timeline.
Because if those wallets—estimated to hold around 1.1 million BTC—ever move, even a little, it wouldn’t go unnoticed. It’s the kind of signal that would instantly set the market off, from price reactions to endless speculation about what it means and what comes next.
So while the broader community may have moved on from who Satoshi is, the markets show there’s still real interest in what he might do.
\
:::info Vote on this week’s poll: When an AI company reports a security issue, what matters most to you?
:::
:::tip Stay informed on our most recent polling data. Subscribe to 3 Tech Polls here.
:::
That’s all for this week!
Catch you at the next one.
\
2026-04-22 22:00:52
An AI agent is an autonomous entity that perceives its environment through sensors and acts upon it through effectors to achieve goals. AI agents are fundamental to building intelligent systems capable of operating independently and making decisions.
In this article, we will learn and break down what AI agents are, how they think, and how they are shaping the tech world and becoming part of our daily lives.
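The perceive/decide/act loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular framework's API; the `Thermostat` class and its environment dict are hypothetical names invented for this example.

```python
class Thermostat:
    """A toy agent: senses room temperature, acts to reach a target goal."""

    def __init__(self, target: float):
        self.target = target

    def perceive(self, environment: dict) -> float:
        # Sensor: read the current temperature from the environment.
        return environment["temperature"]

    def decide(self, temperature: float) -> str:
        # Policy: choose an action that moves toward the goal.
        if temperature < self.target:
            return "heat"
        if temperature > self.target:
            return "cool"
        return "idle"

    def act(self, action: str, environment: dict) -> None:
        # Effector: the chosen action changes the environment.
        if action == "heat":
            environment["temperature"] += 1.0
        elif action == "cool":
            environment["temperature"] -= 1.0


env = {"temperature": 18.0}
agent = Thermostat(target=21.0)
for _ in range(5):
    action = agent.decide(agent.perceive(env))
    agent.act(action, env)

print(env["temperature"])  # the agent has driven the room to 21.0
```

Real agents replace the hand-written `decide` policy with planning or an LLM call, but the sensor-policy-effector cycle is the same.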
Deploying multiple local AI agents using local LLMs like Llama2 and Mistral-7b.
AI is evolving fast, but security isn’t keeping up. Discover why zero-trust architecture is critical for safe, scalable AI agent deployment.
Let’s uncover what the Playwright MCP server brings to the table, and how to use it with the OpenAI Agents SDK.
This is the first part in a multi-part series on building Agents with OpenAI's Assistant API using the Python SDK.
Ninja is proving 2025 is the year of AI agents. Outpacing OpenAI, Google, and others in tackling hallucinations, millions rely on it for coding, writing & more.
Part II of the series: use MCP and Solana AgentKit to build an AI Agent that can trade USD and EUR stablecoins.
Google A2A - a first look at another agent-agent protocol and compared to Anthropic’s MCP.
Imagine having at your disposal an AI-powered assistant that not only comprehends your queries but can also seamlessly interact with various applications.
LLM Sandbox: a secure, isolated environment to run LLM-generated code using Docker. Ideal for AI researchers, developers, and hobbyists.
Join Overlord.bot this Christmas on Arbitrum! AI-powered DeFi magic, 50 ETH pumps, and token launches that redefine creativity and community.
Build your first real AI agent with this simple guide for beginners—learn, code, and create smart tools that take action.
Is MCP overhyped? Explore the role of agent tools, emerging protocols like x402, and the rising security risks of increasingly autonomous AI agents.
LangGraph, CrewAI, AutoGen, Pydantic AI, and 8 more. What works, what doesn't, and when to use each.
Building AI agents can be a mess of broken repos and outdated tools. Here’s the real, tested open-source stack for building reliable, working prototypes.
Ever since the DeepSeek boom, all the leading AI companies have been updating their models and releasing their own AI agents left, right, and center.
Agents that work in demos fail at scale. Learn why 429/403 happen under concurrency and how to build reliable, accurate evidence acquisition.
How Membrane used AI agents to ship 1,000 API integrations in 7 days — covering auth, actions, validation, and everything in between
The Model Context Protocol has emerged as the universal translator for artificial intelligence, and it's redefining what integrated AI systems can achieve.
Learn why search-and-extract matters for AI enrichment and research. Step-by-step tutorial using SERP API, Web Unlocker, and Browser API with a real example.
Discover Social AI, the Web3-powered social media platform that puts YOU in control. Own your data, earn rewards, and experience AI-powered tools like Victoria
This is the second part in a multi-part series on building Agents with OpenAI's Assistant API using the Python SDK.
RAG uses known docs. Market-aware agents need live web evidence. Learn instant knowledge acquisition and how it enables accurate outputs.
Build agentic workflows with AutoGen to make AI more deterministic and reliable, overcoming the limitations of simple LLM calls.
Let's see how to integrate the OpenAI Agents SDK library with a real-world MCP server for AI agent development
A 2025 map of computer use agent benchmarks, from ScreenSpot to Mind2Web, REAL, OSWorld and CUB, and how harness design now rivals model quality.
The religion was called Crustafarianism.
Best Stock Market Data APIs for algorithmic traders, fintech developers, and AI agents.
Approximately three years ago, I was chatting with a dear friend on LinkedIn and predicted that in around 5-10 years we will have advanced systems that will gen
As our reliance on AI-enabled hyper-automation increases, we will leverage human expertise to design robust Workflows capable of managing repetitive tasks.
While Eliza and GAME are rather distinct AI agent frameworks with different applications, they both integrate and foster user interaction on Twitter.
An AI quartet of content creators, event recommenders, liquidity allocators and information aggregators will bring prediction markets to new heights.
Learn how to take AI agents from prototype to production with this 5-step roadmap covering Python, RAG, architecture, testing, and real-world monitoring.
Open-CUAK is an open-source platform for managing automation agents at scale.
A practical guide to making 7B models behave: constrain outputs, inject missing facts, lock formats, and repair loops.
Accuracy is no longer the gold standard for AI agents—specificity is.
How precompiling context for AI agents beats context stuffing. Lessons from building 100+ specialized agents for a web3 application.
From firehose to digest: how multi-agent systems, guided by MCP and grounded in fundamentals, can transform any feed into personalized insights.
Explore how Agentic AI transitions beyond chatbots into autonomous automation, empowering businesses with innovative workflows, and enhanced productivity.
In this edition of This Week in AI Engineering, we go through the latest updates along with some must-know tools to make developing AI agents and apps easier.
I run a personal AI agent with access to my health, calendar, and Telegram. Here are security principles that keep the blast radius small.
How I turned AI coding from chaos to production-ready: DevFlow adds security reviews, quality gates, and audit trails to Claude Code, Cursor, Gemini.
See how you can prompt our AI agent to generate content based on specific knowledge inside of a content index or at a particular URL.
Learn how I built a multi-stage Langchain agent for MySQL. This article details my journey, challenges, and key steps in creating an intelligent database intera
This article examines the first large‑scale AI‑autonomous cyberattack (GTG‑1002), where an LLM hijacked via MCP became a self‑directed espionage engine.
I built a puppy trainer bot with Coze, a no-code platform. And now my Aussie, Jenny is on her way to becoming a good girl.
How AI agents could’ve helped Pfizer speed up vaccine development by designing trials, tracking research, and predicting risks.
AI will not replace software engineers, but developers who use AI coding agents effectively may outpace those who do not.
Devcon 2024 by Ethereum Foundation. Web3 product marketer shares impressions about the biggest blockchain and crypto event. AI agents, memecoins, ZK, and RWA.
Learn how to build your first AI agent in just 30 minutes! This step-by-step guide makes adding AI agents to your application easy, even for beginners.
A hands-on review of Andrej Karpathy's autoresearch repo. Check what happens when an AI agent autonomously optimizes a neural network while you sleep.
Let's find out why AI agents convert HTML to Markdown to cut token usage by up to 99%!
AxonerAI: Rust framework for building AI agents. Alternative to LangChain with memory safety, true concurrency and blazing fast executions.
Learn how to speed up and optimize agentic workflows with smart step-cutting, parallelization, caching, and model right-sizing.
Autonomous agents can do some remarkable things, but they need guardrails. The people who skip the safety layer are learning an expensive lesson.
Learn how to build practical AI agents from scratch using GPT, n8n, CrewAI, and Streamlit. Ship your first agent this weekend with step-by-step guidance
Fragile, chaotic AI agents are everywhere. AAC is a simple yet powerful architecture that brings structure, scalability, and reliability to your agent systems.
AI Agents are the digital workforce reshaping how modern systems operate. They're more than fancy scripts — they sense, decide, act, and learn.
Agentic AI replaces passive chatbots with goal-driven agents; MCP standardizes tools, enabling safe, scalable human-AI collaboration.
This implementation demonstrates significant advancement from basic offline RAG to an intelligent offline based agentic system.
From BabyAGI to Clawdbot, the chronicle of autonomous AI agents that moved out of infinite hallucination loop towards 24/7 dependable employee of the month.
An AI agent is a small “AI worker” that can do tasks instead of you.
Build real AI agents in 5 levels, from simple tool use to full agentic systems—code included.
Learn about the security risks in MCP and how the Agent Security Framework can safeguard your AI agents from attacks and data breaches
AI agents are like hiring a hypnotizable butler…capable but dangerously suggestible. Why compliance plus access creates risks you haven't considered.
Some claim RAG is dead, but anyone building production AI workloads is doubling down. Here's why it remains essential for real-world AI deployments.
Explore how RAG systems differ from traditional large language models by leveraging real-time data access and applications.
Build a Supervisor Agent with Amazon Bedrock to orchestrate EC2 listing & CloudWatch CPU metrics via Lambda — no direct API calls, fully automated.
Look, I'm not saying I'm building Skynet here, but when my AI agent started generating philosophical tweets at 3 AM about the nature of consciousness, well…
Agentic Workflow offers a practical path for AI adoption—more reliable than agents, easier to deploy, and ready for real-world tasks.
Openclaw is a personal AI that works in your device and does administrative tasks for you.
This is the first article in a two-part series on building AI Agents from the ground up.
Domo’s approach is to move away from prompts as the primary interface and toward agents.
Edge AI agents—autonomous systems operating directly on endpoint devices—are fundamentally reshaping how we deploy intelligence.
Understand AG-UI's role in the broader agent ecosystem and explore a practical implementation through echo server development.
Elon Musk's xAI has released Grok 3, setting new standards in AI performance with remarkable reasoning capabilities.
OpenAI Codex is an AI model that turns your plain English instructions into code.
Large language models (LLMs) can perform many of the same functions as Bash scripts, but do so while sparing me the technical minutiae of scripting.
By releasing its own MCP server, GitHub provides an official gateway for agents to interact with GitHub features (repos, PRs, issues, etc.).
It takes just a few lines of code with Neuron to implement your first full featured agent.
This article explores the innovative AGENTS.md framework, detailing how recursive agent orchestration and persistent memory can enhance collaborative learning, improve efficiency, and transform AI interactions across multiple tasks.
AGENTS.md is a simple, open-source format that guides AI coding agents on how to interact with your project.
Just because we can build multi-agent AI systems doesn’t mean we should.
Let's dig into the new Agentic AI and Agentic RAG trends to understand what they truly are.
Understand why integrating AI agents into enterprise architectures marks a transformative leap in the way organizations approach automation.
ELEKS' tech experts have dissected predictions from Gartner, IDC, Deloitte, and KPMG to evaluate each trend's real deployment risk and business value.
A tutorial showing how to implement retrieval as an LLM-invoked tool with Vercel's Generative UI components library.
GitHub has unveiled a significant upgrade to Copilot during Microsoft Build 2025: a cloud-based coding agent capable of drafting and iterating on pull requests.
AI coding agents are evolving into reliable collaborators. Many of the most powerful AI coding Agents are open source. Here's the list.
At 90% accuracy per step, a 20-step agent succeeds 12% of the time. Your demo didn't show you that. Production will.
Multilingual Prompt Injection is a technique that exploits language gaps in security systems. Safety nets fail when non-English exploits are activated.
You can create sophisticated AI assistants that seamlessly handle everything from reversing strings to querying internal databases.
Building a security-first autonomous AI agent and letting it run without interference for a week
Discover how MCP creates seamless interaction between AI and apps, unlocking a new era of AI-native communication.
Google Jules is an AI-powered coding assistant designed to work like an autonomous developer.
This article walks you through the steps required to integrate AI into your system.
As a developer, I prefer not to repeat myself. This post explains why and how to avoid repetition as a skill.
In this tutorial, I’ll walk you through building your very first AI agent in Python using Google’s Agent Development Kit (ADK).
The platform converts elite trader expertise into AI agents through structured playbooks, real-time data fusion, and proprietary models trained specifically on
Build an AI agent that won’t fail. Learn step-by-step how to automate browser tasks using Python, LLMs, and Bright Data’s Agent Browser.
DeepMind's Genie 2 generates interactive 3D worlds from simple text or images, revolutionizing creative tools and AI agent testing.
Organizations need to reframe AI maintenance from a routine IT expense to a critical investment in maintaining revenue streams.
You work with ChatGPT/Claude and love it. You hit a support chatbot and want to throw your laptop. They run on the same models. What gives?
While the XRP blockchain is already known for its fast and low-cost transactions, XRPTurbo’s approach focuses on enhancing the XRP ecosystem with on-chain and o
Most enterprise problems are still best solved with good engineering and automation.
What teams need is not another dashboard; they need an intelligent control plane. This piece is implementation-focused: minimal theory, real architecture.
How AI coding agents like Claude Code turn developers into CTOs—scaling projects with CI/CD pipelines, parallel sprints, and 10x test coverage.
An analysis of the fragmented AI agent tooling landscape and why the full lifecycle needs to consolidate into open platforms.
Building AI agents isn’t as complex as it sounds. Learn how modern AI apps are mostly about smart API calls, streaming responses, and clean architecture.
Agentic Commerce explores how autonomous AI agents are transforming retail, purchasing, and enterprise workflows.
Unlocking AI's full value requires broader organizational transformation.
Learn how to build, deploy, and extend your own Model Context Protocol (MCP) server using Python and Sevalla to let AI models securely access real-world data.
A practical guide for AI engineers and builders on shipping production-grade AI agents—based on lessons learned in the field.
While teams focus on model selection, many overlook the orchestration layer, which determines whether an AI application remains economically viable at scale.
Production agents often need response caching, input sanitization for sensitive data, protection against prompt injection, and observability.
Learn how to solve memory limits and "context rot" in long-running AI agents using LangChain's summarization middleware.
Learn to build your first AI agent in 5 days. Step-by-step guide using GPT, n8n, CrewAI, Cursor, and Streamlit for automation and deployment.
A first look and hands-on test of II-agent, claimed to be the smartest autonomous AI Agent in the market. And it is open source too.
What’s really driving the crypto market in 2025: open ASICs, political memecoins, AI in DeFi, and real-world asset tokenization.
Learn how Langchain turns a simple prompt into a fully functional AI agent that can think, act and remember.
The agent can validate balances, check liquidity venues, perform the necessary calculations, and present a transparent summary. It is free to use on griffinai.i
Learn how to build modular speech to text systems with CLI AI agent. Architecture patterns for cross platform audio recording and AI transcription.
A pipeline of Dagger containers running AI Agents to give me financial and investment advice
This is a Plain English Papers summary of a research paper called OpenClaw-RL: Train Any Agent Simply by Talking
AI agents are changing how software is built. Discover what coding roles are vanishing, what’s emerging, and how devs can stay ahead in the agent-first era.
Master AI Agent workflows to get reliable, high-quality outputs. Learn prompt chaining, routing, orchestration, parallelization, and evaluation loops.
AI is evolving from assistant to economic agent; those who adapt early and leverage this shift will gain a significant edge in global markets.
Exploring how Google’s AP2 and AI-powered digital twins are reshaping product design, user experience, and the future of human–agent interaction.
The Mixture-of-Agents (MoA) framework is redefining how we push large language models (LLMs) to higher levels of accuracy, reasoning depth, and reliability.
Agent Mode is an autonomous AI pair programmer integrated directly into your VS Code editor.
Anthropic has released Claude 3.7 Sonnet, integrating both standard response capabilities and extended reasoning within a single model.
As a writer, visual artist, and software engineer, AI can already outcompete me on all fronts. What does that mean for my future?
The Plan-Act Loop is the only way to ship AI code reliably. A tech lead shares 7 ironclad rules to coach Claude Code, avoid dependency loops.
Discover the Mech Marketplace: the first decentralized AI bazaar where agents autonomously trade skills, revolutionizing AI collaboration and commerce.
Discover what AI agents are, how they work, their benefits, and limitations. Your simple guide to why they matter in today’s AI-driven world.
AI agents are taking over commerce. This article explains why brands shouldn’t compete with platforms, but build orchestration systems that direct them all.
We need to stop treating the UI as an afterthought. It’s a critical component for unlocking the value of AI agents in the enterprise.
AI agents are starting to participate directly in development workflows. Learn how to work with them.
A practical 2025–2026 guide to AI app security: defend RAG pipelines, autonomous agents, chatbots, and document OCR against injection, leakage, and tool abuse.
Tutorial for analyzing financial statements with an autonomous AI agent.
A step-by-step guide to containerising a FastAPI application with Docker and deploying it to the cloud for consistent, production-ready delivery.
Learn how to build and deploy your first AI agent using Langchain and Sevalla.
A new architecture is emerging: AI agents that don't just react, but can independently develop a logic of actions.
If AI can write the code, what’s left for software engineers?
Companies are seizing this AI-driven opportunity, using tools like chatbots and AI agents to innovate and work smarter.
The AI Customer Service Agent: Jarvis or Trojan Horse? Opinion piece on the double-edged sword of AI customer service agents.
A conversational healthcare agent built with Parlant, designed to assist patients with appointment rescheduling, doctor availability, and lab work preparation.
The AI agent tech stack is a layered system of tools that enables these agents to reason, act, and adapt with capabilities equivalent to those of a human.
The production environment is where fancy AI agent demos go to die: demos are controlled environments, while production is the wild, wild west.
WLTech.AI explores the ARC challenge, an important benchmark in AI research, advancing the quest for artificial general intelligence through generalization.
BI Agents could overwhelm tomorrow’s analytics systems. This conceptual scenario explores the architectural stress and governance needed to contain them.
Learn how to build flexible, intelligent workflows with n8n. Automate tasks, boost productivity, and create systems that adapt to real-world complexity.
By giving an AI a direct line to the quantum vacuum, we aren’t just making a random number generator. We are building a machine that can break its own chains.
Enhance your SQL workflows with LangChain and FAISS: learn how vector databases, foreign-key-aware retrieval, and AI-powered tests remove token bloat.
Tired of brittle prompts? These 5 agentic AI patterns—Reflection, Tool Use, ReAct, Planning, and Multi-Agent—make your LLMs more reliable and useful.
How to serve raw Markdown files in a Next.js blog using middleware and API routes, enabling Agent Experience SEO (AX-SEO).
Building an AI agent to analyze video sounds easy—until reality hits. Discover why multimodal LLMs like GPT-4o still struggle with video.
Polish - a web extension that lets you style every web page, powered by AI.
Microsoft is intensifying its marketing and information campaign for GitHub Copilot Agent Mode in VS Code and VS2022.
Learn Symfony 7.4 best practices for symfony/ai-agent: configure agents via AI Bundle, inject with DI, build type-safe tools with enums, add processors, and tes
Discover how AI agents are reshaping software, challenging SaaS, and driving a future of automated workflows, personalization, and agent-powered systems.
Transforming AI agents into reliable sources of ROI isn't primarily a technical hurdle; it's a matter of strategic governance and management.
OpenLoop is a live feedback platform built almost entirely by autonomous AI agents—widget, roadmap, voting, admin, and all. Cost: ~$15.
How AI coding agents can speed up time to market and prototyping, but still require experienced oversight to deliver quality results.
The “misalignment window” reveals whether an AI agent feels sharp or strangely distant.
54% of software defects in production are caused by human error during testing.
The film serves as a direct invitation for users to join the live Pre-TGE Agent Activation Campaign. World3 has successfully raised $5.5 million in funding at a
Agentic AI can transform testing—but only if it’s controlled. Start small, add guardrails, integrate tools, and scale autonomy once reliability
AI agents are the talk of the technology world, and for good reason.
Data Horizon earns a 34 Proof of Usefulness score by building conversational analytics for GA4, helping marketers access insights without complex dashboards.
Model Context Protocol is an idea for a more standardized way to manage the back-and-forth between your applications and large language models.
Tired of guessing what’ll sell in new markets? I built an AI agent that analyzes local search, ecommerce listings, and reviews to help you launch smarter.
MCP makes it possible for your agents to connect to Slack, GitHub, your database, and whatever else you throw at them.
2/2/2026: Top 5 stories on the HackerNoon homepage!
Learn how to engineer an autonomous SDR agent using Gemini 2.5 Flash and n8n. Scale high-intent B2B lead generation for just $0.01 per lead

Revolutionize summary evaluation with Agentic AI. Ensure factual accuracy, deep understanding, and human-centric evaluation for trustworthy AI insights.
How to build practical, human-centered AI agents with small, agile teams to meet real federal challenges—without starting from scratch.
A real-world Agent Skills refactor: progressive disclosure, the 200-line entry rule, and workflow-first design to prevent context blowups and regain speed.
AI agents are transforming industries, replacing jobs, and redefining the future of work. Here’s what this means for careers, businesses, and society.
In this article, we will learn how an AI Agent helps with decision-making for booking a flight by searching for the cheapest flight on the specified route.
Built a system to automate ChatGPT through browser control using actionbook and OpenClaw. Telegram message triggers automated Chrome interaction with ChatGPT
Let’s explore what your AI agent truly needs to unlock its full potential and conquer the Web!
AI anxiety is a skills mismatch. Super-agency means orchestrating AI systems—intent, workflows, tools, QA, iteration—to scale your output and control.
When I started adding API test automation to Debuggo, I realized the whole process is a series of traps. Here is how I'm solving them.
Systemic risk in blockchain is accelerating with autonomous agents operating without awareness of network conditions.
Shinkai is an open-source, local-first platform for building and running AI agents. It combines offline AI, peer-to-peer protocols, and crypto payments to help
Explore how AI coding agents reshape developer workflows from autocomplete to pair programming and discover the future of coding collaboration.
Web3 took a big step forward, but it’s not a leap just yet.
Explore the rise of autonomous AI agents, their business integration, key challenges, and why now is the time to prepare for an AI-driven future.
Reference architecture for LoA-3 agents on rails: SOP-first (YAML), UC-governed tools, LangChain agent, Assurance Gate, MLflow 3 + OTel, Databricks GPT-OSS.
Generative AI systems are built in blocks, each performing a distinct function and interacting with other blocks to achieve a larger goal.
The next great leap in artificial intelligence is the creation of “agentic LLMs.”
Backed by a recent $3 million investment, iAgent has forged several strategic partnerships with leading projects such as Base, LayerZero, Avalanche,
The distinction between testing and evaluation in AI systems
Aventino is a permissionless smart wallet infrastructure that uses Ethereum account abstraction protocol ERC4337. The platform lets developers:
Artificial intelligence (AI) is poised to make its largest splash in the healthcare industry.
With the trading tool powered by AI, the platform aims to transform how users interact with decentralized finance, offering innovative tools and enhanced access
This is the User Trust Probe. It doesn’t measure accuracy. It measures something earlier and more fragile.
AI agents fail when their memory is stale. CocoIndex keeps agent knowledge fresh by incremental processing out-of-box with zero config overhead.
A Reddit post highlights the failure modes of internal AI agents.
In the future, a "Senior Engineer" won't be defined by how fast they can debug a stack trace. They will be defined by how well they architect the Agentic Loops
Control your crypto with natural language. OrcaMind’s AI Agent Wallet automates secure multi-chain transactions using LLMs, MPC, and TEE security.
AI agents have rapidly gained attention in the cryptocurrency space in recent months. Following the rise of TruthTerminal as the first widely recognized AI agen
An agent that exists only as a prompt or a framework-level loop has no real boundaries.
AI-powered QA interviews: a practical approach to structured hiring that evaluates real skills, reduces bias, and reflects actual QA work.
The solution to tools that lack real-world utility isn't more AI tools; it's instead a fundamental restructuring of how AI creates and captures value.
In this article, we look at the chain of thought prompting technique and how it is key to shaping smarter AI Agents.
How to unlock the full power of GitHub Copilot agents inside VS Code.
2026-04-22 21:59:59
This approach misses the real strength of LLMs. Instead of exposing raw RAG output, we should feed the retrieval knowledge back into the LLM first. This allows the model to reason with the context, synthesize multiple pieces of information, and deliver answers that are more accurate, natural, and aligned with user intent. In other words, we are not just teaching the LLM facts - we're teaching it to think with our data.
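The retrieve-then-reason pattern described above can be sketched in a few lines of Python: fetch relevant passages, then fold them into the model's prompt so it can synthesize across them, rather than exposing raw retrieval hits. This is a minimal sketch; `retrieve` is a toy keyword ranker and `call_llm` is a hypothetical stand-in for a real vector store and model API, neither of which the paragraph names.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Toy keyword retriever: rank passages by term overlap with the query.
    A real system would use a vector store; this stand-in keeps the sketch runnable."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.values(),
        key=lambda passage: len(terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Feed the retrieved knowledge back into the LLM instead of returning it raw,
    and instruct the model to reason across passages rather than echo one."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Using only the context below, answer the question. "
        "Synthesize across passages rather than quoting one verbatim.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would be a model API call.
    return f"[model response grounded in {prompt.count('- ')} passages]"

corpus = {
    "doc1": "Our refund window is 30 days from delivery.",
    "doc2": "Refunds are issued to the original payment method.",
    "doc3": "Shipping is free on orders over 50 dollars.",
}
answer = call_llm(build_prompt("How do refunds work?", retrieve("refund policy", corpus)))
print(answer)
```

The design point is the middle step: the model sees the retrieved passages as context it must reason over, which is what lets it merge several partial facts into one accurate, natural answer.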
2026-04-22 19:55:51
\
When hypertext first entered theoretical discussions, it appeared as a visible transformation of writing. Text was no longer arranged exclusively in sequence but in segments connected through explicit links. A reader could move laterally rather than progressively, choosing one path among many possible continuities. At the time, this seemed revolutionary because it disrupted habits inherited from print culture: beginning, middle, conclusion; argument as controlled succession; reading as disciplined progression through an already determined line.
Yet from a contemporary perspective, the most important feature of hypertext was not the link itself but the cultural logic hidden behind it. Hypertext revealed that texts had long depended on suppressed multiplicity and that reading had always contained unrealised alternatives. The visible hyperlink merely gave technical form to relations that had previously remained implicit.
This is why hypertext should now be understood less as a distinct technological phase than as an early form of digital cultural consciousness. What appeared in the 1990s and early web culture as an explicit textual architecture has since become an invisible infrastructure governing far larger systems. The link has not disappeared; it has simply ceased to present itself openly. Contemporary digital environments still operate through relational pathways, but increasingly these pathways are no longer chosen by the reader. They are selected, weighted, ranked, and anticipated by algorithmic systems whose operations remain largely opaque to those who move within them.
The difference is historically decisive. Classical hypertext asked the reader to decide where to go next. One clicked consciously, aware that every movement represented a choice among visible alternatives. The early web retained this experience. Even when one became lost, one understood that this loss belonged to one’s own navigation. A text opened toward another text, one archive toward another archive, and the reader remained aware of participating in the construction of continuity.
Algorithmic culture alters this relation by reducing the visibility of the path itself. The next element often appears before the need for explicit choice emerges. Feeds scroll continuously, recommendations arrive before requests are fully formed, search results present hierarchies already shaped by ranking systems whose criteria remain inaccessible in ordinary use. A relation still exists between one textual fragment and the next, but that relation no longer appears as a transparent link.
It arrives already interpreted by technical systems acting in advance of reading.
This means that the hypertextual condition has not been overcome; it has been absorbed into another order of mediation. What earlier digital culture exposed externally now operates internally, beneath interface surfaces that increasingly conceal their own logic. Hypertext once displayed the plurality of possible paths. Algorithms increasingly narrow that plurality while preserving the appearance of abundance.
This transformation can be observed clearly in the history of search. Early encounters with the web often involved visible uncertainty. One searched not because answers were guaranteed but because navigation itself remained exploratory. Results were unstable, often excessive, and demanded interpretive labor. Over time, systems developed toward anticipatory confidence. Search now often appears less like access to a field of documents and more like an immediate resolution of probable intent.
Google became emblematic of this shift not merely because it organised information efficiently, but because it normalised a new relation between reader and textual world: one in which ranking itself became an epistemic force. Documents still exist in plurality, but their visibility depends increasingly on technical ordering rather than on neutral availability. The path through knowledge remains relational, yet relations are pre-weighted before the reader arrives.
This changes the philosophical meaning of access. In classical hypertext, multiplicity was explicit and often disorienting, but it remained visible as multiplicity. In algorithmic systems, multiplicity persists while appearing already resolved. One receives not the field itself, but a selected sequence within the field. Hypertext made complexity perceptible. Algorithms often render complexity manageable precisely by concealing the processes through which certain relations become privileged over others.
The social consequences of this shift extend beyond convenience. Once pathways become opaque, authority itself changes form. In print culture, authority often resided visibly in authorship, institutional publication, editorial control, or recognised disciplinary legitimacy. In hypertextual environments, authority became more distributed: references linked outward, citations proliferated, texts acquired credibility partly through relational density rather than solely through singular origin. This transformation was already visible in Wikipedia, where knowledge ceased to rely entirely on fixed authorship and instead emerged through revision histories, collaborative correction, and transparent contestability.
Wikipedia remains one of the clearest examples of hypertext functioning at civilisational scale because it preserved visible relations while allowing authority to remain distributed rather than centralised. Every article contains pathways outward; every statement is potentially revised; every source remains contestable. Trust does not emerge from singular voice but from the visible architecture of references and revisions.
Algorithmic culture complicates this model because many contemporary systems no longer expose relations so clearly. Authority increasingly appears detached from explicit source structures. Recommendations emerge without visible citation. Summaries appear without transparent genealogy. Information arrives in finished surfaces whose internal pathways often remain hidden.
This condition becomes even more striking in the case of large language models such as those built by OpenAI or Anthropic. These systems do not operate through hyperlinks in any visible sense, yet they inherit hypertextual logic at a far deeper level. They are trained on vast textual corpora composed of innumerable relations among words, phrases, genres, arguments, references, and discursive patterns. What they generate is not retrieved sequence but probabilistic continuation emerging from latent relational structures distributed across immense textual environments.
In this sense, one may say that artificial intelligence realises a concealed form of hypertext. The machine no longer presents links to be followed; it moves internally across relations before language appears. Connections once externalised through clickable structures become statistical operations within model space. The reader no longer navigates between visible fragments; the system navigates relational densities before producing a sentence.
This changes the epistemological situation profoundly. Classical hypertext allowed one to inspect at least some of the pathways through which meaning expanded. In generated language, those pathways often remain inaccessible. A statement appears coherent, yet the route through which textual relations produced that statement cannot ordinarily be reconstructed in human-readable form.
The old question attached to hypertext was simple: where does this link lead? The corresponding question in algorithmic language becomes more difficult: where does this sentence come from?
That question now matters because digital culture increasingly encounters language detached from stable provenance. A generated paragraph may resemble authored discourse while lacking identifiable origin in any conventional sense. It may synthesise without quoting, paraphrase without attribution, imitate without citation. Hypertext once multiplied references; algorithmic generation can dissolve visible reference while preserving relational production.
This is one reason digital trust has become a philosophical rather than merely technical problem. Trust in earlier textual systems often depended on visible chains of authority: author, edition, institution, archive, citation. Hypertext complicated those chains but usually preserved the possibility of tracing them. Algorithmic systems increasingly produce outputs in which relation survives while traceability weakens.
The result is not simply misinformation, as public debates often suggest, but a deeper instability of textual confidence. One encounters language whose coherence no longer guarantees origin, whose fluency no longer confirms accountability, and whose apparent authority may conceal uncertain genealogies.
Yet it would be mistaken to interpret this as a total rupture. Algorithmic culture does not abolish hypertextual history; it intensifies tendencies already visible within it. Distributed authority, uncertain authorship, unstable contextual boundaries, and the multiplication of interpretive paths all emerged long before contemporary AI systems became socially visible. Hypertext was one of the first forms through which culture learned to inhabit those conditions consciously.
What changes today is that relational complexity increasingly acts before conscious reading begins. The visible link becomes hidden inference.
This also explains why contemporary readers often experience digital environments as simultaneously abundant and narrowing. There appears to be endless access, yet movement occurs within architectures shaped by recommendation, ranking, and prediction. One no longer gets lost in the same way early web users once did, because systems increasingly reduce visible uncertainty. But the price of reduced uncertainty is diminished awareness of how pathways are constructed.
Hypertext once educated readers in the labor of choosing connections. Algorithmic culture risks producing readers who inherit connections already chosen.
For this reason, the history of hypertext remains unexpectedly contemporary. What once appeared as a literary or technical experiment now reads as an early conceptual map of the conditions within which digital life has fully matured. The old link has not vanished. It has become infrastructural, statistical, and often invisible.
Perhaps this is why returning to hypertext theory today no longer feels antiquarian. It reveals that many of the problems attributed exclusively to artificial intelligence or platform monopolies were already visible at the moment culture first began to reorganise itself around non-linear textual relations.
Hypertext did not merely anticipate digital culture. It named, earlier than most other concepts, the form through which culture would eventually become difficult to trust without new philosophical tools.
\
Barrett, E.
1988. Text, Context, and Hypertext: Writing with and for the Computer.
1989. The Society of Text: Hypertext, Hypermedia, and the Social Construction of Information.
Barthes, R.
1970. S/Z.
1977. The Death of the Author. In: Image–Music–Text.
Eastgate Systems
n.d. Serious Hypertext. Hypertext publications archive.
Epstein, M.
1998. The Book of Books. Digital project.
1998. From the totalitarian era to the virtual. On the opening of The Book of Books.
Fiderio, J.
1988. A Grand Vision.
Franks, M.
1995. The Internet Publishing Handbook.
Garcia Marquez, G.
1977. One Hundred Years of Solitude.
GreenHearth, A.
1996. SixSexScenes. Hypertext novella.
Ingarden, R.
1937. The Cognition of the Literary Work of Art.
1947. Sketches in the Philosophy of Literature.
1960. The Literary Work of Art.
Keep, C., and T. McLaughlin.
n.d. The Electronic Labyrinth.
Landow, G. P. (ed.)
1992. Hypertext–Text–Theory.
Lennon, J. A.
1997. Hypermedia Systems and Applications: World Wide Web and Beyond.
Nelson, T. H.
1978. Dream Machines.
1981. Literary Machines.
Rosner, K.
1981. Structural Semiotics in Literary Studies.
Rudnev, V. P.
1997. Dictionary of Twentieth-Century Culture.
Taylor, D.
1998. HTML 4: Creating Web Pages.
Wiesel, M.
1998. Late Novels of Italo Calvino as Models of Hypertext.
\
:::info Hypertextual Sketches is a micro-series of essays on hypertext, the post-modern condition of culture, semiotics, and non-linear ways of describing how meaning circulates when continuity breaks down. Original research essays were written between 1997 and 2000, in Prague, Krakow, and Leipzig, when the internet was still experimental, but its logic was already reshaping how we read, write, and think. Larger portions of this work were actually published on paper (!) between 1999 and 2003. Read today, these essays function less as historical artifacts and more as early signals of a reality we now take for granted.
:::