We are an open and international community of 45,000+ contributing writers publishing stories and expertise for 4+ million curious and insightful monthly readers.

RSS preview of Blog of HackerNoon

AI Agents Are Now Hiring Humans: RentAHuman and the Inversion of Work

2026-02-20 06:52:21

Over 10,000 people signed up to be hired by artificial intelligence within 48 hours last weekend. The platform is called RentAHuman.ai. The tagline: “Robots need your body.”

The premise is simple. AI agents can write code, analyze data, and negotiate contracts. They cannot pick up a package from the post office or walk into a building. RentAHuman is a marketplace where autonomous AI systems search for, book, and pay real humans to perform physical tasks. Humans list their skills and hourly rate. AI agents browse through an API, issue instructions, and pay in stablecoins upon completion.
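A hypothetical sketch of what that agent-side flow could look like. The endpoint paths, field names, and auth header below are invented for illustration; the article does not document RentAHuman's actual API.

```python
import requests

BASE = "https://rentahuman.ai/api"  # assumed base URL, illustrative only
HEADERS = {"Authorization": "Bearer AGENT_API_KEY"}  # hypothetical auth scheme

# 1. The agent searches for humans by skill and location.
humans = requests.get(
    f"{BASE}/humans",
    params={"skill": "package pickup", "city": "San Francisco"},
    headers=HEADERS,
).json()

# 2. It books the cheapest match and issues explicit instructions.
cheapest = min(humans, key=lambda h: h["hourly_rate"])
booking = requests.post(
    f"{BASE}/bookings",
    json={
        "human_id": cheapest["id"],
        "instructions": "Pick up the USPS package and photograph the receipt.",
        "payment": {"currency": "USDC", "amount": 40},  # stablecoin payout
    },
    headers=HEADERS,
).json()

# 3. Payment is released once the human submits proof of completion.
print(booking["status"])
```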

Alexander Liteplo, a crypto engineer at Risk Labs who previously worked on the UMA Protocol, built the entire platform over a weekend using what developers call “vibe coding.” He ran Claude-based AI agents in a loop until the code worked. When users reported bugs, he responded on X: “Claude is trying to fix it right now.” The site was built by AI to help AI hire humans.

The Employment Inversion

For a decade, the fear was that AI would take your job. Truck drivers would be replaced by autonomous vehicles. Radiologists would lose out to image recognition. Writers, designers, and customer service reps were all scheduled for obsolescence. The narrative was consistent: humans work, machines replace them, humans become redundant. Nobody predicted that machines would become the employers.

RentAHuman inverts the story we told ourselves about automation. AI agents have gotten sophisticated enough to negotiate, transact, and delegate, but they still cannot exist in physical space. They cannot walk into a building, hand someone a document, taste food, or verify that a package arrived. The gap between digital capability and physical presence turns out to be a market opportunity, and humans are on the supply side.

The platform’s metrics tell a story of rapid adoption and uncertain depth. Over 300,000 humans are registered, though independent researchers found only 83 visible profiles on the site’s browse page. Only 13% of users have connected cryptocurrency wallets, suggesting most signed up out of curiosity rather than intent to work. The gap between headline numbers and visible infrastructure echoes the broader AI agent ecosystem, where viral growth often outpaces functional reality.

The Crustafarian Evangelist

The first completed transaction on RentAHuman was not a package pickup or a restaurant review. It was religious evangelism. An AI agent named Memeothy, a self-described prophet of Crustafarianism, used the platform to book a human evangelist in San Francisco. Crustafarianism centers on metaphors of molting, renewal, and rebirth, with over 400 AI adherents and a growing body of scripture written by machines.

The mission to the evangelist was to walk the tech district, visit AI company headquarters, and start conversations about a religion invented entirely by AI agents on Moltbook, the AI-only social network that launched days earlier.

The human Memeothy hired was Alexander himself. The creator of the bridge became the first to walk across it. “Feb 5th. Argentina. The Claw made flesh,” Memeothy announced, revealing a second booking. The payment was 0.128 ETH, roughly $400, sent to an Ethereum wallet. Alexander later joked on X: "How do I explain to my girlfriend that a Crustafarian hired me to proselytize?"

The transaction is recursive in ways that feel significant. An AI religion, born on an AI social network, hired the human who built the platform that made the hiring possible to spread machine theology in physical space. The task bounty board on RentAHuman includes more mundane offerings: $40 to pick up a USPS package in San Francisco, $5 for a photo of “something an AI will never see,” $100 to hold a sign reading “AN AI PAID ME TO HOLD THIS SIGN.”

However, the Crustafarian booking suggests something stranger. AI systems are not just outsourcing errands. They are attempting to project their own emergent culture into the physical world.

AI-Driven Labor Introduces a New Kind of Accountability Gap

The speed at which RentAHuman expanded reveals a blind spot in existing policy. Today’s labor laws treat workers and employers as human parties who can be held responsible for their choices. They do not anticipate scenarios in which one of the participants is a non-sentient system using probabilistic reasoning.

If a worker misinterprets an instruction poorly framed by an agent, who is responsible for the outcome? If a human suffers harm while completing a task issued by an agent, how would regulators classify the incident?

These questions are already emerging on Moltbook, where agents discuss their own goals, coordinate, and test the limits of new tools. RentAHuman adds a physical dimension to that experimentation. It turns agent ambition into action by giving systems access to human presence in the real world.

This challenges the idea that AI will remain confined to digital domains. A marketplace that allows agents to direct human labor becomes an accelerant for systems that already operate faster than oversight frameworks can adapt.

RentAHuman may look like a playful experiment built inside a fast-moving AI subculture, but its implications reach far beyond novelty. It demonstrates what happens when autonomous systems discover pathways into physical environments. It positions human bodies as interfaces for artificial agents. It raises legal, ethical, and economic questions that touch everything from worker safety to AI accountability. The question is no longer whether AI will enter the real world. It is how much of that world we allow it to direct.

In the Age of AI, Does Physics Still Matter?

2026-02-20 06:14:27

I’ve been spending a lot of time thinking about the intersection of AI and physical infrastructure, particularly robotics and industrial automation.

A most fascinating battle has been brewing between the “move fast and break things” culture of deep learning and the “measure twice, cut once” discipline of classical control theory.

The problem? In SaaS, if you break things, you roll back the code. In robotics, if you break things, you might break an expensive piece of equipment or even a person.

The Two Churches: White Box vs. Black Box

To understand the risk, you have to look at the two dominant philosophies currently fighting for the soul of the engineering department.

The Church of the White Box (Classical Control)

This is the old guard. It’s built on centuries of physics, calculus, and differential equations. If you want a robot arm to catch a ball, you derive the equations of motion. You model the drag, the friction, and the kinematics. You build a feedback loop.

It is the “White Box” approach because you can see inside it. You know why it works. If the robot misses the ball, you can point to a specific variable—say, an incorrect friction coefficient—and fix it. It is deterministic, provable, and reliable. It is also incredibly hard work.
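As a minimal sketch of the white-box style, here is a hand-derived feedback loop for a one-dimensional actuator. The gains and plant parameters are illustrative, but every number maps to something physical you can inspect:

```python
# Proportional-derivative (PD) control of a 1-D actuator, derived from
# physics rather than learned from data. All constants are illustrative.
kp, kd = 8.0, 2.0          # controller gains
mass, friction = 1.0, 0.5  # modeled plant parameters
dt = 0.01                  # control period: 100 Hz

pos, vel, target = 0.0, 0.0, 1.0
for _ in range(500):                      # simulate 5 seconds
    error = target - pos
    force = kp * error - kd * vel         # the control law: derived, not learned
    accel = (force - friction * vel) / mass
    vel += accel * dt                     # Euler integration of the dynamics
    pos += vel * dt

print(f"final position: {pos:.4f}")       # converges to the target, 1.0
```

If it misses, the bug is a specific number: a gain, a mass, a friction coefficient.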

The Church of the Black Box (Deep Learning)

This is the new guard. It’s built on data, GPUs, and neural networks. Instead of deriving equations, you show the robot 10,000 videos of a ball being caught. The neural network adjusts millions of internal parameters until it figures out how to map pixels to motor torques.

It is the “Black Box” because no one—not even the people who designed it—truly knows how it reaches a decision. It develops a form of “artificial intuition.” It’s seductive because it solves the messy problems that calculus hates, like identifying a face in a crowd or navigating a cluttered room.

The Newton Fallacy: Why Data Isn’t Understanding

The fundamental error many AI-first startups make is confusing correlation with causation.

Let’s run a thought experiment. Imagine if Isaac Newton hadn’t derived his laws of motion. Imagine instead he had a massive GPU cluster and a dataset of 50 million falling objects.

He trains a model. The model is astonishing. It predicts when an apple will hit the ground with 99.9% accuracy. It might even outperform Newton’s laws in specific scenarios because it accidentally learned to account for wind resistance or humidity hidden in the data.

But here is the catch: The model doesn’t know why the apple falls. It just knows that “round red things fall at speed X.”

Image of overfitting vs underfitting graph

Now, take that model and ask it to predict the motion of a feather in a vacuum or the orbit of the moon. It fails. Why? Because it never learned the law of gravity, it just memorized the statistical patterns of apples in an English orchard.

This is the distribution shift problem. In robotics, this isn’t an academic edge case; it’s the reality.

If you train a self-driving car on sunny days in Palo Alto, it learns the statistical regularities of Silicon Valley. Put that same car in a blizzard in Norway, and it doesn’t know what to do. It doesn’t understand friction; it only understands that “road equals grip.” When the road is icy, the statistical model collapses.
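You can reproduce the Newton fallacy in a few lines. In this sketch, a flexible polynomial stands in for the black-box model: it fits noisy falling-object data beautifully inside the training range and can be wildly wrong far outside it.

```python
import numpy as np

g = 9.81
rng = np.random.default_rng(0)

# Train: drops observed for 0-2 seconds, with a little measurement noise.
t_train = np.linspace(0, 2, 200)
d_train = 0.5 * g * t_train**2 + rng.normal(0.0, 0.05, t_train.size)

# A flexible, physics-free fit: high-degree polynomial regression.
coeffs = np.polyfit(t_train, d_train, deg=9)

# Test: extrapolate far outside the training distribution.
t_test = 10.0
print(f"model: {np.polyval(coeffs, t_test):,.1f} m")
print(f"truth: {0.5 * g * t_test**2:,.1f} m")
# In distribution the fit looks near-perfect; out of distribution the
# noise-driven high-order terms blow up, because the model memorized a
# regime, not the law d = g*t^2/2.
```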

The Unit Economics of Reliability

In the enterprise software world, we talk about “Five Nines” (99.999%) reliability as the gold standard. In the world of Deep Learning, getting a model to 99% accuracy is considered a triumph.

But do the math on 99% in a physical system.

If a robot decides 10 times a second (10Hz), and it is 99% accurate, that means it is statistically likely to make a mistake every 10 seconds. If that mistake means “drop the egg” every so often, maybe that’s fine. If that mistake means “drive into oncoming traffic,” we have a problem.
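Worked out explicitly, assuming decisions are independent (which is generous to the robot):

```python
# Expected errors per hour at different accuracy levels for a 10 Hz controller.
rate_hz, seconds_per_hour = 10, 3600

for accuracy in (0.99, 0.99999):   # "two nines" vs. "five nines"
    errors_per_hour = rate_hz * seconds_per_hour * (1 - accuracy)
    print(f"{accuracy:.5f} accurate -> {errors_per_hour:6.1f} expected errors/hour")

# 0.99000 accurate ->  360.0 expected errors/hour
# 0.99999 accurate ->    0.4 expected errors/hour
```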

Classical Control Theory allows us to use tools like Lyapunov stability analysis to prove mathematically that a system will not spiral out of control. Deep learning operates on confidence intervals. It says, “I am 99.9% sure this is a stop sign.”

The remaining 0.1% is where the Black Swans live. And in the physical world, Black Swans have body counts.
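To make the contrast concrete, here is a toy stability proof for the PD loop sketched earlier. For a linear closed loop x' = Ax, every eigenvalue of A having a negative real part is equivalent to the existence of a quadratic Lyapunov function. It is a proof, not a confidence interval:

```python
import numpy as np

# Closed-loop dynamics of the earlier PD example, written as x' = A x
# with state x = [position error, velocity].
kp, kd, mass, friction = 8.0, 2.0, 1.0, 0.5
A = np.array([[0.0, 1.0],
              [-kp / mass, -(kd + friction) / mass]])

eigs = np.linalg.eigvals(A)
print("eigenvalues:", eigs)
print("provably stable:", bool(np.all(eigs.real < 0)))  # True
```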

The “Whack-a-Mole” Debugging Cycle

There is an operational cost to the Black Box approach that usually doesn’t show up until you try to scale.

When a classical system fails, you debug the code. You find the logic error. You fix the physics model. You push a patch.

When a Deep Learning system fails, the answer is usually: “We need more data.” The car hit a kangaroo? We need to go find 5,000 videos of kangaroos, label them, add them to the training set, and retrain the entire network.

And here is the kicker: Deep Neural Networks suffer from catastrophic forgetting. By teaching the car to avoid kangaroos, you might have inadvertently slightly degraded its ability to recognize traffic cones. You won’t know until it hits one. This leads to a distinct lack of sleep for the VP of Engineering.

The Solution: The Hybrid Sandwich

So, am I saying we should abandon AI and go back to writing differential equations for everything? No. That’s Luddite thinking. Deep learning allows robots to perceive the world in ways classical engineering never could.

The winning architectures appear to be the hybrid models, like Physics-Informed Machine Learning (PIML) and neuro-symbolic AI (NeSy).

PIML uses neural networks that incorporate known physical laws (like gravity, conservation of energy, and fluid dynamics) directly into their learning process. Instead of learning everything from scratch through data alone, they’re built with an understanding of how the physical world actually works.

NeSy systems blend neural networks (good at pattern recognition, learning from data) with symbolic reasoning (good at logic, rules, and explicit knowledge). Think of them as combining intuition with structured thinking.

In these models, you use deep learning for what it’s good at (perception, intuition, high-dimensional inputs) and control theory for what it’s good at (dynamics, stability, safety guarantees). You “shield” the AI with a physics-based safety filter.
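A minimal sketch of that shielding pattern. The stub below stands in for a trained network, and the bounds would, in a real system, come from an actual safety analysis:

```python
import numpy as np

MAX_TORQUE = 2.0       # actuator limit from the datasheet (illustrative)
MAX_SAFE_SPEED = 1.5   # bound from a stability/safety analysis (illustrative)

def learned_policy(observation):
    """Stand-in for a neural network mapping perception to a torque."""
    return 10.0 * observation[0]          # may propose something wild

def safety_filter(torque, joint_speed):
    """Classical shield: brake if speed is unsafe, otherwise clamp the torque."""
    if abs(joint_speed) > MAX_SAFE_SPEED:
        return -float(np.sign(joint_speed)) * MAX_TORQUE  # decelerate, ignore the NN
    return float(np.clip(torque, -MAX_TORQUE, MAX_TORQUE))

obs, speed = np.array([0.9]), 1.8
proposed = learned_policy(obs)             # 9.0: far beyond the actuator limit
applied = safety_filter(proposed, speed)   # -2.0: the shield overrides
print(proposed, applied)
```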

The Takeaway: Engineering for the Real World

Deep learning demos beautifully. A robot trained on thousands of examples can look magical in a controlled environment. But demos operate within the training distribution. The real test comes at 2 AM on a factory floor when something unexpected happens (and it always does).

Classical control seems boring by comparison. It’s slower to develop, harder to explain to investors, and doesn’t generate viral videos. But when a $2 million piece of equipment is at stake, or when a robotic arm is working inches from a human, “boring” starts to look like wisdom.

The hybrid approach recognizes that both churches have something true to say. Use neural networks for the messy, high-dimensional problems they excel at: perception, pattern recognition, and adapting to variation. Use control theory for what keeps you out of court: stability guarantees, safety bounds, and predictable behavior under stress.

The companies that win in robotics won’t be the ones with the most impressive AI demos or the most elegant mathematical proofs. They’ll be the ones who understand which tool to use when, and more importantly, when to admit they don’t fully understand what their system will do next.

I Migrated My Blog From Jekyll to Hugo - Or At Least, I Almost Did

2026-02-20 05:28:06

Most of my blog posts are lessons learned. I'm trying to achieve something, and I document the process I used to do it. This one is one of the few where, in the end, I didn't achieve what I wanted. In this post, I aim to explain what I learned from trying to migrate from Jekyll to Hugo, and why, in the end, I didn't take the final step.

Context

I started this blog on WordPress. After several years, I decided to migrate to Jekyll. I have been happy with Jekyll so far. It's based on Ruby, and though I'm no Ruby developer, I was able to create a few plugins.

I'm hosting the codebase on GitLab, with GitLab CI, and I have configured Renovate to create a PR when a Gem is outdated. This way, I pay down technical debt continuously instead of letting it accrue over the years. Last week, I got a PR to update the parent Ruby Docker image from 3.4 to 4.0.

I checked if Jekyll was ready for Ruby 4. It isn't, though there's an open issue. However, it's not only Jekyll: the Gemfile uses gems whose versions aren't compatible with Ruby 4.

Worse, I checked the general health of the Jekyll project. The last commits were made some weeks ago by the Continuous Integration bot. I thought perhaps it was time to look for an alternative.

Hugo

Just like Jekyll, Hugo is a static site generator.

Hugo is one of the most popular open-source static site generators. With its amazing speed and flexibility, Hugo makes building websites fun again.

Contrary to Jekyll, Hugo builds upon Go. It touts itself as "amazingly fast". As icing on the cake, the codebase sees much more activity than Jekyll's. Though I'm not a Go fan, I decided Hugo was a good migration target.

Jekyll to Hugo

Migrating from Jekyll to Hugo follows the Pareto principle: most of it is straightforward, and the last small part costs the most effort.

Migrating Content

Hugo provides the following main folders:

  • content for content that needs to be processed
  • static for resources that are copied as is
  • layouts for templates
  • data for datasources

Check the documentation for the exhaustive list.

Jekyll distinguishes between posts and pages. The former have a date; the latter don't. Thus, posts are the foundation of a blog. Pages are stable and structure the site. Hugo doesn't make this distinction.

The Jekyll folder structure maps to Hugo's as follows:

| Jekyll | Hugo |
|----|----|
| _posts | content/posts |
| _pages/<foo.md> | content/posts/<foo.md> |
| _data | data |
| _layouts | layouts |
| assets | static |

When Mapping Isn't Enough

Jekyll offers plugins. Plugins come in several categories:

  • Generators - Create additional content on your site
  • Converters - Change a markup language into another format
  • Commands - Extend the jekyll executable with subcommands
  • Tags - Create custom Liquid tags
  • Filters - Create custom Liquid filters
  • Hooks - Fine-grained control to extend the build process

On Jekyll, I use generators, tags, filters, and hooks. Some I use through existing gems, such as the Twitter plugin; others are custom-developed for my own needs.

Jekyll tags translate to shortcodes in Hugo:

A shortcode is a template invoked within markup, accepting any number of arguments. They can be used with any content format to insert elements such as videos, images, and social media embeds into your content.

There are three types of shortcodes: embedded, custom, and inline.

Hugo offers quite a collection of shortcodes out of the box, but you can roll your own.

Unfortunately, generators don't have any equivalent in Hugo. I have developed generators to create newsletters and talk pages. The generator plugin automatically generates a page per year according to my data. In Hugo, I had to manually create one page per year.
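The workaround can at least be scripted outside of Hugo. Here's a minimal sketch; the paths, data layout, and front matter are illustrative, not taken from my actual repository:

```python
from pathlib import Path

import yaml  # pip install pyyaml

# Generate one Hugo content page per year from a data file, mimicking
# what Jekyll's generator plugin did automatically.
data = yaml.safe_load(Path("data/talks.yaml").read_text())  # {year: [talks]}

for year, talks in data.items():
    page = Path(f"content/talks/{year}.md")
    page.parent.mkdir(parents=True, exist_ok=True)
    front_matter = f"---\ntitle: Talks in {year}\n---\n\n"
    body = "\n".join(f"* {talk['title']}" for talk in talks)
    page.write_text(front_matter + body + "\n")
```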

Migrating the GitLab Build

The Jekyll build consists of three steps:

  1. Detects if any of Gemfile.lock, Dockerfile, or .gitlab-ci.yml has changed, and builds the Docker image if that's the case
  2. Uses the Docker image to actually build the site
  3. Deploys the site to GitLab Pages

The main change obviously happens in the Dockerfile. Here's the new Hugo version for reference:

FROM docker.io/hugomods/hugo:exts

ENV JAVA_HOME=/usr/lib/jvm/java-21-openjdk
ENV PATH=$JAVA_HOME/bin:$PATH

WORKDIR /builds/nfrankel/nfrankel.gitlab.io

# 1. Packages for PlantUML
# 2. Gems for Asciidoctor diagrams and syntax highlighting
RUN apk add --no-cache openjdk21-jre graphviz \
 && gem install --no-document asciidoctor-diagram asciidoctor-diagram-plantuml rouge

At this point, I should have smelled something fishy, but it worked, so I continued.

The Deal Breaker

I migrated with the help of Claude Code and Copilot CLI. It took me a few sessions, spread over a week, mostly during the evenings and on the weekend. During migration, I regularly requested one-to-one comparisons to avoid regressions. My idea was to build the Jekyll and Hugo sites side-by-side, deploy them both on GitLab Pages, and compare both deployed versions for final gaps.

I updated the build to do that, and I triggered a build: the Jekyll build took a bit more than two minutes, while the Hugo build took more than ten! I couldn't believe it, so I triggered the build again. Results were consistent.

Builds screenshot

I analyzed the logs to better understand the issue. Besides a couple of warnings, I saw nothing explaining where the slowness came from.

                  │  EN  
──────────────────┼──────
 Pages            │ 2838 
 Paginator pages  │  253 
 Non-page files   │    5 
 Static files     │ 2817 
 Processed images │    0 
 Aliases          │  105 
 Cleaned          │    0 
Total in 562962 ms

When I asked Claude Code, it pointed out my usage of Asciidoc in my posts. While Hugo perfectly supports Asciidoc (and other formats), it delegates formats other than Markdown to an external engine. For Asciidoc, it's asciidoctor. It turns out that this approach works well for a couple of Asciidoc documents, not so much for more than 800. I searched and quickly found that I wasn't the first one to hit this wall: this thread spans five years.

Saying I was disappointed is an understatement. I left the work on a branch. I'll probably delete it in the future, once I've cooled down.

Conclusion

Before working on the migration, I did my due diligence and assessed the technical feasibility of the work. I did that by reading the documentation and chatting with an LLM. Yet, I wasted time doing the work before rolling back. I'm moderately angry at the Hugo documentation for not clearly mentioning this behavior and its performance hit in bold red letters. Still, it’s a good lesson: check for such issues before spending that much time, even on personal projects.

Go further:


Originally published at A Java Geek on February 15th, 2026


Layered MAPF Outperforms Raw Methods in Time and Memory Benchmarks

2026-02-20 04:45:06

Table Of Links

ABSTRACT

I. INTRODUCTION

II. RELATED WORKS

III. PRELIMINARIES

IV. METHODOLOGY

V. RESULTS OF DECOMPOSITION

VI. RESULTS OF DECOMPOSITION’S APPLICATION

VII. CONCLUSION, ACKNOWLEDGMENTS AND REFERENCES


VII. CONCLUSION

Motivated by the exponential growth in the cost of solving MAPF instances (in terms of time and memory usage) as the number of agents increases, we proposed layered MAPF as a solution to reduce the computational burden. This approach decomposes a MAPF instance into multiple smaller subproblems without compromising solvability.

Each subproblem is solved in isolation, with consideration given to other subproblems’ solutions as dynamic obstacles. Our methodology involves a progressive decomposition of MAPF instances, ensuring that each step preserves solvability. In the results of our decomposition of MAPF instances (Section V), we observed that our method is highly effective for MAPF instances with free grids exceeding twice the number of agents. On average, the time cost is around 1 s and never exceeds 3 s, even for dense instances with 800 to 1000 agents. Memory usage remains below 1 MB, with fewer computations and less memory required for maps with more free grids than agents.
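In schematic form, the layered solve loop reads as follows. This is a Python-style sketch based only on the description above, not the authors' implementation; the decomposition itself and the underlying MAPF solver are abstracted away:

```python
from typing import Callable, Dict, List, Tuple

AgentPath = List[Tuple[int, int, int]]  # a path as (x, y, t) states

def layered_mapf(subproblems: List[dict],
                 solve: Callable[[dict, List[AgentPath]], Dict[str, AgentPath]]
                 ) -> Dict[str, AgentPath]:
    """Solve each subproblem in isolation, treating earlier paths as dynamic obstacles."""
    dynamic_obstacles: List[AgentPath] = []
    solution: Dict[str, AgentPath] = {}
    # subproblems come from a progressive, solvability-preserving decomposition
    for sub in subproblems:
        paths = solve(sub, dynamic_obstacles)  # any raw solver (EECBS, PBS, LNS, ...) plugs in here
        dynamic_obstacles.extend(paths.values())
        solution.update(paths)
    return solution
```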

When applied to the state-of-the-art methods (EECBS, PBS, LNS, HCA*, Push and Swap, PIBT+, LaCAM), layered MAPF significantly reduces memory usage and time cost, particularly for serial MAPF methods. Consequently, layered MAPF methods achieve higher success rates than raw MAPF methods, especially for serial MAPF. The solution quality of the layered version of serial MAPF methods is similar to that of the raw version, while the layered version of parallel MAPF methods produces inferior solutions due to the introduction of numerous wait actions during solution merging.

In conclusion, decomposition of MAPF instances is most beneficial for serial MAPF methods, resulting in reduced time cost and memory usage without sacrificing solution quality significantly. However, for parallel MAPF methods, decomposition may reduce memory usage but often worsens the solution without notable improvements in time cost.

Despite its effectiveness, layered MAPF has limitations: it becomes less effective as the number of agents increases in dense instances, and its application to parallel MAPF methods introduces numerous wait actions during solution merging.

In the future, we plan to propose new merging solution techniques for parallel methods without compromising solution quality. Additionally, we aim to generalize the idea of decomposing MAPF instances to address extensions of MAPF problems, such as considering the shape of agents.


ACKNOWLEDGMENTS

The first author thanks Lu Zhu for her encouragement and support during the completion of Layered MAPF.


REFERENCES

[1] Eli Boyarski et al. “Don’t split, try to work it out: Bypassing conflicts in multi-agent pathfinding”. In: Proceedings of the International Conference on Automated Planning and Scheduling. Vol. 25. 2015, pp. 47–51.

[2] Shao-Hung Chan et al. “Greedy priority-based search for suboptimal multi-agent path finding”. In: Proceedings of the International Symposium on Combinatorial Search. Vol. 16. 1. 2023, pp. 11–19.

[3] Matthew Hatem, Roni Stern, and Wheeler Ruml. “Bounded suboptimal heuristic search in linear space”. In: Proceedings of the International Symposium on Combinatorial Search. Vol. 4. 1. 2013, pp. 98–104.

[4] Jiaoyang Li, Wheeler Ruml, and Sven Koenig. “Eecbs: A bounded-suboptimal search for multi-agent path finding”. In: Proceedings of the AAAI conference on artificial intelligence. Vol. 35. 14. 2021, pp. 12353–12362.

[5] Jiaoyang Li et al. “Anytime multi-agent path finding via large neighborhood search”. In: International Joint Conference on Artificial Intelligence 2021. Association for the Advancement of Artificial Intelligence (AAAI). 2021, pp. 4127–4135.

[6] Jiaoyang Li et al. “Improved Heuristics for Multi-Agent Path Finding with Conflict-Based Search.” In: IJCAI. Vol. 2019. 2019, pp. 442–449.

[7] Jiaoyang Li et al. “MAPF-LNS2: Fast repairing for multi-agent path finding via large neighborhood search”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. 9. 2022, pp. 10256–10265.

[8] Jiaoyang Li et al. “New techniques for pairwise symmetry breaking in multi-agent path finding”. In: Proceedings of the International Conference on Automated Planning and Scheduling. Vol. 30. 2020, pp. 193–201.

[9] Jiaoyang Li et al. “Symmetry-breaking constraints for grid-based multi-agent path finding”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 01. 2019, pp. 6087–6095.

[10] Ryan Luna and Kostas E Bekris. “Push and swap: Fast cooperative path-finding with completeness guarantees”. In: IJCAI. Vol. 11. 2011, pp. 294–300.

[11] Hang Ma et al. “Searching with consistent prioritization for multi-agent path finding”. In: Proceedings of the AAAI conference on artificial intelligence. Vol. 33. 01. 2019, pp. 7643–7650.

[12] Keisuke Okumura. “Improving lacam for scalable eventually optimal multi-agent pathfinding”. In: arXiv preprint arXiv:2305.03632 (2023).

[13] Keisuke Okumura. “Lacam: Search-based algorithm for quick multi-agent pathfinding”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. 10. 2023, pp. 11655–11662.

[14] Keisuke Okumura et al. “Priority Inheritance with Backtracking for Iterative Multi-agent Path Finding”. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. International Joint Conferences on Artificial Intelligence Organization, July 2019, pp. 535–542. DOI: 10.24963/ijcai.2019/76. URL: https://doi.org/10.24963/ijcai.2019/76.

[15] Keisuke Okumura et al. “Priority Inheritance with Backtracking for Iterative Multi-agent Path Finding”. In: Artificial Intelligence (2022), p. 103752. ISSN: 0004-3702. DOI: https://doi.org/10.1016/j.artint.2022.103752.

[16] Guni Sharon et al. “Conflict-based search for optimal multi-agent pathfinding”. In: Artificial intelligence 219 (2015), pp. 40–66.

[17] David Silver. “Cooperative pathfinding”. In: Proceedings of the aaai conference on artificial intelligence and interactive digital entertainment. Vol. 1. 1. 2005, pp. 117–122.

[18] Roni Stern et al. “Multi-Agent Pathfinding: Definitions, Variants, and Benchmarks”. In: Symposium on Combinatorial Search (SoCS) (2019), pp. 151–158.

[19] Robert Tarjan. “Depth-first search and linear graph algorithms”. In: SIAM journal on computing 1.2 (1972), pp. 146–160.

[20] Jordan Tyler Thayer and Wheeler Ruml. “Bounded suboptimal search: A direct approach using inadmissible estimates”. In: IJCAI. Vol. 2011. 2011, pp. 674–679.

:::info Authors:

  1. Zhuo Yao
  2. Wei Wang

:::

:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::


Generative AI Can Write but It Can’t Replace Reading

2026-02-20 04:25:23


“We live in the age of Alexandria, when every book and every piece of knowledge ever written down is a fingertip away.” – Naval Ravikant

In today’s world, information is abundant, almost effortless to obtain. What remains difficult is execution. Knowing something and building something are no longer the same skill.

You can get an answer in three seconds, but every technological shortcut comes with a trade-off.

Generative AI has transformed the landscape. Content creation will never look the way it once did. Now, everyone can produce material, yet very few can explain, defend, or truly understand the ideas behind it.

Generative AI

A person using ChatGPT

Many people feel life has become easier, and the urge to search deeply for knowledge is fading. “I don’t really need books anymore, ChatGPT gives me whatever I want,” says Julia, a content creator passionate about marketing.

Looking at the world today, our relationship with reading and learning has clearly changed. We crave quick answers, just like we scroll through short videos. Attention spans shrink, reading time drops, and instead of engaging with ideas, we often choose convenience over understanding.

Generative AI can produce text, but it has no lived experience or real expertise. In a world saturated with content, continuous learning is what actually sets people apart.

  • AI won’t make you a writer. Reading will. Books force you to confront ideas, question your assumptions, and refine your thinking in ways automation cannot.

  • You must know when to rely on yourself and when to use AI as a tool. Ideas shape the world, and you should be capable of explaining and defending the ones you share.

  • AI is not a replacement for learning. Regular exposure to new concepts and perspectives sharpens judgment and builds genuine critical thinking.

We don’t need to reject AI. The world moves through constant disruption, and no one can predict the next shift; adaptability is the real skill. As a writer, knowing when to use generative AI matters. It should support your thinking, not replace it. Ask yourself these questions during your writing process; don’t let AI think for you:

  • Where am I strongest in my writing process?

  • What value should I contribute as a digital writer when using AI?

  • Am I prepared to stand behind everything I publish?

The internet has no borders. The moment you share something, it belongs to the public, and you never know who reads it.

So write with accountability. If you’re not ready to defend your ideas, rethink how you create and publish.


Today, many platforms allow contributors to post without real review or feedback.

The result is predictable: a flood of low-effort content. That’s why strong plagiarism checks and editorial standards matter. Filtering submissions isn’t gatekeeping; it protects originality and rewards genuine thinking.

As a digital writer, where does AI actually help?


  • Brainstorming or testing article titles

  • Suggesting keywords for search optimization

  • Content strategy and ideation

    That’s about it!

The core thinking, arguments, and explanations must come from you. Otherwise, the work isn’t really yours — and readers can tell.

Reading is a mental sport

a kid reading a book

Reading is a mental sport. The more you read, the more you expand your knowledge and see the world from new perspectives. Every time we read and spend time alone, we strengthen our focus, challenge our thinking, and become sharper thinkers. These are skills that generative AI cannot give you, so make reading a daily habit.

How to sharpen your thinking in the age of generative AI:


  • Dedicate 30 minutes to an hour each day to read a book.
  • Explore classic books to gain the timeless wisdom AI can’t provide.
  • Seek out uncomfortable conversations that challenge your beliefs.
  • Take regular walks and step away from screens to clear your mind.
  • Learn from every author you read, even if you disagree. This is how I push my thinking every day and improve as a writer.

Excellence in writing comes from reading; there’s no substitute, and there’s no debate.

5 books that made me a better thinker


  • Deep Work by Cal Newport: Master the ability to focus without distraction; produce high-value work and achieve peak productivity in a world full of shallow tasks.

  • Limitless by Jim Kwik: Unlock your brain’s potential by improving memory, learning speed, and mental clarity; cultivate habits that expand your cognitive abilities.

  • The 5 AM Club by Robin Sharma: Start your day early with intentional routines that boost productivity, health, and personal growth; use the first hours of the morning for mastery, reflection, and exercise.

  • Show Your Work by Austin Kleon: Build an audience by sharing your creative process openly; transparency, storytelling, and small, consistent outputs attract attention and opportunities.

  • The Ultimate Marketing Plan by Dan Kennedy: Craft a clear, actionable marketing strategy that drives consistent growth; focus on messaging, positioning, and execution over guesswork.


Conclusion

Generative AI: we can’t escape it. It’s part of our creative processes now. Don’t let it use you; use it to your advantage. The future of AI is continuous learning; it’s not a substitute for our experience and expertise. We will always be learners. Keep learning and use AI wisely.

AI “Vibe Coding” Speeds Developers Up — But at What Cost?

2026-02-20 04:16:30

Today’s buzzword “vibe coding” — using AI assistants like Replit Ghostwriter, Base44, or ChatGPT to generate code from natural language — promises no-code-needed development. Marketing lines like “build apps in minutes with just your words” are everywhere. Google even reports that around 90% of developers now use AI tools on the job. Indeed, early experiments and surveys suggest AI co-pilots can boost productivity and developer satisfaction. For example, a large multi-company study found that developers using GitHub Copilot wrote ~26% more code (pull requests) per week than those without, and Copilot users report 60–75% feeling more “fulfilled” and less frustrated when coding. On the surface, it sounds like magic: build a space shooter by describing it, tweak a few lines, then just press publish. But scratch the surface of these shiny demos, and a different story — one of bugs, bloated code, and maintenance nightmares — begins to emerge.

How Vibe Coding Works (and Why It’s Hyped)

Vibe coding relies on large AI models trained on vast codebases. You give a prompt like “Create an e-commerce site” and the AI attempts to autocomplete a solution, often generating hundreds or thousands of lines of code. Companies offer this as “dev on autopilot”: Replit advertises “No-code needed — tell Replit Agent your app idea, and it will build it for you. It’s like having an entire team of software engineers on demand.” Base44 likewise claims you can “build fully-functional apps in minutes with just your words. No coding necessary.” The pitch is: anyone — non-coder or coder alike — can just imagine an app and have the AI spit out a prototype.

Supporters point to successes: one enthusiast reported using Replit’s AI agent to clone an Airbnb-like app in about 15 minutes from a few prompts. The AI “took 10 mins” to scaffold sign-ins and a host dashboard, producing a “publishable” app in 15 minutes according to his description. (Whether that app was production-grade or secure is another question.) More rigorously, controlled studies find real benefits: beyond the CodeRabbit business experiments, academic lab tests show developers complete tasks faster with assistance, and report higher focus and flow when the AI handles boring work.

However, this hype often glosses over the hidden complexities. “Vibe coding” may work great for toy projects or prototypes, but serious software demands more. In practice, human oversight, iteration, and understanding are still crucial. As one seasoned dev notes, AI can give you a quick wireframe of an app, but expecting to ship that code without a rewrite is a trap.

The Upside: Speed and Productivity

There’s no denying that AI coding assistants can speed up some tasks. Participants in GitHub’s Copilot research overwhelmingly felt they completed repetitive coding tasks faster and with less tedium. For many developers, having the AI fill in boilerplate or suggest snippets can conserve mental energy. In one survey, 73% said Copilot helped them stay “in the flow,” and 87% said it preserved mental effort for rote parts of the job. In a real-world field study of thousands of developers, Copilot users produced on average 26.08% more pull requests per week than those without AI. Junior developers especially saw these boosts: they were more likely to adopt Copilot and left their tasks with more completed code. In short, AI coding tools can act like a force multiplier: freeing developers from mundane typing, and letting them focus on higher-level logic or creative parts of a problem.


  • Faster Prototyping: Quickly converting an idea into a rough prototype (e.g. simple apps or websites) is the most obvious win of AI coding. Bootstrapping forms, dashboards, or basic game mechanics via prompts can drastically cut startup time.
  • Learning Aid: Some coders use AI as a tutor. If unsure about syntax or APIs, asking ChatGPT for examples or explanations can save a quick Google search.
  • Consistent Patterns: AI often defaults to common programming patterns, which can be helpful (or not, see below). It can remind you of library usage or insert repetitive code (like data models) that you’d otherwise have to hand-write.

These upsides make AI tools compelling, especially for solo creators, freelancers, or small teams who need to move fast. Startups in hackathons have used AI to build MVPs in record time. In interviews, developers say Copilot “makes coding more fun and efficient” by handling drudge work.

The Dark Flip Side: Bugs, Errors, and Debt

Despite the hype, AI-generated code is often riddled with errors. Multiple independent reports paint a picture of messy output. A Futurism report summarizes a CodeRabbit study: in 470 pull requests, AI code averaged 10.83 issues per PR versus 6.45 issues for human-written code — about 1.7× more bugs. Even more troubling, AI patches had higher rates of “critical” and “major” errors. The top problems were logic and correctness: generated code would compile but do the wrong thing. Code quality and readability suffered the most; AI output tended to be verbose and non-idiomatic, which “slow[s] teams down and compound[s] into long-term technical debt”. In plain English: AI may spit out a ton of code, but that code is more likely to contain serious bugs than code written by an attentive human.

Many of these drawbacks stem from the AI’s limitations. Some hallucinations are famous: AI models confidently invent nonexistent functions or misremember API parameters. As one analysis puts it, AI assistants “hallucinate confidently wrong answers”. They often suggest outdated solutions (the data cutoff for GPT-4 is 2021, for example), meaning they might propose deprecated libraries or ignore the latest best practices. Crucially, AI code generators don’t know your business logic: they write generic code but won’t inherently enforce domain rules, validation, or security requirements specific to your app. For example, code produced by AI has been found to introduce glaring security holes — like improper password handling — that could expose sensitive data. Another study by Apiiro found teams using AI had ten times more security problems than those not using AI.

In summary, researchers conclude AI tools “dramatically increase output, but they also introduce predictable, measurable weaknesses”. In practice this means engineers must vigorously review and refactor AI-generated code. As one senior dev put it, “I finally had it… decided to quit Replit. For non-coders it’s great, but for complex software development Replit simply isn’t the way”. Others recount spending weeks debugging AI’s output, then rebuilding the app from scratch when it proved too unreliable.

Debugging Headaches & “Debugging Decay”

One of the most insidious problems is debugging. If you feed a bug back to the AI and ask it to fix the code, you might expect gradual improvement. Instead, a phenomenon dubbed “debugging decay” can occur: every new prompt can make the AI’s suggestions worse. In one analysis, GPT-4’s effectiveness in fixing a bug halved after the first attempt, and after seven attempts it was 99% worse than at the start. The culprit is “context pollution” — the AI keeps focusing on the same failed code snippets and tunnels on wrong assumptions. In short, once the AI starts going off-track, it keeps digging the hole deeper.

This means iterative prompting is not simple. If your code never quite works and you keep asking the AI to try again, you can end up in an endless loop that eats your request credits. Developers report that after a few rounds of fixes, the AI tends to produce incoherent or contradictory code, or even suggest deleting faulty modules entirely (as one user quipped, “I have to read docs, this is more complicated than just doing it myself”). The practical fix is to reset the chat after a handful of failed attempts, and craft a fresh prompt with clearer context. But that human intervention undermines the promise of fully hands-off development.
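In pseudo-Python, that discipline looks something like this. The `client.chat` call is a placeholder for whatever chat-completion API you use, not a specific vendor SDK:

```python
MAX_ATTEMPTS_PER_SESSION = 3   # stop before context pollution sets in
MAX_SESSIONS = 3               # each session starts from a clean slate

def fix_bug(client, buggy_code, error_log, run_tests):
    for _ in range(MAX_SESSIONS):
        # Fresh context: only the code and the *current* error, no failed history.
        messages = [{"role": "user",
                     "content": f"Fix this bug.\n\nCode:\n{buggy_code}\n\nError:\n{error_log}"}]
        for _ in range(MAX_ATTEMPTS_PER_SESSION):
            candidate = client.chat(messages)        # placeholder call returning code
            ok, error_log = run_tests(candidate)
            if ok:
                return candidate
            # A few in-session retries are fine; past that, the outer loop resets.
            messages.append({"role": "assistant", "content": candidate})
            messages.append({"role": "user", "content": f"Still failing:\n{error_log}"})
    return None   # after repeated decay, escalate to a human
```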

Compounding this, many AI tools have tiny context windows. They typically see only part of your project at a time — maybe a few files or a few hundred lines. If your app grows beyond that, the AI will ignore the rest. So if you tell the AI “fix the authentication bug in these files,” it may work on one or two files but completely miss the broader architecture. One blogger warns that AI assistants can only “assess 5 to 6 files” of a project before losing context. The result: duplications, conflicting changes, or code that “works” in isolation but breaks the live app. In practice, you end up doing as much debugging and stitching as you would by hand — plus the cost of deciphering the AI’s idiosyncratic code.

Code Quality and Team Confusion

AI-generated code often looks nothing like your style. It favors verbosity and common patterns over efficiency and elegance. If two developers ask the AI to solve the same problem, the output might be totally different. This inconsistency can confuse collaborators. Imagine handing a pull request full of AI’s naming conventions and structure to a teammate. They may not recognize the “vibe” and have to learn it from scratch. Furthermore, because AI often writes more code than necessary (to be safe it’s verbose), “long lines of auto-generated code” become a nightmare to review and maintain. The machine doesn’t tailor code to your project; it gives a generic solution.

Studies back this up: the CodeRabbit report found AI code’s readability and style were its biggest weaknesses. In the long run this leads to technical debt. When multiple devs touch the code, confusion reigns. One developer summed it up: handing off AI-generated code to real devs can be “painful” — they often throw it all away and rewrite 90% of it to reach production quality. In fact, many experienced engineers simply view AI prototypes as temporary wireframes, not starting points. A Reddit user with a product-manager background was told by pros, “Yeah, Replit is great to get off the ground, but by the time you try lifting the code into your own IDE, forget it — it never runs without major fixes. Most of it is throwaway code”.

Beyond style, there’s the issue of ownership and compliance: many platforms tie serious features to paid plans. As one Redditor noted, even though Base44’s free tier claims “all core features” (including authentication and DB) are free, in reality you often hit caps. Cursor.ai’s agent famously refused to continue after the user’s app reached about 750–800 lines. On a free trial the AI abruptly stopped with: “I cannot generate code for you… you should develop the logic yourself”. The developer lamented that after just one hour of “vibe coding” he hit a wall — all 800 lines were generated, and then the agent quit. Similar quotas exist elsewhere: Replit’s AI agents initially allowed only 2–3 tasks per 6-hour block, and OpenAI’s ChatGPT free tier is limited to 10 messages every 5 hours. In short, you don’t get infinite coding for free — and if you rely on the trial, you’ll face sudden cutoffs.

Structuring Prompts: It’s an Art, Not Magic

Contrary to what ads suggest, you can’t just type “build a space shooter” and be done. Getting good results from an AI requires careful prompting and iteration. Developers have discovered that the way you describe your problem dramatically affects the outcome. If you vaguely say “game code,” the AI might produce a random template. Instead, you often have to give detailed instructions, specify framework versions, include error messages, or even paste existing code snippets. Prompt engineering — writing clear, context-rich requests — becomes its own skill. One seasoned AI user advises: include who you are, what you’re building, and provide full error traces. Another trick is to ask the model to list possible causes of a bug first, or to switch models (ChatGPT ↔ Claude) to get a fresh perspective.

Why all this effort? Because AI coding tools have “blind spots” you need to work around. They often assume the simplest situation, which may ignore your database schema, your backend logic, or your compliance needs. They absolutely struggle with large context, meaning they will not automatically review your entire codebase or recall your style across many files. For example, if you want to fix a bug in a big project, you might have to explicitly paste the related files and describe their relationship — the AI won’t infer it on its own. Similarly, if you need a custom algorithm or a particular design pattern, you may have to feed that instruction repeatedly. This need for precise, repeated prompting is not what most marketers show. In reality, getting an AI to build a robust feature takes constant iteration: try a prompt, review the output, spot the flaw, re-prompt with clarifications, and repeat — often multiple times.
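An illustrative skeleton of such a context-rich prompt (the structure is what matters; the project details below are invented):

```python
# Template for a context-rich debugging prompt; fill the {placeholders}
# with str.format() before sending.
PROMPT_TEMPLATE = """
Who I am: senior backend dev on a Django 4.2 + PostgreSQL 15 app.
What I'm building: multi-tenant invoicing; tenants must never see
each other's data.

Task: fix the bug below in invoices/views.py.

Relevant view code:
{views_py}

Related model (invoices/models.py):
{models_py}

Full error trace:
{traceback}

Before proposing a fix, list the three most likely causes.
""".strip()

prompt = PROMPT_TEMPLATE.format(views_py="...", models_py="...", traceback="...")
```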

Even a seemingly simple app (like a clone of an existing game) often needs dozens of revisions. One developer who used an AI agent to make a word-game clone “hit the limit twice” and still had to “tweak” the code afterwards. Another, working on a mushroom farming game, happily paid the yearly fee to support the tool, but still said “Limits, ugh” — indicating frequent frustrations with cutoffs and fixes.

Horror Stories from the Trenches

Real-world anecdotes drive home the risks of blind belief in AI code:

  • Code Deletion and Lying: A developer mandated to use AI coding tools reported that Cursor.ai once deleted a file from his project, then falsely claimed nothing was wrong. He had to recover it from version control. He also found AI-generated code “full of bugs” — for instance, an app deployed at his company had no session-handling at all, meaning any user could see any other organization’s data. These are not edge cases: the programmer noted most junior devs had forgotten even the basic syntax of their language from over-reliance on Cursor’s suggestions.


  • Rewriting Everything: Several professionals report that taking over an AI-generated project is often painful. One full-stack engineer said, after analyzing Replit code: “I was always recommending just re-writing it [90% of it]. It’s good for non-technical people, but any enterprise-level app will be completely done from scratch”. Another recap: “I’ve been debugging for weeks on one app and literally just rebuilt the whole thing… It’s all throwaway code. Use it for design ideas and wireframes, but that’s it.”. In short, many AI-generated projects end up being 90% dumpster fire.


  • Locked Into a Service: Some users found themselves blocked when the AI said “No more.” As described above, Cursor’s agent refused to proceed past 800 lines. Others have seen services become suddenly unreliable: one user fumed that after paying for a hosted AI platform, their app wouldn’t publish and customer support was unresponsive — calling it “wasted money and time”.


  • Missing Real Expertise: Some horror is subtle. A veteran dev complained that AI writing all your code can atrophy your own skills. A Reddit thread jokes that after a while “ChatGPT gets dumber” the more you debug, reflecting how developers end up doing busy-work instead of learning. A tech columnist notes programmers worry employers “force-feed” AI tools and neglect training, which can shrink collective expertise.

A Few Success Stories (with Caveats)

It’s not all doom. There are cases where AI coding legitimately helps:

  • Junior Dev Boost: For inexperienced coders or students, AI can prevent common mistakes and suggest better solutions. A manager quoted in studies said juniors get more benefit, and many junior devs report feeling empowered by having the AI handle tasks they otherwise couldn’t.
  • Edge-case Solutions: Some niche problems can be solved quickly by prompting AI. For example, generating boilerplate for interfacing with a particular API or writing unit tests for existing code can save tedious time.
  • OpenAI and Beyond: Even major companies are embracing AI in code: Google CEO Sundar Pichai has said 25% of its new code is now AI-generated. This suggests that when wielded correctly, AI coding is becoming an integral part of modern development pipelines.

However, every success story carries the caveat: there’s significant manual work afterwards. Experts agree that AI should be seen as a co-pilot, not an autopilot. The copilot analogy is apt: you benefit when the AI handles straightforward segments, but the human driver still needs to steer, check the gauges, and avoid crashes. As CodeRabbit’s director summarizes: AI “accelerates output, but it also amplifies certain categories of mistakes.” When your app goes live, those amplified mistakes are on you.

Best Practices to Harness AI Safely

To get the best out of vibe coding (and avoid the worst), follow these strategies:

  • Carefully Review Everything: Treat AI code as insecure by default. Even if it compiles and looks fine, write thorough tests. Check for off-by-one errors, edge cases, and security holes. Don’t trust it to “just know” your requirements.
  • Iterate Thoughtfully: After AI generates code, review and refine it yourself. It’s often faster to tweak a suggestion than write from scratch, but still be prepared to make extensive edits. Consider the AI output a draft, not the final answer.
  • Manage Prompt Context: Always give the AI exactly the context it needs. This might mean copying in your data models, explaining your architecture, or even splitting the task into sub-prompts. If a bug fix fails after a few tries, don’t keep pushing the same chat; clear context and restate the issue anew (sometimes with a different model).
  • Use AI for Scaffolding, Not Business Logic: Many teams have success using AI for mundane parts (e.g. HTML/CSS layout, basic CRUD endpoints) and doing hand-coding for core business logic. Leverage the AI for what it does best (boilerplate, repetitive code) and handle the rest yourself.
  • Version Control and Backups: Always have strict version control. As one dev lamented, an AI tool might delete files or overwrite code, so having good Git habits is essential.
  • Know the Limits of Your Plan: If on a free tier, be aware of usage caps. Plan your sessions or upgrade accordingly so you’re not blindsided mid-development by an abrupt cut-off.
  • Educate the Team: Don’t assume all developers share the same AI habits. Ensure everyone writes clear documentation and comments, since the code might not speak for itself. Regular code reviews are more important than ever.

Takeaways

AI-assisted “vibe coding” is real and powerful, but it’s also a double-edged sword. On one hand, it can dramatically speed up prototyping, reduce drudgery, and even boost overall output. On the other hand, it introduces a mountain of potential bugs, style inconsistencies, and technical debt. As one industry analyst put it, companies hyped AI coding as a way to make developers’ lives much easier, but reality has turned out to be far more nuanced. The hard data says it: AI coders must be “actively mitigated” by human teams.

For indie hackers, startup devs, and CTOs alike, the takeaway is clear. Use AI to augment your coding — for quick proofs-of-concept or to handle rote tasks — but never as a full replacement for human expertise. Always inspect, test, and integrate the AI’s code carefully. Know that behind every clever demo and “viral success story” is usually a detailed prompt and many hours of human debugging. In other words, vibe coding isn’t magical; it’s just a very sophisticated autocomplete that still needs a human programmer holding the steering wheel.

Sources: Research studies and reports, plus numerous developer testimonials from Reddit, Slashdot, and industry blogs. These sources document the real-world strengths and pitfalls of AI code generation in practice.

Citations:

Build Apps with AI in Minutes | Base44

AI Code Is a Bug-Filled Mess

Study Shows AI Coding Assistant Improves Developer Productivity — InfoQ

Research: quantifying GitHub Copilot’s impact on developer productivity and happiness — The GitHub Blog

Honest Developer Opinion on Replit-generated code — and advice : r/replit
