2026-03-21 22:00:00
OpenAI Is Throwing Everything Into Building a Fully Automated Researcher
Will Douglas Heaven | MIT Technology Review ($)
“The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. OpenAI says that the new goal will be its ‘North Star’ for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.”
Humanoid Robot Gets Surprisingly Good at Tennis
Loz Blain | New Atlas
“This ain’t teleoperation. Chinese researchers have tested a new, much quicker and easier method of teaching robots to play tennis, and the results look like a breakthrough in machine learning and real-world AI.”
This Is Not a Fly Uploaded to a Computer
Robert Hart | The Verge
“Aran Nayebi, a professor of machine learning at Carnegie Mellon University, said that the group was ‘not even close’ to capturing the full brain of the fly, showing connections between cells but not crucial details like neurotransmitters or how strong the connections between different nerve cells are. The motor system isn’t a ‘true upload’ either, he said. ‘We are not even faithfully simulating its brain in silico.'”
This May Be the World’s First Quantum Battery
Gayoung Lee | Gizmodo
“Researchers finally believe they’ve found the right blueprint for scalable quantum batteries, publishing their findings in a recent study in Light: Science & Applications. ‘My ultimate ambition is a future where we can charge electric cars much faster than [fueling] petrol cars or charge devices over long distances wirelessly,’ James Quach, the study’s senior author and a researcher at CSIRO, Australia’s national science agency, said in a statement.”
My Tesla Was Driving Itself Perfectly—Until It Crashed
Raffi Krikorian | The Atlantic ($)
“The problem is bigger than one company’s self-driving system. It’s about how we’re building every AI system, every algorithm, every tool that asks for our trust and trains us to give it. The pattern is everywhere: Condition people to rely on the system. Erode their vigilance. Then, when something breaks, point to the terms of service and blame them for not paying attention.”
A Private Space Company Has a Radical New Plan to Bag an Asteroid
Eric Berger | Ars Technica
“[TransAstra CEO Joel Sercel] envisions aggregating dozens, and then hundreds, of small asteroids at the ‘New Moon’ processing facility, which could potentially be located at the Earth-Sun L2 point, about 1.5 million km from Earth. Such asteroids could provide water for use as propellant and minerals for everything from solar panels to radiation shielding.”
Val Kilmer Set to Be Resurrected With AI for New Film
Owen Myers | The Guardian
“The film-maker is working in conjunction with the late actor’s estate and his daughter, Mercedes, to bring Kilmer back to life with state-of-the-art, generative AI. …The AI-generated version of Kilmer will appear in a ‘significant’ portion of the film, says Voorhees. The film will use images of the actor taken throughout his life to re-create Kilmer through the decades.”
Online Bot Traffic Will Exceed Human Traffic by 2027, Cloudflare CEO Says
Sarah Perez | TechCrunch
“‘If a human were doing a task—let’s say you were shopping for a digital camera—you might go to five websites. Your agent or the bot that’s doing that will often go to 1,000 times the number of sites that an actual human would visit,’ Prince said. ‘So it might go to 5,000 sites. And that’s real traffic, and that’s real load, which everyone is having to deal with and take into account.’”
World ID Wants You to Put a Cryptographically Unique Human Identity Behind Your AI Agents
Kyle Orland | Ars Technica
“World now claims nearly 18 million unique humans have verified their identities on one of nearly 1,000 physical orbs around the world. Now, with Agent Kit, World wants to let those users tie their confirmed identity to any AI agent, letting it work on their behalf across the internet in a way other parties can trust.”
New NASA Chief Aiming for Moon Landings Every Month in 2027
Passant Rabie | Gizmodo
“The regular missions will be geared toward building a lunar base on the moon’s surface, which will act as a laboratory for astronauts to develop ways to live beyond Earth’s orbit. ‘If you’re building a moon base and you’re going there to stay, you’re gonna need lots of missions to and from the moon,’ Isaacman [told SpaceFlight Now in an interview].”
Jeff Bezos Wants to Save Earth With This Freaky-Looking Probe
Passant Rabie | Gizmodo
“The mission would be equipped with different techniques for mitigating the asteroid threat, including directing a powerful ion beam (a concentrated stream of charged particles) at the object to change its orbit. …[If that doesn’t work, then like the spacecraft in NASA’s DART mission], NEO Hunter can aim for a direct kinetic impact by ramming into the asteroid at high speed to redirect it from its Earth-bound trajectory.”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 21) appeared first on SingularityHub.
2026-03-20 22:00:00
There’s plenty of hand-waving around AGI. DeepMind hopes to change that with a new, more rigorous approach.
Few terms are as closely associated with AI hype as artificial general intelligence, or AGI. But Google DeepMind researchers have now proposed a framework that could more concretely measure how close models are to this tech industry holy grail.
Artificial general intelligence refers to a mythical AI system that can match the general and highly adaptable form of intelligence found in humans. As the number of tasks that large language models can tackle has rocketed in recent years, there’s been a growing chorus of voices suggesting the technology is creeping ever closer to this threshold.
But so far, there’s been no clear way to assess progress toward AGI, leaving plenty of room for speculation and exaggeration. To address this gap, a team from Google DeepMind has introduced a new cognitively inspired framework that deconstructs general intelligence into 10 key faculties. More importantly, they propose a way to evaluate AI systems across these key capabilities and compare their performance to humans.
“Despite widespread discussion of AGI, there is no clear framework for measuring progress toward it. This ambiguity fuels subjective claims, makes it difficult to track progress, and risks hindering responsible governance,” the researchers write in a paper outlining their new approach. “We hope this framework will provide a practical roadmap and an initial step toward more rigorous, empirical evaluation of AGI.”
This isn’t DeepMind’s first attempt to clarify the term. In 2023, the company proposed separating AI systems into different levels of capability, in much the same way self-driving systems are categorized.
But that approach stopped short of proposing a way to measure which level an AI system had actually reached. The new framework goes further, building a firmer conceptual footing for the key faculties underpinning model performance and offering a practical way to evaluate and compare systems.
Digging through decades of research in psychology, neuroscience, and cognitive science, the researchers identify eight basic cognitive building blocks that they say make up general intelligence.
These include the perception of sensory inputs and generation of outputs like text, speech, or actions. Add to those learning, memory, reasoning, and the ability to focus attention on specific information or tasks. Rounding out the list are metacognition—or the ability to reason about and control your own mental processes—and so-called executive functions, like planning and the inhibition of impulses.
The researchers also outline two “composite faculties” that require several building blocks to be applied together. These are problem solving and social cognition, which refers to the ability to understand and react appropriately to the social context.
To judge how well AI systems perform on each measure, the researchers suggest subjecting them to a broad suite of cognitive evaluations that target each specific ability. They also propose collecting human baselines for each task. This would involve asking a demographically representative sample of adults with at least a high school education to complete them under identical conditions.
The results of these tests can then be combined to create “cognitive profiles” that give a sense of a model’s strengths and weaknesses. And by comparing the results against the human baselines, it should be possible to determine when a system matches or surpasses the general intelligence of an average person.
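To make the idea concrete, here is a minimal sketch of how such a cognitive profile might be computed. The faculty names follow the list above; the baseline values, scores, and function names are invented for illustration, not taken from the DeepMind paper.

```python
# Toy "cognitive profile": per-faculty scores normalized against human
# baselines. Faculty names follow the paper's list as described above;
# every number here is hypothetical.

HUMAN_BASELINES = {
    "perception": 0.92, "output_generation": 0.90, "learning": 0.85,
    "memory": 0.88, "reasoning": 0.80, "attention": 0.83,
    "metacognition": 0.75, "executive_function": 0.78,
    "problem_solving": 0.82, "social_cognition": 0.86,
}

def cognitive_profile(model_scores: dict[str, float]) -> dict[str, float]:
    """Express each faculty score as a ratio to the human baseline.
    A ratio of 1.0 or more means the system matches the average human
    on that faculty's evaluation suite."""
    return {f: model_scores[f] / b for f, b in HUMAN_BASELINES.items()}

def matches_human_generality(profile: dict[str, float]) -> bool:
    # One strict reading of the framework: match humans on every
    # faculty, not merely on average.
    return all(ratio >= 1.0 for ratio in profile.values())

# Example with invented model scores (Python 3.9+ dict union):
scores = {f: 0.70 for f in HUMAN_BASELINES} | {"perception": 0.95}
profile = cognitive_profile(scores)
print(round(profile["perception"], 2), matches_human_generality(profile))
```

A profile like this makes jagged capabilities visible at a glance (strong perception alongside weak metacognition, say) rather than collapsing everything into a single score.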
Crucially, the framework focuses on what a system can do rather than how it does it, which means the evaluation is agnostic about the underlying technology. However, the researchers concede that there is currently no good way to measure many of the core cognitive capabilities identified.
While there are already well-established benchmarks for faculties like problem solving and perception, there are no reliable tests for things like metacognition, attention, learning, and social cognition. In addition, many of the best benchmarks are public, which means the testing criteria are easily accessible and may have already been included in model training data. So the authors say they’re working with academics to build more robust, non-public evaluations to fill the gaps.
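As a sketch of what the contamination worry looks like in practice, here is one crude version of a standard check: flag a benchmark item when too many of its word n-grams appear verbatim in the training corpus. The window size, threshold, and names are illustrative choices, not anything from the paper.

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Lowercased word n-grams: a crude fingerprint of a passage."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(benchmark_item: str, training_text: str,
                       n: int = 8, threshold: float = 0.5) -> bool:
    """Flag a benchmark item if a large share of its n-grams also appear
    verbatim in the training text. Real audits run against indexed
    corpora at scale; this is the idea in miniature."""
    item_grams = ngrams(benchmark_item, n)
    if not item_grams:
        return False
    overlap = len(item_grams & ngrams(training_text, n)) / len(item_grams)
    return overlap >= threshold
```

A model that has memorized a public test item can ace it without possessing the faculty the item was meant to measure, which is why non-public evaluations matter.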
How useful the new framework will be depends on several factors. First, it remains to be seen whether the criteria identified by the DeepMind team truly capture the essence of human general intelligence. Second, they need to prove that acing this test actually leads to better performance on practical problems compared to narrower, specialist AI systems.
But considering the hand-waving nature of the debate around AGI so far, any framework grounded in well-established cognitive theory and rigorous evaluation represents a significant step forward.
The post Google DeepMind Plans to Track AGI Progress With These 10 Traits of General Intelligence appeared first on SingularityHub.
2026-03-19 22:00:00
The prevailing narrative suggests AI is ready to replace humans, but the evidence is more nuanced.
In the past few months, a wave of tech corporations have announced significant staff cuts and attributed them to efficiency gains driven by artificial intelligence.
Companies such as Atlassian, Block, and Amazon have announced they would lay off thousands of employees due to increased reliance on AI.
The narrative these companies offer is consistent: AI is making human labor replaceable, and responsible management demands adjustment.
The evidence, however, tells a more nuanced story.
Genuine disruption is visible in specific corners of the labor market, though the scale of that disruption is commonly overstated. Research from Anthropic published earlier this month shows that although many work tasks are susceptible to automation, the vast majority are still performed primarily by humans rather than AI tools.
Moreover, some occupations are more exposed to displacement than others: Computer programmers sit at the top of the list, followed by customer service representatives and data entry workers. Yet even within the most exposed occupations, AI use is still limited.
The aggregate economic data reflects this reality. A 2025 Goldman Sachs report estimated that if AI were used across the economy for all the things it could currently do, roughly 2.5 percent of US employment would be at risk of job loss.
That’s not a trivial number. However, the report notes that workers in AI-exposed occupations are currently no more likely to lose their jobs, face reduced hours, or earn lower wages than anyone else.
The report does note early signs of strain in specific industries. Goldman Sachs identifies sectors where employment growth has slowed that align with AI-related efficiency gains. Examples include marketing consulting, graphic design, office administration, and call centers.
In the tech sector, US workers in their 20s in AI-exposed occupations saw unemployment rise by almost 3 percent in the first half of 2025. Anthropic’s research also found that job-finding rates (the chance of an unemployed person finding a job in a one-month period) for workers aged 22–25 entering AI-exposed occupations have fallen by around 14 percent since the launch of ChatGPT in 2022. This is a tentative but telling signal about where the pressure is being felt first.
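To pin down the definition quoted above, here is a toy calculation. All counts are invented; only the 14 percent figure comes from the research.

```python
# Job-finding rate: the share of unemployed workers who find a job
# within a one-month window. The counts below are invented.

def job_finding_rate(unemployed: int, found_job_in_month: int) -> float:
    return found_job_in_month / unemployed

before = job_finding_rate(10_000, 2_500)  # 25% per month
after = before * (1 - 0.14)               # a 14% *relative* decline
print(f"{before:.1%} -> {after:.1%}")     # 25.0% -> 21.5%
```

Note that the decline is relative: a 14 percent drop in the rate itself, not a fall of 14 percentage points.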
These are meaningful signals, but they are sector-specific and concentrated—not the evidence of sweeping displacement that corporate announcements often imply. That gap between the evidence and the rhetoric raises an obvious question: What else might be driving these decisions?
The timing and framing of the layoffs attributed to AI warrant closer examination. Corporate restructuring, over-hiring during the post-pandemic boom as demand for online services soared, and pressure from investors to demonstrate improved profit margins are all forces operating at the same time as genuine advances in AI.
While these are not mutually exclusive explanations, they are rarely acknowledged alongside one another in corporate communications.
There is a powerful financial incentive for companies to be seen to be embracing AI aggressively. Since the launch of ChatGPT, AI-related stocks have accounted for about 75 percent of S&P 500 returns.
A workforce reduction framed around AI adoption sends a signal to investors that a straightforward cost-cutting announcement does not. A company making AI-related innovations looks a lot better than one sacking staff due to declining revenues or poor strategic decisions.
It is also worth distinguishing between two kinds of workforce reduction. In the first, AI genuinely increases productivity to the point where fewer workers are needed to produce the same output. In the second, staff reductions are not a consequence of AI, but a way to fund it.
Meta illustrates this distinction. The social media giant is reportedly planning to lay off as much as 20 percent of its workforce, while simultaneously committing $600 billion to build data centers and recruit top AI researchers.
In this case, the workers being let go are not being replaced by AI today; they are subsidizing the AI bet their employer is making on the future.
The big picture is likely one of transformation rather than elimination. According to a recent PwC report, employment is still growing in most industries exposed to AI, although growth tends to be slower than in less exposed sectors.
At the same time, wages in AI-exposed industries are rising roughly twice as fast as in those least touched by the technology. Workers with AI skills command an average wage premium of about 56 percent across the industries analyzed.
Together, the data points toward a flattening of the traditional workplace pyramid rather than mass displacement. Firms require fewer junior employees for routine analytical and administrative work, while experienced professionals who deploy AI tools effectively become more productive and command greater value.
AI is a consequential technology and will have a significant impact in the long term. What is in doubt is whether the dramatic, AI-attributed workforce reductions announced by individual companies accurately reflect that trajectory, or whether they conflate genuine technological change with decisions that would have been made regardless.
Making this distinction is not merely an academic exercise. It shapes how policymakers, educators, and workers themselves understand the nature of the disruption they are navigating.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Tech Companies Are Blaming Massive Layoffs on AI. What’s Really Going On? appeared first on SingularityHub.
2026-03-18 05:54:35
As they imagine typing, implants translate brain signals into keystrokes on a standard digital keyboard.
It’s hard to picture a keyboard layout other than the one we know best. From laptops to smartphones, it’s an integral part of our digital lives.
Scientists at Massachusetts General Hospital have now restored the ability to communicate by keyboard to two people with paralysis—using their thoughts alone.
Both people already had brain implants that could record their minds’ electrical chatter. The new system translated brain signals in real time as each person imagined finger movements. The system then accurately predicted the character they were trying to type.
The system learned to translate brain activity to physical intent after just 30 sentences. Typing speeds reached 22 words per minute with few errors, nearly matching speeds of able-bodied smartphone users.
“To our knowledge, this system provides the fastest… [brain implant] communication method reported to date based on decoding from hand motor cortex,” wrote the team.
The participants are part of the BrainGate2 clinical trial, a pioneering effort to restore communication and movement by decoding neural signals in people who have lost the use of all four limbs and the torso. One of the participants previously used the implants to translate his inner thoughts into text, but with mixed success.
Controlling a digital keyboard is far more intuitive and familiar, which makes the system easier to grasp. Once a person learns to use it, they don’t have to look at the keyboard, giving their eyes a break as they type with their minds. It also gives users full control over when, or whether, to share their thoughts, preventing private musings from accidentally leaking onto a screen or being broadcast as AI-generated speech.
Parts of the brain hum with electrical activity before we speak. Over the past decade, brain implants—microelectrodes that listen in and decode signals—have translated these seemingly chaotic buzzes into text or speech, allowing paralyzed people to regain the ability to communicate.
Methods vary. Some hardware takes the form of wafer-thin disks sitting on top of the brain and gathering signals from vast regions; other devices are inserted into the brain for more targeted recordings.
These systems are life changing. In a recent example, an implant translated the neural activity controlling the vocal muscles of a man with ALS. With just a second’s delay, the system generated coherent sentences with intonation, allowing him to sing with an artificial voice. Another device turned a paralyzed woman’s thoughts into speech with nearly no delay, so she could hold a conversation without frustrating halts. People have also benefited from a method that uses the neural signals behind handwriting for brain-to-text communication.
Brain implants aren’t purely experimental anymore: China recently approved a setup allowing people with paralysis to control a robotic hand. It’s the first such device available outside of clinical trials.
Perhaps the most widely used clinical solution is eye-tracking. Here, patients move their eyes to focus on individual letters, one at a time, on a custom digital keyboard. But the pace is agonizingly slow and prone to error. And prolonged screen time strains the eyes, making extended conversations difficult.
“Those systems take far too long for many users,” said study author Daniel Rubin in a press release, causing them to abandon the technology.
For people who already know how to type, the standard keyboard layout—known as QWERTY—feels familiar and comfortable. Fingers stretch to hit letters in the upper row, tap directly down for ones in the middle, and curl into a loose claw to hit bottom letters and punctuation.
As fingers dance across the keyboard, parts of the motor cortex that control their motion spark with activity, precisely directing each placement. Mind-typing using a familiar keyboard, compared to a custom one, could feel more intuitive and relaxing.
Two people with tetraplegia gave the idea a shot. Participant T17 was diagnosed with ALS at 30, a disease that slowly destroys motor neurons, weakening muscles and eventually impairing breathing. Three years later, when he enrolled in the study, he’d lost control of his vocal muscles and relied on a ventilator. He could move only his eyes, but his mind was still sharp. The second participant, T18, was paralyzed by a spinal cord injury 18 months before enrollment. Both had multiple brain implants in different areas. These were connected to cables that shuttled recordings to a computer system for real-time processing.
The participants used a simplified QWERTY digital keyboard containing all 26 letters, a space key, and three types of punctuation—a question mark, comma, and period. To train the system, the volunteers imagined stretching, tapping, or curling their fingers to type text prompts, while implants captured and isolated neural signals for each finger. After training, a deep learning model predicted intended characters, and a language model continuously attempted to autocomplete the sentence.
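The study’s decoder is a deep network trained on intracortical recordings, but the overall shape of the pipeline (a per-window classification over keys, then rescoring by a language model) can be sketched with stand-in components. Everything below, from the channel count to the bigram table, is hypothetical.

```python
import numpy as np

# 26 letters + space + three punctuation marks, per the study's layout.
KEYS = list("abcdefghijklmnopqrstuvwxyz ") + ["?", ",", "."]  # 30 keys

def decode_window(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map one window of neural features to a probability over keys.
    Stand-in for the study's deep network: a linear softmax classifier."""
    logits = features @ weights            # (channels,) @ (channels, 30)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def rescore(probs: np.ndarray, prev: str, bigram: dict) -> np.ndarray:
    """Blend decoder output with a toy character-level language model,
    analogous to the autocompleting language model described above."""
    lm = np.array([bigram.get((prev, k), 1e-3) for k in KEYS])
    blended = probs * lm
    return blended / blended.sum()

# One decoding step with random stand-in data:
rng = np.random.default_rng(0)
w = rng.normal(size=(96, len(KEYS)))       # e.g., 96 electrode channels
p = decode_window(rng.normal(size=96), w)
p = rescore(p, prev="t", bigram={("t", "h"): 0.3})
print(KEYS[int(p.argmax())])               # most likely next character
```

In the real system this loop runs continuously, and, as described below, the language model can be disabled when autocorrection is unwanted.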
After practicing just 30 sentences, both participants could copy on-screen text or type whatever they wanted. When asked “what was the best part of your job,” T18 cheekily replied “the best part of my job was the end [of] the day.” Meanwhile, T17, a fan of The Legend of Zelda video games, told the researchers “you should try oracle of ages and seasons…another is skyward sword…the music in those games is great.”
Their typing speeds broke records. T18 communicated at 110 characters per minute, or roughly 22 words per minute, which is 20 characters per minute faster than a previous state-of-the-art method based on handwriting, wrote the team. The rate is nearly on par with that of able-bodied smartphone users of a similar age. Typing errors were consistently low, and accuracy neared perfection after practice.
T17, with incomplete locked-in syndrome due to ALS, typed 47 characters a minute at a higher error rate. He had full use of his vocabulary, unlike with previous systems that imposed word restrictions, and communicated much faster.
The performance differences could be due to where their implants are located. T18’s microarrays are on both sides of the brain, with some covering an area that controls all four limbs. T17’s implants are on only the left half of his brain, with less coverage of finger motor areas.
The team is now tweaking the system for longer use tailored to individuals. As disease progresses, the link between brain signals and keyboard characters may drift and produce more errors. But updating the algorithm is easy. The system needs only a few sentences to learn, so users could start each day mind-typing some thoughts to keep things dialed in.
Updates to the digital keyboard, like adding numbers or the return and delete keys, are in the works. Temporarily disabling the language model could also let participants type strong gibberish passwords, internet slang (ikr, btw, lol), and other non-standard words without being autocorrected.
The brain implant “is a great example of how modern neuroscience and artificial intelligence technology can combine to create something capable of restoring communication and independence for people with paralysis,” said study author Justin Jude.
The post Brain Implants Let Paralyzed People Type Nearly as Fast as Smartphone Users appeared first on SingularityHub.
2026-03-17 07:20:21
The simulation encompasses nearly all of a cell’s molecules over roughly two hours.
Five years ago, scientists watched in wonder as synthetic bacteria grew and split into daughter cells. The bacteria’s extremely stripped-down genome still supported its entire life cycle. It was a crowning achievement in synthetic biology that shed light on life’s most basic processes.
These processes can now be viewed digitally. This month, a team at the University of Illinois at Urbana-Champaign unveiled a virtual model of the bacteria that tracks nearly all of a cell’s molecules down to the nanoscale. The researchers built this digital cell by combining several large datasets covering thousands of molecules and then animating them as the bacteria split in two.
The model is the latest in a growing effort to make digital twins of living cells. Mimicking diseases or treatments in the digital world offers a bird’s-eye view of cellular changes and could speed up drug discovery and help researchers tackle complex diseases like cancer.
“We have a whole-cell model that predicts many cellular properties simultaneously,” study author Zan Luthey-Schulten said in a press release. The model could provide “the results of hundreds of experiments” at the same time, she said.
Every cell is a bustling metropolis. Proteins orchestrate a vast range of cellular responses. RNA molecules carry instructions from genes to the cell’s protein-building factories. Fatty acids in a cell’s membrane rearrange themselves to admit nutrients or ward off invaders. Working in tandem, they all keep the cell humming along.
This complexity makes cells hard to simulate. But with large datasets charting the genome, gene expression, and proteins alongside sophisticated AI, scientists have built static virtual cells that paint a near-complete picture with atomic-level resolution. More recent models can even predict molecular movements for a short period of time (often less than a second).
But they can’t simulate “the mechanics and chemistry that take place over minutes to hours in processes such as gene expression and cell division,” wrote the University of Illinois team.
Other efforts use physics to predict how molecular changes affect behavior in bacteria, yeast, and human cells. These treat cells as a “well-stirred system”—that is, a cup of molecular soup lacking details about where each molecule sits and how molecules vary from cell to cell.
But location is key. As cells divide, some proteins gather around DNA to help copy it; others assemble near the membrane to recruit fatty molecules for its growth as the cell splits in two.
Simulating everything, everywhere, all at once during human cell division is beyond even the most powerful supercomputers. Minimal bacteria offer an alternative. These synthetic bacteria are stripped-down versions of the parasite Mycoplasma mycoides. The team focused on one of these known as JCVI-syn3A. Its 493-gene genome—roughly half the original—is the smallest set of DNA instructions that can boot up a living bacterium still able to grow and divide.
In 2022, the team developed a 3D model of the bacteria’s metabolism, genes, and growth. But the software, Lattice Microbes, struggled to track division.
The new study added more data to the software. This included membrane changes and information about how ribosomes, the cell’s protein-making machines, assemble and move inside the cell’s gooey interior. They also added stochasticity, or unpredictability, to the model.
Changes to the location of chromosomes, which house DNA, are random as the cell divides, which makes them difficult to predict. But their position influences DNA replication and gene expression.
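The randomness described here is the kind captured by stochastic kinetic methods such as the Gillespie algorithm. A minimal, non-spatial sketch for a single gene’s expression (rate constants invented; the study’s model resolves events like these in 3D space, on GPUs):

```python
import random

# Minimal Gillespie simulation of stochastic gene expression:
# transcription and translation plus decay. Rates are invented.

def gillespie(t_end: float = 105.0):          # minutes, one cell cycle
    t, mrna, protein = 0.0, 0, 0
    k_tx, k_tl, d_m, d_p = 0.5, 2.0, 0.1, 0.02  # per-minute rates
    while t < t_end:
        rates = [k_tx, k_tl * mrna, d_m * mrna, d_p * protein]
        total = sum(rates)
        t += random.expovariate(total)        # time until the next event
        r = random.uniform(0.0, total)        # which event fires
        if r < rates[0]:
            mrna += 1                         # transcription
        elif r < rates[0] + rates[1]:
            protein += 1                      # translation
        elif r < rates[0] + rates[1] + rates[2]:
            mrna -= 1                         # mRNA decay
        else:
            protein -= 1                      # protein decay
    return mrna, protein

# Two runs with identical parameters diverge: cell-to-cell variability.
print(gillespie(), gillespie())
```

Scaling this idea from one gene to every molecule in a dividing cell, with positions tracked in space, is what pushed the software to its limits.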
The first update nearly broke the software. It could map molecules involved in cell division, such as an enzyme critical for DNA copying. But adding chromosome location predictions slowed the model to a crawl, even when running on advanced GPUs. Most of the cells died before their simulations were complete.
Several tweaks helped. One was to add more computational power. The team used a GPU dedicated to chromosomes, while all other details were processed on a separate chip. The model also ran faster by rendering some proteins as inert spheres that could be largely ignored.
The upgrades worked. Leaving the model running over Thanksgiving, the team returned to find it had completed the bacteria’s whole life cycle. “All of a sudden, it was just this huge leap,” study author Zane Thornburg told Nature.
The simulation matched many real-world experiments, such as how the cells elongate and bubble into dumbbell-like shapes during division. The model also accurately predicted the length of a cell cycle and captured a wide range of cellular activity.
“I can’t overstate how hard it is to simulate things that are moving—and doing it in 3D for an entire cell was…triumphant,” said Thornburg.
Every cell is like a snowflake: All cells contain similar molecules, but the amounts and locations differ. The model easily handled this diversity. Repeated simulations of the bacteria, each starting with a slightly different genetic, molecular, and metabolic makeup, resulted in similar cycle lengths and chromosome movements during division.
The results came at a cost: Simulating the cell’s 105-minute cycle took up to six days on a supercomputer. But the virtual cell could lend insights into the molecular dance that causes all cells to grow and divide. JCVI-syn3A doesn’t have the smallest genome. Its predecessor holds the record, but it also struggles to make normally shaped and functional daughter cells—suggesting some genes are essential for division. Simulation could help us understand why.
Other efforts using generative AI to build virtual cells are in the works. But because this study’s model was grounded in strict physical and biochemical rules, results could be easily verified in the lab. AI-generated virtual cells, however, are commonly trained on gene expression data alone, which is a snapshot of a cell’s state and often fails to predict complex cell responses.
The two approaches could inspire each other by homing in on principles that make a virtual cell run like the real deal. For example, they could show that capturing each molecule in space and time, rather than as a soup, vastly improves the model.
Although the model can’t simulate a cell atom-by-atom, the team wrote, it could “illuminate the interwoven nature of the biology, chemistry, and physics that govern life for cells.”
The post Digital Twin of a Cell Tracks Its Entire Life Cycle Down to the Nanoscale appeared first on SingularityHub.
2026-03-14 22:00:00
How Pokémon Go Is Giving Delivery Robots an Inch-Perfect View of the World
Will Douglas Heaven | MIT Technology Review ($)
“Niantic Spatial is using that vast and unparalleled trove of crowdsourced data—images of urban landmarks tagged with super-accurate location markers taken from the phones of hundreds of millions of Pokémon Go players around the world—to build a kind of world model, a buzzy new technology that grounds the smarts of LLMs in real-world environments.”
A Roadmap for AI, if Anyone Will Listen
Connie Loizos | TechCrunch
“The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. One path, which the declaration calls ‘the race to replace,’ leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.”
Startup Is Building the First Data Center to Use Human Brain Cells
Alex Wilkins | New Scientist ($)
“Data centers use huge amounts of energy and chips are in high demand—could brain cells be the answer? Australia-based start-up Cortical Labs has announced it is building two ‘biological’ data centers in Melbourne and Singapore, stacked with the same neuron-filled chips that it has demonstrated can play Pong or Doom.”
Why Do Humanoid Robots Still Struggle With the Small Stuff?
John Pavlus | Quanta Magazine
“I asked each researcher: Can your flagship robot—Boston Dynamics’ Atlas or Agility’s Digit, two of the most credible and pedigreed humanoids on Earth—handle any set of stairs or doorway? ‘Not reliably,’ Hurst said. ‘I don’t think it’s totally solved,’ Kuindersma said. …It’s 2026. Why are humanoids still this…hard?”
AI Isn’t Lightening Workloads. It’s Making Them More Intense.
Ray A. Smith | The Wall Street Journal ($)
“One of the great hopes for artificial intelligence—at least, among workers—is that it will ease workloads, freeing people up for more high-level, creative pursuits. So far, the opposite is happening, new data show. In fact, AI is increasing the speed, density and complexity of work rather than reducing it, according to an analysis of 164,000 workers’ digital work activity.”
Karpathy’s March of Nines Shows Why 90% AI Reliability Isn’t Even Close to Enough
Nikhil Mungel | VentureBeat
“The ‘March of Nines’ frames a common production reality: You can reach the first 90% reliability with a strong demo, and each additional nine often requires comparable engineering effort. For enterprise teams, the distance between ‘usually works’ and ‘operates like dependable software’ determines adoption.”
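The compounding behind that framing is easy to see with a few lines of arithmetic (the step counts are illustrative):

```python
# Per-step reliability compounds across a multi-step agent task:
# end-to-end success is roughly per_step ** n_steps.
for per_step in (0.90, 0.99, 0.999):
    print(per_step, [round(per_step ** n, 3) for n in (1, 10, 100)])
# 0.90  -> [0.9, 0.349, 0.0]     a 90%-reliable step fails long tasks
# 0.99  -> [0.99, 0.904, 0.366]  each extra nine supports ~10x the steps
# 0.999 -> [0.999, 0.99, 0.905]  at the same end-to-end reliability
```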
The Race to Solve the Biggest Problem in Quantum Computing
Karmela Padavic-Callaghan | New Scientist ($)
“Quantum computers are already here, but they make far too many errors. This is arguably the biggest obstacle to the technology really becoming useful, but recent breakthroughs suggest a solution may be on the horizon. ‘It’s a very exciting time in error correction. For the first time, theory and practice are really making contact,’ says Robert Schoelkopf at Yale University.”
Modular Yard Robot Mows Lawns, Plows Snow, Gathers Leaves and Trims Grass
Maryna Holovnova | New Atlas
“Homeowners usually end up with a garage filled with various equipment: a lawn mower, snow blower, shovels, and tools for clearing fallen leaves. Currently available on Kickstarter, the Yarbo M attempts to combine all those individual tools into one compact robotic platform that can automatically do all the yard work.”
These Self-Configuring Modular Robots May One Day Rule the World
Tom Hawking | Gizmodo
“Each unit has multiple points to which another unit can attach itself: 18 of them, to be precise, which means that just two units can be combined in 435 ways. The number of possible configurations explodes as the number of units increases, and by the time you get to five units, there are hundreds of billions of possible combinations.”
This SpaceX Veteran Says the Next Big Thing in Space Is Satellites That Return to Earth
Tim Fernholz | TechCrunch
“The reusable rocket has transformed the space industry in the last decade, and a new startup led by a SpaceX veteran wants to do the same for satellites. Brian Taylor, who helped build satellites for networks like SpaceX’s Starlink and Amazon’s Leo, founded Lux Aeterna in December 2024 to develop satellite structures with a built-in heat shield that will allow them to return to Earth with their payloads intact.”
Almost 40 New Unicorns Have Been Minted So Far This Year—Here They Are
Dominic-Madori Davis | TechCrunch
“Using data from Crunchbase and PitchBook, TechCrunch tracked down the VC-backed startups that became unicorns in 2026. While most are AI-related, a surprising number are focused on other industries like healthcare and even a few crypto companies.”
SETI Thinks It Might Have Missed a Few Alien Calls. Here’s Why
Matthew Phelan | Gizmodo
“A new study published by researchers at the SETI Institute, short for the Search for Extraterrestrial Intelligence, has tested the possibility that ‘space weather’ could render strong premeditated alien broadcasts into the kind of fainter radio signals that SETI typically ignores.”
The post This Week’s Awesome Tech Stories From Around the Web (Through March 14) appeared first on SingularityHub.