Singularity Hub offers daily news coverage, feature articles, analysis, and insights on key breakthroughs and future trends in science and technology.

Robots With Different Designs Can Now Share Skills

2026-05-02 03:29:07

Abilities taught to one robot don’t usually work on another. With a new approach, it’s one and done.

As robots move into the real world, they’ll need to become more adaptable. But right now, it’s hard to transfer skills from one machine to another. A new system makes this possible.

One of the most popular ways to teach robots is to have a human show them what to do—either by physically guiding the robot’s joints, using remote control, or even drawing the desired motion.

But those skills are indelibly tied to each specific robot. If a company upgrades to a new robot with a different design, the skill breaks, and the robot has to be trained from scratch.

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have now sidestepped this challenge by teaching robots to understand the limits of their own joints. In a paper published in Science Robotics, the new approach allowed multiple robots to complete a task based on a single human demonstration.

“With new designs come different capabilities and constraints,” Durgesh Haribhau Salunkhe, a co-author of the paper, told Ars Technica. “The problem is to adapt to these constraints and capabilities—to faithfully replicate the actions demonstrated by a human.”

Surprisingly, the approach doesn’t rely on AI. Instead, the researchers analyzed the physical properties of several robotic arms with three rotating joints—a popular design in commercial settings—to map out their limits.

To complete a task, a robotic arm must calculate how to bend each joint to reach its target. It also has to avoid pushing the joints past their physical limits or twisting them at weird angles. Engineers call these limits “singularities” because they cause the math governing the robots’ motion to break down. Failures can cause sudden and unsafe movements.
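For intuition, consider a simplified planar arm with just two rotating joints, a toy stand-in for the three-joint arms in the study (the link lengths below are made up). The quantity the motion math divides by, the Jacobian determinant, shrinks to zero exactly when the arm is fully stretched or folded:

```python
import math

def jacobian_det(theta2, l1=0.4, l2=0.3):
    # For a planar 2-link arm, the Jacobian determinant is l1 * l2 * sin(theta2).
    # It vanishes when the arm is fully stretched (theta2 = 0) or fully folded
    # (theta2 = pi) -- the kinematic singularities where the math breaks down.
    return l1 * l2 * math.sin(theta2)

def is_near_singularity(theta2, tol=1e-3):
    # Flag configurations too close to a singularity to move through safely.
    return abs(jacobian_det(theta2)) < tol

print(is_near_singularity(0.0))          # → True  (fully extended)
print(is_near_singularity(math.pi / 2))  # → False (safely bent)
```

A controller can check this value before committing to a motion and reroute when it gets too small; the paper's category-specific strategies are a far more principled version of that idea.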

The researchers mapped safe regions in each robot’s range of motion and sorted all three-joint robots into six categories based on shared physical limits.

They embedded these limits into each robot’s programming. The team calls this “kinematic intelligence,” essentially knowledge of what movements the machines can and can’t make safely.

If a movement pushes the robot into an unsafe zone, the system activates what the researchers call a “track cycle.” This is a strategy for skirting the danger zone, tailored to the robot’s category. Some robots traverse horizontally along zones, others vertically, and some switch modes.

As a real-world test, the team set up a mock assembly line with three commercial robots: one whose movements are relatively constrained, another with more flexibility, and a third capable of a much wider range of motions.

A human demonstrated three tasks: pushing an object off a conveyor belt, picking it up and placing it on a workbench, and putting it in a basket. Each robot tried these tasks, and despite the movements pushing them close to their limits, all three followed the demonstrations successfully.

The system currently handles a robot’s physical limits well and keeps movements safe. But it isn’t designed for unpredictable environments or complex decisions. So it’s likely best suited to highly controlled factory settings rather than the messier real world.

Still, allowing robots to share skills could make it easier to roll them out across a range of commercial settings. It won’t bring us the robot butlers Silicon Valley has promised, but it could accelerate the much more practical integration of robots in industry.

The post Robots With Different Designs Can Now Share Skills appeared first on SingularityHub.

How Does Imagination Really Work in the Brain? New Theory Upends What We Knew

2026-05-01 06:48:37

Imagination may have more to do with the brain activity it silences than the activity it creates.

Your brain is currently expending about a fifth of your body’s energy, and almost none of that is being used for what you’re doing right now. Reading these words, feeling the weight of your body in a chair—all of this together barely changes the rate at which your brain consumes energy, perhaps by as little as 1 percent.

The other 99 percent is used on the activity the brain generates on its own: neurons (nerve cells) firing and signaling to each other regardless of whether you’re thinking hard, watching television, dreaming, or simply closing your eyes.

Even in the brain areas dedicated to vision, the visuals coming in through your eyes shape the activity of your neurons less than this ongoing internal activity.

In a paper recently published in Psychological Review, we argue that our imagination sculpts the images we see in our mind’s eye by carving into this background brain activity. In fact, imagination may have more to do with the brain activity it silences than with the activity it creates.

Imagining as Seeing in Reverse

Consider how “seeing” is understood to work. Light enters the eyes and sparks neural signals. These travel through a sequence of brain regions dedicated to vision, each building on the work of the last.

The earliest regions pick out simple features such as edges and lines. The next combine those into shapes. The ones after that recognize objects, and those at the top of the sequence assemble whole faces and scenes.

Neuroscientists call this “feedforward activity”—the gradual transformation of raw light into something you can name, whether it’s a dog, a friend, or both.

In brain science, the standard view is that visual imagination is this original seeing process run in reverse, from within your mind rather than from light entering your eyes.

So, when you hold the face of a friend in mind, you start with an abstract idea of them—a memory or a name, pulled from the filing cabinet of regions that sit beyond the visual system itself.

That idea travels back down through the visual sequence into the early visual areas, which serve as your brain’s workshop where a face would normally be reconstructed from its parts—the curve of a jawline, the specific shade of an eye. These downward signals are called “feedback activity.”

A Signal Through the Static

However, prior research shows this feedback activity doesn’t drive visual neurons to fire in the same way as when you actually see something.

At least in the brain regions early in the vision process, feedback instead modulates brain activity. This means it increases or decreases the activity of the brain cells, reshaping what those neurons are already doing.

Even behind closed eyes, early visual brain areas keep producing shifting patterns of neural activity resembling those the brain uses to process real vision.

Imagination doesn’t need to build a face from scratch. The raw material is already there. In the internal rumblings of your visual areas, fragments of every face you know are drifting through at low volume. Your friend’s face, even now, is passing through in pieces, scattered and unrecognized. What imagining does is hold still the currents that would otherwise carry those pieces away.

All that’s needed is a small, targeted suppression of neurons that are pulled by brain activity in a different direction, and your friend’s face settles out of the noise, like a signal carving its way through static.

Steering the Brain

In mice, artificially switching on as few as 14 neurons in a sensory brain region is enough for the animal to notice it and lick a sugar-water spout in response. This shows how small an intervention in the brain can be while still steering behavior.

While we don’t know how many neurons are needed to steer internal activity into a conscious experience of imagination in humans, growing evidence shows the importance of dampening neural activity.

In our earlier experiments, when people imagined something, the fingerprint it left on their behavior matched suppression of neuronal activity—not firing. Other researchers have since found the same pattern.

Other lines of evidence strengthen our theory, too. About one in 100 people have aphantasia, meaning they can’t form mental images at all. And about one in 30 have hyperphantasia, forming mental images so vivid they approach the intensity of actual sight.

Research has found that people with weaker mental imagery have more excitable early visual areas, where neurons fire more readily on their own. This is consistent with a visual system whose spontaneous patterns are harder to hold in shape.

Taking all this together, the spontaneous activity reshaping hypothesis—our new theory that imagination carves images out of the steady stream of ongoing brain activity—explains why imagination usually feels weaker than sight. It also explains why we rarely lose track of which is which.

Visual perception arrives with a strength and regularity the brain’s own internal patterns don’t match. Imagination works with those patterns rather than against them, reshaping what is already there into something we can almost see.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post How Does Imagination Really Work in the Brain? New Theory Upends What We Knew appeared first on SingularityHub.

Sony’s Table-Tennis Robot Beat Elite Human Players With Unorthodox Moves

2026-04-28 22:00:00

AI long ago surpassed humans at games like chess and Go. Now it’s powering robots that can challenge top athletes.

Peter Dürr could barely follow the table-tennis ball as it zoomed across the net, each strike’s trajectory designed to perplex the opponent. This was no ordinary match: Taira Mayuka, one of the top players in the world, was on one side; on the other was a robot called Ace.

Mayuka launched a twisting smash that should have nailed a point. But in the blink of an eye, Ace answered with a return that kept the game alive. “Yes!” Dürr pumped his fist, knowing his team had engineered a historic moment for robotics.

Sony AI’s Ace is the latest autonomous system to be pitted against humans in a game. Since Deep Blue defeated chess champion Garry Kasparov in 1997, AI has trounced humans in Jeopardy, Go, StarCraft II, and car-racing simulations.

Ace has now taken these virtual victories into the real world.

Up against seven top human players, the AI-controlled robot arm beat three in multiple adrenaline-pumping games. Ace is an “important milestone,” wrote Carlos H. C. Ribeiro and Esther Colombini at the Aeronautics Institute of Technology and University of Campinas, respectively, who were not involved in the study.

Ace joins a humanoid robot that crushed the world record for a half marathon in Beijing last week. Neither project is focused on creating elite robotic athletes. Their main goal is to build next-generation autonomous machines that operate fluidly in the physical world.

“We wanted to prove that AI doesn’t just exist in virtual spaces,” Michael Spranger, president of Sony AI, said in a press release. “It’s not just tech you interact with in the virtual world—you can actually have a physical experience, and the technology is ready for that.”

Fast and Furious

Robots have come a long way. The clumsy, bumbling humanoids are gone, replaced by agile machines that can navigate all kinds of terrain. Autonomous vehicles once baffled by our roads now cruise the streets. Dexterous robotic arms are increasingly used for surgery, warehouse operations, and even delivering your lunch.

AI is a big part of that leap in capability. Robots are no longer strictly preprogrammed machines. They can now learn, adapt, and make decisions, with generative AI models helping them understand what they’re looking at and, increasingly, how to interact with it. They’re a little less like yesterday’s rigid machines, and more like curious kids: Taking in a messy world, figuring it out, and getting better over time.

But compared to humans, robots still struggle to react on the fly, especially in fast-paced games like table tennis. The sport is a brutal mix of speed, perception, and precision. Players must read the ball and strike in a split second. There’s no margin for error. Too much power or the wrong angle, and the ball flies off the table. Too predictable, and you’ve likely handed your opponent the next point.

Professional players can smash shots up to 67 miles per hour and impart “a massive amount of spin on the ball,” exceeding 160 rotations a second, Dürr told Nature, making it tough for rookie humans and robots to react in time.

To Dürr, building a robot that could compete with elite human players was a “dream project” that “would challenge us to push the individual component technologies to their limits.”

Give Me Your Best Shot

Ace seamlessly fuses AI-based software and hardware.

For its eyes, the team placed cameras outside the court that could cover the entire playing area and track the ball’s position about 200 times per second. They also used an event-based image sensor to capture the ball’s spin. Together, these give the robot “the information it needs to anticipate where the ball is going to go, and plan how to hit it back,” said Dürr.
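To see how position tracking supports anticipation, here is a toy sketch (not Sony's code, and with made-up numbers) that predicts a landing point from two position samples, assuming plain projectile motion. Ignoring spin and drag is a big simplification; the event-based spin sensor exists precisely because real shots violate it:

```python
# Predict where a ball will land from two sampled positions, assuming simple
# projectile motion (no spin, no drag -- a deliberate simplification).
G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(p0, p1, dt):
    """p0, p1: (x, y) positions in meters sampled dt seconds apart.
    Returns the x position where the ball reaches y = 0."""
    vx = (p1[0] - p0[0]) / dt            # horizontal velocity estimate
    vy = (p1[1] - p0[1]) / dt            # vertical velocity estimate
    x, y = p1
    # Solve y + vy*t - 0.5*G*t^2 = 0 for the positive root.
    disc = vy ** 2 + 2 * G * y
    t_land = (vy + disc ** 0.5) / G
    return x + vx * t_land

# Ball at 1.0 m height moving horizontally at 5 m/s:
print(predict_landing((0.95, 1.0), (1.0, 1.0), 0.01))
```

A real system would refit this estimate with every new camera frame, which is where a 200 Hz sampling rate pays off.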

All that data feeds into multiple AI algorithms: Ace’s “brain.” One of these algorithms, borrowed from image processing, focuses on key parts of each frame to increase processing speed. Another, a deep reinforcement learning algorithm, learned to play table tennis in simulated matches. (Think student and coach: The model decides how to swing, where to aim, and how hard to hit. The “coach” gives feedback—good or bad—without demonstrating any moves.)

“So basically, we shoot a ball in simulation at our robot and let it do random things. At the beginning, it doesn’t know how to react…But eventually, it may be lucky enough to hit the ball back on the table,” said Dürr. And over countless iterations, it improves its play.
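The trial-and-error idea in that quote can be boiled down to a toy one-parameter example. This is random-search hill climbing with a made-up reward, not the deep reinforcement learning Sony used, but it shows the same loop: act randomly, keep whatever the feedback favors:

```python
import random

random.seed(0)

TARGET = 0.7  # hypothetical ideal swing parameter (unknown to the learner)

def reward(angle):
    # The "coach": a scalar score, higher is better, with no demonstrations.
    return -abs(angle - TARGET)

angle = random.uniform(-1, 1)  # start with no idea how to swing
for _ in range(500):
    candidate = angle + random.gauss(0, 0.1)  # try a random variation
    if reward(candidate) > reward(angle):     # keep only what scores better
        angle = candidate

print(f"learned swing parameter: {angle:.3f}")
```

Deep RL replaces the single parameter with millions of network weights and the random search with gradient-based updates, but the feedback loop is the same.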

Expert players coached Ace too. In table tennis, the initial toss sets up the serve. Ace learned from human demonstrations adapted to its mechanics, so every toss follows the game’s rules.

After thousands of simulated hours, and with the help of yet another algorithm to weed out poor plays, the team built a library of realistic serves for Ace to draw upon.

The last component was the arm itself—and off-the-shelf didn’t work. “There’s nothing on the market that would let us play at the level we wanted to play,” said Dürr. So they built their own robot from the ground up. The lightweight, six-jointed arm can whip a racket at over 20 meters (roughly 66 feet) per second and react roughly 11 times faster than a person.

All assembled, Ace is a table-tennis powerhouse—but not unbeatable. Against five elite and two professional players, it dominated the less-experienced elites but fell to the pros. In the months since the team wrote up their results, the robot has continued improving against top-tier competition.

Ace didn’t win by simply being faster than humans. Rather, it won by being inventive. It created different kinds of spins, varied its returns, and consistently landed the ball on target. When Olympic table-tennis player Kinjiro Nakamura watched Ace play, he was mesmerized by the robot’s unconventional moves. “No one else would have been able to do that. I didn’t think it was possible,” he said. But if a robot can pull it off, maybe humans can too.

For Colombini, who worked on soccer-playing robots, that kind of agility and improvisation is the real goal. Robots need to think on their feet and easily navigate the physical world to work safely with people. “I need the skills and the abilities of these robots, learned in these environments that are easy for us to see how they are evolving,” she said. “So, sports are just a proxy for what we want.”

The post Sony’s Table-Tennis Robot Beat Elite Human Players With Unorthodox Moves appeared first on SingularityHub.

Quantum Computers Are Coming to Break Cryptography Faster Than Anyone Expected

2026-04-28 01:14:10

Algorithmic advances are steadily lowering the bar for quantum attacks—even before large-scale hardware exists.

Online data is generally pretty secure. Assuming everyone is careful with passwords and other protections, you can think of it as being locked in a vault so strong that even all the world’s supercomputers, working together for 10,000 years, could not crack it.

But last month, Google and others released results suggesting a new kind of computer—a quantum computer—might be able to open the vault with significantly fewer resources than previously thought.

The changes are coming on two fronts. On one, tech giants such as IBM and Google are racing to build ever-larger quantum computers: IBM hopes to achieve a genuine advantage over classical computers in some special cases this year, and to deliver an even more powerful “fault-tolerant” system by 2029.

On the other front, theorists are refining quantum algorithms: Recent work shows the resources needed to break today’s cryptography may be far fewer than earlier estimates.

The net result? The day quantum computers can break widely used cryptography—portentously dubbed “Q-Day”—may be approaching faster than expected.

The Quantum Hardware Race

Quantum computers are built from quantum bits, or qubits, which use the counterintuitive properties of very tiny objects to carry out computations in a different, and sometimes far more efficient, way than traditional computers.

So far the technology is in its infancy, and the major goal is to increase the number of qubits that can be connected to work as a single computer. Bigger quantum computers should be much better at some things than their traditional counterparts—they will have a “quantum advantage.”

Late last year, IBM unveiled a 120-qubit chip which it hopes will demonstrate a quantum advantage for some tasks.

Google also recently announced it planned to speed up its move to adopt encryption techniques that should be safe against quantum computers, known as post-quantum cryptography.

Alongside these tech giants, newer approaches are also flourishing. PsiQuantum is using light-based qubits and traditional chip-manufacturing technology. Experimental platforms such as neutral-atom systems have demonstrated control over thousands of qubits in laboratory settings.

In response, standards bodies and national agencies are setting increasingly concrete timelines for moving away from common encryption systems that are vulnerable to quantum attack.

In the United States, the National Institute of Standards and Technology (NIST) has proposed a transition away from quantum-vulnerable cryptography, with migration largely completed by 2035. In Australia, the Australian Signals Directorate has issued similar guidance, urging organizations to begin planning immediately and transition to post-quantum cryptography by 2030.

Algorithms Make the Lock-Picking Faster

Hardware is only half the story. Equally important are advances in quantum algorithms—ways to use quantum computers to attack encryption.

Much interest in quantum computer development was spurred by Peter Shor’s 1994 discovery of an algorithm that showed how quantum computers could efficiently find the prime factors of very large numbers. This mathematical trick is precisely what you need to break the common RSA encryption method.
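Most of Shor's algorithm is actually classical. The quantum computer's only job is finding the period r of a^x mod N; everything else is ordinary number theory. Here is a sketch with brute-force period finding standing in for the quantum step, which is why it only handles toy numbers:

```python
from math import gcd

def find_period(a, n):
    # Smallest r with a^r = 1 (mod n). This is the step a quantum computer
    # does efficiently; brute force stands in for it here.
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n, a=2):
    # Classical post-processing of Shor's algorithm.
    r = find_period(a, n)
    if r % 2:            # need an even period; otherwise retry with another a
        return None
    guess = pow(a, r // 2, n)
    p = gcd(guess - 1, n)
    if 1 < p < n:
        return p, n // p
    return None

print(factor(15))  # → (3, 5)
```

For N = 15 and a = 2 the period is 4, and gcd(2² ± 1, 15) yields the factors 3 and 5. A quantum computer makes the period-finding step tractable for the 2,048-bit numbers RSA actually uses.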

For decades, it was believed a quantum computer would need millions of physical qubits to pose a threat to real-world encryption. That is far more than current systems offer, so the threat felt comfortably distant.

That picture is now changing.

In March 2026, Google’s Quantum AI team released a detailed study showing that far fewer resources may be needed to attack a different kind of encryption which uses mathematical objects called elliptic curves. This is what systems including Bitcoin and Ethereum use—and the study shows how a quantum computer with fewer than half a million physical qubits may be able to crack it in minutes.

That’s still a long way beyond current quantum computers, but around a tenth of earlier estimates.

At the same time, a March 2026 preprint from a Caltech-Berkeley-Oratomic collaboration explores what might be possible using neutral-atom quantum computers. The researchers estimate that Shor’s algorithm could be implemented with as few as 10,000–20,000 atomic qubits. In one design they propose, a system with around 26,000 qubits could crack Bitcoin’s encryption in a few days, while tougher problems like the RSA method with a 2048-bit key would need more time and resources.

In plain terms: The codebreakers are becoming more efficient. Advances in algorithms and design are steadily lowering the bar for quantum attacks, even before large-scale hardware exists.

What Now?

So what does this mean in practice?

First, there is no immediate catastrophe—today’s cryptography won’t be broken overnight. But the direction of travel is clear. Each improvement in hardware or algorithms reduces the gap between current capabilities and useful quantum cracking machines.

Second, viable defenses already exist. NIST has standardized several post-quantum cryptographic algorithms which are believed to be resistant to quantum attacks.

Technology companies have begun deploying these in hybrid modes: Google Chrome and Cloudflare, for example, already support post-quantum protections in some protocols and services.
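The hybrid idea is simple: run both a classical and a post-quantum key exchange, then bind the two shared secrets together so the session key stays safe if either scheme is broken. A minimal sketch of that combination step follows (the secrets are placeholders; real protocols such as TLS derive keys with a proper KDF like HKDF rather than a bare hash):

```python
import hashlib

def hybrid_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Combine a classical shared secret (e.g., from an elliptic-curve exchange)
    # with a post-quantum one (e.g., from ML-KEM). An attacker must break BOTH
    # schemes to recover the derived key.
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Placeholder secrets standing in for real key-exchange outputs:
key = hybrid_key(b"ecdh-shared-secret", b"mlkem-shared-secret")
print(len(key))  # → 32 (a 256-bit session key)
```

This is why hybrid modes are attractive during the transition: they add quantum resistance without betting everything on the newer, less battle-tested algorithms.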

Systems that rely heavily on elliptic-curve cryptography—including cryptocurrencies and many secure communication protocols—will need particular attention. Google’s recent work explicitly highlights the need to migrate blockchain systems to post-quantum schemes.

Finally, this is a two-front race. It is not enough to track progress in quantum hardware alone. Advances in algorithms and error correction can be just as important, and recent results show these improvements can significantly reduce the estimated cost of attacks.

Every new headline about reduced qubit counts or faster quantum algorithms should be understood for what it is: another step toward a future where today’s cryptographic assumptions no longer hold.

The only reliable defense is to move—deliberately but decisively—toward quantum-safe cryptography.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Quantum Computers Are Coming to Break Cryptography Faster Than Anyone Expected appeared first on SingularityHub.

This Week’s Awesome Tech Stories From Around the Web (Through April 25)

2026-04-26 01:02:39

Future

The People Do Not Yearn for Automation
Nilay Patel | The Verge

“Not everything about our lives can be measured and automated and optimized, and it shouldn’t be. And so the tech industry is rushing forward to put AI everywhere at enormous cost—energy, emissions, manufacturing capacity, the ability to buy RAM—and locked into the narrow framework of software brain without realizing they are also asking people to be fundamentally less human.”

Biotechnology

AI-Designed Drugs by a DeepMind Spinoff Are Headed to Human Trials
Emily Mullin | Wired ($)

“In a technical paper [released earlier this year], the company touts that the [new IsoDDE] platform more than doubles the accuracy of AlphaFold 3. The startup has formed partnerships with Eli Lilly and Novartis to work together on AI drug discovery and is also advancing its own ‘broad and exciting pipeline of new medicines’ in oncology and immunology, Jaderberg said.”

Computing

We Might Finally Know How to Use Quantum Computers to Boost AI
Karmela Padavic-Callaghan | New Scientist ($)

“They showed not only that this approach can work but that it would allow the quantum computer to process more data at a smaller memory cost than any conventional computer. The memory advantage is so large, in fact, that a quantum computer made from about 300 error-proof building blocks called logical qubits would outperform a classical computer built using every atom in the observable universe, says Zhao.”

Future

New Gas-Powered Data Centers Could Emit More Greenhouse Gases Than Entire Nations
Molly Taft | Wired ($)

“A Wired review of permits for data center projects using natural gas and linked to OpenAI, Meta, Microsoft, and xAI shows they could emit more than 129 million tons of greenhouse gases per year. …As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.”

Tech

Anthropic Has Surged to a Trillion-Dollar Valuation on Secondary Markets, Overtaking OpenAI
Ben Bergman | Business Insider

“Desperate buyers are in a race to secure a dwindling supply of secondary shares in Anthropic, driving the AI company’s valuation on some sites to $1 trillion, a price that would have seemed unthinkable even a few weeks ago. Meanwhile, traders Business Insider spoke with are seeing slumping demand for OpenAI, which is now trading at a discount to Anthropic, despite OpenAI being valued at $852 billion, more than twice Anthropic’s valuation in their most recent funding rounds.”

Tech

You’re About to Feel the AI Money Squeeze
Hayden Field | The Verge

“Ads, rate limits, feature restrictions, price hikes. The AI free ride is over. …To reach that bare minimum of 7 percent [return on invested capital], Gartner forecasts that large AI companies would need to earn cumulatively close to $7 trillion in AI-driven revenue through 2029, which is close to $2 trillion per year by the end of the period.”

Future

BMW Is One Step Closer to Selling You a Color-Changing Car
Andrew Liszewski | The Verge

“The new BMW iX3 Flow Edition is potentially the most exciting of all of BMW’s concepts as it embeds the E Ink Prism technology directly into the structure of the vehicle’s hood panel, instead of just slapping it on top. The new approach has ‘undergone BMW’s stringent quality testing’ so that it meets the ‘requirements of automotive engineering and everyday use,’ according to a release from E Ink.”

Biotechnology

The FDA Gives the Green Light to the First Gene Therapy for Deafness
Rob Stein | NPR

“‘That was like the most surreal moment a mother can feel when your son first hears your voice,’ [said Sierra Smith]. The treatment [Smith’s son] received was the one just approved by the FDA. …The FDA’s decision was based on the results from the treatment of 20 patients born with a defective version of a gene known as OTOF, which is necessary to transmit sound from the ears to the brain.”

Energy

Will Fusion Power Get Cheap? Don’t Count On It.
Casey Crownhart | MIT Technology Review ($)

“Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in 2013. But historically, different technologies tend to go through this curve at different rates. And the cost of fusion might not sink as quickly as the prices of batteries or solar.”

Biotechnology

A Startup Says It Grew Human Sperm in a Lab—and Used It to Make Embryos
Emily Mullin | Wired ($)

“The process involves isolating sperm-making stem cells from testicular tissue and coaxing the cells into becoming fully-fledged sperm in a dish. Scientists have been attempting to produce sperm outside the body, known as in vitro spermatogenesis, for almost a century. A Japanese team was the first to produce viable mouse sperm in the lab in 2011, but making human sperm has turned out to be a more difficult task.”

Artificial Intelligence

Are OpenAI and Anthropic Moving Away From Reasoning Tech?
Stephanie Palazzolo | The Information ($)

“Early signs point to both Spud and Mythos being more intelligent pretrained models, meaning they got smart during the initial part of the development process. Now, OpenAI’s upcoming Spud model is noticeably better at answering tough questions without relying on reasoning, said two people familiar with it.”

Future

Only Antimatter Provides the Energy We Need for Interstellar Travel
Ethan Siegel | Big Think

“If our goal is to eventually extend our reach not just to the other worlds of our Solar System, but to exoplanets around other stars, we’ll need a different, more efficient method of propulsion than chemical-based rockets can supply. The most efficient form of energy generation, theoretically, is to reach 100%, and only one fuel is capable of doing that: matter-antimatter annihilation. Here’s why that’s the ultimate dream, and how we might conceivably get there.”

Biotechnology

If a Bird Flu Pandemic Starts, We May Have an mRNA Vaccine Ready
Michael Le Page | New Scientist ($)

“It was roughly a year after the earliest cases of covid-19 before the first vaccines against the SARS-CoV-2 virus were ready for roll-out. By then millions had died worldwide and economies were devastated. In the advent of a bird flu pandemic, we will be able to react more rapidly, because we should have an mRNA vaccine already approved and ready to go. A phase III trial of such a vaccine is now getting under way in the UK and the US.”

The post This Week’s Awesome Tech Stories From Around the Web (Through April 25) appeared first on SingularityHub.

A Humanoid Robot Beat the Human World Record for a Half Marathon

2026-04-25 06:34:04

A year after most robots failed to finish the Beijing race, nearly half the field autonomously ran a course of slopes, narrow passages, and 20 turns.

Humanoid robots are Silicon Valley’s latest obsession, but real-world performance has lagged the hype. That may be starting to change after a robot beat the human record for a half marathon by nearly seven minutes in Beijing.

While tech companies around the world are piling into humanoid robots, China has made it a national priority. The government is pouring subsidies and infrastructure investment into the sector, and Chinese firms already account for around 80 percent of the humanoid machines shipped globally, according to the South China Morning Post.

Eager to show off its prowess, China has been staging sporting events for robots, most notably last year’s inaugural World Humanoid Robot Games. Another such event, the Beijing E-Town Half Marathon, pits humanoid robots against thousands of human runners over a 13-mile course. Last year, most of the non-human competitors failed to finish, and the fastest robots managed an unimpressive two hours and 40 minutes.

But this time around, four robots clocked times under an hour. And the winner, made by Chinese smartphone company Honor, registered a record-breaking 50 minutes, 26 seconds, eclipsing the benchmark set by Ugandan long-distance runner Jacob Kiplimo in Lisbon last month.

“Running faster may not seem meaningful at first, but it enables technology transfer, for example, into structural reliability and cooling, and eventually industrial applications,” Du Xiaodi, an engineer on the winning team, told Reuters.

More than 100 teams fielded 300 robots at this year’s event, up from just 21 entries at the inaugural event last year. But Honor, a spinoff from Chinese telecom giant Huawei, dominated the competition, with separate teams from the company taking all three podium spots.

The winning robot, Lightning, navigated the course entirely autonomously. The bot stands 5 feet 6 inches tall but features legs 37 inches long to mimic the physical attributes of elite runners. It also boasts liquid cooling technology used in the company’s smartphones.

The growing sophistication of the robots’ control software is perhaps one of the starkest shifts since last year, with roughly 40 percent of teams operating autonomously. This is particularly impressive given the challenging course, according to Bernstein Research analysts.

“The course included flat sections, slopes, narrow passages, and ~ 20 turns, demonstrating rapid improvement in robots’ intelligence to handle generalized environments in the real world,” they wrote, according to Bloomberg.

But the technology isn’t bulletproof yet. One robot ran into a barricade and had to be carried off on a stretcher. Another veered into a bush after crossing the finish line. And one continued racing with its torso held together by packing tape after a heavy fall.

Nonetheless, the race showcased the rapid progress China’s tech industry is making, particularly in the raw components used to build these machines, like motors, joints, and batteries. Liu Xiangquan, a robotics professor at Beijing Information Science and Technology University, told the South China Morning Post that long-distance running is a great test of how well these components can stand up to the kind of repeated strain that will occur in industrial settings.

And that’s likely to cause some consternation in US policy circles, where many see robotics as a key battlefront in the growing technological rivalry between the two superpowers.

Behind Sunday’s spectacle is a higher-stakes contest between China and the US over who will dominate the next generation of humanoids. US robotics firms have been lobbying Washington to draft a national strategy to counter China, which could include tariffs or bans on Chinese robots to help protect domestic producers.

However, running fast in a straight line is a very different challenge than the fine motor control and perception demanded by commercial applications. Experts told Reuters that despite impressive hardware, robotics companies are still a long way from developing the sophisticated software required to put these humanoids to practical use.

Still, these machines struggled to get over the starting line just a year ago. The gap between humanoid robots and human athletes has closed faster than anyone expected, so betting against further rapid progress seems unwise.

The post A Humanoid Robot Beat the Human World Record for a Half Marathon appeared first on SingularityHub.