2026-05-02 22:00:00
I’ve Covered Robots for Years. This One Is Different
Will Knight | Wired ($)
“Eka’s robot demos suggest that the company’s approach should enable real robot dexterity with further training. If that’s true, it could revolutionize how robots are used—not only in factories and warehouses but also in shops, restaurants, even households. ‘Trillions of dollars flow through the human hand,’ Agrawal says. ‘To me, this is the biggest problem in the world to be solved.'”
I Built an Agent to Do My Job. Then It Hung up on My Boss.
Amanda Hoover | Business Insider
“The various generative AI systems I used in this piece both unsettled me with their ability and unnerved me with their shortcomings. …The process was so tedious that even if ChatGPT could spin up the copy in seconds, every step I took to make that happen added to the workload.”
This Treatment Could Reverse Osteoarthritis Joint Damage With a Single Injection
Javier Carbajal | Wired ($)
“Osteoarthritis has no cure, but researchers have developed new therapies that help aging or damaged joints repair themselves in a matter of weeks. …The Colorado team led by biomedical engineer Stephanie Bryant proposes a radically different approach: ‘Our goal is not just to treat pain and halt progression, but to end this disease.'”
Get Ready for More Brain-Scanning Consumer Gadgets
Julian Chokkattu | Wired ($)
“The next gadget you put on your head could scan your brain. Neurable, a Boston-based company that embeds its noninvasive brain-scanning technology into hardware to monitor a person’s focus levels, announced on Tuesday that it is transitioning to a licensing platform model. By certifying third parties, Neurable expects its tech to be in a ‘flood’ of consumer gadgets this year and next.”
Study Finds a Third of New Websites Are AI-Generated
Matthew Gault | 404 Media
“Inspired by the Dead Internet Theory—the idea that much of the internet is now just bots talking back and forth—the team set out to find out how ChatGPT and its competitors had reshaped the internet since 2022. …’We find that by mid-2025, roughly 35% of newly published websites were classified as AI-generated or AI-assisted, up from zero before ChatGPT’s launch in late 2022,’ [the researchers write].”
The Clock Is Ticking for Big Tech to Make AI Pay
Asa Fitch and Dan Gallagher | The Wall Street Journal ($)
“Depreciation charges surged at all four companies, totaling $41.6 billion for the most recent quarter. When companies make capital investments, they don’t count the outlays immediately as expenses. Rather, these capital assets have to be depreciated over a period of time. So the impact on profits is delayed. But a multitrillion-dollar bill will have to wash through in coming years, taking a bite out of reported profits.”
The More Young People Use AI, the More They Hate It
Janus Rose | The Verge
“Contrary to the tales spun by tech companies like OpenAI and Google, polling data shows that Gen Z students and workers are a big part of the wider cultural backlash against AI. And even as they utilize these tools, vast swaths of young people are deeply acrimonious and even resentful of the AI-centric future that many feel is being forced on them.”
So, About That AI Bubble
Rogé Karma | The Atlantic ($)
“Six months ago, people arguing that AI was a bubble were pointing to real-world facts, whereas people arguing against the bubble hypothesis were making speculative promises about the future. Today, the roles have reversed. AI’s explosive growth may yet encounter some new unforeseen obstacle. But the burden of proof has shifted to the naysayers.”
A Falcon 9 Rocket Will Hit the Moon This Summer at Seven Times the Speed of Sound
Eric Berger | Ars Technica
“Bill Gray, who writes the widely used Project Pluto software to track near-Earth objects, has published a comprehensive report on the impact expected to occur at 2:44 am ET (06:44 UTC) on August 5. The Falcon 9 rocket’s upper stage is 13.8 meters (45 feet) tall and has a 3.7-meter (12 feet) diameter. Since the moon has no atmosphere, it will strike the lunar surface intact.”
OpenAI Could Be Making a Phone With AI Agents Replacing Apps
Ivan Mehta | TechCrunch
“Currently, Apple and Google control the app pipeline and the type of system access they get, restricting some of their functions. Kuo suggests that by creating its own smartphone and hardware stack, OpenAI would be able to use AI in all kinds of features without restrictions. With ChatGPT nearing a billion weekly users, a hardware product for daily use could also bode well for OpenAI’s ambition to reach more consumers.”
Meta Inks Deal for Solar Power at Night, Beamed From Space
Tim Fernholz | TechCrunch
“The company [Overview Energy] is developing spacecraft that collect plentiful solar power in space. It then plans to convert that energy to near-infrared light and beam it at sufficiently large solar farms—on the order of hundreds of megawatts—which can convert that light to electricity.”
Microsoft and OpenAI’s Famed AGI Agreement Is Dead
Hayden Field | The Verge
“The change impacts a revenue-sharing agreement, which was supposed to stay in place until AGI was declared. …The payments will also continue and then end ‘independent of OpenAI’s technology progress,’ which under any reasonable logic includes AGI.”
10,000 New Planets Found Hidden in NASA Telescope Data
Jonathan O’Callaghan | New Scientist ($)
“By combining images taken by the telescope, the researchers were able to look for planets around stars that are less bright, due to their smaller size or greater distance from Earth, than was previously possible. This revealed 11,554 candidate exoplanets, of which 10,091 have not been identified in previous exoplanet searches.”
The post This Week’s Awesome Tech Stories From Around the Web (Through May 2) appeared first on SingularityHub.
2026-05-02 03:29:07
Abilities taught to one robot don’t usually work on another. With a new approach, it’s one and done.
As robots move into the real world, they’ll need to become more adaptable. But right now, it’s hard to transfer skills from one machine to another. A new system makes this possible.
One of the most popular ways to teach robots is to have a human show them what to do—either by physically guiding the robot’s joints, using remote control, or even drawing the desired motion.
But those skills are indelibly tied to each specific robot. If a company upgrades to a new robot with a different design, the skill breaks, and the robot has to be trained from scratch.
Researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) have now sidestepped this challenge by teaching robots to understand the limits of their own joints. In a paper published in Science Robotics, the team showed that the new approach allowed multiple robots to complete a task based on a single human demonstration.

“With new designs come different capabilities and constraints,” Durgesh Haribhau Salunkhe, a co-author of the paper, told Ars Technica. “The problem is to adapt to these constraints and capabilities—to faithfully replicate the actions demonstrated by a human.”
Surprisingly, the approach doesn’t rely on AI. Instead, the researchers analyzed the physical properties of several robotic arms with three rotating joints—a popular design in commercial settings—to map out their limits.
To complete a task, a robotic arm must calculate how to bend each joint to reach its target. It also has to avoid pushing the joints past their physical limits or twisting them at weird angles. Engineers call these limits “singularities” because they cause the math governing the robots’ motion to break down. Failures can cause sudden and unsafe movements.
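The singularity problem described above can be made concrete with a standard textbook case (this sketch is illustrative only, not the paper's method). For a planar two-joint arm, the determinant of the Jacobian matrix collapses to zero when the elbow fully straightens, and near that point tiny hand motions demand enormous joint velocities:

```python
import math

# Illustrative sketch: a two-link planar arm with link lengths L1 and L2
# (values chosen arbitrarily). Its Jacobian determinant is
# det J = L1 * L2 * sin(theta2), which vanishes when the elbow angle
# theta2 is 0 or pi -- the arm fully stretched or folded. That is a
# kinematic singularity: the math mapping hand velocity to joint
# velocity breaks down, so controllers must steer around it.
L1, L2 = 1.0, 1.0  # link lengths in meters

def jacobian_det(theta1, theta2):
    # For this 2-link arm the determinant depends only on the elbow angle.
    return L1 * L2 * math.sin(theta2)

# As the elbow angle approaches zero, det(J) -> 0 and the joint speed
# required for even a slow hand motion blows up.
for elbow in [math.pi / 2, math.pi / 6, 0.01, 0.0]:
    print(f"elbow={elbow:.2f} rad  det(J)={jacobian_det(0.3, elbow):.4f}")
```

Mapping out where this determinant shrinks toward zero is, in spirit, what "safe regions" of a robot's range of motion means.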
The researchers mapped safe regions in each robot’s range of motion and sorted all three-joint robots into six categories based on shared physical limits.
They embedded these limits into each robot’s programming. The team calls this “kinematic intelligence,” essentially knowledge of what movements the machines can and can’t make safely.
If a movement pushes the robot into an unsafe zone, the system activates what the researchers call a “track cycle.” This is a strategy for skirting the danger zone, tailored to the robot’s category. Some robots traverse horizontally along zones, others vertically, and some switch modes.
As a real-world test, the team set up a mock assembly line with three commercial robots: one whose movements are relatively constrained, another with more flexibility, and a third capable of a much wider range of motions.
A human demonstrated the tasks: pushing an object off a conveyor belt, picking it up and placing it on a workbench, and then putting it in a basket. Each robot tried these tasks, and although the movements pushed them close to their limits, all three followed the demonstrations successfully.
The system currently handles a robot’s physical limits well and keeps movements safe. But it isn’t designed for unpredictable environments or complex decisions. So it’s likely best suited to highly controlled factory settings rather than the messier real world.
Still, allowing robots to share skills could make it easier to roll them out across a range of commercial settings. It won’t bring us the robot butlers Silicon Valley has promised, but it could accelerate the much more practical integration of robots in industry.
The post Robots With Different Designs Can Now Share Skills appeared first on SingularityHub.
2026-05-01 06:48:37
Imagination may have more to do with the brain activity it silences than the activity it creates.
Your brain is currently expending about a fifth of your body’s energy, and almost none of that is being used for what you’re doing right now. Reading these words, feeling the weight of your body in a chair—all of this together barely changes the rate at which your brain consumes energy, perhaps by as little as 1 percent.
The other 99 percent is used on the activity the brain generates on its own: neurons (nerve cells) firing and signaling to each other regardless of whether you’re thinking hard, watching television, dreaming, or simply closing your eyes.
Even in the brain areas dedicated to vision, the visuals coming in through your eyes shape the activity of your neurons less than this internal ongoing action.
In a paper recently published in Psychological Review, we argue that our imagination sculpts the images we see in our mind’s eye by carving into this background brain activity. In fact, imagination may have more to do with the brain activity it silences than with the activity it creates.
Consider how “seeing” is understood to work. Light enters the eyes and sparks neural signals. These travel through a sequence of brain regions dedicated to vision, each building on the work of the last.
The earliest regions pick out simple features such as edges and lines. The next combine those into shapes. The ones after that recognize objects, and those at the top of the sequence assemble whole faces and scenes.
Neuroscientists call this “feedforward activity”—the gradual transformation of raw light into something you can name, whether it’s a dog, a friend, or both.
In brain science, the standard view is that visual imagination is this original seeing process run in reverse, from within your mind rather than from light entering your eyes.
So, when you hold the face of a friend in mind, you start with an abstract idea of them—a memory or a name, pulled from the filing cabinet of regions that sit beyond the visual system itself.
That idea travels back down through the visual sequence into the early visual areas, which serve as your brain’s workshop where a face would normally be reconstructed from its parts—the curve of a jawline, the specific shade of an eye. These downward signals are called “feedback activity.”
However, prior research shows this feedback activity doesn’t drive visual neurons to fire in the same way as when you actually see something.
At least in the brain regions early in the vision process, feedback instead modulates brain activity. This means it increases or decreases the activity of the brain cells, reshaping what those neurons are already doing.
Even behind closed eyes, early visual brain areas keep producing shifting patterns of neural activity resembling those the brain uses to process real vision.
Imagination doesn’t need to build a face from scratch. The raw material is already there. In the internal rumblings of your visual areas, fragments of every face you know are drifting through at low volume. Your friend’s face, even now, is passing through in pieces, scattered and unrecognized. What imagining does is hold still the currents that would otherwise carry those pieces away.
All that’s needed is a small, targeted suppression of neurons that are pulled by brain activity in a different direction, and your friend’s face settles out of the noise, like a signal carving its way through static.
In mice, artificially switching on as few as 14 neurons in a sensory brain region is enough for the animal to notice it and lick a sugar-water spout in response. This shows how small an intervention in the brain can be while still steering behavior.
While we don’t know how many neurons are needed to steer internal activity into a conscious experience of imagination in humans, growing evidence shows the importance of dampening neural activity.
In our earlier experiments, when people imagined something, the fingerprint it left on their behavior matched suppression of neuronal activity—not firing. Other researchers have since found the same pattern.
Other lines of evidence strengthen our theory, too. About one in 100 people have aphantasia, which means they can’t form mental images at all. One in 30 form these images so vividly they approach the intensity of images we actually see, known as hyperphantasia.
Research has found that people with weaker mental imagery have more excitable early visual areas, where neurons fire more readily on their own. This is consistent with a visual system whose spontaneous patterns are harder to hold in shape.
Taking all this together, the spontaneous activity reshaping hypothesis—our new theory that imagination carves images out of the steady stream of ongoing brain activity—explains why imagination usually feels weaker than sight. It also explains why we rarely lose track of which is which.
Visual perception arrives with a strength and regularity the brain’s own internal patterns don’t match. Imagination works with those patterns rather than against them, reshaping what is already there into something we can almost see.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post How Does Imagination Really Work in the Brain? New Theory Upends What We Knew appeared first on SingularityHub.
2026-04-28 22:00:00
AI long ago surpassed humans at games like chess and Go. Now it’s powering robots that can challenge top athletes.
Peter Dürr could barely follow the table-tennis ball as it zoomed across the net, each strike’s trajectory designed to perplex the opponent. This was no ordinary match: Taira Mayuka, one of the top players in the world, was on one side; on the other was a robot called Ace.
Mayuka launched a twisting smash that should have nailed a point. But in the blink of an eye, Ace answered with a return that kept the game alive. “Yes!” Dürr pumped his fist, knowing his team had engineered a historic moment for robotics.
Sony AI’s Ace is the latest autonomous system to be pitted against humans in a game. Since Deep Blue defeated chess champion Garry Kasparov in 1997, AI has trounced humans in Jeopardy, Go, StarCraft II, and car-racing simulations.
Ace has now taken these virtual victories into the real world.

Up against seven top human players, the AI-controlled robot arm beat three in multiple adrenaline-pumping games. Ace is an “important milestone,” wrote Carlos H. C. Ribeiro and Esther Colombini at the Aeronautics Institute of Technology and University of Campinas, respectively, who were not involved in the study.
Ace joins a humanoid robot that crushed the world record for a half marathon in Beijing last week. Neither project is focused on creating elite robotic athletes. Their main goal is to build next-generation autonomous machines that operate fluidly in the physical world.
“We wanted to prove that AI doesn’t just exist in virtual spaces,” Michael Spranger, president of Sony AI, said in a press release. “It’s not just tech you interact with in the virtual world—you can actually have a physical experience, and the technology is ready for that.”
Robots have come a long way. The clumsy, bumbling humanoids are gone, replaced by agile machines that can navigate all kinds of terrain. Autonomous vehicles once baffled by our roads now cruise the streets. Dexterous robotic arms are increasingly used for surgery, warehouse operations, or even delivering your lunch.
AI is a big part of that leap in capability. Robots are no longer strictly preprogrammed machines. They can now learn, adapt, and make decisions, with generative AI models helping them understand what they’re looking at and, increasingly, how to interact with it. They’re a little less like yesterday’s rigid machines and more like curious kids: taking in a messy world, figuring it out, and getting better over time.
But compared to humans, robots still struggle to react on the fly, especially in fast-paced games like table tennis. The sport is a brutal mix of speed, perception, and precision. Players must read the ball and strike in a split second. There’s no margin for error. Too much power or the wrong angle, and the ball flies off the table. Too predictable, and you’ve likely handed your opponent the next point.
Professional players can smash shots up to 67 miles per hour and impart “a massive amount of spin on the ball,” exceeding 160 rotations a second, Dürr told Nature, making it tough for rookie humans and robots to react in time.
To Dürr, building a robot that could compete with elite human players was a “dream project” that “would challenge us to push the individual component technologies to their limits.”
Ace seamlessly fuses AI-based software and hardware.
For its eyes, the team placed cameras outside the court that could cover the entire playing area and track the ball’s position about 200 times per second. They also used an event-based image sensor to capture the ball’s spin. Together, these give the “robot the information it needs to anticipate where the ball is going to go, and plan how to hit it back,” said Dürr.
All that data feeds into multiple AI algorithms: Ace’s “brain.” One of these algorithms, borrowed from image processing, focuses on key parts of each frame to increase processing speed. Another, a deep reinforcement learning algorithm, learned to play table tennis in simulated matches. (Think student and coach: The model decides how to swing, where to aim, and how hard to hit. The “coach” gives feedback—good or bad—without demonstrating any moves.)
“So basically, we shoot a ball in simulation at our robot and let it do random things. At the beginning, it doesn’t know how to react…But eventually, it may be lucky enough to hit the ball back on the table,” said Dürr. And over countless iterations, it improves its play.
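The trial-and-error loop Dürr describes can be sketched with a toy stand-in (everything here is invented for illustration: the angle buckets, the reward rule, and the names are not from Sony's system). The "student" picks a swing angle for an incoming ball; the "coach" only reports whether the ball landed, and over many episodes the learner's value estimates converge on good swings:

```python
import random

random.seed(0)

ANGLES = [i * 10 for i in range(10)]  # candidate swing angles, in degrees

def coach_reward(ball_speed, angle):
    # Hidden rule the agent must discover: faster balls need flatter swings.
    # The coach only says "landed" (1.0) or "missed" (0.0) -- no demonstrations.
    best = 80 - ball_speed
    return 1.0 if abs(angle - best) <= 10 else 0.0

# Value estimate for each (ball-speed bucket, swing angle) pair.
q = {(s, a): 0.0 for s in range(1, 6) for a in ANGLES}
counts = {k: 0 for k in q}

for episode in range(20000):
    speed = random.randint(1, 5) * 10       # incoming ball, 10-50 mph buckets
    state = speed // 10
    if random.random() < 0.1:               # explore: try a random swing
        angle = random.choice(ANGLES)
    else:                                   # exploit: best-known swing so far
        angle = max(ANGLES, key=lambda a: q[(state, a)])
    r = coach_reward(speed, angle)
    counts[(state, angle)] += 1             # running-average value update
    q[(state, angle)] += (r - q[(state, angle)]) / counts[(state, angle)]

# After training, the greedy policy picks a near-correct angle per speed.
policy = {s: max(ANGLES, key=lambda a: q[(s, a)]) for s in range(1, 6)}
print(policy)
```

The real system replaces this lookup table with a deep network and a physics simulator, but the structure — random action, scalar reward, incremental value update — is the same.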
Expert players coached Ace too. In table tennis, the initial toss sets up the serve. Ace learned from human demonstrations adapted to its mechanics, so every toss follows the game’s rules.
After thousands of simulated hours, and with the help of yet another algorithm to weed out poor plays, the team built a library of realistic serves for Ace to draw upon.
The last component was the arm itself—and off-the-shelf didn’t work. “There’s nothing on the market that would let us play at the level we wanted to play,” said Dürr. So they built their own robot from the ground up. The lightweight, six-jointed arm can whip a racket at over 20 meters (roughly 66 feet) per second and react roughly 11 times faster than a person.
All assembled, Ace is a table-tennis powerhouse—but not unbeatable. Against five elite and two professional players, it dominated the less-experienced elites but fell to the pros. In the months since the team wrote up their results, the robot continued improving against top-tier competition.
Ace didn’t win by simply being faster than humans. Rather, it won by being inventive. It created different kinds of spins, varied its returns, and consistently landed the ball on target. When Olympic table-tennis player Kinjiro Nakamura watched Ace play, he was mesmerized by the robot’s unconventional moves. “No one else would have been able to do that. I didn’t think it was possible,” he said. But if a robot can pull it off, maybe humans can too.
For Colombini, who worked on soccer-playing robots, that kind of agility and improvisation is the real goal. Robots need to think on their feet and easily navigate the physical world to work safely with people. “I need the skills and the abilities of these robots, learned in these environments that are easy for us to see how they are evolving,” she said. “So, sports are just a proxy for what we want.”
The post Sony’s Table-Tennis Robot Beat Elite Human Players With Unorthodox Moves appeared first on SingularityHub.
2026-04-28 01:14:10
Algorithmic advances are steadily lowering the bar for quantum attacks—even before large-scale hardware exists.
Online data is generally pretty secure. Assuming everyone is careful with passwords and other protections, you can think of it as being locked in a vault so strong that even all the world’s supercomputers, working together for 10,000 years, could not crack it.
But last month, Google and others released results suggesting a new kind of computer—a quantum computer—might be able to open the vault with significantly fewer resources than previously thought.
The changes are coming on two fronts. On one, tech giants such as IBM and Google are racing to build ever-larger quantum computers: IBM hopes to achieve a genuine advantage over classical computers in some special cases this year, and an even more powerful “fault-tolerant” system by 2029.
On the other front, theorists are refining quantum algorithms: Recent work shows the resources needed to break today’s cryptography may be far fewer than earlier estimates.
The net result? The day quantum computers can break widely used cryptography—portentously dubbed “Q-Day”—may be approaching faster than expected.
Quantum computers are built from quantum bits, or qubits, which use the counterintuitive properties of very tiny objects to carry out computations differently from traditional computers—and, for some problems, far more efficiently.
So far the technology is in its infancy, with the major goal being to increase the number of qubits that can be connected to work as a single computer. Bigger quantum computers should be much better at some things than their traditional counterparts—they will have a “quantum advantage.”
Late last year, IBM unveiled a 120-qubit chip which it hopes will demonstrate a quantum advantage for some tasks.
Google also recently announced it planned to speed up its move to adopt encryption techniques that should be safe against quantum computers, known as post-quantum cryptography.
Alongside these tech giants, newer approaches are also flourishing. PsiQuantum is using light-based qubits and traditional chip-manufacturing technology. Experimental platforms such as neutral-atom systems have demonstrated control over thousands of qubits in laboratory settings.
In response, standards bodies and national agencies are setting increasingly concrete timelines for moving away from common encryption systems that are vulnerable to quantum attack.
In the United States, the National Institute of Standards and Technology (NIST) has proposed a transition away from quantum-vulnerable cryptography, with migration largely completed by 2035. In Australia, the Australian Signals Directorate has issued similar guidance, urging organizations to begin planning immediately and transition to post-quantum cryptography by 2030.
Hardware is only half the story. Equally important are advances in quantum algorithms—ways to use quantum computers to attack encryption.
Much interest in quantum computer development was spurred by Peter Shor’s 1994 discovery of an algorithm that showed how quantum computers could efficiently find the prime factors of very large numbers. This mathematical trick is precisely what you need to break the common RSA encryption method.
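The classical skeleton of Shor's algorithm fits in a few lines (a sketch for intuition, not a quantum implementation). Given a number n and a coprime base a, find the order r of a modulo n, then use it to extract a factor; the brute-force order-finding loop below is exactly the step a quantum computer performs exponentially faster:

```python
from math import gcd

def find_order(a, n):
    # Smallest r > 0 with a**r % n == 1. Classically this brute-force
    # search is intractable for cryptographic-size n; order finding is
    # the subroutine Shor's algorithm speeds up exponentially.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    # Classical skeleton of Shor's algorithm for factoring n.
    if gcd(a, n) != 1:
        return gcd(a, n)      # lucky pick: a already shares a factor with n
    r = find_order(a, n)
    if r % 2 == 1:
        return None           # odd order: retry with another base a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None           # trivial square root: retry with another base
    return gcd(y - 1, n)      # nontrivial factor of n

print(shor_classical(15, 7))  # order of 7 mod 15 is 4, so gcd(7**2 - 1, 15) = 3
```

RSA's security rests on factoring being this hard classically, which is why an efficient quantum order-finder breaks it.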
For decades, it was believed a quantum computer would need millions of physical qubits to pose a threat to real-world encryption. This is far bigger than current systems, so the threat felt comfortably distant.
That picture is now changing.
In March 2026, Google’s Quantum AI team released a detailed study showing that far fewer resources may be needed to attack a different kind of encryption which uses mathematical objects called elliptic curves. This is what systems including Bitcoin and Ethereum use—and the study shows how a quantum computer with fewer than half a million physical qubits may be able to crack it in minutes.
That’s still a long way beyond current quantum computers, but around ten times less than earlier estimates.
At the same time, a March 2026 preprint from a Caltech-Berkeley-Oratomic collaboration explores what might be possible using neutral-atom quantum computers. The researchers estimate that Shor’s algorithm could be implemented with as few as 10,000–20,000 atomic qubits. In one design they propose, a system with around 26,000 qubits could crack Bitcoin’s encryption in a few days, while tougher problems like the RSA method with a 2048-bit key would need more time and resources.
In plain terms: The codebreakers are becoming more efficient. Advances in algorithms and design are steadily lowering the bar for quantum attacks, even before large-scale hardware exists.
So what does this mean in practice?
First, there is no immediate catastrophe—today’s cryptography won’t be broken overnight. But the direction of travel is clear. Each improvement in hardware or algorithms reduces the gap between current capabilities and useful quantum cracking machines.
Second, viable defenses already exist. NIST has standardized several post-quantum cryptographic algorithms which are believed to be resistant to quantum attacks.
Technology companies have begun deploying these in hybrid modes: Google Chrome and Cloudflare, for example, already support post-quantum protections in some protocols and services.
Systems that rely heavily on elliptic-curve cryptography—including cryptocurrencies and many secure communication protocols—will need particular attention. Google’s recent work explicitly highlights the need to migrate blockchain systems to post-quantum schemes.
Finally, this is a two-front race. It is not enough to track progress in quantum hardware alone. Advances in algorithms and error correction can be just as important, and recent results show these improvements can significantly reduce the estimated cost of attacks.
Every new headline about reduced qubit counts or faster quantum algorithms should be understood for what it is: another step toward a future where today’s cryptographic assumptions no longer hold.
The only reliable defense is to move—deliberately but decisively—toward quantum-safe cryptography.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Quantum Computers Are Coming to Break Cryptography Faster Than Anyone Expected appeared first on SingularityHub.
2026-04-26 01:02:39
The People Do Not Yearn for Automation
Nilay Patel | The Verge
“Not everything about our lives can be measured and automated and optimized, and it shouldn’t be. And so the tech industry is rushing forward to put AI everywhere at enormous cost—energy, emissions, manufacturing capacity, the ability to buy RAM—and locked into the narrow framework of software brain without realizing they are also asking people to be fundamentally less human.”
AI-Designed Drugs by a DeepMind Spinoff Are Headed to Human Trials
Emily Mullin | Wired ($)
“In a technical paper [released earlier this year], the company touts that the [new IsoDDE] platform more than doubles the accuracy of AlphaFold 3. The startup has formed partnerships with Eli Lilly and Novartis to work together on AI drug discovery and is also advancing its own ‘broad and exciting pipeline of new medicines’ in oncology and immunology, Jaderberg said.”
We Might Finally Know How to Use Quantum Computers to Boost AI
Karmela Padavic-Callaghan | New Scientist ($)
“They showed not only that this approach can work but that it would allow the quantum computer to process more data at a smaller memory cost than any conventional computer. The memory advantage is so large, in fact, that a quantum computer made from about 300 error-proof building blocks called logical qubits would outperform a classical computer built using every atom in the observable universe, says Zhao.”
New Gas-Powered Data Centers Could Emit More Greenhouse Gases Than Entire Nations
Molly Taft | Wired ($)
“A Wired review of permits for data center projects using natural gas and linked to OpenAI, Meta, Microsoft, and xAI shows they could emit more than 129 million tons of greenhouse gases per year. …As tech companies race to secure massive power deals to build out hundreds of data centers across the country, these projects represent just the tip of the iceberg when it comes to the potential climate cost of the AI boom.”
Anthropic Has Surged to a Trillion-Dollar Valuation on Secondary Markets, Overtaking OpenAI
Ben Bergman | Business Insider
“Desperate buyers are in a race to secure a dwindling supply of secondary shares in Anthropic, driving the AI company’s valuation on some sites to $1 trillion, a price that would have seemed unthinkable even a few weeks ago. Meanwhile, traders Business Insider spoke with are seeing slumping demand for OpenAI, which is now trading at a discount to Anthropic, despite OpenAI being valued at $852 billion, more than twice Anthropic’s valuation in their most recent funding rounds.”
You’re About to Feel the AI Money Squeeze
Hayden Field | The Verge
“Ads, rate limits, feature restrictions, price hikes. The AI free ride is over. …To reach that bare minimum of 7 percent [return on invested capital], Gartner forecasts that large AI companies would need to earn cumulatively close to $7 trillion in AI-driven revenue through 2029, which is close to $2 trillion per year by the end of the period.”
BMW Is One Step Closer to Selling You a Color-Changing Car
Andrew Liszewski | The Verge
“The new BMW iX3 Flow Edition is potentially the most exciting of all of BMW’s concepts as it embeds the E Ink Prism technology directly into the structure of the vehicle’s hood panel, instead of just slapping it on top. The new approach has ‘undergone BMW’s stringent quality testing’ so that it meets the ‘requirements of automotive engineering and everyday use,’ according to a release from E Ink.”
The FDA Gives the Green Light to the First Gene Therapy for Deafness
Rob Stein | NPR
“‘That was like the most surreal moment a mother can feel when your son first hears your voice,’ [said Sierra Smith]. The treatment [Smith’s son] received was the one just approved by the FDA. …The FDA’s decision was based on the results from the treatment of 20 patients born with a defective version of a gene known as OTOF, which is necessary to transmit sound from the ears to the brain.”
Will Fusion Power Get Cheap? Don’t Count On It.
Casey Crownhart | MIT Technology Review ($)
“Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in 2013. But historically, different technologies tend to go through this curve at different rates. And the cost of fusion might not sink as quickly as the prices of batteries or solar.”
A Startup Says It Grew Human Sperm in a Lab—and Used It to Make Embryos
Emily Mullin | Wired ($)
“The process involves isolating sperm-making stem cells from testicular tissue and coaxing the cells into becoming fully-fledged sperm in a dish. Scientists have been attempting to produce sperm outside the body, known as in vitro spermatogenesis, for almost a century. A Japanese team was the first to produce viable mouse sperm in the lab in 2011, but making human sperm has turned out to be a more difficult task.”
Are OpenAI and Anthropic Moving Away From Reasoning Tech?
Stephanie Palazzolo | The Information ($)
“Early signs point to both Spud and Mythos being more intelligent pretrained models, meaning they got smart during the initial part of the development process. Now, OpenAI’s upcoming Spud model is noticeably better at answering tough questions without relying on reasoning, said two people familiar with it.”
Only Antimatter Provides the Energy We Need for Interstellar Travel
Ethan Siegel | Big Think
“If our goal is to eventually extend our reach not just to the other worlds of our Solar System, but to exoplanets around other stars, we’ll need a different, more efficient method of propulsion than chemical-based rockets can supply. The most efficient form of energy generation, theoretically, is to reach 100%, and only one fuel is capable of doing that: matter-antimatter annihilation. Here’s why that’s the ultimate dream, and how we might conceivably get there.”
If a Bird Flu Pandemic Starts, We May Have an mRNA Vaccine Ready
Michael Le Page | New Scientist ($)
“It was roughly a year after the earliest cases of covid-19 before the first vaccines against the SARS-CoV-2 virus were ready for roll-out. By then millions had died worldwide and economies were devastated. In the advent of a bird flu pandemic, we will be able to react more rapidly, because we should have an mRNA vaccine already approved and ready to go. A phase III trial of such a vaccine is now getting under way in the UK and the US.”
The post This Week’s Awesome Tech Stories From Around the Web (Through April 25) appeared first on SingularityHub.