2025-08-22 22:00:00
The tech could revolutionize how spacecraft, airplanes, ships, and submarines navigate when GPS is unavailable or compromised.
A US military spaceplane, the X-37B orbital test vehicle, embarked on its eighth flight into space on Thursday. Much of what the X-37B does in space is secret. But it serves partly as a platform for cutting-edge experiments.
One of these experiments is a potential alternative to GPS that makes use of quantum science as a tool for navigation: a quantum inertial sensor.
Satellite-based systems like GPS are ubiquitous in our daily lives, from smartphone maps to aviation and logistics. But GPS isn’t available everywhere. This technology could revolutionize how spacecraft, airplanes, ships, and submarines navigate in environments where GPS is unavailable or compromised.
In space, especially beyond Earth’s orbit, GPS signals become unreliable or simply vanish. The same applies underwater, where submarines cannot access GPS at all. And even on Earth, GPS signals can be jammed (blocked), spoofed (making a GPS receiver think it is in a different location), or disabled—for instance, during a conflict.
This makes navigation without GPS a critical challenge. In such scenarios, having navigation systems that function independently of any external signals becomes essential.
Traditional inertial navigation systems (INS), which use accelerometers and gyroscopes to measure a vehicle’s acceleration and rotation, do provide independent navigation, as they can estimate position by tracking how the vehicle moves over time. Think of sitting in a car with your eyes closed: You can still feel turns, stops, and accelerations, which your brain integrates to guess where you are over time.
Eventually, though, without visual cues, small errors accumulate and you entirely lose track of where you are. The same goes for classical inertial navigation systems: As small measurement errors accumulate, they gradually drift off course and need corrections from GPS or other external signals.
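The scale of that drift is easy to see in a toy calculation. The sketch below (not from the article; the bias figure is an illustrative assumption, roughly typical of a navigation-grade accelerometer) integrates a small constant accelerometer bias twice, showing that position error grows with the square of time:

```python
import numpy as np

# Illustrative assumption: a constant accelerometer bias of 100 micro-g.
# Double integration turns this tiny bias into a large position error.
bias = 100e-6 * 9.81                # accelerometer bias, m/s^2
dt = 1.0                            # time step, seconds
t = np.arange(0, 3600 + dt, dt)     # one hour of dead reckoning

velocity_error = np.cumsum(np.full_like(t, bias)) * dt
position_error = np.cumsum(velocity_error) * dt

# Closed form: error ~ 0.5 * bias * t^2, several kilometers after an hour.
print(f"Position error after 1 hour: {position_error[-1] / 1000:.1f} km")
```

Even this tiny, fixed bias puts the estimate kilometers off within an hour, which is why classical INS needs periodic external fixes.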
When you think of quantum physics, what may come to mind is a strange world where particles behave like waves and Schrödinger’s cat is both dead and alive. These thought experiments genuinely describe how tiny particles like atoms behave.
At very low temperatures, atoms obey the rules of quantum mechanics. They behave like waves and can exist in multiple states simultaneously—two properties that lie at the heart of quantum inertial sensors.
The quantum inertial sensor aboard the X‑37B uses a technique called atom interferometry, where atoms are cooled to temperatures near absolute zero, so they behave like waves. Using fine-tuned lasers, each atom is split into what’s called a superposition state, similar to Schrödinger’s cat, so that it simultaneously travels along two paths, which are then recombined.
Since the atom behaves like a wave in quantum mechanics, these two paths interfere with each other, creating a pattern similar to overlapping ripples on water. Encoded in this pattern is detailed information about how the atom’s environment has affected its journey. In particular, the tiniest shifts in motion, like sensor rotations or accelerations, leave detectable marks on these atomic “waves.”
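The standard textbook relation for this kind of light-pulse atom interferometer (a general result, not a detail of the classified X-37B payload) says the interference phase is phi = k_eff * a * T^2, where k_eff is the effective two-photon laser wavevector, a the acceleration, and T the free-evolution time between pulses. A quick sketch, assuming a rubidium-based sensor as a common example:

```python
import math

# Mach-Zehnder light-pulse atom interferometer phase: phi = k_eff * a * T**2.
# Rubidium and the timing below are illustrative assumptions.
wavelength = 780e-9                      # rubidium D2 line, a common choice
k_eff = 2 * (2 * math.pi / wavelength)   # two-photon process doubles the wavevector
T = 0.1                                  # pulse separation, seconds

def phase_shift(acceleration):
    """Interferometer phase (radians) accumulated under a given acceleration."""
    return k_eff * acceleration * T**2

# Even a nano-g acceleration produces a milliradian-scale, measurable phase.
print(phase_shift(1e-9 * 9.81))
```

The quadratic dependence on T is why longer free-fall times, like those available in orbit, make these sensors so sensitive.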
Compared to classical inertial navigation systems, quantum sensors offer orders of magnitude greater sensitivity. Because atoms are identical and do not change, unlike mechanical components or electronics, they are far less prone to drift or bias. The result is long-duration, high-accuracy navigation without the need for external references.
The upcoming X‑37B mission will be the first time this level of quantum inertial navigation is tested in space. Previous missions, such as NASA’s Cold Atom Laboratory and the German Space Agency’s MAIUS-1, have flown atom interferometers on orbital or suborbital flights and successfully demonstrated the physics behind atom interferometry in space, though not specifically for navigation purposes.
By contrast, the X‑37B experiment is designed as a compact, high-performance, resilient inertial navigation unit for real-world, long-duration missions. It moves atom interferometry out of the realms of pure science and into a practical application for aerospace. This is a big leap.
This has important implications for both military and civilian spaceflight. For the US Space Force, it represents a step towards greater operational resilience, particularly in scenarios where GPS might be denied. For future space exploration, such as to the moon, Mars or even deep space, where autonomy is key, a quantum navigation system could serve not only as a reliable backup but even as a primary system when signals from Earth are unavailable.
Quantum navigation is just one part of the current, broader wave of quantum technologies moving from lab research into real-world applications. While quantum computing and quantum communication often steal headlines, systems like quantum clocks and quantum sensors are likely to be the first to see widespread use.
Countries including the US, China, and the UK are investing heavily in quantum inertial sensing, with recent airborne and submarine tests showing strong promise. In 2024, Boeing and AOSense conducted the world’s first in-flight quantum inertial navigation test aboard a crewed aircraft.
This demonstrated continuous GPS-free navigation for approximately four hours. That same year, the UK conducted its first publicly acknowledged quantum navigation flight test on a commercial aircraft.
This summer, the X‑37B mission will bring these advances into space. Because of its military nature, the test could remain quiet and unpublicized. But if it succeeds, it could be remembered as the moment space navigation took a quantum leap forward.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Quantum Alternative to GPS Will Be Tested on the US Military’s X-37B Spaceplane appeared first on SingularityHub.
2025-08-22 04:36:56
The featherlight devices are smaller than a dime and need no solar panels, propellers, or engines to move.
With a blast of light, the wafer-thin metal “parachutes” levitate into the air. The curious inventions are each smaller than a dime and need no solar panels, propellers, or engines to move. Future swarms of the tiny flyers could explore our atmosphere at the edge of space or even beyond.
This sliver of air, called the mesosphere, is roughly 50 to 80 kilometers above sea level and bridges Earth’s atmosphere and space. Some studies hint that the layer is the “miner’s canary” for climate change, because cloud formation in this extremely cold region is highly sensitive to changes in carbon dioxide, temperature, and water vapor. Further study of the atmospheric layer could yield valuable insights, but the elevation is too high for balloons and aircraft to reach and too low for satellites—earning it the nickname the “ignorosphere.”
The featherlight devices, outlined in Nature, could in theory carry tiny monitors up to the mesosphere. One day, they may even be used to analyze the atmosphere on Mars at relatively low cost, powered only by the heat of the sun.
“If the full potential of this technology can be realized, swarms or arrays of such…flyers could be collecting high-resolution data on the temperature, pressure, chemical composition, and wind dynamics of the mesosphere within the next decade,” wrote Igor Bargatin at the University of Pennsylvania, who had previously championed a similar technology but was not involved in this work. “What began as a Victorian curiosity might soon become a key tool for probing the most elusive region of the atmosphere.”
“You don’t really believe it until you see it,” study author Ben Schafer at Harvard University told Nature.
Most current space technologies use solar panels for power. But the panels are heavy and costly to shuttle into space. An alternative that started as a toy-like device invented more than 150 years ago would directly harvest the sun’s heat.
The toy itself seems simple. Picture a four-leaf pinwheel—like the ones that kids blow on to spin. Each leaf, called a “vane,” is painted black on one side and white on the other. These vanes are mounted on a spindle and encased in a low-pressure chamber similar to a light bulb.
If you’ve ever worn black or white clothing on a sunny summer day, you’ll know black absorbs light and heats up, while white reflects it and stays cooler. The contraption takes advantage of this effect. “When the vanes are exposed to bright light, they begin to spin, as if being pushed on the black side,” wrote Bargatin.
The phenomenon captivated brilliant scientists of the time, including James Clerk Maxwell and Albert Einstein. The movement couldn’t be due to radiation pressure—the direct push of light itself—because the vanes would turn the other way. Photons, or light particles, “push” a surface harder when bouncing back, compared to being absorbed. In other words, the white side should contribute more energy to the spin.
Instead, the “toy” works as it does thanks to heat transfer and gas.
We’re surrounded by gas molecules—nitrogen, oxygen, and so on—that are constantly bouncing around. The higher their energy, the faster they move.
If the vanes are hotter than the surrounding air, nearby gas molecules gain speed as they randomly bump into the vanes’ surfaces. Because the black side of the toy heats up more as it absorbs more energy, that side gives nearby gas more momentum than the white side, generating air flow. The effect, called photophoresis, is especially notable at low pressures. So, in a thin atmosphere, like high above Earth’s surface or on Mars, it could generate useful amounts of force. “When you’re at low pressures, things get a little bit wonky,” Schafer told Nature.
In 2021, Bargatin and team pioneered tiny flyable devices—each thinner than a sheet of cling wrap—based on the physics. These were far lighter than the original toy, but too delicate to carry cargo.
The new devices are sturdier. They have two layers of perforated aluminum oxide—each about 1,000 times thinner than a human hair—connected by a series of pillars. The top layer allows light to soak in. The bottom layer is coated with chromium, which absorbs sunlight.
This lower layer is like the black side of the toy: When gas bounces off the layer, which is hotter, it gains more energy than gas hitting the top side. Also, because the air above is colder and denser, it naturally sinks down and generates airflow through the holes in the layers.
“Overall, more porous structures can lift more mass at lower altitudes,” wrote the team.
In natural sunlight, the device produces an airflow that lifts it up. This is “similar to [the] downward jet of gas propelling a rocket upwards,” wrote Bargatin. Although scientists have previously made similar contraptions, they needed illumination far stronger than natural sunlight to work, making them less practical for space exploration.
The team next used computer simulations to test how a palm-sized version of the new device would fly at low pressures like those that exist in the mesosphere.
This region of Earth’s atmosphere has often eluded scientific research because it’s hard to reach. Aircraft and balloons can’t fly that high. Ground-based radar and satellites offer some remote-sensing data but with low coverage.
Under pressure and temperature conditions that naturally occur in the mesosphere, the team’s simulations suggest a larger version of the ultralight devices could carry a 10-milligram payload—enough to support a small radio antenna, sensors, and other microelectronics to detect and communicate atmospheric changes.
And because they’re powered by the sun alone, the flyers could in theory stay aloft indefinitely during the summer months near the poles. They could even be powered at night by exploiting the infrared light Earth emits and, in this way, levitate for weeks to months.
If scaled up, the devices could, within a decade, begin studying high-altitude cloud and lightning events, tracking meteoric dust, and recording temperature fluctuations related to climate change. Away from Earth, swarms of the sun-powered devices could one day explore Mars, which has a thin atmosphere that roughly resembles the mesosphere.
“We did some modeling on how well these things will fly on Mars, and it turns out that they would have pretty comparable performance,” said Schafer. Because the devices are so lightweight, they’d be easy to ship on a rocket. If loaded with sensors and communication devices, they could beam back data on water vapor, wind speed, and other conditions on the dusty planet.
The post These Tiny Aircraft Are Powered Entirely by the Sun’s Heat appeared first on SingularityHub.
2025-08-20 22:00:00
Energy stored in liquid CO2 is converted back into gas to turn turbines on demand.
Tech companies are throwing money at new sources of energy, from wind and solar to next-generation geothermal, nuclear, and even fusion power. But all that electricity isn’t good for much unless it can be stored and then dispatched on demand, particularly for intermittent sources like wind and solar. An Italian company called Energy Dome has a novel solution, and recently signed a contract with Google to build multiple energy storage facilities for the tech giant.
Energy Dome’s battery uses carbon dioxide (CO2) to store energy in liquid form when electricity supply is high, then releases energy when supply is low by converting the liquid CO2 back to a gas, spinning a turbine in the process.
The technology is a form of compressed air energy storage, which has been around since the late 1970s, when the first utility-scale facility was built in Germany. Energy Dome puts a new twist on conventional systems by using CO2 instead of regular air. Though vilified for its role in our climate change woes, the greenhouse gas turns out to carry some benefits when it comes to energy storage: It has a higher energy density than air, and it liquefies at ambient temperatures under pressure.
Here’s how Energy Dome’s process works. CO2 is stored as a gas in a giant dome. When energy is cheap and abundant—namely, when the sun is shining and the wind is blowing—the gas is pumped into a compressor, where it gives off heat (which is stored) and turns into a liquid that’s stored in carbon steel tanks. When the sun sets or wind dies down but people still want to run their air conditioners or query ChatGPT, an evaporator uses the stored heat to turn the liquid CO2 into pressurized gas, which shoots out like steam from a pressure cooker, turning turbines and generating electricity.
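The charge/discharge cycle above can be sketched as a toy energy balance. Everything numeric here is an assumption for illustration—the 75 percent round-trip efficiency is a plausible figure for compressed-gas storage, not a number from Energy Dome:

```python
# Toy energy-balance sketch of one charge/discharge cycle.
# ROUND_TRIP_EFFICIENCY is an assumed figure, not Energy Dome's spec.
ROUND_TRIP_EFFICIENCY = 0.75

def charge(surplus_mwh):
    """Cheap electricity in -> liquid CO2 plus stored compression heat."""
    return surplus_mwh  # energy notionally banked; losses booked on discharge

def discharge(stored_mwh):
    """Stored heat re-expands the liquid CO2 through a turbine -> electricity."""
    return stored_mwh * ROUND_TRIP_EFFICIENCY

stored = charge(200.0)       # e.g., 20 MW of surplus solar for 10 hours
delivered = discharge(stored)
print(delivered)             # MWh returned to the grid
```

The point of the sketch is the shape of the cycle, not the numbers: energy goes in as compression work and heat, and comes back out, minus losses, when the liquid is re-gasified on demand.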
Lithium-ion batteries are the go-to for storing electricity produced by wind and solar farms, but the batteries can only release electricity for a few hours at a time. Their maximum continuous dispatch time over the last several years has been four hours, and recent advancements could bring that up to eight hours. But that’s still not long enough to satisfy demand if the sun stops shining for days.
Energy Dome’s CO2 battery is considered a long-duration energy storage (LDES) solution. LDES is defined as a storage system that can deliver electricity at a consistent rate for 10 hours or more. The company says its CO2 battery can dispatch energy for up to 24 hours. And since the liquid CO2 can be stored at ambient temperature, it takes up less space and is more energy-dense than conventional compressed air energy storage (though the “dome” itself isn’t exactly petite). Paolo Cavallini, the company’s founder, says the CO2 batteries “can last 30 years without any kind of degradation.”
The approach is promising. But there may be engineering challenges to get it working as hoped. For example, Edward Barbour, an associate professor of energy systems and storage at Birmingham University, told MIT Technology Review in 2022 that keeping the heat exchangers in working order for decades may be tough.
Energy Dome has a commercial-scale plant up and running in Italy, which was funded by the Bill Gates-backed Breakthrough Energy and the European Investment Bank. The facility has a 20-megawatt capacity and a 10-hour cycle. The company says it can power 14,000 homes (that’s Italian homes, which consume less energy than American ones).
Google has not disclosed financial details of its agreement with Energy Dome, but the tech giant did state in a press release that it plans to support commercial projects in several different countries, and believes these projects “can unlock new clean energy for grids where we operate before 2030.”
The company isn’t the only one betting on CO2 batteries. The Department of Energy gave a $30 million grant to Alliant Energy to build the Columbia Energy Storage Project in Wisconsin, which is licensing Energy Dome’s technology.
Electricity demand is only going to rise over the next several years, but building new generation to meet that demand is just one piece of the puzzle. Storage is another, and it seems Energy Dome is well-positioned to help fill that gap.
The post Google Will Store Energy in Giant Domes Filled With CO2 appeared first on SingularityHub.
2025-08-19 22:00:00
Accurate predictions could accelerate the design of new experiments and bring practical fusion power closer.
While AI chatbots grab most of the attention, deep learning is also quietly revolutionizing science and engineering. A new AI model that can help predict the outcome of fusion power experiments could accelerate the technology’s arrival.
Achieving nuclear fusion involves some of the most extreme conditions known to nature, which makes designing and operating fusion reactors incredibly challenging. Simulations of key processes typically require huge amounts of time on supercomputers and are still far from perfect.
But AI is starting to accelerate progress in this area. Google DeepMind made headlines in 2022 when it trained a deep-learning model to control the roiling plasma inside a fusion reactor. And now, the scientists behind the first fusion experiment to show a net gain of energy have revealed that, thanks to AI, they were already pretty confident of success before they flicked the switch.
In a new paper in Science, researchers at Lawrence Livermore National Laboratory outline a generative machine learning model that they used to predict a 74 percent chance the experiment at the US National Ignition Facility would lead to net energy gain. The team says having an accurate prediction model could accelerate the design of new experiments and help them make decisions about how to upgrade hardware.
“This outcome demonstrates a promising approach to predictive modeling of ICF experiments and provides a framework for developing data-driven models for other complex systems,” write the authors.
The National Ignition Facility is taking a slightly unusual approach to achieving fusion. The most popular reactor design is a tokamak. This is a doughnut-shaped chamber wrapped in ultra-powerful magnets that contain a super-heated plasma in which atoms fuse together to generate energy.
In contrast, the National Ignition Facility is using an approach known as “inertial confinement fusion.” This involves firing extremely powerful lasers at a millimeter-sized capsule containing the hydrogen isotopes deuterium and tritium. The capsule implodes under pressure and causes the hydrogen atoms to fuse, generating power.
On December 5, 2022, researchers at the facility fired a 2.05-megajoule laser at a fuel pellet that then generated 3.15 megajoules of energy: It was the first time a fusion experiment produced more energy than it took to initiate it.
These experiments are incredibly expensive, so it would be useful to have good predictions about how they’re likely to go—and for this experiment they did. The team used a novel predictive model that relied on advanced statistical techniques and deep learning to learn from both simulation and experimental data.
Older approaches involve creating physics-based simulations and then tweaking them to match data from prior experiments. Researchers can make predictions about very small design changes using this method, but the authors say it struggles to accurately simulate more substantial modifications.
Their new approach uses Bayesian inference—a form of statistical analysis that provides probabilistic predictions—to analyze data from previous ignition experiments at the facility. This produces a generative AI model that can make predictions about future experiments.
Because there have only been a limited number of these tests, the researchers wanted to supplement existing test data with data from simulations. However, directly analyzing the simulations using Bayesian inference would be extremely computationally expensive.
Instead, they trained a deep neural network on a database of 150,000 simulations, which could then be efficiently analyzed using Bayesian inference. This resulted in a generative model informed by both experimental and simulation datasets that can accurately model how specific design changes will impact the outcome of future experiments.
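The two-stage idea—fit a cheap surrogate to expensive simulations, then reuse it many times to estimate a success probability—can be sketched in a few lines. This is a schematic stand-in, not the lab’s actual model: the toy response function, the polynomial surrogate (in place of their deep network), and the design-uncertainty numbers are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: pretend these are simulations mapping a design knob x to yield y
# (an invented toy response with noise, standing in for physics codes).
x_sim = rng.uniform(0, 1, 150_000)
y_sim = 1.2 * x_sim**2 + 0.1 * rng.normal(size=x_sim.size)

# Fit a quadratic surrogate by least squares (stand-in for the neural network).
surrogate = np.poly1d(np.polyfit(x_sim, y_sim, deg=2))

# Stage 2: propagate uncertainty about the as-built design through the
# cheap surrogate and report the probability of exceeding breakeven yield.
x_design = rng.normal(loc=0.95, scale=0.03, size=100_000)
p_ignition = np.mean(surrogate(x_design) > 1.0)
print(f"Estimated probability of net gain: {p_ignition:.2f}")
```

The surrogate makes the second stage affordable: evaluating it 100,000 times takes milliseconds, whereas running the underlying simulations that many times would be prohibitively expensive.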
The prediction of a 74 percent probability of success may still sound a bit fuzzy. But to put things in context, the authors note the model only predicted a 0.5 percent chance of success for the preceding experimental design.
This model is obviously highly specific to the unique design of the National Ignition Facility’s experimental setup, but the researchers say the broad approach could be adaptable to other complex problems where data is sparse. And it is already being used to optimize design decisions as the researchers continue to chase ever higher energy outputs from their fusion experiments.
The post This AI Model Predicts Whether Fusion Power Experiments Will Work appeared first on SingularityHub.
2025-08-18 22:00:00
Thoughts are translated into speech in real time—with a passcode to prevent broadcasting private musings.
We all talk to ourselves in our heads. It could be a pep talk before a wedding speech or a chaotic family reunion, or a nudge to quit procrastinating. This inner speech also hides secrets. What we say out loud doesn’t always reflect what we think.
A team led by scientists at Stanford University has now designed a system that can decode these conversations with ourselves. The researchers hope it can help people with paralysis communicate with their loved ones—especially those who struggle with current brain-to-speech systems.
Instead of having participants actively try to make sounds and form words, as if they’re speaking out loud, the new AI decoder captures silent monologues and translates them into speech with up to 74 percent accuracy.
Of course, no one wants their thoughts continuously broadcast. So, as a brake, the team designed “neural passwords” the volunteers can mentally activate before the implant starts translating their thoughts.
“This is the first time we’ve managed to understand what brain activity looks like when you just think about speaking,” said study author Erin Kunz. “For people with severe speech and motor impairments…[an implant] capable of decoding inner speech could help them communicate much more easily and more naturally.”
The brain sparks with electrical activity before we attempt to speak. These signals control muscles in the throat, tongue, and lips to form different sounds and intonations. Brain implants listen to and decipher these signals, allowing people with paralysis to regain their voices.
A recent system translates speech in near real time. A 45-year-old participant who took part in a study featuring the system lost the ability to control his vocal cords due to amyotrophic lateral sclerosis (ALS). His AI-guided implant decoded brain activity—captured when he actively tried to speak—into coherent sentences with different intonations. Another similar trial gathered neural signals from a middle-aged woman who suffered a stroke. An AI model translated this data into words and sentences without notable delays, allowing normal conversation to flow.
These systems are life-changing, but they struggle to help people who can’t actively try to move the muscles involved in speech. An alternative is to go further upstream and interpret speech from brain signals alone, before participants try to speak aloud—in other words, to decode their inner thoughts.
Previous brain imaging studies have found that inner speech activates a similar—but not identical—neural network as physical speech does. For example, electrodes placed on the surface of the brain have captured a unique electrical signal that spreads across a wide neural network, but scientists couldn’t home in on the specific regions contributing to inner speech.
The Stanford team recruited four people from the BrainGate2 trial, each with multiple 64-channel microelectrode arrays already implanted into their brains. One participant, a 68-year-old woman, had gradually lost her ability to speak nearly a decade ago due to ALS. She could still vocalize, but the words were unintelligible to untrained listeners.
Another 33-year-old volunteer, also with ALS, had incomplete locked-in syndrome. He relied on a ventilator to breathe and couldn’t control his muscles—except those around his eyes—but his mind was still sharp.
To decode inner speech, the team recorded electrical signals from participants’ motor cortexes as they tried to produce sounds (attempted speech) or simply thought about a single-syllable word like “kite” or “day” (inner speech). In other tests, the participants heard or silently read the words in their minds. By comparing the results from each of these scenarios, the team was able to map out the specific motor cortex regions that contribute to inner speech.
Maps in hand, the team next trained an AI decoder to decipher each participant’s thoughts.
The system was far from perfect. Even with a limited 50-word vocabulary, the decoder messed up 14 to 33 percent of the translations depending on the participant. For two people it was able to decode sentences made using a 125,000-word vocabulary, but with an even higher error rate. A cued sentence like “I think it has the best flavor” turned into “I think it has the best player.” Other sentences, such as “I don’t know how long you’ve been here,” were accurately decoded.
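Error rates like these are typically scored as word error rate (WER): the number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the intended sentence's length. A minimal sketch using the article's own example (the scoring method is standard; the paper's exact evaluation details are not specified here):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

wer = word_error_rate("i think it has the best flavor",
                      "i think it has the best player")
print(wer)  # one substitution among seven words, i.e. 1/7
```

A single wrong word in a seven-word sentence already yields a roughly 14 percent error rate, which puts the reported 14 to 33 percent figures in perspective.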
Errors aside, “If you just have to think about speech instead of actually trying to speak, it’s potentially easier and faster for people [to communicate],” said study author Benyamin Meschede-Krasa.
These first inner speech tests were prompted. It’s a bit like someone saying “don’t think of an elephant” and you immediately think of an elephant. To see if the decoder could capture automatic inner speech, the team taught one participant a simple game in which she memorized a series of three arrows pointing at different directions, each with a visual cue.
The team thought the game could automatically trigger inner speech as a mnemonic, they wrote. It’s like repeating to yourself a famous video game cheat code or learning how to solve a Rubik’s cube. The decoder captured her thoughts, which mapped to her performance.
They also tested the system in scenarios when participants counted in their heads or thought about relatively private things, like their favorite movie or food. Although the system picked up more words than when participants were instructed to clear their minds, the sentences were largely gibberish and only occasionally contained plausible phrases, wrote the team.
In other words, the AI isn’t a mind reader, yet.
But with better sensors and algorithms, the system could one day leak out unintentional inner speech (imagine the embarrassment). So, the team constructed multiple safeguards. One labels attempted speech—what you actually want to say out loud—differently than inner speech. This strategy only works for people who can still try to attempt speaking out loud.
They also tried creating a mental password. Here, the system only activates if the person thinks about the password first (“chittychittybangbang” was one). Real-time trials with the 68-year-old participant found the system correctly detected the password roughly 99 percent of the time, making it easy for her to protect her private thoughts.
As implants become more sophisticated, researchers and users are concerned about mental privacy, the team wrote, “specifically whether a speech BCI [brain-computer interface] would be able to read into thoughts or internal monologues of users when attempting to decode (motor) speech intentions.” The tests show it’s possible to prevent such “leakage.”
So far, implants to restore verbal communication have relied on attempted speech, which requires significant effort from the user. And for those with locked-in syndrome who can’t control their muscles, the implants don’t work. By capturing inner speech, the new decoder taps directly into the brain, requires less effort, and could speed up communication.
“The future of BCIs is bright,” said study author Frank Willett. “This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech.”
The post New Brain Implant Decodes ‘Inner Monologue’ of People With Paralysis appeared first on SingularityHub.
2025-08-16 22:00:00
Sam Altman Says ChatGPT Is on Track to Out-Talk Humanity
Zoë Schiffer and Will Knight | Wired
“The OpenAI CEO addressed GPT-5 backlash, the AI bubble—and why he’s willing to spend trillions of dollars to win. …’If you project our growth forward, pretty soon billions of people a day will be talking to ChatGPT,’ [he said] during a dinner with journalists in San Francisco.”
Experimental ‘Off-the-Shelf’ Cancer Vaccine Is Already Prolonging Lives, Study Suggests
Ed Cara | Gizmodo
“Phase I trials aren’t intended to conclusively show that an experimental drug or vaccine works, so the findings should still be viewed with some caution until more data is collected. But it certainly looks like we’re on the verge of a breakthrough with cancer vaccines.”
Exclusive: Inside San Francisco’s Robot Fight Club
Ashlee Vance | Core Memory
“For the past few months, Cix Liv—real name—has been operating his company REK out of a no-frills warehouse space off Van Ness in San Francisco. The office has a couple of makeshift desks with computers and a bunch of virtual reality headsets on some shelves. More to the point, REK also has four humanoid-style robots hanging from gantries, and they’ve been outfitted with armor, boxing gloves, swords, and backstories.”
This Quantum Radar Could Image Buried Objects
Sophia Chen | MIT Technology Review
“Physicists have created a new type of radar that could help improve underground imaging, using a cloud of atoms in a glass cell to detect reflected radio waves. The radar is a type of quantum sensor, an emerging technology that uses the quantum-mechanical properties of objects as measurement devices.”
Why GPT-4o’s Sudden Shutdown Left People Grieving
Grace Huckins | MIT Technology Review
“June was just one of a number of people who reacted with shock, frustration, sadness, or anger to 4o’s sudden disappearance from ChatGPT. Despite its previous warnings that people might develop emotional bonds with the model, OpenAI appears to have been caught flat-footed by the fervor of users’ pleas for its return.”
Taiwan’s ‘Silicon Shield’ Could Be Weakening
Johanna M. Costigan | MIT Technology Review
“Semiconductor powerhouse TSMC is under increasing pressure to expand abroad and play a security role for the island. Those two roles could be in tension. …In Taiwan, there is a worry that expansion abroad will dilute the company’s power at home, making the US and other countries less inclined to feel Taiwan is worthy of defense.”
LLMs’ ‘Simulated Reasoning’ Abilities Are a ‘Brittle Mirage,’ Researchers Find
Kyle Orland | Ars Technica
“The results suggest that the seemingly large performance leaps made by chain-of-thought models are ‘largely a brittle mirage’ that ‘become[s] fragile and prone to failure even under moderate distribution shifts,’ the researchers write. ‘Rather than demonstrating a true understanding of text, CoT reasoning under task transformations appears to reflect a replication of patterns learned during training.'”
AOL Announces September Shutdown for Dial-Up Internet Access
Benj Edwards | Ars Technica
“After decades of connecting Americans to its online service and the internet through telephone lines, AOL recently announced it is finally shutting down its dial-up modem service on September 30, 2025. The announcement marks the end of a technology that served as the primary gateway to the World Wide Web for millions of users throughout the 1990s and early 2000s.”
What If AI Doesn’t Get Much Better Than This?
Cal Newport | The New Yorker
“In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about AI at face value, and the views of critics like [Gary] Marcus seem increasingly moderate. Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near-future—one in which AI might not get much better than this.”
OpenAI, Cofounder Sam Altman to Take on Neuralink With New Startup
Ivan Levingston, George Hammond, and James Fontanella-Khan, Financial Times | Ars Technica
“OpenAI and its cofounder Sam Altman are preparing to back a company that will compete with Elon Musk’s Neuralink by connecting human brains with computers, heightening the rivalry between the two billionaire entrepreneurs.”
Ford’s Answer to China: A Completely New Way of Making Cars
Jeremy White | Wired
“Ford calls its new way of making EVs the ‘Ford Universal EV Production System,’ and will spend $2 billion to set it up at the company’s Louisville assembly plant. Ford says the new method will be 40 percent faster than the existing process there, and have a comparable reduction in workstations. Parts needed to make Ford’s new EVs will be cut by 20 percent.”
How AI’s Sense of Time Will Differ From Ours
Petar Popovski | IEEE Spectrum
“An understanding of the passage of time is fundamental to human consciousness. While we continue to debate whether artificial intelligence (AI) can possess consciousness, one thing is certain: AI will experience time differently. Its sense of time will be dictated not by biology, but by its computational, sensory, and communication processes.”
How Alien Life Could Exist Without Water
Gayoung Lee | Gizmodo
“Intriguing new research from MIT proposes that liquids are what’s important for extraterrestrial habitability, and not just water. The new research specifically focuses on ionic fluids—substances that planetary scientists believe could form on the surfaces of rocky planets and moons. Ionic liquids are highly tolerant to high temperatures and low pressures, allowing them to remain in a stable liquid state that’s potentially friendly to biomolecules.”
The post This Week’s Awesome Tech Stories From Around the Web (Through August 16) appeared first on SingularityHub.