
RSS preview of the IEEE Spectrum blog

The Top 7 Consumer Electronics Stories of 2025

2025-12-27 22:00:01



In 2025, many of IEEE Spectrum’s top consumer electronics stories were about creating the experience you want with technology. Open-source software offered more customization for laptops and displays, devices with less distracting design received recognition with a new certification, and smart-glasses manufacturers forged paths to figure out what users really want in wearable tech.

Other stories highlighted the fascinating fundamental tech in our smartphones, like how your new iPhone stays cool and the potential for its camera to gather information beyond what the human eye can see. And we considered the effects of U.S. tariffs from the Trump administration.

We’re gearing up for a 2026 filled with many more exciting developments. In the meantime, read on for IEEE Spectrum’s most popular consumer electronics stories of the year.

1. E-Paper Display Reaches the Realm of LCD Screens

Black-and-white cityscape with sailboat, cursor hovered over photo on blue background. Source image: Modos

When hours of our days may be dominated by screens, e-paper displays offer an option easier on the eyes. Historically, these displays have been too slow for everyday computing. But this year, a small Boston-based startup called Modos created a monitor and development kit for a display with a refresh rate of 75 hertz—comparable to some basic LCD screens. That’s even fast enough for video.

“Modos has a not-so-secret weapon,” contributing editor Matthew Smith writes. Specifically, an open-source display controller is key to the display’s speed. Modos completed its crowdfunding campaign and pre-orders are scheduled to ship in late January 2026.

2. Water Vapor Could Cool Your Next iPhone

A front and back view of an Apple iPhone 17  on an orange background with a misty blue pattern IEEE Spectrum; Source images: Apple

Without the proper cooling tech, high-end smartphones risk burning a hole in your pocket—literally. In the latest generation of Apple smartphones, released in September, the iPhone 17 Pro and Pro Max contain thin chambers of water that help dissipate heat through evaporation. Cooling phones with water vapor isn’t entirely new though: High-end smartphones from Samsung and Google also use the technique.

For more on how to keep our electronics cool, check out IEEE Spectrum’s recent special report, The Hot, Hot Future of Chips. Our editors and expert authors break down how lasers, liquid cooling, and diamond blankets could all contribute to thermal management for increasingly complex and capable chips.

3. This Year, RISC-V Laptops Really Arrive

Close-up of a chip on a laptop's RISC-V mainboard.  DeepComputing

Most laptops can only be customized so much. Once you get down to the level of the specific commands the processor executes—the instruction set architecture—laptops usually operate on proprietary technology like x86 and Arm.

Earlier this year, repairable computer maker Framework released a laptop that can support a RISC-V mainboard, bringing open-source architecture to the masses—or at least, developers and early adopters interested in straying from mainstream closed architectures. Later in the year, Framework also made news when it released a swappable laptop graphics module, allowing users to choose between the AMD GPU the laptop originally shipped with and Nvidia’s RTX 5070.

4. Smartphone Cameras Go Hyperspectral

Young Kim posing in a research lab. Vincent Walter/Purdue University

A picture is worth a thousand words, as the cliché goes. But the images taken by your smartphone camera contain more information than you might realize. While the human eye is sensitive to a limited range of visible light wavelengths, the pixels in a standard smartphone camera sensor are potentially sensitive to a much wider range of wavelengths.

Researchers at Purdue University developed a way to capture hyperspectral information by placing a card with a color chart in frame. With this technique, an ordinary smartphone could serve as a “pocket spectrometer” and identify specific chemicals for medical diagnostics, analyzing pigments in artwork, and more.

5. The Many Ways Tariffs Will Hit Your Electronics

iPhones on display at an Apple retail store. Anthony Behar/Sipa USA/Alamy

Shortly after U.S. President Donald Trump was sworn into office for his second term in January, he began enacting new tariffs on foreign goods. In April, Trump announced significant changes, including a universal 10 percent tariff on all imports, as well as a 125 percent tariff on Chinese goods (now reduced to a much lower 10 percent baseline).

To learn how these tariffs might affect the U.S. electronics market, senior editor Samuel Moore interviewed Shawn DuBravac, chief economist of the Global Electronics Association (formerly IPC, or the Institute of Printed Circuits). Amidst ongoing changes, they spoke about predicted price increases, shifting supply chains, and the effect on lower-priced electronics.

Spectrum also covered how tariffs affect hobbyists and students, who often rely on components sourced from suppliers outside of the United States. And keep an eye out for more stories on the relationship between tech and government from technology policy editor Lucas Laursen.

6. Calm Tech Certification “Rewards” Less Distracting Tech

A child using their finger to write "thank you for your hard work" in Japanese on the mui Board Gen 2, which looks like wood decor until the controls are illuminated. Mui Lab

With the sheer quantity of eye-catching tech displayed at the annual Consumer Electronics Show (CES), it’s fitting that the event is hosted in Las Vegas. But at CES 2025, some devices took a different approach. The Calm Tech Institute issued a new certification to several devices shown at CES that are designed to be less distracting and command less of our primary attention.

For instance, an e-ink tablet and a wood-like smart-home interface were among the first batch of devices that received the certification. While everyday devices bombard us with notifications, calm technology defaults to the minimum necessary notifications and more naturalistic design.

7. Two Visions for the Future of AR Smart Glasses

Two pairs of smart glasses, Halliday and the Xreal One Pro, against a solid background. Source images: Xreal; Patrick T. Fallon/AFP/Getty Images

Rounding out the list, this feature article considered a fundamental question in consumer tech: What do users actually want? With smart glasses finally on the verge of mainstream use, contributor Alfred Poor compares two paths forward for the wearable tech. “Should a head-worn display replicate the computer screens that we currently use, or should they work more like a smartwatch, which displays only limited amounts of information at a time?” Two smaller companies, Xreal and Halliday, offer AR glasses that represent the two design concepts, and the tradeoffs between them.

AI Data Centers Demand More Than Copper Can Deliver

2025-12-27 21:00:01




How fast you can train gigantic new AI models boils down to two words: up and out.

In data-center terms, scaling out means increasing how many AI computers you can link together to tackle a big problem in chunks. Scaling up, on the other hand, means jamming as many GPUs as possible into each of those computers, linking them so that they act like a single gigantic GPU, and allowing them to do bigger pieces of a problem faster.

The two domains rely on two different physical connections. Scaling out mostly relies on photonic chips and optical fiber, which together can sling data hundreds or thousands of meters. Scaling up, which results in networks that are roughly 10 times as dense, is the domain of much simpler and less costly technology—copper cables that often span no more than a meter or two.

This article is part of our special report Top Tech 2026.

But the increasingly high GPU-to-GPU data rates needed to make more powerful computers work are coming up against the physical limits of copper. As the bandwidth demands on copper cables approach the terabit-per-second realm, physics demands that they be made shorter and thicker, says David Kuo, vice president of product marketing and business development at the data-center-interconnect startup Point2 Technology. That’s a big problem, given the congestion inside computer racks today and the fact that Nvidia, the leading AI hardware company, plans an eightfold increase in the maximum number of GPUs per system, from 72 to 576 by 2027.

“We call it the copper cliff,” says Kuo.

The industry is working on ways to unclog data centers by extending copper’s reach and bringing slim, long-reaching optical fiber closer to the GPUs themselves. But Point2 and another startup, AttoTude, advocate for a solution that’s simultaneously in between the two technologies and completely different from them. They claim the tech will deliver the low cost and reliability of copper as well as some of the narrow gauge and distance of optical—a combination that will handily meet the needs of future AI systems.

Their answer? Radio.


Later this year, Point2 will begin manufacturing the chips behind a 1.6-terabit-per-second cable consisting of eight slender polymer waveguides, each capable of carrying 448 gigabits per second using two frequencies, 90 gigahertz and 225 GHz. At each end of the waveguide are plug-in modules that turn electronic bits into modulated radio waves and back again. AttoTude is planning essentially the same thing, but at terahertz frequencies and with a different kind of svelte, flexible cable.

Both companies say their technologies can easily outdo copper in reach—spanning 10 to 20 meters without significant loss, which is certainly long enough to handle Nvidia’s announced scale-up plans. And in Point2’s case, the system consumes one-third of optical’s power, costs one-third as much, and offers as little as one-thousandth the latency.

According to its proponents, radio’s reliability and ease of manufacturing compared with those of optics mean that it might beat photonics in the race to bring low-energy processor-to-processor connections all the way to the GPU, eliminating some copper even on the printed circuit board.

What’s wrong with copper?

So, what’s wrong with copper? Nothing, so long as the data rate isn’t too high and the distance it has to go isn’t too far. At high data rates, though, conductors like copper fall prey to what’s called the skin effect.

Comparison of two cables: direct-attach and e-Tube. e-Tube is smaller, reaching 20 meters. A 1.6-terabit-per-second e-Tube cable has half the area of a 32-gauge copper cable and has up to 20 times the reach. Point2 Technology

The skin effect occurs because the signal’s rapidly changing current leads to a changing magnetic field that tries to counter the current. This countering force is concentrated at the middle of the wire, so most of the current is confined to flowing at the wire’s outer edge—the “skin”—which increases resistance. At 60 hertz—the mains frequency in many countries—most of the current is in the outer 8 millimeters of copper. But at 10 GHz, the skin is just 0.65 micrometers deep. So to push high-frequency data through copper, the wire needs to be wider, and you need more power. Both requirements work against packing more and more connections into a smaller space to scale up computing.
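Both of those figures follow from the standard skin-depth formula, δ = √(ρ/(πfμ)). As a quick check on the numbers above, here is a minimal Python sketch; the copper resistivity and permeability values are textbook room-temperature constants, not figures from the article:

```python
import math

RHO_CU = 1.68e-8           # copper resistivity, ohm-meters (room temperature)
MU_0 = 4 * math.pi * 1e-7  # permeability of free space, henries per meter

def skin_depth(freq_hz: float) -> float:
    """Depth at which current density falls to 1/e of its surface value."""
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU_0))

print(f"60 Hz : {skin_depth(60) * 1e3:.1f} mm")    # mains frequency
print(f"10 GHz: {skin_depth(10e9) * 1e6:.2f} um")  # high-speed data
```

Running it reproduces the article’s figures, about 8.4 millimeters at 60 hertz versus 0.65 micrometers at 10 gigahertz, which is why multigigahertz signals effectively ride only on a wire’s surface.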

To counteract the skin effect and other signal-degrading issues, companies have developed copper cables with specialized electronics at either end. With the most promising, called active electrical cables, or AECs, the terminating chip is called a retimer (pronounced “re-timer”). This IC cleans up the data signal and the clock signal as they arrive from the processor. The circuit then retransmits them down the copper cable’s typically eight pairs of wires, or lanes. (There is a second set for transmitting in the other direction.) At the other end, the chip’s twin takes care of any noise or clock issues that accumulate during the journey and sends the data on to the receiving processor. Thus, at the cost of electronic complexity and power consumption, an AEC can extend the distance that copper can reach.

Don Barnetson, senior vice president and head of product at Credo, which provides network hardware to data centers, says his company has developed an AEC that can deliver 800 Gb/s as far as 7 meters—a distance that’s likely needed as computers hit 500 to 600 GPUs and span multiple racks. The first use of AECs will probably be to link individual GPUs to the network switches that form the scale-out network. This first stage in the scale-out network is important, says Barnetson, because “it’s the only nonredundant hop in the network.” Losing that link, even momentarily, can cause an AI training run to collapse.

But even if retimers manage to push the copper cliff a bit farther into the future, physics will eventually win. Point2 and AttoTude are betting that point is coming soon.

Terahertz radio’s reach

AttoTude grew out of founder and CEO Dave Welch’s deep investigations into photonics. A cofounder of Infinera, an optical telecom–equipment maker purchased by Nokia in 2025, Welch developed photonic systems for decades. He knows the technology’s weaknesses well: It consumes too much power (about 10 percent of a data center’s compute budget, according to Nvidia); it’s extremely sensitive to temperature; getting light into and out of photonics chips requires micrometer-precision manufacturing; and the technology’s lack of long-term reliability is notorious. (There’s even a term for it: “link flap.”)

“Customers love fiber. But what they hate is the photonics,” says Welch. “Electronics have been demonstrated to be inherently more reliable than optics.”

Fresh off Nokia’s US $2.3 billion purchase of Infinera, Welch asked himself some fundamental questions as he contemplated his next startup, beginning with “If I didn’t have to be at [an optical wavelength], where should I be?” The answer was the highest frequency that’s achievable purely with electronics—the terahertz regime, 300 to 3,000 GHz.

“You start with passive copper, and you do everything you can to run in passive copper as long as you can.” Don Barnetson, Credo

So Welch and his team set about building a system that consists of a digital component to interface with the GPU, a terahertz-frequency generator, and a mixer to encode the data on the terahertz signal. An antenna then funnels the signal into a narrow, flexible waveguide.

As for the waveguide, it’s made of a dielectric at the center, which channels the terahertz signal, surrounded by cladding. One early version was just a narrow, hollow copper tube. Welch says that the second-generation cable—made up of fibers only about 200 µm across—points to a system with losses down to 0.3 decibels per meter, a small fraction of the loss from a typical copper cable carrying 224 Gb/s.

Welch predicts this waveguide will be able to carry data as far as 20 meters. That “happens to be a beautiful distance for scale-up in data centers,” he says.
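To put those numbers in context, here is a quick decibel-budget sketch in Python. The 0.3 dB/m loss and the 20-meter reach are the figures reported above; the decibel-to-power conversion is the standard definition:

```python
LOSS_DB_PER_M = 0.3  # reported second-generation waveguide loss
SPAN_M = 20          # scale-up reach Welch predicts

total_loss_db = LOSS_DB_PER_M * SPAN_M        # 6 dB end to end
power_fraction = 10 ** (-total_loss_db / 10)  # fraction of launched power arriving

print(f"Loss over {SPAN_M} m: {total_loss_db:.1f} dB")
print(f"Power delivered: {power_fraction:.1%}")
```

A 6-decibel budget, with roughly a quarter of the launched power arriving, is the kind of margin a receiver can readily work with over a full scale-up span.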

So far, AttoTude has made the individual components—the digital data chip, the terahertz-signal generator, the circuit that mixes the two—along with a couple of generations of waveguides. But the company hasn’t yet integrated them into a single pluggable form. Still, Welch says, the combination delivers enough bandwidth for at least 224 Gb/s transmission, and the startup demonstrated 4-meter transmission at 970 GHz last April at the Optical Fiber Communications Conference, in San Francisco.

Radio’s reach in the data center

Point2 has been aiming to bring radio to the data center longer than AttoTude has. Formed nine years ago by veterans of Marvell, Nvidia, and Samsung, the startup has pulled in $55 million in venture funding, most notably from the cable-and-connector maker Molex. The latter’s backing “is critical, because they’re a major part of the cable-and-connector ecosystem,” says Kuo. Molex has already shown that it can make Point2’s cable without modifying its existing manufacturing lines, and now Foxconn Interconnect Technology, another maker of cables and connectors, is partnering with the startup. The support could be a big selling point for the hyperscalers who would be Point2’s customers.

Bundles of grey cables cascade down the back of a black computer rack. Nvidia’s GB200 NVL72 rack-scale computer relies on many copper cables to link its 72 processors together. NVIDIA

Each end of the Point2 cable, called an e-Tube, consists of a single silicon chip that converts the incoming digital data into modulated millimeter-wave frequencies and an antenna that radiates into the waveguide. The waveguide itself is a plastic core with metal cladding, all wrapped in a metal shield. A 1.6-Tb/s cable, called an active radio cable (ARC), is made up of eight e-Tube cores. At 8.1 millimeters across, that cable takes up half the volume of a comparable AEC cable.

One of the benefits of operating at RF frequencies is that the chips that handle them can be made in a standard silicon foundry, says Kuo. A collaboration between engineers at Point2 and the Korea Advanced Institute of Science and Technology, reported this year in the IEEE Journal of Solid-State Circuits, used 28-nanometer CMOS technology, which hasn’t been cutting edge since 2010.

The scale-up network market

As promising as their tech sounds, Point2 and AttoTude will have to overcome the data-center industry’s long history with copper. “You start with passive copper,” says Credo’s Barnetson. “And you do everything you can to run in passive copper as long as you can.”

The boom in liquid cooling for data-center computing is evidence of that, he says. “The entire reason people have gone to liquid cooling is to keep [scaling up] in passive copper,” Barnetson says. To connect more GPUs in a scale-up network with passive copper, they must be packed in at densities too high for air cooling alone to handle. Getting the same kind of scale-up from a more spread-out set of GPUs connected by millimeter-wave ARCs would ease the need for cooling, suggests Kuo.

Meanwhile, both startups are also chasing a version of the technology that will attach directly to the GPU.

Nvidia and Broadcom recently deployed optical transceivers that live inside the same package as a processor, separating the electronics and optics by micrometers rather than centimeters or meters. Right now, the technology is limited to the network-switch chips that connect to a scale-out network. But big players and startups alike are trying to extend its use all the way to the GPU.

Both Welch and Kuo say their companies’ technologies could have a big advantage over optical tech in such a transceiver-processor package. Nvidia and Broadcom—separately—had to do a mountain of engineering to make their systems possible to manufacture and reliable enough to exist in the same package as a very expensive processor. One of the many challenges is how to attach an optical fiber to a waveguide on a photonic chip with micrometer accuracy. Because of its short wavelength, infrared laser light must be lined up very precisely with the core of an optical fiber, which is only around 10 µm across. By contrast, millimeter-wave and terahertz signals have a much longer wavelength, so you don’t need as much precision to attach the waveguide. In one demo system it was done by hand, says Kuo.

Pluggable connections will be the technology’s first use, but radio transceivers co-packaged with processors are “the real prize,” says Welch.

Video Friday: Holiday Robot Helpers Send Season’s Greetings

2025-12-27 02:30:02



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

Happy Holidays from Boston Dynamics!

I would pay any amount of money for that lamp.

[ Boston Dynamics ]

What if evolution wasn’t carbon-based—but metal instead? This short film explores an alternative, iron-based evolution through robots, simulation, and real-world machines. Inspired by biological evolution, this Christmas lab film imagines a world where machines evolve instead of organisms.

[ ETH Zurich Robotics System Lab ]

Happy Holidays from FieldAI!

[ FieldAI ]

Happy Holidays from the Institute of Robotics and Machine Intelligence at Poznan University of Technology!

[ Poznan University of Technology IRMI ]

Happy Holidays from BruBotics!

[ AugmentX ]

Thanks, Bram!

[ Humanoid ]

Check out how SCUTTLE tackles the dull, dirty, and dangerous tasks of the pest control industry.

[ Ground Control Robotics ]

Happy Holidays from LimX Dynamics!

[ LimX Dynamics ]

Happy (actually maybe not AI?) Holidays from Kawasaki Robotics!

[ Kawasaki Robotics ]

Happy Holidays from AgileX Robotics

[ AgileX Robotics ]

Big news: Badminton just got a new training partner. Our humanoid robot can rally with a human in continuous exchanges, combining fast returns with stable movement. Peak return speed reaches 19.1 m/s.

[ Phybot ]

Well, here’s one way of deploying a legged robot.

[ Kepler ]

Today, we present the world’s first demo video of a full-size robot taking on the challenging Charleston dance.

[ PNDbotics ]

The DR02 humanoid robot from DEEP Robotics showcases remarkable versatility and agility. From the graceful flow of Tai Chi to the energetic moves of street dance, DR02 combines precision, strength, and artistry with ease!

[ Deep Robotics ]

Decreasing the Cost of Morphing in Adaptive Morphogenetic Robots: By using kirigami laminar jamming flippers, the Jamming Amphibious Robotic Turtle (JART) can quickly morph its limbs to adapt to changing terrain. This pneumatic layer jamming technology enables multi-environment locomotion on land and water by changing the robot’s flipper shape and stiffness to decrease the cost of transport.

[ Paper ]

Super Odometry is a resilient sensor-fusion framework that delivers accurate, real-time state estimation in challenging environments by integrating external and inertial sensing. For decades, SLAM has depended on external sensors like cameras and lidar. We argue it’s time to reverse this hierarchy: True robustness begins from within. By placing inertial sensing at the core of state estimation, robots gain an inner sense of motion. We believe in systems that not only see but also feel, learn, and adapt.

[ AirLab ]

The Top 8 Magnets and Motors Stories of 2025

2025-12-26 22:00:02



Rarely a week went by in 2025 without some newsworthy development related to rare earth elements, magnets, and electric motors. IEEE Spectrum was on top of the big ones, starting with the production of industrial quantities of the rare-earth oxides of neodymium and praseodymium at the Mountain Pass mine and processing facilities in California’s Mojave Desert.

Between 1965 and the mid-1980s, the Mountain Pass mine produced as much as 70 percent of the world’s annual supply of rare earths, which are used in nearly all powerful permanent magnets. But following a string of reversals and environmental mishaps, the facilities went into decline in the 1990s and 2000s. At the same time, Chinese producers, which were much less constrained by environmental regulations, began their astonishingly rapid ascent.

Today, China controls roughly 85 to 99 percent of the global market for key rare earth oxides and metals, on which huge and vital tech-based industries depend. The United States and its allies find themselves at China’s mercy for certain rare earths, including ones that are essential for motors, semiconductors, electroluminescent compounds, optoelectronics, and catalysis. These elements are in critical components of countless military systems, such as those in aircraft, submarines, weapons, and night-vision gear.

For these reasons, the resumption of mass production of rare earths at Mountain Pass, which was greatly scaled up during 2025, was a major development in geopolitics. The total output of the mine and its associated processing facilities, where the rare earth ore is turned into industrially useful oxides, is small, however, compared to China’s output.

The Trump administration invested a lot of time during 2025 trying to set up deals to establish rare-earth supply chains that do not depend on China. This effort started puzzlingly, with some high-profile arm-twisting of Ukraine, whose deposits are dismissed by mining experts, and with overtures about annexing Greenland, an autonomous territory of Denmark whose rare-earth deposits are enormous but, like Ukraine’s, unattractive from a mining standpoint. As the year wore on, the administration eventually settled on a strategy similar to that of the Biden administration, which emphasized investing in domestic production and working with allies, such as Australia, to strengthen and expand existing mining, refining, and magnet-making operations outside of the United States.

Mostly overlooked by the administration so far has been Canada (also one of Trump’s annexation targets). Canada has some exceptionally large reserves of rare earth elements, and it operates one of only about four sizable rare-earth-oxide refining plants outside of Asia. That Canadian plant, owned by Toronto-based Neo Performance Materials, is in Sillamäe, Estonia.

Here are eight of 2025’s most popular Spectrum articles on rare earth elements, magnets, and motors, ranked by the amount of time people spent reading them.

1. Rare Earths Reality Check: Ukraine Doesn’t Have Minable Deposits

Ukrainian President Zelenskyy and United States President Trump having a tense disagreement in the White House's Oval Office. Jabin Botsford/The Washington Post/Getty Images

The Trump administration’s first public move in its long-awaited rare-earths strategy was a head-scratcher. At a White House press conference on 28 February 2025, where observers were expecting to hear about a Ukraine-U.S. deal involving critical minerals, including rare earths, Trump instead got into a heated argument with Ukrainian President Volodymyr Zelenskyy. When the deal was finally signed, two months later, it made no sense to mining and rare-earths experts. Ukraine’s four substantial rare-earth deposits, they noted, were all in or near areas of active conflict with Russia. And two of them are a type of ore for which there are no existing processing technologies.

2. Inside an American Rare-Earth Boomtown

A cluster of industrial facilities is seen from the air, with a mountain range in the background. Michael Tessler/MP Materials

In 2024, the Mountain Pass mine and refining plant, in the northeastern Mojave Desert, became the only producer of rare-earth oxides anywhere in the Americas after it began producing neodymium and praseodymium oxides. The mine and plant had been essentially inactive since the early 2000s, but were rebuilt and rehabilitated starting in 2017 by a new company, MP Materials. In July 2025, MP Materials announced that the U.S. Department of Defense (now the Department of War) was investing $400 million to take a 15 percent stake in MP, and also guaranteeing a price “floor” of US $110 per kilogram for certain rare earth oxides. That price was roughly twice what China was charging at the time for those oxides.

3. Advanced Magnet Manufacturing Begins in the United States

A gloved lab worker using two fingers to hold up a small silvery rectangular object. Business Wire

Early in 2025, there was a flurry of announcements from companies touting plans to manufacture rare-earth magnets in the United States. The most interesting of these was from MP Materials, which operates the Mountain Pass mine and processing plants in California. MP announced it had begun producing neodymium-iron-boron magnets on a “trial” basis, at a plant in Texas that it would eventually scale up to 2,000 to 3,000 tonnes per year.

In mid-July, MP Materials announced a $500 million agreement with Apple to begin supplying NdFeB magnets to the computer giant, starting in 2027. Apple uses magnets in the speakers and haptic (vibrating) components of its phones and tablets, as well as in charging connectors such as its MagSafe cable.

4. Electric Propulsion Magnets Ready for Space Tests

Close up of a bagel-shaped magnet and black flux pump on an applied field magnetoplasmadynamic thruster. Randy Pollock

The mathematics of using chemical rockets for space travel is grim. They’re inefficient, slow, and require enormous amounts of fuel. They’re not really up to the task of colonizing Mars, let alone visiting the outer planets. So researchers have long investigated alternative means of propulsion, some involving the use of intense magnetic fields to accelerate and direct ions to produce thrust. At Victoria University in Wellington, New Zealand, researchers have demonstrated one such system, based on applied-field magnetoplasmadynamic thrusters. Their twist is using high-temperature superconducting tape to greatly reduce the power required to energize the electromagnets to a given magnetic-field strength. Hēki, a technology demonstration comprising the system’s novel superconducting components, minus the thruster itself, was launched to the International Space Station in September. It was installed on the exterior of the station and has been operated continuously since then, said Betina Pavri, senior principal engineer at the Robinson Research Institute at Victoria University, in an email exchange in late November.

5. Superconducting Motor Could Propel Electric Aircraft

A group of 11 men watch a large electric motor spin a propeller. Hinetics

The electrification of passenger aircraft faces several very steep technological challenges, one of which is the need for motors with extremely high specific power. Of the various ways of achieving that, one of the most interesting is through the use of high-temperature superconducting (HTS) materials in the coils of the motor. That’s the approach of startup Hinetics, which was spun out of the University of Illinois Urbana-Champaign and has received funding from the Advanced Research Projects Agency–Energy (ARPA-E). In its motors, designed with passenger aviation in mind, Hinetics is using HTS tape originally designed for winding the high-power electromagnets used in experimental Tokamak fusion reactors. Based on the performance of prototypes, the company believes it will soon achieve a specific power of 40 kW/kg, much higher than that of the radial-flux motors that now dominate commercial applications in vehicles and industrial machinery.

6. Airbus Is Working on a Superconducting Electric Aircraft

A futuristic medium-size passenger aircraft is shown flying over a large city. Airbus

For years, Airbus has had a high-profile corporate goal of building a large, zero-emission, single-aisle passenger airliner. The centerpiece of that effort was a project to build an ultraefficient, high-specific-power motor with superconducting coils. The motor would be powered by fuel cells running on liquid hydrogen. It’s a breathtakingly ambitious initiative, called ZEROe, and it’s far ahead of anything Airbus’s rival Boeing is doing. As recently as late March 2025, at a symposium for the press, Airbus’s CEO, Guillaume Faury, reaffirmed the company’s support for the blue-sky project. But he also cautioned that Airbus doesn’t see hydrogen-powered planes making substantial inroads into the passenger aviation market before the 2040s, a timeline widely interpreted as meaning the late 2040s.

7. Donut Lab’s New Motor Brings Power to the Wheel Hub

A thick car rim, without spokes or a hub, displayed on a pedestal. Donut Lab

In-wheel hub motors are one of the perennial grails of the electric-vehicle industry. They’d unleash remarkable opportunities, including torque vectoring: the ability to finely adjust the power and torque at each wheel to deliver ultraresponsive handling. But fundamental problems have long precluded their widespread use. One of these is unsprung weight, which in this case refers to the mass of the wheels; because that mass is not supported by the vehicle’s suspension, it can bounce around on rough terrain and make it very hard to provide a smooth ride. With its latest, highest-power motor, however, Donut Lab claims a weight of just 40 kg and a power rating of 650 kW, figures that it says render the unsprung-weight problem “insignificant.”
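As a quick sanity check on those figures, specific power is simply the power rating divided by the mass; a minimal sketch (the function name is illustrative, not from Donut Lab):

```python
# Specific power (power per unit mass) from the figures quoted above.
def specific_power_kw_per_kg(power_kw: float, mass_kg: float) -> float:
    """Return specific power in kW/kg."""
    return power_kw / mass_kg

# Donut Lab's claimed 650 kW in a 40-kg hub motor:
donut = specific_power_kw_per_kg(650, 40)
print(f"Donut Lab hub motor: {donut:.2f} kW/kg")  # 16.25 kW/kg
```

By this measure the claimed motor sits well above typical automotive radial-flux machines, though below the 40 kW/kg that aviation-focused designs like Hinetics’s are targeting.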

8. Can Geopolitics Unlock Greenland’s Critical Materials Treasure Chest?

Two workers in protective gear stand by mining equipment on rocky ground in Greenland. Hannibal Hanschke/Reuters/Redux

During 2025, the resource-rich, sparsely populated island of Greenland was repeatedly identified as an annexation target by President Trump and other members of his administration. In an interview on 9 January 2025, Michael Waltz, then Trump’s national security advisor, explicitly linked Trump’s interest in Greenland to critical minerals, including rare earths. However, mining Greenland’s rare earths on an industrial scale would require surmounting staggering challenges. To explain them in detail, we called on Flemming Getreuer Christiansen, a Danish mining and geology consultant with expertise in research and exploration projects in Greenland.

How to Stay Ahead of AI as an Early-Career Engineer

2025-12-26 01:00:02



“AI is not going to take your job. The person who uses AI is going to take your job.”

This is an idea that has become a refrain for, among others, Nvidia CEO Jensen Huang, who has publicly made the prediction several times since October 2023. Meanwhile, other AI developers and stalwarts say the technology will eliminate countless entry-level jobs. These predictions have come at the same time as reports of layoffs at companies including IBM and Amazon, causing anxiety for tech workers—especially those starting their careers, whose responsibilities are often more easily automated.

Early reports have borne out some of these anxieties in employment data. For example, entry-level hiring at the 15 biggest tech firms fell 25 percent from 2023 to 2024, according to a report from SignalFire last May. Still, it’s unclear what the long-term effects will be, or whether hiring cuts are actually a result of AI. For instance, while Meta laid off 600 employees from its AI division in October (and continued hiring other AI researchers), OpenAI began hiring junior software engineers.

This article is part of our special report Top Tech 2026.

In 2026, all new graduates may face a tougher job market in the United States. Employers’ rating of the job market for college graduates is now at its most pessimistic since 2020, according to data from the National Association of Colleges and Employers (NACE) Job Outlook 2026 survey. However, 49 percent of respondents still consider the job market “good” or “very good.”

So, what does the rise of generative AI mean for early-career engineers?

“This is a tectonic shift,” says Hugo Malan, president of the science, engineering, technology and telecom reporting unit within the staffing agency Kelly Services. AI agents aren’t poised to replace workers one-to-one, though. Instead, there will be a realignment of which jobs are needed, and what those roles look like.

How Jobs Are Changing

When publicly available AI tools first arrived, Malan says the expectation was that jobs like call-center roles would be most vulnerable. “But what nobody predicted was that the biggest impact by far would be on programmers,” a trend he attributes to the relatively solitary and highly structured nature of the work. He notes that, while other economic conditions also factor into the job market, the pace of programmer employment decline has accelerated since generative AI came on the scene. In the United States, overall programmer employment fell a dramatic 27.5 percent between 2023 and 2025, according to data from the U.S. Bureau of Labor Statistics. But employment for software developers—a distinct, more design-oriented position in the government data—only fell 0.3 percent in the same period.

At the same time, some positions, such as information security analyst and AI engineer, are actually growing, Malan says. “There’s been this pretty dramatic readjustment of the job landscape, even with as narrow a field as IT. Within IT, some jobs have exploded, like InfoSec analysts have grown in double digits, whereas programmers declined double digits” over the past few years, he says. (Eventually, Malan says he expects generative AI to affect all intellectual work.)

Job responsibilities also appear to be changing. For recent graduates pursuing roles labeled as software-engineering jobs, “they’re not necessarily just coding,” says Jamie Grant, senior associate director for the engineering team at the University of Pennsylvania’s career services. “There tends to be so much higher-order thinking and knowledge of the software-development life cycle,” as well as a need to work with other parties, such as understanding user and client demands, she says.

Using AI to Your Advantage

In her work advising Penn students, Grant hears concerns about AI’s effects on the job market from many engineering students and their parents. But during conversations with them, she says she tries to maintain an ethos of “we can make this work for us, not against us.”

According to a report from the Stanford Digital Economy Lab, jobs involving tasks that could be automated with AI appear to be more susceptible to early-career employment dips than those where AI augments an employee’s ability to perform their job. The NACE data supports this: Sixty-one percent of employers say they are not replacing entry-level jobs with AI, while 41 percent are discussing or planning to augment these jobs with AI within the next five years.

Two charts showing 12-month employment averages over time, from the 1980s until present. Over the past few years, computer-programmer employment in the United States has dropped sharply—but overall employment in the computing industry hasn’t seen the same decline.

“Think about an exoskeleton that you could wear that allows you to lift 1,000 pounds,” Grant says. “AI should be, just as the people at Stanford say, an augmentation to your work, to your higher-order critical-thinking skills.” That being said, she advises students to be cautious of the risks, such as sharing sensitive or proprietary information with a chatbot.

At this point, Grant thinks proficiency with AI tools is an unwritten expectation of many employers. But students and early-career workers should also recognize where AI can’t help. “AI can’t necessarily be with you in that moment of negotiation or of client-relationship development,” she says. “You still need to be able to perform at your highest level of capabilities.” And foundational skills like problem solving and communication are consistently prioritized by employers.

How Education Needs to Change

With AI tools performing more of the “grunt work” that has served as a training ground for early-career workers, expectations for recent graduates are high. In the past, junior engineers have cultivated proficiency while doing simpler, more task-oriented work. “But if all of those are going to get taken over, you need to slot in at a higher level almost from day one,” Malan says. This leaves recent graduates in a difficult spot.

To help students prepare, the education system will likely need to change, for instance by encouraging students to become proficient using AI and take on more hands-on, experiential learning.

Today’s employers are looking for demonstrated skills, says Grant. “If you’re just going to class and doing projects and maybe getting a great GPA, that’s amazing. But you also need to be applying what you’re learning,” she says. Industry experience and demonstrated proficiencies are among the top factors considered by employers surveyed in NACE’s Job Outlook 2026.

One solution may even lie in entirely different educational models, like apprenticeship. “Often, students in a more traditional computer-software degree program get a lot of theoretical knowledge,” but they may not have much experience building software on a team, says Mike Roberts, founder and CEO of the nonprofit Creating Coding Careers. Recent graduates may not be ready to ship code on day one—but AI can. Apprenticeship allows students to learn on the job in a structured program, and helps “to much more effectively close the experience gap,” Roberts says.

Training the next generation of humans might also better serve the long-term interests of employers, he says. In today’s software engineering, many companies tend to be short-sighted in their hiring, thinking more of the next quarter than four or five years down the line. But “if you don’t train new early entrants into the market, you will eventually have no more people becoming mid-levels,” says Roberts. “It’s very myopic.”

Also, AI can help ramp up new employees faster than ever. “I find it an exciting time, because it’s never been faster to build high-quality software,” Roberts says. “But it’s weird that folks are not seeing the virtue of continuing to invest in humans.”

The Top 8 Computing Stories of 2025

2025-12-25 22:00:02



This year, AI continued looming large in the software world. But more than before, people are wrestling with both its amazing capabilities and its striking shortcomings. New research has found that AI agents are doubling the length of task they can do every seven months—an astounding rate of exponential growth. But the quality of their work still suffers, clocking in at about a 50 percent success rate on the hardest tasks. Chatbots are assisting coders and even coding autonomously, but this may not help solve the biggest and costliest IT failures, which stem from managerial failures that have remained constant for the past twenty years or more.

AI’s energy demands continue to be a major concern. To try to alleviate the situation, a startup is working on cutting the heat produced in computation by making computing reversible. Another is building a computer of actual human brain cells, capable of running tests on drug candidates. And some are even considering sending data centers to the moon.

1. The Top Programming Languages 2025

Illustration of a laptop surrounded by different programming languages.iStock

While the rankings of software languages this year were rather predictable—yes, Python is still number one—the future of software engineering is as uncertain as can be. With AI chatbots assisting many with coding tasks, or just coding themselves, it is becoming increasingly difficult to gather reliable data on what software engineers are working on day-to-day. People no longer post their questions on StackExchange or a similar site—they simply ask a chatbot.

This year’s top programming languages list does its best to work with this limited data, but it also poses a question: In a world where AI writes much of our code, how will programming languages change? Will we even need them, or will the AI simply bust out optimized assembly code, without the need for abstraction?

2. How IT Managers Fail Software Projects

Race car crashes into wall, digital binary code exploding, dramatic sky in background.Eddie Guy

Robert Charette, lifelong technologist and frequent IEEE Spectrum contributor, wrote back in 2005 about all the known, preventable reasons software projects end in disaster. Twenty years later, nothing has changed—except for trillions more dollars lost on software failures. In this 3,500-plus-word screed, Charette walks through multiple case studies, backed up by statistics, documenting the paltry state of IT management as it is—still—done today. And to top it off, he explains why AI will not come to the rescue.

3. Human Brain Cells on a Chip For Sale

White box with wires and tubes leading into different pieces of a machine. Cortical Labs

Australian startup Cortical Labs announced that they are selling a biocomputer powered by 800,000 living human neurons on a silicon chip. For US $35,000, you get what amounts to a mini-brain in a box that can learn, adapt, and respond to stimuli in real time. The company already proved the concept by teaching lab-grown brain cells to play Pong (they often beat standard AI algorithms at learning efficiency). But the real application is drug discovery. This “little brain in a vat,” as one scientist put it, lets researchers test whether experimental drugs restore function to impaired neural cultures.

4. Large Language Models Are Improving Exponentially

AI success rate graph from 2019 to 2030 for tasks by model version and time completion Model Evaluation & Threat Research

It’s difficult to agree on a consistent way to evaluate how well large language models (LLMs) are performing. The nonprofit research organization Model Evaluation & Threat Research (METR) proposed an intuitive metric—tracking how long it would take a human to do the tasks AI can do. By this metric, LLM capabilities are doubling every seven months. If the trend continues, by 2030, the most advanced models could quickly handle tasks that currently take humans a full month of work. But, for now, the AI doesn’t always do a good job—the chance the work will be done correctly, for the longest and most challenging tasks, is about 50 percent. So the question is: How useful is a fast, cheap employee that produces garbage about half the time?
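METR’s trend amounts to simple exponential growth with a seven-month doubling time. The sketch below assumes a hypothetical one-hour baseline task in mid-2025 (the actual baseline varies by model) and projects five years forward:

```python
# Sketch of METR's metric: the length of task (in human-hours) an AI
# can complete doubles every 7 months. The baseline is hypothetical.
def projected_task_hours(baseline_hours: float, months_elapsed: float,
                         doubling_months: float = 7.0) -> float:
    """Task length after exponential growth with a fixed doubling time."""
    return baseline_hours * 2 ** (months_elapsed / doubling_months)

# Hypothetical: a model handling 1-hour tasks in mid-2025,
# projected 60 months ahead to mid-2030.
hours_2030 = projected_task_hours(1.0, 60)
print(f"Projected task length in 2030: {hours_2030:.0f} hours")
```

Under these assumptions the growth factor over five years is roughly 380x, which is how a one-hour task in 2025 becomes weeks of human work by 2030, consistent with the article’s "full month of work" framing.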

5. Reversible Computing Escapes the Lab in 2025

Collage of a computer chip connected to a matching ball of yarn. Edmon de Haro

There is a surprising principle that connects all software to the underlying physics of hardware: Erasing a bit of information in a computer necessarily costs energy, usually lost as heat. The only way to avoid losing this energy is to never erase information. This is the basic idea behind reversible computing—an approach that has remained in the academic sphere until this year.

After three decades of academic research, reversible computing is finally going commercial with startup Vaire Computing. Vaire’s first prototype chip recovers energy in an arithmetic circuit. The team claims that with their approach, they could eventually deliver a 4,000x energy efficiency improvement over conventional chips. The catch is that this requires new gate architectures, new design tools, and integrating MEMS resonators on chip. But with a prototype already in the works, reversible computing has graduated from “interesting theory” to “we’re actually building this.”

6. Airbnb’s Dying Software Gets a Second Life

Pixel art of four pinwheels. Nicole Millman

Apache Airflow—the open-source workflow orchestration software originally built by Airbnb—was basically dead by 2019. Then, one enthusiastic open-source contributor stumbled across it while working in IoT and thought “this is too good to die.” He rallied the community, and by late 2020 they shipped Airflow 2.0. Now the project is thriving. It boasts 35 to 40 million downloads per month and over 3,000 contributors worldwide. And Airflow 3.0 launched with a modular architecture that can run anywhere.

7. The Doctor Will See Your Electronic Health Records Now

A photo collage of a doctor looking at a device with various electronic medical records pieced together around him iStock/IEEE Spectrum

In 2004, President Bush set a goal for the United States to transition to electronic health records (EHRs) by 2014, promising transformed healthcare and huge cost savings. Twenty years and over $100 billion later, we’ve achieved widespread EHR adoption—and created a different nightmare. Doctors now spend on average 4.5 hours per day staring at screens and clicking through poorly designed software systems instead of looking at patients.

The rush to adopt EHRs before they were ready meant ignoring warnings about systems engineering, interoperability, and cybersecurity. Now we’re stuck with fragmented systems that don’t talk to each other (the average hospital uses 10 different EHR vendors internally) and physicians experiencing record burnout levels. And to top it off, data breaches have exposed 520 million records since 2009. Healthcare costs haven’t bent downward as promised—they’ve hit $4.8 trillion, or 17.6 percent of GDP. The irony? AI scribes are now being developed to solve the problems that the last generation of technology created, allowing doctors to actually look at patients again instead of their keyboards.

8. Is it Lunacy to Put a Data Center On the Moon?

A mini data center next to various wires inside a lunar lander. Intuitive Machines

Whether space-based or moon-based data centers are a promising avenue or a fever dream is the subject of much debate. Nevertheless, earlier this year the company Lonestar Data Holdings sent a 1-kilogram, 8-terabyte mini data center to the moon aboard an Intuitive Machines lander. The goal is to protect sensitive data from Earthly disasters (undersea cable cuts, hurricanes, wars) and exploit a loophole in data sovereignty laws—because the moon isn’t subject to any nation’s jurisdiction, you can host black boxes under any country’s law you want. The lunar surface offers permanently shadowed craters at -173 °C, which may make cooling easier (although the lack of an atmosphere means heat can be shed only by thermal radiation, which is challenging). Nearby sunlit peaks would provide solar power. Governments are interested—Florida and the Isle of Man are already storing data there. But the problems are obvious: 1.4-second latency rules out real-time applications, fixing anything requires a moon mission, and bandwidth is terrible.
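The latency figure is easy to sanity-check from light travel time. A rough sketch using the mean Earth-Moon distance (the quoted 1.4 seconds presumably allows for the Moon’s varying distance and signal handling):

```python
# Back-of-envelope check on lunar latency: one-way light travel time
# from Earth to the Moon at the mean orbital distance.
SPEED_OF_LIGHT_KM_S = 299_792.458
MEAN_EARTH_MOON_KM = 384_400  # actual distance varies ~356,500-406,700 km

one_way_s = MEAN_EARTH_MOON_KM / SPEED_OF_LIGHT_KM_S
print(f"One-way delay at mean distance: {one_way_s:.2f} s")  # 1.28 s
```

A round trip therefore takes at least about 2.6 seconds before any processing, which is why interactive applications are off the table regardless of bandwidth.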