
In Edison’s Revenge, Data Centers Are Transitioning From AC to DC

2026-03-25 00:00:05



Last week’s Nvidia GTC conference highlighted new chip architectures to power AI. But as the chips become faster and more powerful, the rest of the data center’s infrastructure is playing catch-up. The power delivery community is responding: Announcements from Delta, Vertiv, and Eaton showcased new designs for the AI era. Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.

“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says Chris Thompson, vice president of advanced technology and global microgrids at Vertiv.

AC to DC Conversion Challenges

Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1 kV to 35 kV), is stepped down to low-voltage AC (480 V or 415 V) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.

“The double conversion process ensures the output AC is clean, stable and suitable for data center servers,” says Luiz Fernando Huet de Bacellar, vice president of engineering and technology at Eaton.

That setup worked well enough for the power levels of traditional data centers, whose computational racks draw on the order of 10 kW each. For AI, rack power is starting to approach 1 MW. At that scale, the energy losses, current levels, and copper requirements of repeated AC-to-DC conversions become increasingly difficult to justify. Every conversion incurs some power loss. On top of that, as the amount of power being delivered grows, the sheer size of the converters, along with the mass of copper busbar needed to carry the current, becomes untenable. According to an Nvidia blog, a 1 MW rack could require as much as 200 kg of copper busbar. For a 1 GW data center, that could amount to 200,000 kg of copper.
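To see how the chain adds up, here is a minimal sketch of cascaded conversion losses. The stage efficiencies are illustrative round numbers, not figures from Eaton, Nvidia, or this article:

    # Illustrative only: the stage efficiencies below are assumed round numbers.
    stages = {
        "MV-to-LV transformer": 0.99,
        "UPS rectifier (AC to DC)": 0.97,
        "UPS inverter (DC to AC)": 0.97,
        "Server power supply (AC to 54 V DC)": 0.96,
    }

    overall = 1.0
    for name, efficiency in stages.items():
        overall *= efficiency
        print(f"{name:<38} {efficiency:.0%}  cumulative: {overall:.1%}")

    # Each stage looks efficient on its own, but the chain multiplies out to
    # roughly 89 percent, so a 1 MW rack sheds on the order of 100 kW as heat
    # before its chips see any power.
    print(f"Lost in conversion for a 1 MW load: {(1 - overall) * 1000:.0f} kW")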

Benefits of High-Voltage DC Power

By converting 13.8 kV AC grid power directly to 800 V DC at the data center perimeter, most intermediate conversion steps are eliminated. This reduces the number of fans and power supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint.

“Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Fernando.

Switching from 415 V AC to 800 V DC in electrical distribution enables 85 percent more power to be transmitted through the same conductor size. This happens because higher voltage reduces the current required, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, cutting copper requirements by 45 percent while delivering a 5 percent improvement in efficiency and a 30 percent lower total cost of ownership for GW-scale facilities.
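The underlying arithmetic is simple Ohm's-law bookkeeping, sketched below. The busbar resistance is an arbitrary placeholder, and treating the 415 V AC feed as if it were DC ignores power factor and conductor count, so this illustrates the trend rather than reproducing the exact percentages above:

    # Illustrative only: 415 V treated as DC, resistance chosen for comparison.
    RACK_POWER_W = 1_000_000       # a 1 MW AI rack, per the article
    BUSBAR_RESISTANCE_OHM = 0.001  # placeholder value

    for volts in (415, 800):
        current = RACK_POWER_W / volts                  # I = P / V
        loss_w = current ** 2 * BUSBAR_RESISTANCE_OHM   # I^2 * R
        print(f"{volts} V: {current:,.0f} A, resistive loss ~{loss_w / 1000:.1f} kW")

    # Roughly halving the current cuts I^2*R loss in the same conductor by
    # nearly a factor of four, or lets designers use a much thinner conductor
    # for the same loss, which is where the copper savings come from.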

“In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800 V DC and then distributed throughout the facility on a DC bus,” said Vertiv’s Thompson. “At the rack, compact DC-DC converters step that voltage down for GPUs and CPUs.”

A report from technology advisory group Omdia claims that higher voltage DC data centers have already appeared in China. In the Americas, the Mt. Diablo Initiative (a collaboration among Meta, Microsoft, and the Open Compute Project) is a 400 V DC rack power distribution experiment.

Innovations in DC Power Systems

A handful of vendors are trying to get ahead of the game. Vertiv’s 800 V DC ecosystem, which integrates with NVIDIA Vera Rubin Ultra Kyber platforms, will be commercially available in the second half of 2026. Eaton, too, is well advanced in its 800 V DC systems innovation, courtesy of a medium-voltage solid-state transformer (SST) that will sit at the heart of its DC power distribution system. Meanwhile, Delta has released 800 V DC in-row 660 kW power racks with a total of 480 kW of embedded battery backup units. And SolarEdge is hard at work on a 99 percent efficient SST that will be paired with a native DC UPS and a DC power distribution layer.

But much of the industry is far behind. Patrick Hughes, senior vice president of strategy, technical, and industry affairs for the National Electrical Manufacturers Association, says most innovation is happening at the 400 V DC level, though some are preparing 800 V DC. He believes the industry needs a complete, coordinated ecosystem, including power electronics, protection, connectors, sensing, and service‑safe components that scale together rather than in isolation. That, in turn, requires retooling manufacturing capacity for DC‑specific equipment, expanding semiconductor and materials supply, and clear, long‑term demand commitments that justify major capital investment across the value chain.

“Many are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments,” said Hughes. “Building the supply chain will hinge on stabilizing standards and safety frameworks so suppliers can design, certify, manufacture, and install equipment with confidence.”

What Will It Take to Build the World’s Largest Data Center?

2026-03-24 23:00:05



The undying thirst for smarter (historically, that means larger) AI models and greater adoption of the ones we already have has led to an explosion in data-center construction projects, unparalleled both in number and scale. Chief among them is Meta’s planned 5-gigawatt data center in Louisiana, called Hyperion, announced in June of 2025. Meta CEO Mark Zuckerberg said Hyperion will “cover a significant part of the footprint of Manhattan,” and the first phase—a 2-GW version—will be completed by 2030.

Though the project’s stated 5-GW scale is the largest among its peers, it’s just one of several dozen similar projects now underway. According to Michael Guckes, chief economist at construction-software company ConstructConnect, spending on data centers topped US $27 billion by July of 2025 and, once the full-year figures are tallied, will easily exceed $40 billion. Hyperion alone accounts for about a quarter of that.

For the engineers assigned to bring these projects to life, the mix of challenges involved represents a unique moment. The world’s largest tech companies are opening their wallets to pay for new innovations in compute, cooling, and network technology designed to operate at a scale that would’ve seemed absurd five years ago.

At the same time, the breakneck pace of building comes paired with serious problems. Modern data-center construction frequently requires an influx of temporary workers and sharply increases noise, traffic, pollution, and often local electricity prices. And the environmental toll remains a concern long after facilities are built due to the unprecedented 24/7 energy demands of AI data centers, which, according to one recent study, could emit the equivalent of tens of millions of tonnes of CO2 annually in the United States alone.

Regardless of these issues, large AI companies, and the engineers they hire, are going full steam ahead on giant data-center construction. So, what does it really take to build an unprecedentedly large data center?

AI Rewrites Building Design

The stereotypical data-center building rests on a reinforced concrete slab foundation. That’s paired with a steel skeleton and poured concrete wall panels. The finished building is called a “shell,” a term that implies the structure itself is a secondary concern. Meta has even used gigantic tents to throw up temporary data centers.

Still, the scale of the largest AI data centers brings unique challenges. “The biggest challenge is often what’s under the surface. Unstable, corrosive, or expansive soils can lead to delays and require serious intervention,” says Robert Haley, vice president at construction consulting firm Jacobs. Amanda Carter, a senior technical lead at Stantec, said a soil’s thermal conductivity is also important, as most electrical infrastructure is placed underground. “If the soil has high thermal resistivity, it’s going to be difficult to dissipate [heat].” Engineers may take hundreds or thousands of soil samples before construction can begin.

GPUs



Modern AI data centers often use rack-scale systems, such as the Nvidia GB200 NVL72, each of which occupies a single data-center rack. Each rack contains 72 GPUs, 36 CPUs, and up to 13.4 terabytes of GPU memory. The racks measure over 2.2 meters tall and weigh over one and a half tonnes, forcing AI data centers to use thicker concrete with more reinforcement to bear the load.

A single GB200 rack can use up to 120 kilowatts. If Hyperion meets its 5-gigawatt goals, the data-center campus could include over 41,000 rack-scale systems, for a total of more than 3 million GPUs. The final number of GPUs used by Hyperion is likely to be less than that, though only because future GPUs will be larger, more capable, and use more power.
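The arithmetic behind those figures uses only the numbers quoted above, as the short sketch below shows; as noted, the real count will be lower because future racks will each draw more power:

    # Inputs are the article's figures: a 5 GW campus and 120 kW GB200 NVL72
    # racks with 72 GPUs apiece. This is a rough upper bound, not Meta's plan.
    CAMPUS_POWER_W = 5e9
    RACK_POWER_W = 120e3
    GPUS_PER_RACK = 72

    racks = CAMPUS_POWER_W / RACK_POWER_W
    gpus = racks * GPUS_PER_RACK
    print(f"~{racks:,.0f} racks, ~{gpus:,.0f} GPUs")
    # -> ~41,667 racks, ~3,000,000 GPUs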

Money



According to ConstructConnect, spending on data centers neared US $27 billion through July of 2025 and, according to the latest data, will tally close to $60 billion through the end of the year. Meta’s Hyperion project is a big slice of the pie, at $10 billion.

Data-center spending has become an important prop for the construction industry, which is seeing reduced demand in other areas, such as residential construction and public infrastructure. ConstructConnect’s third quarter 2025 financial report stated that the quarter’s decline “would have been far more severe without an $11 billion surge in data center starts.”


There’s apparently no shortage of eligible sites, however, as both the number of data centers under construction and the money spent on them have skyrocketed. The spending has allowed companies building data centers to throw out the rule book. Prior to the AI boom, most data centers relied on tried-and-true designs that prioritized inexpensive and efficient construction. Big tech’s willingness to spend has shifted the focus to speed and scale.

The loose purse strings open the door to larger and more robust prefabricated concrete wall and floor panels. Doug Bevier, director of development at Clark Pacific, says some concrete floor panels may now span up to 23 meters and need to handle floor loads up to 3,000 kilograms per square meter, which is more than twice the load international building codes normally define for manufacturing and industry. In some cases, the concrete panels must be custom-made for a project, an expensive step that the economics of pre-AI data centers rarely justified.

Simultaneously, the time scale for projects is also compressed: Jamie McGrath, senior vice president of data-center operations at Crusoe, says the company is delivering projects in “about 12 months,” compared to 30 to 36 months before. Not all projects are proceeding at that pace, but speed is universally a priority.

That makes it difficult to coordinate the labor and materials required. Meta’s Hyperion site, located in rural Richland Parish, Louisiana, is emblematic of this challenge. As reported by NOLA.com, at least 5,000 temporary workers have flocked to the area, which has only about 20,000 permanent residents. These workers earn above-average wages and bring a short-term boost for some local businesses, such as restaurants and convenience stores. However, they have also spurred complaints from residents about traffic and construction noise and pollution.

This friction with residents includes not only these obvious impacts, but also things you might not immediately suspect, such as light pollution caused by around-the-clock schedules. Also significant are changes to local water tables and runoff, which can reduce water quality for neighbors who rely on well water. These issues have motivated a few U.S. cities to enact data-center bans.

Data Centers Often Go BYOP (bring your own power)

Meta’s Richland Parish site also highlights a problem that’s priority No. 1 for both AI data centers and their critics: power.

Data centers have always drawn large amounts of power, which nudged data-center construction to cluster in hubs where local utilities were responsive to their demands. Virginia’s electric utility, Dominion Energy, met demand with agreements to build new infrastructure, often with a focus on renewable energy.

The power demands of the largest AI data centers, though, have caught even the most responsive utilities off guard. A report from the Lawrence Berkeley National Laboratory, in California, estimated the entire U.S. data-center industry consumed an average load of roughly 8 GW of power in 2014. Today, the largest AI data-center campuses are built to handle up to a gigawatt each, and Meta’s Hyperion is projected to require 5 GW.

“Data centers are exacerbating issues for a lot of utilities,” says Abbe Ramanan, project director at the Clean Energy Group, a Vermont-based nonprofit.

Ramanan explains that utilities often use “peaker plants” to cope with extra demand. They’re usually older, less efficient fossil-fuel plants which, because of their high cost to operate and carbon output, were due for retirement. But Ramanan says increased electricity demand has kept them in service.

Meta secured power for Hyperion by negotiating with Entergy, Louisiana’s electric utility, for construction of three new gas-turbine power plants. Two will be located near the Richland Parish site, while a third will be located in southeast Louisiana.

Entergy frames the new plants as a win for the state. “A core pillar of Entergy and Meta’s agreement is that Meta pays for the full cost of the utility infrastructure,” says Daniel Kline, director of power-delivery planning and policy at Entergy. The utility expects that “customer bills will be lower than they otherwise would have been.” That would prove an exception, as a recent report from Bloomberg found electricity rates in regions with data centers are more likely to increase than in regions without.

CO2



Research published in Nature in 2025 projects that data-center emissions will range from 24 million to 44 million CO2-equivalent metric tonnes annually through 2030 in the United States alone. While some materials used in data centers, such as concrete, lead to significant emissions, the majority of these emissions will result from the high energy demands of AI servers.

Estimating the carbon emissions of Hyperion is difficult, as the project won’t be completed until 2030. Assuming that the three new natural gas plants that are planned for construction as part of the project produce emissions typical for their type, however, the plants could lead to full life-cycle emissions of between 4 million and 10 million metric tons of CO2 annually—roughly equivalent to the annual emissions of a country like Latvia.

Concrete



Data centers are typically built from concrete, with steel used as a skeleton to reinforce and shape the concrete shell. While the foundation is often poured concrete, the walls and floors are most often built from prefabricated concrete panels that can span up to 23 meters. Floors use a reinforced T-shape, similar to a steel girder, measuring up to 1.2 meters across at its thickest point. The largest data centers include hundreds of these concrete panels.

The American Cement Association projects that the current surge in building will require 1 million tonnes of cement over the next three years, though that’s still a tiny fraction of the overall cement industry, which weighed in at roughly 103 million tonnes in 2024.


The plants, which will generate a combined 2.26 GW, will use combined-cycle gas turbines that recapture waste heat from exhaust. This boosts thermal efficiency to 60 percent and beyond, meaning more fuel is converted to useful energy. Simple-cycle turbines, by contrast, vent the exhaust, which lowers efficiency to around 40 percent.
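A rough back-of-envelope calculation suggests where estimates like the one that follows can come from. Only the 2.26-GW combined capacity is from the article; the capacity factors and emission intensities below are generic assumptions for combined-cycle gas plants, not figures from Entergy or Meta:

    # Generic assumptions, for illustration only; only the 2.26 GW combined
    # capacity comes from the article.
    PLANT_CAPACITY_GW = 2.26
    HOURS_PER_YEAR = 8760

    scenarios = {
        "lower bound (moderate use, combustion only)": (0.50, 0.37),
        "upper bound (near-constant use, full life cycle)": (0.90, 0.50),
    }

    for name, (capacity_factor, kg_co2e_per_kwh) in scenarios.items():
        energy_kwh = PLANT_CAPACITY_GW * 1e6 * HOURS_PER_YEAR * capacity_factor
        megatonnes = energy_kwh * kg_co2e_per_kwh / 1e9
        print(f"{name}: ~{megatonnes:.1f} million tonnes CO2e per year")
    # -> roughly 3.7 to 8.9 million tonnes, in line with the range cited below.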

Even so, total life-cycle emissions for the Hyperion plants could range from 4 million to over 10 million tonnes of CO2 each year, depending on how frequently the plants are put in use and the final efficiency benchmarks once built. On the high end, that’s as much CO2 as produced by over 2 million passenger cars. Fortunately, not all of Meta’s data centers take the same approach to power. The company has announced a plan to power Prometheus, a large data-center project in Ohio scheduled to come online before the end of 2026, with nuclear energy.

But other big tech companies, spurred by the need to build data centers quickly, are taking a less efficient approach.

xAI’s Colossus 2, located in Memphis, is the most extreme example. The company trucked dozens of temporary gas-turbine generators to power the site located in a suburban neighborhood. OpenAI, meanwhile, has gas turbines capable of generating up to 300 megawatts at its new Stargate data center in Abilene, Texas, slated to open later in 2026. Both use simple-cycle turbines with a much lower efficiency rating than the combined-cycle plants Entergy will build to power Hyperion.

Demand for gas turbines is so intense, in fact, that wait times for new turbines are up to seven years. Some data centers are turning toward refurbished jet engines to obtain the turbines they need.

AI Racks Tip the Scales

The demand for new, reliable power is driven by the power-hungry GPUs inside modern AI data centers.

In January of 2025, Mark Zuckerberg announced in a post on Facebook that Meta planned to end 2025 with at least 1.3 million GPUs in service. OpenAI’s Stargate data center plans to use over 450,000 Nvidia GB200 GPUs, and xAI’s Colossus 2, an expansion of Colossus, is built to accommodate over 550,000 GPUs.

GPUs, which remain by far the most popular for AI workloads, are bundled into human-scale monoliths of steel and silicon which, much like the data centers built to house them, are rapidly growing in weight, complexity, and power consumption.

Memory



In addition to raw compute performance, Nvidia GB200 NVL72 racks also require huge amounts of memory. An Nvidia GB200 NVL72 rack may include up to 13.4 terabytes of high-bandwidth memory, which implies a data-center campus at Hyperion’s scale will require at least several dozen petabytes.

The immense demand has sent memory prices soaring: The price of DRAM, specifically DDR5, has increased 172 percent in 2025.

Power


Hyperion is expected to use 5 gigawatts of power across 11 buildings, which works out to just under 500 megawatts per building, assuming each will be similar to its siblings. That’s enough to power roughly 4.2 million U.S. homes.

Just one Hyperion data center built at the Richland Parish site will consume twice as much power as xAI’s Colossus which, at the time of its completion in the summer of 2024, was among the largest data centers yet built.


Nvidia’s GB200 NVL72—a rack-scale system—is currently a leading choice for AI data centers. A single GB200 rack contains 72 GPUs, 36 CPUs, and up to 17 terabytes of memory. It measures 2.2 meters tall, tips the scales at up to 1,553 kilograms, and consumes about 120 kilowatts—as much as around 100 U.S. homes. And this, according to Nvidia, is just the beginning. The company anticipates future racks could consume up to a megawatt each.

Viktor Petik, senior vice president of infrastructure solutions at Vertiv, says the rapid change in rack-scale AI systems has forced data centers to adapt. “AI racks consume far more power and weigh more than their predecessors,” says Petik. He adds that data centers must supply racks with multiple power feeds, without taking up extra space.

The new power demands from rack-scale systems have consequences that are reflected in the design of the data center—even its footprint.

In 2022 Meta broke ground on a new data center at a campus in Temple, Texas. According to SemiAnalysis, which studies AI data centers, construction began with the intent to build the data center in an H-shaped configuration common to other Meta data centers.

LAND




Meta CEO Mark Zuckerberg kicked off the buzz around Hyperion by saying it would cover a large chunk of Manhattan. Many took that to mean Hyperion would be a single building of that size, which isn’t correct. Hyperion will actually be a cluster of data centers—11 are currently planned—with over 370,000 square meters of floor space. That’s a lot smaller even than New York City’s Central Park, which covers 6 percent of Manhattan.

Meta has room to grow, however. The Richland Parish site spans 14.7 million square meters in total, which is about a quarter the area of Manhattan. And the 370,000 square meters of floor space Hyperion is expected to provide doesn’t include external infrastructure, such as the three new combined-cycle gas power plants Louisiana utility Entergy is building to power the project.




Construction was paused midway in December of 2022, however, as part of a company-wide review of Meta’s data-center infrastructure. Meta decided to knock down the structure it had built and start from scratch. The reasons for this decision were never made public, but analysts believe it was due to the old design’s inability to deliver sufficient electricity to new, power-hungry AI racks. Construction resumed in 2023.

Meta’s replacement ditches the H-shaped building for simple, long, rectangular structures, each flanked by rows of gas-turbine generators. While Meta’s plans are subject to change, Hyperion is currently expected to comprise 11 rectangular data centers, each packed with hundreds of thousands of GPUs, spread across the 13.6-square-kilometer Richland Parish campus.

Cooling, and Connecting, at Scale

Nvidia’s ultradense AI GPU racks are changing data centers not only with their weight and power draw, but also with their intense cooling and bandwidth requirements.

Data centers traditionally use air cooling, but that approach has reached its limits. “Air as a cooling medium is inherently inferior,” says Poh Seng Lee, head of CoolestLAB, a cooling research group at the National University of Singapore.

Instead, going forward, GPUs will rely on liquid cooling. However, that adds a new layer of complexity. “It’s all the way to the facilities level,” says Lee. “You need pumps, which we call a coolant distribution unit. The CDU will be connected to racks using an elaborate piping network. And it needs to be designed for redundancy.” On the rack, pipes connect to cold plates mounted atop every GPU; outside the data-center shell, pipes route through evaporation cooling units. Lee says retrofitting an air-cooled data center is possible but expensive.

The networking used by AI data centers is also changing to cope with new requirements. Traditional data centers were positioned near network hubs for easy access to the global internet. AI data centers, though, are more concerned with networks of GPUs.

These connections must sustain high bandwidth with impeccable reliability. Mark Bieberich, a vice president at network infrastructure company Ciena, says its latest fiber-optic transceiver technology, WaveLogic 6, can provide up to 1.6 terabits per second of bandwidth per wavelength. A single fiber can support 48 wavelengths in total, and Ciena’s largest customers have hundreds of fiber pairs, placing total bandwidth in the thousands of terabits per second.
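Those per-wavelength figures compound quickly. In the sketch below, the per-wavelength rate and wavelength count come from Ciena's numbers above, while the 200 fiber pairs is an assumed stand-in for "hundreds":

    # Per-wavelength rate and wavelength count are from the article; the
    # fiber-pair count is an assumed example.
    TBPS_PER_WAVELENGTH = 1.6
    WAVELENGTHS_PER_FIBER = 48
    FIBER_PAIRS = 200

    per_fiber_tbps = TBPS_PER_WAVELENGTH * WAVELENGTHS_PER_FIBER
    total_tbps = per_fiber_tbps * FIBER_PAIRS
    print(f"{per_fiber_tbps:.1f} Tb/s per fiber, ~{total_tbps:,.0f} Tb/s across {FIBER_PAIRS} pairs")
    # -> 76.8 Tb/s per fiber and on the order of 15,000 Tb/s in total.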



This is a point where the scale of Meta’s Hyperion, and other large AI data centers, can be deceptive. It seems to imply the physical size of a single data center is what matters. But rather than being a single building, Hyperion is actually a set of buildings connected by high-speed fiber-optics.

“Interconnecting data centers is absolutely essential,” says Bieberich. “You could think about it as one logical AI training facility, but with geographically distributed facilities.” Nvidia has taken to calling this “scale across,” to contrast it with the idea that data centers must “scale up” to larger singular buildings.

The Big but Hazy Future

The full scale of the challenges that face Hyperion, and other future AI data centers of similar scale, remains hazy. Nvidia has yet to introduce the rack-scale AI GPU systems Hyperion will host. How much power will they demand? What type of cooling will they require? How much bandwidth must be provided? These can only be estimated.

In the absence of details, the gravity of AI data-center design is pulled toward one certainty: It must be big. New data-center designers are rewriting their rule book to handle power, cooling, and network infrastructure at a scale that would’ve seemed ridiculous five years ago.

This innovation is fueled by big tech’s fat wallet, which shelled out tens of billions of dollars in 2025 alone, leading to questions about whether the spending is sustainable. For the engineers in the trenches of data-center design, though, it’s viewed as an opportunity to make the impossible possible.

“I tell my engineers, this is peak. We’re being engineers. We’re being asked complicated questions,” says Stantec’s Carter. “We haven’t got to do that in a long time.”

This article appears in the April 2026 print issue.

The Coming Drone-War Inflection in Ukraine

2026-03-24 21:00:05



WHEN KYIV-BORN ENGINEER Yaroslav Azhnyuk thinks about the future, his mind conjures up dystopian images. He talks about “swarms of autonomous drones carrying other autonomous drones to protect them against autonomous drones, which are trying to intercept them, controlled by AI agents overseen by a human general somewhere.” He also imagines flotillas of autonomous submarines, each carrying hundreds of drones, suddenly emerging off the coast of California or Great Britain and discharging their cargoes en masse to the sky.

“How do you protect from that?” he asks as we speak in late December 2025; me at my quiet home office in London, he in Kyiv, which is bracing for another wave of missile attacks.

Azhnyuk is not an alarmist. He cofounded and was formerly CEO of Petcube, a California-based company that uses smart cameras and an app to let pet owners keep an eye on their beloved creatures left alone at home. A self-described “liberal guy who didn’t even receive military training,” Azhnyuk changed his mind about developing military tech in the months following the Russian invasion of Ukraine in February 2022. By 2023, he had relinquished his CEO role at Petcube to do what many Ukrainian technologists have done—to help defend his country against a mightier aggressor.

It took a while for him to figure out what, exactly, he should be doing. He didn’t join the military, but through friends on the front line, he witnessed how, out of desperation, Ukrainian troops turned to off-the-shelf consumer drones to make up for their country’s lack of artillery.

Ukrainian troops first began using drones for battlefield surveillance, but within a few months they figured out how to strap explosives onto them and turn them into effective, low-cost killing machines. Little did they know they were fomenting a revolution in warfare.


The Ukrainian robotics company The Fourth Law produces an autonomy module [above] that uses optics and AI to guide a drone to its target. Yaroslav Azhnyuk [top, in light shirt], founder and CEO of The Fourth Law, describes a developmental drone with autonomous capabilities to Ukrainian President Volodymyr Zelenskyy and German Chancellor Olaf Scholz. Top: THE PRESIDENTIAL OFFICE OF UKRAINE; Bottom: THE FOURTH LAW

That revolution was on display last month, as the U.S. and Israel went to war with Iran. It soon became clear that attack drones were being used extensively by both sides. Iran, for example, is relying heavily on the Shahed drones that the country invented and that are now also being manufactured in Russia and launched by the thousands every month against Ukraine.

A thorough analysis of the Middle East conflict will take some time to emerge. And so to understand the direction of this new way of war, look to Ukraine, where its next phase—autonomy—is already starting to come into view. Outnumbered by the Russians and facing increasingly sophisticated jamming and spoofing aimed at causing the drones to veer off course or fall out of the sky, Ukrainian technologists realized as early as 2023 that what could really win the war was autonomy. Autonomous operation means a drone isn’t being flown by a remote pilot, and therefore there’s no communications link to that pilot that can be severed or spoofed, rendering the drone useless.

By late 2023, Azhnyuk set out to help make that vision a reality. He founded two companies, The Fourth Law and Odd Systems, the first to develop AI algorithms to help drones overcome jamming during final approach, the second to build thermal cameras to help those drones better sense their surroundings.

“I moved from making devices that throw treats to dogs to making devices that throw explosives on Russian occupants,” Azhnyuk quips.

Since then, The Fourth Law has dispatched “more than thousands” of autonomy modules to troops in eastern Ukraine (it declines to give a more specific figure), which can be retrofitted on existing drones to take over navigation during the final approach to the target. Azhnyuk says the autonomy modules, which cost around US $50, increase the drone-strike success rate to as much as four times that of purely operator-controlled drones.

And that is just the beginning. Azhnyuk is one of thousands of developers, including some who relocated from Western countries, who are applying their skills and other resources to advancing the drone technology that is the defining characteristic of the war in Ukraine. This eclectic group of startups and founders includes Eric Schmidt, the former Google CEO, whose company Swift Beat is churning out autonomous drones and modules for Ukrainian forces. The frenetic pace of tech development is helping a scrappy, innovative underdog hold at bay a much larger and better-equipped foe.

All of this development is careening toward AI-based systems that enable drones to navigate by recognizing features in the terrain, lock on to and chase targets without an operator’s guidance, and eventually exchange information with each other through mesh networks, forming self-organizing robotic kamikaze swarms. Such an attack swarm would be commanded by a single operator from a safe distance.

According to some reports, autonomous swarming technology is also being developed for sea drones. Ukraine has had some notable successes with sea drones, which have reportedly destroyed or damaged around a dozen Russian vessels.

The Skynode X system, from Auterion, provides a degree of autonomy to a drone. AUTERION

For Ukraine, swarming can solve a major problem that puts the nation at a disadvantage against Russia—the lack of personnel. Autonomy is “the single most impactful defense technology of this century,” says Azhnyuk. “The moment this happens, you shift from a manpower challenge to a production challenge, which is much more manageable,” he adds.

The autonomous warfare future envisioned by Azhnyuk and others is not yet a reality. But Marc Lange, a German defense analyst and business strategist, believes that “an inflection point” is already in view. Beyond it, “things will be so dramatically different,” he says.

“Ukraine pretty rapidly realized that if the operator-to-drone ratio can be shifted from one-to-one to one-to-many, that creates great economies of scale and an amazing cost exchange ratio,” Lange adds. “The moment one operator can launch 100, 50, or even just 20 drones at once, this completely changes the economics of the war.”

Drones With a View

For a while, jammers that sever the radio links between drones and operators or that spoof GPS receivers were able to provide fairly reliable defense against human-controlled first-person-view attack drones (FPVs). But as autonomous navigation progressed, those electronic shields have gradually become less effective. Defenders must now contend with unjammable drones—ones that are attached to hair-thin optical fibers or that are capable of finding their way to their targets without external guidance. In this emerging struggle, the defenders’ track records aren’t very encouraging: The typical countermeasure is to try to shoot down the attacking drone with a service weapon. It’s rarely successful.

A truck outfitted with signal-jamming gear drives under antidrone nets near Oleksandriya, in eastern Ukraine, on 2 October 2025. ED JONES/AFP/GETTY IMAGES

“The attackers gain an immense advantage from unmanned systems,” says Lange. “You can have a drone pop up from anywhere and it can wreak havoc. But from autonomy, they gain even more.”

The self-navigating drones rely on image-recognition algorithms that have been around for over a decade, says Lange. And the mass deployments of drones on Ukrainian battlefields are enabling both Russian and Ukrainian technologists to create huge datasets that improve the training and precision of those AI algorithms.

A Ukrainian land robot, the Ravlyk, can be outfitted with a machine gun.

While uncrewed aerial vehicles (UAVs) have received the most attention, the Ukrainian military is also deploying dozens of different kinds of drones on land and sea. Ukraine, struggling with the shortage of infantry personnel, began working on replacing a portion of human soldiers with wheeled ground robots in 2024. As of early 2026, thousands of ground robots are crawling across the gray zone along the front line in Eastern Ukraine. Most are used to deliver supplies to the front line or to help evacuate the wounded, but some “killer” ground robots fitted with turrets and remotely controlled machine guns have also been tested.

In mid-February, Ukrainian authorities released a video of a Ukrainian ground robot using its thermal camera to detect a Russian soldier in the dark of the night and then kill the invader with a round from a heavy machine gun. So far these robots are mostly controlled by a human operator, but the makers of these uncrewed ground vehicles say their systems are capable of basic autonomous operations, such as returning to base when radio connection is lost. The goal is to enable them to swarm so that one operator controls not one, but a whole herd of mesh-connected killer robots.

But Bryan Clark, senior fellow and director of the Center for Defense Concepts and Technology at the Hudson Institute, questions how quickly ground robots’ abilities can progress. “Ground environments are very difficult to navigate in because of the terrain you have to address,” he says. “The line of sight for the sensors on the ground vehicles is really constrained because of terrain, whereas an air vehicle can see everything around it.”

To achieve autonomy, maritime drones, too, will require navigational approaches beyond AI-based image recognition, possibly based on star positions or electronic signals from radios and cell towers that are within reach, says Clark. Such technologies are still being developed or are in a relatively early operational stage.

How the Shaheds Got Better

Russia is not lagging behind. In fact, some analysts believe its autonomous systems may be slightly ahead of Ukraine’s. For a good example of the Russian military’s rapid evolution, they say, consider the long-range Iranian-designed Shahed drones. Since 2022, Russia has been using them to attack Ukrainian cities and other targets hundreds of kilometers from the front line. “At the beginning, Shaheds just had a frame, a motor, and an inertial navigation system,” Oleksii Solntsev, CEO of Ukrainian defense tech startup MaXon Systems, tells me. “They used to be imprecise and pretty stupid. But they are becoming more and more autonomous.” Solntsev founded MaXon Systems in late 2024 to help protect Ukrainian civilians from the growing threat of Shahed raids.

A Russian Geran-2 drone, based on the Iranian Shahed-136, flies over Kyiv during an attack on 27 December 2025. SERGEI SUPINSKY/AFP/GETTY IMAGES

First produced in Iran in the 2010s, Shaheds can carry 90-kilogram warheads up to 650 km (50-kg warheads can go twice as far). They cost around $35,000 per unit, compared to a couple of million dollars, at least, for a ballistic missile. The low cost allows Russia to manufacture Shaheds in high quantities, unleashing entire fleets onto Ukrainian cities and infrastructure almost every night.

The early Shaheds were able to reach a preprogrammed location based on satellite-navigation coordinates. Even these early models could frequently overcome the jamming of satellite-navigation signals with the help of an onboard inertial navigation unit. This was essentially a dead-reckoning system of accelerometers and gyroscopes that estimates the drone’s position from continual measurements of its motion.
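In code, the principle is just double integration. The toy sketch below works in one dimension and leaves out the gyroscopes, sensor fusion, and drift correction a real inertial unit needs:

    # A one-dimensional toy; real inertial units fuse gyroscope data, run in
    # three axes, and constantly correct for sensor bias and drift.
    def dead_reckon(accel_samples, dt):
        """Estimate position from accelerometer readings (m/s^2) taken every dt seconds."""
        velocity, position = 0.0, 0.0
        for a in accel_samples:
            velocity += a * dt           # integrate acceleration into velocity
            position += velocity * dt    # integrate velocity into position
        return position

    # One minute of flight sampled at 10 Hz: 10 s of gentle acceleration, then cruise.
    samples = [0.5] * 100 + [0.0] * 500
    print(f"Estimated distance traveled: {dead_reckon(samples, dt=0.1):.0f} m")
    # Tiny sensor biases accumulate through both integrations, which is why
    # purely inertial guidance drifts over long flights without outside fixes.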

In the Donetsk Region, on 15 August 2025, a Ukrainian soldier hunts for Shaheds and other drones with a thermal-imaging system attached to a ZU-23 23-millimeter antiaircraft gun. KOSTYANTYN LIBEROV/LIBKOS/GETTY IMAGES

Ukrainian defense forces learned to down Shaheds with heavy machine guns, but as Russia continued to innovate, the daily onslaughts started to become increasingly effective.

Today’s Shaheds fly faster and higher, and therefore are more difficult to detect and take down. Between January 2024 and August 2025, the number of Shaheds and Shahed-type attack drones launched by Russia into Ukraine per month increased more than tenfold, from 334 to more than 4,000. In 2025, Ukraine found AI-enabling Nvidia chipsets in the wreckage of Shaheds, as well as thermal-vision modules capable of locking onto targets at night.

“Now, they are interconnected, which allows them to exchange information with each other,” Solntsev says. “They also have cameras that allow them to autonomously navigate to objects. Soon they will be able to tell each other to avoid a jammed region or an area where one of them got intercepted.”

These Russian-manufactured Shaheds, which Russian forces call Geran-2s, are thought to be more capable than the garden variety Shahed-136s that Iran has lately been launching against targets throughout the Middle East. Even the relatively primitive Shahed-136s have done considerable damage, according to press accounts.

Those Shahed successes may accrue, at least in part, from the fact that the United States and Israel lack Ukraine’s long experience with fending them off. In just two days in early March, upward of a thousand drones, mostly Shaheds, were launched against U.S. and Israeli targets, with hundreds of them reportedly finding their marks.

One attack, caught on video, shows a Shahed destroying a radar dome at the U.S. Navy base in Manama, Bahrain. U.S. forces were understood to be attempting to fend off the drones by striking launch platforms, by dispatching fighter aircraft to shoot them down, and by using some extremely costly air-defense interceptors, including ones meant to down ballistic missiles. On 4 March, CNN reported that in a congressional briefing the day before, top U.S. defense officials, including Secretary of Defense Pete Hegseth, acknowledged that U.S. air defenses weren’t keeping up with the onslaught of Shahed drones.

Russian V2U attack drones are outfitted with Nvidia processors and run computer-vision software and AI algorithms to enable the drones to navigate autonomously. GUR OF THE MINISTRY OF DEFENSE OF UKRAINE

Russia is also starting to field a newer generation of attack drones. One of these, the V2U, has been used to strike targets in the Sumy region of northeastern Ukraine. The V2U drones are outfitted with Nvidia Jetson Orin processors and run computer-vision software and AI algorithms that allow the drones to navigate even where satellite navigation is jammed.

The sale of Nvidia chips to Russia is banned under U.S. sanctions against the country. However, press reports suggest that the chips are getting to Russia via intermediaries in India.

Antidrone Systems Step Up

MaXon Systems is one of several companies working to fend off the nightly drone onslaught. Within one year, the company developed and battle-tested a Shahed interception system that hints at the sci-fi future envisioned by Azhnyuk. For a system to be capable of reliably defending against autonomous weaponry, it, too, needs to be autonomous.

MaXon’s solution consists of ground turrets scanning the sky with infrared sensors, with additional input from a network of radars that detects approaching Shahed drones at distances of, typically, 12 to 16 km. The turrets fire autonomous fixed-winged interceptor drones, fitted with explosive warheads, toward the approaching Shaheds at speeds of nearly 300 km/h. To boost the chances of successful interception, MaXon is also fielding an airborne anti-Shahed fortification system consisting of helium-filled aerostats hovering above the city that dispatch the interceptors from a higher altitude.
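The timing explains the push for automation. The rough head-on sketch below uses the detection range and interceptor speed cited above; the Shahed cruise speed of roughly 180 km/h is an outside assumption not given in this article, and real engagements are messier:

    # Detection range and interceptor speed come from the text above; the
    # Shahed cruise speed is an outside assumption.
    DETECTION_RANGE_KM = 14        # midpoint of the 12-to-16 km detection range
    SHAHED_SPEED_KMH = 180         # assumed cruise speed
    INTERCEPTOR_SPEED_KMH = 300

    closing_speed_kmh = SHAHED_SPEED_KMH + INTERCEPTOR_SPEED_KMH  # head-on closure
    minutes_to_intercept = DETECTION_RANGE_KM / closing_speed_kmh * 60
    minutes_to_overflight = DETECTION_RANGE_KM / SHAHED_SPEED_KMH * 60
    print(f"Head-on intercept in ~{minutes_to_intercept:.1f} minutes")
    print(f"Shahed overflies the launch site in ~{minutes_to_overflight:.1f} minutes")
    # With under five minutes from first detection to overflight, there is
    # little time for a human to handle detection, launch, and mid-course guidance.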

“We are trying to increase the level of automation of the system compared to existing solutions,” says Solntsev. “We need automatic detection, automatic takeoff, and automatic mid-track guidance so that we can guide the interceptor before it can itself lock onto the target.”

An interceptor drone, part of the U.S. MEROPS defensive system, is tested in Poland on 18 November 2025. WOJTEK RADWANSKI/AFP/GETTY IMAGES

In November 2025, the Ukrainian military announced it had been conducting successful trials of the Merops Shahed drone interceptor system developed by the U.S. startup Project Eagle, another of former Google CEO Eric Schmidt’s Ukraine defense ventures. Like the MaXon gear, the system can operate largely autonomously and has so far downed over 1,000 Shaheds.

What Works in the Lab Doesn’t Necessarily Fly on the Battlefield

Despite the progress on both sides, analysts say that the kind of robotic warfare imagined by Azhnyuk won’t be a reality for years.

“The software for drone collaboration is there,” says Kate Bondar, a former policy advisor for the Ukrainian government and currently a research fellow at the U.S. Center for Strategic and International Studies. “Drones can fly in labs, but in real life, [the forces] are afraid to deploy them because the risk of a mistake is too high,” she adds.

Ukrainian soldiers watch a GOR reconnaissance drone take to the sky near Pokrovsk, in the Donetsk region, on 10 March 2025. ANDRIY DUBCHAK/FRONTLINER/GETTY IMAGES

In Bondar’s view, powerful AI-equipped drones won’t be deployed in large numbers given the current prices for high-end processors and other advanced components. And, she adds, the more autonomous the system needs to be, the more expensive are the processors and sensors it must have. “For these cheap attack drones that fly only once, you don’t install a high-resolution camera that [has] the resolution for AI to see properly,” she says. “[You install] the cheapest camera. You don’t want expensive chips that can run AI algorithms either. Until we can achieve this balance of technological sophistication, when a system can conduct a mission but at the lowest price possible, it won’t be deployed en masse.”

While existing AI systems are doing a good job recognizing and following large objects like Shaheds or tanks, experts question their ability to reliably distinguish and pursue smaller and more nimble or inconspicuous targets. “When we’re getting into more specific questions, like can it distinguish a Russian soldier from a Ukrainian soldier or at least a soldier from a civilian? The answer is no,” says Bondar. “Also, it’s one thing to track a tank, and it’s another to track infantrymen riding buggies and motorcycles that are moving very fast. That’s really challenging for AI to track and strike precisely.”

Clark, at the Hudson Institute, says that although the AI algorithms used to guide the Russian and Ukrainian drones are “pretty good,” they rely on information provided by sensors that “aren’t good enough.” “You need multiphenomenology sensors that are able to look at infrared and visual and, in some cases, different parts of the infrared spectrum to be able to figure out if something is a decoy or real target,” he says.

German defense analyst Lange agrees that right now, battlefield AI image-recognition systems are too easily fooled. “If you compress reality into a 2D image, a lot of things can be easily camouflaged—like what Russia did recently, when they started drawing birds on the back of their drones,” he says.

Autonomy Remains Elusive on the Ground and at Sea, Too

To make Ukraine’s emerging uncrewed ground vehicles (UGVs) equally self-sufficient will be an even greater task, in Clark’s view. Still, Bondar expects major advances to materialize within the next several years, even if humans are still going to be part of the decision-making loop.

A mobile electronic-warfare system built by PiranhaTech is demonstrated near Kyiv on 21 October 2025. DANYLO ANTONIUK/ANADOLU/GETTY IMAGES

“I think in two or three years, we will have pretty good full autonomy, at least in good weather conditions,” she says, referring to aerial drones in particular. “Humans will still be in the loop for some years, simply because there are so many unpredictable situations when you need an intervention. We won’t be able to fully rely on the machine for at least another 10 or 15 years.”

Ukrainian defenders are apprehensive about that autonomous future. The boom of drone innovation has come hand in hand with the development of sophisticated jamming and radio-frequency detection systems. But a lot of that innovation will become obsolete once the pendulum swings away from human control. Ukrainians got their first taste of dealing with unjammable drones in mid-2024, when Russia began rolling out fiber-optic tethered drones. Now they have to brace for a threat on a much larger scale.

An experimental drone is demonstrated at the Brave1 defense-tech incubator in Kyiv. DANYLO DUBCHAK/FRONTLINER/GETTY IMAGES

“Today, we have a situation where we have lots of signals on the battlefield, but in the near future, in maybe two to five years, UAVs are not going to be sending any signals,” says Oleksandr Barabash, CTO of Falcons, a Ukrainian startup that has developed a smart radio-frequency detection system capable of revealing precise locations of enemy radio sources such as drones, control stations, and jammers.

Last September, Falcons secured funding from the U.S.-based dual-use tech fund Green Flag Ventures to scale production of its technology and work toward NATO certification. But Barabash admits that its system, like all technologies fielded in Ukrainian war zones, has an expiration date. Instead of radio-frequency detectors, Barabash thinks, the next R&D push needs to focus on passive radar systems capable of identifying small and fast-moving targets based on signals from sources like TV towers or radio transmitters, which propagate through the environment and are reflected by those moving targets. Passive radars have a significant advantage in the war zone, according to Barabash. Since they don’t emit their own signal, they can’t be that easily discovered by the enemy.

“Active radar is emitting signals, so if you are using active radars, you are target No. 1 on the front line,” Barabash says.

Bondar, on the other hand, thinks that the increased onboard compute power needed for AI-controlled drones will, by itself, generate enough electromagnetic radiation to prevent autonomous drones from ever operating completely undetectably.

“You can have full autonomy, but you will still have systems onboard that emit electromagnetic radiation or heat that can be detected,” says Bondar. “Batteries emit electromagnetic radiation, motors emit heat, and [that heat can be] visible in infrared from far away. You just need to have the right sensors to be able to identify it in advance.” She adds that the takeaway is “how capable contemporary detection systems have become and how technically challenging it is to design drones that can reliably operate in the Ukrainian battlefield environment.”

There Will Be Nowhere to Hide from Autonomous Drones

When autonomous drones become a standard weapon of war, their threat will extend far beyond the battlefields of Ukraine. Autonomous turrets and drone-interceptor fortifications might soon dot the perimeter of European cities, particularly in the eastern part of the continent.

A fixed-wing drone is tested in Ukraine in April 2025. ANDREW KRAVCHENKO/BLOOMBERG/GETTY IMAGES

Nefarious actors from all over the world have closely watched Ukraine and taken notes, warns Lange. Today, FPV drones are being used by Islamic terrorists in Africa and by Mexican drug cartels to fight against local authorities.

When autonomous killing machines become widely available, it’s likely that no city will be safe. “We might see nets above city centers, protecting civilian streets,” Lange says. “In every case, the West needs to start performing similar kinetic-defense development that we see in Ukraine. Very rapid iteration and testing cycles to find solutions.”

Azhnyuk is concerned that the historic defenders of Europe—the United States and the European countries themselves—are falling behind. “We are in danger,” he says. While Russia and Ukraine made major strides in their drones and countermeasures over the past year, “Europe and the United States have progressed, in the best-case scenario, from the winter-of-2022 technology to the summer-of-2022 technology.

“The gap is getting wider,” he warns. “I think the next few years are very dangerous for the security of Europe.”

This article appears in the April 2026 print issue as “Rise of the AUTONOMOUS Attack Drones.”

Remembering IEEE Power & Energy Society Leader Mel Olken

2026-03-24 02:00:05



Mel Olken

Former executive director of the IEEE Power & Energy Society

Fellow, 92; died 9 January

Olken became the first executive director of the IEEE Power & Energy Society (PES) in 1995. In 2002 he left the position to serve as founding editor in chief of the society’s Power & Energy Magazine. Olken led the publication until 2016, when he retired.

After receiving a bachelor’s degree in engineering from the City College of New York, Olken was hired as an electrical engineer by American Electric Power, a utility based in Columbus, Ohio. He helped design coal, hydroelectric, and nuclear power plants. While at AEP, he was promoted to manager of the electrical generation department.

He joined IEEE in 1958 and became a PES member in 1973. An active volunteer, he chaired the society’s energy development and power generation committee and its technical council.

Olken was elected an IEEE Fellow in 1988 for “contributions to innovative design of reliable generating stations.”

He became an IEEE staff member in 1984 as society services director for IEEE Technical Activities. From 1990 to 1995 he served as managing director of the Regional Activities group (now IEEE Member and Geographic Activities), before becoming PES executive director.

He received a PES Lifetime Achievement Award in 2012 for his “broad and sustained technical contributions to the development of power engineering and the power engineering profession.”

Stephanie A. Huguenin

Research scientist

IEEE member, 48; died 1 October

Huguenin was an administrative assistant in the physics and biophysics department at Augusta University, in Georgia. According to her Augusta obituary, she died of an illness acquired during her volunteer work in India.

She received a bachelor’s degree in engineering in 1999 from the College of Charleston, in South Carolina. During her senior year, she worked as a mathematics and science tutor at the Jenkins Orphanage (now the Jenkins Institute for Children), in North Charleston. After graduating, Huguenin traveled to India to volunteer at an orphanage run by the Mother Teresa Foundation.

Upon returning to the United States in 2001, Huguenin worked as a freelance research consultant. Three years later she was hired as a systems administrator and archivist by photographer Ebet Roberts in New York City. In 2010 she left to work as an operations strategist and technical consultant.

She earned a master’s degree in communication and research science in 2016 from New York University. While at NYU, she conducted experimental and theoretical research in Internet Protocol design and implementation as well as network security and management.

From 2020 to 2024 she was a research scientist at businesses owned by her family. She joined Augusta University in 2023.

She was a member of the IEEE Geoscience and Remote Sensing Society and the IEEE Systems Council.

Huguenin volunteered for the Internet Engineering Task Force, a standards development organization, and the American Registry for Internet Numbers. ARIN manages and distributes internet number resources such as IP addresses and autonomous system numbers.

The nonprofits she supported included the Coastal Conservation League, the Longleaf Alliance, the Lowcountry Land Trust, the Nature Conservancy, and Women in Defense.

Transforming Data Science With NVIDIA RTX PRO 6000 Blackwell Workstation Edition

2026-03-23 21:00:04



This is a sponsored article brought to you by PNY Technologies.

In today’s data-driven world, data scientists face mounting challenges in preparing, scaling, and processing massive datasets. Traditional CPU-based systems are no longer sufficient to meet the demands of modern AI and analytics workflows. NVIDIA RTX PRO™ 6000 Blackwell Workstation Edition offers a transformative solution, delivering accelerated computing performance and seamless integration into enterprise environments.

Key Challenges for Data Science

  • Data Preparation: Data preparation is a complex, time-consuming process that takes most of a data scientist’s time.
  • Scaling: Volume of data is growing at a rapid pace. Data scientists may resort to downsampling to make large datasets more manageable, leading to suboptimal results.
  • Hardware: Demand for accelerated AI hardware for data centers and cloud service providers (CSPs) is exceeding supply. Current desktop computing resources may not be suitable for data science workflows.

Benefits of RTX PRO-Powered AI Workstations

NVIDIA RTX PRO 6000 Blackwell Workstation Edition delivers ultimate acceleration for data science and AI workflows. These powerful and robust workstations enable real-time rendering, rapid prototyping, and seamless collaboration. With support for up to four NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPUs, users can achieve data center-level performance right at their desk, making even the most demanding tasks manageable.

PNY is redefining professional computing with the NVIDIA RTX PRO 6000 Blackwell Workstation Edition, the most powerful desktop GPU ever built. Engineered for unmatched compute power, massive memory capacity, and breakthrough performance, this cutting-edge solution delivers a quantum leap forward in workflow efficiency, enabling professionals to tackle the most demanding applications with ease. PNY

NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to handle massive datasets, perform advanced visualizations, and support multi-user environments without compromise. It’s ideal for organizations scaling up their analytics or running complex models. NVIDIA RTX PRO 6000 Blackwell Workstation Edition is optimized for AI workflows, leveraging the NVIDIA AI software stack, including CUDA-X, and NVIDIA Enterprise software. These platforms enable zero-code-change acceleration for Python-based workflows and support over 100 AI-powered applications, streamlining everything from data preparation to model deployment.

Finally, NVIDIA RTX PRO 6000 Blackwell Workstation Edition offers significant advantages in security and cost control. By offloading compute from the data center and reducing reliance on cloud resources, organizations can lower expenses and keep sensitive data on-premises for enhanced protection.

Accelerate Every Step of Your Workflow

NVIDIA RTX PRO 6000 Blackwell Workstation Edition is designed to transform the entire data science pipeline, delivering end-to-end acceleration from data preparation to model deployment. With the NVIDIA CUDA-X open-source cuDF data science library and other GPU-accelerated libraries, data scientists can process massive datasets at lightning speed, often achieving up to 50X faster performance compared to traditional CPU-based tools. This means tasks like cleaning data, managing missing values, and engineering features can be completed in seconds, not hours, allowing teams to focus on extracting insights and building better models.


Exploratory data analysis is elevated with advanced analytics and interactive visualizations, powered by NVIDIA CUDA-X and PyData libraries. These tools enable users to create expansive, responsive visualizations that enhance understanding and support critical decision-making. When it comes to model training, GPU-accelerated XGBoost slashes training times from weeks to minutes, enabling rapid iteration and faster time to market for AI solutions.
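As a concrete illustration of the GPU training path, the sketch below trains an XGBoost model on synthetic data with the device set to CUDA. It assumes XGBoost 2.0 or later and an available NVIDIA GPU, and actual speedups will vary with data size and hardware:

    # Synthetic data and generic parameters, for illustration only.
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1_000_000, 50)).astype(np.float32)
    y = (2.0 * X[:, 0] + rng.standard_normal(1_000_000)).astype(np.float32)

    model = xgb.XGBRegressor(
        n_estimators=200,
        tree_method="hist",   # histogram-based tree construction
        device="cuda",        # run training on the GPU
    )
    model.fit(X, y)
    print(model.predict(X[:5]))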

NVIDIA RTX PRO 6000 Blackwell Workstation Edition streamlines collaboration and scalability. With NVIDIA AI Workbench, teams can set up projects, develop, and collaborate seamlessly across desktops, cloud platforms, and data centers. The unified software stack ensures compatibility and robustness, while enterprise-grade hardware maximizes uptime and reliability for demanding workflows.

By integrating these advanced capabilities, NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to overcome bottlenecks, boost productivity, and drive innovation, making it an essential foundation for modern, enterprise-ready AI development.

Performance Benchmarks

NVIDIA’s cuDF library offers zero-code-change acceleration for pandas, delivering up to 50X performance gains. For example, a join operation that takes nearly 5 minutes on CPU completes in just 14 seconds on GPU. Advanced group-by operations drop from almost 4 minutes to just 4 seconds.
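The zero-code-change path works by enabling the cuDF accelerator before pandas is imported: supported operations then run on the GPU, and anything unsupported falls back to CPU pandas. The sketch below uses synthetic data, so actual timings will differ from the benchmark figures above; it requires the RAPIDS cuDF package and an NVIDIA GPU:

    # Minimal sketch of cuDF's pandas accelerator mode; data here is synthetic.
    import cudf.pandas
    cudf.pandas.install()          # enable the accelerator before importing pandas

    import numpy as np
    import pandas as pd            # now transparently GPU-accelerated where supported

    n = 10_000_000
    left = pd.DataFrame({"key": np.random.randint(0, 1_000_000, n),
                         "value": np.random.rand(n)})
    right = pd.DataFrame({"key": np.arange(1_000_000),
                          "weight": np.random.rand(1_000_000)})

    joined = left.merge(right, on="key")             # a join like the one benchmarked above
    summary = joined.groupby("key")["value"].mean()  # an aggregation like the group-by above
    print(summary.head())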

Enterprise-Ready Solutions from PNY


Available from leading OEM manufacturers, NVIDIA RTX PRO 6000 Blackwell Workstation Edition Series GPUs are specifically engineered to meet the rigorous demands of enterprise environments. These systems incorporate NVIDIA ConnectX networking, now available at PNY, and a comprehensive suite of deployment and support tools, ensuring seamless integration with existing IT infrastructure.

Designed for scalability, the latest generation of workstations can tackle complex AI development workflows at scale for training, development, or inferencing. Enterprise-grade hardware maximizes uptime and reliability.

To learn more about NVIDIA RTX PRO™ Blackwell solutions, visit: NVIDIA RTX PRO Blackwell | PNY Pro | pny.com or email [email protected]

Why Thermal Metrology Must Evolve for Next-Generation Semiconductors

2026-03-23 18:00:04



An in-depth examination of how rising power density, 3D integration, and novel materials are outpacing legacy thermal measurement — and what advanced metrology must deliver.

What Attendees Will Learn

  1. Why heat is now the dominant constraint on semiconductor scaling — Explore how heterogeneous integration, 3D stacking, and AI-driven power density have shifted the primary bottleneck from lithography to thermal management, with heat flux projections exceeding 1,000 W/cm² for next-generation accelerators.
  2. How extreme material properties are redefining thermal design requirements — Understand the measurement challenges posed by nanoscale thin films where bulk assumptions fail, engineered ultra-high-conductivity materials (diamond, BAs, BNNTs), and devices operating above 200 °C in wide-bandgap systems.
  3. Why interfaces and buried layers now govern reliability — Examine how thermal boundary resistance at bonded interfaces, TIM layers, and dielectric stacks has become a first-order reliability accelerator.
  4. What a thermal-first design workflow looks like in practice — Learn how measured, scale-appropriate thermal properties can be integrated early in the design cycle to calibrate models, reduce uncertainty, and prevent costly late-stage failures across advanced packaging and 3D architectures.