2025-12-11 23:00:02

Ghost Robotics is today announcing a major upgrade for their Vision 60 quadruped: an arm. Ghost, a company which originated at the GRASP Lab at the University of Pennsylvania, specializes in exceptionally rugged quadrupeds, and while many of its customers use its robots for public safety and disaster relief, it also provides robots to the United States military, which has very specific needs when it comes to keeping humans out of danger.
In that context, it’s not unreasonable to assume that Ghost’s robots may sometimes be used to carry weapons, and despite the proliferation of robots in many roles in the Ukraine war, the idea of a legged robot carrying a weapon is not a comfortable one for many people. IEEE Spectrum spoke with Ghost co-founder and current CEO Gavin Kenneally to learn more about the new arm, and to get his perspective on selling robots to the military.
The Vision 60’s new arm has six degrees of freedom. Ghost Robotics
Ghost Robotics initially made a name for itself with its very impressive early work with the Minitaur direct-drive quadruped in 2016. The company also made headlines in late 2021, when a now-deleted post on Twitter (now X) went viral because it included a photograph of one of Ghost’s Vision 60 quadrupeds with a rifle mounted on its back.
That picture resulted in a very strong reaction, although as IEEE Spectrum reported at the time, robots with guns affixed to them weren’t new: To mention one early example, the U.S. military had already deployed weapons on mobile robots in Iraq in 2007. And while several legged robot companies pledged in 2022 not to weaponize their general purpose robots, the Chinese military in 2024 displayed quadrupeds from Unitree equipped with guns. (Unitree, based in China, was one of the signers of the 2022 pledge.)
The issue of weaponized robots goes far beyond Ghost Robotics, and far beyond robots with legs. We’ve covered both the practical and ethical perspectives on this extensively at IEEE Spectrum, and the intensity of the debates shows that there is no easy answer. But to summarize one important point made by some ethicists, some military experts, and Ghost Robotics itself: robots are replaceable, humans are not. “Customers use our robots to keep people out of harm’s way,” Ghost CEO Kenneally tells Spectrum.
It’s also worth pointing out that even the companies that signed the pledge not to weaponize their general purpose robots acknowledge that military robots exist, and accept them, provided that such robots are used under existing legal doctrines and operate within those safeguards. In their view, what constraints should or should not be imposed on these kinds of robots is best decided by policymakers rather than industry.
This is essentially Ghost Robotics’ position as well, says Kenneally. “We sell our robots to U.S. and allied governments, and as part of that, the robots are used in defense applications where they will sometimes be weaponized. What’s most critical to us is that the decisions about how to use these robots are happening systematically and ethically at the government policy level.”
To some extent, these decisions are already being made within the U.S. government. Department of Defense Directive 3000.09, ‘Autonomy in Weapon Systems,’ lays out the responsibilities and limitations for how autonomous or human-directed robotics weapons systems should be developed and deployed, including requirements for human use-of-force judgements. At least in the U.S., this directive implies that there are rules and accountability for robotic weapons.
Ghost sees its Vision 60 quadruped as a system that its trusted customers can use as they see fit, and the manipulator enables many additional capabilities. “The primary purpose of the robot has been as a sensor platform,” Kenneally says, “but sometimes there are doors in the way, or objects that need to be moved, or you might want the robot to take a sample. So the ability to do all of that mobile manipulation has been hugely valuable for our customers.”
As it turns out, arms are good for more than manipulation. “One thing that’s been very interesting is that our customers have been using the arm as a sensor boom, which is something that we hadn’t anticipated,” says Kenneally. Ghost’s robot has plenty of cameras, but they’re mostly at the viewpoint of a moderately-sized dog. The new arm offers a more human-like vantage and a way to peek around corners or over things without exposing the whole robot.
Ghost was not particularly interested in building its own arm, and tried off-the-shelf options to get the manipulation working. The manipulation worked fine; what didn’t work were any of those arms after the 50-kilogram robot rolled over on them. “We wanted to make sure that we could build an arm that could stand up to the same intense rigors of our customers’ operations that the rest of the robot can,” says Kenneally. “Morphologically, we actually consider the arm to be a fifth leg, so that the robot operates as a unified system for whole-body control.”
The rest of the robot is exceptionally rugged, which is what makes it appealing to customers with unique needs, like special forces teams. Enough battery life for more than three hours of walking (or more than 20 hours on standby) isn’t bad, and the Vision 60 is sealed against sand and dust, and can survive complete submergence in shallow water. It can operate in extreme temperatures ranging from -40 °C to 55 °C, which has been a particular challenge for robots. And if you do manage to put it in a situation where it physically breaks one of its legs, it’s easy to swap in a spare in just a few minutes, even out in the field.
The Vision 60 can open doors with high-level direction from a human operator. Ghost Robotics
Despite Ghost quietly selling over a thousand quadrupeds to date, Kenneally is cautious about the near future for legged robots. So is anyone who has seriously considered buying one: it’s impossible to ignore the option of simply buying a quadruped from a Chinese company at about a tenth the cost of one from a company based in the U.S. or Europe.
“China has identified legged robotics as a lynchpin technology that they are strategically funding,” Kenneally says. “I think it’s an extremely serious threat in the long term, and we have to take these competitors very seriously despite their current shortcomings.” There is a technological moat, for now, but if the market for legged robots follows the same trajectory as the market for drones did, that moat will shrink drastically over the next few years.
The United States is poised to ban consumer drone sales from Chinese manufacturer DJI, and banned DJI drone use by federal agencies in 2017. But it may be too late in some sense, as DJI’s global market share is something like 90 percent. Meanwhile, Unitree may have already cornered somewhere around 70 percent of the global market for quadrupeds, despite the recent publication of exploits that allow the robots to send unauthorized data to China.
In the United States in particular, private sector robotics funding is unpredictable at the best of times, and Kenneally argues that to compete with Chinese-subsidized robot makers, American companies like Ghost that build these robots domestically will need sustained U.S. government support, too. That doesn’t mean the government has to pick which companies will be the winners, but that it should find a way to support the U.S. robotics industry as a whole, if it still wants to have a meaningful one. “The quadruped industry isn’t a science project anymore,” says Kenneally. “It’s matured, and quadruped robots are going to become extremely important in both commercial and government applications. But it’s only through continued innovation that we’ll be able to stay ahead.”
2025-12-11 22:00:02

Even if a GPU in a data center needs only 700 watts to run a large language model, it may realistically draw 1,700 watts because of inefficiencies in how electricity reaches it. That’s a problem Peng Zou and his team at startup PowerLattice say they have solved by miniaturizing and repackaging high-voltage regulators.
The company claims that its new chiplets deliver up to a 50 percent reduction in power consumption and twice the performance per watt by sizing down the voltage conversion process and moving it significantly closer to processors.
Traditional systems deliver power to AI chips by converting AC power from the grid into DC power, which then gets transformed again into low-voltage (around one volt) DC usable by the GPU. Because power is the product of voltage and current, that voltage drop means the current must rise to deliver the same power.
This exchange happens near the processor, but the current still travels a meaningful distance in its low-voltage state. A high current traveling any distance is bad news, because the system loses power in the form of heat proportional to the current squared. “The closer you get to the processor, the less distance that the high current has to travel, and thus we can reduce the power loss,” says Hanh-Phuc Le, who researches power electronics at the University of California, San Diego and has no connection to PowerLattice.
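To see the scale of the problem, consider the classic resistive-loss formula, P = I²R. The Python sketch below uses invented numbers (a 700-watt load and a half-milliohm delivery path, neither from PowerLattice) to show why delivering the same power at 1 volt wastes vastly more energy than at 48 volts:

```python
# Toy illustration of resistive delivery loss: P_loss = I^2 * R.
# The 700 W load and path resistance are illustrative assumptions.

LOAD_POWER_W = 700.0  # power the GPU actually needs

def delivery_loss(load_w: float, volts: float, path_ohms: float) -> float:
    """Resistive loss in the delivery path at a given supply voltage."""
    current = load_w / volts         # I = P / V: lower voltage means higher current
    return current ** 2 * path_ohms  # heat dissipated in the path

# The same copper path, carrying the same power at two different voltages.
for volts in (48.0, 1.0):
    loss = delivery_loss(LOAD_POWER_W, volts, path_ohms=0.0005)
    print(f"{volts:4.0f} V rail: {loss:7.1f} W lost in the path")
# 48 V: ~0.1 W lost. 1 V: ~245 W lost over the same path, which is why
# the final 1 V hop must be as short as possible.
```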
Given the ever-growing power consumption of AI data centers, “this has almost become a show-stopping issue today,” PowerLattice’s Zou says.
Zou thinks he and his colleagues have found a way to avoid this huge loss of power. Instead of dropping the voltage a few centimeters away from the processor, they figured out how to do it millimeters away, within the processor’s package. PowerLattice designed tiny power delivery chiplets—shrinking inductors, voltage control circuits, and software-programmable logic into an IC about twice the size of a pencil eraser. The chiplets sit under the processor’s package substrate, to which they’re connected.
One challenge the minds at PowerLattice faced was how to make inductors smaller without altering their capabilities. Inductors temporarily store energy and then release it smoothly, helping regulators maintain steady outputs. Their physical size directly influences how much energy they can manage, so shrinking them weakens their effect.
The startup countered this issue by building their inductors from a specialized magnetic alloy that “enables us to run the inductor very efficiently at high frequency,” Zou says. “We can operate at a hundred times higher frequency than the traditional solution.” At higher operating frequencies, circuits can be designed to use an inductor with a much lower inductance, meaning the component itself can be made with less physical material. The alloy is unique because it maintains better magnetic properties than comparable materials at these high frequencies.
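The trade-off Zou describes is standard switching-converter math: for a fixed current ripple, the inductance a buck converter needs falls in inverse proportion to its switching frequency. A minimal sketch, with assumed voltages and ripple targets rather than PowerLattice’s actual design values:

```python
# Buck-converter inductor sizing: L = V_out * (1 - D) / (f_sw * dI),
# where D = V_out / V_in is the duty cycle and dI the allowed current ripple.
# All values below are illustrative assumptions.

def required_inductance(v_in: float, v_out: float, f_sw_hz: float, ripple_a: float) -> float:
    """Minimum inductance (henries) for a given switching frequency and ripple."""
    duty = v_out / v_in
    return v_out * (1 - duty) / (f_sw_hz * ripple_a)

v_in, v_out, ripple = 12.0, 1.0, 5.0      # 12 V in, ~1 V out, 5 A ripple (assumed)
for f_sw in (1e6, 100e6):                 # conventional vs. "100x higher" frequency
    henries = required_inductance(v_in, v_out, f_sw, ripple)
    print(f"{f_sw/1e6:5.0f} MHz -> {henries*1e9:7.1f} nH")
# 100x the frequency needs 1/100th the inductance, and thus far less magnetic
# material -- provided the core alloy stays efficient at that frequency.
```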
The resulting chiplets are less than 1/20th the area of today’s voltage regulators, Zou says. And each is only 100 micrometers thick, around the thickness of a strand of hair. Being so tiny allows the chiplets to fit as close as possible to the processor, and the space savings provide valuable real estate to other components.
PowerLattice’s chiplets would sit on the underside of a GPU’s package to provide power from below. PowerLattice
Even at that small size, the proprietary tech is “highly configurable and scalable,” Zou says. Customers can use multiple chiplets for a more comprehensive fix or fewer if their architecture doesn’t require it. “It’s one key differentiator” of PowerLattice’s solution to the voltage regulation problem, according to Zou.
Employing the chiplets can cut an operator’s power needs by 50 percent, effectively doubling performance, the company claims. But this number seems ambitious to Le. He says that 50 percent power savings “could be achievable, but that means PowerLattice has to have direct control of the load, which includes the processor as well.” The only way he sees it as realistic is if the company has the ability to manage power supply in real time depending on a processor’s workload—a technique called dynamic voltage and frequency scaling—which PowerLattice does not.
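For readers unfamiliar with it, DVFS exploits the first-order CMOS power model P ≈ C·V²·f: lowering voltage and frequency together cuts power superlinearly. The sketch below uses an invented capacitance and operating points, purely to illustrate the lever Le says PowerLattice lacks:

```python
# First-order CMOS dynamic-power model: P = C * V^2 * f.
# Capacitance and operating points are illustrative, not from any real chip.

def dynamic_power(c_farads: float, volts: float, freq_hz: float) -> float:
    return c_farads * volts ** 2 * freq_hz

C_EFF = 1e-9  # assumed effective switched capacitance
full = dynamic_power(C_EFF, volts=1.0, freq_hz=2.0e9)
scaled = dynamic_power(C_EFF, volts=0.8, freq_hz=1.5e9)  # lighter-load operating point
print(f"full: {full:.2f} W, scaled: {scaled:.2f} W ({scaled/full:.0%} of full power)")
# A 20% voltage cut plus a 25% frequency cut yields ~48% of full power for 75%
# of the clock rate -- the kind of load-following control Le is referring to.
```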
Right now, PowerLattice is in the midst of reliability and validation testing before it releases its first product to customers, in about two years. But bringing the chiplets to market won’t be straightforward because PowerLattice has some big-name competition. Intel, for example, is developing a Fully Integrated Voltage Regulator, a device partially devoted to solving the same problem.
Zou doesn’t consider Intel a competitor because, in addition to the products differing in their approaches to the power delivery problem, he does not believe Intel will be providing its technology to its competitors. “From a market position perspective, we are quite a bit different,” Zou says.
A decade ago, PowerLattice wouldn’t have room to succeed, Le says, because companies that sold processors only ensured reliability for their chips if customers purchased their power supplies as well. “Qualcomm, for example, can sell their processor chip and the vast majority of their customers also have to buy their proprietary Qualcomm power supply management chip because otherwise they would say, ‘We don’t guarantee the reliable operation of the whole system.’”
Now, though, there may be hope. “There’s a trend of what we call chiplet implementation, so it is a heterogeneous integration,” Le says. Customers are mixing and matching components from different companies to achieve better system optimization, he says.
And while notable providers like Intel and Qualcomm may continue to have the upper hand with major customers, smaller companies—mostly startups—building processors and AI infrastructure will also be power hungry. These groups will need to look for a power supply source, and that’s where PowerLattice and similar companies could come in, Le says. “That’s how the market is. We have a startup working with a startup doing something that actually rivals, and even competes with, some large companies.”
2025-12-11 21:23:32

On 18 November 1844, the Washington Chess Club challenged its counterparts in Baltimore to a match. Two teams were organized, and at 4 p.m. on 26 November, the first game commenced with three consulting members to a side. Washington began conventionally, pushing a pawn to the center of the board. Baltimore immediately responded by mirroring the move. But this was unlike any chess game ever played before. The Baltimoreans were still in Baltimore, the Washingtonians were still in Washington, D.C., 60 kilometers away, and they were playing by electrical telegraph.
Successive moves were transmitted over the new Baltimore–Washington telegraph line, the first in the United States, which Samuel Morse and company had inaugurated in May of that year with the message “What hath God wrought.”
Samuel F.B. Morse pushed for the first U.S. telegraph, which connected Washington, D.C., to Baltimore, Md. Mathew B. Brady/Library of Congress
One chess game led to another, and play continued on and off for days. Records of the games are incomplete and sometimes inconsistent—181 years later, it’s unclear who exactly dreamt up chess over wire and why. But thanks in part to historical documents at the Smithsonian Institution, we know enough about the people involved and the operation of the early telegraph to have a sense of the proceedings. We know that Morse would cite chess in lobbying Congress to fund the extension of the telegraphic network to New York via Philadelphia. And we know that there was much more chess by telegraph to come.
Not simply a novelty or a one-off tech demo, telegraph chess eventually became a well-known, joked-about trend in the United States and Britain, writes historian Simone Müller-Pohl. Chess by telegraph also prefigured chess played through other means of telecommunications. There are records of recreational and serious games played over radio, on telephone lines, satellite, and through online interfaces including forums, email, and dedicated live services. Most recently, chess has evolved into an esport. Earlier this year, chess joined the likes of Call of Duty, Street Fighter, and Rocket League at the 2025 Esports World Cup in Riyadh, Saudi Arabia.
Last August, chess grandmaster Magnus Carlsen won the first ever Chess Esports World Cup. Esports World Cup
The number of adults worldwide who play chess regularly is often estimated at around 600 million, and many of them use whatever means available to play games across long distances with friends, rivals, and strangers. Indeed, the 1,500-year-old game and the latest in telecommunications always seem to find each other, starting just months after the first telegraph was built in the United States, when chess went electric.
The Baltimore–Washington telegraph was financed in 1843 with US $30,000 (about $1.3 million today) appropriated by Congress, with the help of Morse’s business partner, Francis O.J. Smith, who had supported the project in 1838 while still a sitting congressman from Maine. By late 1844, a bill to extend the line to New York was in front of the U.S. House of Representatives. In at least one way, drawing the attention of legislators to the new line was relatively easy—the Washington end moved back and forth between the Capitol building and the post office, near the present-day National Portrait Gallery. If you were a lawmaker in Washington at the time, the telegraph would’ve been hard to miss.
But perhaps they needed more persuading. Orrin S. Wood, a telegraph operator, thought so. On 5 December 1844, Wood wrote a letter to his brother-in-law, engineer Ezra Cornell, who had worked on the line and would go on to cofound Western Union:
“We have had considerable excitement in playing chess between this place and Baltimore for the last 2 or 3 weeks.…I am inclined to think that Congress will do something for Prof Morse as very many of them appear to be very much interested with [chess].”
A week later, Morse wrote to George M. Bibb, Secretary of the Treasury, to lobby for the funding. The telegraph could relay congressional news, presidential convention results, or the whereabouts of wanted criminals, he argued. He also played the chess card:
On 18 November 1844, the Washington Chess Club challenged Baltimore to a game of telegraph chess. One telegraph operator asked his counterpart, “Are you tired of checkers?” Smithsonian Institution Archives
“To show the variety of the operations of the telegraph, a game of draughts [checkers], and several games of chess, have been played between the cities of Baltimore and Washington, with the same ease as if the players were seated at the same table.”
Chess had even been played on rainy nights, he noted. The telegraph’s continued operation in both inclement weather and darkness compared favorably with optical telegraphy. Such systems, popular in France, consisted of regularly spaced flag towers that relayed messages by semaphore; they were costly to build and operate and only worked in daylight and good weather.
While Morse played up people’s interest in telegraphic chess, the game itself didn’t obviously begin with promotional intent. It began with checkers. We know this because Morse’s associate Alfred Vail kept a “Journal of the Magnetic Telegraph between Washington and Baltimore” (now part of the Vail Telegraph Collection at the Smithsonian) in which he meticulously recorded messages sent over the wire for posterity.
Notes from a 26 November 1844 chess game record players’ moves, as well as other snippets of information, such as “I sent tea for Mr. Vail by 5 o’clock train.” Smithsonian Institution Archives
On 15 November 1844, Vail in Washington instructed Henry J. Rogers in Baltimore to “get a checkerboard and let us play a game tomorrow morning.” Vail promised to send instructions by regular mail on the five o’clock train. At first confused, Rogers came up with the idea of using numbered squares to communicate locations on the board. Later that day, Rogers announced that John Wills, a journalist with the Baltimore Patriot, would play in his place.
The next morning, before the checkers game began, Vail recorded a telegraphic exchange between himself and Rogers, in which Vail suggests the game is for private enjoyment, and he would prefer that Wills—a reporter—not write about it:
Do you think the game any advantage R
What game V
Checkers R
Amusement V
Don’t you think people will make remarks R
Not if it is done by ourselves V
yes have you any objections to Wills R
none if he does not publish it V
yes R
Wills was thoroughly impressed with the technology, calling it “another wonder of the age,” according to Rogers. And so the telegraphers agreed that he could publish an account of the game, which perhaps was Vail’s hope all along. The story was still being prepared for publication on 18 November when Vail tapped, “The Washington Chess Club challenge Baltimore to a game.”
Vail’s 1845 book about the telegraph includes a brief report on chess. He writes that in the Washington–Baltimore match, seven games were played, totaling 686 moves “transmitted without a single mistake or interruption.” These details reappear in The Book of the First American Chess Congress, which called the Baltimore–Washington games the first telegraphic chess match.
Alfred Vail, Morse’s associate, was instrumental in organizing the first telegraph chess match and kept detailed notes on messages sent over the line. Zoom Historical/Alamy
Before the electrical telegraph, players relayed written move descriptions through correspondence chess, played by mail. And The Oxford Companion to Chess (1984) describes a proposed 1823 match between France and England that intended to use semaphore telegraph, although the notation to be used was either never planned or has been lost to time.
But Vail and Rogers used a system that assigned a unique number to each of the 64 squares. So “pawn to queen’s bishop’s four” would have been rendered as “11 to 27.” Though the game itself can be remarkably complex, that system allowed individual moves to be communicated simply. “The exchange of information in chess is relatively low,” says David Kazdan, an engineer at Case Western Reserve University who has recently overseen a renewed collaboration between the school’s radio and chess clubs. “You don’t need much of a communication channel to play chess.”
To represent the positions on their telegraph chess board, Alfred Vail and Henry Rogers assigned a unique number to each of the 64 squares. Smithsonian Institution Archives
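The arithmetic behind that rendering can be reconstructed from the example itself, if one assumes the squares were counted row by row starting from White’s side of the board; the orientation is our assumption, not something the journal spells out. A minimal sketch:

```python
# Reconstruction of a 1-64 square numbering consistent with the text's example:
# "pawn to queen's bishop's four" = "11 to 27", i.e. the pawn on c2 moves to c4.
# Assumed layout: a1=1, b1=2, ... h1=8, a2=9, ... h8=64.

FILES = "abcdefgh"

def square_number(square: str) -> int:
    """Map algebraic notation like 'c4' to the 1-64 telegraph numbering."""
    file_idx = FILES.index(square[0])   # 0 for 'a' ... 7 for 'h'
    rank = int(square[1])               # 1..8
    return (rank - 1) * 8 + file_idx + 1

def telegraph_move(frm: str, to: str) -> str:
    return f"{square_number(frm)} to {square_number(to)}"

print(telegraph_move("c2", "c4"))  # "11 to 27": pawn to queen's bishop's four
```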
Vail’s book logs the moves for two of the chess games, and both accounts include an illegal move—probably errors that were introduced later. The accounts in Vail’s telegraphic journal, on the other hand, appear accurate, and even include a real-time correction of one move.
In the first game of telegraph chess, White was defeated. Google Books
In Vail’s journal, Washington claims the white pieces, but close examination shows that Washington either played the first move as black, or the board was mirrored left to right. At the time, the white pieces did not always take the first move. The sides also agreed to a limit of 10 minutes per move, even though time controls weren’t common in chess, and the chess clock had not yet been invented. And while Vail wanted “first rate players,” chess historian John McCrary calls the overall play weak, with a poor understanding of the long-term planning needed to coordinate all of the pieces. Both teams also made tactical errors. For example, in the second game, Washington overlooked that one of their pawns was overworked defending two other pieces simultaneously, watched as Baltimore captured one of the pieces, and elected not to retaliate in order to continue defending a more valuable knight. “Even with changing conventions of that time, what was there in the description was atypical,” says McCrary.
The teams took a break during the first game and then reconvened on 28 November. With a pawn in position to advance to the last row, where it would be “promoted”—that is, replaced by a more dangerous piece of the player’s choice—Baltimore swept in with its queen and readied checkmate in one move. Unable to salvage the game, Washington resigned. “Ha ha,” wrote Rogers. “Ha ha,” responded Vail.
There is no record of overall standings, and no winner was declared between the two cities after all games had been played.
By today’s standards, the hardware that relayed the moves was relatively simple, mainly consisting of a battery, a switch, and a magnet. “It’s not all that different from a doorbell,” says David Hochfelder, a historian at the State University of New York at Albany who has studied the early American telegraph.
Laying the line between the two cities had been difficult, with costly delays after failed attempts to bury the cable and to use cheaper noninsulated wire. Eventually, insulated copper wire was strung overhead between poles for the full distance.
On 24 May 1844, this telegraph register received the first message sent by telegraph: “What hath God wrought.” AP
Years before the chess match, Morse had considered a messaging system that used only numbers, which corresponded to set words or phrases listed in a code book. But he soon realized that a practical communications service would need an alphabetic component to spell out proper names.
This led to Morse’s eponymous code, which assigns a series of short and long signals to different alphanumeric characters. By tapping on a key, telegraph operators would interrupt a battery-powered current that ran the length of the telegraph wire. At the other end, an electromagnet moved a stylus, pen, or pencil, to mark a piece of paper with the corresponding dots and dashes, which an operator would then read. (The sounder, which turned the signals into audible sounds, hadn’t yet been invented.)
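The sketch below gives the flavor of that encoding. One caveat: the 1844 line used Morse’s original “American” code, whose dot-dash patterns differ from the modern International Morse used here for familiarity.

```python
# Morse-style encoding of a telegraph chess move. Uses modern International
# Morse; the 1844 "American" code assigned different patterns to some characters.

INTERNATIONAL_MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.",
    "G": "--.", "H": "....", "I": "..", "J": ".---", "K": "-.-", "L": ".-..",
    "M": "--", "N": "-.", "O": "---", "P": ".--.", "Q": "--.-", "R": ".-.",
    "S": "...", "T": "-", "U": "..-", "V": "...-", "W": ".--", "X": "-..-",
    "Y": "-.--", "Z": "--..", "0": "-----", "1": ".----", "2": "..---",
    "3": "...--", "4": "....-", "5": ".....", "6": "-....", "7": "--...",
    "8": "---..", "9": "----.",
}

def encode(message: str) -> str:
    """Separate letters with spaces and words with ' / '."""
    return " / ".join(
        " ".join(INTERNATIONAL_MORSE[ch] for ch in word if ch in INTERNATIONAL_MORSE)
        for word in message.upper().split()
    )

print(encode("11 to 27"))  # .---- .---- / - --- / ..--- --...
```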
During the chess games, the telegraph operators occasionally asked each other how many people were in the room. At times, a dozen kibitzers looked on. At others, only the rotating cast of chess players and telegraph operators was present.
The Baltimore–Washington telegraph line was an immediate hit with a general public that embraced popular science through lectures, books, and magazines. Scientific American was founded in 1845, for example. But people were more curious to see the telegraph at work than they were to use its services, even though the line operated free of charge for the first year. “Operators tended to show its capabilities rather than handling actual message traffic,” says Hochfelder.
The lack of activity is sometimes evident in the telegraph journal. Many of the messages are purely functional (“I am ready,” “stop 30 minutes”); simple greetings; notifications of letters sent and received; or requests for daily newspapers. The Baltimore end of the telegraph was in the Mt. Clare station of the B&O railroad, and the telegraph line ran alongside the tracks. Mail delivered by train took half a day door to door, says Hochfelder, and the telegraph offered little practical advantage.
On 5 December 1844, Rogers wrote to Vail:
“I hear from several sources that we are making rather an unfavorable impression with the religious part of the community, and I am under the impression if we continue after the present party is through that we will be injured more than any benefit might or can be derived from it.”
The exact nature of the religious community’s complaint with telegraph chess is unclear.
Although Morse wrote to Vail on the day of the first chess game that he “was much pleased with your game of drafts,” he came to feel that chess was too frivolous for the telegraph, as noted by the chess writer Tim Harding in his Correspondence Chess in Britain and Ireland, 1824–1987 (McFarland, 2011). Whatever the reasons, it appears that after 17 December 1844, no more chess was played on the line. And in the end, Congress didn’t fund a telegraphic connection to New York, nor did it acquire perpetual rights to the telegraph, in part because Morse’s business partner had other designs, says Hochfelder. The Baltimore–Washington line operated under the auspices of the Postal Service from 1845 to 1847, when funding ended.
When U.S. chess grandmaster Bobby Fischer was prevented from attending an international tournament in Havana, his moves were relayed via teletype. Left: Everett Collection Historical/Alamy; Right: Smith Archive/Alamy
After that, the U.S. telegraph thrived in private ventures. Over the next few years, companies built local lines and networks to connect cities across the country. Most notably, Ezra Cornell’s Western Union completed a transcontinental telegraph in 1861, and eventually became a monopoly in the United States. Ordinary people rarely used the telegraph, says Hochfelder, but it transformed industries such as finance and journalism.
Meanwhile, telegraph chess was taken up elsewhere. In 1845, for example, a match between London and Gosport, England, involved inventor Charles Wheatstone and chess master Howard Staunton. But it would take another few decades for telechess to become more widespread, with prominent club matches played over telegraph from the 1890s into the 1920s.
High-level chess competitions tend to be held in person, but games have been played remotely from time to time. For example, in 1965, U.S. grandmaster Bobby Fischer relayed his moves by teletype over telephone lines from New York City to Havana, after the U.S. State Department prevented his attending a tournament there. And in 1999, a couple of years after losing a rematch to the IBM supercomputer Deep Blue, world champion Garry Kasparov played a promotional game against a team representing “the world,” which consulted on moves via a Microsoft forum.
In a promotional game in 1999, Russian grandmaster Garry Kasparov played an online game against “the world.” Jeff Christensen/Getty Images
Today, the internet has taken telecom chess to fabulous new heights, with one site alone, chess.com, often hosting up to 20 million games a day. Indeed, the growth in online play has sometimes stretched the capacities of the servers and the engineers who maintain them.
Why have technologists taken the opportunity to play chess using so many generations of telecommunications? It may simply be that chess is popular, and by its nature can actually be played with short messages and perfect information, unlike soccer or poker.
But is there something more, maybe a natural affinity? “There are similarities in thinking processes [between] engineering design, and the sort of puzzle solving that a chess game involves,” says Kazdan of Case Western Reserve. The connection may be one-sided. “Many engineers like chess. I’m not sure many chess players like engineering.”
2025-12-11 04:33:57

Prevent costly AI failures in production by mastering data-centric approaches to detect bias, class imbalance, and data leakage before deployment impacts your business.
2025-12-11 03:00:02

It appears that nearly every organization is planning to use artificial intelligence to improve operations. Although autonomous intelligent systems (AIS) can offer significant benefits, they also can be used unethically. The technology can create deepfakes, realistic-looking altered images and videos that help spread misinformation and disinformation. Meanwhile, AI systems trained on biased data can perpetuate discrimination in hiring, lending, and other practices. And surveillance systems that incorporate AI can lead to misidentification.
Those issues have led to concerns about AIS trustworthiness, and it has become more crucial for AI developers and companies to ensure the systems they use and sell are ethically sound. To help them, the IEEE Standards Association (IEEE SA) launched its IEEE CertifAIEd ethics program, which offers two certifications: one for individuals and one for products.
IEEE CertifAIEd was developed by an interdisciplinary group of AI ethics experts. The program is based on IEEE’s AI ethics framework and methodology, centered around the pillars of accountability, privacy, transparency, and avoiding bias. The program incorporates criteria outlined in the AI ontological specifications released under Creative Commons licenses.
IEEE is the only international organization that offers the programs, says Jon Labrador, director for conformity assessment of IEEE SA programs.
The professional certification provides individuals with the skills to assess an AIS for adherence to IEEE’s methodology and ethics framework.
Those with at least one year of experience in the use of AI tools or systems in their organization’s business processes or work functions are eligible to apply for the certification.
You don’t have to be a developer or engineer to benefit from the training, Labrador says. Insurance underwriters, policymakers, human resources personnel, and others could benefit from it, he says.
“Professionals from just about any industry or any company that’s using an AI tool to process business transactions are eligible for this program,” he says.
The training program covers how to ensure that AI systems are open and understandable; identify and mitigate biases in algorithms; and protect personal data. The curriculum includes use cases. Courses are available in virtual, in-person, or self-study formats.
Learners must take a final exam. Once they’ve successfully passed the test, they’ll receive their three-year IEEE professional certification, which is globally recognized, accepted, and respected, Labrador says.
“With the certification, you’ll become a trusted source for reviewing AI tools used in your business processes, and you’ll be qualified to run an assessment,” he says. “It would be incumbent on a company to have a few IEEE CertifAIEd professionals to review its tools regularly to make sure they conform with the values identified in our program.”
The self-study exam preparatory course is available to IEEE members at US $599; it costs $699 for nonmembers.
The product certification program assesses whether an organization’s AI tool or AIS conforms to the IEEE framework and continuously aligns with legal and regulatory principles such as the European Union AI Act.
An IEEE CertifAIEd assessor evaluates the product to ensure it meets all criteria. There are more than 300 authorized assessors.
Upon completion of the assessment, the company submits it to IEEE Conformity Assessment, which certifies the product and issues the certification mark.
“That mark lets customers know that the company has gone through the rigors and is 100 percent in conformance with the latest IEEE AI ethics specifications,” Labrador says.
“The IEEE CertifAIEd program can also be viewed as a risk mitigation tool for companies,” he says, “reducing the risk of system or process failures with the introduction of a new AI tool or system in established business processes.”
You can complete an application to begin the process of getting your product certified.
2025-12-10 22:00:02

The power surging through transmission lines over the iconic stone walls of England’s northern countryside is pushing the United Kingdom’s grid to its limits. To the north, Scottish wind farms have doubled their output over the past decade. In the south, where electricity demand is heaviest, electrification and new data centers promise to draw more power, but new generation is falling short. Construction on a new 3,280-megawatt nuclear power plant west of London lags years behind schedule.
The result is a lopsided flow of power that’s maxing out transmission corridors from the Highlands to London. That grid strain won’t ease any time soon. New lines linking Scotland to southern England are at least three to four years from operation, and at risk of further delays from fierce local opposition.
At the same time, U.K. Prime Minister Keir Starmer is bent on installing even more wind power and slashing fossil-fuel generation by 2030. His Labour government says low-carbon power is cheaper and more secure than natural gas, much of which comes from Norway via the world’s longest underwater gas pipeline and is vulnerable to disruption and sabotage.
The lack of transmission lines available to move power flowing south from Scottish wind farms has caused grid congestion in England. To better manage it, the U.K. has installed SmartValves at three substations in northern England—Penwortham, Harker, and Saltholme—and is constructing a fourth at South Shields. Chris Philpot
With congestion relief needed now, the U.K.’s grid operators are getting creative, rapidly tapping new cable designs and innovations in power electronics to squeeze more power through existing transmission corridors. These grid-enhancing technologies, or GETs, present a low-cost way to bridge the gap until new lines can be built.
“GETs allow us to operate the system harder before an investment arrives, and they save a s***load of money,” says Julian Leslie, chief engineer and director of strategic energy planning at the National Energy System Operator (NESO), the Warwick-based agency that directs U.K. energy markets and infrastructure.
Transmission lines running across England’s countryside are maxed out, creating bottlenecks in the grid that prevent some carbon-free power from reaching customers. Vincent Lowe/Alamy
The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment. Such innovation involves some risk, because an intervention anywhere on the U.K.’s tightly meshed power system can have system-wide impacts. (Grid operators elsewhere are choosing to start with GETs at their systems’ periphery—where there’s less impact if something goes wrong.)
The question is how far—and how fast—the U.K.’s grid operators can push GETs capabilities. The new technologies still have a limited track record, so operators are cautiously feeling their way toward heavier investment. Power system experts also have unanswered questions about these advanced grid capabilities. For example, will they create more complexity than grid operators can manage in real time? Might feedback between different devices destabilize the grid?
There is no consensus yet as to how to even screen for such risks, let alone protect against them, says Robin Preece, professor in future power systems at the University of Manchester, in England. “We’re at the start of establishing that now, but we’re building at the same time. So it’s kind of this race between the necessity to get this technology installed as quickly as possible, and our ability to fully understand what’s happening.”
One of the most innovative and high-stakes tricks in the U.K.’s toolbox employs electronic power-flow controllers, devices that shift electricity from jammed circuits to those with spare capacity. These devices have been able to finesse enough additional wind power through grid bottlenecks to replace an entire gas-fired generator. Installed in northern England four years ago by Smart Wires, based in Durham, N.C., these SmartValves are expected to help even more as NESO installs more of them and masters their capabilities.
Warwick-based National Grid Electricity Transmission, the grid operator for England and Wales, is adding SmartValves and also replacing several thousand kilometers of overhead wire with advanced conductors that can carry more current. And it’s using a technique called dynamic line rating, whereby sensors and models work together to predict when weather conditions will allow lines to carry extra current.
Other kinds of GETs are also being used globally. Advanced conductors are the most widely deployed. Dynamic line rating is increasingly common in European countries, and U.S. utilities are beginning to take it seriously. Europe also leads the world in topology-optimization software, which reconfigures power routes to alleviate congestion, and advanced power-flow-control devices like SmartValves.
Engineers install dynamic line rating technology from the Boston-based company LineVision on National Grid’s transmission network. National Grid Electricity Transmission
SmartValves’ chops stand out at the Penwortham substation in Lancashire, England, one of two National Grid sites where the device made its U.K. debut in 2021. Penwortham is a major transmission hub whose spokes desperately need congestion relief. The heavy power flows were plainly audible during my visit: the substation buzzes loudly, a sound produced by the electromechanical stresses on its massive transformers, explains my guide, National Grid commissioned engineer Paul Lloyd.
Penwortham’s transformers, circuits, and protective relays are spread over 15 hectares, sandwiched between pastureland and suburban homes near Preston, a small city north of Manchester. Power arrives from the north on two pairs of 400-kilovolt AC lines, and most of it exits southward via 400-kV and 275-kV double-circuit wires.
Transmission lines lead to the congested Penwortham substation, which has become a test-bed for GETs such as SmartValves and dynamic line rating. Peter Fairley
What makes the substation a strategic test-bed for GETs is its position just north of the U.K. grid’s biggest bottleneck, known as Boundary B7a, which runs east to west across the island. Nine circuits traverse the B7a: the four AC lines headed south from Penwortham, four AC lines closer to Yorkshire’s North Sea coast, and a high-voltage direct-current (HVDC) link offshore. In theory, those circuits can collectively carry 13.6 gigawatts across the B7a. But NESO caps its flow at several gigawatts lower to ensure that no circuits overload if any two lines turn off.
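The logic of such a cap can be shown with a toy “N-2” screen, which asks what transfer level survives the loss of any two circuits at once. The circuit ratings below are invented stand-ins chosen to sum to 13.6 GW; real studies use full AC power-flow models, in which flow follows impedance rather than ratings, so real limits are tighter still.

```python
# Toy N-2 boundary screening. Ratings are invented stand-ins, not NESO data,
# and the model optimistically assumes surviving circuits share flow up to
# their ratings; real AC power-flow studies yield tighter limits.

from itertools import combinations

RATINGS_MW = [1700, 1700, 1700, 1700, 1500, 1500, 1500, 1500, 800]  # 9 circuits

def n2_secure_limit(ratings: list[float]) -> float:
    """Largest transfer that survives any double-circuit outage."""
    worst = float("inf")
    for out in combinations(range(len(ratings)), 2):
        remaining = sum(r for i, r in enumerate(ratings) if i not in out)
        worst = min(worst, remaining)
    return worst

print(f"total capacity: {sum(RATINGS_MW)/1000:.1f} GW")              # 13.6 GW
print(f"N-2 secure cap: {n2_secure_limit(RATINGS_MW)/1000:.1f} GW")  # 10.2 GW
# Losing the two biggest circuits leaves several gigawatts less than the
# theoretical total -- the gap the article describes.
```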
Such limits are necessary for grid reliability, but they are leaving terawatt-hours of wind power stranded in Scotland and increasing consumers’ energy costs: an extra £196 million (US $265 million) in 2024 alone. The costs stem from NESO having to ramp up gas-fired generators to meet demand down south while simultaneously compensating wind-farm operators for curtailing their output, as required under U.K. policy.
So National Grid keeps tweaking Penwortham. In 2011 the substation got its first big GET: phase-shifting transformers (PSTs), a type of analog flow controller. A PST adjusts power flow by creating an AC waveform whose alternating voltage leads or lags its alternating current. Each PST uses a pair of connected transformers to selectively combine power from an AC transmission circuit’s three phases, and motors reposition electrical connections on the transformer coils to adjust flows.
Phase-shifting transformers (PSTs) were installed in 2012 at the Penwortham substation and are the analog predecessor to SmartValves. They’re powerful but also bulky and relatively inflexible. It can take 10 minutes or more for the PST’s motorized actuators at Penwortham to tap their full range of flow control, whereas SmartValves can shift within milliseconds. National Grid Electricity Transmission
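The steering effect of a phase shift follows from the standard expression for flow on an AC line, P = (V₁V₂/X)·sin(δ + φ), where δ is the natural angle difference across the line and φ is the shift the PST injects. A quick illustration, with made-up line parameters rather than Penwortham’s:

```python
import math

# Flow on an AC line: P = (V1 * V2 / X) * sin(delta + phi).
# Voltages, reactance, and angles below are illustrative assumptions.

def line_flow_mw(v_kv: float, x_ohms: float, delta_deg: float, pst_deg: float = 0.0) -> float:
    v = v_kv * 1e3
    return (v * v / x_ohms) * math.sin(math.radians(delta_deg + pst_deg)) / 1e6

base = line_flow_mw(400, x_ohms=100, delta_deg=10)
shifted = line_flow_mw(400, x_ohms=100, delta_deg=10, pst_deg=-3)
print(f"no shift: {base:.0f} MW, with a -3 degree PST shift: {shifted:.0f} MW")
# Injecting just 3 degrees moves roughly 80 MW off this line, forcing that
# power onto parallel paths with spare headroom.
```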
Penwortham’s pair of 540-tonne PSTs occupy the entire south end of the substation, along with their dedicated chillers, relays, and power supplies. Delivering all that hardware required extensive road closures and floating a huge barge up the adjacent River Ribble, an event that made national news.
The SmartValves at Penwortham stand in stark contrast to the PSTs’ heft, complexity, and mechanics. SmartValves are a type of static synchronous series compensator, or SSSC—a solid-state alternative to PSTs that employs power electronics to tweak power flows in milliseconds. I saw two sets of them tucked into a corner of the substation, occupying a quarter of the area of the PSTs.
The SmartValve V103 design [above] experienced some teething and reliability issues that were ironed out with the technology’s next iteration, the V104. National Grid Electricity Transmission/Smart Wires
The SmartValves are first and foremost an insurance policy to guard against a potentially crippling event: the sudden loss of one of the B7a’s 400-kV lines. If that were to happen, gigawatts of power would instantly seek another route over neighboring lines. And if it happened on a windy day, when lots of power is streaming in from the north, the resulting surge could overload the 275-kV circuits headed from Penwortham to Liverpool. The SmartValves’ job is to save the day.
They do this by adding impedance to the 275-kV lines, thus acting to divert more power to the remaining 400-kV lines. This rerouting of power prevents a blackout that could potentially cascade through the grid. The upside to that protection is that NESO can safely schedule an additional 350 MW over the B7a.
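The diversion works because power divides between parallel paths in inverse proportion to their impedance. The sketch below uses invented reactances for the 275-kV and 400-kV corridors to show the effect; it is a DC power-flow approximation, not National Grid’s model.

```python
# Parallel-path flow split (DC power-flow approximation). Reactances and the
# southbound total are invented stand-ins for the corridors out of Penwortham.

def split_flow(total_mw: float, x_275: float, x_400: float) -> tuple[float, float]:
    """Flow divides in proportion to admittance (inverse impedance)."""
    y_275, y_400 = 1 / x_275, 1 / x_400
    share_275 = y_275 / (y_275 + y_400)
    return total_mw * share_275, total_mw * (1 - share_275)

TOTAL_MW = 2000.0
before = split_flow(TOTAL_MW, x_275=30.0, x_400=20.0)
after = split_flow(TOTAL_MW, x_275=30.0 + 15.0, x_400=20.0)  # SmartValves add ~15 ohms
print(f"before: 275 kV {before[0]:.0f} MW, 400 kV {before[1]:.0f} MW")
print(f"after : 275 kV {after[0]:.0f} MW, 400 kV {after[1]:.0f} MW")
# Emulating 15 extra ohms on the 275 kV path shifts roughly 185 MW of the same
# total onto the 400 kV circuits, keeping the weaker path within its rating.
```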
The savings add up. “That’s 350 MW of wind you’re no longer curtailing from wind farms. So that’s 350 times £100 a megawatt-hour,” says Leslie, at NESO. “That’s also 350 MW of gas-fired power that you don’t need to replace the wind. So that’s 350 times £120 a megawatt-hour. The numbers get big quickly.”
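Leslie’s tally is easy to check. How many hours a year the constraint actually binds isn’t public, so the annual figure in this sketch assumes a purely hypothetical 1,000 constrained hours:

```python
# Back-of-envelope savings per constrained hour, using the prices Leslie quotes.
# The 1,000 constrained hours per year is a hypothetical assumption.

EXTRA_WIND_MW = 350
curtailment_saved = EXTRA_WIND_MW * 100   # GBP/h no longer paid to curtail wind
gas_avoided = EXTRA_WIND_MW * 120         # GBP/h of gas generation not needed
per_hour = curtailment_saved + gas_avoided
print(f"£{per_hour:,} saved per constrained hour")                   # £77,000
print(f"£{per_hour * 1000 / 1e6:.0f} million per 1,000 such hours")  # £77 million
```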
Mark Osborne, the National Grid lead asset life-cycle engineer managing its SmartValve projects, estimates the devices are saving U.K. customers over £100 million (US $132 million) a year. At that rate, they’ll pay for themselves “within a few years,” Osborne says. By utility standards, where investments are normally amortized over decades, that’s “almost immediately,” he adds.
Smart Wires’ SSSC devices adjust power flow by emulating impedance, a strange beast created by AC power. An AC flow’s changing magnetic field induces an additional voltage in the line’s conductor, which then acts as a drag on the initial flow. The SSSC emulates that natural process, effectively adding or subtracting impedance by adding its own voltage wave to the line. Adding a wave that leads the original voltage wave will boost flow, while adding a lagging wave will reduce flow.
The SSSC’s submodules of capacitors and high-speed insulated-gate bipolar transistors operate in sequence to absorb power from the line and synthesize the impedance-altering waves. And thanks to its digital controls and switches, the device can flip from maximum power push to maximum pull within milliseconds.
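In circuit terms, a voltage injected at 90 degrees to the line current is indistinguishable from series reactance, X = V/I. A minimal sketch with illustrative magnitudes, not SmartValve specifications:

```python
# Impedance emulation via quadrature voltage injection, using complex phasors
# with the line current as the phase reference. Magnitudes are illustrative.

I_LINE = 1000 + 0j    # line current phasor, amps

for v_inject, label in ((+5000j, "inductive  (throttles flow)"),
                        (-5000j, "capacitive (boosts flow)")):
    # Injected voltage divided by current gives the emulated series impedance.
    z_emulated = v_inject / I_LINE   # purely imaginary: reactance, no real power
    print(f"{label}: {z_emulated.imag:+.0f} ohms emulated")
# The IGBT submodules can flip between these two states within milliseconds.
```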
You can trace the development of SSSCs to the advent of HVDC transmission in the 1950s. HVDC converters take power from an AC grid and efficiently convert it and transfer it over a DC line to another point in the same grid, or to a neighboring AC grid. In 1985, Narain Hingorani, an HVDC expert at the Palo Alto–based Electric Power Research Institute, showed that similar converters could modulate the flow of an AC line. Four years later, Westinghouse engineer Laszlo Gyugyi proposed SSSCs, which became the basis for Smart Wires’ boxes.
Major power-equipment manufacturers tried to commercialize SSSCs in the early 2000s. But utilities had little need for flow control back then because they had plenty of conventional power plants that could meet local demand when transmission lines were full.
The picture changed as solar and wind generation exploded and conventional plants began shutting down. In years past, grid operators addressed grid congestion by turning power plants on or off in strategic locations. But as of 2024, the U.K. had shut down all of its coal-fired power plants—save one, which now burns wood—and it has vowed to slash gas-fired generation from about a quarter of electricity supply in 2024 to at most 5 percent in 2030.
To seize the emerging market opportunity presented by changing grid operations, Smart Wires had to make a crucial technology upgrade: ditching transformers. The company’s first SSSC, and those from other suppliers, relied on a transformer to absorb lightning, voltage surges, and every other grid assault that could fry their power electronics. This made them bulky and added cost. So Smart Wires engineers set to work in 2017 to see if they could live without the transformer, says Frank Kreikebaum, Smart Wires’s interim chief of engineering. Two years later the company had assembled a transformerless electronic shield. It consisted of a suite of filters and diverters, along with a control system to activate them. Ditching the transformer produced a trim, standardized product—a modular system-in-a-box.
SmartValves work at any voltage and are generally ganged together to achieve a desired level of flow control. They can be delivered fast, and they fit in the kinds of tight spaces that are common in substations. “It’s not about cost, even though we’re competitive there. It’s about ‘how quick’ and ‘can it fit,’” says Kreikebaum.
And if the grid’s pinch point shifts? The devices can be quickly moved to another substation. “It’s a Lego-brick build,” says Owen Wilkes, National Grid’s director of network design. Wilkes’s team decides where to add equipment based on today’s best projections, but he appreciates the flexibility to respond to unexpected changes.
National Grid’s deployments in 2021 were the highest-voltage installation of SSSCs at the time, and success there is fueling expansion. National Grid now has packs of SmartValves installed at three substations in northern England and under construction at another, with five more installations planned in that area. Smart Wires has also commissioned commercial projects at transmission substations in Australia, Brazil, Colombia, and the United States.
In addition to SSSCs, National Grid has deployed lidar that senses sag on Penwortham’s 275-kV lines—an indication that they’re starting to overheat. The sensors are part of a dynamic line rating system and help grid operators maximize the amount of current that high-voltage lines can carry based on near-real-time weather conditions. (Cooler weather means more capacity.) Now the same technology is being deployed across the B7a—a £1 million investment that is projected to save consumers £33 million annually, says Corin Ireland, a National Grid optimization engineer with the task of seizing GETs opportunities.
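Dynamic line rating boils down to a heat balance: the allowable current is whatever heats the conductor exactly to its temperature limit, given how fast wind and cool air carry heat away. The toy model below is in the spirit of the IEEE 738 standard, with rough stand-in coefficients rather than National Grid’s actual model:

```python
import math

# Simplified steady-state ampacity: I^2 * R = convective + radiative cooling.
# Coefficients and conductor parameters are rough illustrative stand-ins.

def ampacity_a(t_ambient_c: float, wind_m_s: float,
               t_limit_c: float = 75.0, r_ohm_per_m: float = 7e-5) -> float:
    h = 5.0 + 10.0 * math.sqrt(wind_m_s)    # crude convection coefficient, W/m/K
    q_conv = h * (t_limit_c - t_ambient_c)  # watts shed per metre of conductor
    sigma, emissivity, diameter_m = 5.67e-8, 0.8, 0.03
    q_rad = emissivity * sigma * math.pi * diameter_m * (
        (t_limit_c + 273.15) ** 4 - (t_ambient_c + 273.15) ** 4)
    return math.sqrt((q_conv + q_rad) / r_ohm_per_m)

print(f"hot, still day : {ampacity_a(30, 0.5):.0f} A")
print(f"cool, windy day: {ampacity_a(10, 8.0):.0f} A")
# Cooler, windier weather yields far more headroom than a static rating set
# for the worst case -- capacity dynamic line rating can safely unlock.
```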
There’s also a lot of old conductor wire being swapped out for wire that can carry more power. National Grid’s business plan calls for 2,416 kilometers of such reconductoring over the coming five years, about 20 percent of its system. Scotland’s transmission operators are busy with their own big swaps.
Scottish wind farms have doubled their power output over the past decade, but it often gets stranded due to grid congestion in England. Andreas Berthold/Alamy
But while National Grid and NESO are making some of the boldest deployments of GETs in the world, they’re not fully tapping the technologies’ capabilities. That’s partly due to the conservative nature of power utilities, and partly because grid operators already have plenty to keep their eyes on. It also stems from the unknowns that still surround GETs, like whether they might take the grid in unforeseen directions if allowed to respond automatically, or get stuck in a feedback loop responding to each other. Imagine SmartValve controllers at different substations fighting, with one substation jumping to remove impedance that the other just added, causing fluctuating power flows.
“These technologies operate very quickly, but the computers in the control room are still very reliant on people making decisions,” says Ireland. “So there are time scales that we have to take into consideration when planning and operating the network.”
This kind of conservative dispatching leaves value on the table. For example, the dynamic line rating models can spit out new line ratings every 15 minutes, but grid operators get updates only every 24 hours. Fewer updates mean fewer opportunities to tap the system’s ability to boost capacity. Similarly, for SmartValves, NESO activates installations at only one substation at a time. And control-room operators turn them on manually, even though the devices could automatically respond to faults within milliseconds.
National Grid is upgrading transmission lines dating as far back as the 1960s. This includes installing conductors that retain their strength at higher temperatures, allowing them to carry more power. National Grid Electricity Transmission
Modeling by Smart Wires and National Grid shows a significant capacity boost across Boundary B7a if Penwortham’s SmartValves were to work in tandem with another set further up the line. For example, when Penwortham is adding impedance to push megawatts off the 275-kV lines, a set closer to Scotland could simultaneously pull the power north, nudging the sum over to the B7a’s eastern circuits. Simulations by Andy Hiorns, a former National Grid planning director who consults for Smart Wires, suggest that this kind of cooperative action should increase the B7a circuits’ usable capacity by another 250 to 300 MW. “You double the effectiveness by using them as pairs,” he says.
Operating multiple flow controllers may become necessary for unlocking the next boundary en route to London, south of the B7a, called Boundary B8. As dynamic line rating, beefier conductors, and SmartValves send more power across the B7a, lines traversing B8 are reaching their limits. Eventually, every boundary along the route will have to be upgraded.
Meanwhile, back at its U.S. headquarters, Smart Wires is developing other applications for its SSSCs, such as filtering out power oscillations that can destabilize grids and reduce allowable transfers. That capability could be unlocked remotely with firmware.
The company is also working on a test program that could turn on pairs of SmartValve installations during slack moments when there isn’t much going on in the control rooms. That would give National Grid and NESO operators an opportunity to observe the impacts, and to get more comfortable with the technology.
National Grid and Smart Wires are also hard at work developing industry-first optimization software for coordinating flow-control devices. “It’s possible to extend the technology from how we’re using it today,” says Ireland at National Grid. “That’s the exciting bit.”
NESO’s Julian Leslie shares that excitement and says he expects SmartValves to begin working together to ease power through the grid—once the operators have the modeling right and get a little more comfortable with the technology. “It’s a great innovation that has the potential to be really transformational,” he says. “We’re just not quite there yet.”