IEEE Spectrum

This Low-Cost Stopgap Tech Can Fix the Grid

2025-12-10 22:00:02



The power surging through transmission lines over the iconic stone walls of England’s northern countryside is pushing the United Kingdom’s grid to its limits. To the north, Scottish wind farms have doubled their output over the past decade. In the south, where electricity demand is heaviest, electrification and new data centers promise to draw more power, but new generation is falling short. Construction on a new 3,280-megawatt nuclear power plant west of London lags years behind schedule.

The result is a lopsided flow of power that’s maxing out transmission corridors from the Highlands to London. That grid strain won’t ease any time soon. New lines linking Scotland to southern England are at least three to four years from operation, and at risk of further delays from fierce local opposition.

At the same time, U.K. Prime Minister Keir Starmer is bent on installing even more wind power and slashing fossil-fuel generation by 2030. His Labour government says low-carbon power is cheaper and more secure than natural gas, much of which comes from Norway via the world’s longest underwater gas pipeline and is vulnerable to disruption and sabotage.

[Image: Map of transmission lines in southern Scotland and northern England.] The lack of transmission lines available to move power flowing south from Scottish wind farms has caused grid congestion in England. To better manage it, the U.K. has installed SmartValves at three substations in northern England—Penwortham, Harker, and Saltholme—and is constructing a fourth at South Shields. Credit: Chris Philpot

The U.K.’s resulting grid congestion prevents transmission operators from delivering some of their cleanest, cheapest generation to all of the consumers who want it. Congestion is a perennial problem whenever power consumption is on the rise. It pushes circuits to their thermal limits and creates grid stability or security constraints.

With congestion relief needed now, the U.K.’s grid operators are getting creative, rapidly tapping new cable designs and innovations in power electronics to squeeze more power through existing transmission corridors. These grid-enhancing technologies, or GETs, present a low-cost way to bridge the gap until new lines can be built.

“GETs allow us to operate the system harder before an investment arrives, and they save a s***load of money,” says Julian Leslie, chief engineer and director of strategic energy planning at the National Energy System Operator (NESO), the Warwick-based agency that directs U.K. energy markets and infrastructure.

[Image: A stone wall along a dirt trail with large power lines running overhead on a sunny day.] Transmission lines running across England’s countryside are maxed out, creating bottlenecks in the grid that prevent some carbon-free power from reaching customers. Credit: Vincent Lowe/Alamy

The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment. Such innovation involves some risk, because an intervention anywhere on the U.K.’s tightly meshed power system can have system-wide impacts. (Grid operators elsewhere are choosing to start with GETs at their systems’ periphery—where there’s less impact if something goes wrong.)

The question is how far—and how fast—the U.K.’s grid operators can push GETs capabilities. The new technologies still have a limited track record, so operators are cautiously feeling their way toward heavier investment. Power system experts also have unanswered questions about these advanced grid capabilities. For example, will they create more complexity than grid operators can manage in real time? Might feedback between different devices destabilize the grid?

There is no consensus yet as to how to even screen for such risks, let alone protect against them, says Robin Preece, professor in future power systems at the University of Manchester, in England. “We’re at the start of establishing that now, but we’re building at the same time. So it’s kind of this race between the necessity to get this technology installed as quickly as possible, and our ability to fully understand what’s happening.”

How Is the U.K. Managing Grid Congestion?

One of the most innovative and high-stakes tricks in the U.K.’s toolbox employs electronic power-flow controllers, devices that shift electricity from jammed circuits to those with spare capacity. These devices have been able to finesse enough additional wind power through grid bottlenecks to replace an entire gas-fired generator. Installed in northern England four years ago by Smart Wires, based in Durham, N.C., these SmartValves are expected to help even more as NESO installs more of them and masters their capabilities.

Warwick-based National Grid Electricity Transmission, the grid operator for England and Wales, is adding SmartValves and also replacing several thousand kilometers of overhead wire with advanced conductors that can carry more current. And it’s using a technique called dynamic line rating, whereby sensors and models work together to predict when weather conditions will allow lines to carry extra current.

Other kinds of GETs are also being used globally. Advanced conductors are the most widely deployed. Dynamic line rating is increasingly common in European countries, and U.S. utilities are beginning to take it seriously. Europe also leads the world in topology-optimization software, which reconfigures power routes to alleviate congestion, and advanced power-flow-control devices like SmartValves.

[Image: Workers wearing hard hats and safety harnesses standing high up on an electricity pylon in the countryside.] Engineers install dynamic line rating technology from the Boston-based company LineVision on National Grid’s transmission network. Credit: National Grid Electricity Transmission

SmartValves’ chops stand out at the Penwortham substation in Lancashire, England, one of two National Grid sites where the device made its U.K. debut in 2021. Penwortham is a major transmission hub whose spokes desperately need congestion relief. The heavy power flows were audible during my visit: the substation buzzes loudly, a sound caused by electromechanical stresses on its massive transformers, explains my guide, National Grid commissioned engineer Paul Lloyd.

Penwortham’s transformers, circuits, and protective relays are spread over 15 hectares, sandwiched between pastureland and suburban homes near Preston, a small city north of Manchester. Power arrives from the north on two pairs of 400-kilovolt AC lines, and most of it exits southward via 400-kV and 275-kV double-circuit wires.

[Image: Power lines crossing a rural pasture, with an old fence in the foreground and a substation in the background.] Transmission lines lead to the congested Penwortham substation, which has become a test-bed for GETs such as SmartValves and dynamic line rating. Credit: Peter Fairley

What makes the substation a strategic test-bed for GETs is its position just north of the U.K. grid’s biggest bottleneck, known as Boundary B7a, which runs east to west across the island. Nine circuits traverse the B7a: the four AC lines headed south from Penwortham, four AC lines closer to Yorkshire’s North Sea coast, and a high-voltage direct-current (HVDC) link offshore. In theory, those circuits can collectively carry 13.6 gigawatts across the B7a. But NESO caps the flow several gigawatts lower to ensure that no circuit overloads if any two lines trip offline.

Such limits are necessary for grid reliability, but they are leaving terawatt-hours of wind power stranded in Scotland and increasing consumers’ energy costs: an extra £196 million (US $265 million) in 2024 alone. The costs stem from NESO having to ramp up gas-fired generators to meet demand down south while simultaneously compensating wind-farm operators for curtailing their output, as required under U.K. policy.

So National Grid keeps tweaking Penwortham. In 2011 the substation got its first big GET: phase-shifting transformers (PSTs), a type of analog flow controller. PSTs adjust power flow by creating an AC waveform whose alternating voltage leads or lags its alternating current. Each PST does so with a pair of connected transformers that selectively combine power from an AC transmission circuit’s three phases; motors reposition electrical connections on the transformer coils to adjust flows.
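To get a feel for how a phase shift steers power, consider the textbook lossless-line model, in which the power a line carries is proportional to the sine of the voltage-angle difference across it. The short Python sketch below uses illustrative voltages and reactance, not Penwortham’s actual parameters, to show how inserting a few degrees of phase shift trims a path’s share of the flow:

```python
import math

def line_flow_mw(v1_kv, v2_kv, x_ohms, delta_deg):
    """Active power over a lossless AC line: P = V1 * V2 * sin(delta) / X."""
    v1, v2 = v1_kv * 1e3, v2_kv * 1e3
    return v1 * v2 * math.sin(math.radians(delta_deg)) / x_ohms / 1e6  # MW

# A 400-kV path with a 10-degree voltage angle across it (illustrative values).
base = line_flow_mw(400, 400, 100, 10.0)

# A PST inserting a -3 degree shift on that path reduces its share of the flow,
# nudging power onto parallel circuits with spare capacity.
shifted = line_flow_mw(400, 400, 100, 10.0 - 3.0)
print(round(base), round(shifted))  # the shifted path carries noticeably less
```

Under these assumed numbers, a three-degree shift moves roughly a quarter of the path’s power elsewhere, which is why even a slow, motor-driven PST is such a potent congestion tool.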

[Image: Large machines outdoors connected to power lines.] Phase-shifting transformers (PSTs) were installed in 2012 at the Penwortham substation and are the analog predecessors to SmartValves. They’re powerful but also bulky and relatively inflexible: it can take 10 minutes or more for the PSTs’ motorized actuators at Penwortham to tap their full range of flow control, whereas SmartValves can shift within milliseconds. Credit: National Grid Electricity Transmission

Penwortham’s pair of 540-tonne PSTs occupy the entire south end of the substation, along with their dedicated chillers, relays, and power supplies. Delivering all that hardware required extensive road closures and floating a huge barge up the adjacent River Ribble, an event that made national news.

The SmartValves at Penwortham stand in stark contrast to the PSTs’ heft, complexity, and mechanics. SmartValves are a type of static synchronous series compensator, or SSSC—a solid-state alternative to PSTs that employs power electronics to tweak power flows in milliseconds. I saw two sets of them tucked into a corner of the substation, occupying a quarter of the area of the PSTs.

[Image: Workers on an elevated platform and tall, cylindrical machinery.] The SmartValve V103 design [above] experienced some teething and reliability issues that were ironed out with the technology’s next iteration, the V104. Credit: National Grid Electricity Transmission/Smart Wires

The SmartValves are first and foremost an insurance policy to guard against a potentially crippling event: the sudden loss of one of the B7a’s 400-kV lines. If that were to happen, gigawatts of power would instantly seek another route over neighboring lines. And if it happened on a windy day, when lots of power is streaming in from the north, the resulting surge could overload the 275-kV circuits headed from Penwortham to Liverpool. The SmartValves’ job is to save the day.

They do this by adding impedance to the 275-kV lines, diverting more power to the remaining 400-kV lines. This rerouting prevents a blackout that could cascade through the grid. The upside of that protection is that NESO can safely schedule an additional 350 MW over the B7a.

The savings add up. “That’s 350 MW of wind you’re no longer curtailing from wind farms. So that’s 350 times £100 a megawatt-hour,” says Leslie, at NESO. “That’s also 350 MW of gas-fired power that you don’t need to replace the wind. So that’s 350 times £120 a megawatt-hour. The numbers get big quickly.”
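Leslie’s back-of-the-envelope numbers are easy to multiply out. In the sketch below, the 1,000 congested hours per year is an assumed figure for illustration, not a NESO statistic:

```python
# Leslie's per-hour numbers, multiplied out. The 1,000 congested hours a
# year is an assumed figure for illustration, not a NESO statistic.
mw = 350
wind_price = 100   # £ per MWh no longer paid to curtail wind
gas_price = 120    # £ per MWh of gas-fired replacement no longer needed

hourly_saving = mw * wind_price + mw * gas_price
print(f"£{hourly_saving:,} per constrained hour")          # £77,000
print(f"£{hourly_saving * 1000 / 1e6:.0f} million per 1,000 such hours")
```

At £77,000 per constrained hour, the numbers do indeed get big quickly.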

Mark Osborne, the National Grid lead asset life-cycle engineer managing its SmartValve projects, estimates the devices are saving U.K. customers over £100 million (US $132 million) a year. At that rate, they’ll pay for themselves “within a few years,” Osborne says. By utility standards, where investments are normally amortized over decades, that’s “almost immediately,” he adds.

How Do Grid-Enhancing Technologies Work?

The way Smart Wires’ SSSC devices adjust power flow is based on emulating impedance, which is a strange beast created by AC power. An AC flow’s changing magnetic field induces an additional voltage in the line’s conductor, which then acts as a drag on the initial field. Smart Wires’ SSSC devices alter power flow by emulating that natural process, effectively adding or subtracting impedance by adding their own voltage wave to the line. Adding a wave that leads the original voltage wave will boost flow, while adding a lagging wave will reduce flow.

The SSSC’s submodules of capacitors and high-speed insulated-gate bipolar transistors operate in sequence to absorb power from a line and synthesize its novel impedance-altering waves. And thanks to its digital controls and switches, the device can within milliseconds flip from maximum power push to maximum pull.
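A rough sketch of the idea, using arbitrary phasor values rather than any real SmartValve parameters: injecting a series voltage in quadrature with the line current looks, from the grid’s point of view, like added or subtracted reactance.

```python
import cmath

# Illustrative phasors, not real SmartValve parameters.
i_line = cmath.rect(1000, 0)   # line current: 1 kA at 0 degrees
x_line = 20j                   # the line's natural series reactance, in ohms

def effective_reactance(v_inject_volts):
    """An SSSC injects a series voltage in quadrature with the line current.
    A positive value here (lagging the current by 90 degrees) looks like
    added inductance, raising impedance and pushing power away; a negative
    value looks like capacitance, lowering impedance and pulling power in."""
    v_inj = v_inject_volts * 1j * (i_line / abs(i_line))
    return x_line + v_inj / i_line

print(effective_reactance(5000))    # 20j + 5j = 25j ohms: more impedance
print(effective_reactance(-5000))   # 20j - 5j = 15j ohms: less impedance
```

The sign convention (which polarity pushes versus pulls) is chosen here for illustration; the point is that a modest injected voltage, flipped in milliseconds by the device’s switches, swings the line between its maximum-push and maximum-pull states.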

You can trace the development of SSSCs to the advent of HVDC transmission in the 1950s. HVDC converters take power from an AC grid and efficiently convert it and transfer it over a DC line to another point in the same grid, or to a neighboring AC grid. In 1985, Narain Hingorani, an HVDC expert at the Palo Alto–based Electric Power Research Institute, showed that similar converters could modulate the flow of an AC line. Four years later, Westinghouse engineer Laszlo Gyugyi proposed SSSCs, which became the basis for Smart Wires’ boxes.

Major power-equipment manufacturers tried to commercialize SSSCs in the early 2000s. But utilities had little need for flow control back then because they had plenty of conventional power plants that could meet local demand when transmission lines were full.

The picture changed as solar and wind generation exploded and conventional plants began shutting down. In years past, grid operators addressed grid congestion by turning power plants on or off in strategic locations. But as of 2024, the U.K. had shut down all of its coal-fired power plants—save one, which now burns wood—and it has vowed to slash gas-fired generation from about a quarter of electricity supply in 2024 to at most 5 percent in 2030.

The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment.

To seize the emerging market opportunity presented by changing grid operations, Smart Wires had to make a crucial technology upgrade: ditching transformers. The company’s first SSSC, and those from other suppliers, relied on a transformer to absorb lightning, voltage surges, and every other grid assault that could fry their power electronics. This made them bulky and added cost. So Smart Wires engineers set to work in 2017 to see if they could live without the transformer, says Frank Kreikebaum, Smart Wires’ interim chief of engineering. Two years later the company had assembled a transformerless electronic shield. It consisted of a suite of filters and diverters, along with a control system to activate them. Ditching the transformer produced a trim, standardized product—a modular system-in-a-box.

SmartValves work at any voltage and are generally ganged together to achieve a desired level of flow control. They can be delivered fast, and they fit in the kinds of tight spaces that are common in substations. “It’s not about cost, even though we’re competitive there. It’s about ‘how quick’ and ‘can it fit,’” says Kreikebaum.

And if the grid’s pinch point shifts? The devices can be quickly moved to another substation. “It’s a Lego-brick build,” says Owen Wilkes, National Grid’s director of network design. Wilkes’s team decides where to add equipment based on today’s best projections, but he appreciates the flexibility to respond to unexpected changes.

National Grid’s deployments in 2021 were the highest-voltage installation of SSSCs at the time, and success there is fueling expansion. National Grid now has packs of SmartValves installed at three substations in northern England and under construction at another, with five more installations planned in that area. Smart Wires has also commissioned commercial projects at transmission substations in Australia, Brazil, Colombia, and the United States.

Dynamic Line Rating Boosts Grid Efficiency

In addition to SSSCs, National Grid has deployed lidar that senses sag on Penwortham’s 275-kV lines—an indication that they’re starting to overheat. The sensors are part of a dynamic line rating system and help grid operators maximize the amount of current that high-voltage lines can carry based on near-real-time weather conditions. (Cooler weather means more capacity.) Now the same technology is being deployed across the B7a—a £1 million investment that is projected to save consumers £33 million annually, says Corin Ireland, a National Grid optimization engineer with the task of seizing GETs opportunities.
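The principle behind dynamic line rating can be sketched as a toy thermal balance: the allowable current is whatever makes resistive heating equal the weather-dependent cooling. The constants below are invented for illustration and far simpler than the IEEE Standard 738 calculations real rating systems use:

```python
import math

def ampacity_amps(t_conductor_max, t_ambient, wind_mps, r_ohm_per_m=7e-5):
    """Toy steady-state thermal rating: allowable current is whatever makes
    I^2 * R heating balance convective cooling. The cooling constants below
    are invented; real ratings follow IEEE Standard 738."""
    h = 15 + 10 * math.sqrt(wind_mps)  # convection coefficient, assumed
    cooling_w_per_m = h * (t_conductor_max - t_ambient) * 0.03  # surface factor, assumed
    return math.sqrt(cooling_w_per_m / r_ohm_per_m)

# A cool, breezy day earns a higher rating than a hot, calm one.
print(round(ampacity_amps(75, 10, 5.0)))    # cool and windy
print(round(ampacity_amps(75, 35, 0.5)))    # hot and calm
```

Even in this crude model, the same conductor can safely carry more than half again as much current on a cool, windy day, which is exactly the headroom the sag sensors let operators claim.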

Old conductor wire is also being swapped out for wire that can carry more power. National Grid’s business plan calls for 2,416 kilometers of such reconductoring over the coming five years, which is about 20 percent of its system. Scotland’s transmission operators are busy with their own big swaps.

[Image: Rural farmland with rotating wind turbines.] Scottish wind farms have doubled their power output over the past decade, but it often gets stranded due to grid congestion in England. Credit: Andreas Berthold/Alamy

But while National Grid and NESO are making some of the boldest deployments of GETs in the world, they’re not fully tapping the technologies’ capabilities. That’s partly due to the conservative nature of power utilities, and partly because grid operators already have plenty to keep their eyes on. It also stems from the unknowns that still surround GETs, like whether they might take the grid in unforeseen directions if allowed to respond automatically, or get stuck in a feedback loop responding to each other. Imagine SmartValve controllers at different substations fighting, with one substation jumping to remove impedance that the other just added, causing fluctuating power flows.

“These technologies operate very quickly, but the computers in the control room are still very reliant on people making decisions,” says Ireland. “So there are time scales that we have to take into consideration when planning and operating the network.”

This kind of conservative dispatching leaves value on the table. For example, the dynamic line rating models can spit out new line ratings every 15 minutes, but grid operators get updates only every 24 hours. Fewer updates mean fewer opportunities to tap the system’s ability to boost capacity. Similarly, for SmartValves, NESO activates installations at only one substation at a time. And control-room operators turn them on manually, even though the devices could automatically respond to faults within milliseconds.

[Image: Workers in harnesses and hard hats climbing and standing high up on a transmission tower in the countryside.] National Grid is upgrading transmission lines dating as far back as the 1960s. This includes installing conductors that retain their strength at higher temperatures, allowing them to carry more power. Credit: National Grid Electricity Transmission

Modeling by Smart Wires and National Grid shows a significant capacity boost across Boundary B7a if Penwortham’s SmartValves were to work in tandem with another set further up the line. For example, when Penwortham is adding impedance to push megawatts off the 275-kV lines, a set closer to Scotland could simultaneously pull the power north, nudging the sum over to the B7a’s eastern circuits. Simulations by Andy Hiorns, a former National Grid planning director who consults for Smart Wires, suggest that this kind of cooperative action should increase the B7a circuits’ usable capacity by another 250 to 300 MW. “You double the effectiveness by using them as pairs,” he says.

Operating multiple flow controllers may become necessary for unlocking the next boundary en route to London, south of the B7a, called Boundary B8. As dynamic line rating, beefier conductors, and SmartValves send more power across the B7a, lines traversing B8 are reaching their limits. Eventually, every boundary along the route will have to be upgraded.

Meanwhile, back at its U.S. headquarters, Smart Wires is developing other applications for its SSSCs, such as filtering out power oscillations that can destabilize grids and reduce allowable transfers. That capability could be unlocked remotely with firmware.

The company is also working on a test program that could turn on pairs of SmartValve installations during slack moments when there isn’t much going on in the control rooms. That would give National Grid and NESO operators an opportunity to observe the impacts, and to get more comfortable with the technology.

National Grid and Smart Wires are also hard at work developing industry-first optimization software for coordinating flow-control devices. “It’s possible to extend the technology from how we’re using it today,” says Ireland at National Grid. “That’s the exciting bit.”

NESO’s Julian Leslie shares that excitement and says he expects SmartValves to begin working together to ease power through the grid—once the operators have the modeling right and get a little more comfortable with the technology. “It’s a great innovation that has the potential to be really transformational,” he says. “We’re just not quite there yet.”

Virtual Power Plants Are Finally Having Their Moment

2025-12-10 01:00:02



German utility RWE implemented the first known virtual power plant (VPP) in 2008, aggregating nine small hydroelectric plants for a total capacity of 8.6 megawatts. In general, a VPP pulls together many small components—like rooftop solar, home batteries, and smart thermostats—into a single coordinated power system. The system responds to grid needs on demand, whether by making stored energy available or reducing energy consumption by smart devices during peak hours.
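In software terms, the heart of a VPP is an aggregation-and-dispatch loop. The toy sketch below, with invented resource names, sizes, and prices, illustrates the idea: pool many small assets and answer a grid request from the cheapest flexibility first.

```python
# Toy dispatch loop for a virtual power plant: pool many small resources
# and meet a grid request from the cheapest flexibility first. The
# resource names, sizes, and prices are invented for illustration.
resources = [
    {"name": "home batteries", "mw": 5.0, "cost_per_mwh": 40},
    {"name": "smart thermostats", "mw": 3.0, "cost_per_mwh": 25},
    {"name": "rooftop solar headroom", "mw": 1.5, "cost_per_mwh": 60},
]

def dispatch(request_mw):
    """Return a merit-order plan and any unmet shortfall in MW."""
    plan, remaining = [], request_mw
    for r in sorted(resources, key=lambda r: r["cost_per_mwh"]):
        take = min(r["mw"], remaining)
        if take > 0:
            plan.append((r["name"], take))
            remaining -= take
    return plan, remaining

plan, shortfall = dispatch(6.0)
print(plan)       # thermostats are dispatched first because they're cheapest
print(shortfall)  # 0.0: the pool covers the full request
```

A real VPP layers forecasting, telemetry, and market bidding on top of this loop, but the merit-order core is the same.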

VPPs had a moment in the mid-2010s, but market conditions and the technology weren’t quite aligned for them to take off. Electricity demand wasn’t high enough, and existing sources—coal, natural gas, nuclear, and renewables—met demand and kept prices stable. Additionally, despite the costs of hardware like solar panels and batteries falling, the software to link and manage these resources lagged behind, and there wasn’t much financial incentive for it to catch up.

But times have changed, and less than a decade later, the stars are aligning in VPPs’ favor. They’re hitting a deployment inflection point, and they could play a significant role in meeting energy demand over the next 5 to 10 years in a way that’s faster, cheaper, and greener than other solutions.

U.S. Electricity Demand Is Growing

Electricity demand in the United States is expected to grow 25 percent by 2030 due to data center buildouts, electric vehicles, manufacturing, and electrification, according to estimates from technology consultant ICF International.

At the same time, a host of bottlenecks are making it hard to expand the grid. There’s a backlog of at least three to five years on new gas turbines. Hundreds of gigawatts of renewables are languishing in interconnection queues, where there’s also a backlog of up to five years. On the delivery side, there’s a transformer shortage that could take up to five years to resolve, and a dearth of transmission lines. This all adds up to a long, slow process to add generation and delivery capacity, and it’s not getting faster anytime soon.

“Fueling electric vehicles, electric heat, and data centers solely from traditional approaches would increase rates that are already too high,” says Brad Heavner, the executive director of the California Solar & Storage Association.

Enter the vast network of resources that are already active and grid-connected—and the perfect storm of factors that make now the time to scale them. Adel Nasiri, a professor of electrical engineering at the University of South Carolina, says variability of loads from data centers and electric vehicles has increased, as has deployment of grid-scale batteries and storage. There are more distributed energy resources available than there were before, and the last decade has seen advances in grid management using autonomous controls.

At the heart of it all, though, is the technology that stores and dispatches electricity on demand: batteries.

Advances in Battery Technology

Over the last 10 years, battery prices have plummeted: the average lithium-ion battery pack price fell from US $715 per kilowatt-hour in 2014 to $115 per kWh in 2024. Their energy density has simultaneously increased thanks to a combination of materials advancements, design optimization of battery cells, and improvements in the packaging of battery systems, says Oliver Gross, a senior fellow in energy storage and electrification at automaker Stellantis.
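Those two price points imply a steep compound decline, on the order of 17 percent per year:

```python
# Implied annual rate of the battery-pack price decline quoted above:
# $715/kWh in 2014 to $115/kWh in 2024.
p_2014, p_2024, years = 715, 115, 10
annual_change = (p_2024 / p_2014) ** (1 / years) - 1
print(f"{annual_change:.1%} per year")  # about -16.7% per year
```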

The biggest improvements have come in batteries’ cathodes and electrolytes, with nickel-based cathodes starting to be used about a decade ago. “In many ways, the cathode limits the capacity of the battery, so by unlocking higher capacity cathode materials, we have been able to take advantage of the intrinsic higher capacity of anode materials,” says Greg Less, the director of the University of Michigan’s Battery Lab.

Increasing the percentage of nickel in the cathode (relative to other metals) increases energy density because nickel can hold more lithium per gram than materials like cobalt or manganese, exchanging more electrons and participating more fully in the redox reactions that move lithium in and out of the battery. The same goes for silicon, which has become more common in anodes. However, there’s a trade-off: These materials cause more structural instability during the battery’s cycling.

The anode and cathode are surrounded by a liquid electrolyte. The electrolyte has to be electrically and chemically stable when exposed to the anode and cathode in order to avoid safety hazards like thermal runaway or fires and rapid degradation. “The real revolution has been the breakthroughs in chemistry to make the electrolyte stable against more reactive cathode materials to get the energy density up,” says Gross. Chemical compound additives—many of them based on sulfur and boron chemistry—for the electrolyte help create stable layers between it and the anode and cathode materials. “They form these protective layers very early in the manufacturing process so that the cell stays stable throughout its life.”

These advances have primarily been made on electric vehicle batteries, which differ from grid-scale batteries in that EVs are often parked or idle, while grid batteries are constantly connected and need to be ready to transfer energy. However, Gross says, “the same approaches that got our energy density higher in EVs can also be applied to optimizing grid storage. The materials might be a little different, but the methodologies are the same.” The most popular cathode material for grid storage batteries at the moment is lithium iron phosphate, or LFP.

Thanks to these technical gains and dropping costs, a domino effect has been set in motion: The more batteries deployed, the cheaper they become, which fuels more deployment and creates positive feedback loops.

Regions that have experienced frequent blackouts—like parts of Texas, California, and Puerto Rico—are a prime market for home batteries. Texas-based Base Power, which raised $1 billion in Series C funding in October, installs batteries at customers’ homes and becomes their retail power provider, charging the batteries when excess wind or solar production makes prices cheap, and then selling that energy back to the grid when demand spikes.

There is, however, still room for improvement. For wider adoption, says Nasiri, “the installed battery cost needs to get under $100 per kWh for large VPP deployments.”

Improvements in VPP Software

The software infrastructure that once limited VPPs to pilot projects has matured into a robust digital backbone, making it feasible to operate VPPs at grid scale. Advances in AI are key: Many VPPs now use machine learning algorithms to predict load flexibility, solar and battery output, customer behavior, and grid stress events. This improves the dependability of a VPP’s capacity, which was historically a major concern for grid operators.

[Image: Close-up of a rooftop solar panel.] While solar panels have advanced, VPPs were until recently held back by a lack of similar advancement in the needed software. Credit: Sunrun

Cybersecurity and interoperability standards are still evolving. Interconnection processes and data visibility in many areas aren’t consistent, making it hard to monitor and coordinate distributed resources effectively. In short, while the technology and economics for VPPs are firmly in place, there’s work yet to be done aligning regulation, infrastructure, and market design.

On top of technical and cost constraints, VPPs have long been held back by regulations that prevented them from participating in energy markets like traditional generators. SolarEdge recently announced enrollment of more than 500 megawatt-hours of residential battery storage in its VPP programs. Tamara Sinensky, the company’s senior manager of grid services, says the biggest hurdle to achieving this milestone wasn’t technical—it was regulatory program design.

California’s Demand Side Grid Support (DSGS) program, launched in mid-2022, pays homes, businesses, and VPPs to reduce electricity use or discharge energy during grid emergencies. “We’ve seen a massive increase in our VPP enrollments primarily driven by the DSGS program,” says Sinensky. Similarly, Sunrun’s Northern California VPP delivered 535 megawatts of power from home-based batteries to the grid in July, and saw a 400 percent increase in VPP participation from last year.

FERC Order 2222, issued in 2020, requires regional grid operators to allow VPPs to sell power, reduce load, or provide grid services directly to wholesale market operators, and get paid the same market price as a traditional power plant for those services. However, many states and grid regions don’t yet have a process in place to comply with the FERC order. And because utilities profit from grid expansion and not VPP deployment, they’re not incentivized to integrate VPPs into their operations. Utilities “view customer batteries as competition,” says Heavner.

According to Nasiri, VPPs would have a meaningful impact on the grid if they achieve a penetration of 2 percent of the market’s peak power. “Larger penetration of up to 5 percent for up to 4 hours is required to have a meaningful capacity impact for grid planning and operation,” he says.

In other words, VPP operators have their work cut out for them in continuing to unlock the flexible capacity in homes, businesses, and EVs. Additional technical and policy advances could move VPPs from a niche reliability tool to a key power source and grid stabilizer for the energy tumult ahead.

Why the Most “Accurate” Glucose Monitors Are Failing Some Users

2025-12-10 00:09:28



When Dan Heller received his first batch of Dexcom’s latest continuous glucose monitors in early 2023, he decided to run a small experiment: He wore the new biosensor and the previous generation at the same time to see how they compared in measuring his glucose levels.

The new, seventh-generation model (aptly called the G7) made by San Diego-based healthcare company Dexcom had just begun shipping in the United States. Dexcom claimed the G7 to be the “most accurate sensor” available to the thousands of people with Type 1 diabetes who use continuous glucose monitors to help manage their blood sugars. But Heller found that its real-world performance wasn’t up to par. In a September 2023 post on his Substack, which is dedicated to covering Type 1 diabetes research and management, he wrote about the experience and predicted an increase in adverse events with the G7, drawing on his past experience leading tech and biotech companies.

In the two years since Heller’s experiment, many other users have reported issues with the device. Some complaints concern failed connections and deployment issues, which Dexcom claims to have now addressed. More concerning are reports of erratic, inaccurate readings. A public Facebook group dedicated to sharing negative experiences with the G7 has grown to thousands of users, and several class action lawsuits have been filed against the company, alleging false advertising and misleading claims about device accuracy.

Yet, based on a standard metric in the industry, the G7 is one of the most accurate glucose sensors available. “Accuracy in the performance of our device is our number one priority. We understand this is a lifesaving device for people with Type 1 diabetes,” Peter Simpson, Dexcom’s senior vice president of innovation and sensor technology, told IEEE Spectrum. Simpson acknowledged some variability in individual sensors, but stood by the accuracy of the devices.

So why have users faced issues? In part, because accuracy metrics used in marketing can be misleading indicators of real-world performance. Differences in study design, combined with complex biological realities, mean that the accuracy of these biosensors can’t be boiled down to one number—and users are learning this the hard way.

Dexcom’s Glucose Monitors

Continuous glucose monitors (CGMs) typically consist of a small filament inserted under the skin, a transmitter, and a receiver. The filament is coated with an enzyme that generates an electrical signal when it reacts with glucose in the fluid surrounding the body’s cells. That signal is then converted to a digital signal and processed to generate glucose readings every few minutes. Each sensor lasts a week or two before needing to be replaced.

The technology has come a long way in recent years. In the 2010s, these devices required blood glucose calibrations twice a day and still weren’t reliable enough to dose insulin based on the readings. Now, some insulin pumps use the near-real-time data to automatically make adjustments. With those improvements has come greater trust in the data users receive—and higher standards. A faulty reading could result in a dangerous dose of insulin.

The G7 introduced several changes to Dexcom’s earlier designs, including a much smaller footprint, and updated the algorithm used to translate sensor signals into glucose readings for better accuracy, Simpson says. “From a performance perspective, we did demonstrate in a clinical trial that the G7 is significantly more accurate than the G6,” he says.

So Heller and others were surprised when the new Dexcom sensor seemed to be performing worse. For some batches of sensors, it’s possible that the issue was in part due to an unvalidated change in a component used in a resistive layer of the sensors. The new component showed worse performance, according to a warning letter issued by the U.S. Food and Drug Administration in March 2025, following an audit of two U.S. manufacturing sites. The material has since been removed from all G7 sensors, Simpson says, and the company is continuing to work with the FDA to address concerns. (“The warning letter does not restrict Dexcom’s ability to produce, market, manufacture or distribute products, require recall of any products, nor restrict our ability to seek clearance of new products,” Dexcom added in a statement.)

“There is a distribution of accuracies that have to do with people’s physiology and also the devices themselves. Even in our clinical studies, we saw some that were really precise and some that had a little bit of inaccuracy to them,” says Simpson. “But in general, our sensor is very accurate.”

In late November Abbott—one of Dexcom’s main competitors—recalled some of its CGMs due to inaccurate low glucose readings. The recall affects approximately 3 million sensors and was caused by an issue with one of Abbott’s production lines.

The discrepancy between reported accuracy and user experience, however, goes beyond any one company’s manufacturing missteps.

Does MARD Matter?

The accuracy of CGM systems is frequently measured via “mean absolute relative difference,” or MARD, a percentage that compares the sensor readings to laboratory blood glucose measurements. The lower the MARD, the more accurate the sensor.
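The calculation itself is straightforward. A minimal sketch, using hypothetical paired readings in mg/dL (the values below are illustrative, not from any trial):

```python
def mard(cgm_readings, ref_readings):
    """Mean absolute relative difference, as a percentage.

    Each CGM reading is compared to a time-matched laboratory
    blood glucose reference; lower is better.
    """
    assert len(cgm_readings) == len(ref_readings)
    total = sum(abs(c - r) / r for c, r in zip(cgm_readings, ref_readings))
    return 100 * total / len(cgm_readings)

# Three hypothetical paired samples (mg/dL): sensor vs. lab reference
print(round(mard([105, 92, 148], [100, 100, 140]), 1))  # → 6.2
```

Note that this aggregate hides exactly what the article goes on to discuss: when and where the samples are taken can shift the number substantially.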

This number is often used in advertising and marketing, and it has a historical relevance, says Manuel Eichenlaub, a biomedical engineer at the Institute for Diabetes Technology Ulm in Germany, where he and his colleagues conduct independent CGM performance studies. For years, there was a general belief that a MARD under 10 percent meant a system would be accurate enough to be used for insulin dosing. In 2018, the FDA established a specific set of accuracy requirements beyond MARD for insulin-guiding glucose monitors, including Dexcom’s. But manufacturers design the clinical trials that determine accuracy metrics, and the way studies are designed can make a big difference.

Graph comparing readings from two glucose monitors from 12 AM to 2:24 PM. Blue dots represent the Dexcom G6 and red dots represent the G7. When Dan Heller wore the Dexcom G6 and G7 at the same time, he says he noticed the G7 readings were more erratic, making it more difficult to properly control his blood sugar. Dan Heller

For instance, blood glucose levels serve as the “ground truth to compare the CGM values against,” says Eichenlaub. But glucose levels vary across blood compartments in the body; blood collected from capillaries with a finger prick fluctuates more and can have glucose levels around 5 to 10 percent higher than venous blood. (Dexcom tests against a gold-standard venous blood analyzer. When users see inaccuracies against home meters that use capillary blood, it could in part be a reflection of the meter’s own inaccuracy, Simpson says, though he acknowledges real inaccuracies in CGMs as well.)

Additionally, the distribution of sampling isn’t standardized. CGMs are known to be less accurate at the beginning and end of use, or when glucose levels are out of range or changing quickly. That means measured accuracy could be skewed by taking fewer samples right after a meal or late in the CGM’s lifetime.

According to Simpson, Dexcom’s trial protocol meets the FDA’s expectations and tests the devices in different blood sugar ranges across the life of the sensor. “Within these clinical trials, we do stress the sensors to try and simulate those real world conditions,” he says.

Dexcom and other companies advertise a MARD around 8 percent. But some independent studies are more demanding and find higher numbers; a head-to-head study of three popular CGMs that Eichenlaub led found MARD values closer to 10 percent or higher.

Eichenlaub and other CGM experts believe that more standardization of testing and an extension of the FDA requirements are necessary, so they recently proposed comprehensive guidelines on CGM performance testing. In the United States and Europe, a few manufacturers currently dominate the market. But newer players are entering the growing market and, especially in Europe, may not meet the same standards as legacy manufacturers, he says. “Having a standardized way of evaluating the performance of those systems is very important.”

For users like Heller, though, better accuracy only matters if it yields better diabetes management. “I don’t care about MARD. I want data that is reliably actionable,” Heller says. He encourages engineers working on these devices to think like the patient. “At some point, there’s quantitative data, but you need qualitative data.”

Amazon’s “Catalog AI” Product Platform Helps You Shop Smarter

2025-12-09 03:00:03



If you’ve shopped on Amazon in the past few months, you might have noticed it has gotten easier to find what you’re looking for. Listings now have more images, detailed product names, and better descriptions. The website’s predictive search feature uses the listing updates to anticipate needs and suggests a list of items in real time as you type in the search bar.

The improved shopping experience is thanks to Abhishek Agrawal and his Catalog AI system. Launched in July, the tool collects information from across the Internet about products being sold on Amazon and, based on the data, updates listings to make them more detailed and organized.

Abhishek Agrawal


Employer

Amazon Web Services in Seattle

Job title

Engineering leader

Member grade

Senior member

Alma maters

University of Allahabad in India and the Indian Statistical Institute in Kolkata

Agrawal is an engineering leader at Amazon Web Services in Seattle. An expert in AI and machine learning, the IEEE senior member worked on Microsoft’s Bing search engine before moving to Amazon. He also developed several features for Microsoft Teams, the company’s direct messaging platform.

“I’ve been working in AI for more than 20 years now,” he says. “Seeing how much we can do with technology still amazes me.”

He shares his expertise and passion for the technology as an active member and volunteer at the IEEE Seattle Section. He organizes and hosts career development workshops that teach people to create an AI agent, which can perform tasks autonomously with minimal human oversight.

An AI career inspired by a computer

Agrawal was born and raised in Chirgaon, a remote village in Uttar Pradesh, India. When he was growing up, no one in Chirgaon had a computer. His family owned a pharmacy, which Agrawal was expected to join after he graduated from high school. Instead, his uncle and older brother encouraged him to attend college and find his own passion.

He enjoyed mathematics and physics, and he decided to pursue a bachelor’s degree in statistics at the University of Allahabad. After graduating in 1996, he pursued a master’s degree in statistics, statistical quality control, and operations research at the Indian Statistical Institute in Kolkata.

While at the ISI, he saw a computer for the first time in the laboratory of Nikhil R. Pal, an electronics and communication sciences professor. Pal worked on identifying abnormal clumps of cells in mammogram images using the fuzzy c-means model, a data-clustering technique employing a machine learning algorithm.

Agrawal earned his master’s degree in 1998. He was so inspired by Pal’s work, he says, that he stayed on at the university to earn a second master’s degree, in computer science.

After graduating in 2001, he joined Novell as a senior software engineer working out of its Bengaluru office in India. He helped develop iFolder, a storage platform that allows users across different computers to back up, access, and manage their files.

After four years, Agrawal left Novell to join Microsoft as a software design engineer, working at the company’s Hyderabad campus in India. He was part of a team developing a system to upgrade users from Windows XP to Windows Vista.

Two years later, he was transferred to the group developing Bing, a replacement for Microsoft’s Live Search, which had been launched in 2006.

Improving Microsoft’s search engine

Live Search handled less than 2 percent of search traffic and struggled to keep up with Google’s faster, more user-friendly system, Agrawal says. He was tasked with improving search results, but he and his team didn’t have enough user search data to train their machine learning model.

Data for location-specific queries, such as nearby coffee shops or restaurants, was especially important, he says.

To overcome those challenges, the team used deterministic algorithms to create a more structured search. Such algorithms give the same answers for any query that uses the same specific terms. The process gets results by taking keywords—such as locations, dates, and prices—and finding them on webpages. To help the search engine understand what users need, Agrawal developed a query clarifier that asked them to refine their search. The machine learning tool then ranked the results from most to least relevant.

To test new features before they were launched, Agrawal and his team built an online A/B experimentation platform. It ran controlled tests on different versions of products, collected performance and user-engagement metrics, and produced a scorecard showing how updated features changed those metrics.
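At its core, a scorecard like this compares a metric between two variants and asks whether the difference is statistically meaningful. The sketch below is not Microsoft’s implementation; it is a minimal two-proportion z-test with made-up click-through numbers, and the function name and inputs are illustrative assumptions:

```python
from math import sqrt
from statistics import NormalDist

def ab_scorecard(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: did variant B change click-through vs. A?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)  # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se if se else 0.0
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return {"lift": p_b - p_a, "p_value": p_value}

# Hypothetical numbers: variant B lifts click-through from 5.0% to 6.0%
result = ab_scorecard(500, 10_000, 600, 10_000)
print(result["p_value"] < 0.05)  # significant at the 5 percent level
```

A real platform adds much more (guardrail metrics, sequential testing, user bucketing), but this is the statistical kernel a scorecard is built on.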

Bing launched in 2009 and is now the world’s second-largest search engine, according to Black Raven.

Agrawal continued to upgrade the system throughout the 10 years he worked on it. He also worked with the advertising department to improve Microsoft’s services on Bing; ads relevant to a person’s search are listed among the search results.

“The work seems easy,” Agrawal says, “but behind every search engine are hundreds of engineers powering ads, query formulations, rankings, relevance, and location detection.”

Testing products before launch

Agrawal was promoted to software development manager in 2010. Five years later he was transferred to Microsoft’s Seattle offices. At the time, the company was deploying new features for existing platforms without first testing them to ensure effectiveness. Instead, they measured their performance after release, Agrawal says, and that was wreaking havoc.

He proposed using his online A/B experimentation platform on all Microsoft products, not just Bing. His supervisor approved the idea. In six months Agrawal and his team modified the tool for company-wide use. Thanks to the platform, he says, Microsoft was able to smoothly deploy up-to-date products to users.

After another two years, he was promoted to principal engineering manager of Microsoft Teams, which was facing issues with user experience, he says.

“Many employees received between 50 and 100 messages a day—which became overwhelming for them,” Agrawal says. To lessen the stress, he led a team that developed the system’s first machine learning feature: Trending. It prioritized the five most important messages users should focus on. Agrawal also led the launch of incorporating emoji reactions, screen sharing, and video calls for Teams.

In 2020 he was ready for new experiences, he says, and he left Microsoft to join Amazon as an engineering leader.

Improved Amazon shopping

Agrawal led an Amazon team that manually collected information about products from the company’s retail catalog to create a glossary. The data, which included product dimensions, color, and manufacturer, was used to standardize the language found in product descriptions to keep listings more consistent.

That is especially important when it comes to third-party sellers, he notes. Sellers listing a product had been entering as much or as little information as they wanted. Agrawal built a system that automatically suggests language from the glossary as the seller types.

He also developed an AI algorithm that utilizes the glossary’s terminology to refine search results based on what a user types into the search bar. When a shopper types “red mixer,” for example, the algorithm lists products under the search bar that match the description. The shopper can then click on a product from the list.

In 2023 the retailer’s catalog became too large for Agrawal and his team to collect information manually, so they built an AI tool to do it for them. It became the foundation for Amazon’s Catalog AI system.

After gathering information about products from around the Web, Catalog AI uses large language models to update Amazon listings with missing information, correct errors, and rewrite titles and product specifications to make them clearer for the customer, Agrawal says.

The company expects the AI tool to increase sales this year by US $7.5 billion, according to a Fox News report in July.

Finding purpose at IEEE

Since Agrawal joined IEEE last December, he has been elevated to senior member and has become an active volunteer.

“Being part of IEEE has opened doors for collaboration, mentorship, and professional growth,” he says. “IEEE has strengthened both my technical knowledge and my leadership skills, helping me progress in my career.”

Agrawal is the social media chair of the IEEE Seattle Section. He is also vice chair of the IEEE Computational Intelligence Society.

He was a workshop cochair for the IEEE New Era AI World Leaders Summit, which was held from 5 to 7 December in Seattle. The event brought together government and industry leaders, as well as researchers and innovators working on AI, intelligent devices, unmanned aerial vehicles, and similar technologies. They explored how new tools could be used in cybersecurity, the medical field, and national disaster rescue missions.

Agrawal says he stays up to date on cutting-edge technologies by reviewing submissions for 15 IEEE journals.

“The organization plays a very important role in bringing authenticity to anything that it does,” he says. “If a journal article has the IEEE logo, you can believe that it was thoroughly and diligently reviewed.”

Privacy Concerns Lead Seniors to Unplug Vital Health Devices

2025-12-08 22:00:02



I was interviewing a 72-year-old retired accountant who had unplugged his smart glucose monitor. He explained that he “didn’t know who was looking” at his blood sugar data.

This wasn’t a man unfamiliar with technology—he had successfully used computers for decades in his career. He was of sound mind. But when it came to his health device, he couldn’t find clear answers about where his data went, who could access it, or how to control it. The instructions were dense, and the privacy settings were buried in multiple menus. So, he made what seemed like the safest choice: he unplugged it. That decision meant giving up real-time glucose monitoring that his doctor had recommended.

The healthcare IoT (Internet of Things) market is projected to exceed $289 billion by 2028, with older adults representing a major share of users. These devices include fall detectors, medication reminders, glucose monitors, and heart rate trackers, all of which enable independent living. Yet there’s a widening gap between deployment and adoption. According to an AARP survey, 34% of adults over 50 list privacy as a primary barrier to adopting health technology. That represents millions of people who could benefit from monitoring tools but avoid them because they don’t feel safe.

In my study at the University of Denver’s Ritchie School of Engineering and Computer Science, I surveyed 22 older adults and conducted in-depth interviews with nine participants who use health-monitoring devices. The findings revealed a critical engineering failure: 82% understood security concepts like two-factor authentication and encryption, yet only 14% felt confident managing their privacy when using these devices. In my research, I also evaluated 28 healthcare apps designed for older adults and found that 79% lacked basic breach-notification protocols.

One participant told me, “I know there’s encryption, but I don’t know if it’s really enough to protect my data.” Another said, “The thought of my health data getting into the wrong hands is very concerning. I’m particularly worried about identity theft or my information being used for scams.”

This is not a user knowledge problem; it’s an engineering problem. We’ve built systems that demand technical expertise to operate safely, then handed them to people managing complex health needs while navigating age-related changes in vision, cognition, and dexterity.

Measuring the Gap

To quantify the issues with privacy setting transparency, I developed the Privacy Risk Assessment Framework (PRAF), a tool that scores healthcare apps across five critical domains.

First, the regulatory compliance domain evaluates whether apps explicitly state adherence to the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), or other data protection standards. Just claiming to be compliant is not enough—they must provide verifiable evidence.

Second, the security mechanisms domain assesses the implementation of encryption, access controls, and, most critically, breach-notification protocols that alert users when their data may have been compromised. Third, in the usability and accessibility domain, the tool examines whether privacy interfaces are readable and navigable for people with age-related visual or cognitive changes. Fourth, data-minimization practices evaluate whether apps collect only necessary information and clearly specify retention periods. Finally, third-party sharing transparency measures whether users can easily understand who has access to their data and why.
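The five domains above can be combined into a single app score. The sketch below is a toy illustration only: the actual PRAF rubric, rating scales, and weights are not given here, so the 0–4 ratings, the equal weighting, and the `praf_score` function are assumptions for demonstration:

```python
# The five PRAF domains described in the text
DOMAINS = [
    "regulatory_compliance",
    "security_mechanisms",
    "usability_accessibility",
    "data_minimization",
    "third_party_transparency",
]

def praf_score(ratings):
    """Average five 0-4 domain ratings into a 0-100 score (illustrative)."""
    assert set(ratings) == set(DOMAINS)
    assert all(0 <= v <= 4 for v in ratings.values())
    return 100 * sum(ratings.values()) / (4 * len(DOMAINS))

# A hypothetical app: claims compliance vaguely, no accessibility features
app = {
    "regulatory_compliance": 1,
    "security_mechanisms": 2,
    "usability_accessibility": 0,
    "data_minimization": 3,
    "third_party_transparency": 1,
}
print(praf_score(app))  # → 35.0
```

Even a simple rubric like this makes gaps comparable across apps, which is the point of the framework.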

When I applied PRAF to 28 healthcare apps commonly used by older adults, the results revealed systemic gaps. Only 25% explicitly stated HIPAA compliance, and just 18% mentioned GDPR compliance. Most alarmingly, 79% lacked breach-notification protocols, meaning users may never find out if their data was compromised. The average privacy policy readability scored at a 12th-grade level, even though research shows the average older adult reads at an 8th-grade level. Not a single app included accessibility accommodations in its privacy interfaces.

Consider what happens when an older adult opens a typical health app. They face a multi-page privacy policy full of legal terminology about “data controllers” and “processing purposes,” followed by settings scattered across multiple menus. One participant told me, “The instructions are hard to understand, the print is too small, and it’s overwhelming.” Another explained, “I don’t feel adequately informed about how my data is collected, stored, and shared. It seems like most of these companies are after profit, and they don’t make it easy for users to understand what’s happening with their data.”

When protection requires a manual people can’t read, two outcomes follow: they either skip security altogether, leaving themselves vulnerable, or abandon the technology entirely, forfeiting its health benefits.

Engineering for privacy

We need to treat trust as an engineering specification, not a marketing promise. Based on my research findings and the specific barriers older adults face, three approaches address the root causes of distrust.

The first approach is adaptive security defaults. Rather than requiring users to navigate complex configuration menus, devices should ship with pre-configured best practices that automatically adjust to data sensitivity and device type. A fall detection system doesn’t need the same settings as a continuous glucose monitor. This approach draws from the principle of “security by default” in systems engineering.

Biometric or voice authentication can replace passwords that are easily forgotten or written down. The key is removing the burden of expertise while maintaining strong protection. As one participant put it: “Simplified security settings, better educational resources, and more intuitive user interfaces will be beneficial.”

The second approach is real-time transparency. Users shouldn’t have to dig through settings to see where their data goes. Instead, notification systems should show each data access or sharing event in plain language. For example: “Your doctor accessed your heart-rate data at 2 p.m. to review for your upcoming appointment.” A single dashboard should summarize who has access and why.

This addresses a concern that came up repeatedly in my interviews: users want to know who is seeing their data and why. The engineering challenge here isn’t technical complexity; it’s designing interfaces that convey technical realities in language anyone can understand. Such systems already exist in other domains; banking apps, for instance, send immediate notifications for every transaction. The same principle applies to health data, where the stakes are arguably higher.
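A dashboard like the one described could be fed by aggregating raw access events into a per-recipient summary. This sketch is illustrative rather than drawn from any shipping product; the event fields and names are assumptions:

```python
from collections import defaultdict

def summarize_access(events):
    """Group raw data-access events into a who-sees-what-and-why summary,
    suitable for rendering on a single plain-language dashboard."""
    by_recipient = defaultdict(set)
    for event in events:
        by_recipient[event["who"]].add(event["why"])
    return {who: sorted(reasons) for who, reasons in by_recipient.items()}

# Hypothetical access log for one user's health data
log = [
    {"who": "Dr. Smith", "why": "appointment review"},
    {"who": "Dr. Smith", "why": "medication adjustment"},
    {"who": "Insurer", "why": "claims processing"},
]
print(summarize_access(log))
```

The hard part is not this aggregation but sourcing trustworthy events in the first place: every access path in the system has to emit one.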

The third approach is invisible security updates. Manual patching creates vulnerability windows. Automatic, seamless updates should be standard for any device handling health data, paired with a simple status indicator so users can confirm protection at a glance. As one participant said, “The biggest issue that we as seniors have is the fact that we don’t remember our passwords... The new technology is surpassing the ability of seniors to keep up with it.” Automating updates removes a significant source of anxiety and risk.

What’s at Stake

We can keep building healthcare IoT the way we have: fast, feature-rich, and fundamentally untrustworthy. Or, we can engineer systems that are transparent, secure, and usable by design. Trust isn’t something you market through slogans or legal disclaimers. It’s something you engineer, line by line, into the code itself. For older adults relying on technology to maintain independence, that kind of engineering matters more than any new feature we could add. Every unplugged glucose monitor, every abandoned fall detector, every health app deleted out of confusion or fear represents not just a lost sale but a missed opportunity to support someone’s health and autonomy.

The challenge of privacy in healthcare IoT goes beyond fixing existing systems; it requires reimagining how we communicate privacy itself. My ongoing research builds on these findings through an AI-driven Data Helper, a system that uses large language models to translate dense legal privacy policies into short, accurate, and accessible summaries for older adults. By making data practices transparent and comprehension measurable, this approach aims to turn compliance into understanding and trust, thus advancing the next generation of trustworthy digital health systems.

Entrepreneurship Program Brings Incubator Ideas to More Countries

2025-12-06 03:00:02



Technology evolves rapidly, and innovation is key to business survival, so mentoring young professionals, promoting entrepreneurship, and connecting tech startups to a global network of experts and resources are essential.

Some IEEE volunteers do all of the above and more as part of the IEEE Entrepreneurship Ambassador Program.

The program was launched in 2018 in IEEE Region 8 (Europe, Middle East, and Africa) thanks to a grant from the IEEE Foundation. The ambassadors organize networking events with industry representatives to help IEEE young professionals and student members achieve their entrepreneurial endeavors and strengthen their technical, interpersonal, and business skills. The ambassadors also organize pitch competitions in their geographic area.

The ambassador program launched this year in Region 10 (Asia Pacific).

Last year the program was introduced in Region 9 (Latin America) with funding from the Taenzer Memorial Fund. The results of the program’s inaugural year were impressive: 13 ambassadors organized events in Bolivia, Brazil, Colombia, Ecuador, Mexico, Panama, Peru, and Uruguay.

“The program is beneficial because it connects entrepreneurs with industry professionals, fosters mentorship, helps young professionals build leadership skills, and creates opportunities for startup sponsorships,” says Susana Lau, vice chair of IEEE Entrepreneurship in Latin America. “The program has also proven successful in attracting IEEE volunteers to serve as ambassadors and helping to support entrepreneurship and startup ventures.”

Lau, an IEEE senior member, is a past president of the IEEE Panama Section and an active IEEE Women in Engineering volunteer.

A professional development opportunity

People who participated in the Region 9 program say the experience was life-changing, both personally and professionally.

Pedro José Pineda, whose work was recognized with one of the region’s two Top Ambassador Awards, says he’s been able to “expand international collaborations and strengthen the innovation ecosystem in Latin America.

“It’s more than an award,” the IEEE member says. “It’s an opportunity to create global impact from local action.”

“This remarkable experience has opened new doors for my future career within IEEE, both nationally and globally.”—Vitor Paiva

The region’s other Top Ambassador recipient was Vitor Paiva of Natal, Brazil. He had the opportunity to attend this year’s IEEE Rising Stars in Las Vegas—his first international experience outside Brazil.

After participating in the program, the IEEE student member volunteered with its regional marketing committee.

“I was proud to showcase Brazil’s IEEE community while connecting with some of IEEE’s most influential leaders,” Paiva, a student at the Universidade Federal do Rio Grande do Norte, says. “This remarkable experience has opened new doors for my future career within IEEE, both nationally and globally.”

Expanding the initiative

The IEEE Foundation says it will invest in the regional programs by funding the grants presented to the winners of the regional pitch competitions, similar to the funding for Region 9. The goal is to hold a worldwide competition, Lau says.

The ongoing expansion is a testament to the program’s efforts, says Christopher G. Wright, senior manager of programs and governance at the IEEE Foundation.

“I’ve had the pleasure of working on the grants for the IEEE Entrepreneurship Ambassador Program team over the years,” Wright says, “and I am continually impressed by the team’s dedication and the program’s evolution.”

To learn more about the program in your region or to apply to become an ambassador, visit the IEEE Entrepreneurship website and search for your region.