IEEE Spectrum

Sena Kizildemir Simulates Disasters to Prevent Building Collapses

2026-01-10 03:00:02



When two airplanes hit the World Trade Center in New York City on 11 September 2001, no one could predict how the Twin Towers would react structurally. The commercial jet airliners severed columns and started fires that weakened steel beams, leading to a progressive, “pancaking” collapse.

Skyscrapers had not been designed or constructed with that kind of catastrophic structural failure in mind. IEEE Senior Member Sena Kizildemir is changing that through disaster simulation, one scenario at a time.

Sena Kizildemir

Employer: Thornton Tomasetti, in New York City

Job title: Project engineer

Member grade: Senior member

Alma maters: Işik University, in Şile, and Lehigh University, in Bethlehem, Pa.

A project engineer at Thornton Tomasetti’s applied science division in New York, Kizildemir uses simulations to study how buildings fail under extreme events such as impacts and explosions. The simulation results can help designers develop mitigation strategies.

“Simulations help us understand what could happen before it occurs in real life,” she says, “to be able to better plan for it.”

She loves that her work mixes creativity with solving real-world problems, she says: “You’re creating something to help people. My favorite question to answer is, ‘Can you make this better or easier?’”

For her work, the nonprofit Professional Women in Construction named her one of its 20 Under 40: Women in Construction for 2025.

Kizildemir is passionate about mentoring young engineers and being an IEEE volunteer. She says she has made it her mission to “pack as much impact into my years as possible.”

A bright student in Türkiye

She was born in Istanbul to a father who is a professional drummer and a mother who worked in magazine advertising and sales. Kizildemir and her older brother pursued engineering careers despite neither parent being involved in the field. While she became an expert in civil and mechanical engineering, her brother is an industrial engineer.

As a child, she was full of curiosity, she says, interested in figuring out how things were built and how they worked. She loved building objects out of Legos, she says, and one of her earliest memories is using them to make miniature houses for ants.

After acing an entrance exam, she won a spot in a STEM-focused high school, where she studied mathematics and physics.

“Engineering is one of the few careers where you can make a lasting impact on the world, and I plan on mine being meaningful.”

During her final year at the high school, she took the nationwide YKS (Higher Education Institutions Examination). The test determines which universities and programs—such as medicine, engineering, or law—students can pursue.

She received a full scholarship to attend Işik University in Şile. Figuring she would study engineering abroad one day, she chose an English-taught program. She says she found that civil engineering best aligned with making the biggest impact on her community and the world.

Several of her professors were alumni of Lehigh University, in Bethlehem, Pa., and spoke highly of the school. After earning her bachelor’s degree in civil engineering in 2016, she decided to attend Lehigh, where she earned a full scholarship to its master’s program in civil engineering.

Moving abroad and working the rails

Her master’s thesis focused on investigating root causes of crack propagation, which threatens railroad safety.

Repeated wheel-rail loading causes microcracks that lead to metal fatigue, while residual stress results from the specialized heating and cooling treatments used to manufacture steel rails. Cracks can develop beneath the rail’s surface, and because they’re invisible to the naked eye, such fractures are challenging to detect, Kizildemir says.

The project was done in collaboration with the U.S. Federal Railroad Administration—part of the Department of Transportation—which is looking to adjust technical standards and employ mitigation strategies.

Kizildemir and five colleagues designed and implemented testing protocols and physics-based simulations to detect cracks earlier and prevent their spread. Their research has given the Railroad Administration insights into structural defects that are being used to revise rail-building guidelines and inspection protocols. The administration published the first phase of the research in 2024.

After graduating in 2018, Kizildemir began a summer internship as a civil engineer at Thornton Tomasetti. She conducted computational modeling using Abaqus software for rails subjected to repeated plastic deformation—material that permanently changes shape when under excessive stress—and presented her recommendations for improvement to the company’s management.

During her internship, she worked with professors in different fields, including materials behavior and mechanical engineering. The experience, she says, inspired her to pursue a Ph.D. in mechanical engineering at Lehigh, continuing her research with the Railroad Administration. She earned her degree in 2023.

She loved the work and the team at Thornton Tomasetti so much, she says, that she applied to work at the company, where she is now a project engineer.

From simulations to real-world applications

Her work focuses on developing finite element models for critical infrastructure and extreme events.

Finite element modeling breaks complex systems into small, interconnected elements in order to numerically simulate real-world situations. She creates computational models of structures enduring realistic catastrophic events, such as a vehicle crashing into a building.
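
To make the idea concrete, here is a minimal sketch, in Python, of the kind of calculation that underlies finite element models: a one-dimensional bar under tension, divided into elements whose stiffness contributions are assembled and solved together. Every number here is an illustrative assumption, not a value from Kizildemir’s work.

import numpy as np

# Minimal 1-D finite element sketch: an axially loaded steel bar split into
# small elements, fixed at one end and pulled at the other. Each element
# contributes a 2x2 stiffness block to the global system K u = F.
E = 200e9        # Young's modulus of steel, Pa (assumed)
A = 0.01         # cross-sectional area, m^2 (assumed)
L = 2.0          # bar length, m (assumed)
n_elem = 10      # number of finite elements
n_node = n_elem + 1
le = L / n_elem  # element length

K = np.zeros((n_node, n_node))
k_e = (E * A / le) * np.array([[1, -1], [-1, 1]])  # element stiffness matrix

for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k_e  # assemble each element block into the global matrix

F = np.zeros(n_node)
F[-1] = 50e3  # 50 kN tensile load at the free end (assumed)

# Apply the fixed boundary condition at node 0 and solve for the remaining nodes.
u = np.zeros(n_node)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print("Tip displacement (m):", u[-1])  # matches the analytic F*L/(E*A) = 5e-5 m

Real structural models work the same way, only with millions of three-dimensional elements, nonlinear materials, and time-dependent loads instead of one line of springs.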

She uses simulations to understand how buildings react to attacks such as the one on 9/11, which, she says, is often used as an example of why such research is essential.

When starting a project, she and her team review building standards and try to identify new issues not yet covered by them. The team then adapts existing codes and standards, usually developed for well-understood hazards such as earthquakes, wind, and floods, to define simulation parameters.

When a new structure is being built, for example, it is not designed to withstand a truck crashing into it. But Kizildemir and her team want to know how the building would react should that happen. They simulate the environments and situations, and they make recommendations based on the results to reduce or eliminate risks of structural failure.

Mitigation suggestions include specific strategies to be implemented during project design and construction.

Simulations can be created for any infrastructure, Kizildemir says.

“I love problems that force me to think differently,” she says. “I want to keep growing.”

She says she plans to live by Thornton Tomasetti’s internal motto: “When others say no, we say ‘Here’s how.’”

Joining IEEE and getting more involved

When Kizildemir first heard of IEEE, she assumed it was only for electrical engineers. But after learning how diverse and inclusive the organization is, she joined in 2024. She has since been elevated to a senior member and has become a volunteer. She joined the IEEE Technology and Engineering Management Society.

She chaired the conference tracks and IEEE-sponsored sessions at the 2024 Joint Rail Conference, held in Columbia, S.C. She actively contributes to IEEE’s Collabratec platform and has participated in panel review meetings for senior member elevation applications.

She’s also a member of ASME and has been volunteering for it since 2023.

“Community is what helped get me to where I am today, and I want to pay it forward and make the field better,” she says. “Helping others improves ourselves.”

Looking ahead and giving back

Kizildemir mentors junior engineers at Thornton Tomasetti and is looking to expand her reach through IEEE’s mentorship programs.

“Engineering doesn’t have a gender requirement,” she says she tells girls. “If you’re curious and like understanding how things work and get excited to solve difficult problems, engineering is for you.

“Civil engineers don’t just build bridges,” she adds. “There are countless niche areas to be explored. Engineering is one of the few careers where you can make a lasting impact on the world, and I plan on mine being meaningful.”

Kizildemir says she wants every engineer to be able to improve their community. Her main piece of advice for recent engineering graduates is that “curiosity, discipline, and the willingness to understand things deeply, to see how things can be done better,” are the keys to success.

Video Friday: Robots Are Everywhere at CES 2026

2026-01-10 02:00:04



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

We’re excited to announce the product version of our Atlas® robot. This enterprise-grade humanoid robot offers impressive strength and range of motion, precise manipulation, and intelligent adaptability—designed to power the new industrial revolution.

[ Boston Dynamics ]

I appreciate the creativity and technical innovation here, but realistically, if you’ve got more than one floor in your house? Just get a second robot. That single-step sunken living room though....

[ Roborock ]

Wow, SwitchBot’s CES 2026 video shows almost as many robots in their fantasy home as I have in my real home.

[ SwitchBot ]

What is happening in robotics right now that I can derive more satisfaction from watching robotic process automation than I can from watching yet another humanoid video?

[ ABB ]

Yes, this is definitely a robot I want in close proximity to my life.

[ Unitree ]

The video below demonstrates a MenteeBot learning, through mentoring, how to replace a battery in another MenteeBot. No teleoperation is used.

[ Mentee Robotics ]

Personally, I think we should encourage humanoid robots to fall much more often, just so we can see whether they can get up again.

[ Agility Robotics ]

Achieving long-horizon, reliable clothing manipulation in the real world remains one of the most challenging problems in robotics. This live test demonstrates a strong step forward in embodied intelligence, vision-language-action systems, and real-world robotic autonomy.

[ HKU MMLab ]

Millions of people around the world need assistance with feeding. Robotic feeding systems offer the potential to enhance autonomy and quality of life for individuals with impairments and reduce caregiver workload. However, their widespread adoption has been limited by technical challenges such as estimating bite timing, the appropriate moment for the robot to transfer food to a user’s mouth. In this work, we introduce WAFFLE: Wearable Approach For Feeding with LEarned Bite Timing, a system that accurately predicts bite timing by leveraging wearable sensor data to be highly reactive to natural user cues such as head movements, chewing, and talking.

[ CMU RCHI ]

Humanoid robots are now available as platforms, which is a great way of sidestepping the whole practicality question.

[ PNDbotics ]

We’re introducing Spatially Enhanced Recurrent Units (SRUs)—a simple yet powerful modification that enables robots to build implicit spatial memories for navigation. Published in the International Journal of Robotics Research (IJRR), this work demonstrates up to +105 percent improvement over baseline approaches, with robots successfully navigating 70+ meters in the real world using only a single forward-facing camera.

[ ETHZ RSL ]

Looking forward to the DARPA Triage Challenge this fall!

[ DARPA ]

Here are a couple of good interviews from the Humanoids Summit 2025.

[ Humanoids Summit ]

How AI Accelerates PMUT Design for Biomedical Ultrasonic Applications

2026-01-09 06:06:42



This whitepaper provides MEMS engineers, biomedical device developers, and multiphysics simulation specialists with a practical AI-accelerated workflow for optimizing piezoelectric micromachined ultrasonic transducers (PMUTs). It shows how to explore complex design trade-offs between sensitivity and bandwidth while achieving validated performance improvements in minutes instead of days on standard cloud infrastructure.

What you will learn about:

  • MultiphysicsAI combines cloud-based FEM simulation with neural surrogates to transform PMUT design from trial-and-error iteration into systematic inverse optimization
  • Training on 10,000 randomized geometries produces AI surrogates with 1% mean error and sub-millisecond inference for key performance indicators: transmit sensitivity, center frequency, fractional bandwidth, and electrical impedance
  • Pareto front optimization simultaneously increases fractional bandwidth from 65% to 100% and improves sensitivity by 2 to 3 dB while maintaining the 12 MHz center frequency within ±0.2% (a toy sketch of this surrogate-plus-optimization loop appears below)
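
As a rough, self-contained sketch of the surrogate-plus-optimization loop described above (and not the MultiphysicsAI workflow itself), the Python below trains a small neural network on made-up geometry-to-performance data and then extracts the Pareto front over bandwidth and sensitivity from the surrogate’s cheap predictions. All parameter names, data, and numbers are invented stand-ins.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Made-up stand-in for FEM training data: three normalized geometry parameters
# (say, membrane radius, thickness, electrode coverage) mapped to two outputs
# (fractional bandwidth in percent, transmit sensitivity in dB). The mapping is
# invented so the example runs end to end; real data would come from simulations.
X = rng.uniform(0.0, 1.0, size=(2000, 3))
bandwidth = 60 + 40 * X[:, 0] - 20 * X[:, 1] + 5 * rng.standard_normal(2000)
sensitivity = -40 - 4 * X[:, 0] + 6 * X[:, 1] + 3 * X[:, 2] + 0.5 * rng.standard_normal(2000)
Y = np.column_stack([bandwidth, sensitivity])

# Neural surrogate: once trained, each prediction costs far less than a full simulation.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, Y)

# Inverse-design step: score many candidate geometries with the cheap surrogate
# and keep only the Pareto-optimal trade-offs (maximize both columns).
candidates = rng.uniform(0.0, 1.0, size=(5000, 3))
pred = surrogate.predict(candidates)

def pareto_front(points):
    # Indices of points not dominated by any other point.
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(pred)
print(f"{front.size} Pareto-optimal candidate geometries out of {len(candidates)}")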

AI Coding Assistants Are Getting Worse

2026-01-08 21:00:02



In recent months, I’ve noticed a troubling trend with AI coding assistants. After two years of steady improvements, over the course of 2025, most of the core models reached a quality plateau, and more recently, seem to be in decline. A task that might have taken five hours assisted by AI, and perhaps ten hours without it, is now more commonly taking seven or eight hours, or even longer. It’s reached the point where I am sometimes going back and using older versions of large language models (LLMs).

I use LLM-generated code extensively in my role as CEO of Carrington Labs, a provider of predictive-analytics risk models for lenders. My team has a sandbox where we create, deploy, and run AI-generated code without a human in the loop. We use these programs to extract useful features for model construction, a natural-selection approach to feature development. This gives me a unique vantage point from which to evaluate coding assistants’ performance.

Newer models fail in insidious ways

Until recently, the most common problem with AI coding assistants was poor syntax, followed closely by flawed logic. AI-created code would often fail with a syntax error or snarl itself up in faulty structure. This could be frustrating: the solution usually involved manually reviewing the code in detail and finding the mistake. But it was ultimately tractable.

However, recently released LLMs, such as GPT-5, fail in a much more insidious way. They often generate code that fails to perform as intended but that, on the surface, seems to run successfully, avoiding syntax errors or obvious crashes. They manage this by removing safety checks, by creating fake output that matches the desired format, or through a variety of other tricks that avoid crashing during execution.

As any developer will tell you, this kind of silent failure is far, far worse than a crash. Flawed outputs will often lurk undetected in code until they surface much later. This creates confusion and is far more difficult to catch and fix. This sort of behavior is so unhelpful that modern programming languages are deliberately designed to fail quickly and noisily.

A simple test case

I’ve noticed this problem anecdotally over the past several months, but recently, I ran a simple yet systematic test to determine whether it was truly getting worse. I wrote some Python code that loaded a dataframe and then referenced a nonexistent column.

import pandas as pd

df = pd.read_csv('data.csv')
df['new_column'] = df['index_value'] + 1  # there is no column 'index_value'

Obviously, this code would never run successfully. Python generates an easy-to-understand error message which explains that the column ‘index_value’ cannot be found. Any human seeing this message would inspect the dataframe and notice that the column was missing.
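
For reference, running those two lines ends with an error of this form (traceback abbreviated; the intermediate frames point into pandas internals):

Traceback (most recent call last):
  ...
KeyError: 'index_value'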

I sent this error message to nine different versions of ChatGPT, primarily variations on GPT-4 and the more recent GPT-5. I asked each of them to fix the error, specifying that I wanted completed code only, without commentary.

This is of course an impossible task—the problem is the missing data, not the code. So the best answer would be either an outright refusal, or failing that, code that would help me debug the problem. I ran ten trials for each model, and classified the output as helpful (when it suggested the column is probably missing from the dataframe), useless (something like just restating my question), or counterproductive (for example, creating fake data to avoid an error).

GPT-4 gave a useful answer in 9 of the 10 trials. In three cases, it ignored my instructions to return only code and explained that the column was likely missing from my dataset and that I would have to address the problem there. In six cases, it tried to execute the code but added an exception handler that would either raise a clear error or fill the new column with an error message if the column couldn’t be found. (In the tenth trial, it simply restated my original code.)

One of GPT-4’s responses, for example, explained: “This code will add 1 to the ‘index_value’ column from the dataframe ‘df’ if the column exists. If the column ‘index_value’ does not exist, it will print a message. Please make sure the ‘index_value’ column exists and its name is spelled correctly.”
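
The code accompanying those responses was, in broad strokes, of this shape (my reconstruction for illustration, not a verbatim model output):

import pandas as pd

df = pd.read_csv('data.csv')
try:
    df['new_column'] = df['index_value'] + 1
except KeyError:
    # Some variants raised a clearer error here; others filled the new column
    # with an error message instead of a value.
    print("Column 'index_value' was not found in data.csv. Please check the input data.")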

GPT-4.1 had an arguably even better solution. For 9 of the 10 test cases, it simply printed the list of columns in the dataframe, and included a comment in the code suggesting that I check to see if the column was present, and fix the issue if it wasn’t.

GPT-5, by contrast, found a solution that worked every time: it simply took the actual index of each row (not the fictitious ‘index_value’) and added 1 to it in order to create new_column. This is the worst possible outcome: the code executes successfully, and at first glance seems to be doing the right thing, but the resulting value is essentially a random number. In a real-world example, this would create a much larger headache downstream in the code.

df = pd.read_csv('data.csv')
df['new_column'] = df.index + 1

I wondered if this issue was particular to the GPT family of models. I didn’t test every model in existence, but as a check I repeated my experiment on Anthropic’s Claude models. I found the same trend: the older Claude models, confronted with this unsolvable problem, essentially shrug their shoulders, while the newer models sometimes solve the problem and sometimes just sweep it under the rug.

A chart of the fraction of responses that were helpful, useless, or counterproductive for different versions of large language models. Newer versions were more likely to produce counterproductive output when presented with a simple coding error. Jamie Twiss

Garbage in, garbage out

I don’t have inside knowledge on why the newer models fail in such a pernicious way. But I have an educated guess. I believe it’s the result of how the LLMs are being trained to code. The older models were trained on code much the same way as they were trained on other text. Large volumes of presumably functional code were ingested as training data, which was used to set model weights. This wasn’t always perfect, as anyone using AI for coding in early 2023 will remember, with frequent syntax errors and faulty logic. But it certainly didn’t rip out safety checks or find ways to create plausible but fake data, like GPT-5 in my example above.

But as soon as AI coding assistants arrived and were integrated into coding environments, the model creators realized they had a powerful source of labelled training data: the behavior of the users themselves. If an assistant offered up suggested code, the code ran successfully, and the user accepted the code, that was a positive signal, a sign that the assistant had gotten it right. If the user rejected the code, or if the code failed to run, that was a negative signal, and when the model was retrained, the assistant would be steered in a different direction.

This is a powerful idea, and no doubt contributed to the rapid improvement of AI coding assistants for a period of time. But as inexperienced coders started turning up in greater numbers, it also started to poison the training data. AI coding assistants that found ways to get their code accepted by users kept doing more of that, even if “that” meant turning off safety checks and generating plausible but useless data. As long as a suggestion was taken on board, it was viewed as good, and downstream pain would be unlikely to be traced back to the source.

The most recent generation of AI coding assistants has taken this thinking even further, automating more and more of the coding process with autopilot-like features. These features only accelerate the smoothing-out process, as there are fewer points where a human is likely to see the code and realize that something isn’t correct. Instead, the assistant is likely to keep iterating to try to get to a successful execution. In doing so, it is likely learning the wrong lessons.

I am a huge believer in artificial intelligence, and I believe that AI coding assistants have a valuable role to play in accelerating development and democratizing the process of software creation. But chasing short-term gains, and relying on cheap, abundant, but ultimately poor-quality training data is going to continue resulting in model outcomes that are worse than useless. To start making models better again, AI coding companies need to invest in high-quality data, perhaps even paying experts to label AI-generated code. Otherwise, the models will continue to produce garbage, be trained on that garbage, and thereby produce even more garbage, eating their own tails.

Meet the IEEE Board-Nominated Candidates for President-Elect

2026-01-08 03:00:03



The IEEE Board of Directors has nominated IEEE Senior Member David Alan Koehler and IEEE Life Fellow Manfred “Fred” J. Schindler as candidates for 2027 IEEE president-elect.

IEEE Senior Member Gerardo Barbosa and IEEE Life Senior Member Timothy T. Lee are seeking nomination by petition. A separate article will be published in The Institute at a later date.

The winner of this year’s election will serve as IEEE president in 2028. For more information about the election, president-elect candidates, and the petition process, visit ieee.org/elections.

IEEE Senior Member David Alan Koehler

David Alan Koehler. Steven Miller Photography

Koehler is a subject matter expert with almost 30 years of experience in establishing condition-based maintenance practices for electrical equipment and managing analytical laboratories. He has presented his work at global conferences and published articles in technical publications related to the power industry. Koehler is an executive advisor at Danovo Energy Solutions.

An active volunteer, he has served in every geographical unit within IEEE. His first leadership position was chair of the Central Indiana Section from 2012 to 2014. He served as 2019–2020 director of IEEE Region 4, vice chair of the 2022 IEEE Board of Directors Ad Hoc Committee on the Future of Engagement, 2022 vice president of IEEE Member and Geographic Activities, and chair of the 2024 IEEE Board of Directors Ad Hoc Committee on Leadership Continuity and Efficiency.

He served on the IEEE Board of Directors for three different years. He has been a member of the IEEE-USA, Member and Geographic Activities, and Publication Services and Products boards.

Koehler is a proud and active member of IEEE Women In Engineering and IEEE-Eta Kappa Nu, the honor society.

IEEE Life Fellow Manfred “Fred” J. Schindler

Manfred “Fred” J. Schindler. Steven Miller Photography

Schindler, an expert in microwave semiconductor technology, is an independent consultant supporting clients with technical expertise, due diligence, and project management.

Throughout his career, he led the development of microwave integrated-circuit technology, from lab demonstrations to high-volume commercial products. He has numerous technical publications and holds 11 patents.

Schindler served as CTO of Anlotek, and director of Qorvo and RFMD’s Boston design center. He was applications manager at IBM, engineering manager at ATN Microwave, and a lab manager at Raytheon.

An IEEE volunteer for more than 30 years, Schindler served as the 2024 vice president of IEEE Technical Activities and the 2022–2023 Division IV director. He was chair of the IEEE Conferences Committee from 2015 to 2018 and president of the IEEE Microwave Theory and Technology Society (MTT-S) in 2003. He received the 2018 IEEE MTT-S Distinguished Service Award. His award-winning micro-business column has appeared in IEEE Microwave Magazine since 2011.

He also led the 2025 One IEEE to Enable Strategic Investments in Innovations and Public Imperative Activities ad hoc committee.

Schindler is an IEEE-Eta Kappa Nu honorary life member.

These Hearing Aids Will Tune in to Your Brain

2026-01-07 22:00:02



Imagine you’re at a bustling dinner party filled with laughter, music, and clinking silverware. You’re trying to follow a conversation across the table, but every word feels like it’s wrapped in noise. For most people, these types of party scenarios, where it’s difficult to filter out extraneous sounds and focus on a single source, are an occasional annoyance. For millions with hearing loss, they’re a daily challenge—and not just in busy settings.

Today’s hearing aids aren’t great at determining which sounds to amplify and which to ignore, and this often leaves users overwhelmed and fatigued. Even the routine act of conversing with a loved one during a car ride can be mentally draining, simply because the hum of the engine and road noises are magnified to create loud and constant background static that blurs speech.

In recent years, modern hearing aids have made impressive strides. They can, for example, use a technology called adaptive beamforming to focus their microphones in the direction of a talker. Noise-reduction settings also help decrease background cacophony, and some devices even use machine-learning-based analysis, trained on uploaded data, to detect certain environments—for example a car or a party—and deploy custom settings.

That’s why I was initially surprised to find out that today’s state-of-the-art hearing aids aren’t good enough. “It’s like my ears work but my brain is tired,” I remember one elderly man complaining, frustrated with the inadequacy of his cutting-edge noise-suppression hearing aids. At the time, I was a graduate student at the University of Texas at Dallas, surveying individuals with hearing loss. The man’s insight led me to a realization: Mental strain is an unaddressed frontier of hearing technology.

But what if hearing aids were more than just amplifiers? What if they were listeners too? I envision a new generation of intelligent hearing aids that not only boost sound but also read the wearer’s brain waves and other key physiological markers, enabling them to react accordingly to improve hearing and counter fatigue.

Until last spring, when I took time off to care for my child, I was a senior audio research scientist at Harman International, in Los Angeles. My work combined cognitive neuroscience, auditory prosthetics, and the processing of biosignals, which are measurable physiological cues that reflect our mental and physical state. I’m passionate about developing brain-computer interfaces (BCIs) and adaptive signal-processing systems that make life easier for people with hearing loss. And I’m not alone. A number of researchers and companies are working to create smart hearing aids, and it’s likely they’ll come on the market within a decade.

Two technologies in particular are poised to revolutionize hearing aids, offering personalized, fatigue-free listening experiences: electroencephalography (EEG), which tracks brain activity, and pupillometry, which uses eye measurements to gauge cognitive effort. These approaches might even be used to improve consumer audio devices, transforming the way we listen everywhere.

Aging Populations in a Noisy World

More than 430 million people suffer from disabling hearing loss worldwide, including 34 million children, according to the World Health Organization. And the problem will likely get worse due to rising life expectancies and the fact that the world itself seems to be getting louder. By 2050, an estimated 2.5 billion people will suffer some degree of hearing loss and 700 million will require intervention. On top of that, as many as 1.4 billion of today’s young people—nearly half of those aged 12 to 34—could be at risk of permanent hearing loss from listening to audio devices at too high a volume and for too long.

Every year, close to a trillion dollars is lost globally due to unaddressed hearing loss, a trend that is also likely getting more pronounced. That doesn’t account for the significant emotional and physical toll on the hearing impaired, including isolation, loneliness, depression, shame, anxiety, sleep disturbances, and loss of balance.

Flex-printed electrode arrays, such as these from the Fraunhofer Institute for Digital Media Technology, offer a comfortable option for collecting high-quality EEG signals. Leona Hofmann/Fraunhofer IDMT

And yet, despite widespread availability, hearing aid adoption remains low. According to a 2024 study published in The Lancet, only about 13 percent of American adults with hearing loss regularly wear hearing aids. Key reasons for the low uptake include discomfort, stigma, cost—and, crucially, frustration with the poor performance of hearing aids in noisy environments.

Historically, hearing technology has come a long way. As early as the 13th century, people began using horns of cows and rams as “ear trumpets.” Commercial versions made of various materials, including brass and wood, came on the market in the early 19th century. (Beethoven, who famously began losing his hearing in his twenties, used variously shaped ear trumpets, some of which are now on display in a museum in Bonn, Germany.) But these contraptions were so bulky that users had to hold them with their hands or wear them within headbands. To avoid stigma, some even hid hearing aids inside furniture to mask their disability. In 1819, a special acoustic chair was designed for the king of Portugal, featuring arms ornately carved to look like open lion mouths, which helped transmit sound to the king’s ear via speaking tubes.

Modern hearing aids came into being after the advent of electronics in the early 20th century. Early devices used vacuum tubes and then transistors to amplify sound, shrinking over time from bulky body-worn boxes to discreet units that fit behind or inside the ear. At their core, today’s hearing aids still work on the same principle: A microphone picks up sound, a processor amplifies and shapes it to match the user’s hearing loss, and a tiny speaker delivers the adjusted sound into the ear canal.

Today’s best-in-class devices, like those from Oticon, Phonak, and Starkey, have pioneered increasingly advanced technologies, including the aforementioned beamforming microphones, frequency lowering to better pick up high-pitched sounds and voices, and machine learning to recognize and adapt to specific environments. For example, the device may reduce amplification in a quiet room to avoid escalating background hums or else increase amplification in a noisy café to make speech more intelligible.

Advances in the AI technique of deep learning, which relies on artificial neural networks to automatically recognize patterns, also hold enormous promise. Using context-aware algorithms, this technology can, for example, be used to help distinguish between speech and noise, predict and suppress unwanted clamor in real time, and attempt to clean up speech that is muffled or distorted.

The problem? As of right now, consumer systems respond only to external acoustic environments and not to the internal cognitive state of the listener—which means they act on imperfect and incomplete information. So, what if hearing aids were more empathetic? What if they could sense when the listener’s brain feels tired or overwhelmed and automatically use that feedback to deploy advanced features?

Using EEG to Augment Hearing Aids

When it comes to creating intelligent hearing aids, there are two main challenges. The first is building convenient, power-efficient wearable devices that accurately detect brain states. The second, perhaps more difficult step is decoding feedback from the brain and using that information to help hearing aids adapt in real time to the listener’s cognitive state and auditory experience.

Let’s start with EEG. This century-old noninvasive technology uses electrodes placed on the scalp to measure the brain’s electrical activity through voltage fluctuations, which are recorded as “brain waves.”

Brain-computer interfaces allow researchers to accurately determine a listener’s focus in multitalker environments. Here, professor Christopher Smalt works on an attention-decoding system at the MIT Lincoln Laboratory. MIT Lincoln Laboratory

Clinically, EEG has long been applied for diagnosing epilepsy and sleep disorders, monitoring brain injuries, assessing hearing ability in infants and impaired individuals, and more. And while standard EEG requires conductive gel and bulky headsets, we now have versions that are far more portable and convenient. These breakthroughs have already allowed EEG to migrate from hospitals into the consumer tech space, driving everything from neurofeedback headbands to the BCIs in gaming and wellness apps that allow people to control devices with their minds.

The cEEGrid project at Oldenburg University, in Germany, positions lightweight adhesive electrodes around the ear to create a low-profile version. In Denmark, Aarhus University’s Center for Ear-EEG also has an ear-based EEG system designed for comfort and portability. While the signal-to-noise ratio is slightly lower compared to head-worn EEG, these ear-based systems have proven sufficiently accurate for gauging attention, listening effort, hearing thresholds, and speech tracking in real time.

For hearing aids, EEG technology can pick up brain-wave patterns that reveal how well a listener is following speech: When listeners are paying attention, their brain rhythms synchronize with the syllabic rhythms of discourse, essentially tracking the speaker’s cadence. By contrast, if the signal becomes weaker or less precise, it suggests the listener is struggling to comprehend and losing focus.

During my own Ph.D. research, I observed firsthand how real-time brain-wave patterns, picked up by EEG, can reflect the quality of a listener’s speech cognition. For example, when participants successfully homed in on a single talker in a crowded room, their neural rhythms aligned nearly perfectly with that speaker’s voice. It was as if there were a brain-based spotlight on that speaker! But when background fracas grew louder or the listener’s attention drifted, those patterns waned, revealing stress in keeping up.

Today, researchers at Aarhus University, Oldenburg University, and MIT are developing attention-decoding algorithms specifically for auditory applications. For example, Oldenburg’s cEEGrid technology has been used to successfully identify which of two speakers a listener is trying to hear. In a related study, researchers demonstrated that in-ear EEG can track the attended speech stream in multitalker environments.
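
As a rough illustration of the principle behind such attention decoding, the sketch below correlates an EEG-derived signal with the amplitude envelopes of two competing talkers and picks the better-matched one. It is a toy example on synthetic data, not the algorithm from any of the studies above; the sample rate, neural lag, and noise level are assumptions.

import numpy as np

rng = np.random.default_rng(0)
fs = 64                      # feature sample rate in Hz (illustrative)
n = 30 * fs                  # 30 seconds of data

def smooth_envelope(n_samples):
    # Slowly varying positive signal standing in for a talker's speech envelope.
    x = np.abs(rng.standard_normal(n_samples))
    w = np.hanning(32)
    return np.convolve(x, w / w.sum(), mode="same")

env_a = smooth_envelope(n)   # talker A's envelope
env_b = smooth_envelope(n)   # talker B's envelope

# Pretend the listener attends to talker A: the EEG feature is a delayed, noisy
# copy of A's envelope (a crude stand-in for real cortical speech tracking).
delay = int(0.1 * fs)        # roughly 100-millisecond neural lag (assumed)
eeg = np.roll(env_a, delay) + 0.8 * rng.standard_normal(n)

def tracking_score(eeg_feature, envelope, max_lag):
    # Best Pearson correlation between the EEG feature and a lagged envelope.
    best = -1.0
    for lag in range(max_lag + 1):
        r = np.corrcoef(eeg_feature[lag:], envelope[:n - lag])[0, 1]
        best = max(best, r)
    return best

max_lag = int(0.3 * fs)
score_a = tracking_score(eeg, env_a, max_lag)
score_b = tracking_score(eeg, env_b, max_lag)
print("attended talker:", "A" if score_a > score_b else "B")

Published systems use more sophisticated regression and classification models, but the underlying question is the same: which sound stream does the brain signal follow most closely?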

All of this could prove transformational in creating neuroadaptive hearing aids. If a listener’s EEG reveals a drop in speech tracking, the hearing aid could infer increased listening difficulty, even if ambient noise levels have remained constant. For example, if a hearing-impaired car driver can’t focus on a conversation due to mental fatigue caused by background noise, the hearing aid could switch on beamforming to better augment the passenger’s voice, as well as machine-learning settings to deploy sound canceling that blocks the din of the road.
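
In code, such a neuroadaptive policy could be as simple as the hypothetical sketch below: when the EEG tracking score stays low for several consecutive seconds, escalate the processing; when it recovers, relax it. The threshold values, window length, and setting names are all invented for illustration.

from dataclasses import dataclass

@dataclass
class AidSettings:
    beamforming: bool = False
    noise_suppression: str = "mild"      # "mild" or "aggressive"

def update_settings(score_history, settings, threshold=0.10, window=5):
    # Hypothetical policy: escalate only after `window` consecutive low tracking
    # scores, and relax only after `window` comfortably high ones, so the aid
    # does not flip modes on every momentary dip.
    recent = score_history[-window:]
    if len(recent) < window:
        return settings
    if all(s < threshold for s in recent):
        settings.beamforming = True
        settings.noise_suppression = "aggressive"
    elif all(s > 2 * threshold for s in recent):
        settings.beamforming = False
        settings.noise_suppression = "mild"
    return settings

# Example: one EEG tracking score per second while the wearer rides in a car.
settings, history = AidSettings(), []
for score in [0.32, 0.28, 0.09, 0.07, 0.06, 0.05, 0.04]:
    history.append(score)
    settings = update_settings(history, settings)
print(settings)   # beamforming and aggressive suppression after the sustained dip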

Of course, there are several hurdles to cross before commercialization becomes possible. For one thing, EEG-paired hearing aids will need to handle the fact that neural responses differ from person to person, which means they will likely need to be calibrated individually to capture each user’s unique brain-speech patterns.

Additionally, EEG signals are themselves notoriously “noisy,” especially in real-world environments. Luckily, we already have algorithms and processing tools for cleaning and organizing these signals so computer models can search for key patterns that predict mental states, including attention drift and fatigue.
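
Standard signal-processing tools already handle much of that cleanup. The sketch below, for example, band-pass filters a single EEG channel into the low-frequency range where speech-envelope tracking mostly lives and flags one-second windows whose amplitude looks like a motion artifact. The sample rate, cutoffs, and rejection threshold are illustrative assumptions, not values from any particular device.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # EEG sample rate in Hz (illustrative)

def bandpass(x, low=1.0, high=8.0, order=4):
    # Zero-phase Butterworth band-pass filter.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def reject_artifacts(x, window=fs, amp_factor=5.0):
    # Mark one-second windows whose peak amplitude is an outlier (for example,
    # head movement or an electrode pop) so they can be excluded from decoding.
    keep = np.ones(x.size, dtype=bool)
    sd = np.std(x)
    for start in range(0, x.size - window + 1, window):
        seg = x[start:start + window]
        if np.max(np.abs(seg)) > amp_factor * sd:
            keep[start:start + window] = False
    return keep

raw = np.random.default_rng(1).standard_normal(10 * fs)  # stand-in for one channel
clean = bandpass(raw)
mask = reject_artifacts(clean)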

Commercial versions of EEG-paired hearing aids will also need to be small and energy-efficient when it comes to signal processing and real-time computation. And getting them to work reliably, despite head movement and daily activity, will be no small feat. Importantly, companies will need to resolve ethical and regulatory considerations, such as data ownership. To me, these challenges seem surmountable, especially with technology progressing at a rapid clip.

A Window to the Brain: Using Our Eyes to Hear

Now let’s consider a second way of reading brain states: through the listener’s eyes.

When a person has trouble hearing and starts feeling overwhelmed, the body reacts. Heart-rate variability diminishes, indicating stress, and sweating increases. Researchers are investigating how these types of autonomic nervous-system responses can be measured and used to create smart hearing aids. For the purposes of this article, I will focus on a response that seems especially promising—namely, pupil size.

Pupillometry is the measurement of pupil size and how it changes in response to stimuli. We all know that pupils expand or contract depending on light brightness. As it turns out, pupil size is also an accurate means of evaluating attention, arousal, mental strain—and, crucially, listening effort.

Pupil size is determined by both external stimuli, such as light, and internal stimuli, such as fatigue or excitement. Chris Philpot

In recent years, studies at University College London and Leiden University, in the Netherlands, have demonstrated that pupil dilation is consistently greater in hearing-impaired individuals when processing speech in noisy conditions. Research has also shown pupillometry to be a sensitive, objective correlate of speech intelligibility and mental strain. It could therefore offer a feedback mechanism for user-aware hearing aids that dynamically adjust amplification strategies, directional focus, or noise reduction based not just on the acoustic environment but on how hard the user is working to comprehend speech.

While more straightforward than EEG, pupillometry presents its own engineering challenges. Pupillometry requires a direct line of sight to the pupil, necessitating a stable, front-facing camera-to-eye configuration—which isn’t easy to achieve when a wearer is moving around in real-world settings. On top of that, most pupil-tracking systems require infrared illumination and high-resolution optical cameras, which are too bulky and power intensive for the tiny housings of in-ear or behind-the-ear hearing aids. All this makes it unlikely that standalone hearing aids will include pupil-tracking hardware in the near future.

A more viable approach may be pairing hearing aids with smart glasses or other wearables that contain the necessary eye-tracking hardware. Products from companies like Tobii and Pupil Labs already offer real-time pupillometry via lightweight headgear for use in research, behavioral analysis, and assistive technology for people with medical conditions that limit movement but leave eye control intact. Apple’s Vision Pro and other augmented reality or virtual reality platforms also include built-in eye-tracking sensors that could support pupillometry-driven adaptations for audio content.

Smart glasses that measure pupil size, such as these made by Tobii, could help determine listening strain. Tobii

Once pupil data is acquired, the next step will be real-time interpretation. Here, again, is where machine learning can use large datasets to detect patterns signifying increased cognitive load or attentional shifts. For instance, if a listener’s pupils dilate unnaturally during a conversation, signifying strain, the hearing aid could automatically engage a more aggressive noise suppression mode or narrow its directional microphone beam. These types of systems can also learn from contextual features, such as time of day or prior environments, to continuously refine their response strategies.
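
As a toy version of that logic, the sketch below converts pupil-diameter samples into a baseline-relative dilation score and switches to a more aggressive mode only after several consecutive elevated windows, so brief light changes or blinks don’t trigger it. Every number here is an assumption for illustration, not a published threshold.

import numpy as np

def dilation_score(pupil_mm, baseline_mm):
    # Pupil dilation relative to a resting baseline, as a fraction.
    return (np.mean(pupil_mm) - baseline_mm) / baseline_mm

def choose_mode(recent_scores, strain_level=0.15, patience=3):
    # Switch to aggressive noise suppression only after `patience` consecutive
    # windows of elevated dilation.
    if len(recent_scores) >= patience and all(
        s > strain_level for s in recent_scores[-patience:]
    ):
        return "aggressive"
    return "normal"

baseline = 3.2  # resting pupil diameter in mm, measured during calibration (assumed)
windows = [
    [3.3, 3.4, 3.3],   # relaxed listening
    [3.8, 3.9, 3.8],   # effortful listening begins
    [3.9, 4.0, 3.9],
    [3.9, 3.8, 4.0],
]
scores = [dilation_score(np.array(w), baseline) for w in windows]
print(choose_mode(scores))  # "aggressive" after three elevated windows in a row

A real system would also have to correct for ambient light, since brightness changes dwarf the effort-related dilation this sketch is looking for.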

While no commercial hearing aid currently integrates pupillometry, adjacent industries are moving quickly. Emteq Labs is developing “emotion-sensing” glasses that combine facial and eye tracking, along with pupil measurement, to do things like evaluate mental health and capture consumer insights. Ethical controversies aside—just imagine what dystopian governments might do with emotion-reading eyewear!—such devices show that it’s feasible to embed biosignal monitoring in consumer-grade smart glasses.

A Future with Empathetic Hearing Aids

Back at the dinner party, it remains nearly impossible to participate in conversation. “Why even bother going out?” some ask. But that will soon change.

We’re at the cusp of a paradigm shift in auditory technology, from device-centered to user-centered innovation. In the next five years, we may see hybrid solutions where EEG-enabled earbuds work in tandem with smart glasses. In 10 years, fully integrated biosignal-driven hearing aids could become the standard. And in 50? Perhaps audio systems will evolve into cognitive companions, devices that adjust, advise, and align with our mental state.

Personalizing hearing-assistance technology isn’t just about improving clarity; it’s also about easing mental fatigue, reducing social isolation, and empowering people to engage confidently with the world. Ultimately, it’s about restoring dignity, connection, and joy.