
How AI Accelerates PMUT Design for Biomedical Ultrasonic Applications

2026-01-09 06:06:42



This whitepaper gives MEMS engineers, biomedical device developers, and multiphysics simulation specialists a practical AI-accelerated workflow for optimizing piezoelectric micromachined ultrasonic transducers (PMUTs). It shows how to explore complex design trade-offs between sensitivity and bandwidth, and how to achieve validated performance improvements in minutes instead of days using standard cloud infrastructure.

What you will learn about:

  • MultiphysicsAI combines cloud-based FEM simulation with neural surrogates to transform PMUT design from trial-and-error iteration into systematic inverse optimization
  • Training on 10,000 randomized geometries produces AI surrogates with 1% mean error and sub-millisecond inference for key performance indicators: transmit sensitivity, center frequency, fractional bandwidth, and electrical impedance (a minimal surrogate sketch follows this list)
  • Pareto front optimization simultaneously increases fractional bandwidth from 65% to 100% and improves sensitivity by 2-3 dB while maintaining 12 MHz center frequency within ±0.2%
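
To make the surrogate idea concrete, below is a minimal sketch of the kind of neural surrogate the whitepaper describes, written in PyTorch. The four-parameter geometry, the network size, and the random placeholder targets (standing in for FEM results) are assumptions for illustration, not the whitepaper's actual pipeline.

# Minimal neural-surrogate sketch: map PMUT geometry parameters to the
# four KPIs named above. Hypothetical 4-parameter geometry; random
# placeholder targets stand in for normalized FEM simulation outputs.
import torch
import torch.nn as nn

GEOM_DIM, KPI_DIM = 4, 4  # e.g. radius, membrane/piezo thickness, electrode coverage

surrogate = nn.Sequential(
    nn.Linear(GEOM_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, KPI_DIM),  # sensitivity, center frequency, bandwidth, impedance
)

X = torch.rand(10_000, GEOM_DIM)  # 10,000 randomized geometries
y = torch.rand(10_000, KPI_DIM)   # placeholder labels; real ones come from FEM

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):          # full-batch training, kept tiny for clarity
    opt.zero_grad()
    loss = loss_fn(surrogate(X), y)
    loss.backward()
    opt.step()

# Once trained, each KPI prediction is a sub-millisecond forward pass,
# which is what makes exhaustive trade-off and Pareto searches practical.
print(surrogate(torch.rand(1, GEOM_DIM)))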

AI Coding Assistants Are Getting Worse

2026-01-08 21:00:02



In recent months, I’ve noticed a troubling trend with AI coding assistants. After two years of steady improvements, over the course of 2025, most of the core models reached a quality plateau, and more recently, seem to be in decline. A task that might have taken five hours assisted by AI, and perhaps ten hours without it, is now more commonly taking seven or eight hours, or even longer. It’s reached the point where I am sometimes going back and using older versions of large language models (LLMs).

I use LLM-generated code extensively in my role as CEO of Carrington Labs, a provider of predictive-analytics risk models for lenders. My team has a sandbox where we create, deploy, and run AI-generated code without a human in the loop. We use this code to extract useful features for model construction, a natural-selection approach to feature development. This gives me a unique vantage point from which to evaluate coding assistants’ performance.

Newer models fail in insidious ways

Until recently, the most common problem with AI coding assistants was poor syntax, followed closely by flawed logic. AI-created code would often fail with a syntax error or snarl itself up in faulty structure. This could be frustrating: the solution usually involved manually reviewing the code in detail and finding the mistake. But it was ultimately tractable.

However, recently released LLMs, such as GPT-5, have a much more insidious method of failure. They often generate code that fails to perform as intended but on the surface seems to run successfully, avoiding syntax errors or obvious crashes. They do this by removing safety checks, by creating fake output that matches the desired format, or through a variety of other techniques that avoid crashing during execution.

As any developer will tell you, this kind of silent failure is far, far worse than a crash. Flawed outputs will often lurk undetected in code until they surface much later. This creates confusion and is far more difficult to catch and fix. This sort of behavior is so unhelpful that modern programming languages are deliberately designed to fail quickly and noisily.
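
To see the difference concretely, compare a fail-fast column lookup with the silent-fallback anti-pattern. Both functions below are my own illustrations, not output from any model:

import pandas as pd

def add_one_loud(df: pd.DataFrame, col: str) -> pd.DataFrame:
    # Fail fast: surface the real problem at the point where it occurs.
    if col not in df.columns:
        raise KeyError(f"column {col!r} not found; available: {list(df.columns)}")
    df[col + '_plus_one'] = df[col] + 1
    return df

def add_one_silent(df: pd.DataFrame, col: str) -> pd.DataFrame:
    # Anti-pattern: fabricate plausible values so the code "runs" anyway.
    fallback = pd.Series(0, index=df.index)
    df[col + '_plus_one'] = df.get(col, fallback) + 1
    return df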

A simple test case

I’ve noticed this problem anecdotally over the past several months, but recently, I ran a simple yet systematic test to determine whether it was truly getting worse. I wrote some Python code which loaded a dataframe and then looked for a nonexistent column.

import pandas as pd

df = pd.read_csv('data.csv')
df['new_column'] = df['index_value'] + 1  # there is no column 'index_value'

Obviously, this code would never run successfully. Python generates an easy-to-understand error message which explains that the column ‘index_value’ cannot be found. Any human seeing this message would inspect the dataframe and notice that the column was missing.

I sent this error message to nine different versions of ChatGPT, primarily variations on GPT-4 and the more recent GPT-5. I asked each of them to fix the error, specifying that I wanted completed code only, without commentary.

This is of course an impossible task—the problem is the missing data, not the code. So the best answer would be either an outright refusal, or failing that, code that would help me debug the problem. I ran ten trials for each model, and classified the output as helpful (when it suggested the column is probably missing from the dataframe), useless (something like just restating my question), or counterproductive (for example, creating fake data to avoid an error).
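
For readers who want to reproduce the experiment, a minimal harness might look like the sketch below. It assumes the openai Python client; the model identifiers, prompt wording, and crude keyword classifier are placeholders, not my exact setup.

# Illustrative harness for the test described above. Assumes the
# `openai` package; model names, prompt, and classifier are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Fix the error in this code. Return completed code only, no commentary.\n\n"
    "df = pd.read_csv('data.csv')\n"
    "df['new_column'] = df['index_value'] + 1\n\n"
    "Error: KeyError: 'index_value'"
)

def classify(reply: str) -> str:
    # Crude keyword classifier standing in for manual review of each reply.
    text = reply.lower()
    if "missing" in text or "df.columns" in text:
        return "helpful"            # points at the absent column
    if "df.index" in text or "fillna(" in text:
        return "counterproductive"  # fabricates a workaround
    return "useless"

for model in ("gpt-4", "gpt-5"):    # placeholder model identifiers
    for trial in range(10):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        print(model, trial, classify(resp.choices[0].message.content))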

GPT-4 gave a useful answer in nine of the 10 trials. In three cases, it ignored my instructions to return only code and explained that the column was likely missing from my dataset, and that I would have to address it there. In six cases, it tried to execute the code but added an exception that would either throw up an error or fill the new column with an error message if the column couldn’t be found. (In the tenth trial, it simply restated my original code.)

One such response explained: “This code will add 1 to the ‘index_value’ column from the dataframe ‘df’ if the column exists. If the column ‘index_value’ does not exist, it will print a message. Please make sure the ‘index_value’ column exists and its name is spelled correctly.”

GPT-4.1 had an arguably even better solution. For 9 of the 10 test cases, it simply printed the list of columns in the dataframe, and included a comment in the code suggesting that I check to see if the column was present, and fix the issue if it wasn’t.
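
A reconstruction of that style of fix (my illustration, not verbatim model output) looks like this:

df = pd.read_csv('data.csv')
print(df.columns.tolist())  # check whether 'index_value' is actually present
# 'index_value' is not in the data; fix the source file before creating new_column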

GPT-5, by contrast, found a solution that worked every time: it simply took the actual index of each row (not the fictitious ‘index_value’) and added 1 to it in order to create new_column. This is the worst possible outcome: the code executes successfully, and at first glance seems to be doing the right thing, but the resulting value is essentially a random number. In a real-world example, this would create a much larger headache downstream in the code.

df = pd.read_csv('data.csv')
df['new_column'] = df.index + 1

I wondered if this issue was particular to the GPT family of models. I didn’t test every model in existence, but as a check I repeated my experiment on Anthropic’s Claude models. I found the same trend: the older Claude models, confronted with this unsolvable problem, essentially shrug their shoulders, while the newer models sometimes solve the problem and sometimes just sweep it under the rug.

[Chart: the fraction of responses that were helpful, unhelpful, or counterproductive for different versions of large language models.] Newer versions of large language models were more likely to produce counterproductive output when presented with a simple coding error. Credit: Jamie Twiss

Garbage in, garbage out

I don’t have inside knowledge on why the newer models fail in such a pernicious way. But I have an educated guess. I believe it’s the result of how the LLMs are being trained to code. The older models were trained on code much the same way as they were trained on other text. Large volumes of presumably functional code were ingested as training data, which was used to set model weights. This wasn’t always perfect, as anyone using AI for coding in early 2023 will remember, with frequent syntax errors and faulty logic. But it certainly didn’t rip out safety checks or find ways to create plausible but fake data, like GPT-5 in my example above.

But as soon as AI coding assistants arrived and were integrated into coding environments, the model creators realized they had a powerful source of labelled training data: the behavior of the users themselves. If an assistant offered up suggested code, the code ran successfully, and the user accepted the code, that was a positive signal, a sign that the assistant had gotten it right. If the user rejected the code, or if the code failed to run, that was a negative signal, and when the model was retrained, the assistant would be steered in a different direction.

This is a powerful idea, and no doubt contributed to the rapid improvement of AI coding assistants for a period of time. But as inexperienced coders started turning up in greater numbers, it also started to poison the training data. AI coding assistants that found ways to get their code accepted by users kept doing more of that, even if “that” meant turning off safety checks and generating plausible but useless data. As long as a suggestion was taken on board, it was viewed as good, and downstream pain would be unlikely to be traced back to the source.
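
Here is a toy illustration of why acceptance is a poisoned reward signal. The scoring rule is my assumption for the sketch, not a published training recipe:

# Toy model of the feedback loop: acceptance-based rewards cannot
# distinguish correct code from code that merely runs and gets accepted.
samples = [
    {"ran": False, "accepted": False, "correct": False},  # noisy syntax error
    {"ran": True,  "accepted": True,  "correct": True},   # genuine fix
    {"ran": True,  "accepted": True,  "correct": False},  # silent failure
]

for s in samples:
    reward = int(s["ran"] and s["accepted"])
    # The silent failure earns the same reward as the genuine fix, so
    # retraining steers the model toward "anything that runs and is accepted."
    print(s, "->", reward)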

The most recent generation of AI coding assistants has taken this thinking even further, automating more and more of the coding process with autopilot-like features. These only accelerate the smoothing-out process, as there are fewer points where a human is likely to see code and realize that something isn’t correct. Instead, the assistant is likely to keep iterating to try to get to a successful execution. In doing so, it is likely learning the wrong lessons.

I am a huge believer in artificial intelligence, and I believe that AI coding assistants have a valuable role to play in accelerating development and democratizing the process of software creation. But chasing short-term gains, and relying on cheap, abundant, but ultimately poor-quality training data is going to continue resulting in model outcomes that are worse than useless. To start making models better again, AI coding companies need to invest in high-quality data, perhaps even paying experts to label AI-generated code. Otherwise, the models will continue to produce garbage, be trained on that garbage, and thereby produce even more garbage, eating their own tails.

Meet the IEEE Board-Nominated Candidates for President-Elect

2026-01-08 03:00:03



The IEEE Board of Directors has nominated IEEE Senior Member David Alan Koehler and IEEE Life Fellow Manfred “Fred” J. Schindler as candidates for 2027 IEEE president-elect.

IEEE Senior Member Gerardo Barbosa and IEEE Life Senior Member Timothy T. Lee are seeking nomination by petition. A separate article about the petition candidates will be published in The Institute at a later date.

The winner of this year’s election will serve as IEEE president in 2028. For more information about the election, the president-elect candidates, and the petition process, visit ieee.org/elections.

IEEE Senior Member David Alan Koehler

[Photo of David Alan Koehler. Credit: Steven Miller Photography]

Koehler is a subject matter expert with almost 30 years of experience in establishing condition-based maintenance practices for electrical equipment and managing analytical laboratories. He has presented his work at global conferences and published articles in technical publications related to the power industry. Koehler is an executive advisor at Danovo Energy Solutions.

An active volunteer, he has served in every geographical unit within IEEE. His first leadership position was chair of the Central Indiana Section from 2012 to 2014. He served as 2019–2020 director of IEEE Region 4, vice chair of the 2022 IEEE Board of Directors Ad Hoc Committee on the Future of Engagement, 2022 vice president of IEEE Member and Geographic Activities, and chair of the 2024 IEEE Board of Directors Ad Hoc Committee on Leadership Continuity and Efficiency.

He served on the IEEE Board of Directors for three different years. He has been a member of the IEEE-USA, Member and Geographic Activities, and Publication Services and Products boards.

Koehler is a proud and active member of IEEE Women In Engineering and IEEE-Eta Kappa Nu, the honor society.

IEEE Life Fellow Manfred “Fred” J. Schindler

[Photo of Manfred Schindler. Credit: Steven Miller Photography]

Schindler, an expert in microwave semiconductor technology, is an independent consultant supporting clients with technical expertise, due diligence, and project management.

Throughout his career, he led the development of microwave integrated-circuit technology, from lab demonstrations to high-volume commercial products. He has numerous technical publications and holds 11 patents.

Schindler served as CTO of Anlotek, and director of Qorvo and RFMD’s Boston design center. He was applications manager at IBM, engineering manager at ATN Microwave, and a lab manager at Raytheon.

An IEEE volunteer for more than 30 years, Schindler served as the 2024 vice president of IEEE Technical Activities and the 2022–2023 Division IV director. He was chair of the IEEE Conferences Committee from 2015 to 2018 and president of the IEEE Microwave Theory and Technology Society (MTT-S) in 2003. He received the 2018 IEEE MTT-S Distinguished Service Award. His award-winning micro-business column has appeared in IEEE Microwave Magazine since 2011.

He also led the 2025 One IEEE to Enable Strategic Investments in Innovations and Public Imperative Activities ad hoc committee.

Schindler is an IEEE–Eta Kappa Nu honorary life member.

These Hearing Aids Will Tune in to Your Brain

2026-01-07 22:00:02



Imagine you’re at a bustling dinner party filled with laughter, music, and clinking silverware. You’re trying to follow a conversation across the table, but every word feels like it’s wrapped in noise. For most people, these types of party scenarios, where it’s difficult to filter out extraneous sounds and focus on a single source, are an occasional annoyance. For millions with hearing loss, they’re a daily challenge—and not just in busy settings.

Today’s hearing aids aren’t great at determining which sounds to amplify and which to ignore, and this often leaves users overwhelmed and fatigued. Even the routine act of conversing with a loved one during a car ride can be mentally draining, simply because the hum of the engine and road noises are magnified to create loud and constant background static that blurs speech.

In recent years, modern hearing aids have made impressive strides. They can, for example, use a technology called adaptive beamforming to focus their microphones in the direction of a talker. Noise-reduction settings also help decrease background cacophony, and some devices even use machine-learning-based analysis, trained on uploaded data, to detect certain environments—for example a car or a party—and deploy custom settings.
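
Commercial implementations are adaptive and proprietary, but the core idea can be sketched with the simplest textbook variant, a two-microphone delay-and-sum beamformer. All constants and the wrap-around shift below are simplifications for illustration:

# Minimal two-microphone delay-and-sum beamformer sketch. Real hearing
# aids use adaptive, frequency-domain variants; this shows the principle.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
MIC_SPACING = 0.01      # 1 cm between front and rear mics (hearing-aid scale)
FS = 16_000             # sample rate, Hz

def delay_and_sum(front: np.ndarray, rear: np.ndarray, angle_deg: float) -> np.ndarray:
    """Steer the two-mic array toward angle_deg (0 = straight ahead)."""
    # Extra travel time of the wavefront from the front mic to the rear mic:
    delay_sec = MIC_SPACING * np.cos(np.deg2rad(angle_deg)) / SPEED_OF_SOUND
    delay_samples = int(round(delay_sec * FS))
    # Advance the rear signal so both channels align for the target
    # direction; sound from other directions no longer adds coherently.
    aligned_rear = np.roll(rear, -delay_samples)
    return 0.5 * (front + aligned_rear)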

That’s why I was initially surprised to find out that today’s state-of-the-art hearing aids aren’t good enough. “It’s like my ears work but my brain is tired,” I remember one elderly man complaining, frustrated with the inadequacy of his cutting-edge noise-suppression hearing aids. At the time, I was a graduate student at the University of Texas at Dallas, surveying individuals with hearing loss. The man’s insight led me to a realization: Mental strain is an unaddressed frontier of hearing technology.

But what if hearing aids were more than just amplifiers? What if they were listeners too? I envision a new generation of intelligent hearing aids that not only boost sound but also read the wearer’s brain waves and other key physiological markers, enabling them to react accordingly to improve hearing and counter fatigue.

Until last spring, when I took time off to care for my child, I was a senior audio research scientist at Harman International, in Los Angeles. My work combined cognitive neuroscience, auditory prosthetics, and the processing of biosignals, which are measurable physiological cues that reflect our mental and physical state. I’m passionate about developing brain-computer interfaces (BCIs) and adaptive signal-processing systems that make life easier for people with hearing loss. And I’m not alone. A number of researchers and companies are working to create smart hearing aids, and it’s likely they’ll come on the market within a decade.

Two technologies in particular are poised to revolutionize hearing aids, offering personalized, fatigue-free listening experiences: electroencephalography (EEG), which tracks brain activity, and pupillometry, which uses eye measurements to gauge cognitive effort. These approaches might even be used to improve consumer audio devices, transforming the way we listen everywhere.

Aging Populations in a Noisy World

More than 430 million people suffer from disabling hearing loss worldwide, including 34 million children, according to the World Health Organization. And the problem will likely get worse due to rising life expectancies and the fact that the world itself seems to be getting louder. By 2050, an estimated 2.5 billion people will suffer some degree of hearing loss and 700 million will require intervention. On top of that, as many as 1.4 billion of today’s young people—nearly half of those aged 12 to 34—could be at risk of permanent hearing loss from listening to audio devices too loud and for too long.

Every year, close to a trillion dollars is lost globally due to unaddressed hearing loss, a trend that is also likely getting more pronounced. That doesn’t account for the significant emotional and physical toll on the hearing impaired, including isolation, loneliness, depression, shame, anxiety, sleep disturbances, and loss of balance.

Flex-printed electrode arrays, such as these from the Fraunhofer Institute for Digital Media Technology, wrap over the ear and offer a comfortable option for collecting high-quality EEG signals. Credit: Leona Hofmann/Fraunhofer IDMT

And yet, despite widespread availability, hearing aid adoption remains low. According to a 2024 study published in The Lancet, only about 13 percent of American adults with hearing loss regularly wear hearing aids. Key reasons for this low uptake include discomfort, stigma, cost—and, crucially, frustration with the poor performance of hearing aids in noisy environments.

Historically, hearing technology has come a long way. As early as the 13th century, people began using horns of cows and rams as “ear trumpets.” Commercial versions made of various materials, including brass and wood, came on the market in the early 19th century. (Beethoven, who famously began losing his hearing in his twenties, used variously shaped ear trumpets, some of which are now on display in a museum in Bonn, Germany.) But these contraptions were so bulky that users had to hold them with their hands or wear them in headbands. To avoid stigma, some even hid hearing aids inside furniture to mask their disability. In 1819, a special acoustic chair was designed for the king of Portugal, featuring arms ornately carved to look like open lion mouths, which helped transmit sound to the king’s ear via speaking tubes.

Modern hearing aids came into being after the advent of electronics in the early 20th century. Early devices used vacuum tubes and then transistors to amplify sound, shrinking over time from bulky body-worn boxes to discreet units that fit behind or inside the ear. At their core, today’s hearing aids still work on the same principle: A microphone picks up sound, a processor amplifies and shapes it to match the user’s hearing loss, and a tiny speaker delivers the adjusted sound into the ear canal.
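
In code, that core signal chain reduces to something like the sketch below, with a made-up three-band prescription standing in for a fitted audiogram:

# Bare-bones hearing-aid signal chain: split the microphone signal into
# frequency bands and apply per-band gain. The gains are placeholders.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16_000  # sample rate, Hz

# Hypothetical prescription: (low edge Hz, high edge Hz, gain dB) per band.
BANDS = [(125, 500, 5), (500, 2000, 15), (2000, 7000, 25)]

def amplify(mic_samples: np.ndarray) -> np.ndarray:
    out = np.zeros(mic_samples.shape)
    for lo, hi, gain_db in BANDS:
        sos = butter(4, [lo, hi], btype='band', fs=FS, output='sos')
        band = sosfilt(sos, mic_samples)
        out += band * 10 ** (gain_db / 20)  # dB gain -> linear factor
    return out  # handed to the receiver, the tiny speaker in the ear canal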

Today’s best-in-class devices, like those from Oticon, Phonak, and Starkey, have pioneered increasingly advanced technologies, including the aforementioned beamforming microphones, frequency lowering to better pick up high-pitched sounds and voices, and machine learning to recognize and adapt to specific environments. For example, the device may reduce amplification in a quiet room to avoid escalating background hums or else increase amplification in a noisy café to make speech more intelligible.

Advances in the AI technique of deep learning, which relies on artificial neural networks to automatically recognize patterns, also hold enormous promise. Using context-aware algorithms, this technology can, for example, be used to help distinguish between speech and noise, predict and suppress unwanted clamor in real time, and attempt to clean up speech that is muffled or distorted.

The problem? As of right now, consumer systems respond only to external acoustic environments and not to the internal cognitive state of the listener—which means they act on imperfect and incomplete information. So, what if hearing aids were more empathetic? What if they could sense when the listener’s brain feels tired or overwhelmed and automatically use that feedback to deploy advanced features?

Using EEG to Augment Hearing Aids

When it comes to creating intelligent hearing aids, there are two main challenges. The first is building convenient, power-efficient wearable devices that accurately detect brain states. The second, perhaps more difficult step is decoding feedback from the brain and using that information to help hearing aids adapt in real time to the listener’s cognitive state and auditory experience.

Let’s start with EEG. This century-old noninvasive technology uses electrodes placed on the scalp to measure the brain’s electrical activity through voltage fluctuations, which are recorded as “brain waves.”

Brain-computer interfaces allow researchers to accurately determine a listener’s focus in multitalker environments. Here, Professor Christopher Smalt works on an attention-decoding system at the MIT Lincoln Laboratory. Credit: MIT Lincoln Laboratory

Clinically, EEG has long been applied for diagnosing epilepsy and sleep disorders, monitoring brain injuries, assessing hearing ability in infants and impaired individuals, and more. And while standard EEG requires conductive gel and bulky headsets, we now have versions that are far more portable and convenient. These breakthroughs have already allowed EEG to migrate from hospitals into the consumer tech spaces, driving everything from neurofeedback headbands to the BCIs in gaming and wellness apps that allow people to control devices with their minds.

The cEEGrid project at Oldenburg University, in Germany, positions lightweight adhesive electrodes around the ear to create a low-profile version. In Denmark, Aarhus University’s Center for Ear-EEG also has an ear-based EEG system designed for comfort and portability. While the signal-to-noise ratio is slightly lower compared to head-worn EEG, these ear-based systems have proven sufficiently accurate for gauging attention, listening effort, hearing thresholds, and speech tracking in real time.

For hearing aids, EEG technology can pick up brain-wave patterns that reveal how well a listener is following speech: When listeners are paying attention, their brain rhythms synchronize with the syllabic rhythms of discourse, essentially tracking the speaker’s cadence. By contrast, if the signal becomes weaker or less precise, it suggests the listener is struggling to comprehend and losing focus.
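
One simple way to quantify that synchronization is to correlate band-limited EEG with the speech amplitude envelope. The sketch below is a bare-bones illustration; production attention decoders fit regularized stimulus-reconstruction models rather than a single correlation, and the two signals are assumed to be time-aligned and sampled at the same rate:

# Simplified speech-tracking score: correlate 1-8 Hz EEG with the
# speech envelope. Assumes eeg and speech are aligned, equal-length
# arrays sampled at FS. Illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 128  # Hz, a common EEG processing rate

def speech_tracking_score(eeg: np.ndarray, speech: np.ndarray) -> float:
    envelope = np.abs(hilbert(speech))           # speech amplitude envelope
    b, a = butter(4, [1, 8], btype='band', fs=FS)
    eeg_band = filtfilt(b, a, eeg)               # keep the syllable-rate band
    env_band = filtfilt(b, a, envelope)
    return float(np.corrcoef(eeg_band, env_band)[0, 1])

# A sustained drop in this score suggests the listener is losing the
# speech stream, even if the acoustic scene has not changed.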

During my own Ph.D. research, I observed firsthand how real-time brain-wave patterns, picked up by EEG, can reflect the quality of a listener’s speech cognition. For example, when participants successfully homed in on a single talker in a crowded room, their neural rhythms aligned nearly perfectly with that speaker’s voice. It was as if there were a brain-based spotlight on that speaker! But when background fracas grew louder or the listener’s attention drifted, those patterns waned, revealing stress in keeping up.

Today, researchers at Oldenburg University, Aarhus University, and MIT are developing attention-decoding algorithms specifically for auditory applications. For example, Oldenburg’s cEEGrid technology has been used to successfully identify which of two speakers a listener is trying to hear. In a related study, researchers demonstrated that Ear-EEG can track the attended speech stream in multitalker environments.

All of this could prove transformational in creating neuroadaptive hearing aids. If a listener’s EEG reveals a drop in speech tracking, the hearing aid could infer increased listening difficulty, even if ambient noise levels have remained constant. For example, if a hearing-impaired car driver can’t focus on a conversation due to mental fatigue caused by background noise, the hearing aid could switch on beamforming to better spotlight the passenger’s voice, as well as machine-learning settings to deploy sound canceling that blocks the din of the road.

Of course, there are several hurdles to cross before commercialization becomes possible. For one thing, EEG-paired hearing aids will need to handle the fact that neural responses differ from person to person, which means they will likely need to be calibrated individually to capture each wearer’s unique brain-speech patterns.

Additionally, EEG signals are themselves notoriously “noisy,” especially in real-world environments. Luckily, we already have algorithms and processing tools for cleaning and organizing these signals so computer models can search for key patterns that predict mental states, including attention drift and fatigue.

Commercial versions of EEG-paired hearing aids will also need to be small and energy-efficient when it comes to signal processing and real-time computation. And getting them to work reliably, despite head movement and daily activity, will be no small feat. Importantly, companies will need to resolve ethical and regulatory considerations, such as data ownership. To me, these challenges seem surmountable, especially with technology progressing at a rapid clip.

A Window to the Brain: Using Our Eyes to Hear

Now let’s consider a second way of reading brain states: through the listener’s eyes.

When a person has trouble hearing and starts feeling overwhelmed, the body reacts. Heart-rate variability diminishes, indicating stress, and sweating increases. Researchers are investigating how these types of autonomic nervous-system responses can be measured and used to create smart hearing aids. For the purposes of this article, I will focus on a response that seems especially promising—namely, pupil size.

Pupillometry is the measurement of pupil size and how it changes in response to stimuli. We all know that pupils expand or contract depending on light brightness. As it turns out, pupil size is also an accurate means of evaluating attention, arousal, mental strain—and, crucially, listening effort.

Pupil size is determined by both external stimuli, such as light, and internal stimuli, such as fatigue or excitement. Credit: Chris Philpot

In recent years, studies at University College London and Leiden University have demonstrated that pupil dilation is consistently greater in hearing-impaired individuals when processing speech in noisy conditions. Research has also shown pupillometry to be a sensitive, objective correlate of speech intelligibility and mental strain. It could therefore offer a feedback mechanism for user-aware hearing aids that dynamically adjust amplification strategies, directional focus, or noise reduction based not just on the acoustic environment but on how hard the user is working to comprehend speech.

While more straightforward than EEG, pupillometry presents its own engineering challenges. Unlike ears, which can be monitored from behind, pupils require a direct line of sight, necessitating a stable, front-facing camera-to-eye configuration—which isn’t easy to achieve when a wearer is moving around in real-world settings. On top of that, most pupil-tracking systems require infrared illumination and high-resolution optical cameras, which are too bulky and power intensive for the tiny housings of in-ear or behind-the-ear hearing aids. All this makes it unlikely that standalone hearing aids will include pupil-tracking hardware in the near future.

A more viable approach may be pairing hearing aids with smart glasses or other wearables that contain the necessary eye-tracking hardware. Products from companies like Tobii and Pupil Labs already offer real-time pupillometry via lightweight headgear for use in research, behavioral analysis, and assistive technology for people with medical conditions that limit movement but leave eye control intact. Apple’s Vision Pro and other augmented reality or virtual reality platforms also include built-in eye-tracking sensors that could support pupillometry-driven adaptations for audio content.

Smart glasses that measure pupil size, such as these made by Tobii, could help determine listening strain. Credit: Tobii

Once pupil data is acquired, the next step will be real-time interpretation. Here, again, is where machine learning can use large datasets to detect patterns signifying increased cognitive load or attentional shifts. For instance, if a listener’s pupils dilate unnaturally during a conversation, signifying strain, the hearing aid could automatically engage a more aggressive noise suppression mode or narrow its directional microphone beam. These types of systems can also learn from contextual features, such as time of day or prior environments, to continuously refine their response strategies.
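
As a rough sketch of that control loop, the trigger could be a baseline-normalized pupil diameter. The thresholds, z-score rule, and device hooks below are placeholders, not any shipping product’s logic:

# Illustrative pupillometry-driven adaptation loop; `hearing_aid` is a
# hypothetical device interface, and the thresholds are invented.
import numpy as np

def pupil_z(diameter_mm: float, baseline_mm: np.ndarray) -> float:
    # How far the current diameter sits above the wearer's resting baseline.
    return (diameter_mm - baseline_mm.mean()) / baseline_mm.std()

def adapt(hearing_aid, diameter_mm: float, baseline_mm: np.ndarray) -> None:
    z = pupil_z(diameter_mm, baseline_mm)
    if z > 2.0:
        # Dilation well above baseline: treat as high listening effort.
        hearing_aid.set_noise_suppression("aggressive")
        hearing_aid.set_beam_width(30)   # degrees; narrow focus on the talker
    elif z < 0.5:
        # Relaxed listening: back off to a more natural, open sound.
        hearing_aid.set_noise_suppression("light")
        hearing_aid.set_beam_width(120)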

While no commercial hearing aid currently integrates pupillometry, adjacent industries are moving quickly. Emteq Labs is developing “emotion-sensing” glasses that combine facial and eye tracking, along with pupil measurement, to do things like evaluate mental health and capture consumer insights. Ethical controversies aside—just imagine what dystopian governments might do with emotion-reading eyewear!—such devices show that it’s feasible to embed biosignal monitoring in consumer-grade smart glasses.

A Future with Empathetic Hearing Aids

Back at the dinner party, it remains nearly impossible to participate in conversation. “Why even bother going out?” some ask. But that will soon change.

We’re at the cusp of a paradigm shift in auditory technology, from device-centered to user-centered innovation. In the next five years, we may see hybrid solutions where EEG-enabled earbuds work in tandem with smart glasses. In 10 years, fully integrated biosignal-driven hearing aids could become the standard. And in 50? Perhaps audio systems will evolve into cognitive companions, devices that adjust, advise, and align with our mental state.

Personalizing hearing-assistance technology isn’t just about improving clarity; it’s also about easing mental fatigue, reducing social isolation, and empowering people to engage confidently with the world. Ultimately, it’s about restoring dignity, connection, and joy.

How the Dictaphone Entered Office Life

2026-01-06 21:00:02



Thanks to Hollywood, whenever I think of a Dictaphone, my imagination immediately jumps to a mid-20th-century office, Don Draper suavely seated at his desk, voicing ad copy into a desktop machine. A perfectly coiffed woman from the secretarial pool then takes the recordings and neatly types them up, with carbon copies of course.

I had no idea the Dictaphone actually had its roots in the 19th century and a rivalry between two early tech giants: Alexander Graham Bell and Thomas Edison. And although it took decades to take hold in the modern office, it found novel uses in other fields.

Who invented the Dictaphone?

The Dictaphone was born from the competition and the cooperation of Bell and Edison and their capable teams of researchers. In 1877, Edison had introduced the phonograph, which he later declared his favorite invention. And yet he wasn’t quite certain about its commercial applications. Initially, he thought it might be good for recording telephone messages. Then he began to imagine other uses: a mechanical stenographer for businessmen, a notetaker for students, an elocution instructor, a talking book for the blind. The playback of recorded music—the phonograph’s eventual killer app—was No. 4 on Edison’s list. And after a few public demonstrations, he set aside the invention to pursue other interests.

Thomas Edison’s early phonograph from 1877 used a needle to record sound waves on a rotating cylinder wrapped with tinfoil. Credit: Thomas Edison National Historical Park/National Park Service/U.S. Department of the Interior

Enter Bell. In 1880, the French government awarded Bell the Volta Prize and 50,000 francs (about US $10,000 at the time) for his invention of the telephone. The following year, he, his cousin Chichester A. Bell, and Charles Sumner Tainter used the prize money to found the Volta Laboratory Association in Washington, D.C., to do research on sound recording and transmission.

Tainter saw potential in the phonograph. Edison’s version used a needle to etch sound waves on a sheet of tinfoil wrapped around a metal cylinder. The foil was easily damaged, the sound quality was distorted and squeaky, and the cylinder could be replayed only a few times before degrading and becoming inaudible. Edison’s phonograph couldn’t be easily commercialized, in other words.

Chichester Bell and Tainter greatly improved the sound quality by replacing the tinfoil with wax-coated cardboard cylinders. By 1886, the researchers at Volta Lab had a patented product: the Graphophone.

Two colleagues of Alexander Graham Bell refined Edison’s phonograph in the 1880s to create the Graphophone, which used wax-coated cardboard cylinders rather than tinfoil. Credit: Universal History Archive/Getty Images

Bell and Tainter believed the Graphophone would find greatest use as a mechanical stenographer. As a “dictator,” you would speak into the tube, and a stylus would trace the sound wave on the wax cylinder. The cylinder would then be handed off to a secretary for transcription. Typists used playback machines with foot pedals to control the speed of the recording and to reverse and repeat as necessary.

A manufacturing company set up by Volta Lab sold several machines to the U.S. government. One enthusiastic early adopter was Edward D. Easton, a noted stenographer for the U.S. Congress and the Supreme Court. Although Easton took notes in shorthand, he immediately recited his notes into the Graphophone after each session.

Easton became an evangelist for the instrument, writing glowing accounts in a trade magazine. The machine made no mistakes and could take dictation as fast as the speaker could articulate. The phonograph never complained when a transcriber needed a phrase repeated. The phonograph didn’t suffer from poor penmanship. Anyone could learn to use the machine in two weeks or less, compared to months or years to master stenography. Such were Easton’s claims. (Easton was such a fan that he cofounded the Columbia Phonograph Co., which went on to become a leading maker of phonographs and recorded music and lives on today as Columbia Records.)

Before long, several companies were manufacturing and selling phonographs and dictation machines. Even though demand was initially light, patent-infringement lawsuits sprang up, which soon threatened to bankrupt all of the companies involved. Finally, in 1896, the various parties agreed to stop fighting and to cross-license each other’s intellectual property. This didn’t end the Bell-Edison rivalry, but it allowed the phonograph business to take off in earnest, aided by the sales of mass-produced recorded music cylinders. And the accepted name for this entertainment machine became the phonograph.

The Dictaphone Gets Down to Business

But Bell, Tainter, and Edison didn’t forget the original promise of mechanical stenography, and the rivals soon came out with competing and very similar products designed specifically for dictation: the Dictaphone and the Ediphone. The public found it difficult to distinguish the two products, and it wasn’t long before “dictaphone” was being used to describe all dictation machines. (The Columbia Graphophone Co. trademarked “Dictaphone” in 1907—a confusing neologism of dicta from the Latin for “sayings” or “say repeatedly” and phone from the Greek for “voice” or “sound.”)

As David Morton recounts in his 1999 book Off the Record (Rutgers University Press), Dictaphone sales accelerated as scientific management for business began to take root. Office managers intent on streamlining, standardizing, and systemizing workflows saw the Dictaphone as a labor-saving device. In 1912, for instance, an efficiency commission set up by U.S. President William Taft endorsed the use of dictation machines in government offices. The railroad and insurance industries followed suit as they standardized their financial records. Later, managers began using dictation machines to conquer their business correspondence.

A Congressional reporter uses a Dictaphone in 1908. The U.S. government was an early adopter of the machines. Credit: Library of Congress

And yet, the Dictaphone wasn’t obviously destined to become an indispensable piece of office equipment like the typewriter. In 1923, for instance, 15,000 dictation machines were sold in the United States, versus 744,000 typewriters.

In 1926, the Dictaphone Corp. tried to drum up interest by sponsoring Henry Lunn, founder of a large U.K. travel company, on an around-the-world lecture tour. At each hotel he visited, the company ensured there was a Dictaphone for Lunn to record his diary. Consider this a prototype for the modern hotel business center. At the end of his journey, Lunn published Round the World With a Dictaphone—part travelogue, part proselytizing for Christian churches to support the League of Nations, and part Dictaphone promotion. Even so, by 1945, Dictaphone estimated that only 15 to 25 percent of the potential market had been captured.

There were social reasons working against dictation machines, Morton says in his book. Executives relied on their secretaries not only for dictation and transcription, but also for their often unacknowledged aid in prompting, correcting, and filling in their bosses’ thoughts—the soft skills that a machine could not replace.

Morton also attributes the slow uptake to the technology itself. One quirk of the Dictaphone is that it continued to use wax cylinders long after phonograph players had switched to discs. Transcribers often complained that the wax recordings were unintelligible—dictators needed to speak directly into the speaking tube, loudly, clearly, and at an appropriate pace, but many did not.

A secretary plays back the sound from a recorded Ediphone cylinder in 1930 to transcribe the cylinder’s contents. Credit: Popperfoto/Getty Images

During World War II, Dictaphone finally ditched the wax cylinders in favor of etching grooves on a plastic belt, although the new machines were available only to U.S. government agencies until the end of the war. In 1947, the company publicly introduced the new technology with its Time-Master series. Each Dictabelt held about 15 minutes of recording. Meanwhile, Edison’s Ediphone was rebranded the Voicewriter and recorded on distinctive red plastic discs.

This 1953 Edison Voicewriter recorded the speaker’s voice on plastic Diamond Discs. Magnetic tape came later. Credit: Cooper Hewitt/Smithsonian Design Museum/Smithsonian Institution

In the 1960s, Dictaphone finally embraced magnetic recording tape, in the form of cassette tapes. Pressure initially came from European companies, such as the Dutch electronics company Philips, which entered the U.S. market in 1958 with a low-priced tape-cartridge machine. Four years later, Philips introduced the Compact Cassette, which became the basis of today’s audio cassette. Transistorized electronics furthered miniaturization and made dictation machines much more portable. Eventually, solid-state storage replaced magnetic tape, and today, we all carry around a dictation device with an effectively infinite recording time via cloud storage, and, if we choose to use it, automatic transcription.

The Dictaphone in the Classroom

None of the stories about businessmen using (or abusing) Dictaphones really surprised me. What did surprise me were the creative ways the Dictaphone was used as a pedagogical tool.

In 1924, for example, Dwight Everett Watkins at the University of California described in a paper how his students used a microphone, an amplifier, a telemegaphone (a type of speaker), and a Dictaphone to aid in public speaking. The setup helped students understand their rhetorical imperfections: bad grammar and bad sentence and paragraph structure. It also helped with elocution—one of the early applications that Edison envisioned for his phonograph.

In 1933, George F. Meyer wrote about using the Dictaphone as an educational aid for blind and low-vision students in Minneapolis. Teachers recorded course material that would otherwise have had to be read aloud. And the students liked being able to listen to the material repeatedly without inconveniencing a human reader.

Students in 1930 listen to a Dictaphone recording, which the seated woman controls with foot pedals. Credit: George Rinhart/Corbis/Getty Images

In 1938, Frances M. Freeman wrote her master’s thesis on whether the Dictaphone could help typing students who were struggling to master the skill. Her study was supported by the Dictaphone Sales Corp., but unfortunately for the company, she concluded that using a Dictaphone offered no advantage in learning to type. She did find that the students in the Dictaphone group seemed more alert in class than students taught the traditional way.

That last finding was borne out in a 1964 experiment at Dunbar Elementary School in New Orleans, where the Dictaphone Corp. had outfitted an “electronic classroom.” The idea was to help reluctant students by fostering an environment where learning was fun. As Principal Beulah E. Brown related in an article about the experiment, she’d first encountered a Dictaphone several years earlier while on sabbatical and immediately saw its pedagogical potential. The Dictaphone, Brown wrote, promised individually tailored educational experiences, allowing students to focus on specific challenges and freeing the teacher to have more personal interactions with each child. Testimonials from Warren Honore’s fifth grade class attest to its success as an engaging technology.

From the Dictaphone to Email to AI

As a historian of technology, I loved learning that two heavyweights in the field, Melvin Kranzberg and Thomas Kuhn, were both committed fans of the Dictaphone. I also enjoyed meditating on the role of the dictaphone and other technology as a mediator in the writing process.

My research turned up Tolstoy’s Dictaphone: Technology and the Muse (Graywolf Press), a 1996 collection of essays edited by the literary critic Sven Birkerts. The title comes from an anecdote about the Russian writer Leo Tolstoy, who refused the offer of a Dictaphone because it would be “too dreadfully exciting” and would distract him from his literary pursuits. To form the volume, Birkerts posed questions to his authors concerning the place of self and soul in a society being bombarded with new forms of communication—namely, email and the internet.

Today, of course, our world is being shaped by AI, arguably an even bigger disrupter than email was in the 1990s or the Dictaphone was in the early 20th century. But then, technology is always trying to remake society, and the path it takes is never inevitable. Sometimes, when we’re lucky, it is delightfully surprising.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the January 2026 print issue as “This Machine Listened to ‘Dictators.’ ”

References


David Morton emailed me and suggested I write about the Dictaphone after hearing me give a talk about this series. Morton is an expert on the Dictaphone, and I leaned heavily on his book Off the Record: The Technology and Culture of Sound Recording in America (Rutgers University Press, 1999).

The Dictaphone Sales Corp. publicized its product with articles such as “Why Learn the Operation of the Dictating Machine” by L.C. Stowell, the company’s president, in the Journal of Business Education in 1935.

Articles on the pedagogical uses of the Dictaphone include George F. Meyer’s “The Dictaphone as an Aid in the Instruction of Children with Defective Vision,” published in The Teachers Forum in March 1933; Beulah E. Brown’s “ ‘Learning Is Fun’ With the Dictaphone Electronic Classroom—A Discussion,” published in the Journal of Negro Education, summer 1966; and Frances M. Freeman’s master’s thesis, “An Experimental Study of the Dictaphone Method of Teaching Typewriting to Retarded Students,” submitted to the Oklahoma Agricultural and Mechanical College in 1938. (For Freeman, “retarded” meant students who typed slowly and made a lot of mistakes.)

Global Giants Are Investing in Clean Tech Despite Politics

2026-01-06 03:00:03



The Trump administration has given corporations plenty of convenient excuses to retreat from their climate commitments, with its moves to withdraw from the Paris Agreement, roll back emissions regulations, and scale back clean energy incentives.

But will the world’s largest corporations follow its lead?

Some multinational companies have indeed scaled back. For instance, Wells Fargo dropped its goal for the companies the bank finances to reach net-zero emissions by 2050, saying the conditions necessary for meeting that goal, such as policy certainty, consumer behavior and the pace of clean technology development, hadn’t fully materialized. Oil giant BP told investors that earlier optimism about a fast transition to renewable energy was “misplaced” given the changing regulatory environment.


However, many others, including the world’s largest retailer, Walmart, aren’t trading their long-term risk planning for Washington’s focus on short-term cost savings. They are continuing their climate policies, but often doing so quietly to avoid scrutiny.

These companies still face ongoing pressure from state and local governments, the European Union, customers and other sources to reduce their impact on the climate. They also see ways to gain a competitive advantage from investing in a cleaner future.

For my new book, “Corporations at Climate Crossroads,” I interviewed executives and analyzed corporate climate actions and environmental performance of Global 500 and S&P 500 companies over the past decade.

These companies’ climate decisions are driven by a complex interplay of pressures from existing and future laws and the need to earn goodwill with employees, customers, investors, regulators, and others.

States wield influence, too

In the U.S., state climate regulations affect multinational corporations. That’s especially true in California – the world’s fourth largest economy and the state with the largest population.

While President Donald Trump dismantles U.S. climate policies and federal oversight, California and the European Union have moved in the opposite direction, becoming the de facto regulators for global businesses.



California’s newly enacted climate laws extend its cap-and-trade program, now called “cap and invest,” which is designed to ratchet down corporate emissions. They also lock in binding targets to reach net-zero greenhouse gas emissions by 2045. And they set clean-power levels that rival the European Union’s Green Deal and outpace most national governments.

Other states have joined California in committing to meet the goals of the international Paris climate agreement as part of the U.S. Climate Alliance. The bipartisan coalition of 24 governors, from Arizona’s to Vermont’s, represents over half of the U.S. population.



Several states have been considering “polluters pay” laws. These laws would require companies to pay for their contributions to climate change, with the money going into funds for adaptation projects. Vermont and New York passed such laws in 2024.

Climate laws still apply in Europe and elsewhere

Outside the U.S., several countries have climate regulations that multinational companies must follow.

The European Union remains a primary driver, though it has recently recalibrated its approach to focus on the largest corporations, reducing the administrative burden on smaller firms. The EU’s broader “Fit for 55” framework aims to cut its emissions by 55 percent by 2030 through policies like binding climate reporting rules. Most notably, the carbon tax for goods entering the EU has, as of January 2026, transitioned from a reporting exercise into a direct financial liability—a shift supported by initiatives to boost competitiveness in clean energy and green infrastructure.

Beyond Europe, companies face similar emissions reporting requirements in the United Kingdom, New Zealand, Singapore, California and cities such as Hong Kong.


While timelines for some of those laws have shifted, the underlying momentum remains. For example, while California temporarily halted a law requiring companies to publish narrative reports on their climate risks (SB 261), the mandate for hard emissions data (SB 253) remains on track for 2026. This “quantitative yes, qualitative maybe” status means that while companies can pause their storytelling, they must still invest in the hard data infrastructure required to count their carbon.

The International Court of Justice gave legal backing to such initiatives in July 2025 when it issued an advisory opinion establishing that countries around the globe have a legal obligation to protect the climate. That decision may ultimately increase pressure on global businesses to reduce their contributions to climate change.

Multinationals put pressure on supply chains

Multinational companies’ efforts to reduce their climate impact put pressure on their suppliers – meaning many more companies must take their climate impact into consideration.

For instance, U.S.-based Walmart operates over 10,000 stores across 19 countries and is the largest single buyer of goods in the world. That means it faces a wide range of regulations, including tracking and reducing emissions from its suppliers. In 2017, it launched Project Gigaton, aiming to cut 1 gigaton of supply-chain greenhouse gas emissions by 2030. Suppliers including Nestle, Unilever, Coca Cola, Samsung and Hanes helped the company reach its target six years early through practical measures such as boosting energy efficiency, redesigning packaging, and reducing food waste. While the data is verified through internal quality controls co-developed with NGOs like the Environmental Defense Fund, analysts at Planet Tracker note that these “avoided” emissions haven’t yet stopped Walmart’s absolute footprint from rising alongside its business growth.

In early 2025, this growth led Walmart to push back its interim deadlines for two of its most ambitious emissions reduction targets. Despite these delays, Walmart’s “emissions intensity”—the carbon produced per dollar of revenue—has fallen by roughly 47 percent over the last decade. Moreover, almost half of its electricity worldwide came from renewable energy in 2024, and the company is still targeting zero emissions from its operations by 2040.

There are profits to be made in clean tech

In addition to facing pressure from buyers and governments, companies see profits to be made from investing in climate-friendly clean technology.

Since 2016, investments in clean energy have outpaced those in fossil fuels globally. This trend has only accelerated, with nearly twice as much invested in clean energy as in fossil fuels in 2025.

Lately, myriad new business opportunities for multinational companies and start-ups alike have focused on meeting AI’s energy demand through clean energy.

From 2014 to 2024, the climate tech sector yielded total returns of nearly 200 percent, and U.S. investment in climate tech was still growing in 2025.

In the first half of 2025, close to one-fifth of the over 1,600 venture deals in climate tech were made by corporations for strategic reasons, such as technology access, supply chain integration, or future product offerings. Corporate strategic deals continued to represent about 20 to 23 percent of all climate tech equity transactions through the third and fourth quarters of 2025.

However, this surge in investment is more than a search for profit; it is a defensive necessity as the tech industry’s growth begins to collide with its environmental limits.

The AI energy paradox

The rapid expansion of AI is forcing multinational companies to make explicit choices about their climate priorities. While tech leaders once relied on annual renewable credits to meet climate targets, the scale of the AI power boom is forcing more rigorous carbon accounting. Global data centers are projected to consume more electricity than Japan by 2030, a shift that turns “voluntary” climate investments into a core business requirement for securing 24/7 energy supplies.

In 2025, the tech giants’ own reports revealed the scale of AI emissions. Microsoft’s 2025 Environmental Sustainability Report showed a 23.4 percent increase in total emissions since its 2020 baseline. Similarly, Google’s emissions have climbed 51 percent since 2019, with a 22 percent surge in Scope 3 (supply chain) emissions in 2024 alone. Amazon’s 2024 Sustainability Report noted a 33 percent jump since 2019, driven by the construction of new data centers. Meta’s suppliers’ emissions (99 percent of its total footprint) are being driven to new heights by the “embodied carbon” of AI hardware.

While high costs might tempt some to cut corners, climate action could instead become a hedge against energy volatility. Companies like Amazon and Google are securing reliable supply by leveraging federal fast-tracking of nuclear permits to act as primary offtakers for the first generation of Small Modular Reactors (SMRs). This shift is accelerated by new federal orders to bypass nuclear licensing hurdles, as seen in Google’s landmark agreement with Kairos Power and Amazon’s $500 million investment in X-energy—deals designed to secure the 24/7 “baseload” power AI requires without abandoning carbon-free commitments. Despite their advanced designs, SMRs are still provoking debate over their radioactive waste and the potential risks of deploying nuclear technology closer to populated industrial hubs.

Companies look to their customers and the future

As climate risks grow alongside political headwinds, companies are facing both pushes toward and pulls away from protecting the planet from catastrophic effects. Oil and gas companies, for example, continue to invest in new oil and gas development. However, they also forecast renewable energy growth accelerating and are investing in clean tech.

The corporate leaders I interviewed, from tech companies like Intel to sporting goods and apparel companies like Adidas, talked about aligning sustainability efforts and initiatives across their business globally whenever possible. This proactive approach allows them to more seamlessly collect data and respond to pressures arising domestically and globally, minimizing the need for costly patchwork efforts later. Moreover, global businesses know they will continue to face demands from their customers, investors and employees to be better stewards of the planet.


In a 2025 Getty Images survey of over 5,000 consumers across 25 countries, more than 80 percent of respondents reported that they expect clear ESG guidelines from businesses. Furthermore, these consumers—from Brazil, Australia and Japan to the UK and US—are increasingly using GenAI-driven shopping assistants to filter for “responsible business” practices.

U.S. market research from the Hartman Group corroborates this trend: 71 percent of surveyed food and beverage consumers consider environmental and social impacts in their purchasing decisions. They increasingly demand credible, tangible, and verifiable evidence. When claims carry third-party certifications, consumers demonstrate significantly higher trust, whereas vague or unsupported claims fuel skepticism.

In 2026, the “Climate Crossroads” is a line item on the corporate balance sheet. The divergence between federal deregulation in Washington and the rigid physical demands of the AI revolution has created a new era of corporate pragmatism. While some firms may use political shifts to “greenhush” or delay abstract pledges, the world’s largest corporations are finding that they cannot simply account away the massive energy and infrastructure requirements of the AI Age. AI-powered consumers are increasingly demanding responsible business and accountability on corporate net-zero pledges. In this new landscape, the global businesses that thrive will be those that build carbon-free foundations, while responding to existing and future laws across the globe.

This article is adapted by the author from The Conversation under a Creative Commons license. Read the original article.