2026-01-07 22:00:02

Imagine you’re at a bustling dinner party filled with laughter, music, and clinking silverware. You’re trying to follow a conversation across the table, but every word feels like it’s wrapped in noise. For most people, these types of party scenarios, where it’s difficult to filter out extraneous sounds and focus on a single source, are an occasional annoyance. For millions with hearing loss, they’re a daily challenge—and not just in busy settings.
Today’s hearing aids aren’t great at determining which sounds to amplify and which to ignore, and this often leaves users overwhelmed and fatigued. Even the routine act of conversing with a loved one during a car ride can be mentally draining, simply because the hum of the engine and road noises are magnified to create loud and constant background static that blurs speech.
In recent years, modern hearing aids have made impressive strides. They can, for example, use a technology called adaptive beamforming to focus their microphones in the direction of a talker. Noise-reduction settings also help decrease background cacophony, and some devices even use machine-learning-based analysis, trained on uploaded data, to detect certain environments—for example a car or a party—and deploy custom settings.
That’s why I was initially surprised to find out that today’s state-of-the-art hearing aids aren’t good enough. “It’s like my ears work but my brain is tired,” I remember one elderly man complaining, frustrated with the inadequacy of his cutting-edge noise-suppression hearing aids. At the time, I was a graduate student at the University of Texas at Dallas, surveying individuals with hearing loss. The man’s insight led me to a realization: Mental strain is an unaddressed frontier of hearing technology.
But what if hearing aids were more than just amplifiers? What if they were listeners too? I envision a new generation of intelligent hearing aids that not only boost sound but also read the wearer’s brain waves and other key physiological markers, enabling them to react accordingly to improve hearing and counter fatigue.
Until last spring, when I took time off to care for my child, I was a senior audio research scientist at Harman International, in Los Angeles. My work combined cognitive neuroscience, auditory prosthetics, and the processing of biosignals, which are measurable physiological cues that reflect our mental and physical state. I’m passionate about developing brain-computer interfaces (BCIs) and adaptive signal-processing systems that make life easier for people with hearing loss. And I’m not alone. A number of researchers and companies are working to create smart hearing aids, and it’s likely they’ll come on the market within a decade.
Two technologies in particular are poised to revolutionize hearing aids, offering personalized, fatigue-free listening experiences: electroencephalography (EEG), which tracks brain activity, and pupillometry, which uses eye measurements to gauge cognitive effort. These approaches might even be used to improve consumer audio devices, transforming the way we listen everywhere.
More than 430 million people suffer from disabling hearing loss worldwide, including 34 million children, according to the World Health Organization. And the problem will likely get worse due to rising life expectancies and the fact that the world itself seems to be getting louder. By 2050, an estimated 2.5 billion people will suffer some degree of hearing loss and 700 million will require intervention. On top of that, as many as 1.4 billion of today’s young people—nearly half of those aged 12 to 34—could be at risk of permanent hearing loss from listening to audio devices at unsafe volumes for too long.
Every year, close to a trillion dollars is lost globally to unaddressed hearing loss, and that figure is likely to keep growing. The economic cost doesn’t account for the significant emotional and physical toll on the hearing impaired, including isolation, loneliness, depression, shame, anxiety, sleep disturbances, and loss of balance.
Flex-printed electrode arrays, such as these from the Fraunhofer Institute for Digital Media Technology, offer a comfortable option for collecting high-quality EEG signals. Leona Hofmann/Fraunhofer IDMT
And yet, despite widespread availability, hearing aid adoption remains low. According to a 2024 study published in The Lancet, only about 13 percent of American adults with hearing loss regularly wear hearing aids. Key reasons for this low uptake include discomfort, stigma, cost—and, crucially, frustration with the poor performance of hearing aids in noisy environments.
Historically, hearing technology has come a long way. As early as the 13th century, people began using horns of cows and rams as “ear trumpets.” Commercial versions made of various materials, including brass and wood, came on the market in the early 19th century. (Beethoven, who famously began losing his hearing in his twenties, used variously shaped ear trumpets, some of which are now on display in a museum in Bonn, Germany.) But these contraptions were so bulky that users had to hold them with their hands or wear them within headbands. To avoid stigma, some even hid hearing aids inside furniture to mask their disability. In 1819, a special acoustic chair was designed for the king of Portugal, featuring arms ornately carved to look like open lion mouths, which helped transmit sound to the king’s ear via speaking tubes.
Modern hearing aids came into being after the advent of electronics in the early 20th century. Early devices used vacuum tubes and then transistors to amplify sound, shrinking over time from bulky body-worn boxes to discreet units that fit behind or inside the ear. At their core, today’s hearing aids still work on the same principle: A microphone picks up sound, a processor amplifies and shapes it to match the user’s hearing loss, and a tiny speaker delivers the adjusted sound into the ear canal.
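As a rough illustration of that processing step, here is a minimal Python sketch of frequency-dependent gain shaping. The band edges and gains are invented placeholders, not any manufacturer’s actual fitting formula.

```python
import numpy as np

# Hypothetical per-band gains (dB), loosely mimicking a fit to an audiogram:
# boost high frequencies more, where age-related loss is typically worst.
BAND_EDGES_HZ = [0, 500, 1000, 2000, 4000, 8000]
BAND_GAIN_DB = [5, 10, 15, 25, 30]  # one gain per band

def shape_frame(frame, sample_rate):
    """Apply frequency-dependent gain to one block of microphone samples."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    for lo, hi, gain_db in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:], BAND_GAIN_DB):
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10 ** (gain_db / 20.0)   # dB -> linear amplitude
    shaped = np.fft.irfft(spectrum, n=len(frame))
    return np.clip(shaped, -1.0, 1.0)              # crude limiter for the tiny speaker

# Example: shape a 10-ms block of a 16-kHz microphone signal.
mic_block = np.random.randn(160) * 0.01
out_block = shape_frame(mic_block, sample_rate=16_000)
```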
Today’s best-in-class devices, like those from Oticon, Phonak, and Starkey, have pioneered increasingly advanced technologies, including the aforementioned beamforming microphones, frequency lowering to shift high-pitched sounds and voices into a range the wearer can hear more easily, and machine learning to recognize and adapt to specific environments. For example, the device may reduce amplification in a quiet room to avoid magnifying background hums, or increase amplification in a noisy café to make speech more intelligible.
Advances in the AI technique of deep learning, which relies on artificial neural networks to automatically recognize patterns, also hold enormous promise. Using context-aware algorithms, this technology can, for example, be used to help distinguish between speech and noise, predict and suppress unwanted clamor in real time, and attempt to clean up speech that is muffled or distorted.
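To make the idea concrete, here is a toy Python sketch of mask-based noise suppression, the general shape of what many deep-learning denoisers do. In a real system a trained network would predict the mask; here it comes from a crude noise-floor estimate, and every parameter is illustrative.

```python
import numpy as np

def suppress_noise(frames, noise_floor_percentile=20, min_gain=0.1):
    """Toy spectral-mask denoiser: attenuate bins that stay near the noise floor.

    `frames` is a 2-D array of time-domain blocks (n_frames x block_len).
    A trained network would predict the mask; here we estimate it from statistics.
    """
    spectra = np.fft.rfft(frames, axis=1)
    magnitudes = np.abs(spectra)
    # Estimate a per-frequency noise floor from the quieter frames.
    noise_floor = np.percentile(magnitudes, noise_floor_percentile, axis=0)
    # Soft mask: close to 1 where a bin stands out, near min_gain where it doesn't.
    mask = np.clip(1.0 - noise_floor / (magnitudes + 1e-12), min_gain, 1.0)
    return np.fft.irfft(spectra * mask, n=frames.shape[1], axis=1)

# Example: 100 blocks of 512 samples each.
noisy = np.random.randn(100, 512)
cleaned = suppress_noise(noisy)
```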
The problem? As of right now, consumer systems respond only to external acoustic environments and not to the internal cognitive state of the listener—which means they act on imperfect and incomplete information. So, what if hearing aids were more empathetic? What if they could sense when the listener’s brain feels tired or overwhelmed and automatically use that feedback to deploy advanced features?
When it comes to creating intelligent hearing aids, there are two main challenges. The first is building convenient, power-efficient wearable devices that accurately detect brain states. The second, perhaps more difficult step is decoding feedback from the brain and using that information to help hearing aids adapt in real time to the listener’s cognitive state and auditory experience.
Let’s start with EEG. This century-old noninvasive technology uses electrodes placed on the scalp to measure the brain’s electrical activity through voltage fluctuations, which are recorded as “brain waves.”
Brain-computer interfaces allow researchers to accurately determine a listener’s focus in multitalker environments. Here, professor Christopher Smalt works on an attention-decoding system at the MIT Lincoln Laboratory. MIT Lincoln Laboratory
Clinically, EEG has long been used for diagnosing epilepsy and sleep disorders, monitoring brain injuries, assessing hearing ability in infants and impaired individuals, and more. And while standard EEG requires conductive gel and bulky headsets, we now have versions that are far more portable and convenient. These breakthroughs have already allowed EEG to migrate from hospitals into the consumer tech space, driving everything from neurofeedback headbands to the BCIs in gaming and wellness apps that allow people to control devices with their minds.
The cEEGrid project at Oldenburg University, in Germany, positions lightweight adhesive electrodes around the ear to create a low-profile version. In Denmark, Aarhus University’s Center for Ear-EEG also has an ear-based EEG system designed for comfort and portability. While the signal-to-noise ratio is slightly lower compared to head-worn EEG, these ear-based systems have proven sufficiently accurate for gauging attention, listening effort, hearing thresholds, and speech tracking in real time.
For hearing aids, EEG technology can pick up brain-wave patterns that reveal how well a listener is following speech: When listeners are paying attention, their brain rhythms synchronize with the syllabic rhythms of discourse, essentially tracking the speaker’s cadence. By contrast, if the signal becomes weaker or less precise, it suggests the listener is struggling to comprehend and losing focus.
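One way to quantify this “speech tracking” is to correlate the low-frequency EEG with the speech envelope. The sketch below, written with NumPy and SciPy and using synthetic signals in place of real recordings, is a simplified stand-in for the multichannel regression models researchers actually use.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 100  # both signals resampled to 100 Hz for illustration

def lowpass(x, cutoff_hz, fs=FS, order=4):
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, x)

def speech_envelope(audio):
    """Slow amplitude envelope of the speech signal (syllable-rate fluctuations)."""
    return lowpass(np.abs(hilbert(audio)), cutoff_hz=8)

def tracking_score(eeg_channel, audio, max_lag_samples=30):
    """Peak correlation between band-limited EEG and the speech envelope.

    Real attention decoders fit multichannel regression models; a lagged
    correlation captures the same intuition in a few lines.
    """
    env = speech_envelope(audio)
    eeg = lowpass(eeg_channel, cutoff_hz=8)
    env = (env - env.mean()) / env.std()
    eeg = (eeg - eeg.mean()) / eeg.std()
    lags = range(0, max_lag_samples)  # EEG lags the stimulus by roughly 100-300 ms
    scores = [np.corrcoef(env[: len(env) - lag], eeg[lag:])[0, 1] for lag in lags]
    return max(scores)

# Example with synthetic data: one minute of "speech" and one EEG channel.
t = np.arange(60 * FS) / FS
audio = np.sin(2 * np.pi * 4 * t) * np.random.randn(len(t))   # 4-Hz modulated noise
eeg = np.roll(speech_envelope(audio), 20) + 0.5 * np.random.randn(len(t))
print(f"speech-tracking score: {tracking_score(eeg, audio):.2f}")
```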
During my own Ph.D. research, I observed firsthand how real-time brain-wave patterns, picked up by EEG, can reflect the quality of a listener’s speech cognition. For example, when participants successfully homed in on a single talker in a crowded room, their neural rhythms aligned nearly perfectly with that speaker’s voice. It was as if there were a brain-based spotlight on that speaker! But when background noise grew louder or the listener’s attention drifted, those patterns waned, revealing the strain of keeping up.
Today, researchers at Oldenburg University, Aarhus University, and MIT are developing attention-decoding algorithms specifically for auditory applications. For example, Oldenburg’s cEEGrid technology has been used to successfully identify which of two speakers a listener is trying to hear. In a related study, researchers demonstrated that Ear-EEG can track the attended speech stream in multitalker environments.
All of this could prove transformational in creating neuroadaptive hearing aids. If a listener’s EEG reveals a drop in speech tracking, the hearing aid could infer increased listening difficulty, even if ambient noise levels have remained constant. For example, if a hearing-impaired car driver can’t focus on a conversation due to mental fatigue caused by background noise, the hearing aid could switch on beamforming to better spotlight the passenger’s voice, as well as machine-learning settings to deploy sound canceling that blocks the din of the road.
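The adaptation logic itself could be quite simple. Here is a hypothetical sketch of such a control loop in Python; the thresholds, smoothing factor, and setting names are placeholders, not values from any real device.

```python
from dataclasses import dataclass

@dataclass
class AidSettings:
    beamforming: bool = False
    noise_suppression: str = "mild"   # "mild" or "aggressive"

class NeuroAdaptiveController:
    """Toy control loop: relax the aid when listening is easy, assist when it isn't."""

    def __init__(self, low=0.05, high=0.15, smoothing=0.5):
        self.low, self.high = low, high      # hysteresis thresholds on the score
        self.alpha = smoothing               # exponential smoothing factor
        self.smoothed = None
        self.settings = AidSettings()

    def update(self, tracking_score):
        """Feed in the latest EEG speech-tracking score (e.g., once per second)."""
        if self.smoothed is None:
            self.smoothed = tracking_score
        else:
            self.smoothed = self.alpha * self.smoothed + (1 - self.alpha) * tracking_score
        if self.smoothed < self.low:          # listener appears to be struggling
            self.settings = AidSettings(beamforming=True, noise_suppression="aggressive")
        elif self.smoothed > self.high:       # listening comfortably; back off
            self.settings = AidSettings(beamforming=False, noise_suppression="mild")
        return self.settings

controller = NeuroAdaptiveController()
for score in [0.20, 0.18, 0.06, 0.03, 0.02]:  # tracking collapses mid-conversation
    print(controller.update(score))
```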
Of course, there are several hurdles to cross before commercialization becomes possible. For one thing, EEG-paired hearing aids will need to handle the fact that neural responses differ from person to person, which means they will likely need to be calibrated individually to capture each wearer’s unique brain-speech patterns.
Additionally, EEG signals are themselves notoriously “noisy,” especially in real-world environments. Luckily, we already have algorithms and processing tools for cleaning and organizing these signals so computer models can search for key patterns that predict mental states, including attention drift and fatigue.
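As a simplified example of that kind of cleanup, the sketch below band-passes an EEG channel to the low frequencies relevant for speech tracking and rejects one-second blocks whose amplitude suggests a blink or movement artifact. The filter band, sampling rate, and rejection threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def clean_eeg(channel, fs=250, band=(1.0, 8.0), artifact_uV=150.0, block_s=1.0):
    """Band-pass an EEG channel and drop blocks contaminated by large artifacts.

    Returns (blocks, kept): the filtered one-second blocks and a boolean mask
    marking which of them passed the simple amplitude check.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, channel)
    block_len = int(block_s * fs)
    n_blocks = len(filtered) // block_len
    blocks = filtered[: n_blocks * block_len].reshape(n_blocks, block_len)
    kept = np.max(np.abs(blocks), axis=1) < artifact_uV   # reject blinks/movement
    return blocks, kept

# Example: 30 s of synthetic EEG (in microvolts) with a blink-like artifact at 10 s.
fs = 250
eeg = 20 * np.random.randn(30 * fs)
eeg[10 * fs : 10 * fs + fs // 4] += 800     # simulated blink/movement artifact
blocks, kept = clean_eeg(eeg, fs=fs)
print(f"kept {kept.sum()} of {len(kept)} one-second blocks")
```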
Commercial versions of EEG-paired hearing aids will also need to be small and energy-efficient when it comes to signal processing and real-time computation. And getting them to work reliably, despite head movement and daily activity, will be no small feat. Importantly, companies will need to resolve ethical and regulatory considerations, such as data ownership. To me, these challenges seem surmountable, especially with technology progressing at a rapid clip.
Now let’s consider a second way of reading brain states: through the listener’s eyes.
When a person has trouble hearing and starts feeling overwhelmed, the body reacts. Heart-rate variability diminishes, indicating stress, and sweating increases. Researchers are investigating how these types of autonomic nervous-system responses can be measured and used to create smart hearing aids. For the purposes of this article, I will focus on a response that seems especially promising—namely, pupil size.
Pupillometry is the measurement of pupil size and how it changes in response to stimuli. We all know that pupils expand or contract depending on light brightness. As it turns out, pupil size is also an accurate means of evaluating attention, arousal, mental strain—and, crucially, listening effort.
Pupil size is determined by both external stimuli, such as light, and internal stimuli, such as fatigue or excitement. Chris Philpot
In recent years, studies at University College London and Leiden University have demonstrated that pupil dilation is consistently greater in hearing-impaired individuals when processing speech in noisy conditions. Research has also shown pupillometry to be a sensitive, objective correlate of speech intelligibility and mental strain. It could therefore offer a feedback mechanism for user-aware hearing aids that dynamically adjust amplification strategies, directional focus, or noise reduction based not just on the acoustic environment but on how hard the user is working to comprehend speech.
While more straightforward than EEG, pupillometry presents its own engineering challenges. Unlike ear-based EEG, which can pick up signals from behind or around the ear, pupillometry requires a direct line of sight to the pupil, necessitating a stable, front-facing camera-to-eye configuration—which isn’t easy to achieve when a wearer is moving around in real-world settings. On top of that, most pupil-tracking systems require infrared illumination and high-resolution optical cameras, which are too bulky and power intensive for the tiny housings of in-ear or behind-the-ear hearing aids. All this makes it unlikely that standalone hearing aids will include pupil-tracking hardware in the near future.
A more viable approach may be pairing hearing aids with smart glasses or other wearables that contain the necessary eye-tracking hardware. Products from companies like Tobii and Pupil Labs already offer real-time pupillometry via lightweight headgear for use in research, behavioral analysis, and assistive technology for people with medical conditions that limit movement but leave eye control intact. Apple’s Vision Pro and other augmented reality or virtual reality platforms also include built-in eye-tracking sensors that could support pupillometry-driven adaptations for audio content.
Smart glasses that measure pupil size, such as these made by Tobii, could help determine listening strain. Tobii
Once pupil data is acquired, the next step will be real-time interpretation. Here, again, is where machine learning can use large datasets to detect patterns signifying increased cognitive load or attentional shifts. For instance, if a listener’s pupils dilate beyond their light-adjusted baseline during a conversation, signaling strain, the hearing aid could automatically engage a more aggressive noise-suppression mode or narrow its directional microphone beam. These types of systems can also learn from contextual features, such as time of day or prior environments, to continuously refine their response strategies.
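A hypothetical version of that trigger might look like the sketch below, which normalizes pupil diameter against a crude light-driven prediction and the listener’s own recent baseline. Every constant here is a placeholder rather than a validated model.

```python
import numpy as np

def listening_strain(pupil_mm, ambient_lux, baseline_window=30):
    """Toy estimate of listening effort from pupil diameter.

    Divides out a rough light-driven prediction of pupil size, then compares the
    residual dilation to the listener's own recent baseline. All constants are
    illustrative placeholders, not validated model parameters.
    """
    pupil_mm = np.asarray(pupil_mm, dtype=float)
    ambient_lux = np.asarray(ambient_lux, dtype=float)
    # Brighter light -> smaller expected pupil (very rough monotone model).
    light_predicted = 6.0 - 1.2 * np.log10(ambient_lux + 1.0)
    residual = pupil_mm - light_predicted
    baseline = np.median(residual[:baseline_window])
    spread = np.std(residual[:baseline_window]) + 1e-6
    return (residual - baseline) / spread   # z-score of effort-related dilation

# Example: one sample per second over two minutes; strain builds after 60 s.
lux = np.full(120, 200.0)
pupil = 3.2 + 0.05 * np.random.randn(120)
pupil[60:] += 0.4                              # sustained dilation under noise
strain = listening_strain(pupil, lux)
if strain[-10:].mean() > 2.0:                  # sustained, well above baseline
    print("engage aggressive noise suppression / narrow the microphone beam")
```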
While no commercial hearing aid currently integrates pupillometry, adjacent industries are moving quickly. Emteq Labs is developing “emotion-sensing” glasses that combine facial and eye tracking, along with pupil measurement, to do things like evaluate mental health and capture consumer insights. Ethical controversies aside—just imagine what dystopian governments might do with emotion-reading eyewear!—such devices show that it’s feasible to embed biosignal monitoring in consumer-grade smart glasses.
Back at the dinner party, it remains nearly impossible to participate in conversation. “Why even bother going out?” some ask. But that will soon change.
We’re at the cusp of a paradigm shift in auditory technology, from device-centered to user-centered innovation. In the next five years, we may see hybrid solutions where EEG-enabled earbuds work in tandem with smart glasses. In 10 years, fully integrated biosignal-driven hearing aids could become the standard. And in 50? Perhaps audio systems will evolve into cognitive companions, devices that adjust, advise, and align with our mental state.
Personalizing hearing-assistance technology isn’t just about improving clarity; it’s also about easing mental fatigue, reducing social isolation, and empowering people to engage confidently with the world. Ultimately, it’s about restoring dignity, connection, and joy.
2026-01-06 21:00:02

Thanks to Hollywood, whenever I think of a Dictaphone, my imagination immediately jumps to a mid-20th-century office, Don Draper suavely seated at his desk, voicing ad copy into a desktop machine. A perfectly coiffed woman from the secretarial pool then takes the recordings and neatly types them up, with carbon copies of course.
I had no idea the Dictaphone actually had its roots in the 19th century and a rivalry between two early tech giants: Alexander Graham Bell and Thomas Edison. And although it took decades to take hold in the modern office, it found novel uses in other fields.
The Dictaphone was born from the competition and the cooperation of Bell and Edison and their capable teams of researchers. In 1877, Edison had introduced the phonograph, which he later declared his favorite invention. And yet he wasn’t quite certain about its commercial applications. Initially, he thought it might be good for recording telephone messages. Then he began to imagine other uses: a mechanical stenographer for businessmen, a notetaker for students, an elocution instructor, a talking book for the blind. The playback of recorded music—the phonograph’s eventual killer app—was No. 4 on Edison’s list. And after a few public demonstrations, he set aside the invention to pursue other interests.
Thomas Edison’s early phonograph from 1877 used a needle to record sound waves on a rotating cylinder wrapped with tinfoil. Thomas Edison National Historical Park/National Park Service/U.S. Department of the Interior
Enter Bell. In 1880, the French government had awarded Bell the Volta Prize and 50,000 francs (about US $10,000 at the time) for his invention of the telephone. The following year, he, his cousin Chichester A. Bell, and Charles Sumner Tainter used the prize money to found the Volta Laboratory Association in Washington, D.C., to do research on sound recording and transmission.
Tainter saw potential in the phonograph. Edison’s version used a needle to etch sound waves on a sheet of tinfoil wrapped around a metal cylinder. The foil was easily damaged, the sound quality was distorted and squeaky, and the cylinder could be replayed only a few times before degrading and becoming inaudible. Edison’s phonograph couldn’t be easily commercialized, in other words.
Chichester Bell and Tainter greatly improved the sound quality by replacing the tinfoil with wax-coated cardboard cylinders. By 1886, the researchers at Volta Lab had a patented product: the Graphophone.
Two colleagues of Alexander Graham Bell refined Edison’s phonograph in the 1880s to create the Graphophone, which used wax-coated cardboard cylinders rather than tinfoil. Universal History Archive/Getty Images
Bell and Tainter believed the Graphophone would find greatest use as a mechanical stenographer. As a “dictator,” you would speak into the tube, and a stylus would trace the sound wave on the wax cylinder. The cylinder would then be handed off to a secretary for transcription. Typists used playback machines with foot pedals to control the speed of the recording and to reverse and repeat as necessary.
A manufacturing company set up by Volta Lab sold several machines to the U.S. government. One enthusiastic early adopter was Edward D. Easton, a noted stenographer for the U.S. Congress and the Supreme Court. Although Easton took notes in shorthand, he immediately recited his notes into the Graphophone after each session.
Easton became an evangelist for the instrument, writing glowing accounts in a trade magazine. The machine made no mistakes and could take dictation as fast as the speaker could articulate. The phonograph never complained when a transcriber needed a phrase repeated. The phonograph didn’t suffer from poor penmanship. Anyone could learn to use the machine in two weeks or less, compared to months or years to master stenography. Such were Easton’s claims. (Easton was such a fan that he cofounded the Columbia Phonograph Co., which went on to become a leading maker of phonographs and recorded music and lives on today as Columbia Records.)
Before long, several companies were manufacturing and selling phonographs and dictation machines. Even though demand was initially light, patent-infringement lawsuits sprang up, which soon threatened to bankrupt all of the companies involved. Finally, in 1896, the various parties agreed to stop fighting and to cross-license each other’s intellectual property. This didn’t end the Bell-Edison rivalry, but it allowed the phonograph business to take off in earnest, aided by the sales of mass-produced recorded music cylinders. And the accepted name for this entertainment machine became the phonograph.
But Bell, Tainter, and Edison didn’t forget the original promise of mechanical stenography, and the rivals soon came out with competing and very similar products designed specifically for dictation: the Dictaphone and the Ediphone. The public found it difficult to distinguish the two products, and it wasn’t long before “dictaphone” was being used to describe all dictation machines. (The Columbia Graphophone Co. trademarked “Dictaphone” in 1907—a confusing neologism of dicta from the Latin for “sayings” or “say repeatedly” and phone from the Greek for “voice” or “sound.”)
As David Morton recounts in his 1999 book Off the Record (Rutgers University Press), Dictaphone sales accelerated as scientific management for business began to take root. Office managers intent on streamlining, standardizing, and systemizing workflows saw the Dictaphone as a labor-saving device. In 1912, for instance, an efficiency commission set up by U.S. President William Taft endorsed the use of dictation machines in government offices. The railroad and insurance industries followed suit as they standardized their financial records. Later, managers began using dictation machines to conquer their business correspondence.
A Congressional reporter uses a Dictaphone in 1908. The U.S. government was an early adopter of the machines. Library of Congress
And yet, the Dictaphone wasn’t obviously destined to become an indispensable piece of office equipment like the typewriter. In 1923, for instance, 15,000 dictation machines were sold in the United States, versus 744,000 typewriters.
In 1926, the Dictaphone Corp. tried to drum up interest by sponsoring Henry Lunn, founder of a large U.K. travel company, on an around-the-world lecture tour. At each hotel he visited, the company ensured there was a Dictaphone for Lunn to record his diary. Consider this a prototype for the modern hotel business center. At the end of his journey, Lunn published Round the World With a Dictaphone—part travelogue, part proselytizing for Christian churches to support the League of Nations, and part Dictaphone promotion. Even so, by 1945, Dictaphone estimated that only 15 to 25 percent of the potential market had been captured.
There were social reasons working against dictation machines, Morton says in his book. Executives relied on their secretaries not only for dictation and transcription, but also for their often unacknowledged aid in prompting, correcting, and filling in their bosses’ thoughts—the soft skills that a machine could not replace.
Morton also attributes the slow uptake to the technology itself. One quirk of the Dictaphone is that it continued to use wax cylinders long after phonograph players had switched to discs. Transcribers often complained that the wax recordings were unintelligible—dictators needed to speak directly into the speaking tube, loudly, clearly, and at an appropriate pace, but many did not.
A secretary plays back the sound from a recorded Ediphone cylinder in 1930 to transcribe the cylinder’s contents. Popperfoto/Getty Images
During World War II, Dictaphone finally ditched the wax cylinders in favor of etching grooves on a plastic belt, although the new machines were available only to U.S. government agencies until the end of the war. In 1947, the company publicly introduced the new technology with its Time-Master series. Each Dictabelt held about 15 minutes of recording. Meanwhile, Edison’s Ediphone was rebranded the Voicewriter and recorded on distinctive red plastic discs.
This 1953 Edison Voicewriter recorded the speaker’s voice on plastic Diamond Discs. Magnetic tape came later. Cooper Hewitt/Smithsonian Design Museum/Smithsonian Institution
In the 1960s, Dictaphone finally embraced magnetic recording tape, in the form of cassette tapes. Pressure initially came from European companies, such as the Dutch electronics company Philips, which entered the U.S. market in 1958 with a low-priced tape-cartridge machine. Four years later, Philips introduced the Compact Cassette, which became the basis of today’s audio cassette. Transistorized electronics furthered miniaturization and made dictation machines much more portable. Eventually, solid-state storage replaced magnetic tape, and today, we all carry around a dictation device with an effectively infinite recording time via cloud storage, and, if we choose to use it, automatic transcription.
None of the stories about businessmen using (or abusing) Dictaphones really surprised me. What did surprise me were the creative ways the Dictaphone was used as a pedagogical tool.
In 1924, for example, Dwight Everett Watkins at the University of California described in a paper how his students used a microphone, an amplifier, a telemegaphone (a type of speaker), and a Dictaphone to aid in public speaking. The setup helped students understand their rhetorical imperfections: bad grammar and bad sentence and paragraph structure. It also helped with elocution—one of the early applications that Edison envisioned for his phonograph.
In 1933, George F. Meyer wrote about using the Dictaphone as an educational aid for blind and low-vision students in Minneapolis. Teachers recorded course material that would otherwise have had to be read aloud. And the students liked being able to listen to the material repeatedly without inconveniencing a human reader.
Students in 1930 listen to a Dictaphone recording, which the seated woman controls with foot pedals. George Rinhart/Corbis/Getty Images
In 1938, Frances M. Freeman wrote her master’s thesis on whether the Dictaphone could help typing students who were struggling to master the skill. Her study was supported by the Dictaphone Sales Corp., but unfortunately for the company, she concluded that using a Dictaphone offered no advantage in learning to type. She did find that the students in the Dictaphone group seemed more alert in class than students taught the traditional way.
That last finding was borne out in a 1964 experiment at Dunbar Elementary School in New Orleans, where the Dictaphone Corp. had outfitted an “electronic classroom.” The idea was to help reluctant students by fostering an environment where learning was fun. As Principal Beulah E. Brown related in an article about the experiment, she’d first encountered a Dictaphone several years earlier while on sabbatical and immediately saw its pedagogical potential. The Dictaphone, Brown wrote, promised individually tailored educational experiences, allowing students to focus on specific challenges and freeing the teacher to have more personal interactions with each child. Testimonials from Warren Honore’s fifth grade class attest to its success as an engaging technology.
As a historian of technology, I loved learning that two heavyweights in the field, Melvin Kranzberg and Thomas Kuhn, were both committed fans of the Dictaphone. I also enjoyed meditating on the role of the dictaphone and other technology as a mediator in the writing process.
My research turned up Tolstoy’s Dictaphone: Technology and the Muse (Graywolf Press), a 1996 collection of essays edited by the literary critic Sven Birkerts. The title comes from an anecdote about the Russian writer Leo Tolstoy, who refused the offer of a Dictaphone because it would be “too dreadfully exciting” and would distract him from his literary pursuits. To form the volume, Birkerts posed questions to his authors concerning the place of self and soul in a society being bombarded with new forms of communication—namely, email and the internet.
Today, of course, our world is being shaped by AI, arguably an even bigger disrupter than email was in the 1990s or the Dictaphone was in the early 20th century. But then, technology is always trying to remake society, and the path it takes is never inevitable. Sometimes, when we’re lucky, it is delightfully surprising.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the January 2026 print issue as “This Machine Listened to ‘Dictators.’ ”
David Morton emailed me and suggested I write about the Dictaphone after hearing me give a talk about this series. Morton is an expert on the Dictaphone, and I leaned heavily on his book Off the Record: The Technology and Culture of Sound Recording in America (Rutgers University Press, 1999).
The Dictaphone Sales Corp. publicized its product with articles such as “Why Learn the Operation of the Dictating Machine” by L.C. Stowell, the company’s president, in the Journal of Business Education in 1935.
Articles on the pedagogical uses of the Dictaphone include George F. Meyer’s “The Dictaphone as an Aid in the Instruction of Children with Defective Vision,” published in The Teachers Forum in March 1933; Beulah E. Brown’s “ ‘Learning Is Fun’ With the Dictaphone Electronic Classroom—A Discussion,” published in the Journal of Negro Education, summer 1966; and Frances M. Freeman’s master’s thesis, “An Experimental Study of the Dictaphone Method of Teaching Typewriting to Retarded Students,” submitted to the Oklahoma Agricultural and Mechanical College in 1938. (For Freeman, “retarded” meant students who typed slowly and made a lot of mistakes.)
2026-01-06 03:00:03

The Trump administration has given corporations plenty of convenient excuses to retreat from their climate commitments, with its moves to withdraw from the Paris Agreement, roll back emissions regulations, and scale back clean energy incentives.
But will the world’s largest corporations follow its lead?
Some multinational companies have indeed scaled back. For instance, Wells Fargo dropped its goal for the companies the bank finances to reach net-zero emissions by 2050, saying the conditions necessary for meeting that goal, such as policy certainty, consumer behavior and the pace of clean technology development, hadn’t fully materialized. Oil giant BP told investors that earlier optimism about a fast transition to renewable energy was “misplaced” given the changing regulatory environment.

However, many others, including the world’s largest retailer, Walmart, aren’t trading their long-term risk planning for Washington’s focus on short-term cost savings. They are continuing their climate policies, but often doing so quietly to avoid scrutiny.
These companies still face ongoing pressure from state and local governments, the European Union, customers and other sources to reduce their impact on the climate. They also see ways to gain a competitive advantage from investing in a cleaner future.
For my new book, “Corporations at Climate Crossroads,” I interviewed executives and analyzed corporate climate actions and environmental performance of Global 500 and S&P 500 companies over the past decade.
These companies’ climate decisions are driven by a complex interplay of pressures from existing and future laws and the need to earn goodwill with employees, customers, investors, regulators, and others.
In the U.S., state climate regulations affect multinational corporations. That’s especially true in California – the world’s fourth largest economy and the state with the largest population.
While President Donald Trump dismantles U.S. climate policies and federal oversight, California and the European Union have moved in the opposite direction, becoming the de facto regulators for global businesses.
California’s newly enacted climate laws extend its cap-and-trade program, now called “cap and invest,” which is designed to ratchet down corporate emissions. They also lock in binding targets to reach net-zero greenhouse gas emissions by 2045. And they set clean-power targets that rival the European Union’s Green Deal and outpace most national governments.
Other states have joined California in committing to meet the goals of the international Paris climate agreement as part of the U.S. Climate Alliance. The bipartisan coalition of 24 governors, from Arizona’s to Vermont’s, represents over half of the U.S. population.
Several states have been considering “polluters pay” laws. These laws would require companies to pay for their contributions to climate change, with the money going into funds for adaptation projects. Vermont and New York passed similar laws in 2024.
Outside the U.S., several countries have climate regulations that multinational companies must follow.
The European Union remains a primary driver, though it has recently recalibrated its approach to focus on the largest corporations, reducing the administrative burden on smaller firms. The EU’s broader “Fit for 55” framework aims to cut the bloc’s emissions by at least 55 percent from 1990 levels by 2030 through policies like binding climate reporting rules. Most notably, the carbon tax for goods entering the EU has, as of January 2026, transitioned from a reporting exercise into a direct financial liability—a shift supported by initiatives to boost competitiveness in clean energy and green infrastructure.
Beyond Europe, companies face similar emissions reporting requirements in the United Kingdom, New Zealand, Singapore, California and cities such as Hong Kong.
While timelines for some of those laws have shifted, the underlying momentum remains. For example, while California temporarily halted a law requiring companies to publish narrative reports on their climate risks (SB 261), the mandate for hard emissions data (SB 253) remains on track for 2026. This “quantitative yes, qualitative maybe” status means that while companies can pause their storytelling, they must still invest in the hard data infrastructure required to count their carbon.
The International Court of Justice gave legal backing to such initiatives in July 2025 when it issued an advisory opinion establishing that countries around the globe have a legal obligation to protect the climate. That decision may ultimately increase pressure on global businesses to reduce their contributions to climate change.
Multinational companies’ efforts to reduce their climate impact put pressure on their suppliers – meaning many more companies must take their climate impact into consideration.
For instance, U.S.-based Walmart operates over 10,000 stores across 19 countries and is the largest single buyer of goods in the world. That means it faces a wide range of regulations, including tracking and reducing emissions from its suppliers. In 2017, it launched Project Gigaton, aiming to cut 1 gigaton of supply-chain greenhouse gas emissions by 2030. Suppliers including Nestle, Unilever, Coca Cola, Samsung and Hanes helped the company reach its target six years early through practical measures such as boosting energy efficiency, redesigning packaging, and reducing food waste. While the data is verified through internal quality controls co-developed with NGOs like the Environmental Defense Fund, analysts at Planet Tracker note that these “avoided” emissions haven’t yet stopped Walmart’s absolute footprint from rising alongside its business growth.
In early 2025, this growth led Walmart to push back its interim deadlines for two of its most ambitious emissions reduction targets. Despite these delays, Walmart’s “emissions intensity”—the carbon produced per dollar of revenue—has fallen by roughly 47 percent over the last decade. Moreover, almost half of its electricity worldwide came from renewable energy in 2024, and the company is still targeting zero emissions from its operations by 2040.
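Emissions intensity is simply emissions divided by revenue, which is how the figure can fall even while absolute emissions rise. The toy calculation below uses made-up numbers, not Walmart’s actual figures, to show the arithmetic.

```python
def emissions_intensity(total_emissions_tco2e, revenue_usd):
    """Carbon produced per dollar of revenue (here, tCO2e per US $1 million)."""
    return total_emissions_tco2e / (revenue_usd / 1e6)

# Illustrative numbers only: absolute emissions grow 20 percent while revenue
# grows about 2.3x, so intensity still falls by roughly 47 percent.
then = emissions_intensity(10_000_000, 200e9)   # 50 tCO2e per $1M
now = emissions_intensity(12_000_000, 452e9)    # about 26.5 tCO2e per $1M
print(f"intensity change: {100 * (now / then - 1):.0f}%")
```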
In addition to facing pressure from buyers and governments, companies see profits to be made from investing in climate-friendly clean technology.
Since 2016, investments in clean energy have outpaced those in fossil fuels globally. This trend has only accelerated, with nearly twice as much invested in clean energy as in fossil fuels in 2025.
Lately, myriad new business opportunities for multinational companies and start-ups alike have focused on meeting AI’s energy demand through clean energy.
From 2014 to 2024, the climate tech sector yielded total returns of nearly 200 percent, and U.S. investment in climate tech was still growing in 2025.
In the first half of 2025, close to one-fifth of the over 1,600 venture deals in climate tech were made by corporations for strategic reasons, such as technology access, supply chain integration, or future product offerings. Corporate strategic deals continued to represent about 20 to 23 percent of all climate tech equity transactions through the third and fourth quarters of 2025.
However, this surge in investment is more than a search for profit; it is a defensive necessity as the tech industry’s growth begins to collide with its environmental limits.
The rapid expansion of AI is forcing multinational companies to make explicit choices about their climate priorities. While tech leaders once relied on annual renewable credits to meet climate targets, the scale of the AI power boom is forcing more rigorous carbon accounting. Global data centers are projected to consume more electricity than Japan by 2030, a shift that turns “voluntary” climate investments into a core business requirement for securing 24/7 energy supplies.
In 2025, the tech giants’ own reports revealed the scale of AI emissions. Microsoft’s 2025 Environmental Sustainability Report showed a 23.4 percent increase in total emissions since its 2020 baseline. Similarly, Google’s emissions have climbed 51 percent since 2019, with a 22 percent surge in Scope 3 (supply chain) emissions in 2024 alone. Amazon’s 2024 Sustainability Report noted a 33 percent jump since 2019, driven by the construction of new data centers. Meta’s suppliers’ emissions (99 percent of its total footprint) are being driven to new heights by the “embodied carbon” of AI hardware.
While high costs might tempt some to cut corners, climate action could instead become a hedge against energy volatility. Companies like Amazon and Google are securing reliable supply by leveraging federal fast-tracking of nuclear permits to act as primary offtakers for the first generation of Small Modular Reactors (SMRs). This shift is accelerated by new federal orders to bypass nuclear licensing hurdles, as seen in Google’s landmark agreement with Kairos Power and Amazon’s $500 million investment in X-energy—deals designed to secure the 24/7 “baseload” power AI requires without abandoning carbon-free commitments. Despite their advanced designs, SMRs are still provoking debate over their radioactive waste and the potential risks of deploying nuclear technology closer to populated industrial hubs.
As climate risks grow alongside political headwinds, companies are facing both pushes toward and pulls away from protecting the planet from catastrophic effects. Oil and gas companies, for example, continue to invest in new oil and gas development. However, they also forecast renewable energy growth accelerating and are investing in clean tech.
The corporate leaders I interviewed, from tech companies like Intel to sporting goods and apparel companies like Adidas, talked about aligning sustainability efforts and initiatives across their business globally whenever possible. This proactive approach allows them to more seamlessly collect data and respond to pressures arising domestically and globally, minimizing the need for costly patchwork efforts later. Moreover, global businesses know they will continue to face demands from their customers, investors and employees to be better stewards of the planet.
In a 2025 Getty Images survey of over 5,000 consumers across 25 countries, more than 80 percent of respondents reported that they expect clear ESG guidelines from businesses. Furthermore, these consumers—from Brazil, Australia and Japan to the UK and US—are increasingly using GenAI-driven shopping assistants to filter for “responsible business” practices.
U.S. market research from the Hartman Group corroborates this trend: 71 percent of surveyed food and beverage consumers consider environmental and social impacts in their purchasing decisions. They increasingly demand credible, tangible, and verifiable evidence. When claims carry third-party certifications, consumers demonstrate significantly higher trust, whereas vague or unsupported claims fuel skepticism.
In 2026, the “Climate Crossroads” is a line item on the corporate balance sheet. The divergence between federal deregulation in Washington and the rigid physical demands of the AI revolution has created a new era of corporate pragmatism. While some firms may use political shifts to “greenhush” or delay abstract pledges, the world’s largest corporations are finding that they cannot simply account away the massive energy and infrastructure requirements of the AI Age. AI-powered consumers are increasingly demanding responsible business and accountability on corporate net-zero pledges. In this new landscape, the global businesses that thrive will be those that build carbon-free foundations, while responding to existing and future laws across the globe.
This article is adapted by the author from The Conversation under a Creative Commons license. Read the original article.
2026-01-05 21:00:02

If a data center is moving in next door, you probably live in the United States. More than half of all upcoming global data centers—as indicated by land purchased for data centers not yet announced, those under construction, and those whose plans are public—will be developed in the United States.
And these figures are likely underselling the near-term data-center dominance of the United States. Power usage varies widely among data centers, depending on land availability and whether the facility will provide liquid cooling or mixed-use services, says Tom Wilson, who studies energy systems at the Electric Power Research Institute. Because of these factors, “data centers in the U.S. are much larger on average than data centers in other countries,” he says.
Wilson adds that the dataset you see here—which comes from the analysis firm Data Center Map—may undercount new Chinese data centers because they are often not announced publicly. Chinese data-center plans are “just not in the repository of information used to collect data on other parts of the world,” he says. If information about China were up-to-date, he would still expect to see “the U.S. ahead, China somewhat behind, and then the rest of the world trailing.”
One thing that worries Wilson is whether the U.S. power grid can meet the rising energy demands of these data centers. “We’ve had flat demand for basically two decades, and now we want to grow. It’s a big system to grow,” he notes.
He thinks the best solution is asking data centers to be more flexible in their power use, maybe by scheduling complex computation for off-peak times or maintaining on-site batteries, removing part of the burden from the power grid. Whether such measures will be enough to keep up with demand remains an open question.
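In practice, that kind of flexibility can be as simple as deferring interruptible work away from the grid’s peak hours. The sketch below is a toy illustration; the peak window and job categories are assumptions, not any operator’s real policy.

```python
from dataclasses import dataclass

# Assumed evening peak window for the local grid (illustrative only).
PEAK_HOURS = set(range(17, 21))   # 5-9 p.m.

@dataclass
class Job:
    name: str
    deferrable: bool   # e.g., model training or batch analytics vs. live traffic

def schedule(jobs, hour_now):
    """Run non-deferrable work immediately; shift flexible work off the grid peak."""
    run_now, deferred = [], []
    for job in jobs:
        if job.deferrable and hour_now in PEAK_HOURS:
            deferred.append(job)       # wait for off-peak power (or on-site batteries)
        else:
            run_now.append(job)
    return run_now, deferred

jobs = [Job("serve-search-queries", deferrable=False),
        Job("retrain-recommendation-model", deferrable=True)]
now, later = schedule(jobs, hour_now=18)
print([j.name for j in now], [j.name for j in later])
```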
2026-01-04 21:00:01

When it comes to viewing nebulae, galaxies, and other deep-sky objects, amateur astronomers on a budget have had two options. They can look directly through a telescope eyepiece and perceive these spectacular objects as faint smudges that don’t even begin to capture their majesty, or they can capture long-exposure images with astrocameras and display the results on a screen or computer, which robs the stargazing experience of its immediacy.
Stand-alone telescope eyepieces with active light amplification do exist for real-time viewing, but commercial products are pricey, costing hundreds to thousands of dollars. I wanted something I could use for the public-astronomy observation nights that I organize in my community. So I decided to build a low-cost DIY amplifying eyepiece, to make it easier for visitors to observe deep-sky objects without requiring a large financial investment on my part.
I quickly realized there was already an industry replete with hardware for handling low-light conditions—the security-camera industry. Faced with the challenge of monitoring areas with a variety of lighting, often using cameras spread out over a large facility, makers of closed-circuit television (CCTV) cameras created a video standard that uses digital sensors to capture images but then transmits them as HD-resolution analog signals over coaxial cables. By using this Analog High Definition (AHD) transmission standard, you can attach new cameras to preexisting long cable runs and still get a high-quality image.
A CMOS-image sensor module from a security camera [top left], a USB capture card [bottom left], and an OLED viewfinder [right] process analog video data. James Provost
While I didn’t need the long-distance capability of these cameras, I was very interested in their low price and ability to handle dim conditions. The business end of these cameras is a module that integrates a CMOS image sensor with supporting electronics. After some research, I settled on a module that combined a 2-megapixel Sony IMX307 sensor with a supporting NVP2441 chipset.
The key factor was choosing a sensor-chipset combination that supports something called Starlight or Sens-Up mode. This makes the camera more sensitive to light than the human eye, albeit at the cost of a little speed. Images are created by integrating approximately 1.2 seconds of exposure time on the sensor. That might make for choppy security footage, but it’s not noticeable when making observations of nebulae and other astronomical objects (unless of course something really weird happens in the sky!)
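The sensitivity boost comes from integrating light over many short video frames, which averages away random sensor noise. The Python sketch below illustrates the trade-off with synthetic frames; the frame rate and noise level are assumptions, not measurements from my module.

```python
import numpy as np

def integrate_frames(frames):
    """Average a stack of short-exposure frames to mimic one long exposure.

    Averaging N frames cuts random sensor noise by roughly sqrt(N), which is why
    a ~1.2-second integration reveals faint objects that single frames bury in noise.
    """
    return np.mean(np.stack(frames), axis=0)

# Assume roughly 30 frames/s, so ~1.2 s of integration is about 36 frames.
rng = np.random.default_rng(0)
faint_object = np.zeros((480, 640))
faint_object[200:280, 300:380] = 2.0            # signal well below the noise floor
frames = [faint_object + rng.normal(0, 20, faint_object.shape) for _ in range(36)]

single_snr = faint_object.max() / 20
stacked = integrate_frames(frames)
stacked_noise = stacked[0:100, 0:100].std()      # background patch with no signal
print(f"SNR: {single_snr:.2f} per frame vs {faint_object.max() / stacked_noise:.2f} stacked")
```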
The existence of the Sens-up mode is actually part of the technical heritage of digital imaging sensors. CMOS sensors were developed as a successor to charge-coupled devices (CCDs), which were eagerly embraced by the astronomical community following their introduction in 1970, replacing long-exposure photographic plates. However, the ability to take exposure frames as long as one second is rarely something that CCTV cameras are designed for: It can be more of a drawback than a feature, leading to blurred images of moving objects or people.
As a result, this capability is rarely mentioned in the product descriptions, and so finding the right module was the most challenging part: I had to buy three different camera modules before finally landing on one that worked.
The output from the camera module is passed to a digital viewfinder, which displays both the video and control menus generated by the module. These menus are navigated using a four-way, press-to-select joystick that connects to a dedicated header on the module.
The output of the camera is also passed to a capture card that converts the analog signal to digital and provides a USB-C interface, which allows images to be seen and saved using a smartphone. All the electronics can be powered via battery for complete stand-alone operation or from a USB cable attached to the capture card.
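Because the capture card enumerates as an ordinary USB video device, grabbing and saving frames on a computer takes only a few lines. This sketch uses OpenCV; the device index and file names are assumptions that depend on your setup.

```python
import cv2  # pip install opencv-python

# The AHD-to-USB capture card typically shows up as a regular webcam.
DEVICE_INDEX = 0          # adjust if other cameras are attached
FRAMES_TO_SAVE = 10

cap = cv2.VideoCapture(DEVICE_INDEX)
if not cap.isOpened():
    raise RuntimeError("Capture card not found; check the USB connection/index.")

for i in range(FRAMES_TO_SAVE):
    ok, frame = cap.read()           # one digitized frame of the analog HD signal
    if not ok:
        break
    cv2.imwrite(f"eyepiece_{i:03d}.png", frame)   # save for later stacking/processing

cap.release()
```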
The analog HD module can be controlled directly using a joystick to navigate onscreen menus. Power can be provided externally via a USB-C connector on the capture card or via an optional battery pack. James Provost
The components fit in an enclosure I made from 3D-printed parts, designed to match the 32-millimeter diameter of most telescope eyepieces for easy mounting. The whole thing cost less than US $250.
I took my new amplifying eyepiece out with my Celestron C11 telescope to give it a try. Soon I had in my viewfinder the Dumbbell Nebula, also known as Messier 27/M27, which is normally quite hard to see. It was significantly brighter compared to a naked-eye observation. Certainly the difference wasn’t as marked as with a commercial rig that has noise-reducing cooling for the sensor electronics. But it was still an enormous improvement and for a fraction of the cost.
The Orion Nebula, some 1,340 light-years away. Jordan Blanchard
The amplifier is also more versatile: You can remove it from the telescope, and with a 2.8-mm HD lens fitted to the camera-module sensor, you can use it as a night-vision camera. That’s handy when trying to operate in dark outdoor conditions on starry nights!
For the future, I’d like to upgrade the USB-C capture module to one that can handle the sensor’s digital output directly, rather than just the analog signal. This would give a noticeable boost in resolution when recording or streaming to a phone or computer. Beyond that, I’m interested in finding another low-cost camera module with a longer exposure, and refining the 3D-printed housing so it’s easier to build and adapt to other observing setups. That way the eyepiece will stay affordable, but people can still push it toward more serious electronically assisted astronomy.
2026-01-03 22:00:01

In a few days, Las Vegas will be inundated with engineers, executives, investors, and members of the press—including me—for the annual Consumer Electronics Show, one of the largest tech events of the year.
If you can dream it, there’s a good chance it’ll be on display at CES 2026 (though admittedly, much of this tech won’t necessarily make it to the mainstream). There will be a range of AI toys, AI notetakers, and “AI companions,” exoskeletons and humanoid robots, and health tech to track your hormones, brain activity, and... bathroom activity.
This year’s event will have keynote addresses from the CEOs of tech giants including AMD and Lenovo, and thousands of booths from companies spanning legacy brands to brand-new startups.
I’m excited to stumble across unexpected new tech while wandering the show floor. But as I prepare for this year’s event, here are some of the devices that have already caught my eye.
Electroencephalography, or EEG, has been used in health care for decades to monitor neural activity. It usually involves a person wearing a whole helmet of electrodes, but scaled-down versions of the tech are now being integrated into consumer devices and may soon be ready for users.
Several neurotech companies using EEG will be at CES this year. For instance, Neurable (a company we’ve had on our radar for years) is announcing a new headset with gaming brand HyperX, which is meant to help players hone their focus by tracking brain activity in real time. Neurable’s over-ear EEG headphones for everyday use are now available for preorder. Naox will also bring its in-ear EEG tech to consumer-oriented earbuds. And Elemind, another company we’ve covered, aims to help you sleep with its headband. With wearables already monitoring vital signs, sleep, and activity, 2026 may be the year our brainwaves join the list of biosignals we can track on a daily basis.
RheoFit’s A1 massage roller basically looks like what would happen if a foam roller and a massage chair had a baby. The device automatically rolls itself down your spine and offers two replaceable surfaces: one harder material that mimics a masseuse’s knuckles, and a gentler option that’s meant to feel more like an open palm.
This will be the second year the RheoFit A1 is on display. To find out whether it’s worth the US $449 price tag, I’ll have to try it out on site.
Just before last year’s CES, several devices were awarded a new certification meant to reward tech designed to be less distracting. While most exhibitors will keep competing for our attention, a few are tapping into a desire for calm and clarity.
Minimalist tech company Mudita, for instance, will be showing its e-ink smartphone, which began shipping in 2025. The phone is designed to reduce screen time with its black-and-white, paperlike display, and the operating system is designed to run without Google. Like other not-as-smart phones (such as the Wisephone or Light Phone), the Mudita Kompakt offers the essentials—messaging, maps, camera, and so forth—without constant notifications.
Some new tech surprises users with an experience they didn’t know they wanted. Others aim to offer the solutions you’ve dreamt about. For me, French startup Allergen Alert falls into the second category.
The startup, one of the listed exhibitors at an early press event on 4 January, is developing a portable system to test food allergens in real time. I’ve been eating gluten-free for most of my life (not by choice), and I know how easily allergens can sneak into a dish with just a sprinkle of flour, or a dash of soy sauce—especially when you’re traveling or eating out. For many people with severe allergies, a device like this could be a lifesaver.
This story was updated on 5 January, 2026 to add that Neurable announced a new headset with HyperX. A previous version of this story incorrectly stated that the company was displaying its MW75 Neuro at CES. This has been removed.