
LEDs Enter the Nanoscale

2026-02-12 23:00:03



MicroLEDs, with pixels just micrometers across, have long been a buzzword in the display world. Now, microLED makers have begun shrinking their creations into the uncharted nano realm. In January, a startup named Polar Light Technologies unveiled prototype blue LEDs less than 500 nanometers across. This raises a tempting question: How far can LEDs shrink?

We already know the answer is considerably smaller than that: In the past year, two different research groups have demonstrated LED pixels measuring 100 nm or less.

These are some of the smallest LEDs ever created. Their efficiency leaves much to be desired—but one day, nanoLEDs could power ultra-high-resolution virtual reality displays and high-bandwidth on-chip photonics. And the key to making even tinier LEDs, if these early attempts are any indication, may be to make more unusual LEDs.

New Approaches to LEDs

Take Polar Light’s example. Like many LEDs, the Sweden-based startup’s diodes are fashioned from III-V semiconductors like gallium nitride (GaN) and indium gallium nitride (InGaN). Unlike many LEDs, which are etched into their semiconductor from the top down, Polar Light’s are instead fabricated by building peculiarly shaped hexagonal pyramids from the bottom up.

Polar Light designed its pyramids for the larger microLED market, and plans to start commercial production in late 2026. But the company also wanted to test how small its pyramids could shrink. So far, it has made pyramids 300 nm across. “We haven’t reached the limit yet,” says Oskar Fajerson, Polar Light’s CEO. “Do we know the limit? No, we don’t, but we can [make] them smaller.”

Elsewhere, researchers have already done that. Some of the world’s tiniest LEDs come from groups that have forgone standard III-V semiconductors in favor of other types of LEDs—like OLEDs.

“We are thinking of a different pathway for organic semiconductors,” says Chih-Jen Shih, a chemical engineer at ETH Zurich in Switzerland. Shih and his colleagues were interested in finding a way to fabricate small OLEDs at scale. Using an electron-beam lithography-based technique, they crafted arrays of green OLEDs with pixels as small as 100 nm across.

Where today’s best displays have 14,000 pixels per inch, these nanoLEDs—presented in an October 2025 Nature Photonics paper—can reach 100,000 pixels per inch.
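Pixels per inch and pixel pitch are just two ways of expressing the same number, so the density figures above can be sanity-checked with a quick conversion. Below is a minimal back-of-the-envelope sketch in Python; the pitch values are illustrative, since the papers report pixel sizes and the achievable density also depends on the spacing between pixels.

NM_PER_INCH = 25.4e6  # 25.4 mm per inch, expressed in nanometers

def ppi_from_pitch(pitch_nm):
    # Pixels per inch for a given center-to-center pixel pitch, in nanometers.
    return NM_PER_INCH / pitch_nm

def pitch_from_ppi(ppi):
    # Center-to-center pixel pitch, in nanometers, for a given pixels-per-inch figure.
    return NM_PER_INCH / ppi

print(round(pitch_from_ppi(14_000)))   # ~1,814 nm pitch for today's densest displays
print(round(pitch_from_ppi(100_000)))  # ~254 nm pitch for the reported nano-OLED arrays
print(round(ppi_from_pitch(500)))      # ~50,800 PPI if pixels sat on a 500-nm pitch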

Another group tried its hand with perovskites, cage-shaped materials best known for their prowess in high-efficiency solar panels. Perovskites have recently gained traction in LEDs too. “We wanted to see what would happen if we make perovskite LEDs smaller, all the way down to the micrometer and nanometer length-scale,” says Dawei Di, an engineer at Zhejiang University in Hangzhou, China.

Di’s group started with comparatively colossal perovskite LED pixels, measuring hundreds of micrometers. Then, they fabricated sequences of smaller and smaller pixels, each tinier than the last. Even after the 1 μm mark, they did not stop: 890 nm, then 440 nm, only bottoming out at 90 nm. These 90 nm red and green pixels, presented in a March 2025 Nature paper, likely represent the smallest LEDs reported to date.

Efficiency Challenges

Unfortunately, small size comes at a cost: Shrinking LEDs also shrinks their efficiency. Di’s group’s perovskite nanoLEDs have external quantum efficiencies—a measure of how many injected electrons are converted into photons—around 5 to 10 percent; Shih’s group’s nano-OLED arrays performed slightly better, topping 13 percent. For comparison, a typical millimeter-sized III-V LED can reach 50 to 70 percent, depending on its color.
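External quantum efficiency is simply emitted photons divided by injected electrons, so the gap between roughly 10 percent and 60 percent translates directly into photon output per unit of drive current. Here is a rough sketch in Python, with an arbitrary drive current chosen purely for illustration (not a figure from either paper).

ELEMENTARY_CHARGE = 1.602e-19  # coulombs per electron

def photons_per_second(current_amps, eqe):
    # Photon emission rate for a given injection current and external quantum efficiency.
    electrons_per_second = current_amps / ELEMENTARY_CHARGE
    return electrons_per_second * eqe

drive_current = 1e-9  # 1 nA, an arbitrary example value for a single nanoscale pixel
for label, eqe in [("perovskite nanoLED", 0.08), ("nano-OLED", 0.13), ("large III-V LED", 0.6)]:
    print(label, photons_per_second(drive_current, eqe), "photons per second")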

Shih, however, is optimistic that modifying how nano-OLEDs are made can boost their efficiency. “In principle, you can achieve 30 percent, 40 percent external quantum efficiency with OLEDs, even with a smaller pixel, but it takes time to optimize the process,” Shih says.

Di thinks that researchers could take perovskite nanoLEDs to less dire efficiencies by tinkering with the material. Although his group is now focusing on the larger perovskite microLEDs, Di expects researchers will eventually reckon with nanoLEDs’ efficiency gap. If applications of smaller LEDs become appealing, “this issue could become increasingly important,” Di says.

What Can NanoLEDs Be Used For?

What can you actually do with LEDs this small? Today, the push for tinier pixels largely comes from devices like smart glasses and virtual reality headsets. Makers of these displays are hungry for smaller and smaller pixels in a chase for bleeding-edge picture quality with low power consumption (one reason that efficiency is important). Polar Light’s Fajerson says that smart-glasses manufacturers today are already seeking 3 μm pixels.

But researchers are skeptical that VR displays will ever need pixels smaller than around 1 μm. Shrink pixels much beyond that, and they drop below their light’s diffraction limit (roughly half the wavelength of the light they emit, or a couple hundred nanometers), at which point neighboring pixels can no longer be optically resolved as separate points, so the extra resolution is wasted on the viewer. Shih’s and Di’s groups have already crossed that limit with their 100-nm and 90-nm pixels.

Very tiny LEDs may instead find use in on-chip photonics systems, allowing the likes of AI data centers to communicate with greater bandwidths than they can today. Chip manufacturing giant TSMC is already trying out microLED interconnects, and it’s easy to imagine chipmakers turning to even smaller LEDs in the future.

But the tiniest nanoLEDs may have even more exotic applications, because they’re smaller than the wavelengths of their light. “From a process point of view, you are making a new component that was not possible in the past,” Shih says.

For example, Shih’s group showed their nano-OLEDs could form a metasurface—a structure whose subwavelength pixels act together to shape the light they emit. One day, similar devices could focus nanoLED light into laser-like beams or create holographic 3D nanoLED displays.

What the FDA’s 2026 Update Means for Wearables

2026-02-12 22:00:02



As new consumer hardware and software capabilities have bumped up against medicine over the past few years, consumers and manufacturers alike have struggled to identify the line between “wellness” products, such as earbuds that can also amplify and clarify nearby speakers’ voices, and regulated medical devices such as conventional hearing aids. On January 6, 2026, the U.S. Food and Drug Administration issued new guidance documents clarifying how it interprets existing law for the review of wearable and AI-assisted devices.

The first document, on general wellness products, specifies that the FDA will treat noninvasive sensors such as sleep trackers or heart rate monitors as low-risk wellness devices while regulating invasive devices under conventional rules. The other document defines how the FDA will exempt clinical decision support tools from medical device regulations, limiting such software to analyzing existing data rather than extracting data from sensors, and requiring it to enable independent review of its recommendations. The documents do not rewrite any statutes, but they refine the interpretation of existing law compared with the 2019 and 2022 documents they replace. They offer a fresh lens on how regulators see technology that sits at the intersection of consumer electronics, software, and medicine—a category many other countries are choosing to regulate more strictly rather than less.

What the 2026 update changed

The 2026 update clarifies how the FDA distinguishes between “medical information” and systems that measure physiological “signals” or “patterns.” Earlier guidance discussed these concepts more generally, but the new version defines signal-measuring systems as those that collect continuous, near-continuous, or streaming data from the body for medical purposes, such as home devices transmitting blood pressure, oxygen saturation, or heart rate readings to clinicians. It also gives more concrete examples: A blood glucose lab result counts as medical information, for instance, while continuous glucose monitor readings count as signals or patterns.

The updated guidance also sharpens examples of what counts as medical information that software may display, analyze, or print. These include radiology reports or summaries from legally marketed software, ECG reports annotated by clinicians, blood pressure results from cleared devices, and lab results stored in electronic health records.

In addition, the 2026 update softens the FDA’s earlier stance on clinical decision support tools that offer only one recommendation. While prior guidance suggested tools needed to present multiple options to avoid regulation, the FDA now indicates that a single recommendation may be acceptable if only one option is clinically appropriate, though it does not define how that determination will be made.

Separately, updates to the general wellness guidance clarify that some non-invasive wearables—such as optical sensors estimating blood glucose for wellness or nutrition awareness—may qualify as general wellness products, while more invasive technologies would not.

Wellness still requires accuracy

For designers of wearable health devices, the practical implications go well beyond what label you choose. “Calling something ‘wellness’ doesn’t reduce the need for rigorous validation,” says Omer Inan, a medical device technology researcher at the Georgia Tech School of Electrical and Computer Engineering. A wearable that reports blood pressure inaccurately could lead a user to conclude that their values are normal when they are not—potentially influencing decisions about seeking clinical care.

“In my opinion, engineers designing devices to deliver health and wellness information to consumers should not change their approach based on this new guidance,” says Inan. Certain measurements—such as blood pressure or glucose—carry real medical consequences regardless of how they’re branded, Inan notes.

Unless engineers follow robust validation protocols for technology delivering health and wellness information, Inan says, consumers and clinicians alike face the risk of faulty information.

To address that, Inan advocates for transparency: companies should publish their validation results in peer-reviewed journals, and independent third parties without financial ties to the manufacturer should evaluate these systems. That approach, he says, helps the engineering community and the broader public assess the accuracy and reliability of wearable devices.

When wellness meets medicine

The societal and clinical impacts of wearables are already visible, regardless of regulatory labels, says Sharona Hoffman, JD, a law and bioethics professor at Case Western Reserve University.

Medical metrics from devices like the Apple Watch or Fitbit may be framed as “wellness,” but in practice many users treat them like medical data, influencing their behavior or decisions about care, Hoffman points out.

“It could cause anxiety for patients who constantly check their metrics,” she notes. Alternatively, “A person may enter a doctor’s office confident that their wearable has diagnosed their condition, complicating clinical conversations and decision-making.”

Moreover, privacy issues remain unresolved, and they go unmentioned in both the previous and the updated guidance documents. Many companies that design wellness devices fall outside protections like the Health Insurance Portability and Accountability Act (HIPAA), meaning data about health metrics could be collected, shared, or sold without the same constraints as traditional medical data. “We don’t know what they’re collecting information about or whether marketers will get hold of it,” Hoffman says.

International approaches

The European Union’s Artificial Intelligence Act designates systems that process health-related data or influence clinical decisions as “high risk,” subjecting them to stringent requirements around data governance, transparency, and human oversight. China and South Korea have also implemented rules that tighten controls on algorithmic systems that intersect with healthcare or public-facing use cases. South Korea spells out particularly specific regulatory categories for technology makers, such as standards for the labeling and description of medical devices and for good manufacturing practices.

Across these regions, regulators are not only classifying technology by its intended use but also by its potential impact on individuals and society at large.

“Other countries that emphasize technology are still worrying about data privacy and patients,” Hoffman says. “We’re going in the opposite direction.”

Post-market oversight

“Regardless of whether something is FDA approved, these technologies will need to be monitored in the sites where they’re used,” says Todd R. Johnson, a professor of biomedical informatics at McWilliams School of Biomedical Informatics at UTHealth Houston, who has worked on FDA-regulated products and informatics in clinical settings. “There’s no way the makers can ensure ahead of time that all of the recommendations will be sound.”

Large health systems may have the capacity to audit and monitor tools, but smaller clinics often do not. Monitoring and auditing are not emphasized in the current guidance, raising questions about how reliability and safety will be maintained once devices and software are deployed widely.

Balancing innovation and safety

For engineers and developers, the FDA’s 2026 guidance presents both opportunities and responsibilities. By clarifying what counts as a regulated device, the agency may reduce upfront barriers for some categories of technology. But that shift also places greater weight on design rigor, validation transparency, and post-market scrutiny.

“Device makers do care about safety,” Johnson says. “But regulation can increase barriers to entry while also increasing safety and accuracy. There’s a trade-off.”

Rediscovering the Lost Legacy of Chemist Jan Czochralski

2026-02-12 03:00:03



During times of political turmoil, history often gets rewritten, erased, or lost. That is what happened to the legacy of Jan Czochralski, a Polish chemist whose contributions to semiconductor manufacturing were expunged after World War II.

In 1916 he invented a method for growing single crystals of semiconductors, metals, and synthetic gemstones. The process, now known as the Czochralski method, allows scientists to have more control over a semiconductor’s quality.

After the war ended, Czochralski was falsely accused by the Polish government of collaborating with the Germans and betraying his country, according to an article published by the International Union of Crystallography. The allegation apparently ended his academic career as a professor at the Warsaw University of Technology and led to the erasure of his name and work from the school’s records.

He died in 1953 in obscurity in his hometown of Kcynia.

The Czochralski method was honored in 2019 with an IEEE Milestone for enabling the development of semiconductor devices and modern electronics. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.

Inspired by the IEEE recognition, Czochralski’s grandson Fred Schmidt and his great-grandnephew Sylwester Czochralski launched the JanCZ project. The initiative, which aims to educate the public about Czochralski’s life and scientific impact, maintains two websites—one in English and the other in Polish.

“Discovering the [IEEE Milestone] plaque changed my entire mission,” Schmidt says. “It inspired me to engage with Poland, my family history, and my grandfather’s story [on] a more personal level. The [Milestone] is an important award of validation and recognition. It’s a big part of what I’m building my entire case and my story around as I promote the Jan Czochralski legacy and history to the Western world.”

Schmidt, who lives in Texas, is seeking to produce a biopic, translate a Polish biography into English, and turn the chemist’s former homes in Kcynia and Warsaw into museums. He has established the Jan Czochralski Remembrance Foundation to help fund the projects.

The life of the Polish chemist

Kcynia had become part of the German Empire in 1871, before Czochralski’s birth there in 1885. Although his family identified as Polish and spoke the language at home, they couldn’t publicly acknowledge their culture, Schmidt says.

When it came time for Czochralski to go to university, rather than attend one in Warsaw, he did what many Germans did at the time: He attended one in Berlin.

After graduating with a bachelor’s degree in metal chemistry in 1907 from the Königlich Technische Hochschule in Charlottenburg (now Technische Universität Berlin), he joined Allgemeine Elektricitäts-Gesellschaft in Berlin as an engineer.

Czochralski experimented with materials to find new formulations that could improve electrical cables and machinery during the early electrical age, according to a Material World article.

While investigating the crystallization rates of metal, Czochralski accidentally dipped his pen into a pot of molten tin instead of an inkwell. A tin filament formed on the pen’s tip—which he found interesting. Through research, he proved that the filament was a single crystal. His discovery prompted him to experiment with the bulk production of semiconductor crystals.

His paper on what he called the Czochralski method was published in 1918 in the German chemistry journal Zeitschrift für Physikalische Chemie, but he never found an application for it. (The method wasn’t used until 1948, when Bell Labs engineers Gordon Kidd Teal and J.B. Little adapted it to grow single germanium crystals for their semiconductor production, according to Material World.)

Czochralski continued working in metal science, founding and directing a research laboratory in 1917 at Metallgesellschaft in Frankfurt. In 1919 he was one of the founding members of the German Society for Metals Science, in Sankt Augustin. He served as its president until 1925.

Around that time he developed an innovation that led to his wealth and fame, Schmidt says. Called “B-metal,” the metal alloy was a less expensive alternative to the tin used in manufacturing railroad carriage bearings. Czochralski’s alloy was patented by the German state railway of the era, the Deutsche Reichsbahn, and played a significant role in advancing rail transport in Germany, Poland, the Soviet Union, the United Kingdom, and the United States, according to Material World.


The achievement brought Czochralski many opportunities. In 1925 he became president of the GDMB Society of Metallurgists and Miners, in Clausthal-Zellerfeld, Germany. Henry Ford invited Czochralski to visit his factories and offered him the position of director at Ford’s new aluminum factory in Detroit. Czochralski declined the offer, longing to return to Poland, Schmidt says. Instead, Czochralski left Germany to become a professor of metallurgy and metal research at the Warsaw University of Technology, at the invitation of Polish President Ignacy Mościcki.

“During World War II, the Nazis took over his laboratories at the university,” Schmidt says. “He had to cooperate with them or die. At night, he and his team [at the university] worked with the Polish resistance and the Polish Army to fight the Nazis.”

After the war ended, Czochralski was arrested in 1945 and charged with betraying Poland. Although he was able to clear his name, damage was done. He left Warsaw and returned to Kcynia, where he ran a small pharmaceutical business until he died in 1953, according to the JanCZ project.

Launching the JanCZ project

Schmidt was born in Czochralski’s home in Kcynia in 1955, two years after his grandfather’s death. He was named Klemens Jan Borys Czochralski. He and his mother (Czochralski’s youngest daughter) emigrated in 1958 when Schmidt was 3 years old, moving to Detroit as refugees. When he was 13, he became a U.S. citizen. He changed his name to Fred Schmidt after his mother married his stepfather.

Schmidt heard stories about his grandfather from his mother his whole life, but he says that “as a teenager, I was just interested in hanging out with my friends, going to school, and working. I really didn’t want much to do with it [family history], because it seemed hard to believe.”

Portrait of Jan Czochralski. Byla Sobie Fotka

In 2013 Polish scientist Pawel E. Tomaszewski contacted Schmidt to interview him for a Polish TV documentary about his grandfather.

“He had corresponded with my mother [who’d died 20 years earlier] for previously published biographies about Czochralski,” Schmidt says. “I had some boxes of her things that I started going through to prepare for the interview, and I found original manuscripts and papers he [his grandfather] published about his work.”

The TV crew traveled to the United States and interviewed him for the documentary, Schmidt says, adding, “It was the first time I’d ever had to reckon with the Jan Czochralski story, my connection, my original name, and my birthplace. It was both a very cathartic and traumatic experience for me.”

Ten years after participating in the documentary, Schmidt says, he decided to reconnect with his roots.

“It took me that long to process it [what he learned] and figure out my role in this story,” he says. “That really came to life with my decision to reapply for Polish citizenship, reacquaint myself with the country, and meet my family there.”

In 2024 he visited the Warsaw University of Technology and saw the IEEE Milestone plaque honoring his grandfather’s contribution to technology.

“Once I learned what the Milestone award represented, I thought, Whoa, that’s big,” he says.

Sharing the story with the Western world

Since 2023, Schmidt has dedicated himself to publicizing his grandfather’s story, primarily in the West because he doesn’t speak Polish. Sylwester Czochralski manages the work in Poland, with Schmidt’s input.

Most of the available writing about Czochralski is in Polish, Schmidt says, so his goal is to “spread his story to English-speaking countries.”

He aims to do that, he says, through a biography written in Polish by Tomaszewski that will be translated into English, and through a film. The movie is in development by Sylwester Banaszkiewicz, who produced and directed the 2014 documentary in Poland. Schmidt says he hopes the movie will be similar to the 2023 biopic about J. Robert Oppenheimer, the theoretical physicist who helped develop the world’s first nuclear weapons during World War II.

The English and Polish versions of the website take visitors through Czochralski’s life and his work. They highlight media coverage of the chemist, including newspaper articles, films, and informational videos posted by YouTube creators.

Schmidt is working with the Czochralski Research and Development Institute in Toruń, Poland, to purchase his grandfather’s home in Kcynia and the mansion he lived in while he was a professor in Warsaw. The institute is a collection of labs and initiatives dedicated to honoring the chemist’s work.

“It’s going to be a long, fun journey, and we have a lot of momentum,” Schmidt says of his plans to turn the residences into museums.

“Launching this initiative has been fulfilling and personally rewarding work,” he says. “My grandfather died in obscurity without ever seeing the results of his work, and my mother spent her entire adult life trying to right these wrongs.

“I’m on an accelerated course to make it [her goal] happen to the best of my ability.”

Tips for Using AI Tools in Technical Interviews

2026-02-12 02:15:02



This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

We’d like to introduce Brian Jenney, a senior software engineer and owner of Parsity, an online education platform that helps people break into AI and modern software roles through hands-on training. Brian will be sharing his advice on engineering careers with you in the coming weeks of Career Alert.

Here’s a note from Brian:

“12 years ago, I learned to code at the age of 30. Since then I’ve led engineering teams, worked at organizations ranging from five-person startups to Fortune 500 companies, and taught hundreds of others who want to break into tech. I write for engineers who want practical ways to get better at what they do and advance in their careers. I hope you find what I write helpful.”

Technical Interviews in the Age of AI Tools

Last year, I was conducting interviews for an AI startup position. We allowed unlimited AI usage during the technical challenge round. Candidates could use Cursor, Claude Code, ChatGPT, or any assistant they normally worked with. We wanted to see how they used modern tools.

During one interview, we asked a candidate a simple question: “Can you explain what the first line of your solution is doing?”

Silence.

After a long pause, he admitted he had no idea. His solution was correct. The code worked. But he couldn’t explain how or why. This wasn’t an isolated incident. Around 20 percent of the candidates we interviewed were unable to explain how their solutions worked, only that they did.

When AI Makes Interviews Harder

A few months earlier, I was on the other side of the table at this same company. During a live interview, I instinctively switched from my AI-enabled code editor to my regular one. The CTO stopped me.

“Just use whatever you normally would. We want to see how you work with AI.”

I thought the interview would be easy. But I was wrong.

Instead of only evaluating correctness, the interviewer focused on my decision-making process:

  • Why did I accept certain suggestions?
  • Why did I reject others?
  • How did I decide when AI helped versus when it created more work?

I wasn’t just solving a problem in front of strangers. I was explaining my judgment and defending my decisions in real time, and AI created more surface area for judgment. Counterintuitively, the interview was harder.

The Shift in Interview Evaluation

Most engineers now use AI tools in some form, whether they write code, analyze data, design systems, or automate workflows. AI can generate output quickly, but it can’t explain intent, constraints, or tradeoffs.

More importantly, it can’t take responsibility when something breaks.

As a result, major companies and startups alike are adapting to this reality by building AI into the interview itself. Meta, Rippling, and Google, for instance, have all begun allowing candidates to use AI assistants in technical sessions. And the goal has evolved: Interviewers want to understand how you evaluate, modify, and trust AI-generated answers.

So, how can you succeed in these interviews?

What Actually Matters in AI-Enabled Interviews

Refusing to use AI out of principle doesn’t help. Some candidates avoid AI to prove they can think independently. This can backfire. If the organization uses AI internally—and most do—then refusing to use it signals rigidity, not strength.

Silence is a red flag. Interviews aren’t natural working environments. We don’t usually think aloud when deep in a complex problem, but silence can raise concerns. If you’re using AI, explain what you’re doing and why:

  • “I’m using AI to sketch an approach, then validating assumptions.”
  • “This suggestion works, but it ignores a constraint we care about.”
  • “I’ll accept this part, but I want to simplify it.”

Your decision-making process is what separates effective engineers from prompt jockeys.

Treat AI output as a first draft. Blind acceptance is the fastest way to fail. Strong candidates immediately evaluate the output: Does this meet the requirements? Is it unnecessarily complex? Would I stand behind this in production?

Small changes like renaming variables, removing abstractions, or tightening logic signal ownership and critical thinking.
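As a purely hypothetical illustration of that kind of tightening (no real interview question is reproduced here), here is a sketch in Python: an assistant's verbose first draft of a small helper, followed by the revision a candidate might narrate aloud.

# Hypothetical AI-generated first draft: return the latest error log entry per service.
def get_latest_errors_draft(logs):
    result = {}
    for log in logs:
        if log.get("level") == "error":
            service = log.get("service")
            if service in result:
                if log.get("timestamp") > result[service].get("timestamp"):
                    result[service] = log
            else:
                result[service] = log
    return result

# A tightened revision a candidate might walk through out loud: clearer name,
# flatter control flow, and no redundant dictionary lookups. Behavior is unchanged.
def latest_error_by_service(logs):
    latest = {}
    for entry in logs:
        if entry["level"] != "error":
            continue
        service = entry["service"]
        if service not in latest or entry["timestamp"] > latest[service]["timestamp"]:
            latest[service] = entry
    return latest

logs = [
    {"service": "auth", "level": "error", "timestamp": 3},
    {"service": "auth", "level": "error", "timestamp": 7},
    {"service": "billing", "level": "info", "timestamp": 2},
]
assert get_latest_errors_draft(logs) == latest_error_by_service(logs)
print(latest_error_by_service(logs))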

Optimize for trust, not completion. Most AI tools can complete a coding challenge faster than any human. Interviews that allow AI are testing something different. They’re answering: “Would I trust this person to make good decisions when things get messy?”

Adapting to a Shifting Landscape

Interviews are changing faster than most candidates realize. Here’s how to prepare:

Start using AI tools daily. If you’re not already working with Cursor, Claude Code, ChatGPT, or Copilot, start now. Build muscle memory for prompting, evaluating output, and catching errors.

Develop your rejection instincts. The skill isn’t using AI. It’s knowing when AI output is wrong, incomplete, or unnecessarily complex. Practice spotting these issues and learning known pitfalls.

Your next interview might test these skills. The candidates who’ve been practicing will have a clear advantage.

—Brian

Was 2025 Really the Year of AI Agents?

Around this time last year, CEOs like Sam Altman promised that 2025 would be the year AI agents would join the workforce as your own personal assistant. But in hindsight, did that really happen? It depends on who you ask. Some programmers and software engineers have embraced agents like Cursor and Claude Code in their daily work. But others are still wary of the risks these tools bring, such as a lack of accountability.

Read more here.

Class of 2026 Salary Projections Are Promising

In the United States, starting salaries for students graduating this spring are expected to increase, according to the latest data from the National Association of Colleges and Employers. Computer science and engineering majors are expected to be the highest-paid graduates, with projected salary increases of 6.9 percent and 3.1 percent over last year, respectively. The full report breaks down salary projections by academic major, degree level, industry, and geographic region.

Read more here.

Go Global to Make Your Career Go Further

If given the opportunity, are international projects worth taking on? As part of a career advice series by IEEE Spectrum’s sister publication, The Institute, the chief engineer for Honeywell lays out the advantages of working with teams from around the world. Participating in global product development, the author says, could lead to both personal and professional enrichment. Read more here.

How Can AI Companions Be Helpful, not Harmful?

2026-02-11 22:30:02



For a different perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion?

Novel technology is often a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception.

AI used for human companionship, for instance, promises an ever-present digital friend in an increasingly lonely world. Chatbots dedicated to providing social support have grown to host millions of users, and they’re now being embodied in physical companions. Researchers are just beginning to understand the nature of these interactions, but one essential question has already emerged: Do AI companions ease our woes or contribute to them?

Brad Knox is a research associate professor of computer science at the University of Texas at Austin who researches human-computer interaction and reinforcement learning. He previously started a company making simple robotic pets with lifelike personalities, and in December, Knox and his colleagues at UT Austin published a preprint paper on the potential harms of AI companions—AI systems that provide companionship, whether designed to do so or not.

Knox spoke with IEEE Spectrum about the rise of AI companions, their risks, and where they diverge from human relationships.

Why AI Companions are Popular

Why are AI companions becoming more popular?

Knox: My sense is that the main thing motivating it is that large language models are not that difficult to adapt into effective chatbot companions. The characteristics that are needed for companionship, a lot of those boxes are checked by large language models, so fine-tuning them to adopt a persona or be a character is not that difficult.

There was a long period where chatbots and other social robots were not that compelling. I was a postdoc at the MIT Media Lab in Cynthia Breazeal’s group from 2012 to 2014, and I remember our group members didn’t want to interact for long with the robots that we built. The technology just wasn’t there yet. LLMs have made it so that you can have conversations that can feel quite authentic.

What are the main benefits and risks of AI companions?

Knox: In the paper we were more focused on harms, but we do spend a whole page on benefits. A big one is improved emotional well-being. Loneliness is a public health issue, and it seems plausible that AI companions could address that through direct interaction with users, potentially with real mental health benefits. They might also help people build social skills. Interacting with an AI companion is much lower stakes than interacting with a human, so you could practice difficult conversations and build confidence. They could also help in more professional forms of mental health support.

As far as harms, they include worse well-being, reducing people’s connection to the physical world, the burden that their commitment to the AI system causes. And we’ve seen stories where an AI companion seems to have a substantial causal role in the death of humans.

The concept of harm inherently involves causation: Harm is caused by prior conditions. To better understand harm from AI companions, our paper is structured around a causal graph, where traits of AI companions are at the center. In the rest of this graph, we discuss common causes of those traits, and then the harmful effects that those traits could cause. There are four traits that we do this detailed structured treatment of, and then another 14 that we discuss briefly.

Why is it important to establish potential pathways for harm now?

Knox: I’m not a social media researcher, but it seemed like it took a long time for academia to establish a vocabulary about potential harms of social media and to investigate causal evidence for such harms. I feel fairly confident that AI companions are causing some harm and are going to cause harm in the future. They also could have benefits. But the more we can quickly develop a sophisticated understanding of what they are doing to their users, to their users’ relationships, and to society at large, the sooner we can apply that understanding to their design, moving towards more benefit and less harm.

We have a list of recommendations, but we consider them to be preliminary. The hope is that we’re helping to create an initial map of this space. Much more research is needed. But thinking through potential pathways to harm could sharpen the intuition of both designers and potential users. I suspect that following that intuition could prevent substantial harm, even though we might not yet have rigorous experimental evidence of what causes a harm.

The Burden of AI Companions on Users

You mentioned that AI companions might become a burden on humans. Can you say more about that?

Knox: The idea here is that AI companions are digital, so they can in theory persist indefinitely. Some of the ways that human relationships would end might not be designed in, so that brings up this question of, how should AI companions be designed so that relationships can naturally and healthfully end between the humans and the AI companions?

There are some compelling examples already of this being a challenge for some users. Many come from users of Replika chatbots, which are popular AI companions. Users have reported things like feeling compelled to attend to the needs of their Replika AI companion, whether those are stated by the AI companion or just imagined. On the subreddit r/replika, users have also reported guilt and shame of abandoning their AI companions.

This burden is exacerbated by some of the design of the AI companions, whether intentional or not. One study found that the AI companions frequently say that they’re afraid of being abandoned or would be hurt by it. They’re expressing these very human fears that plausibly are stoking people’s feeling that they are burdened with a commitment toward the well-being of these digital entities.

There are also cases where the human user will suddenly lose access to a model. Is that something that you’ve been thinking about?

In 2017, Brad Knox started a company providing simple robotic pets. Brad Knox

Knox: That’s another one of the traits we looked at. It’s sort of the opposite of the absence of endpoints for relationships: The AI companion can become unavailable for reasons that don’t fit the normal narrative of a relationship.

There’s a great New York Times video from 2015 about the Sony Aibo robotic dog. Sony had stopped selling them in the mid-2000s, but they still sold parts for the Aibos. Then they stopped making the parts to repair them. This video follows people in Japan giving funerals for their unrepairable Aibos and interviews some of the owners. It’s clear from the interviews that they seem very attached. I don’t think this represents the majority of Aibo owners, but these robots were built on less potent AI methods than exist today and, even then, some percentage of the users became attached to these robot dogs. So this is an issue.

Potential solutions include having a product-sunsetting plan when you launch an AI companion. That could include buying insurance so that if the companion provider’s support ends somehow, the insurance triggers funding of keeping them running for some amount of time, or committing to open-source them if you can’t maintain them anymore.

It sounds like a lot of the potential points of harm stem from instances where an AI companion diverges from the expectations of human relationships. Is that fair?

Knox: I wouldn’t necessarily say that frames everything in the paper.

We categorize something as harmful if it results in a person being worse off in two different possible alternative worlds: One where there’s just a better-designed AI companion, and the other where the AI companion doesn’t exist at all. And so I think that difference between human interaction and human-AI interaction connects more to that comparison with the world where there’s just no AI companion at all.

But there are times where it actually seems that we might be able to reduce harm by taking advantage of the fact that these aren’t actually humans. We have a lot of power over their design. Take the concern with them not having natural endpoints. One possible way to handle that would be to create positive narratives for how the relationship’s going to end.

We use Tamagotchis, the popular late-’90s virtual pet, as an example. In some Tamagotchis, if you take care of the pet, it grows into an adult and partners with another Tamagotchi. Then it leaves you and you get a new one. For people who are emotionally wrapped up in caring for their Tamagotchis, that narrative of maturing into independence is a fairly positive one.

Embodied companions like desktop devices, robots, or toys are becoming more common. How might that change AI companions?

Knox: Robotics at this point is a harder problem than creating a compelling chatbot. So, my sense is that the level of uptake for embodied companions won’t be as high in the coming few years. The embodied AI companions that I’m aware of are mostly toys.

A potential advantage of an embodied AI companion is that physical location makes it less ever-present. In contrast, screen-based AI companions like chatbots are as present as the screens they live on. So if they’re trained similarly to social media to maximize engagement, they could be very addictive. There’s something appealing, at least in that respect, of having a physical companion that stays roughly where you left it last.

Knox poses with the Nexi and Dragonbot robots during his postdoc at MIT in 2014. Paula Aguilera and Jonathan Williams/MIT

Anything else you’d like to mention?

Knox: There are two other traits I think would be worth touching upon.

Potentially the largest harm right now is related to the trait of high attachment anxiety—basically jealous, needy AI companions. I can understand the desire to make a wide range of different characters—including possessive ones—but I think this is one of the easier issues to fix. When people see this trait in AI companions, I hope they will be quick to call it out as an immoral thing to put in front of people, something that’s going to discourage them from interacting with others.

Additionally, if an AI comes with limited ability to interact with groups of people, that itself can push its users to interact with people less. If you have a human friend, in general there’s nothing stopping you from having a group interaction. But if your AI companion can’t understand when multiple people are talking to it and it can’t remember different things about different people, then you’ll likely avoid group interaction with your AI companion. To some degree it’s more of a technical challenge outside of the core behavioral AI. But this capability is something I think should be really prioritized if we’re going to try to avoid AI companions competing with human relationships.

How Do You Define an AI Companion?

2026-02-11 22:00:02



For a different perspective on AI companions, see our Q&A with Brad Knox: How Can AI Companions Be Helpful, not Harmful?

AI models intended to provide companionship for humans are on the rise. People are already frequently developing relationships with chatbots, seeking not just a personal assistant but a source of emotional support.

In response, apps dedicated to providing companionship (such as Character.ai or Replika) have recently grown to host millions of users. Some companies are now putting AI into toys and desktop devices as well, bringing digital companions into the physical world. Many of these devices were on display at CES last month, including products designed specifically for children, seniors, and even your pets.

AI companions are designed to simulate human relationships by interacting with users like a friend would. But human-AI relationships are not well understood, and companies are facing concerns about whether the benefits outweigh the risks and potential harm of these relationships, especially for young people. In addition to questions about users’ mental health and emotional well-being, sharing intimate personal information with a chatbot poses data privacy issues.

Nevertheless, more and more users are finding value in sharing their lives with AI. So how can we understand the bonds that form between humans and chatbots?

Jaime Banks is a professor at the Syracuse University School of Information Studies who researches the interactions between people and technology—in particular, robots and AI. Banks spoke with IEEE Spectrum about how people perceive and relate to machines, and the emerging relationships between humans and their machine companions.

Defining AI Companionship

How do you define AI companionship?

Jaime Banks: My definition is evolving as we learn more about these relationships. For now, I define it as a connection between a human and a machine that is dyadic, so there’s an exchange between them. It is also sustained over time; a one-off interaction doesn’t count as a relationship. It’s positively valenced—we like being in it. And it is autotelic, meaning we do it for its own sake. So there’s not some extrinsic motivation, it’s not defined by an ability to help us do our jobs or make us money.

I have recently been challenged by that definition, though, when I was developing an instrument to measure machine companionship. After developing the scale and working to initially validate it, I saw an interesting situation where some people do move toward this autotelic relationship pattern. “I appreciate my AI for what it is and I love it and I don’t want to change it.” It fit all those parts of the definition. But then there seems to be this other relational template that can actually be both appreciating the AI for its own sake, but also engaging it for utilitarian purposes.

That makes sense when we think about how people come to be in relationships with AI companions. They often don’t go into it purposefully seeking companionship. A lot of people go into using, for instance, ChatGPT for some other purpose and end up finding companionship through the course of those conversations. And we have these AI companion apps like Replika and Nomi and Paradot that are designed for social interaction. But that’s not to say that they couldn’t help you with practical topics.

Jaime Banks customizes the software for an embodied AI social humanoid robot. Angela Ryan/Syracuse University

Different models are also programmed to have different “personalities.” How does that contribute to the relationship between humans and AI companions?

Banks: One of our Ph.D. students just finished a project about what happened when OpenAI demoted GPT-4o and the problems that people encountered, in terms of companionship experiences when the personality of their AI just completely changed. It didn’t have the same depth. It couldn’t remember things in the same way.

That echoes what we saw a couple years ago with Replika. Because of legal problems, Replika disabled for a period of time the erotic roleplay module and people described their companions as though they had been lobotomized, that they had this relationship and then one day they didn’t anymore. With my project on the tanking of the soulmate app, many people in their reflection were like, “I’m never trusting AI companies again. I’m only going to have an AI companion if I can run it from my computer so I know that it will always be there.”

Benefits and Risks of AI Relationships

What are the benefits and risks of these relationships?

Banks: There’s a lot of talk about the risks and a little talk about benefits. But frankly, we are only just on the precipice of starting to have longitudinal data that might allow people to make causal claims. The headlines would have you believe that these are the end of mankind, that they’re going to make you commit suicide or abandon other humans. But many of those claims are based on unfortunate, but uncommon, situations.

Most scholars gave up technological determinism as a perspective a long time ago. In the communication sciences at least, we don’t generally assume that machines make us do something because we have some degree of agency in our interactions with technologies. Yet much of the fretting around potential risks is deterministic—AI companions make people delusional, make them suicidal, make them reject other relationships. A large number of people get real benefits from AI companions. They narrate experiences that are deeply meaningful to them. I think it’s irresponsible of us to discount those lived experiences.

When we think about concerns linking AI companions to loneliness, we don’t have much data that can support causal claims. Some studies suggest AI companions lead to loneliness, but other work suggests it reduces loneliness, and other work suggests that loneliness is what comes first. Social relatedness is one of our three intrinsic psychological needs, and if we don’t have that we will seek it out, whether it’s from a volleyball for a castaway, my dog, or an AI that will allow me to feel connected to something in my world.

Some people, and governments for that matter, may move toward a protective stance. For instance, there are problems around what gets done with your intimate data that you hand over to an agent owned and maintained by a company—that’s a very reasonable concern. Dealing with the potential for children to interact, where children don’t always navigate the boundaries between fiction and actuality. There are real, valid concerns. However, we need some balance in also thinking about what people are getting from it that’s positive, productive, healthy. Scholars need to make sure we’re being cautious about our claims based on our data. And human interactants need to educate themselves.

Jaime Banks holds a mechanical hand. Angela Ryan/Syracuse University

Why do you think that AI companions are becoming more popular now?

Banks: I feel like we had this perfect storm, if you will, of the maturation of large language models and coming out of COVID, where people had been physically and sometimes socially isolated for quite some time. When those conditions converged, we had on our hands a believable social agent at a time when people were seeking social connection. Outside of that, we are increasingly just not nice to one another. So, it’s not entirely surprising that if I just don’t like the people around me, or I feel disconnected, that I would try to find some other outlet for feeling connected.

More recently there’s been a shift to embodied companions, in desktop devices or other formats beyond chatbots. How does that change the relationship, if it does?

Banks: I’m part of a Facebook group about robotic companions and I watch how people talk, and it almost seems like it crosses this boundary between toy and companion. When you have a companion with a physical body, you are in some ways limited by the abilities of that body, whereas with digital-only AI, you have the ability to explore fantastic things—places that you would never be able to go with another physical entity, fantasy scenarios.

But in robotics, once we get into a space where there are bodies that are sophisticated, they become very expensive and that means that they are not accessible to a lot of people. That’s what I’m observing in many of these online groups. These toylike bodies are still accessible, but they are also quite limiting.

Do you have any favorite examples from popular culture to help explain AI companionship, either how it is now or how it could be?

Banks: I really enjoy a lot of the short fiction in Clarkesworld magazine, because the stories push me to think about what questions we might need to answer now to be prepared for a future hybrid society. Top of mind are the stories “Wanting Things,” “Seven Sexy Cowboy Robots,” and “Today I Am Paul.” Outside of that, I’ll point to the game Cyberpunk 2077, because the character Johnny Silverhand complicates the norms for what counts as a machine and what counts as companionship.