
Improve Engineering Communication by Translating Technical Detail

2026-03-26 03:03:20



This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

Engineers Aren’t Bad at Communication. They’re Just Speaking to the Wrong Audience.

There’s a persistent myth that engineers are bad communicators. In my experience, that’s not true.

Engineers are often excellent communicators—inside their domain. We’re precise. We’re logical. We structure arguments clearly. We define terms. We reason from constraints.

The breakdown happens when the audience changes.

We’re used to speaking in highly technical language, surrounded by people who share our vocabulary. In that environment, shorthand and jargon are efficient. But outside that bubble, when talking to executives, product managers, marketing teams, or customers, that same precision can be confusing.

The problem isn’t that we can’t communicate. It’s that we forget to translate.

If you’ve ever explained a critical issue or error to a non-technical stakeholder, you’ve probably experienced this: You give a technically accurate explanation. They leave either more confused than before, or more alarmed than necessary.

Suddenly you’re spending more time clarifying your explanation than fixing the issue.

Under pressure, we default to what we know best—technical detail. But detail without context creates cognitive overload. The listener can’t tell what matters, what’s normal, and what’s dangerous.

That’s when the “engineers can’t communicate” narrative shows up.

In reality, we just skipped the translation step.

The Writing Shortcut

One of the simplest ways to improve written communication today: Run your explanation through an AI model and ask, “Would this make sense to a non-technical audience? Where would someone get confused?” (A minimal script for this kind of check is sketched after the list below.)

You can also say:

  • “Rewrite this for an executive audience.”
  • “What analogy would help explain this?”
  • “Simplify this without losing accuracy.”
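To make this review step repeatable, you can script it. Below is a minimal sketch using the OpenAI Python SDK; the model name is a placeholder, and any chat-capable LLM client works the same way.

    # Minimal sketch: ask an LLM to review a draft for a lay audience.
    # Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is a placeholder model.
    from openai import OpenAI

    client = OpenAI()

    def review_for_lay_audience(draft: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "You edit technical text for non-technical executives."},
                {"role": "user",
                 "content": "Would this make sense to a non-technical audience? "
                            "Where would someone get confused?\n\n" + draft},
            ],
        )
        return response.choices[0].message.content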

Large language models are particularly good at identifying jargon and offering alternative framings. They’re essentially translation assistants.

Analogies are especially powerful. If you’re explaining system latency, compare it to traffic congestion. If you’re describing technical debt, compare it to skipping maintenance on a house. If you’re explaining distributed systems, try using supply chain examples.

The goal isn’t to “dumb it down.” It’s to map the unfamiliar onto something familiar.

Before sending an email or report, ask yourself:

  • Does this audience need to understand the mechanism, or just the impact?
  • Does this explanation help them make a decision?
  • Have I defined terms they might not know?

Translation When Speaking

When speaking—especially in meetings or presentations—most engineers have one predictable habit: We speak too fast.

Nerves speed us up. Speed causes filler words. Filler words dilute authority.

To prevent that, follow a simple rule: Speak 10 to 15 percent slower than feels natural.

Slowing down cuts down the number of times you say “um” and “uh”, gives you time to think, makes you sound more confident, and gives the listener time to process.

Another rule: Say only what the audience needs to move forward.

Explain just enough for the person to make a decision. If you overload someone with implementation details when they only need tradeoffs, you’ve made their job harder.

The Real Skill

The key skill in communication is audience awareness.

The same engineer who can clearly explain a concurrency bug to a peer can absolutely explain system risk to an executive. The difference is framing, vocabulary, and context. Not intelligence.

In the age of AI, where code generation is increasingly commoditized, the ability to translate complexity into clarity is becoming a defining advantage.

Engineers aren’t bad communicators. We just have to remember that outside our bubble, translation is part of the job.

—Brian

How Robert Goddard’s Self-Reliance Crashed His Dreams

Robert Goddard launched the first liquid-fueled rocket 100 years ago, but his legacy still has relevant lessons for today’s engineers. Although Goddard’s headstrong confidence in his ideas helped bring about the breakthrough, it later became an obstacle in what systems engineer Guru Madhavan calls “the alpha trap.” Madhavan writes: “We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people.”

Read more here.

Redefining the Software Engineering Profession for AI

For Communications of the ACM, two Microsoft engineers propose a model for software engineering in the age of AI: Making the growth of early-in-career developers an explicit organizational goal. Without hiring early-career workers, the profession’s talent pipeline will eventually dry up. So, they argue, companies must hire them and develop talent, even if that comes with a short-term dip in productivity.

Read more here.

IEEE Launches Global Virtual Career Fairs

Looking for a job? Last year, IEEE Industry Engagement hosted its first virtual career fair to connect recruiters and young professionals. Several more career fairs are now planned, including two upcoming regional events and a global career fair in June. At these fairs, you can participate in interactive sessions, chat with recruiters, and experience video interviews.

Read more here.

Training Driving AI at 50,000× Real Time

2026-03-26 03:00:05



This is a sponsored article brought to you by General Motors. Visit their new Engineering Blog for more insights.

Autonomous driving is one of the most demanding problems in physical AI. An automated system must interpret a chaotic, ever-changing world in real time—navigating uncertainty, predicting human behavior, and operating safely across an immense range of environments and edge cases.

At General Motors, we approach this problem from a simple premise: while most moments on the road are predictable, the rare, ambiguous, and unexpected events — the long tail — are what ultimately define whether an autonomous system is safe, reliable, and ready for deployment at scale. (Note: While here we discuss research and emerging technologies to solve the long tail required for full general autonomy, we also discuss our current approach for solving 99% of everyday autonomous driving in a deep dive on Compound AI.)

As GM advances toward eyes-off highway driving, and ultimately toward fully autonomous vehicles, solving the long tail becomes the central engineering challenge. It requires developing systems that can be counted on to behave sensibly in the most unexpected conditions.

GM is building scalable driving AI to meet that challenge — combining large-scale simulation, reinforcement learning, and foundation-model-based reasoning to train autonomous systems at a scale and speed that would be impossible in the real world alone.

Stress-testing for the long tail

Long-tail scenarios in autonomous driving come in a few varieties.

Some are notable for their rarity. There’s a mattress on the road. A fire hydrant bursts. A massive power outage in San Francisco that disabled traffic lights required driverless vehicles to navigate never-before-experienced challenges. These rare system-level interactions, especially in dense urban environments, show how unexpected edge cases can cascade at scale.

But long-tail challenges don’t just come in the form of once-in-a-lifetime rarities. They also manifest as everyday scenarios that require characteristically human courtesy or common sense. How do you queue up for a spot without blocking traffic in a crowded parking lot? Or navigate a construction zone, guided by gesturing workers and ad-hoc signs? These are simple challenges for a human driver but require inventive engineering to handle flawlessly with a machine.

[Figure: Autonomous driving scenario demand curve, showing scenario complexity from predictable, everyday events to rare long-tail events.]


Deploying vision language models

One tool GM is developing to tackle these nuanced scenarios is the use of Vision Language Action (VLA) models. Starting with a standard Vision Language Model, which leverages internet-scale knowledge to make sense of images, GM engineers use specialized decoding heads to fine-tune for distinct driving-related tasks. The resulting VLA can make sense of vehicle trajectories and detect 3D objects on top of its general image-recognition capabilities.

These tuned models enable a vehicle to recognize that a police officer’s hand gesture overrides a red traffic light or to identify what a “loading zone” at a busy airport terminal might look like.

These models can also generate reasoning traces that help engineers and safety operators understand why a maneuver occurred — an important tool for debugging, validation, and trust.
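To make the idea of specialized decoding heads concrete, here is a minimal PyTorch sketch of a shared vision-language backbone feeding a trajectory head and a 3D-detection head. The class, feature dimension, and output shapes are illustrative assumptions, not GM’s architecture.

    # Sketch: one pretrained backbone, two driving-specific decoding heads.
    import torch
    import torch.nn as nn

    class DrivingVLA(nn.Module):
        def __init__(self, backbone: nn.Module, feat_dim: int = 768):
            super().__init__()
            self.backbone = backbone                      # pretrained VLM (assumed interface)
            self.traj_head = nn.Linear(feat_dim, 10 * 2)  # 10 future (x, y) waypoints
            self.det_head = nn.Linear(feat_dim, 7)        # 3D box: x, y, z, w, l, h, yaw

        def forward(self, images, text_tokens):
            feats = self.backbone(images, text_tokens)    # (batch, feat_dim)
            traj = self.traj_head(feats).view(-1, 10, 2)
            boxes = self.det_head(feats)
            return traj, boxes

In a setup like this, fine-tuning updates the heads (and optionally the top of the backbone), so the model keeps its internet-scale knowledge while learning driving-specific outputs.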

Testing hazardous scenarios in high-fidelity simulations

The trouble is that driving requires split-second reaction times, so any excess latency poses an especially critical problem. To solve this, GM is developing a “Dual Frequency VLA.” A large-scale model runs at a lower frequency to make high-level semantic decisions (“Is that object in the road a branch or a cinder block?”), while a smaller, highly efficient model handles the immediate, high-frequency spatial control (steering and braking).

This hybrid approach allows the vehicle to benefit from deep semantic reasoning without sacrificing the split-second reaction times required for safe driving.
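What could that division of labor look like in code? A rough sketch follows, in which the slow model refreshes a plan every tenth tick while the fast controller acts on every tick; the components, interfaces, and 10:1 ratio are assumptions for illustration, not GM’s implementation.

    # Sketch: slow semantic reasoning at one-tenth the rate of fast control.
    def control_loop(slow_model, fast_controller, sensors, n_slow: int = 10):
        plan = None
        tick = 0
        while True:
            obs = sensors.read()                  # latest camera/lidar frame
            if tick % n_slow == 0:                # low-frequency semantic decision
                plan = slow_model.decide(obs)     # e.g., "branch, safe to straddle"
            yield fast_controller.act(obs, plan)  # high-frequency steer/brake command
            tick += 1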

But dealing with an edge case safely requires that the model not only understand what it is looking at but also understand how to sensibly drive through the challenge it’s identified. For that, there is no substitute for experience.

Which is why, each day, we run millions of high-fidelity closed-loop simulations, the equivalent of tens of thousands of human driving days compressed into hours of simulation. We can replay actual events, modify real-world data to create new virtual scenarios, or design new ones entirely from scratch. This allows us to regularly test the system against hazardous scenarios that would be nearly impossible to encounter safely in the real world.

Synthetic data for the hardest cases

Where do these simulated scenarios come from? GM engineers employ a whole host of AI technologies to produce novel training data that can model extreme situations while remaining grounded in reality.

GM’s “Seed-to-Seed Translation” research, for instance, leverages diffusion models to transform existing real-world data, allowing a researcher to turn a clear-day recording into a rainy or foggy night while perfectly preserving the scene’s geometry. The result? A “domain change”—clear becomes rainy, but everything else remains the same.

In addition, our GM World diffusion-based simulator allows us to synthesize entirely new traffic scenarios using natural language and spatial bounding boxes. We can summon entirely new scenarios with different weather patterns. We can also take an existing road scene and add challenging new elements, such as a vehicle cutting into our path.
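As a loose public analogy to this kind of diffusion-based scene editing (not GM’s Seed-to-Seed or GM World systems), an instruction-tuned image-editing diffusion model can re-render a frame under a condition described in text while largely preserving the layout. File paths here are hypothetical.

    # Sketch: text-instructed domain change with an open diffusion model.
    import torch
    from diffusers import StableDiffusionInstructPix2PixPipeline
    from PIL import Image

    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
    ).to("cuda")

    scene = Image.open("clear_day_drive.png")     # hypothetical source frame
    edited = pipe(
        "make it a rainy night; keep the road, cars, and layout unchanged",
        image=scene,
        num_inference_steps=20,
    ).images[0]
    edited.save("rainy_night_drive.png")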


Comparison of a 3D model and street view with a vehicle removed, labeled "Original" and "Edited".


Street with several cars parked, partially flooded after heavy rain; blue geometric markings overlay.


Winter street with cars; blue 3D wireframe shapes overlay.

High-fidelity simulation isn’t always the best tool for every learning task. Photorealistic rendering is essential for training perception systems to recognize objects in varied conditions. But when the goal is teaching decision-making and tactical planning—when to merge, or how to navigate an intersection—the computationally expensive details matter less than spatial relationships and traffic dynamics. AI systems may need billions or even trillions of lightweight examples to support reinforcement learning, where models learn the rules of sensible driving through rapid trial and error rather than relying on imitation alone.

To this end, General Motors has developed a proprietary multi-agent reinforcement learning simulator, GM Gym, to serve as a closed-loop simulation environment that can both simulate high-fidelity sensor data and model thousands of drivers per second in an abstract environment known as “Boxworld.”

By focusing on essentials like spatial positioning, velocity, and rules of the road while stripping away details like puddles and potholes, Boxworld creates a high-speed training environment for reinforcement learning models, operating 50,000 times faster than real time and simulating 1,000 km of driving per second of GPU time. It’s a method that allows us not just to imitate humans, but to develop driving models that have verifiable objective outcomes, like safety and progress.
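Those two figures are mutually consistent if the simulated fleet averages about 72 kilometers per hour, which is our assumption for this back-of-envelope check, not a GM number.

    # Sketch: sanity-check the quoted Boxworld throughput figures.
    km_per_gpu_second = 1_000        # quoted: 1,000 km per second of GPU time
    avg_speed_kmh = 72               # assumed mean simulated driving speed
    driving_seconds = km_per_gpu_second / avg_speed_kmh * 3600
    print(f"{driving_seconds:,.0f}x real time")   # -> 50,000x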

From abstract policy to real-world driving

Of course, the route from your home to your office does not run through Boxworld. It passes through a world of asphalt, shadows, and weather. So, to bring that conceptual expertise into the real world, GM is one of the first to employ a technique called “On Policy Distillation,” where engineers run their simulator in both modes simultaneously: the abstract, high-speed Boxworld and the high-fidelity sensor mode.

Here, the reinforcement learning model—which has practiced countless abstract miles to develop a perfect “policy,” or driving strategy—acts as a teacher. It guides its “student,” the model that will eventually live in the car. This transfer of wisdom is incredibly efficient; just 30 minutes of distillation can capture the equivalent of 12 hours of raw reinforcement learning, allowing the real-world model to rapidly inherit the safety instincts its cousin painstakingly honed in simulation.
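A minimal sketch of the on-policy part of that recipe: the student drives, and the frozen Boxworld teacher labels the very states the student visits. The environment and model interfaces below are hypothetical, not GM’s code.

    # Sketch: on-policy distillation from an abstract-state teacher
    # to a sensor-input student.
    import torch
    import torch.nn.functional as F

    def distill_step(teacher, student, env, optimizer):
        sensor_obs, abstract_state = env.observe()    # paired views of one scene
        with torch.no_grad():
            teacher_logits = teacher(abstract_state)  # frozen expert policy
        student_logits = student(sensor_obs)          # model that ships in the car
        loss = F.kl_div(                              # match action distributions
            F.log_softmax(student_logits, dim=-1),
            F.softmax(teacher_logits, dim=-1),
            reduction="batchmean",
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        env.step(student_logits.argmax(dim=-1))       # on-policy: the student drives
        return loss.item()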

Designing failures before they happen

Simulation isn’t just about training the model to drive well, though; it’s also about trying to make it fail. To rigorously stress-test the system, GM utilizes a differentiable pipeline called SHIFT3D. Instead of just recreating the world, SHIFT3D actively modifies it to create “adversarial” objects designed to trick the perception system. The pipeline takes a standard object, like a sedan, and subtly morphs its shape and pose until it becomes a “challenging,” fun-house version that is harder for the AI to detect. Optimizing for these failure modes is what allows engineers to preemptively discover safety risks before they ever appear on the road. Iteratively retraining the model on these generated “hard” objects has been shown to reduce near-miss collisions by over 30%, closing the safety gap on edge cases that might otherwise be missed.
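In spirit, that adversarial search can be sketched as gradient descent on an object’s shape parameters against the detector’s confidence. The differentiable renderer and detector interfaces below are assumptions, not the SHIFT3D API.

    # Sketch: morph a shape until the detector's confidence drops.
    import torch

    def find_adversarial_shape(render, detector, shape_params,
                               steps: int = 100, lr: float = 0.01):
        shape = shape_params.clone().requires_grad_(True)
        opt = torch.optim.Adam([shape], lr=lr)
        for _ in range(steps):
            image = render(shape)           # differentiable rendering of the scene
            loss = detector(image).mean()   # detection confidence to minimize
            opt.zero_grad()
            loss.backward()
            opt.step()
        return shape.detach()               # a harder, "fun-house" training case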

Even with advanced simulation and adversarial testing, a truly robust system must know its own limits. To enable safety in the face of the unknown, GM researchers add a specialized “Epistemic uncertainty head” to their models. This architectural addition allows the AI to distinguish between standard noise and genuine confusion. When the model encounters a scenario it doesn’t understand—a true “long tail” event—it signals high epistemic uncertainty. This acts as a principled proxy for data mining, automatically flagging the most confusing and high-value examples for engineers to analyze and add to the training set.
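One simple way to realize such a head, sketched below with hypothetical names and the training objective omitted: a second output that scores the model’s own confusion, thresholded to queue frames for review.

    # Sketch: a policy network with an extra epistemic-uncertainty output.
    import torch
    import torch.nn as nn

    class PolicyWithUncertainty(nn.Module):
        def __init__(self, encoder: nn.Module, feat_dim: int, n_actions: int):
            super().__init__()
            self.encoder = encoder
            self.action_head = nn.Linear(feat_dim, n_actions)
            self.uncertainty_head = nn.Linear(feat_dim, 1)   # epistemic signal

        def forward(self, obs):
            z = self.encoder(obs)
            return self.action_head(z), self.uncertainty_head(z).squeeze(-1)

    def mine_hard_examples(model, frames, threshold: float = 2.0):
        # Flag the most confusing frames for engineers and the training set.
        # Assumes each frame is a batch of one.
        return [f for f in frames if model(f)[1].item() > threshold]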

This rigorous, multi-faceted approach—from “Boxworld” strategy to adversarial stress-testing—is General Motors’ proposed framework for solving the final 1% of autonomy. And while it serves as the foundation for future development, it also surfaces new research challenges that engineers must address.

How do we balance the essentially unlimited data from Reinforcement Learning with the finite but richer data we get from real-world driving? How close can we get to full, human-like driving by writing down a reward function? Can we go beyond domain change to generate completely new scenarios with novel objects?

Solving the long tail at scale

Working toward solving the long tail of autonomy is not about a single model or technique. It requires an ecosystem — one that combines high-fidelity simulation with abstract learning environments, reinforcement learning with imitation, and semantic reasoning with split-second control.

This approach does more than improve performance on average cases. It is designed to surface the rare, ambiguous, and difficult scenarios that determine whether autonomy is truly ready to operate without human supervision.

There are still open research questions. How human-like can a driving policy become when optimized through reward functions? How do we best combine unlimited simulated experience with the richer priors embedded in real human driving? And how far can generative world models take us in creating meaningful, safety-critical edge cases?

Answering these questions is central to the future of autonomous driving. At GM, we are building the tools, infrastructure, and research culture needed to address them — not at small scale, but at the scale required for real vehicles, real customers, and real roads.

30 Years Ago, Robots Learned to Walk Without Falling

2026-03-26 02:00:05



When you hear the term humanoid robot, you may think of C-3PO, the human-cyborg-relations android from Star Wars. C-3PO was designed to assist humans in communicating with robots and alien species. The droid, which first appeared on screen in 1977, joined the characters on their adventures, walking, talking, and interacting with the environment like a human. It was ahead of its time.

Before the release of Star Wars, a few androids did exist and could move and interact with their environment, but none could do so without losing its balance.

It wasn’t until 1996 that the first autonomous robot capable of walking without falling was developed in Japan. Honda’s Prototype 2 (P2) was nearly 183 centimeters tall and weighed 210 kilograms. It could control its posture to maintain balance, and it could move multiple joints simultaneously.

In recognition of that decades-old feat, P2 has been honored as an IEEE Milestone. The dedication ceremony is scheduled for 28 April at the Honda Collection Hall, located on the grounds of the Mobility Resort Motegi, in Japan. The machine is on display in the hall’s robotics exhibit, which showcases the evolution of Honda’s humanoid technology.

In support of the Milestone nomination, members of the IEEE Nagoya (Japan) Section wrote: “This milestone demonstrated the feasibility of humanlike locomotion in machines, setting a new standard in robotics.” The Milestone proposal is available on the Engineering Technology and History Wiki.

Developing a domestic android

In 1986 Honda researchers Kazuo Hirai, Masato Hirose, Yuji Haikawa, and Toru Takenaka set out to develop what they called a “domestic robot” to collaborate with humans. It would be able to climb stairs, remove impediments in its path, and tighten a nut with a wrench, according to their research paper on the project.

“We believe that a robot working within a household is the type of robot that consumers may find useful,” the authors wrote.

But to create a machine that would do household chores, it had to be able to move around obstacles such as furniture, stairs, and doorways. It needed to autonomously walk and read its environment like a human, according to the researchers.

But no robot could do that at the time. The closest technologists got was the WABOT-1. Built in 1973 at Waseda University, in Tokyo, the WABOT had eyes and ears, could speak Japanese, and used tactile sensors embedded on its hands as it gripped and moved objects. Although the WABOT could walk, albeit unsteadily, it couldn’t maneuver around obstacles or maintain its balance. It was powered by an external battery and computer.

To build an android, the Honda team began by analyzing how people move, using themselves as models.

That led to specifications for the robot that gave it humanlike dimensions, including the location of the leg joints and how far the legs could rotate.

Once they began building the machine, though, the engineers found it difficult to satisfy every specification. Adjustments were made to the number of joints in the robot’s hips, knees, and ankles, according to the research paper. Humans have four hip, two knee, and three ankle joints; P2’s predecessor had three hip, one knee, and two ankle joints. The arms were treated similarly. A human’s four shoulder and three elbow joints became three shoulder joints and one elbow joint in the robot.

The researchers installed existing Honda motors and hydraulics in the hips, knees, and ankles to enable the robot to walk. Each joint was operated by a DC motor with a harmonic-drive reduction gear system, which is compact and offered high torque capacity.

To test their ideas, the engineers built what they called E0. The robot, which was just a pair of connected legs, successfully walked. It took about 15 seconds to take each step, however, and it moved using static walking in a straight line, according to a post about the project on Honda’s website. (Static walking is when the body’s center of mass is always within the foot’s sole. Humans walk with their center of mass below their navel.)

The researchers created several algorithms to enable the robot to walk like a human, according to the Honda website. The algorithms allowed the robot to use a locomotion mechanism called dynamic walking, whereby the robot stays upright by constantly moving and adjusting its balance rather than keeping its center of mass over its feet, according to a video on the YouTube channel Everything About Robotics Explained.


The Honda team installed rubber brushes on the bottom of the machine’s feet to reduce vibrations from the landing impacts (the force experienced when its feet touch the ground)—which had made the robot lose its balance.

Between 1987 and 1991, three more prototypes (E1, E2, and E3) were built, each testing a new algorithm. E3 was a success.

With the dynamic walking mechanism complete, the researchers continued their quest to make the robot stable. The team added 6-axis sensors to detect the force at which the ground pushed back against the robot’s feet and the movements of each foot and ankle, allowing the robot to adjust its gait in real time for stability.

The team also developed a posture-stabilizing control system to help the robot stay upright. A local controller directed how the electric motor actuators needed to move so the robot could follow the leg joint angles when walking, according to the research paper.

During the next three years, the team tested the systems and built three more prototypes (E4, E5, and E6), which had boxlike torsos atop the legs.

In 1993 the team was finally ready to build an android with arms and a head that looked more like C-3PO, dubbed Prototype 1 (P1). Because the machine was meant to help people at home, the researchers determined its height and limb proportions based on the typical measurements of doorways and stairs. The arm length was based on the ability of the robot to pick up an object when squatting.

When they finished building P1, it was 191.5 cm tall, weighed 175 kg, and used an external power source and computer. It could turn a switch on and off, grab a doorknob, and carry a 70 kg object.

P1 was not launched publicly but instead used to conduct research on how to further improve the design. The engineers looked at how to install an internal power source and computer, for example, as well as how to coordinate the movement of the arms and legs, according to Honda.

For P2, four video cameras were installed in its head—two for vision processing and the other two for remote operation. The head was 60 cm wide and connected to the torso, which was 75.6 cm deep.

A computer with four microSPARC II processors running a real-time operating system was added to the robot’s torso. The processors were used to control the arms, legs, joints, and vision-processing cameras.

Also within the body were DC servo amplifiers, a 20-kg nickel-zinc battery, and a wireless Ethernet modem, according to the research paper. The battery lasted for about 15 minutes; the machine also could be charged by an external power supply.

The hardware was enclosed in white-and-gray casing.

P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly.


The following year, Honda’s engineers released the smaller and lighter P3. It was 160 cm tall and weighed 130 kg.

In 2000 the popular ASIMO robot was introduced. Although shorter than its predecessors at 130 cm, it could walk, run, climb stairs, and recognize voices and faces. The most recent version was released in 2011. Honda has retired the robot.

Honda P2’s influence

Thanks to P2, today’s androids are not just ideas in a laboratory. Robots have been deployed to work in factories and, increasingly, at home.

The machines are even being used for entertainment. During this year’s Spring Festival gala in Beijing, machines developed by Chinese startups Unitree Robotics, Galbot, Noetix, and MagicLab performed synchronized dances, martial arts, and backflips alongside human performers.

“P2’s development shifted the focus of robotics from industrial applications to human-centric designs,” the Milestone sponsors explained in the wiki entry. “It inspired subsequent advancements in humanoid robots and influenced research in fields like biomechanics and artificial intelligence.

“It was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.”

To learn more about robots, check out IEEE Spectrum’s guide.

Recognition as an IEEE Milestone

A plaque recognizing Honda’s P2 robot as an IEEE Milestone is to be installed at the Honda Collection Hall. The plaque is to read:

In 1996 Prototype 2 (P2), a self-contained autonomous bipedal humanoid robot capable of stable dynamic walking and stair-climbing, was introduced by Honda. Its legged robotics incorporated real-time posture control, dynamic balance, gait generation, and multijoint coordination. Honda’s mechatronics and control algorithms set technical benchmarks in mobility, autonomy, and human-robot interaction. P2 inspired new research in humanoid robot development, leading to increasingly sophisticated successors.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.

How IEEE 802.11bn Delivers Ultra-High Reliability for Wi-Fi 8

2026-03-25 22:22:07



A technical exploration of IEEE 802.11bn’s physical and MAC layer enhancements — including distributed resource units, enhanced long range, multi-AP coordination, and seamless roaming — that define Wi-Fi 8.

What Attendees Will Learn

  1. Why Wi-Fi 8 prioritizes reliability over raw throughput — Understand how IEEE 802.11bn shifts the design philosophy from peak data-rate gains to ultra-high reliability.
  2. How new physical layer features overcome uplink power limitations — Learn how distributed resource units spread tones across wider distribution bandwidths to boost per-tone transmit power (see the back-of-envelope sketch after this list), and how enhanced long range protocol data units use power-boosted preamble fields and frequency-domain duplication to extend uplink coverage.
  3. How advanced MAC coordination reduces interference and latency — Examine multi-access point coordination schemes — coordinated beamforming, spatial reuse, time division multiple access, and restricted target wake time — alongside non-primary channel access and priority enhanced distributed channel access.
  4. What seamless roaming and power management mean for next-generation deployments — Discover how seamless mobility domains eliminate reassociation delays during access point transitions, and how dynamic power save and multi-link power management let devices trade capability for battery life without sacrificing connectivity.
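On the physical-layer point in item 2, a back-of-envelope calculation shows why spreading tones helps: regulators cap transmit power per MHz of occupied spectrum, so occupying more MHz raises the total (and per-tone) power budget. The PSD cap below is an assumed example, not a figure from the standard.

    # Sketch: PSD-limited transmit power vs. occupied bandwidth.
    import math

    psd_dbm_per_mhz = 11                        # assumed regulatory PSD cap
    def max_power_dbm(occupied_mhz: float) -> float:
        return psd_dbm_per_mhz + 10 * math.log10(occupied_mhz)

    compact_ru = max_power_dbm(2)               # tones packed into 2 MHz
    distributed_ru = max_power_dbm(20)          # same tones spread over 20 MHz
    print(f"gain: {distributed_ru - compact_ru:.0f} dB")   # -> 10 dB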

Download this free whitepaper now!

What Happens When You Host an AI Café

2026-03-25 22:00:05



“Can I get an interview?” “Can I get a job when I graduate?” Those questions came from students during a candid discussion about artificial intelligence, capturing the anxiety many young people feel today. As companies adopt AI-driven interview screeners, restructure their workforces, and redirect billions of dollars toward AI infrastructure, students are increasingly unsure of what the future of work will look like.

We had gathered people together at a coffee shop in Auburn, Alabama, for what we called an AI Café. The event was designed to confront concerns about AI directly, demystifying the technology while pushing back against the growing narrative of technological doom.

AI is reshaping society at breathtaking speed. Yet the trajectory of this transformation is being charted primarily by for-profit tech companies, whose priorities revolve around market dominance rather than public welfare. Many people feel that AI is something being done to them rather than developed with them.

As computer science and liberal arts faculty at Auburn University, we believe there is another path forward: one where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.

The AI Café Model

Last November, we ran two public AI Cafés in Auburn. These were informal, 90-minute conversations between faculty, students, and community members about their experiences with AI. In these conversational forums, participants sat in clusters, questions flowed in multiple directions, and lived experience carried as much weight as technical expertise.

We avoided jargon and resisted attempts to “correct” misconceptions, welcoming whatever emotions emerged. One ground rule proved crucial: keeping discussions in the present, asking participants where they encounter AI today. Without that focus, conversations could easily drift to sci-fi speculation. Historical analogies—to the printing press, electricity, and smartphones—helped people contextualize their reactions. And we found that without shared definitions of AI, people talked past each other; we learned to ask participants to name specific tools they were concerned about.

Organizers Xaq Frohlich, Cheryl Seals, and Joan Harrell (right) held their first AI Café in a welcoming coffee shop and bookstore. [Photo: Well Red]

Most important, we approached these events not as experts enlightening the masses, but as community members navigating complex change together.

What We Learned by Listening

Participants arrived with significant frustration. They felt that commercial interests were driving AI development “without consideration of public needs,” as one attendee put it. This echoed deeper anxieties about technology, from social media algorithms that amplify division to devices that profit from “engagement” and replace meaningful face-to-face connection. People aren’t simply “afraid of AI.” They’re weary of a pattern where powerful technologies reshape their lives while they have little say.

Yet when given space to voice concerns without dismissal, something shifted. Participants didn’t want to stop AI development; they wanted to have a voice in it. When we asked “What would a human-centered AI future look like?” the conversation became constructive. People articulated priorities: fairness over efficiency, creativity over automation, dignity over convenience, community over individualism.

The three organizers, all professors at Alabama’s Auburn University, say that including people from the liberal arts fields brought new perspectives to the discussions about AI. [Photo: Well Red]

For us as organizers, the experience was transformative. Hearing how AI affected people’s work, their children’s education, and their trust in information prompted us to consider dimensions we hadn’t fully grasped. Perhaps most striking was the gratitude participants expressed for being heard. It wasn’t about filling knowledge deficits; it was about mutual learning. The trust generated created a spillover effect, renewing faith that AI could serve the public interest if shaped through inclusive processes.

How to Start Your Own AI Café

The “deficit model” of science communication—where experts transmit knowledge to an uninformed public—has been discredited. Public resistance to emerging technologies reflects legitimate concerns about values, risks, and who controls decision-making. Our events point toward a better model.

We urge engineering and liberal arts departments, professional societies, and community organizations worldwide to organize dialogues similar to our AI Cafés.

We found that a few simple design choices made these conversations far more productive. Informal and welcoming spaces such as coffee shops, libraries, and community centers helped participants feel comfortable (and serving food and drinks helped too!). Starting with small-group discussions, where people talked with neighbors, produced more honest thinking and greater participation. Partnering with colleagues in the liberal arts brought additional perspectives on technology’s social dimensions. And by making a commitment to an ongoing series of events, we built trust.

Facilitation also matters. Rather than leading with technical expertise, we began with values: We asked what kind of world participants wanted, and how AI might help or hinder that vision. We used analogies to earlier technologies to help people situate their reactions and grounded discussions in present realities, asking participants where they have encountered AI in their daily lives. We welcomed emotions constructively, transforming worry into problem solving by asking questions like: “What would you do about that?”

Why Engineers Should Engage the Public

Professional ethics codes remain abstract unless grounded in dialogue with affected communities. Conversations about what “responsible AI” means will look different in São Paulo than in Seoul, in Vienna than in Nairobi. What makes the AI Café model portable is its general principles: informal settings, values-first questions, present-tense focus, genuine listening.

Without such engagement, ethical accountability quietly shifts to technical experts rather than remaining a shared public concern. If we let commercial interests define AI’s trajectory with minimal public input, it will only deepen divides and entrench inequities.

AI will continue advancing whether or not we have public trust. But AI shaped through dialogue with communities will look fundamentally different from AI developed solely to pursue what’s technically possible or commercially profitable.

The tools for this work aren’t technical; they’re social, requiring humility, patience, and genuine curiosity. The question isn’t whether AI will transform society. It’s whether that transformation will be done to people or with them. We believe scholars must choose the latter, and that starts with showing up in coffee shops and community centers to have conversations where we do less talking and more listening.

The future of AI depends on it.


Are U.S. Engineering Ph.D. Programs Losing Students?

2026-03-25 21:00:05



U.S. doctoral programs in electrical engineering form the foundation of technological advancement, training the brightest minds in the world to research, develop, and design next-generation electronics, software, electrical infrastructure, and other high-tech products and systems. Elite institutions have long served as launchpads for the engineers behind tomorrow’s technology.

Now that foundation is under strain.

With U.S. universities increasingly entangled in political battles under the second Trump administration, uncertainty is beginning to ripple through doctoral admissions for electrical engineering programs. While some departments are reducing the number of spots available in anticipation of potential federal funding cuts, others are seeing their applicant pools shrink, particularly among international students, who make up a significant portion of their programs.

In 2024 alone, U.S. universities awarded more than 2,000 doctorates in electrical and computer engineering, according to data from the National Center for Science and Engineering Statistics. The number of computing Ph.D.s grew significantly in the 2010s, according to data from the National Academies, but there is still high demand for those with advanced degrees across academia, government, and industry. Now, some universities point to warning signs of waning enrollment.

Though not all engineers have Ph.D.s, continued enrollment declines could mean fewer engineers developing cutting-edge technology and training the next generation, potentially exacerbating existing labor shortages as global competition for tech talent intensifies.

Federal funding cuts affect admissions

Public universities in particular are feeling the strain because they rely heavily on federal grants to support doctoral students.

The University of California, Los Angeles, for instance, must fund Ph.D. students for the duration of a degree—typically five years. In August 2025, the U.S. government pulled more than US $580 million in federal grants over allegations that the university failed to adequately address antisemitism on campus during student protests. A federal judge has since ordered the funding to be restored, but faculty began to worry that research support could be clawed back without notice, says Subramanian Iyer, distinguished professor at UC Los Angeles’s department of electrical and computer engineering.

According to Iyer, departments across UC Los Angeles, including engineering, plan to scale back Ph.D. admissions this year. “The fear is that at some point, all this government money will be taken away,” Iyer says. “Lowering the admissions rate is just a way to prepare for that reality.”

In response to a request for comment, a spokesperson for the U.S. National Science Foundation—a major source of federal research funding at UC Los Angeles and elsewhere—said, “NSF recognizes the essential role doctoral trainees play in the nation’s engineering and STEM enterprise” and noted several of the foundation’s awards and programs that support graduate research.

Funding shocks may also force Pennsylvania State University to reshape future admissions decisions, according to Madhavan Swaminathan, head of Penn State’s electrical engineering department and director of the Center for Heterogeneous Integration of Micro Electronic Systems (CHIMES), a semiconductor research lab.

In 2023, the Defense Advanced Research Projects Agency (DARPA) and industry partners awarded CHIMES a five-year $32.7 million grant. But in late 2025, the agency pulled its final year of funding from the center, citing a shift in priorities from microelectronics to photonics, Swaminathan says. As a result, CHIMES’ annual budget, which supports research assistantships for roughly 100 engineering graduate students, the majority pursuing Ph.D.s, will fall from $7 million in 2026 to $3.5 million in 2027. If these constraints persist, Penn State’s engineering department may reduce the number of doctoral students it supports.

In a statement, a DARPA spokesperson told IEEE Spectrum: “Basic research is central to identifying world-changing technologies, and DARPA remains committed to engaging academic institutions in our program research. By design, a DARPA program typically lasts about 3 to 5 years. Once we establish proof of concept, we transition the technology for further development and turn our attention to other challenging areas of research.”

Penn State’s enrollment numbers reflect Swaminathan’s caution. He says the electrical engineering Ph.D. cohort shrank from 28 students in 2024 to 15 students in 2025. Applications show a similar pattern. After rising from 195 in 2024 to 247 in 2025, Ph.D. applications fell roughly 30 percent to 174 for the upcoming 2026 cohort, a sign that prospective students may be wary of applying to U.S. programs.

Immigration restrictions and application declines

In late January, the Trump administration announced it had paused visa approvals for citizens of 75 countries. Months earlier, the administration proposed new restrictions on student visas, including a four-year cap.

For Texas A&M University’s graduate electrical and computer engineering programs, up to 80 percent of applicants each year are international students, according to Narasimha Annapareddy, professor and head of the department. Annapareddy says applications for the fall 2026 Ph.D. cohort have dropped by roughly 50 percent.

Annapareddy says the United States is “sending a message that migration is going to be more difficult in the future.” Foreign students often pursue degrees in the U.S. not only for academic training, he says, but to build long-term careers and lives in the country. Fewer applications from international students mean that the university forgoes a “driven and hungry” segment of the applicant pool who are highly qualified in technical fields.


At the University of Southern California, the decline is more moderate. The freshman Ph.D. class fell from about 90 students in 2024 to roughly 70 in 2025, a reduction of 22 percent, according to Richard Leahy, department chair of USC’s Ming Hsieh Department of Electrical and Computer Engineering.

While Leahy says applications are down modestly overall, domestic applications have increased by roughly 15 percent. Beyond immigration restrictions, international students, particularly from countries such as India and China, may be staying in their home countries as their technology sectors expand.

“A lot of those students that would normally have come to the U.S. are now taking very good jobs working in the AI industry and other areas,” Leahy says. “There are a lot more opportunities now.”

Workforce pipeline strains

Some faculty say shrinking cohorts could erode the tech workforce if the pattern continues.

At UC Los Angeles, Iyer describes a doctoral ecosystem built on a chain of mentorship. Among the roughly 25 students in his lab, senior doctoral students mentor junior Ph.D. candidates, who in turn guide master’s students and undergraduates. The system depends on overlapping cohorts. Reducing the number of students hired weakens that overlap and the trickle-down benefits of the mentorship model that keeps labs functioning.

The real benefit of the university system isn’t just the teaching but also “the community that you build,” Iyer says. “As you decrease admissions, this will disappear.”

At Penn State, Swaminathan points to specialization as key to a strong workforce. Many doctoral students train in semiconductor engineering, feeding expert talent into the domestic chip industry. If enrollment continues to shrink over the next few years, Swaminathan says, companies may need to hire students with bachelor’s or master’s degrees, who might lack the skills required to design and innovate on new chips.

“Without that specialization, there’s only so much one can do,” Swaminathan says.

The industry–academia gap

Not all departments are shrinking. At the University of Texas at Austin, overall enrollment has remained relatively steady, according to Diana Marculescu, chair of UT Austin’s Chandra Family Department of Electrical and Computer Engineering.

While she says recent fluctuations aren’t raising alarms, her concern lies more with alignment between research and industry. Doctoral students often train according to current grant priorities, she says. But by the time graduates enter the job market four to six years later, their specialization may not align neatly with open roles. That creates friction in the talent pipeline.

“That lack of connection might be problematic,” Marculescu says. She argues that closer collaboration between universities and the private sector could help create stronger feedback loops between hiring needs and academic research priorities.

For now, USC’s Leahy says Ph.D. graduates remain in high demand, and the current shifts have not yet translated into measurable workforce shortages. “We should be concerned about the number of Ph.D.s,” he says. “But there isn’t a crisis at this point.”