
Working With More Experienced Engineers Can Fast-Track Career Growth

2026-04-11 02:49:00



This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

The Worst Engineer in the Room

My salary doubled. My confidence tanked.

That’s what happened when I had just joined a five-person startup in San Francisco in my third year as a software engineer. Two of the founders had been recognized in Forbes 30 Under 30. The team was exceptional by any measure.

On my first day, someone made a joke about Dijkstra’s algorithm. Everyone laughed. I smiled along, then looked it up afterward so I could understand why it was funny. Dijkstra’s algorithm finds the shortest path between two points—the math underlying GPS navigation. It’s a foundational concept in virtually every formal computer science curriculum. I had never encountered it.
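For readers who, like me back then, have never met it: Dijkstra’s algorithm repeatedly settles the closest unvisited node using a priority queue. A minimal sketch in Python (the road network and weights here are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source to every reachable node.

    graph: dict mapping node -> list of (neighbor, edge_weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]  # priority queue ordered by tentative distance
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(heap, (candidate, neighbor))
    return dist

# A made-up road network; edge weights might represent travel minutes.
roads = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 6)],
    "B": [("D", 1)],
    "D": [],
}
print(dijkstra(roads, "A"))
```

Note that the direct edge A→B costs 4, but the algorithm finds the cheaper detour A→C→B at cost 3, which is exactly the kind of routing a GPS does at scale.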

That moment reflected a broader pattern. Conversations about system design and tradeoffs often felt just out of reach. I could follow parts of them, but not enough to contribute meaningfully.

I was mostly self-taught. Wide coverage, shallow roots. The engineers around me had roots. You could feel it in how they reasoned through problems, how they talked about tradeoffs, how they debugged with patience instead of pure panic.

The Advice That Sounds Good Until You’re Living It

You’ve heard the phrase: “If you’re the smartest person in the room, you’re in the wrong room.”

It sounds aspirational. What nobody tells you is what it actually feels like to be in that room. It feels like barely following system design conversations. Like nodding along to discussions you can only partially decode. Like shipping solutions through trial and error and hoping nobody looks too closely.

Being the weakest engineer in the room is genuinely uncomfortable. It surfaces every gap. And if you’re not careful, it pushes you in exactly the wrong direction.

My instinct was to make myself smaller. On a team of five, every voice mattered. I stopped offering mine. I rushed toward working solutions without real understanding, hoping velocity would compensate for depth.

I was working harder and, at the same time, I was not improving.

The turning point came when one of the most senior engineers left. Before departing, he told me it was difficult to work with me because I lacked foundational programming knowledge, listing out the concepts he saw me struggle with.

For the first time, what had felt like vague inadequacy became something specific.

What the Cliché Misses

Proximity to stronger engineers is not sufficient on its own. You won’t absorb their skill through osmosis. The engineers who thrive when they’re outmatched are not the ones who wait for confidence to arrive. They treat the discomfort as diagnostic information.

What can they answer that I can’t? What do they see in a system that I’m missing?

I defined a clear picture of the engineer I wanted to become and compared it to where I was. I wrote down what I did not know. I identified how I would close each gap with books, tutorials and small projects. I asked for recommendations from the same engineer who gave me the hard feedback.

I figured out the gaps. Then the bridges. Then I worked through each of them.

Over time, conversations became clearer. Debugging became more systematic. I started contributing meaningfully rather than just executing tasks.

The Other Room Nobody Warns You About

There’s a less-obvious version of this same problem: when you’re the strongest engineer in the room.

It can feel rewarding. Less friction, more validation. But there’s also less growth. When you’re at the ceiling, there’s no external pressure to raise your own floor. The feedback loops that sharpen judgment go quiet. Some engineers spend years there without noticing. They’re good. They’re comfortable. They stop getting better.

Both rooms carry risk. One threatens your confidence. The other threatens your trajectory.

Being the weakest engineer in a strong room is an advantage, but only if you treat it like one. It gives you a clear benchmark. But the room doesn’t do the work for you. You have to name the gaps, build a plan, and follow through.

And if you ever find yourself in the other room, where you’re clearly the strongest, pay attention to how long you’ve been there.

Both rooms are trying to tell you something.

—Brian

Are U.S. Engineering Ph.D. Programs Losing Students?

Not every engineer has a doctorate, but Ph.D. engineers are an essential part of the workforce, researching and designing tomorrow’s high-tech products and systems. In the United States, early signs are emerging that Ph.D. programs in electrical engineering and related fields may be shrinking. Political and economic uncertainty means some universities are now seeing smaller applicant pools and graduate cohorts.

Read more here.

What Happens When You Host an AI Cafe

Last November, three professors at Auburn University in Alabama hosted a gathering at a coffee shop to confront students’ concerns about AI. The event, which they call an “AI Café,” was meant to create an environment “where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.” In a guest article, they share what they learned at the event and tips for starting your own AI Café.

Read more here.

What Is Inference Engineering?

Inference, the process of running a trained AI model on new data, is increasingly becoming a focus in the world of AI engineering. The growth of open LLMs means that more engineers can now tweak the models to perform better at inference. Given this trend, a recent issue of the Substack “The Pragmatic Engineer” does a deep dive on inference engineering—what it is, when it’s needed, and how to do it.

Read more here.

Remembering Gus Gaynor: A Devoted IEEE Volunteer

2026-04-10 02:00:02



Gerard “Gus” Gaynor, a long-serving IEEE volunteer and former engineering director at 3M, died on 9 March. The IEEE Life Fellow was 104.

Readers of The Institute might remember Gus from his 2022 profile: “From Fixing Farm Equipment to Becoming a Director at 3M.” Just last year, he and I coauthored two articles. One discusses how to leverage relationships to boost your career growth. The other weighs the pros and cons of pursuing a technical or managerial career path. He was 103 years old then. How many IEEE members can claim a centenarian coauthor?

I first met Gus in 2009 at the IEEE Technical Activities Board (TAB) meeting in San Juan, Puerto Rico. We sat together on the airplane on our way back to Minneapolis, our hometown. At home I told many of my friends about the remarkable person—who was 87 years young at the time—with whom I chatted during our six-hour flight.

A decade later, he and I met for lunch in Minneapolis. He drove himself to the restaurant, just asking for a hand to navigate the snowy sidewalk.

A dedicated IEEE volunteer

Gus’s involvement with IEEE predates the organization. He joined the Institute of Radio Engineers, a predecessor society, as a student member in 1942. Twenty years later he became an active IEEE volunteer.

He served on the TAB’s finance committee and the Publications Services and Products Board. He was president of the IEEE Engineering Management Society (now the Technology and Engineering Management Society), and he was the Technology Management Council’s first president. He was the founding editor of IEEE-USA’s online magazine Today’s Engineer, which reported on government legislation and issues affecting U.S. members’ careers. The magazine is now available as the e-newsletter IEEE-USA InSight.

He authored several books on technology management and other topics, published by IEEE-USA and IEEE-Wiley.

IEEE Life Fellow Gerard “Gus” Gaynor died on 9 March. The Gaynor Family

Most recently, after the formation of TEMS in 2015, he became an active member of its executive committee. He served two terms as vice president of publications.

At 100 years old, he led the launch of a new publication, TEMS Leadership Briefs, a novel short-format open-access publication aimed at technology leaders.

Gus, a former member of The Institute’s editorial advisory board, also worked with Kathy Pretz, The Institute’s editor in chief, to start an ongoing series of TEMS-sponsored career-interest articles. He coauthored several of them.

Throughout his 64 years as an IEEE volunteer, he received several honors. They include IEEE EMS’s Engineering Manager of the Year Award, the IEEE TEMS Career Achievement Award, and the IEEE-USA McClure Citation of Honor. In 2014 he was inducted into the IEEE Technical Activities Board Hall of Honor.

A 25-year career at 3M

Gus received a degree in electrical engineering in 1950 from the University of Michigan in Ann Arbor. He worked for several companies including Automatic Electric (now part of Nokia) and Johnson Farebox (now part of Genfare), before joining 3M in 1962.

During his successful 25-year career at 3M, he served as chief engineer for a division in Italy, established the innovation department, and led the design and installation of the company’s first computerized manufacturing facilities. He retired as director of engineering in 1987.

Last year, IEEE Life Fellow Michael Condry, a former TEMS president, organized a Zoom call with Gus and other leaders of the society to celebrate Gus’s 104th birthday. Gus looked well and was his usual upbeat self, telling everyone: “I’m good. Everything’s well. I can’t complain.”

Gus was married to Shirley Margaret Karrels Gaynor, who passed away in 2018. He lives on in the hearts and minds of his seven children, seven grandchildren, two great-grandchildren, and innumerable friends and IEEE colleagues.

ZTASP: A Zero-Trust Platform for Governing Autonomous Systems at Mission Scale

2026-04-09 23:06:39



ZTASP is a mission-scale assurance and governance platform designed for autonomous systems operating in real-world environments. It integrates heterogeneous systems—including drones, robots, sensors, and human operators—into a unified zero-trust architecture. Through Secure Runtime Assurance (SRTA) and Secure Spatio-Temporal Reasoning (SSTR), ZTASP continuously verifies system integrity, enforces safety constraints, and enables resilient operation even under degraded conditions.

ZTASP has progressed beyond conceptual design, with operational validation at Technology Readiness Level (TRL) 7 in mission-critical environments. Core components, including Saluki secure flight controllers, have reached TRL 8 and are deployed in customer systems. While initially developed for high-consequence mission environments, the same assurance challenges are increasingly present across domains such as healthcare, transportation, and critical infrastructure.

Download this free whitepaper now!

Chip Can Project Video the Size of a Grain of Sand

2026-04-09 21:00:01



By many estimates, quantum computers will need millions of qubits to realize their potential applications in cybersecurity, drug development, and other industries. The problem is that, for certain kinds of qubits, controlling millions of them simultaneously means controlling millions of laser beams.

That’s exactly the challenge that was faced by scientists working on the MITRE Quantum Moonshot project, which brought together scientists from MITRE, MIT, the University of Colorado at Boulder, and Sandia National Laboratories. The solution they developed came in the form of an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. The device is a one-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg cells.

“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, one of the leaders of the Quantum Moonshot project, a collaborative research effort focused on developing a scalable diamond-based quantum computer, and a professor of quantum engineering at the University of Colorado at Boulder. Each second, their chip is capable of projecting 68.6 million individual spots of light—called scannable pixels to differentiate them from physical pixels. That’s more than fifty times the capability of previous technology, such as micro-electromechanical systems (MEMS) micromirror arrays.

“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says Henry Wen, a visiting researcher at MIT and a photonics engineer at QuEra Computing.

The chip’s distinguishing feature is an array of tiny micro-scale cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski-jumps” for light. Light is channeled along the length of each cantilever via a waveguide and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric material that expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.

Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of several submicrometer layers of material and curls approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers caused by physical stresses in the material resulting from the fabrication process. The materials are first deposited flat onto the chip. Then, a layer in the chip below the cantilever is removed, allowing the material stresses to take effect, releasing the cantilever from the chip and allowing it to curl out. The top layer of each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width while also improving its length-wise curvature.

A micro-cantilever wiggles and waggles to project light in the right place. Matt Saha, Y. Henry Wen, et al.

What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ motion and light beams to generate the right colors at the right time was a substantial effort, according to Andy Greenspon, a researcher at MITRE who also worked on the project. Now, the team has successfully projected a variety of videos from a single cantilever, including clips from the movie A Charlie Brown Christmas.

The chip projected a roughly 125-micrometer image of the Mona Lisa. Matt Saha, Y. Henry Wen, et al.

Because the chip can project so many more spots in any given time interval than any previous beam scanner, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits, so it needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needed to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers.

Another process that Wen thinks the chip could improve is scanning objects for 3D printing. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen.

Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. Wen says that such unusual shapes could be useful in making a lab-on-a-chip for cell biology or drug development. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”

Temple University Student Highlights IEEE Membership Perks

2026-04-08 02:00:02



Kyle McGinley graduated from high school in 2018 and, like many teenagers, he was unsure what career he wanted to pursue. Recuperating from a sports injury led him to consider becoming a physical therapist for athletes. But he was skilled at repairing cars and fixing things around the house, so he thought about becoming an engineer, like his father.

McGinley, who lives in Sellersville, Pa., took some classes at Montgomery County Community College in Blue Bell, while also working. During his years at the college, he took a variety of courses and was drawn to electrical engineering and computing, he says. He left to pursue a bachelor’s degree in electrical and computer engineering in Philadelphia at Temple University, where he is currently a junior.

Kyle McGinley

MEMBER GRADE: Student member

UNIVERSITY: Temple, in Philadelphia

MAJOR: Electrical and computer engineering

The 26-year-old is also a teaching assistant and a research assistant at Temple. His research focuses on applying artificial intelligence to electrical hardware and robotics. He helped build an AI-integrated android companion to assist in-home caregivers.

Temple recognized McGinley’s efforts last year with its Butz scholarship, which is awarded annually to an electrical and computer engineering undergraduate with an interest in software development, AI development systems, health education software, or a similar field.

An IEEE student member, he is active within the university’s student branch.

“Dr. Brian Butz, the late professor emeritus, dedicated his research to artificial intelligence,” McGinley says. “The scholarship he and his wife Susan established helps allow students to pursue research in AI. Their generous donation has helped fund my research.”

Building a robot aide

McGinley is a teaching assistant for his digital circuit design course. In a class of 35 students, it can be a struggle for some to digest the professor’s words, he says.

“My job is to answer students’ questions if they are having problems following the professor’s lecture or are confused about any of the topics,” he says. “In the lab, I help students debug code or with hardware issues they have on the FPGA [field-programmable gate array] boards.”

He also conducts research for the university’s Computer Fusion Lab under the supervision of IEEE Senior Member Li Bai, a professor of electrical and computer engineering. McGinley writes software programs at the lab.

One such assignment was working with the Temple School of Social Work at the Barnett College of Public Health to build a robot companion integrated with AI to assist individuals with Parkinson’s disease and their caregivers.

“I realized the need for this with my grandmother, when she was taking care of my grandfather,” he says. “It was a lot for her, trying to remember everything.”

Using the latest software and hardware, he and three classmates rebuilt an older lab robot. They installed an operating system and used Python and C++ for its control, perception, and behavior, he says. The students also incorporated Google’s Gemini AI to help with routine tasks such as scheduling medication reminders and setting alarms for upcoming doctor visits.

Kyle McGinley helped build an AI-integrated android to assist individuals with Parkinson’s disease and their caregivers. Temple University of Public Health

The AI-integrated android was intended to assist, not replace, the caregivers by handling the mental load of remembering tasks, he says.
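The article doesn’t include the team’s code, but the “mental load” handoff it describes, checking a schedule so the caregiver doesn’t have to, can be sketched in a few lines of Python (the schedule, messages, and function names below are invented for illustration, not taken from the students’ project):

```python
import datetime

# Hypothetical reminder schedule of the kind an assistive robot might store.
reminders = [
    {"time": datetime.time(8, 0), "message": "Morning medication"},
    {"time": datetime.time(20, 0), "message": "Evening medication"},
]

def due_reminders(now, reminders, window_minutes=5):
    """Return messages whose scheduled time falls inside the current window."""
    due = []
    for r in reminders:
        scheduled = datetime.datetime.combine(now.date(), r["time"])
        elapsed = (now - scheduled).total_seconds()
        if 0 <= elapsed < window_minutes * 60:
            due.append(r["message"])
    return due

now = datetime.datetime(2026, 4, 8, 8, 2)
print(due_reminders(now, reminders))  # ['Morning medication']
```

A loop like this would run periodically on the robot, with the due messages handed off to speech output; the hard engineering in the real project lies in the perception and conversational layers around it.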

“This was one of the cool things that drew me to working in the robotics field,” he says. “Something where AI could be used to help caregivers do simple tasks.

“My career ambition after I graduate is to gain real-world experience in the engineering industry to learn skills outside of academia. Long term, I want to do project management or work in a technical lead role, with the primary goal of creating impactful projects that I can be proud of.”

The benefits of a student branch

McGinley joined Temple’s IEEE student branch last year after one of his professors offered extra credit to students who did so. After attending meetings and participating in a few workshops, he found he really liked the club, he says, adding that he made new friends and enjoyed the camaraderie with other engineering students.

After the student branch’s board members got to know McGinley better, they asked him to become the club’s historian and manage its social media account. He also helps with event planning, creating and posting fliers, taking pictures, and shooting videos of the gatherings.

The branch has benefited from McGinley’s involvement, but he says it’s a two-way street.

“The biggest things I’ve learned are being held accountable and being reliable,” he says. “I am responsible for other people knowing what’s going on.”

Being an active volunteer has improved his communication skills, he says.

“Learning to clearly communicate with other people to make sure everyone is on the same page is important,” he says. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.”

He encourages students to join their university’s IEEE branch.

“I know it can be scary because you might not know anyone, but it honestly can’t hurt you; it could actually benefit you,” he says. “Being active is going to help you with a lot of skills that you need.

“You’ll definitely get opportunities that you would have never known about, like a scholarship or working in the research lab. I would have never gotten these opportunities if I hadn’t shown up. Joining IEEE and being active is the best thing you can do for your career.”

This article was updated on 9 April 2026.

Decentralized Training Can Help Solve AI’s Energy Woes

2026-04-07 22:00:01



Artificial intelligence has an enormous appetite for energy. That appetite is evident in the hefty carbon footprint of the data centers behind the AI boom and in the steady rise over time of carbon emissions from training frontier AI models.

No wonder big tech companies are warming up to nuclear energy, envisioning a future fueled by reliable, carbon-free sources. But while nuclear-powered data centers might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.

Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is—be it a dormant server sitting in a research lab or a computer in a solar-powered home. Instead of constructing more data centers that require electric grids to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding more power into the mix.

Hardware in harmony

Training AI models is a huge data center sport, synchronized across clusters of closely connected GPUs. But as hardware improvements struggle to keep up with the swift rise in size of large language models, even massive single data centers are no longer cutting it.

Tech firms are turning to the pooled power of multiple data centers—no matter their location. Nvidia, for instance, launched the Spectrum-XGS Ethernet for scale-across networking, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, Cisco introduced its 8223 router designed to “connect geographically dispersed AI clusters.”

Other companies are harvesting idle compute in servers, sparking the emergence of a GPU-as-a-Service business model. Take Akash Network, a peer-to-peer cloud computing marketplace that bills itself as the “Airbnb for data centers.” Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power are tenants who can choose among providers and rent their GPUs.

“If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO Greg Osuri. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”

Software in sync

In addition to orchestrating the hardware, decentralized AI training also requires algorithmic changes on the software side. This is where federated learning, a form of distributed machine learning, comes in.

It starts with an initial version of a global AI model housed in a trusted entity such as a central server. The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains Lalana Kagal, a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the Decentralized Information Group. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.
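A toy version of that cycle, with a one-parameter linear model standing in for the network and plain weight averaging standing in for the server’s aggregation (all data, client counts, and names here are illustrative, not from any production system):

```python
import random

def local_train(w, data, lr=0.1):
    """One local epoch of gradient descent on a toy model y = w * x."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round: distribute the global weight, train locally, average results."""
    local_weights = [local_train(global_w, data) for data in clients]
    return sum(local_weights) / len(local_weights)

# Two simulated clients whose private data follows y = 3x plus a little noise.
# Only the trained weights ever leave a client; the raw data stays local.
random.seed(0)
clients = [
    [(x, 3 * x + random.gauss(0, 0.1)) for x in (0.1, 0.5, 1.0)]
    for _ in range(2)
]

w = 0.0  # initial global model held by the trusted server
for _ in range(20):  # repeated collaborative training cycles
    w = federated_round(w, clients)
print(f"learned weight: {w:.2f}")  # lands near the true slope of 3
```

The privacy-preserving property Kagal describes shows up in the interface: `federated_round` sees only weights, never the `(x, y)` pairs.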

But there are drawbacks to distributing both data and computation. The constant back and forth exchanges of model weights, for instance, result in high communication costs. Fault tolerance is another issue.

“A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”

To overcome these hurdles, researchers at Google DeepMind developed DiLoCo, a distributed low-communication optimization algorithm. DiLoCo forms what Google DeepMind research scientist Arthur Douillard calls “islands of compute,” where each island consists of a group of chips. Each island can hold a different chip type, but chips within an island must be of the same type. Islands are decoupled from each other, and knowledge is synchronized between them only once in a while. This decoupling means islands can perform training steps independently without communicating as often, and chips can fail without interrupting the remaining healthy chips. However, the team’s experiments found diminishing performance beyond eight islands.

An improved version dubbed Streaming DiLoCo further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.
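To make the communication pattern concrete, here is a heavily simplified sketch of the DiLoCo idea: two “islands,” many local steps, rare synchronization. It substitutes plain delta-averaging for the paper’s Nesterov-momentum outer optimizer and a one-parameter toy model for a real network, so treat it as an illustration of the schedule, not the algorithm itself:

```python
import random

def inner_steps(w, data, steps, lr=0.05):
    """An 'island' of compute runs many SGD steps without communicating."""
    for _ in range(steps):
        x, y = random.choice(data)
        w -= lr * 2 * (w * x - y) * x  # gradient step on toy model y = w * x
    return w

random.seed(1)
data = [(x / 10, 3 * x / 10) for x in range(1, 11)]  # noiseless y = 3x

global_w = 0.0
for outer_round in range(5):          # infrequent synchronization points
    deltas = []
    for island in range(2):           # islands work independently, decoupled
        local_w = inner_steps(global_w, data, steps=50)
        deltas.append(local_w - global_w)
    global_w += sum(deltas) / len(deltas)  # outer step: average the deltas
print(f"global weight after sync rounds: {global_w:.3f}")
```

Communication happens only five times here, versus once per step in fully synchronous training; that ratio is what lets the islands sit in different buildings, or different countries.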

AI development platform Prime Intellect implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter INTELLECT-1 model trained across five countries spanning three continents. Upping the ante, 0G Labs, makers of a decentralized AI operating system, adapted DiLoCo to train a 107-billion-parameter foundation model under a network of segregated clusters with limited bandwidth. Meanwhile, popular open-source deep learning framework PyTorch included DiLoCo in its repository of fault tolerance techniques.

“A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”

A more energy-efficient way to train AI

With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.

And while Douillard admits that training methods like DiLoCo “are arguably more complex,” he says “they provide an interesting tradeoff of system efficiency.” For instance, you can now use data centers in far-apart locations without needing to build ultrafast bandwidth in between. Douillard adds that fault tolerance is baked in because “the blast radius of a chip failing is limited to its island of compute.”

Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its Starcluster program. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.

Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in batteries for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.

Backend work is already underway to enable homes to participate as providers in the Akash Network, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.

Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”