IEEE Spectrum

This Former Physicist Helps Keep the Internet Secure

2026-02-16 22:00:02



When Alan DeKok began a side project in network security, he didn’t expect to start a 27-year career. In fact, he didn’t initially set out to work in computing at all.

DeKok studied nuclear physics before making the switch to a part of network computing that is foundational but—like nuclear physics—largely invisible to those not directly involved in the field. Eventually, a project he started as a hobby became a full-time job: maintaining one of the primary systems that helps keep the internet secure.

Alan DeKok


Employer

InkBridge Networks

Occupation

CEO

Education

Bachelor’s degree in physics, Carleton University; master’s degree in physics, Carleton University

Today, he leads the FreeRADIUS Project, which he cofounded in the late 1990s to develop what is now the most widely used Remote Authentication Dial-In User Service (RADIUS) software. FreeRADIUS is an open-source server that provides back-end authentication for most major internet service providers. It’s used by global financial institutions, Wi-Fi services like Eduroam, and Fortune 50 companies. DeKok is also CEO of InkBridge Networks, which maintains the server and provides support for the companies that use it.

Reflecting on nearly three decades of experience leading FreeRADIUS, DeKok says he became an expert in remote authentication “almost by accident,” and the key to his career has largely been luck. “I really believe that it’s preparing yourself for luck, being open to it, and having the skills to capitalize on it.”

From Farming to Physics

DeKok grew up on a farm outside Ottawa, where his family grew strawberries and raspberries. “Sitting on a tractor in the heat is not particularly interesting,” says DeKok, who was more interested in working with 8-bit computers than crops. As a student at Carleton University, in Ottawa, he found his way to physics because he was interested in math but preferred the practicality of science.

While pursuing a master’s degree in physics, also at Carleton, he worked on a water-purification system for the Sudbury Neutrino Observatory, an underground observatory then being built at the bottom of a nickel mine. He would wake up at 4:30 in the morning to drive up to the site, descend 2 kilometers, then enter one of the world’s deepest clean-room facilities to work on the project. The system managed to achieve one atom of impurity per cubic meter of water, “which is pretty insane,” DeKok says.

But after his master’s degree, DeKok decided to take a different route. Although he found nuclear physics interesting, he says he didn’t see it as his life’s work. Meanwhile, the Ph.D. students he knew were “fanatical about physics.” He had kept up his computing skills through his education, which involved plenty of programming, and decided to look for jobs at computing companies. “I was out of physics. That was it.”

Still, physics taught him valuable lessons. For one, “You have to understand the big picture,” DeKok says. “The ability to tell the big-picture story in standards, for example, is extremely important.” This skill helps DeKok explain to standards bodies how a protocol acts as one link in the entire chain of events that needs to occur when a user wants to access the internet.

He also learned that “methods are more important than knowledge.” It’s easy to look up information, but physics taught DeKok how to break down a problem into manageable pieces to come up with a solution. “When I was eventually working in the industry, the techniques that came naturally to me, coming out of physics, didn’t seem to be taught as well to the people I knew in engineering,” he says. “I could catch up very quickly.”

Founding FreeRADIUS

In 1996, DeKok was hired as a software developer at a company called Gandalf, which made equipment for ISDN, a precursor to broadband that enabled digital transmission of data over telephone lines. Gandalf went under about a year later, and he joined CryptoCard, a company providing hardware devices for two-factor authentication.

While at CryptoCard, DeKok began spending more time working with a RADIUS server. When users want to connect to a network, RADIUS acts as a gatekeeper and verifies their identity and password, determines what they can access, and tracks sessions. DeKok moved on to a new company in 1999, but he didn’t want to lose the networking skills he had developed. No other open-source RADIUS servers were being actively developed at the time, and he saw a gap in the market.
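
The gatekeeping exchange described above is specified in RFC 2865. As a rough illustration (this is not FreeRADIUS code, and the username, password, and shared secret are invented), the sketch below builds a minimal RADIUS Access-Request packet in Python, including the MD5-based obfuscation the protocol applies to a PAP password:

```python
import hashlib
import os
import struct

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Obfuscate a PAP password per RFC 2865, section 5.2:
    each 16-byte block is XORed with MD5(secret + previous block)."""
    padded = password + b"\x00" * (-len(password) % 16)  # pad to 16-byte multiple
    result, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(secret + prev).digest()
        xored = bytes(a ^ b for a, b in zip(padded[i:i + 16], digest))
        result += xored
        prev = xored
    return result

def access_request(username: bytes, password: bytes, secret: bytes,
                   identifier: int = 0) -> bytes:
    """Build a minimal RADIUS Access-Request packet (Code 1)."""
    authenticator = os.urandom(16)  # 16-byte Request Authenticator
    attrs = b""
    attrs += bytes([1, 2 + len(username)]) + username  # User-Name attribute (1)
    hidden = hide_password(password, secret, authenticator)
    attrs += bytes([2, 2 + len(hidden)]) + hidden      # User-Password attribute (2)
    length = 20 + len(attrs)                           # 20-byte fixed header
    return struct.pack("!BBH", 1, identifier, length) + authenticator + attrs

pkt = access_request(b"alice", b"s3cret", b"sharedsecret")
```

A server receiving this packet reverses the XOR chain with the shared secret, checks the credentials, and replies with an Access-Accept or Access-Reject.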

The same year, he started FreeRADIUS in his free time and it “gradually took over my life,” DeKok says. He continued to work on the open-source software as a hobby for several years while bouncing around companies in California and France. “Almost by accident, I became one of the more senior people in the space. Then I doubled down on that and started the business.” He founded NetworkRADIUS (now called InkBridge Networks) in 2008.

By that point, FreeRADIUS was already being used by 100 million people daily. The company now employs experts in Canada, France, and the United Kingdom who work together to support FreeRADIUS. “I’d say at least half of the people in the world get on the internet by being authenticated through my software,” DeKok estimates. He attributes that growth largely to the software being open source. Initially a way to enter the market with little funding, going open source has allowed FreeRADIUS to compete with bigger companies as an industry-leading product.

Although the software is critical for maintaining secure networks, most people aren’t aware of it because it works behind the scenes. DeKok is often met with surprise that it’s still in use. He compares RADIUS to a building foundation: “You need it, but you never think about it until there’s a crack in it.”

27 Years of Fixes

Over the years, DeKok has maintained FreeRADIUS by continually making small fixes. Like using a ratcheting tool to make a change inch by inch, “you shouldn’t underestimate that ratchet effect of tiny little fixes that add up over time,” he says.

He’s seen the project through minor patches and more significant fixes, like when researchers exposed a widespread vulnerability DeKok had been trying to fix since 1998. He also watched a would-be successor to the network protocol, Diameter, rise and fall in popularity in the 2000s and 2010s. (Diameter gained traction in mobile applications but has gradually been phased out in the shift to 5G.) Though Diameter offers improvements, RADIUS is far simpler and already widely implemented, giving it an edge, DeKok explains.

And he remains confident about its future. “People ask me, ‘What’s next for RADIUS?’ I don’t see it dying.” Estimating that billions of dollars of equipment run RADIUS, he says, “It’s never going to go away.”

About his own career, DeKok says he plans to keep working on FreeRADIUS, exploring new markets and products. “I never expected to have a company and a lot of people working for me, my name on all kinds of standards, and customers all over the world. But it worked out that way.”

This article appears in the March 2026 print issue as “Alan DeKok.”

NASA Let AI Drive the Perseverance Rover

2026-02-15 22:00:02



In December, NASA took another small, incremental step towards autonomous surface rovers. In a demonstration, the Perseverance team used AI to generate the rover’s waypoints. Perseverance used the AI waypoints on two separate days, traveling a total of 456 meters without human control.

“This demonstration shows how far our capabilities have advanced and broadens how we will explore other worlds,” said NASA Administrator Jared Isaacman. “Autonomous technologies like this can help missions to operate more efficiently, respond to challenging terrain, and increase science return as distance from Earth grows. It’s a strong example of teams applying new technology carefully and responsibly in real operations.”


Mars is a long way away, and there’s about a 25-minute delay for a round-trip signal between Earth and Mars. That means that one way or another, rovers are on their own for short periods of time.
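
That delay is just light-travel time, and it varies with planetary geometry. A quick back-of-the-envelope check in Python, using round-number figures for the closest and farthest Earth-Mars distances:

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_minutes(distance_m: float) -> float:
    """Round-trip signal delay, in minutes, for a given Earth-Mars distance."""
    return 2 * distance_m / C / 60

# Earth-Mars distance swings from roughly 55 million to 400 million km,
# so the round-trip delay ranges from about 6 to about 45 minutes.
near = round_trip_minutes(55e9)
far = round_trip_minutes(400e9)
```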

The delay shapes the route-planning process. Rover drivers here on Earth examine images and elevation data and program a series of waypoints, usually no more than 100 meters apart. The driving plan is sent to NASA’s Deep Space Network (DSN), which transmits it to one of several orbiters, which then relay it to Perseverance. (Perseverance can receive direct communications from the DSN as a backup, but the data rate is slower.)

AI Enhances Mars Rover Navigation

In this demonstration, the AI model analyzed orbital images from the Mars Reconnaissance Orbiter’s HiRISE camera, as well as digital elevation models. The model, which is based on Anthropic’s Claude, identified hazards like sand traps, boulder fields, bedrock, and rocky outcrops. It then generated a path, defined by a series of waypoints, that avoids the hazards. From there, Perseverance’s auto-navigation system took over. It has more autonomy than its predecessors and can process images and driving plans while in motion.
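
NASA hasn’t published the planner’s internals, but the core task — choosing waypoints through a hazard map — can be illustrated with a toy grid search. Everything below (the grid, the A* planner, the hazard layout) is an invented stand-in, not mission code:

```python
import heapq
import itertools

def plan_waypoints(grid, start, goal):
    """A* search over a 2D occupancy grid (1 = hazard, 0 = safe).
    Returns a list of (row, col) waypoints, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    tie = itertools.count()  # tiebreaker so the heap never compares cells
    frontier = [(h(start), 0, next(tie), start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, g, _, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, next(tie), nxt, cell))
    return None

# A "sand trap" blocks the middle; the planner routes around it.
hazard_map = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
path = plan_waypoints(hazard_map, (0, 0), (3, 3))
```

The real system works on continuous terrain with cost models for slopes, sand, and rock fields, but the principle is the same: search a map, penalize hazards, emit waypoints.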

There was another important step before these waypoints were transmitted to Perseverance. NASA’s Jet Propulsion Laboratory has a “twin” for Perseverance called the “Vehicle System Test Bed” (VSTB) in JPL’s Mars Yard. It’s an engineering model that the team can work with here on Earth to solve problems, or for situations like this. These engineering versions are common on Mars missions, and JPL has one for Curiosity, too.

“The fundamental elements of generative AI are showing a lot of promise in streamlining the pillars of autonomous navigation for off-planet driving: perception (seeing the rocks and ripples), localization (knowing where we are), and planning and control (deciding and executing the safest path),” said Vandi Verma, a space roboticist at JPL and a member of the Perseverance engineering team. “We are moving towards a day where generative AI and other smart tools will help our surface rovers handle kilometer-scale drives while minimizing operator workload, and flag interesting surface features for our science team by scouring huge volumes of rover images.”

AI’s Expanding Role in Space Exploration

AI is rapidly becoming ubiquitous in our lives, showing up in places that don’t necessarily have a strong use case for it. But this isn’t NASA hopping on the AI bandwagon. The agency has been developing automatic navigation systems for a while, out of necessity. In fact, Perseverance’s primary means of driving is its onboard autonomous navigation system.

One thing that prevents fully autonomous driving is the way uncertainty grows as the rover operates without human assistance. The longer the rover travels, the more uncertain it becomes about its position on the surface. The solution is to re-localize the rover on its map. Currently, humans do this. But it takes time, including a complete communication cycle between Earth and Mars. Overall, it limits how far Perseverance can go without a helping hand.
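
The mechanism is simple to sketch. If each odometry step contributes independent position noise, variance grows linearly with distance driven blind, until a re-localization fix resets it. The noise figures and cadence below are invented for illustration:

```python
def dead_reckoning_variance(steps: int, step_var: float) -> float:
    """Position variance after `steps` odometry steps with independent
    per-step noise of variance `step_var`: the errors just keep adding up."""
    return steps * step_var

def capped_variance(steps: int, step_var: float, relocalize_every: int) -> float:
    """Worst-case variance when an external fix (today, a human matching
    rover imagery to the orbital map) resets the estimate periodically."""
    return min(steps, relocalize_every) * step_var

# 500 steps driven blind vs. with a fix every 100 steps (made-up noise figure):
blind = dead_reckoning_variance(500, 0.01)    # grows without bound
capped = capped_variance(500, 0.01, 100)      # bounded by the fix cadence
```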

NASA/JPL is also working on a way that Perseverance can use AI to re-localize. The main roadblock is matching orbital images with the rover’s ground-level images. It seems highly likely that AI will be trained to excel at this.

It’s obvious that AI is set to play a much larger role in planetary exploration. The next Mars rover may be much different from current ones, with more advanced autonomous navigation and other AI features. There are already concepts for a swarm of flying drones released by a rover to expand its explorative reach on Mars. These swarms would be controlled by AI to work together autonomously.

And it’s not just Mars exploration that will benefit from AI. NASA’s Dragonfly mission to Saturn’s moon Titan will make extensive use of AI, not only for autonomous navigation as the rotorcraft flies around, but also for autonomous data curation.

“Imagine intelligent systems not only on the ground at Earth, but also in edge applications in our rovers, helicopters, drones, and other surface elements trained with the collective wisdom of our NASA engineers, scientists, and astronauts,” said Matt Wallace, manager of JPL’s Exploration Systems Office. “That is the game-changing technology we need to establish the infrastructure and systems required for a permanent human presence on the Moon and take the U.S. to Mars and beyond.”

Sub-$200 Lidar Could Reshuffle Auto Sensor Economics

2026-02-14 22:00:02



MicroVision, a sensor technology company located in Redmond, Wash., says it has designed a solid-state automotive lidar sensor intended to reach production pricing below US $200. That’s less than half of typical prices now, and it’s not even the full extent of the company’s ambition: Its longer-term goal is $100 per unit. The claim, if realized, would place lidar within reach of advanced driver-assistance systems (ADAS) rather than limiting it to high-end autonomous-vehicle programs. Lidar’s limited market penetration comes down to one issue: cost.

For solid-state devices, “it is feasible to bring the cost down even more when manufacturing at high volume,” says Hayder Radha, a professor of electrical and computer engineering at Michigan State University and director of the school’s Connected & Autonomous Networked Vehicles for Active Safety program. With demand expanding beyond fully autonomous vehicles into driver-assistance applications, “one order or even two orders of magnitude reduction in cost are feasible.”

“We are focused on delivering automotive-grade lidar that can actually be deployed at scale,” says MicroVision CEO Glen DeVos. “That means designing for cost, manufacturability, and integration from the start—not treating price as an afterthought.”

MicroVision’s Lidar System

Tesla CEO Elon Musk famously dismissed lidar in 2019 as “a fool’s errand,” arguing that cameras and radar alone were sufficient for automated driving. A credible path to sub-$200 pricing would fundamentally alter the calculus of autonomous-car design by lowering the cost of adding precise three-dimensional sensing to mainstream vehicles. The shift reflects a broader industry trend toward solid-state lidar designs optimized for low-cost, high-volume manufacturing rather than maximum range or resolution.

Before those economics can be evaluated, however, it’s important to understand what MicroVision is proposing to build.

The company’s Movia S is a solid-state lidar. Mounted at the corners of a vehicle, the sensor sends out 905-nanometer-wavelength laser pulses and measures how long it takes for light reflected from the surfaces of nearby objects to return. The arrangement of the beam emitters and receivers provides a fixed field of view designed for 180-degree horizontal coverage rather than the full 360-degree scanning typical of traditional mechanical units. The company says the unit can detect objects at distances of up to roughly 200 meters under favorable weather conditions—compared with the roughly 300-meter radius scanned by mechanical systems—and supports frame rates suitable for real-time perception in driver-assistance systems. Earlier mechanical lidars used spinning components to steer their beams, but the Movia S is a phased-array system: It controls the amplitude and phase of the signals across an array of antenna elements to steer the beam. The unit is designed to meet automotive requirements for vibration tolerance, temperature range, and environmental sealing.
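
The underlying arithmetic is plain time-of-flight physics. The sketch below is illustrative only (it does not reflect MicroVision’s actual design parameters beyond the quoted 905-nm wavelength and 200-m range): it converts a pulse’s return time to range, and computes the per-element phase shift a phased array needs to steer its beam off boresight.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_seconds: float) -> float:
    """Range from a lidar pulse's round-trip time: the light covers
    the distance out and back, so divide by two."""
    return C * round_trip_seconds / 2

def steering_phase(element_spacing_m: float, angle_deg: float,
                   wavelength_m: float = 905e-9) -> float:
    """Phase increment (radians) between adjacent array elements needed
    to steer a phased array `angle_deg` off boresight."""
    return (2 * math.pi * element_spacing_m
            * math.sin(math.radians(angle_deg)) / wavelength_m)

# A return arriving ~1.33 microseconds after the pulse corresponds to
# the ~200-meter maximum range the company quotes.
r = tof_range(200 * 2 / C)
```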

MicroVision’s pricing targets might sound aggressive, but they are not without precedent. The lidar industry has already experienced one major cost reset over the past decade.

“Automakers are not buying a single sensor in isolation... They are designing a perception system, and cost only matters if the system as a whole is viable.” –Glen DeVos, MicroVision

Around 2016 and 2017, mechanical lidar systems used in early autonomous driving research often sold for close to $100,000. Those units relied on spinning assemblies to sweep laser beams across a full 360 degrees, which made them expensive to build and difficult to ruggedize for consumer vehicles.

“Back then, a 64-beam Velodyne lidar cost around $80,000,” says Radha.

Comparable mechanical lidars from multiple suppliers now sell in the $10,000 to $20,000 range. That roughly tenfold drop helps explain why suppliers now believe another steep price reduction is possible.

Solid-State Lidar Design Challenges

Lower cost, however, does not come for free. The same design choices that enable solid-state lidar to scale also introduce new constraints.

“Unlike mechanical lidars, which provide full 360-degree coverage, solid-state lidars tend to have a much smaller field of view,” Radha says. Many cover 180 degrees or less.

That limitation shifts the burden from the sensor to the system. Automakers will need to deploy three or four solid-state lidars around a vehicle to achieve full coverage. Even so, Radha notes, the total cost can still undercut that of a single mechanical unit.

What changes is integration. Multiple sensors must be aligned, calibrated, and synchronized so their data can be fused accurately. The engineering is manageable, but it adds complexity that price targets alone do not capture.
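
The “aligned, calibrated, and synchronized” step boils down to expressing every sensor’s detections in one shared frame. A minimal 2D version, with invented mounting numbers purely for illustration:

```python
import math

def to_vehicle_frame(point, mount_xy, mount_yaw_deg):
    """Map an (x, y) detection from a corner sensor's own frame into
    the shared vehicle frame, given the sensor's extrinsic calibration
    (mounting position and yaw angle)."""
    px, py = point
    yaw = math.radians(mount_yaw_deg)
    vx = mount_xy[0] + px * math.cos(yaw) - py * math.sin(yaw)
    vy = mount_xy[1] + px * math.sin(yaw) + py * math.cos(yaw)
    return (vx, vy)

# Detections from two corner-mounted sensors, expressed in one frame
# so a downstream fusion stage can combine them.
front_left = to_vehicle_frame((5.0, 0.0), (2.0, 1.0), 45.0)
front_right = to_vehicle_frame((5.0, 0.0), (2.0, -1.0), -45.0)
```

Real systems do this in 3D with full rotation matrices and add per-sensor time stamps, but a miscalibrated mount shows up the same way: the same obstacle lands in two different places in the fused map.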

DeVos says MicroVision’s design choices reflect that reality. “Automakers are not buying a single sensor in isolation,” he says. “They are designing a perception system, and cost only matters if the system as a whole is viable.”

Those system-level tradeoffs help explain where low-cost lidar is most likely to appear first.

Most advanced driver assistance systems today rely on cameras and radar, which are significantly cheaper than lidar. Cameras provide dense visual information, while radar offers reliable range and velocity data, particularly in poor weather. Radha estimates that lidar remains roughly an order of magnitude more expensive than automotive radar.

But at prices in the $100 to $200 range, that gap narrows enough to change design decisions.

“At that point, lidar becomes appealing because of its superior capability in precise 3D detection and tracking,” Radha says.

Rather than replacing existing sensors, lower-cost lidar would likely augment them, adding redundancy and improving performance in complex environments that are challenging for electronic perception systems. That incremental improvement aligns more closely with how ADAS features are deployed today than with the leap to full vehicle autonomy.

MicroVision is not alone in pursuing solid-state lidar. Several suppliers, including the Chinese firms Hesai and RoboSense as well as Luminar and Velodyne, have announced long-term cost targets below $500. What distinguishes current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs.

Some competitors continue to prioritize long-range performance for autonomous vehicles, which pushes cost upward. Others have avoided aggressive pricing claims until they secure firm production commitments from automakers.

That caution reflects a structural challenge: Reaching consumer-level pricing requires large, predictable demand. Without it, few suppliers can justify the manufacturing investments needed to achieve true economies of scale.

Evaluating Lidar Performance Metrics

Even if low-cost lidar becomes manufacturable, another question remains: How should its performance be judged?

From a systems-engineering perspective, Radha says cost milestones often overshadow safety metrics.

“The key objective of ADAS and autonomous systems is improving safety,” he says. Yet there is no universally adopted metric that directly expresses safety gains from a given sensor configuration.

Researchers instead rely on perception benchmarks such as mean Average Precision, or mAP, which measures how accurately a system detects and tracks objects in its environment. Including such metrics alongside cost targets, says Radha, would clarify what performance is preserved or sacrificed as prices fall.
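
mAP is just average precision (AP) averaged over object classes, and AP itself can be sketched in a few lines. The detections below are invented for illustration:

```python
def average_precision(detections, num_ground_truth):
    """detections: (confidence, is_true_positive) pairs for one class.
    AP here is the mean of the precision values observed at each true
    positive, a common approximation of the area under the
    precision-recall curve; mAP averages AP over all object classes."""
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    true_positives = 0
    precisions = []
    for rank, (_, is_tp) in enumerate(ranked, start=1):
        if is_tp:
            true_positives += 1
            precisions.append(true_positives / rank)
    if num_ground_truth == 0:
        return 0.0
    return sum(precisions) / num_ground_truth  # missed objects count as zero

# Four ground-truth objects: three found, one false alarm ranked second.
detections = [(0.9, True), (0.8, False), (0.7, True), (0.6, True)]
ap = average_precision(detections, 4)
```

Dividing by the ground-truth count, rather than the number of hits, is what penalizes a sensor configuration that simply misses objects.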

IEEE Spectrum has covered lidar extensively, often focusing on technical advances in scanning, range, and resolution. What distinguishes the current moment is the renewed focus on economics rather than raw capability.

If solid-state lidar can reliably reach sub-$200 pricing, it will not invalidate Elon Musk’s skepticism—but it will weaken one of its strongest foundations. When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one.

That decision, more than any single price claim, may determine whether lidar finally becomes a routine component of vehicle safety systems.

TryEngineering Marks 20 Years of Getting Kids Interested in STEM

2026-02-14 03:00:04



IEEE TryEngineering is celebrating 20 years of empowering educators with resources that introduce engineering to students at an early age. Launched in 2006 as a collaboration between IEEE, IBM, and the New York Hall of Science (NYSCI), TryEngineering began with a clear goal: Make engineering accessible, understandable, and engaging for students and the teachers who support them.

What started as an idea within IEEE Educational Activities has grown into a global platform supporting preuniversity engineering education around the world.

Concerns about the future

In the early 2000s, engineering was largely absent from preuniversity education, taught only in small, isolated programs. Most students had little exposure to the many types of engineering, and they did not learn what engineers actually do.

At the same time, industry and academic leaders were increasingly concerned about the future of engineering as a whole. They worried about the talent pipeline and saw existing outreach efforts as scattered and inconsistent.

In 2004 representatives from several electrical and computer engineering industries met with IEEE leadership and expressed their concerns about the declining number of students interested in engineering careers. They urged IEEE to organize a more effective, coordinated response to unite professional societies, educators, and industry around a shared approach to preuniversity outreach and education.

One of the major recommendations to come out of that meeting was to start teaching youngsters about engineering earlier. Research from the U.S. National Academy of Engineering at the time showed that students begin forming attitudes toward science, technology, engineering, and math fields from ages 5 to 10, and that outreach should begin as early as kindergarten. Waiting until the teen years or university-level education is simply too late, they determined; it needs to happen during the formative years to spark long-term interest in STEM learning.

The idea behind the website

TryEngineering emerged from the broader Launching Our Children’s Path to Engineering initiative, which was approved in 2005 by the IEEE Board of Directors. A core element of the IEEE program was a public-facing website that would introduce young learners to engineering projects, roles, and careers. The concept eventually developed into TryEngineering.org.

The idea for TryEngineering.org itself grew from an existing, successful model. The NYSCI operated TryScience.org, a popular public website supported by IBM that helped students explore science topics through hands-on activities and real-world connections.

At the time, the IEEE Educational Activities group was working with the NYSCI on TryScience projects. Building a parallel site focused on engineering was a natural next step, and IBM’s experience in supporting large-scale educational outreach made it a strong partner.

A central figure in turning that vision into reality was Moshe Kam, who served as the 2005–2007 IEEE Educational Activities vice president, and later as the 2011 IEEE president. During his tenure, Kam spearheaded the creation of TryEngineering.org and guided the international expansion of IEEE’s Teacher In-Service Program, which trained volunteers to work directly with teachers to create hands-on engineering lessons (the program no longer exists). His leadership helped establish preuniversity education as a core, long-term priority within IEEE.

“The founders of the IEEE TryEngineering program created something very special. In a world where the messaging about becoming an engineer often scares students who have not yet developed math skills away from our profession, and preuniversity teachers without engineering degrees have trepidation in teaching topics in our fields of interest, people like Dr. Kam and the other founders had a vision where everyone could literally try engineering,” says Jamie Moesch, IEEE Educational Activities managing director.

“Because of this, teachers have now taught millions of our hands-on lessons and opened our profession to so many more young minds,” he adds. “All of the preuniversity programs we have continued to build and improve upon are fueled by this massively important and simple-to-understand concept of try engineering.”

A focus on educators

From the beginning, TryEngineering focused on educators as the key to its success, rather than starting with students. Instead of complex technical explanations, the platform offered free, classroom-ready lesson plans with clear explanations of engineering fields and examples to which students could relate. Hands-on activities emphasized problem-solving, creativity, and teamwork—core elements of how engineers actually work.

IEEE leaders also recognized that misconceptions about engineering discouraged many talented young people—particularly girls and students from underrepresented groups—from pursuing engineering as a career. TryEngineering aimed to show engineering as practical, creative, and connected to real-world needs, helping students see that engineering could be for anyone, not just a narrow group of specialists.

By simply encouraging students and educators to try engineering, the program opens doors to new possibilities and a broader understanding of the field. Even students who ultimately choose other career paths learn key concepts, such as the engineering design process, equipping them with practical skills for the rest of their lives.

Outreach programs and summer camps

During the past two decades, TryEngineering has grown well beyond its original website. In addition to providing a vast library of lesson plans and resources that engage and inspire, it also serves as the hub for a collection of programs reaching educators and students in many ways.

Those include the TryEngineering STEM Champions program, which empowers dedicated volunteers to support outreach programs and serve as vital connectors to IEEE’s extensive resources. The TryEngineering Summer Institute offers immersive campus-based experiences for students ages 13 to 17, with expanded locations and programs being introduced this year.

The IEEE STEM Summit is an annual virtual event that brings together educators and volunteers from around the world. TryEngineering OnCampus partners with universities around the globe to organize hands-on programs. TryEngineering educator sessions provide free professional development programs aligned with emerging industry needs such as semiconductors.

20 ways to celebrate 20 years

To mark its 20th anniversary, TryEngineering is celebrating with a year of special activities, new partnerships, and fresh resources for educators. Visit the TryEngineering 20th Anniversary collection page to explore what’s ahead, join the celebration, and discover 20 ways to celebrate 20 years of inspiring the next generation of technology innovators. This is an opportunity to reflect on how far the program has come, and to help shape how the next generation discovers engineering.

“The passion and dedication of the thousands of volunteers of IEEE who do local outreach enables the IEEE-wide goal to inspire intellectual curiosity and invention to engage the next generation of technology innovators,” Moesch says. “The first 20 years have been special, and I cannot wait to have the world experience what the future holds for the TryEngineering programs.”

Video Friday: Robot Collective Stays Alive Even When Parts Die

2026-02-14 00:30:03



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, Vienna, Austria

Enjoy today’s videos!

No system is immune to failure. The compromise between reducing failures and improving adaptability is a recurring problem in robotics. Modular robots exemplify this tradeoff, because the number of modules dictates both the possible functions and the odds of failure. We reverse this trend, improving reliability with an increased number of modules by exploiting redundant resources and sharing them locally.

[ Science ] via [ RRL ]

Now that the Atlas enterprise platform is getting to work, the research version gets one last run in the sun. Our engineers made one final push to test the limits of full-body control and mobility, with help from the RAI Institute.

[ RAI ] via [ Boston Dynamics ]

Announcing Isaac 0: the laundry folding robot we’re shipping to homes, starting in February 2026 in the Bay Area.

[ Weave Robotics ]

In a paper published in Science, researchers at the Max Planck Institute for Intelligent Systems, the Humboldt University of Berlin, and the University of Stuttgart have discovered that the secret to the elephant’s amazing sense of touch is in its unusual whiskers. The interdisciplinary team analyzed elephant trunk whiskers using advanced microscopy methods that revealed a form of material intelligence more sophisticated than the well-studied whiskers of rats and mice. This research has the potential to inspire new physically intelligent robotic sensing approaches that resemble the unusual whiskers that cover the elephant trunk.

[ MPI ]

Got an interest in autonomous mobile robots, ROS2, and a mere $150 lying around? Try this.

[ Maker's Pet ]

Thanks, Ilia!

We’re giving humanoid robots swords now.

[ Robotera ]

A system developed by researchers at the University of Waterloo lets people collaborate with groups of robots to create works of art inspired by music.

[ Waterloo ]

FastUMI Pro is a multimodal, model-agnostic data acquisition system designed to power a truly end-to-end closed loop for embodied intelligence — transforming real-world data into genuine robotic capability.

[ Lumos Robotics ]

We usually take fingernails for granted, but they’re vital for fine-motor control and feeling textures. Our students have been doing some great work looking into the mechanics behind this.

[ Paper ]

This is a 550-lb all-electric coaxial unmanned rotorcraft developed by Texas A&M University’s Advanced Vertical Flight Laboratory and Harmony Aeronautics as a technology demonstrator for our quiet-rotor technology. The payload capacity is 200 lb (gross weight = 750 lb). The noise level measured was around 74 dBA in hover at 50 ft, making this probably the quietest rotorcraft at this scale.

[ Harmony Aeronautics ]

Harvard scientists have created an advanced 3D printing method for developing soft robotics. This technique, called rotational multimaterial 3D printing, enables the fabrication of complex shapes and tubular structures with dissolvable internal channels. This innovation could someday accelerate the production of components for surgical robotics and assistive devices, advancing medical technology.

[ Harvard ]

Lynx M20 wheeled-legged robot steps onto the ice and snow, taking on challenges inspired by four winter sports scenarios. Who says robots can’t enjoy winter sports?

[ Deep Robotics ]

NGL right now I find this more satisfying to watch than a humanoid doing just about anything.

[ Fanuc ]

At Mentee Robotics, we design and build humanoid robots from the ground up with one goal: reliable, scalable deployment in real-world industrial environments. Our robots are powered by deep vertical integration across hardware, embedded software, and AI, all developed in-house to close the Sim2Real gap and enable continuous, around-the-clock operation.

[ Mentee Robotics ]

You don’t need to watch this whole video, but the idea of little submarines that hitch rides on bigger boats and recharge themselves is kind of cool.

[ Lockheed Martin ]

Learn about the work of Dr. Roland Siegwart, Dr. Anibal Ollero, Dr. Dario Floreano, and Dr. Margarita Chli on flying robots, and some of the challenges they are still trying to tackle, in this video created from their presentations at ICRA@40, the 40th anniversary celebration of the IEEE International Conference on Robotics and Automation.

[ ICRA@40 ]

LEDs Enter the Nanoscale

2026-02-12 23:00:03



MicroLEDs, with pixels just micrometers across, have long been a byword in the display world. Now, microLED-makers have begun shrinking their creations into the uncharted nano realm. In January, a startup named Polar Light Technologies unveiled prototype blue LEDs less than 500 nanometers across. This raises a tempting question: How far can LEDs shrink?

We know the answer is, at least, considerably smaller. In the past year, two different research groups have demonstrated LED pixels at sizes of 100 nm or less.

These are some of the smallest LEDs ever created. They leave much to be desired in their efficiency—but one day, nanoLEDs could power ultra-high-resolution virtual reality displays and high-bandwidth on-chip photonics. And the key to making even tinier LEDs, if these early attempts are any precedent, may be to make more unusual LEDs.

New Approaches to LEDs

Take Polar Light’s example. Like many LEDs, the Sweden-based startup’s diodes are fashioned from III-V semiconductors like gallium nitride (GaN) and indium gallium nitride (InGaN). Unlike many LEDs, which are etched into their semiconductor from the top down, Polar Light’s are instead fabricated by building peculiarly shaped hexagonal pyramids from the bottom up.

Polar Light designed its pyramids for the larger microLED market, and plans to start commercial production in late 2026. But the company also wanted to test how small its pyramids could shrink. So far, it has made pyramids 300 nm across. “We haven’t reached the limit, yet,” says Oskar Fajerson, Polar Light’s CEO. “Do we know the limit? No, we don’t, but we can [make] them smaller.”

Elsewhere, researchers have already done that. Some of the world’s tiniest LEDs come from groups that have forgone the standard III-V semiconductors in favor of other types of LEDs—like OLEDs.

“We are thinking of a different pathway for organic semiconductors,” says Chih-Jen Shih, a chemical engineer at ETH Zurich in Switzerland. Shih and his colleagues were interested in finding a way to fabricate small OLEDs at scale. Using an electron-beam lithography-based technique, they crafted arrays of green OLEDs with pixels as small as 100 nm across.

Where today’s best displays have 14,000 pixels per inch, these nanoLEDs—presented in an October 2025 Nature Photonics paper—can reach 100,000 pixels per inch.
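As a sanity check on those density figures: pixels per inch (PPI) and center-to-center pixel pitch are related by pitch = 25.4 mm / PPI. A minimal sketch of that conversion, using only the numbers quoted in the text (the function names are illustrative, not from any source):

```python
# Convert between pixels-per-inch (PPI) and center-to-center pixel pitch.
# 1 inch = 25.4 mm = 25.4e6 nm.

INCH_NM = 25.4e6  # nanometers per inch

def ppi_to_pitch_nm(ppi: float) -> float:
    """Center-to-center pixel pitch (nm) for a given pixel density."""
    return INCH_NM / ppi

def pitch_nm_to_ppi(pitch_nm: float) -> float:
    """Pixel density (PPI) for a given center-to-center pitch (nm)."""
    return INCH_NM / pitch_nm

# Figures quoted in the article:
print(round(ppi_to_pitch_nm(14_000)))   # ~1814 nm pitch for today's best displays
print(round(ppi_to_pitch_nm(100_000)))  # 254 nm pitch for the nano-OLED arrays
```

At 100,000 pixels per inch, the pitch works out to roughly 254 nm—consistent with packing 100-nm pixels with comparable spacing between them.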

Another group tried its hand with perovskites, cage-shaped materials best known for their prowess in high-efficiency solar panels. Perovskites have recently gained traction in LEDs too. “We wanted to see what would happen if we make perovskite LEDs smaller, all the way down to the micrometer and nanometer length-scale,” says Dawei Di, an engineer at Zhejiang University in Hangzhou, China.

Di’s group started with comparatively colossal perovskite LED pixels, measuring hundreds of micrometers. Then, they fabricated sequences of smaller and smaller pixels, each tinier than the last. Even after the 1 μm mark, they did not stop: 890 nm, then 440 nm, only bottoming out at 90 nm. These 90 nm red and green pixels, presented in a March 2025 Nature paper, likely represent the smallest LEDs reported to date.

Efficiency Challenges

Unfortunately, small size comes at a cost: Shrinking LEDs also shrinks their efficiency. Di’s group’s perovskite nanoLEDs have external quantum efficiencies—a measure of how many injected electrons are converted into photons—around 5 to 10 percent; Shih’s group’s nano-OLED arrays performed slightly better, topping 13 percent. For comparison, a typical millimeter-sized III-V LED can reach 50 to 70 percent, depending on its color.
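External quantum efficiency is simply the ratio of photons emitted to electrons injected. A minimal sketch of that definition (the device counts below are illustrative only, not taken from either paper):

```python
def external_quantum_efficiency(photons_out: float, electrons_in: float) -> float:
    """EQE = photons emitted into free space / electrons injected."""
    if electrons_in <= 0:
        raise ValueError("electrons_in must be positive")
    return photons_out / electrons_in

# Illustrative: a device emitting 8 photons for every 100 injected electrons
eqe = external_quantum_efficiency(8, 100)
print(f"{eqe:.0%}")  # 8%, within the 5-10 percent range cited for perovskite nanoLEDs
```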

Shih, however, is optimistic that modifying how nano-OLEDs are made can boost their efficiency. “In principle, you can achieve 30 percent, 40 percent external quantum efficiency with OLEDs, even with a smaller pixel, but it takes time to optimize the process,” Shih says.

Di thinks that researchers could take perovskite nanoLEDs to less dire efficiencies by tinkering with the material. Although his group is now focusing on the larger perovskite microLEDs, Di expects researchers will eventually reckon with nanoLEDs’ efficiency gap. If applications of smaller LEDs become appealing, “this issue could become increasingly important,” Di says.

What Can NanoLEDs Be Used For?

What can you actually do with LEDs this small? Today, the push for tinier pixels largely comes from devices like smart glasses and virtual reality headsets. Makers of these displays are hungry for smaller and smaller pixels in a chase for bleeding-edge picture quality with low power consumption (one reason that efficiency is important). Polar Light’s Fajerson says that smart-glass manufacturers today are already seeking 3 μm pixels.

But researchers are skeptical that VR displays will ever need pixels smaller than around 1 μm. Shrink pixels too far beyond that, and they’ll cross their light’s diffraction limit—that means they’ll become too small for the human eye to resolve. Shih’s and Di’s groups have already crossed the limit with their 100-nm and 90-nm pixels.
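The scale of that limit can be estimated with the Abbe criterion, d = λ / (2·NA). Assuming visible wavelengths and an idealized numerical aperture of 1 (both assumptions for illustration; the article does not give these values), even perfect optics cannot resolve features much below a couple hundred nanometers:

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float = 1.0) -> float:
    """Smallest resolvable feature spacing, d = wavelength / (2 * NA), in nm."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Representative visible wavelengths (approximate):
for color, wavelength in [("blue", 450), ("green", 530), ("red", 630)]:
    print(color, abbe_limit_nm(wavelength), "nm")  # 225.0, 265.0, 315.0 nm
```

The 90-nm and 100-nm pixels reported by Di's and Shih's groups fall well below all of these values, which is why individual pixels at that scale cannot be resolved optically.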

Very tiny LEDs may instead find use in on-chip photonics systems, allowing the likes of AI data centers to communicate with greater bandwidths than they can today. Chip manufacturing giant TSMC is already trying out microLED interconnects, and it’s easy to imagine chipmakers turning to even smaller LEDs in the future.

But the tiniest nanoLEDs may have even more exotic applications, because they’re smaller than the wavelengths of their light. “From a process point of view, you are making a new component that was not possible in the past,” Shih says.

For example, Shih’s group showed their nano-OLEDs could form a metasurface—a structure that uses its pixels’ nano-sizes to control how each pixel interacts with its neighbors. One day, similar devices could focus nanoLED light into laser-like beams or create holographic 3D nanoLED displays.