2026-02-18 03:58:48

At CES 2026 in Las Vegas, Singapore-based startup Strutt introduced the EV1, a powered personal mobility device that uses lidar, cameras, and onboard computing for collision avoidance. Unlike manually steered powered wheelchairs, the EV1 assists with navigation in both indoor and outdoor environments—stopping or rerouting itself before a collision can occur.
Strutt describes its approach as “shared control,” in which the user sets direction and speed, while the device intervenes to avoid unsafe motion.
“The problem isn’t always disability,” says Strutt cofounder and CEO Tony Hong. “Sometimes people are just tired. They have limited energy, and mobility shouldn’t consume it.”
Building a mobility platform was not Hong’s original ambition. Trained in optics and sensor systems, he previously worked in aerospace and robotics. From 2016 to 2019, he led the development of lidar systems for drones at Shenzhen, China-based DJI, a leading manufacturer of consumer and professional drones. Hong then left DJI for a position as an assistant professor at Southern University of Science and Technology in Shenzhen—a school known for its research in robotics, human augmentation, sensors, and rehabilitation engineering.
However, he says, demographic trends around him proved hard to ignore. Populations in Asia, Europe, and North America are aging rapidly. More people are living longer, with limited stamina, slower reaction times, or balance challenges. So, Hong says he left academia to develop technology that would help people facing mobility limitations.
EV1 combines two lidar units, two cameras, 10 time-of-flight depth sensors, and six ultrasonic sensors. Sensor data feeds into onboard computing that performs object detection and path planning.
“We need accuracy at a few centimeters,” Hong says. “Otherwise, you’re hitting door frames.”
Using the touchscreen interface, users can select a destination within the mapped environment. The onboard system calculates a safe route and guides the vehicle at a reduced speed of about 3 miles per hour. The rider can override the route instantly with joystick input. The system even supports voice commands, allowing the user to direct the EV1 to waypoints saved in its memory.
The user can say, for example, “Go to the fridge,” and it will chart a course to the refrigerator and go there, avoiding obstacles along the way.
The Strutt EV1 puts both joystick controls and a lidar-view of the environment in front of the device’s user. Strutt
Driving EV1 in manual mode, the rider retains full control, with vibration feedback warning of nearby obstacles. In “copilot” mode, the vehicle prevents direct collisions by stopping before impact. In “copilot plus,” it can steer around obstacles while continuing in the intended direction of travel.
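Strutt has not published its control logic, but the division of labor among those three modes can be summarized in a minimal sketch. Everything below is a hypothetical illustration: the DriveCommand type, the arbitrate function, and the 0.5-meter stop threshold are assumptions, not Strutt's implementation.

```python
# Illustrative sketch only: Strutt has not published the EV1's control logic.
# The mode names mirror the article; the types, thresholds, and function names
# (DriveCommand, arbitrate) are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriveCommand:
    speed: float    # meters per second requested via the joystick
    heading: float  # radians, relative to the current pose

STOP_DISTANCE = 0.5  # meters; assumed safety margin

def arbitrate(mode: str, user_cmd: DriveCommand,
              obstacle_distance: float,
              detour_heading: Optional[float]) -> DriveCommand:
    """Blend rider intent with obstacle data, following the article's three modes."""
    if mode == "manual":
        # Rider keeps full control; vibration feedback (not modeled here) warns of obstacles.
        return user_cmd
    if mode == "copilot":
        # Stop before impact; otherwise pass the rider's command through unchanged.
        if obstacle_distance < STOP_DISTANCE:
            return DriveCommand(speed=0.0, heading=user_cmd.heading)
        return user_cmd
    if mode == "copilot_plus":
        # Steer around the obstacle while keeping the intended direction of travel.
        if obstacle_distance < STOP_DISTANCE and detour_heading is not None:
            return DriveCommand(speed=min(user_cmd.speed, 0.8), heading=detour_heading)
        return user_cmd
    raise ValueError(f"unknown mode: {mode}")

# Example: in copilot mode, a close obstacle forces a stop.
print(arbitrate("copilot", DriveCommand(1.3, 0.0), obstacle_distance=0.3, detour_heading=None))
```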
“We don’t call it autonomous driving,” Hong says. “The user is always responsible and can take control instantly.”
Hong says Strutt has also kept its users’ digital privacy in mind. All perception, planning, and control computations, he says, occur onboard the device. Sensor data is not transmitted unless the user chooses to upload logs for diagnostics. Camera and microphone activity is visibly indicated, and wireless communications are encrypted. Navigation and obstacle avoidance function without cloud connectivity.
“We don’t think of this as a wheelchair,” Hong says. “We think of it as an everyday vehicle.”
Strutt promotes EV1’s use for both outdoor and indoor environments—offering high-precision sensing capabilities to navigate confined spaces. Strutt
To ensure that the EV1 could withstand years of shuttling a user back and forth inside their home and around their neighborhood, the Strutt team subjected the mobility vehicle to two million roller cycles—mechanical simulation testing that allows engineers to estimate how well the motors, bearings, suspension, and frame will hold up over time.
The EV1’s 600-watt-hour lithium iron phosphate battery provides 32 kilometers of range—enough for a full day of errands, indoor navigation, and neighborhood travel. A smaller 300-watt-hour version, designed to comply with airline lithium-battery limits, delivers 16 km. Charging from zero to 80 percent takes two hours.
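As a rough consistency check of those figures (assuming the full rated capacity is usable, which real packs rarely allow), the claimed range works out to just under 19 watt-hours per kilometer:

```python
# Quick consistency check of the published battery and range figures.
# Assumes all rated capacity is usable, which real packs rarely allow.
capacity_wh = 600   # rated capacity of the standard pack, watt-hours
range_km = 32       # claimed range, kilometers

wh_per_km = capacity_wh / range_km
print(f"Implied consumption: {wh_per_km:.1f} Wh/km")            # ~18.8 Wh/km
print(f"Travel pack (300 Wh) range: {300 / wh_per_km:.0f} km")  # ~16 km
```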
The EV1 retails for US $7,500—a price that puts it out of reach for many potential users. For now, advanced sensors and embedded computing keep manufacturing costs high, while insurance reimbursement frameworks for AI-assisted mobility devices depend on where a person lives.
“A retail price of $7,500 raises serious equity concerns,” says Erick Rocha, communications and development coordinator at the Los Angeles-based advocacy organization Disability Voices United. “Many mobility device users in the United States rely on Medicaid,” the government insurance program for people with limited incomes. “Access must not be restricted to those who can afford to pay out of pocket.”
Medicaid coverage for high-tech mobility devices varies widely by state, and some states have rules that create significant barriers to approval (especially for non-standard or more specialized equipment).
Even in states that do cover mobility devices, similar types of hurdles often show up. Almost all states require prior approval for powered mobility devices, and the process can be time-consuming and documentation-heavy. Many states rigidly define what “medically necessary” means. They may require a detailed prescription describing the features of the mobility device and why the patient’s needs cannot be met with a simpler mobility aid such as a walker, cane, or standard manual wheelchair. Some states’ processes include a comprehensive in-person exam, documenting how the impairment described by the clinician limits activities of daily living such as toileting, dressing, bathing, or eating. Even if a person overcomes those hurdles, a state Medicaid program could deny coverage if a device doesn’t fit neatly into existing Healthcare Common Procedure Coding System billing codes.
“Sensor-assisted systems can improve safety,” Rocha says. “But the question is whether a device truly meets the lived, day-to-day realities of people with limited mobility.”
Hong says that Strutt, founded in 2023, is betting that falling sensor prices and advances in embedded processing now make commercial deployment of the EV1 feasible.
2026-02-18 03:27:00

Join Hannah Alpert (NASA Ames) to explore thermal data from the record-breaking 6-meter LOFTID inflatable aeroshell. Learn how COMSOL Multiphysics® was used to perform inverse analysis on flight thermocouple data, validating heat flux gauges and preflight CFD predictions. Attendees will gain technical insights into improving thermal models for future HIAD missions, making this essential for engineers seeking to advance atmospheric reentry design. The session concludes with a live Q&A.
2026-02-17 23:00:03

In 2024, Google claimed that its data centers are 1.5x more energy efficient than the industry average. In 2025, Microsoft committed billions to nuclear power for AI workloads. The data center industry tracks power usage effectiveness to three decimal places and optimizes water usage intensity with machine precision. We report direct emissions and emissions from purchased energy with religious fervor.
These are laudable advances, but these metrics account for only 30 percent of total emissions from the IT sector. The majority of the emissions are not directly from data centers or the energy they use, but from the end-user devices that actually access the data centers, emissions due to manufacturing the hardware, and software inefficiencies. We are frantically optimizing less than a third of the IT sector’s environmental impact, while the bulk of the problem goes unmeasured.
Incomplete regulatory frameworks are part of the problem. In Europe, the Corporate Sustainability Reporting Directive (CSRD) now requires 11,700 companies to report emissions using these incomplete frameworks. The next phase of the directive, covering 40,000+ additional companies, was originally scheduled for 2026 (but is likely delayed to 2028). Meanwhile, the international standards body responsible for IT sustainability metrics (ISO/IEC JTC 1/SC 39) is actively revising its standards through 2026, with a key plenary meeting in May 2026.
The time to act is now. If we don’t fix the measurement frameworks, we risk locking in incomplete data collection and optimizing a fraction of what matters for the next 5 to 10 years, before the next major standards revision.
Walk into any modern data center and you’ll see sustainability instrumentation everywhere. Power usage effectiveness (PUE) monitors track every watt. Water usage effectiveness (WUE) systems measure water consumption down to the gallon. Sophisticated monitoring captures everything from server utilization to cooling efficiency to renewable energy percentages.
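Both metrics are simple ratios. The sketch below uses the standard definitions, with made-up sample numbers:

```python
# Standard definitions: PUE is total facility energy divided by IT equipment energy;
# WUE is annual site water use divided by IT equipment energy.
# The sample numbers below are made up for illustration.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

def wue(annual_water_liters: float, it_equipment_kwh: float) -> float:
    return annual_water_liters / it_equipment_kwh  # liters per kilowatt-hour

print(pue(total_facility_kwh=1_200_000, it_equipment_kwh=1_000_000))   # 1.2
print(wue(annual_water_liters=1_800_000, it_equipment_kwh=1_000_000))  # 1.8 L/kWh
```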
But here’s what those measurements miss: End-user devices globally emit 1.5 to 2 times more carbon than all data centers combined, according to McKinsey’s 2022 report. The smartphones, laptops, and tablets we use to access those ultra-efficient data centers are the bigger problem.
On the conservative end of the range from McKinsey’s report, devices emit 1.5 times as much as data centers. That means that data centers make up 40 percent of total IT emissions, while devices make up 60 percent.
On top of that, approximately 75 percent of device emissions occur not during use, but during manufacturing—this is so-called embodied carbon. For data centers, only 40 percent is embodied carbon, and 60 percent comes from operations (as measured by PUE).
Putting this together, data center operations, as measured by PUE, account for only 24 percent of the total emissions. Data center embodied carbon is 16 percent, device embodied carbon is 45 percent, and device operation is 15 percent.
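For readers who want to retrace the arithmetic, the split follows directly from the two inputs above:

```python
# Reproduces the article's split from its two inputs: devices emit 1.5x what data
# centers do; 75% of device emissions and 40% of data center emissions are embodied.
device_to_dc_ratio = 1.5
dc_share = 1 / (1 + device_to_dc_ratio)       # 40% of total IT emissions
device_share = 1 - dc_share                   # 60%

breakdown = {
    "Data center operations (what PUE measures)": 0.60 * dc_share,   # 24%
    "Data center embodied carbon": 0.40 * dc_share,                  # 16%
    "Device embodied carbon": 0.75 * device_share,                   # 45%
    "Device operations": 0.25 * device_share,                        # 15%
}
for name, share in breakdown.items():
    print(f"{name}: {share:.0%}")
```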
Under the EU’s current CSRD framework, companies must report their emissions in three categories: direct emissions from owned sources, indirect emissions from purchased energy, and a third category for everything else.
This “everything else” category does include device emissions and embodied carbon. However, those emissions are reported as aggregate totals broken down by accounting category—Capital Goods, Purchased Goods and Services, Use of Sold Products—but not by product type. How much comes from end-user devices versus data center infrastructure, or employee laptops versus network equipment, remains murky, and therefore, unoptimized.
Manufacturing a single smartphone generates approximately 50 kg CO2 equivalent (CO2e). For a laptop, it’s 200 kg CO2e. With 1 billion smartphones replaced annually, that’s 50 million tonnes of CO2e per year just from smartphone manufacturing, before anyone even turns them on. On average, smartphones are replaced every 2 years, laptops every 3 to 4 years, and printers every 5 years. Data center servers are replaced approximately every 5 years.
Extending smartphone lifecycles to 3 years instead of 2 would reduce annual manufacturing emissions by 33 percent. At scale, this dwarfs data center optimization gains.
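Both claims follow from simple arithmetic on the article’s numbers:

```python
# The arithmetic behind the smartphone figures above.
phones_replaced_per_year = 1_000_000_000   # ~1 billion units annually
kg_co2e_per_phone = 50                     # embodied carbon per phone

annual_mt = phones_replaced_per_year * kg_co2e_per_phone / 1e9  # kg -> million tonnes
print(f"Annual manufacturing emissions: {annual_mt:.0f} Mt CO2e")  # 50 Mt

# Moving from a 2-year to a 3-year replacement cycle means only 2/3 as many
# phones are manufactured each year, cutting those emissions by one third.
print(f"Reduction from a 3-year cycle: {1 - 2/3:.0%}")  # ~33%
```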
There are programs geared towards reusing old components that are still functional and integrating them into new servers. GreenSKUs and similar initiatives show 8 percent reductions in embodied carbon are achievable. But these remain pilot programs, not systematic approaches. And critically, they’re measured only in data center context, not across the entire IT stack.
Imagine applying the same circular economy principles to devices. With over 2 billion laptops in existence globally and 2- to 3-year replacement cycles, even modest lifespan extensions create massive emission reductions.
Yet data center reuse gets measured, reported, and optimized. Device reuse doesn’t, because the frameworks don’t require it.
In my work leading load-balancer infrastructure across IBM Cloud, I see how software architecture decisions ripple through energy consumption. Inefficient code doesn’t just slow things down—it drives up both data center power consumption and device battery drain.
For example, University of Waterloo researchers showed that they could cut data center energy use by 30 percent by changing just 30 lines of code. From my perspective, this result is not an anomaly—it’s typical. Bad software architecture forces unnecessary data transfers, redundant computations, and excessive resource use. But unlike data center efficiency, there’s no commonly accepted metric for software efficiency.
This matters more now than ever. With AI workloads driving massive data center expansion—projected to consume 6.7-12 percent of total U.S. electricity by 2028, according to Lawrence Berkeley National Laboratory—software efficiency becomes critical.
The solution isn’t to stop measuring data center efficiency. It’s to measure device sustainability with the same rigor. Specifically, standards bodies (particularly ISO/IEC JTC 1/SC 39 WG4: Holistic Sustainability Metrics) should extend frameworks to include device lifecycle tracking, software efficiency metrics, and hardware reuse standards.
To track device lifecycles, we need standardized reporting of device embodied carbon, broken out separately by device. One aggregate number in an “everything else” category is insufficient. We need specific device categories with manufacturing emissions and replacement cycles visible.
To include software efficiency, I advocate developing a PUE-equivalent for software, such as energy per transaction, per API call, or per user session. This needs to be a reportable metric under sustainability frameworks so companies can demonstrate software optimization gains.
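No such metric is standardized yet, so the following is only a sketch of what one might look like, under the assumption that per-interval power draw and request counts can be measured; the Sample type and the numbers are hypothetical:

```python
# A minimal sketch of the proposed "PUE-equivalent for software": energy per
# transaction, computed from measured power draw and request counts. The Sample
# type and numbers are hypothetical; real deployments would read RAPL counters,
# smart-PDU telemetry, or a cloud provider's power APIs.
from dataclasses import dataclass

@dataclass
class Sample:
    watts: float     # average power draw over the interval
    seconds: float   # interval length
    requests: int    # transactions served in the interval

def joules_per_request(samples: list[Sample]) -> float:
    energy_joules = sum(s.watts * s.seconds for s in samples)
    return energy_joules / sum(s.requests for s in samples)

window = [Sample(watts=180, seconds=60, requests=12_000),
          Sample(watts=210, seconds=60, requests=15_500)]
print(f"{joules_per_request(window):.2f} J per request")  # ~0.85 J
```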
To encourage hardware reuse, we need to systematize reuse metrics across the full IT stack—servers and devices. This includes tracking repair rates, developing large-scale refurbishment programs, and tracking component reuse with the same detail currently applied to data center hardware.
To put it all together, we need a unified IT emission-tracking dashboard. CSRD reporting should show device embodied carbon alongside data center operational emissions, making the full IT sustainability picture visible at a glance.
These aren’t radical changes—they’re extensions of measurement principles already proven in data center context. The first step is acknowledging what we’re not measuring. The second is building the frameworks to measure it. And the third is demanding that companies report the complete picture—data centers and devices, servers and smartphones, infrastructure and software.
Because you can’t fix what you can’t see. And right now, we’re not seeing 70 percent of the problem.
2026-02-16 22:00:02

When Alan DeKok began a side project in network security, he didn’t expect to start a 27-year career. In fact, he didn’t initially set out to work in computing at all.
DeKok studied nuclear physics before making the switch to a part of network computing that is foundational but—like nuclear physics—largely invisible to those not directly involved in the field. Eventually, a project he started as a hobby became a full-time job: maintaining one of the primary systems that helps keep the internet secure.
Employer
InkBridge Networks
Occupation
CEO
Education
Bachelor’s degree in physics, Carleton University; master’s degree in physics, Carleton University
Today, he leads the FreeRADIUS Project, which he cofounded in the late 1990s to develop what is now the most widely used Remote Authentication Dial-In User Service (RADIUS) software. FreeRADIUS is an open-source server that provides back-end authentication for most major internet service providers. It’s used by global financial institutions, Wi-Fi services like Eduroam, and Fortune 50 companies. DeKok is also CEO of InkBridge Networks, which maintains the server and provides support for the companies that use it.
Reflecting on nearly three decades of experience leading FreeRADIUS, DeKok says he became an expert in remote authentication “almost by accident,” and the key to his career has largely been luck. “I really believe that it’s preparing yourself for luck, being open to it, and having the skills to capitalize on it.”
DeKok grew up on a farm outside of Ottawa growing strawberries and raspberries. “Sitting on a tractor in the heat is not particularly interesting,” says DeKok, who was more interested in working with 8-bit computers than crops. As a student at Carleton University, in Ottawa, he found his way to physics because he was interested in math but preferred the practicality of science.
While pursuing a master’s degree in physics, also at Carleton, he worked on a water-purification system for the Sudbury Neutrino Observatory, an underground observatory then being built at the bottom of a nickel mine. He would wake up at 4:30 in the morning to drive up to the site, descend 2 kilometers, then enter one of the world’s deepest clean-room facilities to work on the project. The system managed to achieve one atom of impurity per cubic meter of water, “which is pretty insane,” DeKok says.
But after his master’s degree, DeKok decided to take a different route. Although he found nuclear physics interesting, he says he didn’t see it as his life’s work. Meanwhile, the Ph.D. students he knew were “fanatical about physics.” He had kept up his computing skills through his education, which involved plenty of programming, and decided to look for jobs at computing companies. “I was out of physics. That was it.”
Still, physics taught him valuable lessons. For one, “You have to understand the big picture,” DeKok says. “The ability to tell the big-picture story in standards, for example, is extremely important.” This skill helps DeKok explain to standards bodies how a protocol acts as one link in the entire chain of events that needs to occur when a user wants to access the internet.
He also learned that “methods are more important than knowledge.” It’s easy to look up information, but physics taught DeKok how to break down a problem into manageable pieces to come up with a solution. “When I was eventually working in the industry, the techniques that came naturally to me, coming out of physics, didn’t seem to be taught as well to the people I knew in engineering,” he says. “I could catch up very quickly.”
In 1996, DeKok was hired as a software developer at a company called Gandalf, which made equipment for ISDN, a precursor to broadband that enabled digital transmission of data over telephone lines. Gandalf went under about a year later, and he joined CryptoCard, a company providing hardware devices for two-factor authentication.
While at CryptoCard, DeKok began spending more time working with a RADIUS server. When users want to connect to a network, RADIUS acts as a gatekeeper and verifies their identity and password, determines what they can access, and tracks sessions. DeKok moved on to a new company in 1999, but he didn’t want to lose the networking skills he had developed. No other open-source RADIUS servers were being actively developed at the time, and he saw a gap in the market.
The same year, he started FreeRADIUS in his free time and it “gradually took over my life,” DeKok says. He continued to work on the open-source software as a hobby for several years while bouncing around companies in California and France. “Almost by accident, I became one of the more senior people in the space. Then I doubled down on that and started the business.” He founded NetworkRADIUS (now called InkBridge Networks) in 2008.
By that point, FreeRADIUS was already being used by 100 million people daily. The company now employs experts in Canada, France, and the United Kingdom who work together to support FreeRADIUS. “I’d say at least half of the people in the world get on the internet by being authenticated through my software,” DeKok estimates. He attributes that growth largely to the software being open source. Initially a way to enter the market with little funding, going open source has allowed FreeRADIUS to compete with bigger companies as an industry-leading product.
Although the software is critical for maintaining secure networks, most people aren’t aware of it because it works behind the scenes. DeKok is often met with surprise that it’s still in use. He compares RADIUS to a building foundation: “You need it, but you never think about it until there’s a crack in it.”
Over the years, DeKok has maintained FreeRADIUS by continually making small fixes. Like using a ratcheting tool to make a change inch by inch, “you shouldn’t underestimate that ratchet effect of tiny little fixes that add up over time,” he says.
He’s seen the project through minor patches and more significant fixes, like when researchers exposed a widespread vulnerability DeKok had been trying to fix since 1998. He also watched a would-be successor to the network protocol, Diameter, rise and fall in popularity in the 2000s and 2010s. (Diameter gained traction in mobile applications but has gradually been phased out in the shift to 5G.) Though Diameter offers improvements, RADIUS is far simpler and already widely implemented, giving it an edge, DeKok explains.
And he remains confident about its future. “People ask me, ‘What’s next for RADIUS?’ I don’t see it dying.” Estimating that billions of dollars of equipment run RADIUS, he says, “It’s never going to go away.”
About his own career, DeKok says he plans to keep working on FreeRADIUS, exploring new markets and products. “I never expected to have a company and a lot of people working for me, my name on all kinds of standards, and customers all over the world. But it worked out that way.”
This article appears in the March 2026 print issue as “Alan DeKok.”
2026-02-15 22:00:02

In December, NASA took another small, incremental step towards autonomous surface rovers. In a demonstration, the Perseverance team used AI to generate the rover’s waypoints. Perseverance used the AI waypoints on two separate days, traveling a total of 456 meters without human control.
“This demonstration shows how far our capabilities have advanced and broadens how we will explore other worlds,” said NASA Administrator Jared Isaacman. “Autonomous technologies like this can help missions to operate more efficiently, respond to challenging terrain, and increase science return as distance from Earth grows. It’s a strong example of teams applying new technology carefully and responsibly in real operations.”
Mars is a long way away, and there’s about a 25-minute delay for a round-trip signal between Earth and Mars. That means that one way or another, rovers are on their own for short periods of time.
The delay shapes the route-planning process. Rover drivers here on Earth examine images and elevation data and program a series of waypoints, usually no more than 100 meters apart. The driving plan is sent to NASA’s Deep Space Network (DSN), which transmits it to one of several orbiters, which then relay it to Perseverance. (Perseverance can receive direct comms from the DSN as a backup, but the data rate is slower.)
In this demonstration, the AI model analyzed orbital images from the Mars Reconnaissance Orbiter’s HiRISE camera, as well as digital elevation models. The AI, which is based on Anthropic’s Claude AI, identified hazards like sand traps, boulder fields, bedrock, and rocky outcrops. Then it generated a path defined by a series of waypoints that avoids the hazards. From there, Perseverance’s auto-navigation system took over. It has more autonomy than its predecessors and can process images and driving plans while in motion.
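NASA has not published the planner’s internals, and the actual system is built on a large language model rather than a classical search. Still, the underlying task, turning a hazard map into a safe sequence of waypoints, can be illustrated with a simple grid search; the map and function below are purely hypothetical:

```python
# Illustrative only: NASA has not published the planner, and the real system is
# built on a large language model, not a graph search. This sketch shows the
# general idea -- turn a hazard map into a safe sequence of waypoints.
from collections import deque

def plan_waypoints(hazard, start, goal):
    """Breadth-first search over a grid where hazard[r][c] == 1 marks sand traps,
    boulder fields, and other no-go cells. Returns a list of (row, col) waypoints."""
    rows, cols = len(hazard), len(hazard[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and not hazard[nr][nc] and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return []  # no safe route found

hazard_map = [[0, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 0]]
print(plan_waypoints(hazard_map, start=(0, 0), goal=(0, 3)))
```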
There was another important step before these waypoints were transmitted to Perseverance. NASA’s Jet Propulsion Laboratory has a “twin” for Perseverance called the “Vehicle System Test Bed” (VSTB) in JPL’s Mars Yard. It’s an engineering model that the team can work with here on Earth to solve problems, or for situations like this. These engineering versions are common on Mars missions, and JPL has one for Curiosity, too.
“The fundamental elements of generative AI are showing a lot of promise in streamlining the pillars of autonomous navigation for off-planet driving: perception (seeing the rocks and ripples), localization (knowing where we are), and planning and control (deciding and executing the safest path),” said Vandi Verma, a space roboticist at JPL and a member of the Perseverance engineering team. “We are moving towards a day where generative AI and other smart tools will help our surface rovers handle kilometer-scale drives while minimizing operator workload, and flag interesting surface features for our science team by scouring huge volumes of rover images.”
AI is rapidly becoming ubiquitous in our lives, showing up in places that don’t necessarily have a strong use case for it. But this isn’t NASA hopping on the AI bandwagon. The agency has been developing autonomous navigation systems for years, out of necessity. In fact, Perseverance already drives primarily with its onboard autonomous navigation system.
One thing that prevents fully autonomous driving is the way uncertainty grows as the rover operates without human assistance. The longer the rover travels, the more uncertain it becomes about its position on the surface. The solution is to re-localize the rover on its map. Currently, humans do this. But this takes time, including a complete communication cycle between Earth and Mars. Overall, it limits how far Perseverance can go without a helping hand.
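A toy dead-reckoning model shows why this matters; the 1 percent drift rate below is an assumed figure for illustration, not a published Perseverance number:

```python
# Toy model of why drive distance is capped: dead-reckoning error grows with
# distance traveled, so the rover must periodically be re-localized on its map.
# The 1% drift rate is an assumption for illustration, not a Perseverance figure.
drift_per_meter = 0.01   # meters of position error accumulated per meter driven

for driven in (100, 300, 456):
    print(f"After {driven} m: ~{driven * drift_per_meter:.1f} m of position uncertainty")
```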
NASA/JPL is also working on a way that Perseverance can use AI to re-localize. The main roadblock is matching orbital images with the rover’s ground-level images. It seems highly likely that AI will be trained to excel at this.
It’s obvious that AI is set to play a much larger role in planetary exploration. The next Mars rover may be quite different from current ones, with more advanced autonomous navigation and other AI features. There are already concepts for a swarm of flying drones released by a rover to extend its reach on Mars. These swarms would be controlled by AI to work together autonomously.
And it’s not just Mars exploration that will benefit from AI. NASA’s Dragonfly mission to Saturn’s moon Titan will make extensive use of AI, not only for autonomous navigation as the rotorcraft flies around but also for autonomous data curation.
“Imagine intelligent systems not only on the ground at Earth, but also in edge applications in our rovers, helicopters, drones, and other surface elements trained with the collective wisdom of our NASA engineers, scientists, and astronauts,” said Matt Wallace, manager of JPL’s Exploration Systems Office. “That is the game-changing technology we need to establish the infrastructure and systems required for a permanent human presence on the Moon and take the U.S. to Mars and beyond.”
2026-02-14 22:00:02

MicroVision, a solid-state sensor technology company located in Redmond, Wash., says it has designed a solid-state automotive lidar sensor intended to reach production pricing below US $200. That’s less than half of typical prices now, and it’s not even the full extent of the company’s ambition: Its longer-term goal is $100 per unit. MicroVision’s claim, if realized, would place lidar within reach of advanced driver-assistance systems (ADAS) rather than limiting it to high-end autonomous vehicle programs. Lidar’s limited market penetration comes down to one issue: cost.
“We are focused on delivering automotive-grade lidar that can actually be deployed at scale,” says MicroVision CEO Glen DeVos. “That means designing for cost, manufacturability, and integration from the start—not treating price as an afterthought.”
Tesla CEO Elon Musk famously dismissed lidar in 2019 as “a fool’s errand,” arguing that cameras and radar alone were sufficient for automated driving. A credible path to sub-$200 pricing would fundamentally alter the calculus of autonomous-car design by lowering the cost of adding precise three-dimensional sensing to mainstream vehicles. The shift reflects a broader industry trend toward solid-state lidar designs optimized for low-cost, high-volume manufacturing rather than maximum range or resolution.
Before those economics can be evaluated, however, it’s important to understand what MicroVision is proposing to build.
The company’s Movia S is a solid-state lidar. Mounted at the corners of a vehicle, the sensor sends out 905-nanometer-wavelength laser pulses and measures how long it takes for light reflected from the surfaces of nearby objects to return. The arrangement of the beam emitters and receivers provides a fixed field of view designed for 180-degree horizontal coverage rather than the full 360-degree scanning typical of traditional mechanical units. The company says the unit can detect objects at distances of up to roughly 200 meters under favorable weather conditions—compared with the roughly 300-meter radius scanned by mechanical systems—and supports frame rates suitable for real-time perception in driver-assistance systems. Earlier mechanical lidars used spinning components to steer their beams, but the Movia S is a phased-array system: It controls the amplitude and phase of the signals across an array of antenna elements to steer the beam. The unit is designed to meet automotive requirements for vibration tolerance, temperature range, and environmental sealing.
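The ranging principle itself is simple time-of-flight arithmetic: a pulse travels out and back, so distance is the speed of light times the round-trip time, divided by two. A quick illustration with a made-up timing value:

```python
# Time-of-flight ranging: a pulse travels to the target and back, so
# distance = (speed of light x round-trip time) / 2. The timing value is illustrative.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2

# A return from about 200 m away comes back in roughly 1.33 microseconds.
print(f"{tof_distance_m(1.334e-6):.1f} m")
```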
MicroVision’s pricing targets might sound aggressive, but they are not without precedent. The lidar industry has already experienced one major cost reset over the past decade.
Around 2016 and 2017, mechanical lidar systems used in early autonomous driving research often sold for close to $100,000. Those units relied on spinning assemblies to sweep laser beams across a full 360 degrees, which made them expensive to build and difficult to ruggedize for consumer vehicles.
“Back then, a 64-beam Velodyne lidar cost around $80,000,” says Hayder Radha, a professor of electrical and computer engineering at Michigan State University and director of the school’s Connected & Autonomous Networked Vehicles for Active Safety program.
Comparable mechanical lidars from multiple suppliers now sell in the $10,000 to $20,000 range. That roughly tenfold drop helps explain why suppliers now believe another steep price reduction is possible.
“For solid-state devices, it is feasible to bring the cost down even more when manufacturing at high volume,” Radha says. With demand expanding beyond fully autonomous vehicles into driver-assistance applications, “one order or even two orders of magnitude reduction in cost are feasible.”
Lower cost, however, does not come for free. The same design choices that enable solid-state lidar to scale also introduce new constraints.
“Unlike mechanical lidars, which provide full 360-degree coverage, solid-state lidars tend to have a much smaller field of view,” Radha says. Many cover 180 degrees or less.
That limitation shifts the burden from the sensor to the system. Automakers will need to deploy three or four solid-state lidars around a vehicle to achieve full coverage. Even so, Radha notes, the total cost can still undercut that of a single mechanical unit.
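The article’s own figures make that point easy to check; the choice of four corner sensors and the low end of the mechanical price range are assumptions for the example:

```python
# The coverage math behind Radha's point, using the article's price figures.
solid_state_unit_cost = 200        # MicroVision's near-term target, USD
units_for_full_coverage = 4        # corner-mounted 180-degree sensors (assumed)
mechanical_unit_cost = 10_000      # low end of today's mechanical lidar range

multi_sensor_cost = solid_state_unit_cost * units_for_full_coverage
print(multi_sensor_cost)                          # 800
print(mechanical_unit_cost / multi_sensor_cost)   # 12.5x cheaper for full coverage
```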
What changes is integration. Multiple sensors must be aligned, calibrated, and synchronized so their data can be fused accurately. The engineering is manageable, but it adds complexity that price targets alone do not capture.
DeVos says MicroVision’s design choices reflect that reality. “Automakers are not buying a single sensor in isolation,” he says. “They are designing a perception system, and cost only matters if the system as a whole is viable.”
Those system-level tradeoffs help explain where low-cost lidar is most likely to appear first.
Most advanced driver assistance systems today rely on cameras and radar, which are significantly cheaper than lidar. Cameras provide dense visual information, while radar offers reliable range and velocity data, particularly in poor weather. Radha estimates that lidar remains roughly an order of magnitude more expensive than automotive radar.
But at prices in the $100 to $200 range, that gap narrows enough to change design decisions.
“At that point, lidar becomes appealing because of its superior capability in precise 3D detection and tracking,” Radha says.
Rather than replacing existing sensors, lower-cost lidar would likely augment them, adding redundancy and improving performance in complex environments that are challenging for electronic perception systems. That incremental improvement aligns more closely with how ADAS features are deployed today than with the leap to full vehicle autonomy.
MicroVision is not alone in pursuing solid-state lidar. Several suppliers, including the Chinese firms Hesai and RoboSense as well as Luminar and Velodyne, have announced long-term cost targets below $500. What distinguishes MicroVision’s current claims is the explicit focus on sub-$200 pricing tied to production volume rather than future prototypes or limited pilot runs.
Some competitors continue to prioritize long-range performance for autonomous vehicles, which pushes cost upward. Others have avoided aggressive pricing claims until they secure firm production commitments from automakers.
That caution reflects a structural challenge: Reaching consumer-level pricing requires large, predictable demand. Without it, few suppliers can justify the manufacturing investments needed to achieve true economies of scale.
Even if low-cost lidar becomes manufacturable, another question remains: How should its performance be judged?
From a systems-engineering perspective, Radha says cost milestones often overshadow safety metrics.
“The key objective of ADAS and autonomous systems is improving safety,” he says. Yet there is no universally adopted metric that directly expresses safety gains from a given sensor configuration.
Researchers instead rely on perception benchmarks such as mean Average Precision, or mAP, which measures how accurately a system detects and tracks objects in its environment. Including such metrics alongside cost targets, says Radha, would clarify what performance is preserved or sacrificed as prices fall.
IEEE Spectrum has covered lidar extensively, often focusing on technical advances in scanning, range, and resolution. What distinguishes the current moment is the renewed focus on economics rather than raw capability.
If solid-state lidar can reliably reach sub-$200 pricing, it will not invalidate Elon Musk’s skepticism—but it will weaken one of its strongest foundations. When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one.
That decision, more than any single price claim, may determine whether lidar finally becomes a routine component of vehicle safety systems.