2025-11-23 21:05:01

“Why worry about something that isn’t going to happen?”
KGB Chairman Charkov’s question to inorganic chemist Valery Legasov in HBO’s “Chernobyl” miniseries makes a good epitaph for the hundreds of software development, modernization, and operational failures I have covered for IEEE Spectrum since my first contribution, to its September 2005 special issue on learning—or rather, not learning—from software failures. I noted then, and it’s still true two decades later: Software failures are universally unbiased. They happen in every country, to large companies and small. They happen in commercial, nonprofit, and governmental organizations, regardless of status or reputation.
Global IT spending has more than tripled in constant 2025 dollars since 2005, from US $1.7 trillion to $5.6 trillion, and continues to rise. Despite additional spending, software success rates have not markedly improved in the past two decades. The result is that the business and societal costs of failure continue to grow as software proliferates, permeating and interconnecting every aspect of our lives.
For those hoping AI software tools and coding copilots will quickly make large-scale IT software projects successful, forget about it. For the foreseeable future, there are hard limits on what AI can bring to the table in controlling and managing the myriad intersections and trade-offs among systems engineering, project, financial, and business management, and especially the organizational politics involved in any large-scale software project. Few IT projects are displays of rational decision-making from which AI can or should learn. As software practitioners know, IT projects suffer from enough management hallucinations and delusions without AI adding to them.
As I noted 20 years ago, the drivers of software failure frequently are failures of human imagination, unrealistic or unarticulated project goals, the inability to handle the project’s complexity, or unmanaged risks, to name a few that today still regularly cause IT failures. Numerous others go back decades, such as those identified by Stephen Andriole, the chair of business technology at Villanova University’s School of Business, in the diagram below first published in Forbes in 2021. Uncovering a software system failure that has gone off the rails in a unique, previously undocumented manner would be surprising because the overwhelming majority of software-related failures involve avoidable, known failure-inducing factors documented in hundreds of after-action reports, academic studies, and technical and management books for decades. Failure déjà vu dominates the literature.
The question is, why haven’t we applied what we have repeatedly been forced to learn?
Many of the IT development and operational failures I have analyzed over the last 20 years have each had their own Chernobyl-like meltdown, spreading reputational radiation everywhere and contaminating the lives of those affected for years. Each typically has a story that strains belief. A prime example is the Canadian government’s CA $310 million Phoenix payroll system, which went live in April 2016 and soon after went supercritical.
Phoenix project executives believed they could deliver a modernized payroll system, customizing PeopleSoft’s off-the-shelf payroll package to follow 80,000 pay rules spanning 105 collective agreements with federal public-service unions. The project also attempted to implement 34 human-resource system interfaces across the 101 government agencies and departments required for sharing employee data. Further, the government’s developer team thought they could accomplish this for less than 60 percent of the vendor’s proposed budget. They’d save by removing or deferring critical payroll functions, reducing system and integration testing, decreasing the number of contractors and government staff working on the project, and forgoing vital pilot testing, along with a host of other overly optimistic proposals.

The Phoenix payroll failure pales in comparison to the worst operational IT system failure to date: the U.K. Post Office’s electronic point-of-sale (EPOS) Horizon system, provided by Fujitsu. Rolled out in 1999, Horizon was riddled with internal software errors that were deliberately hidden, leading to the Post Office unfairly accusing 3,500 local post branch managers of false accounting, fraud, and theft. Approximately 900 of these managers were convicted, with 236 incarcerated between 1999 and 2015. By then, the general public and the branch managers themselves finally joined Computer Weekly’s reporters (who had doggedly reported on Horizon’s problems since 2008) in the knowledge that there was something seriously wrong with Horizon’s software. It then took another decade of court cases, an independent public statutory inquiry, and an ITV miniseries “Mr. Bates vs. The Post Office” to unravel how the scandal came to be.
Like Phoenix, Horizon was plagued with problems that involved technical, management, organizational, legal, and ethical failures. For example, the core electronic point-of-sale system software was built on communication and data-transfer middleware that was itself buggy. In addition, Horizon’s functionality ran wild under unrelenting, ill-disciplined scope creep. There were ineffective or missing development and project management processes, inadequate testing, and a lack of skilled professional, technical, and managerial personnel.
The Post Office’s senior leadership repeatedly stated that the Horizon software was fully reliable, becoming hostile toward postmasters who questioned it, which only added to the toxic environment. As a result, leadership invoked every legal means at its disposal and crafted a world-class cover-up, including the active suppression of exculpatory information, so that the Post Office could aggressively prosecute postmasters and attempt to crush any dissent questioning Horizon’s integrity.
Shockingly, those wrongly accused still have to fight to be paid just compensation for their ruined lives. Nearly 350 of the accused died before receiving any payment for the injustices they experienced, at least 13 of them believed to have been by suicide. Unfortunately, after attempts to replace Horizon failed in 2016 and 2021, the Post Office continues to use it, at least for now. The government wants to spend £410 million on a new system, but it’s a safe bet that implementing it will cost much, much more. The Post Office accepted bids for a new point-of-sale software system in summer 2025, with a decision expected by 1 July 2026.
Phoenix’s payroll meltdown, meanwhile, was preordained. Over the past nine years, around 70 percent of the 430,000 current and former Canadian federal government employees paid through Phoenix have endured paycheck errors. Even as recently as fiscal year 2023–2024, a third of all employees experienced paycheck mistakes. The ongoing financial stress and anxiety for thousands of employees and their families have been immeasurable. Not only are recurring paycheck troubles sapping worker morale, but in at least one documented case, a coroner blamed an employee’s suicide on the unbearable financial and emotional strain she suffered.
By the end of March 2025, when the Canadian government had promised that the backlog of Phoenix errors would finally be cleared, over 349,000 cases were still unresolved, with 53 percent pending for more than a year. In June, the Canadian government once again committed to significantly reducing the backlog, this time by June 2026. Given previous promises, skepticism is warranted.

2019
The planned $41 million Minnesota Licensing and Registration System (MNLARS) effort is rolled out in 2016 and then is canceled in 2019 after a total cost of $100 million. It is deemed too hard to fix.
The financial costs to Canadian taxpayers related to Phoenix’s troubles have so far climbed to over CA $5.1 billion (US $3.6 billion). It will take years to calculate the final cost of the fiasco. The government spent at least CA $100 million (US $71 million) before deciding on a Phoenix replacement, which the government acknowledges will cost several hundred million dollars more and take years to implement. The late Canadian Auditor General Michael Ferguson’s audit reports for the Phoenix fiasco described the effort as an “incomprehensible failure of project management and oversight.”
While it may be a project management and oversight disaster, an incomprehensible failure Phoenix certainly is not. The IT community has striven mightily for decades to make the incomprehensible routine.
South of the Canadian border, the United States has also seen the overall cost of IT-related development and operational failures since 2005 rise to the multi-trillion-dollar range, potentially topping $10 trillion. A report from the Consortium for Information & Software Quality (CISQ) estimated the annual cost of operational software failures in the United States in 2022 alone was $1.81 trillion, with another $260 billion spent on software-development failures. Together, that is larger than the total U.S. defense budget for that year, $778 billion.
What percentage of software projects fail, and what failure means, has been an ongoing debate within the IT community stretching back decades. Without diving into the debate, it’s clear that software development remains one of the riskiest technological endeavors to undertake. Indeed, according to Bent Flyvbjerg, professor emeritus at the University of Oxford’s Saïd Business School, comprehensive data shows that not only are IT projects risky, they are the riskiest from a cost perspective.

2022
Australia’s planned AU $480.5 million program to modernize its business register systems is canceled. After AU $530 million is spent, a review finds that the projected cost has risen to AU $2.8 billion, and the project would take five more years to complete.
The CISQ report estimates that organizations in the United States spend more than $520 billion annually supporting legacy software systems, with 70 to 75 percent of organizational IT budgets devoted to legacy maintenance. A 2024 report by services company NTT DATA found that 80 percent of organizations concede that “inadequate or outdated technology is holding back organizational progress and innovation efforts.” Furthermore, the report says that virtually all C-level executives believe legacy infrastructure thwarts their ability to respond to the market. Even so, given that the cost of replacing legacy systems is typically many multiples of the cost of supporting them, business executives hesitate to replace them until keeping them running is no longer operationally feasible or cost-effective. The other reason is a well-founded fear that a replacement will turn into a debacle like Phoenix.
Nevertheless, there have been ongoing attempts to improve software development and sustainment processes. For example, we have seen increasing adoption of iterative and incremental strategies to develop and sustain software systems through Agile approaches, DevOps methods, and other related practices.

2025
Louisiana’s governor orders a state of emergency over repeated failures of the 50-year-old Office of Motor Vehicles mainframe computer system. The state promises expedited acquisition of a new IT system, which might be available by early 2028.
The goal is to deliver usable, dependable, and affordable software to end users in the shortest feasible time. DevOps strives to accomplish this continuously throughout the entire software life cycle. While Agile and DevOps have proved successful for many organizations, they also have their share of controversy and pushback. Provocative reports claim Agile projects have a failure rate of up to 65 percent, while others claim up to 90 percent of DevOps initiatives fail to meet organizational expectations.
It is best to be wary of these claims while also acknowledging that successfully implementing Agile or DevOps methods takes consistent leadership, organizational discipline, patience, investment in training, and culture change. However, the same requirements have always been true when introducing any new software platform. Given the historic lack of organizational resolve to instill proven practices, it is not surprising that novel approaches for developing and sustaining ever more complex software systems, no matter how effective they may be, will also frequently fall short.
The frustrating and perpetual question is why basic IT project-management and governance mistakes during software development and operations continue to occur so often, given the near-total societal reliance on reliable software and an extensively documented history of failures to learn from. Next to electrical infrastructure, with which IT is increasingly merging into a mutually codependent relationship, the failure of our computing systems is an existential threat to modern society.
Frustratingly, the IT community stubbornly fails to learn from prior failures. IT project managers routinely claim that their project is somehow different or unique and, thus, lessons from previous failures are irrelevant. That is the excuse of the arrogant, though usually not the ignorant. In Phoenix’s case, for example, it was the government’s second payroll-system replacement attempt, the first effort ending in failure in 1995. Phoenix project managers ignored the well-documented reasons for the first failure because they claimed its lessons were not applicable, which did nothing to keep the managers from repeating them. As it’s been said, we learn more from failure than from success, but repeated failures are damn expensive.

2025
A cyberattack forced Jaguar Land Rover, Britain’s largest automaker, to shut down its global operations for over a month. An initial assessment using FAIR-MAM, a cybersecurity cost model, estimates Jaguar Land Rover’s losses at between $1.2 billion and $1.9 billion (£911 million and £1.4 billion). The shutdown affected the company’s 33,000 employees and some 200,000 employees of its suppliers.
Not all software development failures are bad; some failures are even desired. When pushing the limits of developing new types of software products, technologies, or practices, as is happening with AI-related efforts, potential failure is an accepted possibility. With failure, experience increases, new insights are gained, fixes are made, constraints are better understood, and technological innovation and progress continue. However, most IT failures today are not related to pushing the innovative frontiers of the computing art, but the edges of the mundane. They do not represent Austrian economist Joseph Schumpeter’s “gales of creative destruction.” They’re more like gales of financial destruction. Just how many more enterprise resource planning (ERP) project failures are needed before success becomes routine? Such failures should be called IT blunders, as learning anything new from them is dubious at best.
Was Phoenix a failure or a blunder? I argue strongly for the latter, but at the very least, Phoenix serves as a master class in IT project mismanagement. The question is whether the Canadian government learned from this experience any more than it did from 1995’s payroll-project fiasco. The government maintains it will learn, which might be true, given the Phoenix failure’s high political profile. But will Phoenix’s lessons extend to the thousands of outdated Canadian government IT systems needing replacement or modernization? Hopefully, but hope is not a methodology, and purposeful action will be necessary.
Repeatedly making the same mistakes and expecting a different result is not learning. It is a farcical absurdity. Paraphrasing Henry Petroski in his book To Engineer Is Human: The Role of Failure in Successful Design (Vintage, 1992), we may have learned how to calculate the risk of software failure, but we have not learned how to calculate away the failures of the mind. There is a plethora of examples of projects like Phoenix that failed in part because of bumbling management, yet it is extremely difficult to find software projects that were managed professionally and still failed. Finding examples of what could be termed “IT heroic failures” is like Diogenes seeking one honest man.
The consequences of not learning from blunders will be much greater and more insidious as society grapples with the growing effects of artificial intelligence, or more accurately, “intelligent” algorithms embedded into software systems. Hints of what might happen if past lessons go unheeded are found in the spectacular early automated decision-making failures of Michigan’s MiDAS unemployment system and Australia’s Centrelink “Robodebt” welfare system. Both used questionable algorithms to identify deceptive payment claims without human oversight. State officials used MiDAS to accuse tens of thousands of Michiganders of unemployment fraud, while Centrelink officials falsely accused hundreds of thousands of Australians of being welfare cheats. Untold numbers of lives will never be the same because of what occurred. Government officials in Michigan and Australia placed far too much trust in those algorithms. They had to be dragged, kicking and screaming, to acknowledge that something was amiss, even after it was clearly demonstrated that the software was untrustworthy. Even then, officials tried to downplay the errors’ impact on people, then fought against paying compensation to those adversely affected. While such behavior is legally termed “maladministration,” administrative evil is closer to reality.

2017
The international supermarket chain Lidl decides to revert to its homegrown legacy merchandise-management system after three years of trying to make SAP’s €500 million enterprise resource planning (ERP) system work properly.
If this behavior happens in government organizations, does anyone think profit-driven companies whose AI-driven systems go wrong are going to act any better? As AI becomes embedded in ever more IT systems—especially governmental systems and the growing digital public infrastructure, which we as individuals have no choice but to use—the opaqueness of how these systems make decisions will make it harder to challenge them. The European Union has given individuals a legal “right to explanation” when a purely algorithmic decision goes against them. It’s time for transparency and accountability regarding all automated systems to become a fundamental, global human right.
What will it take to reduce IT blunders? Not much has worked with any consistency over the past 20 years. The financial incentives for building flawed software, the IT industry’s addiction to failure porn, and the lack of accountability for foolish management decisions are deeply entrenched in the IT community. Some argue it is time for software liability laws, while others contend that it is time for IT professionals to be licensed like all other professionals. Neither is likely to happen anytime soon.

2018
Boeing adds a poorly designed and poorly documented Maneuvering Characteristics Augmentation System (MCAS) to its new 737 Max, creating safety problems that lead to two fatal airliner crashes, killing 346 passengers and crew, and the grounding of the fleet for some 20 months. The total cost to Boeing is estimated at $14 billion in direct costs and $60 billion in indirect costs.
So, we are left with only a professional and personal obligation to reemphasize the obvious: Ask what you do know, what you should know, and how big the gap is between them before embarking on creating an IT system. If no one else has ever successfully built your system with the schedule, budget, and functionality you asked for, please explain why your organization thinks it can. Software is inherently fragile; building complex, secure, and resilient software systems is difficult, detailed, and time-consuming. Small errors have outsize effects, each with an almost infinite number of ways they can manifest, from causing a minor functional error to a system outage to allowing a cybersecurity threat to penetrate the system. The more complex and interconnected the system, the more opportunities for errors and their exploitation. A nice start would be for senior management who control the purse strings to finally treat software and systems development, operations, and sustainment efforts with the respect they deserve. This not only means providing the personnel, financial resources, and leadership support and commitment, but also the professional and personal accountability they demand.

2025
Software and hardware issues with the F-35 Block 4 upgrade continue unabated. The Block 4 upgrade program, which started in 2018 and is intended to increase the lethality of the Joint Strike Fighter, has slipped from 2026 to 2031 at the earliest, with costs rising from $10.5 billion to a minimum of $16.5 billion. It will take years more to roll out the capability to the F-35 fleet.
It is well known that honesty, skepticism, and ethics are essential to achieving project success, yet they are often absent. Only senior management can demand they exist. For instance, honesty begins with the forthright accounting of the myriad of risks involved in any IT endeavor, not their rationalization. It is a common “secret” that it is far easier to get funding to fix a troubled software development effort than to ask for what is required up front to address the risks involved. Vendor puffery may also be legal, but that means the IT customer needs a healthy skepticism of the typically too-good-to-be-true promises vendors make. Once the contract is signed, it is too late. Furthermore, computing’s malleability, complexity, speed, low cost, and ability to reproduce and store information combine to create ethical situations that require deep reflection about computing’s consequences on individuals and society. Alas, ethical considerations have routinely lagged when technological progress and profits are to be made. This practice must change, especially as AI is routinely injected into automated systems.
In the AI community, there has been a movement toward the idea of human-centered AI, meaning AI systems that prioritize human needs, values, and well-being. This means trying to anticipate where and when AI can go wrong, working to eliminate those situations, and building in ways to mitigate the effects if they do happen. This concept should be applied to every IT effort, not just those involving AI.
Finally, project cost-benefit justifications of software developments rarely consider the financial and emotional distress placed on end users of IT systems when something goes wrong. These include the long-term failure after-effects. If these costs had to be taken fully into account, such as in the cases of Phoenix, MiDAS, and Centrelink, perhaps there could be more realism in what is required managerially, financially, technologically, and experientially to create a successful software system. It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined. Make new ones, damn it. As Roman orator Cicero said in Philippic 12, “Anyone can make a mistake, but only an idiot persists in his error.”
Special thanks to Steve Andriole, Hal Berghel, Matt Eisler, John L. King, Roger Van Scoy, and Lee Vinsel for their invaluable critiques and insights.
2025-11-22 22:00:02

As an auditor of battery manufacturers around the world, University of Maryland mechanical engineer Michael Pecht frequently finds himself touring spotless production floors. They’re akin to “the cleanest hospital that you could imagine–it’s semiconductor-type cleanliness,” he says. But he’s also seen the opposite, and plenty of it. Pecht estimates he’s audited dozens of battery factories where he found employees watering plants next to a production line or smoking cigarettes where particulates and contaminants can get into battery components and compromise their performance and safety.
Unfortunately, those kinds of scenes are just the tip of the iceberg. Pecht says he’s seen poorly assembled lithium-ion cells with little or no safety features and, worse, outright counterfeits. These phonies may be home-built or factory-built and masquerade as those from well-known global brands. They’ve been found in scooters, vape pens, e-bikes, and other devices, and have caused fires and explosions with lethal consequences.
The prevalence of fakes is on the rise, causing growing concern in the global battery market. In fact, after a rash of fires in New York City over the past few years caused by faulty batteries, including many powering e-bikes used by the city’s delivery cyclists, New York banned the sale of uncertified batteries. The city is also setting up its first e-bike battery-swapping stations, an effort to coax delivery riders to swap depleted batteries for fresh ones rather than charging at home, where a bad battery could be a fire hazard.
Compared with certified batteries, whose public safety risks may be overblown, the dangers of counterfeit batteries may be underrated. “It is probably an order of magnitude worse with these counterfeits,” Pecht says.
There are a few ways to build a counterfeit battery. Scammers often relabel old or scrap batteries built by legitimate manufacturers like LG, Panasonic, or Samsung and sell them as new. “It’s so simple to make a new label and put it on,” Pecht says. To fetch a higher price, they sometimes rebadge real batteries with labels that claim more capability than the cells actually have.
But the most prevalent fake batteries, Pecht says, are homemade creations. Counterfeiters can do this in makeshift environments because building a lithium-ion cell is fairly straightforward. With an anode, cathode, separator, electrolyte, and other electrical elements, even fly-by-night battery makers can get the cells to work.
What they don’t do is make them as safe and reliable as tested, certified batteries. Counterfeiters skimp on safety mechanisms that prevent issues that lead to fire. For example, certified batteries are built to stop thermal runaway, the chain reaction that can start because of an electrical short or mechanical damage to the battery and lead to the temperature increasing out of control.
Judy Jeevarajan, the vice president and executive director of Houston-based Electrochemical Safety Research Institute, which is part of Underwriters Laboratories (UL) Research Institutes, led a study of fake batteries in 2023. In the study, Jeevarajan and her colleagues gathered both real and fake lithium batteries from three manufacturers (whose names were withheld), and pushed them to their limits to demonstrate the differences.
One test, called a destructive physical analysis, involved dismantling small cylindrical batteries. This immediately revealed differences in quality. The legitimate, higher quality examples contained thick plastic insulators at the top and bottom of the cylinders, as well as axially and radially placed tape to hold the “jelly roll” core of the battery. But illegitimate examples had thinner insulators or none at all, and little or no safety tape.
“This is a major concern from a safety perspective as the original products are made with certain features to reduce the risk associated with the high energy density that li-ion cells offer,” Jeevarajan says.
Jeevarajan’s team also subjected batteries to overcharging and to electrical shorts. A legitimately tested and certified battery, like the iconic 18650 lithium-ion cylinder, counters these threats with internal safety features such as a positive temperature coefficient (PTC) device, made of a material that gains electrical resistance as it gets hotter, and a current interrupt device (CID), which automatically disconnects the battery’s electrical circuit if the internal pressure rises too high. The legit lithium battery in Jeevarajan’s test had the best insulators and internal construction. It also had a high-quality CID that prevented overcharging, reducing the risk of a fire. Neither of the other cells had one.
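To make those two mechanisms concrete, here is a toy sketch in Python. It is only my illustration of the behavior described above; the thresholds, numbers, and function names are made-up assumptions, not UL test code or any manufacturer’s specifications.

# Toy model of the protections described above: a PTC element whose resistance
# climbs steeply with temperature (throttling current), and a CID that opens
# the circuit once internal pressure crosses a threshold.
# All values are illustrative assumptions, not real cell specifications.

def ptc_resistance(temp_c, base_ohms=0.05):
    """Resistance stays low when cool, then rises sharply above ~60 degrees C."""
    if temp_c <= 60:
        return base_ohms
    return base_ohms * (1 + 0.5 * (temp_c - 60))

def cid_tripped(pressure_kpa, trip_kpa=1000):
    """The current interrupt device disconnects the cell above a trip pressure."""
    return pressure_kpa >= trip_kpa

def cell_current(volts, temp_c, pressure_kpa):
    """Current the cell will pass given its protections; zero once the CID trips."""
    if cid_tripped(pressure_kpa):
        return 0.0
    return volts / ptc_resistance(temp_c)

# As an overcharge heats and pressurizes the cell, the PTC throttles current
# first, and the CID then cuts the circuit entirely.
for temp, pressure in [(25, 200), (80, 600), (120, 1200)]:
    print(f"{temp} C, {pressure} kPa -> {cell_current(4.2, temp, pressure):.1f} A")

A counterfeit cell missing these parts, by contrast, has nothing to throttle or interrupt the current as conditions worsen.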
Despite the gross lack of safety parts in the batteries, great care had clearly gone into making sure the counterfeit labels had the exact same shade and markings as the original manufacturer’s, Jeevarajan says.
Because counterfeiters are so skilled at duplicating manufacturers’ labels, it can be hard to know for sure whether the lithium batteries that come with a consumer electronics device, or the replacements that can be purchased on sites like eBay or Amazon, are in fact the genuine article. It’s not just individual consumers who struggle with this. Pecht says he knows of instances where device makers have bought what they thought were LG or Samsung batteries for their machines but failed to verify that the batteries were the real thing.
“One cannot tell from visually inspecting it,” Jeevarajan says. But companies don’t have to dismantle the cells to do their due diligence. “The lack of safety devices internal to the cell can be determined by carrying out tests that verify their presence,” she says. A simple way, Pecht says, is to have a comparison standard on hand – a known, legitimate battery whose labeling, performance, or other characteristics can be compared to a questionable cell. His team will even go as far as doing a CT scan to see inside a battery and find out whether it is built correctly.
Of course, most consumers don’t have the equipment on hand to test the veracity of all the rechargeable batteries in their homes. To shop smart, then, Pecht advises people to think about what kind of batteries and devices they’re using. The units in our smartphones and the large, high-capacity batteries found in electric vehicles aren’t the problem; they are subject to strict quality control and very unlikely to be fake. By far, he says, the more likely places to find counterfeits are the cylindrical batteries found in small, inexpensive devices.
“They are mostly found as energy and power sources for portable applications that can vary from your cameras, camcorders, cell phones, power banks, power tools, e-bikes and e-scooters,” adds Jeevarajan. “For most of these products, they are sold with part numbers that show an equivalency to a manufacturer’s part number. Electric vehicles are a very high-tech market, and they would not accept low-quality cells or batteries of questionable origin.”
The trouble with battling the counterfeit battery scourge, Pecht says, is that new rules tend to focus on consumer behavior, such as trying to prevent people from improperly storing or charging e-bike batteries in their apartments. Safe handling and charging are indeed crucial, but what’s even more important is trying to keep counterfeits out of the supply chain. “They want to blame the user, like you overcharged it or you did this wrong,” he says. “But in my view, it’s the cells themselves” that are the problem.
2025-11-22 21:00:02

Between humans and machines,
feedback loops of love and grace.
It could be that way, he wrote.
Less robotic ourselves, we could
live more in dreams, less in routines.
Things that made us weak and strange
can be engineered around:
servos here, neural nets there,
bits of bone, and hanks of hair,
becoming beautiful and profound.
With each machine, we make a mirror
thinking of us as we may think
of it. Images come again,
new, yet we recognize them
as something almost known before.
Every web conceals its spider.
There is unease because of this.
As there should be. Control, yes,
but rare freedom to some degree--
freedom’s always a contingency.
We are old enough to be friends.
Let each kind be kind to the other.
Let there be commerce among us–
feedback loops of love and grace
between machines and humans.
2025-11-22 03:00:02

This article is part of our exclusive career advice series in partnership with the IEEE Technology and Engineering Management Society.
Let’s say you’ve been in your role for a few years now. You know your systems inside and out. You’ve solved tricky problems, led small teams, and delivered results on time. But lately, between status meetings and routine design reviews, you’ve caught yourself thinking: There must be a better way to do this task. Someone should make this better.
Then you spend some time imagining. Maybe it’s a new tool that would save weeks of engineering time. Or a better process. Or a new product feature. You sketch it out after work hours, maybe even build a quick prototype. Then you think: I could make this product myself.
The shift from “someone should” to “I will” is the start of entrepreneurial thinking. And you don’t have to quit your job or have a billionaire’s appetite for risk to begin.
As an engineer, you already have the ability to analyze complex problems, design viable solutions, and follow them through to a working prototype. Your technical skills came from a structured training background and hands-on projects. Your ability to lead, persuade, and navigate uncertainty often comes from experience, especially when you step outside your usual responsibilities.
Some of the most game-changing products didn’t begin as formal projects. They started as bootleg efforts—side projects developed quietly by engineers who saw an opportunity. Post-it Notes and Gmail both began that way. Many companies now encourage such efforts; some even allow their engineers to devote 15 to 20 percent of their workweek to pursuing their own ideas.
Ideas can be easy. Execution is harder. Nearly every engineer has a colleague with a clever idea that never got past the whiteboard. The difference between wanting to act and actually taking action—known as the intention-action gap—is where entrepreneurship lives or dies. Successful innovators build the discipline to cross the gap—one small, concrete step at a time.
You don’t need to be born creative to be entrepreneurial. Here are ways to reprogram your mindset.
And, yes, timing matters. Amazon might have stayed just an online bookstore without the rise of e-commerce. The right idea at the wrong time is likely to struggle. Start with current trends: AI, for instance, offers extremely low barriers to entry, and everything is being built around it these days.
Entrepreneurial thinking isn’t only for startup founders. It can mean championing a new process at your company, building an internal tool that changes how your team works, or bringing a product idea from sketch to launch. The engineering mindset—systematic, detail-oriented, problem-solving—is an asset that can power not just products but entire companies.
If you’ve ever thought: There’s got to be a better way—and if you felt the itch to make it real—you might be closer to being an entrepreneur than you think. Don’t wait any longer; the best time to start is tomorrow.
2025-11-22 00:20:16

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Researchers at the RAI Institute have built a low-impedance platform to study dynamic robot manipulation. In this demo, robots play a game of catch and participate in batting practice, both with each other and with skilled humans. The robots are capable of throwing at 70 mph [112 kph], approaching the speed of a strong high school pitcher. The robots can catch and bat at short distances (23 feet [7 m]), requiring quick reaction times to catch balls thrown at up to 41 mph [66 kph] and hit balls pitched at up to 30 mph [48 kph].
That’s a nice touch with the custom “RAI” baseball gloves, but what I really want to know is how long a pair of robots can keep themselves entertained.
[ RAI Institute ]
This week’s best bacronym winner is GIRAF: Greatly Increased Reach AnyMAL Function. And if that arm looks like magic, that’s because it is, although with some careful pausing of the video you’ll be able to see how it works.
[ Stanford BDML ]
DARPA concluded the second year of the DARPA Triage Challenge on October 4, awarding top marks to DART and MSAI in Systems and Data competitions, respectively. The three-year prize competition aims to revolutionize medical triage in mass casualty incidents where medical resources are limited.
[ DARPA ]
We propose a robot agnostic reward function that balances the achievement of a desired end pose with impact minimization and the protection of critical robot parts during reinforcement learning. To make the policy robust to a broad range of initial falling conditions and to enable the specification of an arbitrary and unseen end pose at inference time, we introduce a simulation-based sampling strategy of initial and end poses. Through simulated and real-world experiments, our work demonstrates that even bipedal robots can perform controlled, soft falls.
[ Moritz Baecher ]
Oh look, more humanoid acrobatics.
My prediction: once humanoid companies run out of mocapped dance moves, we’ll start seeing some freaky stuff that leverages the degrees of freedom that robots have and humans do not. You heard it here first, folks.
[ MagicLab ]
I challenge the next company that makes a “lights-out” video to just cut to a totally black screen with a little “Successful Picks” counter in the corner that goes up and up and up.
[ Brightpick ]
Thanks, Gilmarie!
The terrain stuff is cool and all but can we just talk about the trailer instead?
[ LimX Dynamics ]
Presumably very picky German birblets are getting custom nesting boxes manufactured with excessively high precision by robots.
[ TUM ]
All those UBTECH Walker S2 robots weren’t fake, it turns out.
[ UBTECH ]
This is more automation than what we’d really be thinking of as robotics at this point, but I could still watch it all day.
[ Motoman ]
Brad Porter (Cobot) and Alfred Lin (Sequoia Capital) discuss the future of robotics, AI, and automation at the Human[X] Conference, moderated by CNBC’s Kate Rooney. They explore why collaborative robots are accelerating now, how AI is transforming physical systems, the role of humanoids, labor market shifts, and the investment trends shaping the next decade of robotics.
[ Cobot ]
Humanoid robots have long captured our imagination. Interest has skyrocketed along with the perception that robots are getting closer to taking on a wide range of labor-intensive tasks. In this discussion, we reflect on what we’ve learned by observing factory floors, and why we’ve grown convinced that chasing generalization in manipulation—both in hardware and behavior—isn’t just interesting, but necessary. We’ll discuss AI research threads we’re exploring at Boston Dynamics to push this mission forward, and highlight opportunities our field should collectively invest more in to turn the humanoid vision, and the reinvention of manufacturing, into a practical, economically viable product.
[ Boston Dynamics ]
On November 12, 2025, Tom Williams presented “Degrees of Freedom: On Robotics and Social Justice” as part of the Michigan Robotics Seminar Series.
Ask the OSRF Board of Directors anything! Or really, listen to other people ask them anything.
[ ROSCon ]
2025-11-20 23:00:02

A few years ago, Matthew Carey lost a friend in a freak car accident, after the friend’s car struck some small debris on a highway. The accident happened under conditions that render nearly all of today’s car-mounted sensors useless: fog and bright early-morning sunshine. Radar can’t see small objects well, lidar is limited by fog, and cameras are blinded by glare. Carey and his cofounders decided to create a sensor that could have done the job—a terahertz imager.
Historically, terahertz frequencies have been the least utilized portion of the electromagnetic spectrum. People have struggled to send them even short distances through the air. But thanks to some intense engineering and improvements in silicon transistor frequency, beaming terahertz radiation over hundreds of meters is now possible. Teradar, the Boston-based startup Carey cofounded, has managed to make a sensor that can meet the auto industry’s 300-meter distance requirements.
The company came out of stealth last week with chips it says can deliver 20 times the resolution of automotive radar while seeing through all kinds of weather and costing less than lidar. The tech provides “a superset of lidar and radar combined,” Carey says. The technology is in tests with carmakers for a slot in vehicles to be produced in 2028, he says. It would be the first such sensor to make it to market.
“Every time you unlock a chunk of the electromagnetic spectrum, you unlock a brand-new way to view the world,” Carey says.
Teradar’s system is a new architecture, says Carey, that has elements of traditional radar and a camera. The terahertz transmitters are arrays of elements that generate electronically steerable beams, while the sensors are like imaging chips in a camera. The beams scan the area, and the sensor measures the time it takes for the signals to return as well as where they return from.
Teradar’s system can steer beams of terahertz radiation with no moving parts. Teradar
From these signals, the system generates a point cloud, similar to what a lidar produces. But unlike lidar, it does not use any moving parts. Those moving parts add significantly to the cost of lidar and subject it to wear and tear from the road.
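To make the geometry concrete, here is a minimal sketch in Python of how any beam-steering, time-of-flight sensor (radar, lidar, or a terahertz imager) can turn an echo’s round-trip time and beam direction into a 3D point. The angles, timing values, and function name are illustrative assumptions, not details of Teradar’s design.

# Minimal time-of-flight geometry: range is half the round-trip distance at the
# speed of light, and the steered beam's azimuth/elevation give the direction.
# The specific values below are made up for illustration.

import math

C = 299_792_458.0  # speed of light, meters per second

def echo_to_point(azimuth_deg, elevation_deg, round_trip_s):
    """Convert one echo (beam direction plus round-trip time) to an (x, y, z) point."""
    rng = C * round_trip_s / 2.0
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = rng * math.cos(el) * math.cos(az)  # forward
    y = rng * math.cos(el) * math.sin(az)  # left/right
    z = rng * math.sin(el)                 # up/down
    return (x, y, z)

# An echo returning after 2 microseconds from a beam steered 5 degrees right and
# 1 degree up corresponds to a point roughly 300 meters ahead of the sensor.
print(echo_to_point(azimuth_deg=5.0, elevation_deg=1.0, round_trip_s=2e-6))

Repeating that conversion for every steered beam and returned echo is what builds up the point cloud.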
“It’s a sensor that [has] the simplicity of radar and the resolution of lidar,” says Carey. Whether it replaces either technology or becomes an add-on is up to carmakers, he adds. The company is currently working with five of them.
That Teradar has gotten this far is partly down to progress in silicon transistor technology—in particular, the steady increase in the maximum frequency of devices that modern foundries can supply, says Carey.
Ruonan Han, a professor of electrical engineering at MIT who specializes in terahertz electronics, agrees. These improvements have led to boosts in the efficiency of terahertz circuits, their output power, and the sensitivity of receivers. Additionally, chip packaging, which is key to efficiently transmitting the radiation, has improved. Combined with research into the design of circuits and systems, engineers can now apply terahertz radiation in a variety of applications, including autonomous driving and safety.
Nevertheless, “it’s pretty challenging to deliver the performance needed for real and safe self-driving—especially the distance,” says Han. His lab at MIT has worked on terahertz radar and other circuits for several years. At the moment it’s focused on developing lightweight, low-power terahertz sensors for robots and drones. His lab has also spun out an imaging startup, Cambridge Terahertz, targeted at using the frequency band’s advantages in security scanners, where it can see through clothes to spot hidden weapons.
Teradar, too, will explore applications outside the automotive sector. Carey points out that while terahertz frequencies do not penetrate skin, melanomas show up as a different color at those wavelengths compared to normal skin.
But for now Carey’s company is focused on cars. And in that area, there’s one question I had to ask: Could Teradar’s tech have saved Kit Kat, the feline regrettably run down by a Waymo self-driving car in San Francisco last month?
“It probably would have saved the cat,” says Carey.
This post was corrected on 21 November 2025 to make the conditions of a car accident clearer.