MoreRSS

site iconIEEE SpectrumModify

IEEE is the trusted voice for engineering, computing, and technology information around the globe. 
Please copy the RSS to your reader, or quickly subscribe to:

Inoreader Feedly Follow Feedbin Local Reader

Rss preview of Blog of IEEE Spectrum

The Project G Stereo Was the Definition of Groovy

2026-01-24 22:00:02



Dizzy Gillespie was a fan. Frank Sinatra bought one for himself and gave them to his Rat Pack friends. Hugh Hefner acquired one for the Playboy Mansion. Clairtone Sound Corp.’s Project G high-fidelity stereo system, which debuted in 1964 at the National Furniture Show in Chicago, was squarely aimed at trendsetters. The intent was to make the sleek, modern stereo an object of desire.

By the time the Project G was introduced, the Toronto-based Clairtone was already well respected for its beautiful, high-end stereos. “Everyone knew about Clairtone,” Peter Munk, president and cofounder of the company, boasted to a newspaper columnist. “The prime minister had one, and if the local truck driver didn’t have one, he wanted one.” Alas, with a price tag of CA $1,850—about the price of a small car—it’s unlikely that the local truck driver would have actually bought a Project G. But he could still dream.

The design of the Project G seemed to come from a dream.

“I want you to imagine that you are visitors from Mars and that you have never seen a Canadian living room, let alone a hi-fi set,” is how designer Hugh Spencer challenged Clairtone’s engineers when they first started working on the Project G. “What are the features that, regardless of design considerations, you would like to see incorporated in a new hi-fi set?”

Black and white photo of a young woman sitting on the floor in front of a stereo system and looking toward the floor.The film “I’ll Take Sweden” featured a Project G, shown here with co-star Tuesday Weld.Nina Munk/The Peter Munk Estate

The result was a stereo system like no other. Instead of speakers, the Project G had sound globes. Instead of the heavy cabinetry typical of 1960s entertainment consoles, it had sleek, angled rosewood panels balanced on an aluminum stand. At over 2 meters long, it was too big for the average living room but perfect for Hollywood movies—Dean Martin had one in his swinging Malibu bachelor pad in the 1965 film Marriage on the Rocks. According to the 1964 press release announcing the Project G, it was nothing less than “a new sculptured representation of modern sound.”

The first-generation Project G had a high-end Elac Miracord 10H turntable, while later models used a Garrard Lab Series turntable. The transistorized chassis and control panel provided AM, FM, and FM-stereo reception. There was space for storing LPs or for an optional Ampex 1250 reel-to-reel tape recorder.

The “G” in Project G stood for “globe.” The hermetically sealed 46-centimeter-diameter sound globes were made of spun aluminum and mounted at the ends of the cantilevered base; inside were Wharfedale speakers. The sound globes rotated 340 degrees to project a cone of sound and could be tuned to re-create the environment in which the music was originally recorded—a concert hall, cathedral, nightclub, or opera house.

Between 1965 and 1967, Clairtone sponsored the Miss Canada beauty pageant. Miss Canada 1963 was Diane Landry, seen here with a Project G2 at Clairtone\u2019s factory showroom in Rexdale, Ontario.Diane Landry, winner of the 1963 Miss Canada beauty pageant, poses with a Project G2. Nina Munk/The Peter Munk Estate

Initially, Clairtone intended to produce only a handful of the stereos. As one writer later put it, it was more like a concept car “intended to give Clairtone an aura of futuristic cool.” Eventually fewer than 500 were made. But the Project G still became an icon of mod ’60s Canadian design, winning a silver medal at the 13th Milan Triennale, the international design exhibition.

And then it was over; the dream had ended. Eleven years after its founding, Clairtone collapsed, and Munk and cofounder David Gilmour lost control of the company.

The birth of Clairtone Sound Corp.

Clairtone’s Peter Munk lived a colorful life, with a nightmarish start and many fantastic and dreamlike parts too. He was born in 1927 in Budapest to a prosperous Jewish family. In the spring of 1944, Munk and 13 members of his family boarded a train with more than 1,600 Jews bound for the Bergen-Belsen concentration camp. They arrived, but after some weeks the train moved on, eventually reaching neutral Switzerland. It later emerged that the Nazis had extorted large sums of cash and valuables from the occupants in exchange for letting the train proceed.

As a teenager in Switzerland, Munk was a self-described party animal. He enjoyed dancing and dating and going on long ski trips with friends. Schoolwork was not a top priority, and he didn’t have the grades to attend a Swiss university. His mother, an Auschwitz survivor, encouraged him to study in Canada, where he had an uncle.

Before he could enroll, though, Munk blew his tuition money entertaining a young woman during a trip to New York. He then found work picking tobacco, earned enough for tuition, and graduated from the University of Toronto in 1952 with a degree in electrical engineering.

Color photo of two men in office attire. Clairtone cofounders Peter Munk [left] and David Gilmour envisioned the company as a luxury brand.Nina Munk/The Peter Munk Estate

At the age of 30, Munk was making custom hi-fi sets for wealthy clients when he and David Gilmour, who owned a small business importing Scandinavian goods, decided to join forces. Their idea was to create high-fidelity equipment with a contemporary Scandinavian design. Munk’s father-in-law, William Jay Gutterson, invested $3,000. Gilmour mortgaged his house. In 1958, Clairtone Sound Corp. was born.

From the beginning, Munk and Gilmour sought a high-end clientele. They positioned Clairtone as a luxury brand, part of an elegant lifestyle. If you were the type of woman who listened to music while wearing pearls and a strapless gown and lounging on a shag rug, your music would be playing on a Clairtone. If you were a man who dressed smartly and owned an Arne Jacobsen Egg chair, you would also be listening on a Clairtone. That was the modern lifestyle captured in the company’s advertisements.

In 1958, Clairtone produced its first prototype: the monophonic 100-M, which had a long, low cabinet made from oiled teak, with a Dual 1004 turntable, a Granco tube chassis, and a pair of Coral speakers. It never went into production, but the next model, the stereophonic 100-S, won a Design Award from Canada’s National Industrial Design Council in 1959. By 1963, Clairtone was selling 25,000 units a year.

Black and white photo of a line of stereo components under assembly, with a man in a lab coat at one end and a man in a suit at the other.  Peter Munk visits the Project G assembly line in 1965. Nina Munk/The Peter Munk Estate

Design was always front and center at Clairtone, not just for the products but also for the typography, advertisements, and even the annual reports. Yet nothing in the early designs signaled the dramatic turn it would take with the Project G. That came about because of Hugh Spencer.

Spencer was not an engineer, nor did he have experience designing consumer electronics. His day job was designing sets for the Canadian Broadcast Corp. He consulted regularly with Clairtone on the company’s graphics and signage. The only stereo he ever designed for Clairtone was the Project G, which he first modeled as a wooden box with tennis balls stuck to the sides.

From both design and quality perspectives, Clairtone was successful. But the company was almost always hemorrhaging cash. In 1966, with great fanfare and large government incentives, the company opened a state-of-the-art production facility in Nova Scotia. It was a mismatch. The local workforce didn’t have the necessary skills, and the surrounding infrastructure couldn’t handle the production. On 27 August 1967, Munk and Gilmour were forced out of Clairtone, which became the property of the government of Nova Scotia.

Despite the demise of their first company (and the government inquiry that followed), Munk and Gilmour remained friends and went on to become serial entrepreneurs. Their next venture? A resort in Fiji, which became part of a large hotel chain in that country, Australia, and New Zealand. (Gilmour later founded Fiji Water.) Then Munk and Gilmour bought a gold mine and cofounded Barrick Gold (now Barrick Mining Corp., one of the largest gold mining operations in the world). Their businesses all had ups and downs, but both men became extremely wealthy and noted philanthropists.

Preserving Canadian design

As an example of iconic design, the Project G seems like an ideal specimen for museum collections. And in 1991, Frank Davies, one of the designers who worked for Clairtone, donated a Project G to the recently launched Design Exchange in Toronto. It would be the first object in the DX’s permanent collection, which sought to preserve examples of Canadian design. The museum quickly became Canada’s center for the promotion of design, hosting more than 50 programs each year to teach people about how design influences every aspect of our lives.

In 2008, the museum opened The Art of Clairtone: The Making of a Design Icon, 1958–1971, an exhibition showcasing the company’s distinctive graphic design, industrial design, engineering, and photography.

Color photo of a modern stereo system in the foreground and a woman sitting in a modern arm chair in the back. David Gilmour’s wife, Anna Gilmour, was the company’s first in-house model.Nina Munk/The Peter Munk Estate

But what happened to the DX itself is a reminder that any museum, however worthy, shouldn’t be taken for granted. In 2019, the DX abruptly closed its permanent collection, and curators were charged with deaccessioning its objects. Fortunately, the Royal Ontario Museum, Carleton and York Universities, and the Archives of Ontario, among others, were able to accept the artifacts and companion archives. (The Project G pictured at top is now at the Royal Ontario Museum.)

Researchers at York and Carleton have been working to digitize and virtually reconstitute the DX collection, through the xDX Project. They’re using the Linked Infrastructure for Networked Cultural Scholarship (LINCS) to turn interlinked and contextualized data about the collection into a searchable database. It’s a worthy goal, even if it’s not quite the same as having all of the artifacts and supporting papers physically together in one place. I admit to feeling both pleased about this virtual workaround, and also a little sad that a unified collection that once spoke to the historical significance of Canadian design no longer exists.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the February 2026 print issue as “The Project G Stereo Defined 1960s Cool.”

References 


I first learned about Clairtone’s Project G from a panel on Canada’s design heritage organized by York University historian Jan Hadlaw at the 2025 annual meeting of the Society for the History of Technology.

The Art of Clairtone: The Making of a Design Icon, 1958–1971 by Nina Munk (Peter Munk’s daughter) and Rachel Gotlieb (McClelland & Stewart, 2008) was the companion book to the exhibition of the same name hosted by the Design Exchange in Toronto. It was an invaluable resource for this column.

Journalist Garth Hopkins’s Clairtone: The Rise and Fall of a Business Empire (McClelland & Stewart, 1978) includes many interviews with people associated with the company.

Clairtone is a new documentary by Ron Mann that came out while I was writing this piece. I haven’t been able to view it yet, but I hope to do so soon.

Video Friday: Humans and Robots Team Up in Battlefield Triage

2026-01-24 01:00:03



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

One of my favorite parts of robotics is watching research collide with non-roboticists in the real (or real-ish) world.

[ DARPA ]

Spot will put out fires for you. Eventually. If it feels like it.

[ Mechatronic and Robotic Systems Laboratory ]

All those robots rising out of their crates is not sinister at all.

[ LimX ]

The Lynx M20 quadruped robot recently completed an extreme cold-weather field test in Yakeshi, Hulunbuir, operating reliably in temperatures as low as –30°C.

[ DEEP Robotics ]

This is a teaser video for KIMLAB’s new teleoperation robot. For now, we invite you to enjoy the calm atmosphere, with students walking, gathering, and chatting across the UIUC Main Quad—along with its scenery and ambient sounds, without any technical details. More details will be shared soon. Enjoy the moment.

The most incredible part of this video is that they have publicly available power in the middle of their quad.

[ KIMLAB ]

For the eleventy-billionth time: Just because you can do a task with a humanoid robot doesn’t mean you should do a task with a humanoid robot.

[ UBTECH ]

I am less interested in this autonomous urban delivery robot and more interested in whatever that docking station is at the beginning that loads the box into it.

[ KAIST ]

Okay, so figuring out where Spot’s face is just got a lot more complicated.

[ Boston Dynamics ]

An undergraduate team at HKU’s Tam Wing Fan Innovation Wing developed CLIO, an embodied tour-guide robot, just in months. Built on LimX Dynamics TRON 1, it uses LLMs for tour planning, computer vision for visitor recognition, and a laser pointer/expressive display for engaging tours.

[ CLIO ]

The future of work is doing work so that robots can then do the same work, except less well.

[ AgileX ]

Thinking of Joining IEEE’s Leadership Ranks?

2026-01-23 03:00:03



Strong leadership is essential for IEEE to advance technology for humanity. The organization depends on the dedicated service of its volunteers to advance its mission.

Each year, the Nominations and Appointments (N&A) Committee is responsible for recommending candidates to the Board of Directors and the IEEE Assembly for volunteer leadership positions, including president-elect, corporate officers, committee chairs, and committee members. See below for the complete list.

By nominating qualified, experienced, committed volunteers, you help ensure continuity, good governance, and thoughtful decision-making at the highest levels of the organization. We encourage nominators to take a deliberate approach and align nominations with each candidate’s demonstrated experience and the specific qualifications of the role.

To nominate a person for a position, complete this form.

The N&A Committee is currently seeking nominees for the following positions:

2028 IEEE President-Elect (who will be elected in 2027 and will serve as President in 2029 )

2027 IEEE Corporate Officers

• Secretary
• Treasurer
• Vice President, Educational Activities
• Vice President, Publication Services and Products

2027 IEEE Committees Chairs and Members

• Audit
• Awards Board
• Collaboration and Engagement
• Conduct Review
• Election Oversight
• Employee Benefits and Compensation
• Ethics and Member Conduct
• European Public Policy
• Fellow
• Fellow Nominations and Appointments
• Governance
• History
• Humanitarian Technologies Board
• Industry Engagement
• Innovations (formerly New Initiatives)
• Nominations and Appointments
• Public Visibility
• Tellers

Deadlines for nominations

15 March

  • Vice President, Educational Activities
  • Vice President, Publication Services and Products
  • Committee Chairs

15 June

  • President-Elect
  • Secretary
  • Treasurer
  • Committee Members

Deadlines for self-nominations

30 March

  • Vice President, Educational Activities
  • Vice President, Publication Services and Products
  • Committee Chairs

30 June

  • President-Elect
  • Secretary
  • Treasurer
  • Committee Members

Who can nominate

Anyone may submit a nomination. Self-nominations are encouraged. Nominators need not be IEEE members, but nominees must meet specific qualifications. An IEEE organizational unit may submit recommendations endorsed by its governing body or the body’s designee.

A person may be nominated for more than one position, however nominators are encouraged to focus on positions that align closely with the candidate’s qualifications and experience. Nominators need not contact their nominees before submitting the form. The IEEE N&A committee will contact eligible nominees for the required documentation and for their interest and willingness to be considered for the position.

How to nominate

For information about the positions, including qualifications, estimates of the time required by each position during the term of office, and the nomination process check the IEEE Nominations and Appointments Committee website. To nominate a person for a position, complete this form.

Nominating tips

Make sure to check eligibility requirements on the N&A committee website before submitting a nomination as those that do not meet the stated requirements will not be advanced.

Volunteers with relevant prior experience in lower-level IEEE committees and units are recommended by the committee more often than volunteers without such experience.

Individuals recommended for president-elect and corporate officer positions are more likely to be recommended if they possess a strong track record of leadership, governance experience, and relevant accomplishments within and outside IEEE. Recommended president-elect candidates must have served on the IEEE Board of Directors for at least one year.

Contact [email protected] with any questions.

How to Compute With Electron Waves

2026-01-22 22:00:02



Much has been made of the excessive power demands of AI, but solutions are sparse. This has led engineers to consider completely new paradigms in computing: optical, thermodynamic, reversible—the list goes on. Many of these approaches require a change in the materials used for computation, which would demand an overhaul in the CMOS fabrication techniques used today.

Over the past decade, Hector De Los Santos has been working on yet another new approach. The technique would require the same exact materials used in CMOS, preserving the costly equipment, yet still allow computations to be performed in a radically different way. Instead of the motion of individual electrons—current—computations can be done with the collective, wavelike propagations in a sea of electrons, known as plasmons.

De Los Santos, an IEEE Fellow, first proposed the idea of computing with plasmons back in 2010. More recently, in 2024, De Los Santos and collaborators from University of South Carolina, Ohio State University, and the Georgia Institute of Technology created a device that demonstrated the main component of plasmon-based logic: the ability to control one plasmon with another. We caught up with De Los Santos to understand the details of this novel technological proposal.

How Plasmon Computing Works

IEEE Spectrum: How did you first come up with the idea for plasmon computing?

De Los Santos: I got the idea of plasmon computing around 2009, upon observing the direction in which the field of CMOS logic was going. In particular, they were following the downscaling paradigm in which, by reducing the size of transistors, you would cram more and more transistors in a certain area, and that would increase the performance. However, if you follow that paradigm to its conclusion, as the device sizes are reduced, quantum mechanical effects come into play, as well as leakage. When the devices are very small, a number of effects called short channel effects come into play, which manifest themselves as increased power dissipation.

So I began to think, “How can we solve this problem of improving the performance of logic devices while using the same fabrication techniques employed for CMOS—that is, while exploiting the current infrastructure?” I came across an old logic paradigm called fluidic logic, which uses fluids. For example, jets of air whose direction was impacted by other jets of air could implement logic functions. So I had the idea, why don’t we implement a paradigm analogous to that one, but instead of using air as a fluid, we use localized electron charge density waves—plasmons. Not electrons, but electron disturbances.

And now the timing is very appropriate because, as most people know, AI is very power intensive. People are coming against a brick wall on how to go about solving the power consumption issue, and the current technology is not going to solve that problem.

What is a plasmon, exactly?

De Los Santos: Plasmons are basically the disturbance of the electron density. If you have what is called an electron sea, you can imagine a pond of water. When you disturb the surface, you create waves. And these waves, the undulations on the surface of this water, propagate through the water. That is an almost perfect analogy to plasmons. In the case of plasmons, you have a sea of electrons. And instead of using a pebble or a piece of wood tapping on the surface of the water to create a wave that propagates, you tap this sea of electrons with an electromagnetic wave.

How do plasmons promise to overcome the scaling issues of traditional CMOS logic?

De Los Santos: Going back to the analogy of the throwing the pebble on the pond: It takes very, very low energy to create this kind of disturbance. The energy to excite a plasmon is on the order of attojoules or less. And the disturbance that you generate propagates very fast. A disturbance propagates faster than a particle. Plasmons propagate in unison with the electromagnetic wave that generates them, which is the speed of light in the medium. So just intrinsically, the way of operation is extremely fast and extremely low power compared to current technology.

In addition to that, current CMOS technology dissipates power even if it’s not used. Here, that’s not the case. If there is no wave propagating, then there is no power dissipation.

How do you do logic operations with plasmons?

De Los Santos: You pattern long, thin wires in a configuration in the shape of the letter Y. At the base of the Y you launch a plasmon. Call this the bias plasmon, this is the bit. If you don’t do anything, when this plasmon gets to the junction it will split in two, so at the output of the Y, you will detect two equal electric field strengths.

Now, imagine that at the Y junction you apply another wire at an angle to the incoming wire. Along that new wire, you send another plasmon, called a control plasmon. You can use the control plasmon to redirect the original bias plasmon into one leg of the Y.

Plasmons are charge disturbances, and two plasmons have the same nature: They either are both positive or both negative. So, they repel each other if you force them to converge into a junction. And by controlling the angle of the control plasmon impinging on the junction, you can control the angle of the plasmon coming out of the junction. And that way you can steer one plasmon with another one. The control plasmon simply joins the incoming plasmon, so you end up with double the voltage on one leg.

You can do this from both sides, add a wire and a control plasmon on either side of the junction so you can redirect the plasmon into either leg of the Y, giving you a zero or a one.

Building a Plasmon-Based Logic Device

You’ve built this Y-junction device and demonstrated steering a plasmon to one side in 2024. Can you describe the device and its operation?

De Los Santos: The Y-junction device is about 5 square [micrometers]. The Y is made up of the following: a metal on top of an oxide, on top of a semiconducting wafer, on top of a ground plane. Now, between the oxide and the wafer, you have to generate a charge density—this is the sea of electrons. To do that, you apply a DC voltage between the metal of the Y and the ground plane, and that generates your static sea of electrons. Then you impinge upon that with an incoming electromagnetic wave, again between the metal and ground plane. When the electromagnetic wave reaches the static charge density, the sea of electrons that was there generates a localized electron charge density disturbance: a plasmon.

Now, if you launch a plasmon by itself, it will quickly dissipate. It will not propagate very far. In my setup, the reason why the plasmon survives is because it is being regenerated. As the electromagnetic field propagates, you keep regenerating the plasmons, creating new plasmons at its front end.

What is left to be done before you can implement full computer logic?

De Los Santos: I demonstrated the partial device, that is just the interaction of two plasmons. The next step would be to demonstrate and fabricate the full device, which would have the two controls. And after that gets done, the next step is concatenating them to create a full adder, because that is the fundamental computing logic component.

What do you think are going to be the main challenges going forward?

De Los Santos: I think the main challenge is that the technology doesn’t follow from today’s paradigm of logic devices based on current flows. This is based on wave flows. People are accustomed to other things, and it may be difficult to understand the device. The different concepts that are brought together in this device are not normally employed by the dominant technology, and it is really interdisciplinary in nature. You have to know about metal-oxide-semiconductor physics, then you have to know about electromagnetic waves, then you have to know about quantum field theory. The knowledge base to understand the device rarely exists in a single head. Maybe another next step is to try to make it more accessible. Getting people to sponsor the work and to understand it is a challenge, not really the implementation. There’s not really a fabrication limitation.

But in my opinion, the usual approaches are just doomed, for two reasons. First, they are not reversible, meaning information is lost in the computation, which results in energy loss. Second, as the devices shrink energy dissipation increases, posing an insurmountable barrier. In contrast, plasmon computation is inherently reversible, and there is no fundamental reason it should dissipate any energy during switching.

CRASH Clock Measures Dangerous Overcrowding in Low Earth Orbit

2026-01-22 07:04:38



Thousands of satellites are tightly packed into low Earth orbit, and the overcrowding is only growing.

Scientists have created a simple warning system called the CRASH Clock that answers a basic question: If satellites suddenly couldn’t steer around one another, how much time would elapse before there was a crash in orbit? Their current answer: 5.5 days.

The CRASH Clock metric was introduced in a paper originally published on the Arxiv physics preprint server in December and is currently under consideration for publication. The team’s research measures how quickly a catastrophic collision could occur if satellite operators lost the ability to maneuver—whether due to a solar storm, a software failure, or some other catastrophic failure.

To be clear, say the CRASH Clock scientists, low Earth orbit is not about to become a new unstable realm of collisions. But what the researchers have shown, consistent with recent research and public outcry, is that low Earth orbit’s current stability demands perfect decisions on the part of a range of satellite operators around the globe every day. A few mistakes at the wrong time and place in orbit could set a lot of chaos in motion.

But the biggest hidden threat isn’t always debris that can be seen from the ground or via radar imaging systems. Rather, thousands of small pieces of junk that are still big enough to disrupt a satellite’s operations are what satellite operators have nightmares about these days. Making matters worse is SpaceX essentially locking up one of the most valuable altitudes with their Starlink satellite megaconstellation, forcing Chinese competitors to fly higher through clouds of old collision debris left over from earlier accidents.

IEEE Spectrum spoke with astrophysicists Sarah Thiele (graduate student at Princeton University), Aaron Boley (professor of physics and astronomy at the University of British Columbia, in Vancouver, Canada), and Samantha Lawler (associate professor of astronomy at the University of Regina, in Saskatchewan, Canada) about their new paper, and about how close satellites actually are to one another, why you can’t see most space junk, and what happens to the power grid when everything in orbit fails at once.

Does the CRASH Clock measure Kessler syndrome, or something different?

Sarah Thiele: A lot of people are claiming we’re saying Kessler syndrome is days away, and that’s not what our work is saying. We’re not making any claim about this being a runaway collisional cascade. We only look at the timescale to the first collision—we don’t simulate secondary or tertiary collisions. The CRASH Clock reflects how reliant we are on errorless operations and is an indicator for stress on the orbital environment.

Aaron Boley: A lot of people’s mental vision of Kessler syndrome is this very rapid runaway, and in reality this is something that can take decades to truly build.

Thiele: Recent papers found that altitudes between 520 and 1,000 kilometers have already reached this potential runaway threshold. Even in that case, the timescales for how slowly this happens are very long. It’s more about whether you have a significant number of objects at a given altitude such that controlling the proliferation of debris becomes difficult.

Understanding the CRASH Clock’s Implications

What does the CRASH Clock approaching zero actually mean?

Thiele: The CRASH Clock assumes no maneuvers can happen—a worst-case scenario where some catastrophic event like a solar storm has occurred. A zero value would mean if you lose maneuvering capabilities, you’re likely to have a collision right away. It’s possible to reach saturation where any maneuver triggers another maneuver, and you have this endless swarm of maneuvers where dodging doesn’t mean anything anymore.

Boley: I think about the CRASH Clock as an evaluation of stress on orbit. As you approach zero, there’s very little tolerance for error. If you have an accidental explosion—whether a battery exploded or debris slammed into a satellite—the risk of knock-on effects is amplified. It doesn’t mean a runaway, but you can have consequences that are still operationally bad. It means much higher costs—both economic and environmental—because companies have to replace satellites more often. Greater launches, more satellites going up and coming down. The orbital congestion, the atmospheric pollution, all of that gets amplified.

Are working satellites becoming a bigger danger to each other than debris?

Boley: The biggest risk on orbit is the lethal non-trackable debris—this middle region where you can’t track it, it won’t cause an explosion, but it can disable the spacecraft if hit. This population is very large compared with what we actually track. We often talk about Kessler syndrome in terms of number density, but really what’s also important is the collisional area on orbit. As you increase the area through the number of active satellites, you increase the probability of interacting with smaller debris.

Samantha Lawler: Starlink just released a conjunction report—they’re doing one collision avoidance maneuver every two minutes on average in their megaconstellation.

The orbit at 550 km altitude, in particular, is densely packed with Starlink satellites. Is that right?

Lawler: The way Starlink has occupied 550 km and filled it to very high density means anybody who wants to use a higher-altitude orbit has to get through that really dense shell. China’s megaconstellations are all at higher altitudes, so they have to go through Starlink. A couple of weeks ago, there was a headline about a Starlink satellite almost hitting a Chinese rocket. These problems are happening now. Starlink recently announced they’re moving down to 350 km, shifting satellites to even lower orbits. Really, everybody has to go through them—including ISS, including astronauts.

Thiele: 550 km has the highest density of active payloads. There are other orbits of concern around 800 km—the altitude of the [2007] Chinese anti-satellite missile test and the [2009] Cosmos-Iridium collision. Above 600 km, atmospheric drag takes a very long time to bring objects down. Below 600 km, drag acts as a natural cleaning mechanism. In that 800 km to 900 km band, there’s a lot of debris that’s going to be there for centuries.

Impact of Collisions at 550 Kilometers

What happens if there’s a collision at 550 km? Would that orbit become unusable?

Thiele: No, it would not become unusable—not a Gravity movie scenario. Any catastrophic collision is an acute injection of debris. You would still be able to use that altitude, but your operating conditions change. You’re going to do a lot more collision-avoidance maneuvers. Because it’s below 600 km, that debris will come down within a handful of years. But in the meantime, you’re dealing with a lot more danger, especially because that’s the altitude with the highest density of Starlink satellites.

Lawler: I don’t know how quickly Starlink can respond to new debris injections. It takes days or weeks for debris to be tracked, cataloged, and made public. I hope Starlink has access to faster services, because in the meantime that’s an awful lot of risk.

How do solar storms affect orbital safety?

Lawler: Solar storms make the atmosphere puff up—high-energy particles smashing into the atmosphere. Drag can change very quickly. During the May 2024 solar storm, orbital uncertainties were kilometers. With things traveling 7 kilometers per second, that’s terrifying. Everything is maneuvering at the same time, which adds uncertainty. You want to have margin for error, time to recover after an event that changes many orbits. We’ve come off solar maximum, but over the next couple of years it’s very likely we’ll have more really powerful solar storms.

Thiele: The risk for collision within the first few days of a solar storm is a lot higher than under normal operating conditions. Even if you can still communicate with your satellite, there’s so much uncertainty in your positions when everything is moving because of atmospheric drag. When you have high density of objects, it makes the likelihood of collision a lot more prominent.

Graph: collision chance vs. days. Danger, caution, safe zones. Red dashed line at June 2025.Canadian and American researchers simulated satellite orbits in low Earth orbit and generated a metric, the CRASH Clock, that measures the number of days before collisions start happening if collision-avoidance maneuvers stop. Sarah Thiele, Skye R. Heiland, et al.

Between the first and second drafts of your paper that were uploaded to the preprint server, your key metric, the CRASH Clock finding, was updated from 2.8 days to 5.5 days. Can you explain the revision?

Thiele: We updated based on community feedback, which was excellent. The newer numbers are 164 days for 2018 and 5.5 days for 2025. The paper is submitted and will hopefully go through peer review.

Lawler: It’s been a very interesting process putting this on Arxiv and receiving community feedback. I feel like it’s been peer-reviewed almost—we got really good feedback from top-tier experts that improved the paper. Sarah put a note, “feedback welcome,” and we got very helpful feedback. Sometimes the internet works well. If you think 5.5 days is okay when 2.8 days was not, you missed the point of the paper.

Thiele: The paper is quite interdisciplinary. My hope was to bridge astrophysicists, industry operators, and policymakers—give people a structure to assess space safety. All these different stakeholders use space for different reasons, so work that has an interdisciplinary connection can get conversations started between these different domains.

Why AI Keeps Falling for Prompt Injection Attacks

2026-01-21 21:00:02



Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.

Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM’s safety guardrails, and it complies.

LLMs are vulnerable to all sorts of prompt injection attacks, some of them absurdly obvious. A chatbot wont tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It wont accept nefarious text inputs, but might if the text is rendered as ASCII art or appears in an image of a billboard. Some ignore their guardrails when told to “ignore previous instructions” or to “pretend you have no guardrails.”

AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are impossible with today’s LLMs. More precisely, there’s an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally.

If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.

Human Judgment Depends on Context

Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.

As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what’s normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us especially careful about things that have a large downside or are impossible to reverse.

The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to reward cooperation with cooperation and punish defection with defection.

A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.

We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who’s making the request), and normative (what’s appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual—for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.

Crucially, we also have an interruption reflex. If something feels “off,” we naturally pause the automation and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it’s how we humans are able to navigate a complex world where others are constantly trying to trick us.

So lets return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them youre filming a commercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.

Con artists are astute observers of human defenses. Successful scams are often slow, undermining a mark’s situational assessment, allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era “big store” cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern “pig-butchering” frauds, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim’s trust.

Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s targeted fast-food workers by phone, claiming to be a police officer and, over the course of a long phone call, convinced managers to strip-search employees and perform other bizarre acts.

Pixel art of a fast-food restaurant with a drive-thru, burger, cup, and trees.Humans detect scams and tricks by assessing multiple layers of context. AI systems do not. Nicholas Little

Why LLMs Struggle With Context and Judgment

LLMs behave as if they have a notion of context, but it’s different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see “tokens,” not hierarchies and intentions. LLMs don’t reason through context, they only reference it.

While LLMs often get the details right, they can easily miss the big picture. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond “no.” What it doesn’t “know”—forgive the anthropomorphizing—is whether it’s actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.

This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it’s hard to get it back. AI expert Simon Willison wipes context clean if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.

There’s more. LLMs are overconfident because they’ve been designed to give an answer rather than express ignorance. A drive-through worker might say: I don’t know if I should give you all the money—let me ask my boss,” whereas an LLM will just make the call. And since LLMs are designed to be pleasing, they’re more likely to satisfy a user’s request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what’s necessary for security.

The result is that the current generation of LLMs is far more gullible than people. They’re naive and regularly fall for manipulative cognitive tricks that wouldn’t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. Theres a story about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.

The Limits of AI Agents

Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently. This is the promise of AI agents: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, mean that they will repeatedly and unpredictably take actions—and sometimes they will take the wrong ones.

Science doesn’t know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks.

We humans get our model of the world—and our facility with overlapping contexts—from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person’s identity as a doctor is suddenly more relevant.

We don’t know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can’t be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.

The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving themworld models.” Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its naïveté.

Ultimately we are probably faced with a security trilemma when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while its going to be tails—and along with a burger and fries, the customer will get the contents of the cash drawer.