2025-09-17 22:00:04
I love a good comeback story of technological innovation, struggle, failure, and redemption. The invention of the scanning capacitance microscope (SCM) has all of that.
In 1981, RCA filed a patent for the SCM on behalf of company researcher James R. Matey. The microscope was an unintentional by-product of the VideoDisc technology the company had been struggling to bring to market since the mid-1960s. RCA expected the VideoDisc to capture half of the home video market, but instead it lost out in a big way to VHS.
RCA’s James R. Matey invented the scanning capacitance microscope, which used sensors cannibalized from the company’s VideoDisc players. Hagley Museum and Library
Despite the VideoDisc’s struggles, the underlying technology held a gem: The exquisitely sensitive capacitance sensors used in the VideoDisc players were capable of measuring capacitance differences on the scale of attofarads (1 × 10⁻¹⁸ farad).
But before engineers and scientists could trust Matey’s idea, they wanted an independent evaluation to confirm the accuracy of the new microscope. Researchers at the National Institute of Standards and Technology obliged. Starting in the early 1990s, they too cannibalized capacitance sensors from old VideoDisc players and custom-built a series of SCMs, such as the one pictured at top. After NIST’s validation, microscope manufacturers commercialized the SCM and chipmakers adopted it to study integrated circuits, opening the door to the next generation of semiconductors.
But no story about the scanning capacitance microscope’s triumph would be complete without some discussion of the VideoDisc’s failure. In theory, it should have thrived: It was a thoroughly researched product that anticipated an important consumer market. Its playback fidelity was superior to over-the-air programming and to magnetic tape. And yet it bombed. Why?
The VideoDisc effort had begun in the early 1960s, when RCA asked itself, “What comes after color TV? What will be the next major consumer electronics system?” The company decided that the answer was some type of system to play prerecorded movies and TV shows through your television. RCA was far from alone in pursuing this idea. All of the home video systems under development included a storage medium—film, magnetic tape, nonmagnetic tape, and vinyl discs of various size and composition—and a device to play back the audio and video in high resolution. In addition to magnetic methods, information could be stored using electromechanical, photographic, electron-beam, or optical technologies.
RCA VideoDiscs were easily damaged by dust and fingerprints, so they were loaded into the SelectaVision player inside plastic sleeves. Hagley Museum and Library
By 1964, RCA had settled on VideoDiscs. Like a record album (which the company had pioneered), a VideoDisc was a grooved vinyl platter that used a stylus for playback. Unlike a record, the VideoDisc carried both audio and video, at a much higher density, and the stylus was electrical instead of mechanical. (The VideoDisc is sometimes confused with the LaserDisc, a home video technology of that era that used an optical laser.)
RCA called its discs Capacitance Electronic Discs. The VideoDisc player spun the 30-centimeter disc at a constant 450 rpm. A metallic stylus traced the depressions and bumps in the disc’s groove by registering differences in capacitance, similar to the way that bringing your finger into contact with a touchscreen causes a detectable change in the screen’s capacitance at that point. Solid-state circuitry unscrambled the frequency-modulated video signal encoded in the capacitance differences. These differences were on the order of femtofarads, and the video signal ran at about 910 megahertz. To get a clear picture, the VideoDisc system required very sensitive capacitance sensors to detect these tiny differences at high frequency.
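To get a feel for why such a sensitive detector was needed, here is a rough back-of-the-envelope sketch in Python. It treats the stylus as part of a simple LC resonator tuned near 910 MHz and asks how far a femtofarad-scale change in capacitance detunes it; the component values are assumptions chosen purely for illustration, not RCA’s actual circuit parameters.

```python
import math

# Assumed resonator values, for illustration only (not RCA's actual circuit).
f0 = 910e6   # resonant frequency of the pickup circuit, Hz
C0 = 2e-12   # assumed total circuit capacitance, 2 picofarads
L = 1.0 / ((2 * math.pi * f0) ** 2 * C0)   # inductance implied by f0 and C0

def resonant_freq(capacitance):
    """Resonant frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * capacitance))

delta_C = 1e-15   # a 1-femtofarad change as the stylus rides over a pit
shift = abs(resonant_freq(C0 + delta_C) - f0)

print(f"Implied inductance: {L * 1e9:.1f} nH")
print(f"Frequency shift for a 1 fF change: {shift / 1e3:.0f} kHz")
# With these assumed values, a 1 fF change detunes the circuit by a couple of
# hundred kilohertz -- tiny against 910 MHz, but within reach of the FM-style
# detection circuitry used to recover the video signal.
```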
Unfortunately, commercialization took much longer than expected. In 1972, RCA announced that its VideoDisc would debut the following year, but it didn’t materialize. An article in Popular Science in February 1977 anticipated regional sales by the end of that year. But it wasn’t until March 1981 that the RCA SelectaVision system finally hit the market. Despite heavy promotion, it sold poorly and was pulled from the shelves in 1984. In the end, RCA sank about US $500 million over 20 years to develop the VideoDisc, and it was a total flop.
What went wrong? In a word: videotape. Magnetic tape, which RCA had rejected, turned out to have greater consumer appeal. Introduced in 1976, VHS tapes were cheaper, had more titles available for purchase or rent, and, importantly, allowed owners to record their own programs.
Perhaps if the VideoDisc had launched in 1973, it might have had a chance. But the technology had other problems. Fingerprints, dust, and scratches torpedoed early designs that envisioned users removing the discs from sleeves as casually as record albums; instead, the final version kept the discs encased in a plastic shell that was then inserted into the player.
RCA spent two decades developing its home video system, but in the end the SelectaVision lost out to VHS and VCRs. Hagley Museum and Library
Another problem was running time. In 1977, VideoDiscs could hold only about 30 minutes of material per side. That rose to an hour per side by the time of product launch, but that still meant that any movie over 120 minutes would have to be spread over multiple discs. The first VHS tapes could hold 120 minutes of video (double that of its main tape competitor, Betamax). And VHS kept extending that lead: By the 1980s, VHS had long play (four hours) and extended play (six hours) versions, albeit with noticeable drops in resolution quality.
RCA forecasters also badly misread the economics of VideoDisc players. Their 1977 price estimate for a VideoDisc player was $500 (about $2,800 in today’s dollars). The first VHS players were much more expensive, ranging from $1,000 to $1,400, but by the mid-1980s, their price had dropped to $200 to $400. VHS tapes of major Hollywood films cost about $80—much more than VideoDiscs’ $10 to $18 price tag—but only diehard fans actually paid the modern equivalent of about $440 to buy a movie on videotape. For everyone else, the Hollywood studios licensed titles to third-party rental companies. Seemingly overnight, independent video shops, supermarkets, and national chains like Blockbuster were renting movies for a small fee. For a brief period, RCA VideoDiscs shared the shelves with videotapes, but usually only at independent shops and never with as many titles available.
Meanwhile, RCA struggled to sell its VideoDisc players. The company had forecast eventual annual sales of five to six million players; its first-year goal was a more modest 200,000, and yet it sold only half that number. By 1984, RCA realized the VideoDisc would never come close to 50 percent market penetration, let alone profitability, and pulled the plug.
Normally that would be the end of the story, another failed venture in consumer electronics. But back when RCA scientists first began researching the VideoDisc, there were no microscopes capable of identifying the tiny variations in the disc that encoded the audio/video signal. The bumps and depressions were less than a tenth the size of the groove itself; even the most advanced microscopes of the day couldn’t detect features that small.
A factory worker inspects an RCA VideoDisc, which encoded the audio and video signals in the disc’s groove. Hagley Museum and Library
Semiconductor performance depends on the distribution of intentionally introduced impurities, called dopants, which change the ability of the material to conduct electricity. In the early days of semiconductor production, manufacturers used ion mass spectrometry and a technique called spreading resistance to measure the dopant distribution in one dimension.
By the late 1980s, integrated circuits had become so small that the industry needed a way to measure the dopants in two dimensions. The SCM, used in conjunction with an atomic force microscope, fit the bill. When the conductive tip of the atomic force microscope made contact with a semiconductor surface, it created a small capacitance, on the order of attofarads to femtofarads, depending on the dopant concentration. The SCM measured the changes of the local capacitance and mapped the dopant distributions. But the technology was still novel and not yet commercially available, so researchers at NIST took up the task of testing it.
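As a rough illustration of why the capacitance under the tip tracks the doping at all, here is a minimal Python sketch based on the textbook one-dimensional depletion approximation. The tip radius and surface potential are assumptions picked only for illustration, and the model ignores the tip geometry, oxide layer, and bias modulation that real SCM analysis, including the NIST models, must account for.

```python
import math

# Physical constants and silicon parameters (textbook values).
q = 1.602e-19               # elementary charge, C
eps_si = 11.7 * 8.854e-12   # permittivity of silicon, F/m

# Illustrative assumptions, not NIST's actual tip or bias conditions.
tip_radius = 20e-9          # assumed tip contact radius, m
surface_potential = 0.5     # assumed band bending under the tip, V
tip_area = math.pi * tip_radius ** 2

def depletion_capacitance(dopants_per_cm3):
    """Tip-sample capacitance from the 1D depletion approximation:
    W = sqrt(2 * eps * psi / (q * N)),   C = eps * A / W."""
    n = dopants_per_cm3 * 1e6   # convert cm^-3 to m^-3
    width = math.sqrt(2 * eps_si * surface_potential / (q * n))
    return eps_si * tip_area / width

for doping in (1e16, 1e17, 1e18, 1e19):   # dopant atoms per cubic centimeter
    c = depletion_capacitance(doping)
    print(f"N = {doping:.0e} cm^-3  ->  C ~ {c * 1e18:5.1f} aF")
# Lightly doped regions deplete deeply and give sub-attofarad capacitances;
# heavily doped regions give tens of attofarads. That contrast is what the
# SCM converts into a two-dimensional map of the dopant distribution.
```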
In the early 1990s, Joseph Kopanski, Jay Marchiando, and David Berning began building a series of custom SCMs at the NIST Semiconductor Electronics Division. They did more than just reproduce Matey and Blanc’s results. They also provided the industry with models and software for extracting two-dimensional dopant distribution from the capacitance measurements.
NIST’s validation of the SCM led to the commercial production of the instruments, which in turn led to the development of more-advanced semiconductors—an industry that is orders of magnitude more important to the global economy than a consumer product like the VideoDisc would ever have been. It’s a classic tale of redemption in the history of technology: At the start of any new tech project, no one really knows what the outcome will be. Sometimes, you just have to keep going, even through abject failure, and trust that something good will emerge on the other side.
Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the October 2025 print issue as “RCA’s VideoDisc Gamble Paid Off in Chips.”
My friends at NIST first suggested the scanning capacitance microscope in their collection and pointed me to a great description of it. The Sarnoff Collection at the College of New Jersey has an equally helpful description of the rise and fall of the RCA VideoDisc.
I then went to the primary sources, starting with James R. Matey’s patents. His article with Joseph Blanc, “Scanning capacitance microscopy” (Journal of Applied Physics, 1 March 1985), offers a clearly written account of the limitations of previous microscopes and the possibilities of the SCM.
Because sources often lead to sources, footnote 1 in Matey and Blanc’s article led me to the richly informative March 1978 issue of RCA Review, which consists of 10 articles over 228 pages devoted to the VideoDisc.
Tom Howe has collected a tremendous amount of data on RCA’s Capacitance Electronic Discs on his retro website CED Magic. It is there that I learned of the February 1977 Popular Science article “Here at Last: Video-Disc Players.”
Finally, business historian Margaret B.W. Graham had RCA-sanctioned access while chronicling the making of the VideoDisc as part of a Harvard Business School study. Her book RCA and the VideoDisc: The Business of Research (Cambridge University Press, 1986) remains one of the best analyses of the failure of the project.
2025-09-17 21:00:04
On 9 September, Apple introduced its newest lineup, including the iPhone 17 series. Much of the attention went to a new ultrathin model and a bright orange color option (a shade not dissimilar to that of the IEEE Spectrum logo). The new smartphones will also ship with the latest operating system and its “Liquid Glass” software design—but the liquid in these phones goes beyond software.
The iPhone 17 Pro and iPhone 17 Pro Max contain thin, hermetically sealed chambers with a drop of water inside that cycles between liquid and gas to help dissipate heat. Known as vapor chambers, these cooling systems are becoming more common in smartphones built for sustained high performance. Some high-end Samsung Galaxy and Google Pixel models, among others, have introduced vapor-chamber cooling in the past few years. Now, Apple is following their lead.
“Cooling of smaller portables like phones must focus on spreading heat as widely as possible to the surface of the device, with particular attention to heat-generating components, like the chip,” says Kenneth Goodson, a professor of mechanical engineering at Stanford who specializes in heat transfer and energy conversion. To cool down those hot spots, the industry seems to be moving toward vapor chambers and other phase-change technology.
The standard approach to cooling smartphones uses a solid, highly conductive plate made from a material like copper to spread heat. This approach relies on having a surface where heat can spread. Sometimes, fins are added to extend that surface, but this can lead to a thicker device. Most companies, however, are intent on making thinner and thinner phones.
Phase-change technology—which has been used in laptops for decades, Goodson notes—achieves the same goal more effectively with fluid that boils and condenses to dissipate heat. These two-phase solutions include vapor chambers, like those used in the new iPhone, as well as narrow, fingerlike structures called heat pipes.
Phones have limited volume to work with, and “performance per volume is critical,” says Victor Chiriac, the CEO and cofounder of Global Cooling Technology Group, based in Phoenix. Thin and wide vapor chambers have a high heat-removal capacity and offer an effective solution. The cycle between liquid and vapor is “a powerful mechanism for absorbing heat,” he says.
Apple’s vapor chamber efficiently spreads heat across the phone’s body. Apple
In Apple’s version, a small amount of deionized water is sealed in the chamber. The water evaporates near heat sources, then condenses back into a liquid as its heat dissipates into the phone’s surrounding aluminum body. Water is often used in vapor chambers, though sometimes other materials are mixed in to prevent it from freezing and cracking the seal, Chiriac says.
As Apple, Samsung, and others push the boundaries of how thin phones can get, manufacturing vapor chambers may become a challenge. While solid materials can easily be shaved down, these chambers need to have enough space for coolant to travel through channels. The chambers have to be perfectly sealed in order to work properly, and “the thinner you make it, the less space you have for that secret sauce to do its thing,” Chiriac says.
It comes down to physics: “A big challenge in small devices like phones is that as you scale down the thickness of a vapor chamber, the fluid physics aggressively scale back their performance relative to copper and other solid heat conductors,” Goodson explains. (This is a problem that researchers, including his students, are working to address with new microstructures.) Plus, vapor chambers tend to be expensive to manufacture.
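To make the comparison concrete, here is a minimal Python sketch of lateral heat spreading through a thin plate, treating the vapor chamber as if it were a solid spreader with a much higher effective thermal conductivity. Every number is an assumption chosen for illustration, not a measurement of Apple’s hardware, and the effective-conductivity figure simply stands in for the benefit of the boiling-and-condensing cycle.

```python
import math

def spreading_resistance(k, thickness, r_hot, r_edge):
    """Radial conduction resistance (K/W) of a thin plate of conductivity k,
    from a hot spot of radius r_hot out to radius r_edge:
    R = ln(r_edge / r_hot) / (2 * pi * k * t)."""
    return math.log(r_edge / r_hot) / (2 * math.pi * k * thickness)

# Illustrative assumptions only -- not measurements of any actual phone.
thickness = 0.4e-3   # 0.4-millimeter spreader
r_chip = 5e-3        # hot-spot (chip) radius, m
r_body = 35e-3       # radius over which heat spreads, m
power = 5.0          # sustained heat load, W

k_copper = 400.0     # thermal conductivity of solid copper, W/(m*K)
k_vapor = 4000.0     # assumed *effective* conductivity of a vapor chamber

for name, k in (("copper plate ", k_copper), ("vapor chamber", k_vapor)):
    r = spreading_resistance(k, thickness, r_chip, r_body)
    print(f"{name}: {r:4.2f} K/W  ->  {power * r:4.1f} K rise across the spreader")
# With these assumptions, the solid copper plate adds about 2 K per watt of
# lateral spreading resistance, while the vapor chamber behaves like a far
# more conductive plate. As Goodson notes, that advantage shrinks as the
# chamber -- and the fluid inside it -- is made thinner.
```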
Still, Apple and other companies have decided to invest in this technology for their most powerful phone models. Goodson suspects part of that decision is to leverage the “wow” factor. But, he says, “with time this approach will likely become an industry standard.”
2025-09-17 02:00:04
Carolina Araújo Moreira Delci has long dreamed of transforming the world around her. As a child, she says, she was fascinated by how bridges, dams, and skyscrapers can reshape urban landscapes.
“Even back then, I could see the power that engineering had to touch lives at scale,” Delci says. “I wanted to be part of that.”
Employer:
Salum Construções, in Belo Horizonte, Brazil
Job title:
Civil engineer
Member grade:
Senior member
Alma maters:
Centro Universitário de Belo Horizonte in Brazil;
Pontifícia Universidade Católica de Minas Gerais in Belo Horizonte
Today she is making her mark through multiple megaprojects. As a civil engineer at Salum Construções, in Belo Horizonte, Brazil, she leads high-impact infrastructure developments.
Salum specializes in road and railway construction, drainage systems, containment structures, and decharacterization of iron ore tailings dams—the process of redesigning the massive earthen structures built to store the huge amounts of waste that remain after iron ore is processed. Its projects involve complex challenges that require precision, agility, and foresight.
As Delci sees it, she’s not just building infrastructure; she’s building a smarter, safer future.
She was elevated this year to IEEE senior member, a professional milestone that acknowledges the impact she has had on her field.
“Becoming a senior member,” she says, “reaffirmed that the path I chose—to blend technology with infrastructure—can truly drive meaningful change.”
Delci was born in Itabirito, a small Brazilian town nestled in the mountains. Her father, a furniture store owner, and her mother, an artisan, instilled in her a love for learning and allowed her the freedom to dream big.
In 2012 she enrolled at Centro Universitário de Belo Horizonte (UniBH) to study civil engineering. Delci chose what she calls a “sandwich bachelor’s program”—an arrangement in which students begin their studies at their chosen local university, complete part of their coursework at a university abroad, and then return to their home institution to finish the degree.
In 2014 she was awarded a full scholarship by Brazil’s federal Science Without Borders grant program. It covered tuition, housing, health insurance, learning materials, and airfare.
“I believed that international academic experience would broaden my perspective and greatly enhance my education,” Delci says. But, she adds, she had to learn a new language to continue her studies. First she took intensive English courses at Eötvös József College, in Baja, Hungary. She then completed two semesters of civil engineering courses, taught in English, at the Budapest University of Technology and Economics. From May to July 2015, she interned at Budapest Waterworks, a city utility. She worked in the three main departments of the company: engineering, project management, and water treatment.
Delci returned to UniBH in the second semester of 2015 and earned a bachelor’s degree in civil engineering in 2017. That same year, she earned a technical diploma in civil construction process operations from the Federação das Indústrias do Estado de Minas Gerais, a regional industry association based in Belo Horizonte. In 2021 she earned a postgraduate degree in project management from Pontifícia Universidade Católica de Minas Gerais, also in Belo Horizonte.
Her ambition led her to get hands-on technical training early. In 2012, during her first stint at UniBH, she was an intern at Delphi Automotive Systems do Brasil, in Itabirito, Minas Gerais. The role gave her hands-on experience in infrastructure maintenance of the industrial plant, including preparing purchase requisitions and work orders, as well as monitoring modifications in the plant layout, she says.
She joined Salum Construções in 2020 as a trainee engineer. She worked on the remediation of waste rock piles at the Vale Vargem Grande complex in Minas Gerais, part of Brazil’s “Iron Quadrangle.” The site is one of many owned by Vale, the world’s leading producer of iron ore.
“I want to keep working on sustainable solutions, mentoring young engineers, and influencing policy to make the industry more environmentally responsible.”
The initiative was part of Vale’s broader efforts to minimize reliance on tailings dams—earthen structures built to contain waste byproducts of ore processing—particularly after Brazil’s Brumadinho dam disaster in 2019. More than 200 Vale employees and contractors were killed when a dam at the company’s Córrego do Feijão mine in Brumadinho collapsed, releasing a 10-meter-high tidal wave containing more than 11 million cubic meters of toxic ore waste.
Assigned to the waste rock remediation project, Delci began ensuring the rock pile complied with new government environmental and safety standards implemented after the Brumadinho disaster. Among the methods she and her team tried was dry stacking, whereby fine, pulverized rock is filtered from tailings and deposited in piles rather than left in the form of sludge.
“Further studies and environmental licensing are still needed for large-scale implementation” of dry stacking, she says.
Today, she’s a civil engineer at Salum. As part of its corporate engineering team, she oversees several large-scale projects.
She uses cutting-edge technology including artificial intelligence and 4D building information modeling. BIM, a digital construction planning method, uses 3D modeling to link each building component—including earthworks, drainage systems, and pavement layers—to the construction schedule, with time as the fourth dimension. Such modeling makes the entire project more predictable by allowing teams to anticipate and mitigate risks before they affect the schedule. Her work also focuses on construction principles borrowed from lean manufacturing that emphasize reducing waste.
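As a minimal illustration of the “4D” idea of tying geometry to schedule, here is a hypothetical Python sketch. The component names, dates, and fields are invented for illustration and do not represent Salum’s projects or any commercial BIM system’s data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Component:
    """A building element linked to its slot in the construction schedule."""
    name: str                 # e.g. an earthwork cut, a culvert, a pavement layer
    start: date               # planned start of the activity that builds it
    finish: date              # planned completion
    predecessors: list = field(default_factory=list)  # must finish first

# Hypothetical example data, invented purely for illustration.
schedule = [
    Component("earthworks: cut section A", date(2025, 3, 1), date(2025, 4, 15)),
    Component("drainage: culvert A-1", date(2025, 4, 10), date(2025, 5, 5),
              ["earthworks: cut section A"]),
    Component("pavement: base layer A", date(2025, 5, 20), date(2025, 6, 30),
              ["drainage: culvert A-1"]),
]

def built_by(when, components):
    """Components finished by a given date -- the question a 4D model answers
    when you scrub the project timeline back and forth."""
    return [c.name for c in components if c.finish <= when]

print(built_by(date(2025, 5, 10), schedule))
# ['earthworks: cut section A', 'drainage: culvert A-1']
```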
Her approach and knowledge have drawn the attention of her Salum peers and the construction industry at large, she says.
“I’ve always been passionate about digital engineering and how it can help reduce costs, save time, and improve safety,” she says. “It’s not just about getting the work done. It’s about doing it smarter and more sustainably.”
She’s also helping to reshape a traditionally male-dominated field.
“Heavy civil construction didn’t intimidate me; it motivated me,” she says. “I wanted to show that engineering talent has no gender. There’s space for women to lead here, and we’re claiming it.”
Her research has been published in Revista Cientifica Sistemática, the Brazilian Journal of Development, and other journals, and she has received recognition for leading initiatives that use digital technology to improve large-scale construction. Her articles “Digital Monitoring of Heavy Equipment...” and “The Effectiveness of Last Planner System...” have been cited 66 and 56 times, respectively, according to Google Scholar, a sign that her work is being read and built upon in her field.
Delci heard about IEEE through colleagues who praised its global network of members and technical resources. She joined, she says, to expand her knowledge, collaborate with innovators, and stay ahead of technological trends.
Today she serves as a volunteer reviewer for several IEEE technical committees and senior member applications, giving back to the organization that helped elevate her career.
“IEEE showed me that engineering isn’t just local—it’s global,” she says. “It’s a community of people who believe in building something better, together.”
Asked about her long-term goals, Delci doesn’t hesitate. “I want to keep working on sustainable solutions, mentoring young engineers, and influencing policy,” she says, “to make the industry more environmentally responsible.”
Her advice to students considering engineering? “If you’re curious and want to solve real problems, engineering is for you,” she says. “Don’t let stereotypes define your path. We need all kinds of minds to build a better world.”
“Engineering is how we leave our mark on the world,” she says. “I intend to make mine count.”
2025-09-16 23:04:13
For most of us, to see a truly starry night isn’t easy. I have been writing about dark skies and light pollution for almost 20 years. And I have seen some breathtaking skies—southern Morocco at the edge of the desert so plush with stars it still seems like a dream, the Racetrack in Death Valley with stars rising in the east and dropping off the edge of the world in the west. At other times when I’ve gone to see the sky, I found too much humidity in the air, or too many clouds, or the obscuring smoke from a burning world. In the Canary Islands off the coast of Africa, the famous skies were veiled by a freak sandstorm from the Sahara.
But the main reason that city dwellers can no longer see a starry night is simply all the artificial light we waste into the sky. There’s even a scale for this—the Bortle scale, named after an amateur astronomer in New York state who grew tired of younger astronomers inviting him out to supposedly dark viewing sites only to find those sites not so dark. John E. Bortle’s scale goes from 1 to 9, from darkest (no artificial light on the ground or in the sky) to brightest (inner cities).
Most people will live the majority of their lives in level 5 or above, without the experience of a naturally dark sky. And ever-increasing numbers of people live in areas with levels of 7, 8, and 9. They may think of these bright levels as normal, what “darkness” is supposed to be.
Over the past decade, the shift from electric light to electronic lighting—in the form of light-emitting diodes—has made the problem worse. LEDs are generally brighter than traditional electric lights, often emit more blue-white light, and are more energy efficient. But too often this efficiency means we use more of them, creating more artificial light at night. A recent study estimated the growth of light pollution worldwide from 2011 to 2022 at 10 percent per year, a doubling roughly every eight years.
Despite these conditions, Bortle Scale 1 areas still exist, just not where most of us live. When I asked a friend at the U.S. National Park Service—whose Natural Sounds and Night Skies Division measures levels of darkness throughout the National Park System—where to find a dark sky, he hesitated to name anywhere in the lower 48 states. The Outback in Australia would be a good place, he said, or Chile.
And so I’m here, traveling with Pedro Sanhueza, an environmentalist and longtime dark-skies advocate who has devoted the past two decades to protecting his country’s dark skies. As we drive, Pedro tells me that in the 25 years he’s lived in La Serena, the population has almost doubled, and the lights have grown, too. The sleepy city where the Milky Way used to spend the night in everyone’s backyard is getting so big that astronomers in the Elqui Valley are beginning to worry, even as billions of dollars go into building new observatories like the Vera C. Rubin Observatory at Cerro Pachón.
This image illustrates the 9-point Bortle scale, which quantifies the impact of light pollution on the darkness of a night sky at a particular location. Star visibility increases dramatically from left (urban areas with heavy light pollution) to right (excellent dark-sky conditions). P. Horálek and M. Wallner/ESO
For 20 minutes, we creep around curves and navigate road repairs, ours the only car. Bats flicker above the gravel, a statuelike owl perches on a bare roadside limb. The mountains disappear as we climb, as does the road behind us, and we see only the world our headlights carve from the dark. Part of me wishes we could turn the headlights off too and ride beneath a sky filling with stars. But the rest of me knows this is a terrible idea.
When we finally arrive at our destination, a small private observatory with two large telescopes, I am stunned by the brightness of the sky. In a line stretched back toward La Serena, three planets—Venus, Saturn, and Jupiter—glow like passenger jets on low approach for landing. Though I’ve never seen it before, I recognize the Southern Cross immediately, its shape shining just above a mountain’s ridge to the south. The constellation Orion catches my eye, its iconic three-star belt so familiar—until I realize it’s upside down from how I see it back home. Nearby glows the red eye of Taurus the Bull, the star Aldebaran, only 65 light-years away. If you kept to highway speeds, it would take 800 million years to get there by car.
When you get to a place that is still naturally dark and see a night sky like the one our ancestors knew, what you feel is recognition…reconnecting with something you maybe didn’t even know you’d become disconnected from.
Someone points out the Magellanic Clouds, galaxies visible only from the Southern Hemisphere. They’re clouds of stars. “Yes, tens of thousands of millions,” says Pedro. The Large Magellanic Cloud alone is estimated to have 30 billion stars. And distances? The Large Magellanic Cloud lies about 160,000 light-years from Earth, the Small Magellanic Cloud about 200,000 light-years away. We use light-years—the distance that light can travel in a year—to talk about distance in space. In this case, that’s 160,000 or 200,000 multiplied by 9.5 trillion kilometers. And these are among the closest galaxies to our own Milky Way. At this, my brain begins to spin—how are we to grasp such numbers? “Even astronomers can’t truly grasp these scales,” the popular astronomer Phil Plait writes. “We work with them, and we can do math and physics with them, but our ape brains still struggle to comprehend even the distance to the moon—and the universe is two million trillion times bigger than that.”
When I hear such numbers, I can’t do much but nod. And that’s okay. To stare into the sky unable to grasp what you see, the numbers and distances bending your brain beyond its reach, is part of what makes this experience so valuable—a reminder that we are not in control of everything, our anxieties are small, our lives brief, and all the more reason to savor what we see, now and here. Rather than feeling overwhelmed, seeing these distant objects causes me to feel connected to something incomprehensibly larger than me. And from this a wonder at being alive, and the welcome thought that I get to exist in a universe where this exists, too.
As we stand watching, I tell Pedro I’ve been thinking about the idea of “scale,” how we make sense of the distances and numbers in the night sky, and even the question of what good it does to look further and further into space.
The lights of the town of Andacollo, Chile, were visibly reduced [top] during the Blackout for Our Skies event, as seen from Cerro Tololo Inter-American Observatory. After the event [above], the level of light pollution returned to near-normal levels. Photos: D. Munizaga/CTIO/NSF/AURA/NOIRLab
Earlier that day, I’d mentioned the idea of scale to Manuel Paredes, my guide at the Cerro Tololo observatory. “On the hierarchy—or scale—of needs, you have to have food and water and shelter, but life isn’t only those basic needs,” he said. “There’s also a more philosophical or spiritual or soulful desire to look out there. And with looking to the night sky, trying to understand specific things like, What is a star? What’s the sun? What is a supernova? Through these small questions, we are trying to answer these big questions, trying to get some satisfaction. I think that there’s a need to understand our position in the universe very deep in our brains and in our hearts.”
When you get to a place that is still naturally dark and see a night sky like the one our ancestors knew, what you feel is recognition. It’s the feeling of reconnecting with something you maybe didn’t even know you’d become disconnected from. It’s the way the voice of an old lover or friend rings through your body when you hear it again after some years. It’s related to the way that images of the night sky—as stunning as they may be—aren’t the same experience as seeing the sky for yourself.
This deep connection makes me recoil from the next thing I see: a long train of satellites rising from the horizon toward orbit. Before this, we had watched the slow scrawl of individual satellites, slight incisions on the black fabric overhead. I admit that I still feel some thrill at the sight, a thrill left over from my first sight of satellites near a northern lake decades ago. But when I was a child, there were only a few hundred of these artificial lights in the sky, and their uniqueness added to the wonder of the stars. Since then, the scale of their growth has been overwhelming. The number has already swelled to around 12,000, and predictions are for 100,000 or more in a decade. Astronomers and dark-sky advocates have been sounding the alarm, warning that our night skies, especially after dusk and before dawn, when most people look to the sky, will become crowded with as many “moving stars” as static stars.
We are cutting ourselves off from so much that might inspire us, from scales of time and distance and numbers that offer a chance to feel awe and wonder at this world where we live.
Pedro doesn’t hold back about the satellites. “I hate them so much,” he says. “This is industry without control.”
Much of his present work centers on helping Chile’s mines comply with national laws limiting artificial light, a vital task in a country where mining produces more than half of the exports, including nearly a quarter of the world’s copper and 30 percent of its lithium. At night, the mines shine like small cities, Pedro explains, but he’s been able to make progress, a result he attributes to the large number of engineers the mines employ. “The mines have many well-prepared people,” he says, “and so if you explain in the proper way the technological solution, they will make the change.”
He and the rest of the astronomical community currently face an immense challenge from a proposed industrial development in the Atacama Desert, uncomfortably close to some of the most important observatories. Pedro calls the INNA project—a proposed 7,400-acre (3,000-hectare) green-hydrogen production facility—“very bad for Chile,” with the light pollution, airborne dust, and atmospheric disturbance “underestimated systematically, the projections very limited and superficial.” Nonetheless, he says, the project may be approved by a national government hesitant to turn down the tax revenue its backers insist it will bring. This seems to be the perpetual dilemma when it comes to light pollution: the promise of economic gain for some at the expense of a dark-sky heritage that belongs to everyone.
I think about my 6-year-old daughter, and the future sky any child anywhere will know. The inexorable spread of artificial light, the explosion in the number of satellites—on nearly any scale of time, these changes have happened in an instant. Beneath the Chilean night sky, the suddenness seems even more startling given the scale of astronomical time, where the faintest light we can see left its source about 13 billion years ago. The erasure of this experience robs us all, and especially future generations who will never see a night like this. That the costs of such loss are intangible makes them not a bit less important. We are cutting ourselves off from so much that might inspire us, from scales of time and distance and numbers that offer a chance to feel awe and wonder at this world where we live.
Even here, the lights from La Serena are rising on the western horizon to erase what low stars we might have seen. “We are losing 30 degrees above the horizon,” Pedro says. “We are fighting almost every day a new offender.” Here in Chile, a long strip of country where astronomers come to find the driest, clearest darkness in the world, the risk is that such darkness—and all the celestial beauty it brings—will slip away. There’s no reason it has to, and for now there is still time.
But if we do nothing or not enough, we will lose nights like these here as we have lost them in so many other places. And the evening drive from La Serena will bring us only to somewhere darker than the city we left, rather than back in time to a world every human once knew.
2025-09-15 21:00:03
Plans to end global warming hinge on driving net greenhouse gas emissions to zero (plus or minus a few gigatonnes). It’s not going well. CO2 emissions hit an all-time high last year, and for the first time average temperatures on Earth rose 1.5 °C above preindustrial levels. To limit warming to 2 °C, massive amounts of carbon dioxide will have to be sucked out of the atmosphere and locked away, according to the Intergovernmental Panel on Climate Change (IPCC).
There are old and new ways to do this. The old methods—growing more and bigger trees in temperate and tropical forests, stuffing more carbon into soils—can be cheap, but they have limits. Forests burn, die from disease, or get cut down, releasing some of the carbon they store. Microorganisms eventually break down much of what’s in the soil. Both are hard to audit and constrained by available land. Another option—pulverized minerals spread on fields—can solidify airborne carbon. But like trees and soils, these approaches require a lot of land to sink a tonne of carbon.
Enter the machines: Several companies are now deploying high-powered fans or pumps that chemically isolate CO2 from air or seawater and then pipe it to systems that inject it underground.
But direct air capture (DAC) systems consume a lot of energy, as well as reagents that currently produce toxic by-products. To make a significant dent in global warming, all known removal methods—both conventional and novel—will probably have to scale up until their unwanted consequences limit further expansion.
What would it take to scale DAC to many billions of tonnes a year? Let’s take a look.
Geologists have identified ample reservoirs that could hold many trillions of tonnes of injected CO2 underground for centuries. Around 51 megatonnes of CO2 are already stored each year, and announced plans would scale that up sevenfold over the next decade. DAC operations contribute a minuscule amount to that total; essentially all carbon capture and storage operations today inject CO2 recovered from fossil-fuel production or use, not from the atmosphere. But the same kind of injection infrastructure could be used for DAC as it ramps up and fossil-fuel use declines.
To hold warming below 2 °C, a total of 525 to 755 gigatonnes of carbon dioxide will need to be removed from the atmosphere by 2100, according to a 2018 review study by an international research group led by the Mercator Research Institute on Global Commons and Climate Change. That total will increase if global emissions do not start falling quickly.
From 2019 to 2023, just 9 Gt of CO2 was removed, 99.9 percent through managed forestry.
All novel methods, such as DAC, biochar, and enhanced rock weathering, remove less than 2 million tonnes a year, according to the 2024 State of Carbon Dioxide Removal study. This accounted for only about 0.004 Gt of CO2 from 2019 to 2023. Of that amount, just 0.00001 Gt was removed by DAC.
IPCC scenarios that aim to limit warming to 2 °C depend on DAC and other novel removal methods ramping up fast in the decades ahead to reach 6 to 12 Gt of CO2 each year by century’s end—enough to offset residual greenhouse emissions from industry and agriculture. That’s on top of 2 to 5 Gt removed annually by soils and forests. CO2 capture and injection systems are so energy intensive that some DAC facilities have reportedly struggled to offset their own carbon footprints. Researchers in Saudi Arabia recently estimated that the world would have to make an additional 4.4 terawatts of carbon-free electricity and heat to sustain 10 Gt of CO2 per year of DAC—significantly more than all clean energy consumed in 2024.
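A quick back-of-the-envelope check in Python shows what those figures imply per tonne of CO2. The only added assumption is that the 4.4 terawatts of supply runs around the clock; everything else comes from the numbers above.

```python
# Figures from the text: ~4.4 TW of carbon-free power to sustain 10 Gt/yr of DAC.
power_tw = 4.4               # terawatts of continuous electricity and heat
capture_gt_per_year = 10.0   # gigatonnes of CO2 removed per year

hours_per_year = 8760        # assume the supply runs around the clock
energy_twh = power_tw * hours_per_year
energy_per_tonne_mwh = energy_twh * 1e6 / (capture_gt_per_year * 1e9)

print(f"Annual energy: about {energy_twh:,.0f} TWh")
print(f"Implied energy per tonne of CO2: about {energy_per_tonne_mwh:.1f} MWh")
# Roughly 38,500 TWh per year, or about 3.9 MWh for every tonne captured and
# stored -- which is why the researchers put the requirement above all of the
# clean energy the world consumed in 2024.
```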
2025-09-14 21:00:02
There are plenty of labs working on solutions to Kessler Syndrome, the scenario in which there’s so much debris in low Earth orbit that spacecraft can no longer reach or operate in it without being struck by hypervelocity fragments of defunct equipment. While we haven’t yet reached the point where we’ve lost access to space, that day will come if we don’t do something about it. A new paper from Kazunori Takahashi of Tohoku University, in Japan, looks at a novel solution: It uses a type of magnetic field typically seen in fusion reactors to decelerate debris with a plasma beam, while balancing the recoil with an equal and opposite thrust on the other side.
Researchers have been working on two main categories of systems for the type of deorbiting work that might save us from Kessler Syndrome—contact and noncontact. Contact systems physically make contact with the debris, such as by a net or a grappling hook, and slow the debris to a point where it can deorbit safely. This method faces the challenge that most debris is rotating uncontrollably, and could potentially destroy the satellite trying to make contact with it if it moves unexpectedly—adding to the problem rather than solving it.
Therefore, noncontact forms are in the ascendancy, as they allow a system designed to deorbit another satellite to stay a few meters away while still affecting its speed. Typically they use systems like lasers, ion beams, or in the case of Takahashi’s invention, plasma beams, to slow their intended target to a point where it can safely deorbit. The problem with plasma-beam-based deorbiting systems is Newton’s third law—as the plasma is being directed toward the target, it is pushing the operational system away from the defunct one, essentially acting as a small plasma thruster. As the distance between the two increases, the slowing effect of the plasma decreases. To solve this problem, Takahashi and his fellow researchers presented a bidirectional thruster in a paper in 2018 that counteracted the pushing force of the plasma used to slow the target with an equal force in the opposite direction, allowing it to maintain its position.
However, in that original paper, the thrust was too weak to effectively deorbit some of the larger potential targets for such a mission. So Takahashi set about improving the design by implementing a “cusp-type” magnetic field. Such fields are typically used in fusion reactors to keep the plasma from interacting with the chamber wall. The cusp of a magnetic field is a point at which two opposing magnetic fields meet and cancel out, creating a quick change in direction in the forces they apply. Ideally, this results in a stronger plasma beam.
That is what happened when Takahashi set up an experiment comparing the new cusp system with the previous “straight-field” system that had proved too weak. He saw a 20 percent improvement in the force that the plasma thruster exerted on the target, resulting in a 17.1-millinewton push at the same power level. When he bumped the power up to 5 kilowatts (compared with the 3 kW in the original test), the decelerating force rose to about 25 mN, approaching the 30 mN expected to be needed to decelerate a 1-tonne piece of debris in 100 days. The system also had the added benefit of using argon as propellant, which is cheaper than the xenon typically used in plasma thrusters.
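A quick sanity check in Python shows what a force on that scale does to a 1-tonne object over 100 days. The comparison at the end is only a rough rule of thumb, not a full orbital-decay calculation.

```python
# How much velocity change does a ~30 mN beam impart to 1 tonne in 100 days?
force_n = 0.030    # decelerating force on the debris, newtons
mass_kg = 1000.0   # 1-tonne piece of debris
days = 100

seconds = days * 24 * 3600
acceleration = force_n / mass_kg    # about 3e-5 m/s^2
delta_v = acceleration * seconds    # meters per second

print(f"Acceleration: {acceleration:.1e} m/s^2")
print(f"Delta-v after {days} days: {delta_v:.0f} m/s")
# About 260 m/s -- on the order of the velocity change needed to drop the low
# point of an orbit several hundred to a thousand kilometers up into the
# atmosphere, which is why ~30 mN is the quoted target for a 100-day deorbit.
```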
Even with this success, there’s still a lot of work to do before this becomes a fully fleshed-out system. The experiment was run in a vacuum chamber, with the plasma thruster only 30 centimeters away from the target, compared with the distance of meters that would be required in a real orbital environment. What’s more, the debris will move relative to the deorbiting system as it slows down, so the system will have to strike a balance between maintaining its distance from a slowing object and continuing to fire the decelerating beam at it. And finally, there is the disadvantage of using twice as much propellant as solutions that don’t require thrusters firing in opposite directions—while propellant might not be much of a concern for plasma thrusters, operating one for 100 days is sure to consume a lot of it.
With all that being said, any new solution to this potentially catastrophic problem is welcome, and Takahashi will likely continue work on developing this prototype. Someday soon you might even be able to watch a dual-thrust plasma engine blasting away at a large piece of space junk.