
How the Rubin Observatory Will Reinvent Astronomy

2025-06-23 12:01:02



Night is falling on Cerro Pachón.


Stray clouds reflect the last few rays of golden light as the sun dips below the horizon. I focus my camera across the summit to the westernmost peak of the mountain. Silhouetted within a dying blaze of red and orange light looms the sphinxlike shape of the Vera C. Rubin Observatory.

“Not bad,” says William O’Mullane, the observatory’s deputy project manager, amateur photographer, and master of understatement. We watch as the sky fades through reds and purples to a deep, velvety black. It’s my first night in Chile. For O’Mullane, and hundreds of other astronomers and engineers, it’s the culmination of years of work, as the Rubin Observatory is finally ready to go “on sky.”

Rubin is unlike any telescope ever built. Its exceptionally wide field of view, extreme speed, and massive digital camera will soon begin the 10-year Legacy Survey of Space and Time (LSST) across the entire southern sky. The result will be a high-resolution movie of how our solar system, galaxy, and universe change over time, along with hundreds of petabytes of data representing billions of celestial objects that have never been seen before.

Stars begin to appear overhead, and O’Mullane and I pack up our cameras. It’s astronomical twilight, and after nearly 30 years, it’s time for Rubin to get to work.



On 23 June, the Vera C. Rubin Observatory released the first batch of images to the public. One of them, shown here, features a small section of the Virgo cluster of galaxies. Visible are two prominent spiral galaxies (lower right), three merging galaxies (upper right), several groups of distant galaxies, and many stars in the Milky Way galaxy. Created from over 10 hours of observing data, this image represents less than 2 percent of the field of view of a single Rubin image.

NSF-DOE Rubin Observatory



A second image reveals clouds of gas and dust in the Trifid and Lagoon nebulae, located several thousand light-years from Earth. It combines 678 images taken by the Rubin Observatory over just seven hours, revealing faint details—like nebular gas and dust—that would otherwise be invisible.

NSF-DOE Rubin Observatory


Engineering the Simonyi Survey Telescope

The top of Cerro Pachón is not a big place. Spanning about 1.5 kilometers at 2,647 meters of elevation, its three peaks are home to the Southern Astrophysical Research Telescope (SOAR), the Gemini South Telescope, and for the last decade, the Vera Rubin Observatory construction site. An hour’s flight north of the Chilean capital of Santiago, these foothills of the Andes offer uniquely stable weather. The Humboldt Current flows just offshore, cooling the surface temperature of the Pacific Ocean enough to minimize atmospheric moisture, resulting in some of the best “seeing,” as astronomers put it, in the world.


[Map: the Vera C. Rubin Observatory's location in Chile, near La Serena and Santiago.]


It’s a complicated but exciting time to be visiting. It’s mid-April of 2025, and I’ve arrived just a few days before “first photon,” when light from the night sky will travel through the completed telescope and into its camera for the first time. In the control room on the second floor, engineers and astronomers make plans for the evening’s tests. O’Mullane and I head up into a high bay that contains the silvering chamber for the telescope’s mirrors and a clean room for the camera and its filters. Increasingly exhausting flights of stairs lead to the massive pier on which the telescope sits, and then up again into the dome.

I suddenly feel very, very small. The Simonyi Survey Telescope towers above us—350 tonnes of steel and glass, nestled within the 30-meter-wide, 650-tonne dome. One final flight of stairs and we’re standing on the telescope platform. In its parked position, the telescope is pointed at the horizon, meaning that it’s looking straight at me as I step in front of it and peer inside.


[Image: the observatory under a starry night sky atop the rocky summit.]


The telescope’s enormous 8.4-meter primary mirror is so flawlessly reflective that it’s essentially invisible. Made of a single piece of low-expansion borosilicate glass covered in a 120-nanometer-thick layer of pure silver, the huge mirror acts as two different mirrors, with a more pronounced curvature toward the center. Standing this close means that different reflections of the mirrors, the camera, and the structure of the telescope all clash with one another in a way that shifts every time I move. I feel like if I can somehow look at it in just the right way, it will all make sense. But I can’t, and it doesn’t.


[Diagram: the telescope’s mirrors, lenses, filters, and camera components.]

I’m rescued from madness by O’Mullane snapping photos next to me. “Why?” I ask him. “You see this every day, right?”

“This has never been seen before,” he tells me. “It’s the first time, ever, that the lens cover has been off the camera since it’s been on the telescope.” Indeed, deep inside the nested reflections I can see a blue circle, the r-band filter within the camera itself. As of today, it’s ready to capture the universe.


[Images: the telescope’s mirrors and camera nested within its steel frame; a close-up of the telescope structure; the telescope inside the observatory dome against a starry sky.]


Rubin’s Wide View Unveils the Universe

Back down in the control room, I find director of construction Željko Ivezić. He’s just come up from the summit hotel, which has several dozen rooms for lucky visitors like myself, plus a few even luckier staff members. The rest of the staff commutes daily from the coastal town of La Serena, a 4-hour round trip.

To me, the summit hotel seems luxurious for lodgings at the top of a remote mountain. But Ivezić has a slightly different perspective. “The European-funded telescopes,” he grumbles, “have swimming pools at their hotels. And they serve wine with lunch! Up here, there’s no alcohol. It’s an American thing.” He’s referring to the fact that Rubin is primarily funded by the U.S. National Science Foundation and the U.S. Department of Energy’s Office of Science, which have strict safety requirements.


[Image: the telescope silhouetted under a starry sky at sunset.]


Originally, Rubin was intended to be a dark-matter survey telescope, to search for the 85 percent of the mass of the universe that we know exists but can’t identify. In the 1970s, astronomer Vera C. Rubin pioneered a spectroscopic method to measure the speed at which stars orbit around the centers of their galaxies, revealing motion that could be explained only by the presence of a halo of invisible mass at least five times the apparent mass of the galaxies themselves. Dark matter can warp the space around it enough that galaxies act as lenses, bending light from even more distant galaxies as it passes around them. It’s this gravitational lensing that the Rubin observatory was designed to detect on a massive scale. But once astronomers considered what else might be possible with a survey telescope that combined enormous light-collecting ability with a wide field of view, Rubin’s science mission rapidly expanded beyond dark matter.

Trading the ability to focus on individual objects for a wide field of view that can see tens of thousands of objects at once provides a critical perspective for understanding our universe, says Ivezić. Rubin will complement other observatories like the Hubble Space Telescope and the James Webb Space Telescope. Hubble’s Wide Field Camera 3 and Webb’s Near Infrared Camera have fields of view of less than 0.05 square degrees each, equivalent to just a few percent of the size of a full moon. The upcoming Nancy Grace Roman Space Telescope will see a bit more, with a field of view of about one full moon. Rubin, by contrast, can image 9.6 square degrees at a time—about 45 full moons’ worth of sky.
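For a sense of those proportions, here is the back-of-envelope arithmetic in Python; the Moon’s roughly 0.52-degree apparent diameter is an assumed round figure, not a number from the article.

import math

# Back-of-envelope field-of-view comparison. The Moon's ~0.52-degree
# apparent diameter is an assumed round number, not from the observatory.
moon_area = math.pi * (0.52 / 2) ** 2        # ~0.21 square degrees

fields_sq_deg = {
    "Hubble WFC3 / Webb NIRCam (each, upper bound)": 0.05,
    "Roman Space Telescope (approx. one full moon)": round(moon_area, 2),
    "Rubin LSST camera": 9.6,
}
for name, area in fields_sq_deg.items():
    print(f"{name}: {area} sq deg = {area / moon_area:.1f} full moons")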

That ultrawide view offers essential context, Ivezić explains. “My wife is American, but I’m from Croatia,” he says. “Whenever we go to Croatia, she meets many people. I asked her, ‘Did you learn more about Croatia by meeting many people very superficially, or because you know me very well?’ And she said, ‘You need both. I learn a lot from you, but you could be a weirdo, so I need a control sample.’ ” Rubin is providing that control sample, so that astronomers know just how weird whatever they’re looking at in more detail might be.



Every night, the telescope will take a thousand images, one every 34 seconds. After three or four nights, it’ll have the entire southern sky covered, and then it’ll start all over again. After a decade, Rubin will have taken more than 2 million images, generated 500 petabytes of data, and visited every object it can see at least 825 times. In addition to identifying an estimated 6 million bodies in our solar system, 17 billion stars in our galaxy, and 20 billion galaxies in our universe, Rubin’s rapid cadence means that it will be able to delve into the time domain, tracking how the entire southern sky changes on an almost daily basis.
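Those figures hang together under some quick arithmetic, sketched below; attributing the shortfall between 1,000 images a night and the 10-year average to weather and maintenance downtime is my inference, not a figure from the observatory.

# Quick arithmetic on the survey cadence, using figures from the article.
images_per_night = 1000
cadence_s = 34
print(f"Nightly observing: {images_per_night * cadence_s / 3600:.1f} hours")

# 2 million images over a 10-year survey averages out below 1,000 per night;
# the gap presumably reflects weather and maintenance downtime (my inference).
print(f"10-year average: {2_000_000 / (10 * 365):.0f} images per night")

# 825 visits at one 30-second exposure each is the effective exposure time
# a typical patch of sky accumulates by the end of the survey.
print(f"Effective exposure: {825 * 30 / 3600:.1f} hours per field")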


Cutting-Edge Technology Behind Rubin’s Speed

Achieving these science goals meant pushing the technical envelope on nearly every aspect of the observatory. But most of the design decisions were driven by the speed at which Rubin needs to move: 3.5 degrees per second, or, in the phrase most commonly used by the Rubin staff, “crazy fast.”



Crazy fast movement is why the telescope looks the way it does. The squat arrangement of the mirrors and camera centralizes as much mass as possible. Rubin’s oversize supporting pier is mostly steel rather than mostly concrete so that the movement of the telescope doesn’t twist the entire pier. And then there’s the megawatt of power required to drive this whole thing, which comes from huge banks of capacitors slung under the telescope to prevent a brownout on the summit every 30 seconds all night long.
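A rough sense of what those capacitor banks must buffer, sketched below; the slew duration is an assumption, and only the megawatt figure comes from the observatory.

# Rough sizing of the energy buffered per slew. The slew duration is an
# assumed value; only the megawatt figure comes from the article.
peak_power_w = 1.0e6     # ~1 MW to drive the telescope and dome
slew_duration_s = 5.0    # assumption: a few seconds per "crazy fast" slew
energy_mj = peak_power_w * slew_duration_s / 1e6
print(f"~{energy_mj:.0f} MJ drawn from capacitors per slew, "
      "instead of spiking the summit grid every 30 seconds")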

Rubin also boasts the largest digital camera ever built. The size of a small car and weighing 2,800 kilograms, the LSST camera captures 3.2-gigapixel images through six swappable color filters ranging from near infrared to near ultraviolet. The camera’s focal plane consists of 189 4K-by-4K charge-coupled devices grouped into 21 “rafts.” Every CCD is backed by 16 amplifiers that each read 1 million pixels, bringing the readout time for the entire sensor down to 2 seconds flat.
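Those camera numbers fit together neatly, as a quick sketch shows:

# Focal-plane arithmetic from the figures in the article.
ccds = 189
pixels_per_ccd = 4096 * 4096                  # "4K-by-4K"
total_gpix = ccds * pixels_per_ccd / 1e9
print(f"Focal plane: {total_gpix:.2f} gigapixels")   # ~3.17, i.e. 3.2 Gpix

amps = ccds * 16                              # 16 amplifiers per CCD
pixels_per_amp = pixels_per_ccd / 16          # ~1.05 million pixels each
rate = pixels_per_amp / 2                     # 2-second full readout
print(f"{amps} amplifiers, each reading ~{rate / 1e6:.2f} Mpix/s in parallel")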


[Image: a technician examines the large telescope camera in a clean room.]


Astronomy in the Time Domain

As humans with tiny eyeballs and short lifespans who are more or less stranded on Earth, we have only the faintest idea of how dynamic our universe is. To us, the night sky seems mostly static and also mostly empty. This is emphatically not the case.

In 1995, the Hubble Space Telescope pointed at a small and deliberately unremarkable part of the sky for a cumulative six days. The resulting image, called the Hubble Deep Field, revealed about 3,000 distant galaxies in an area that represented just one twenty-four-millionth of the sky. To observatories like Hubble, and now Rubin, the sky is crammed full of so many objects that it becomes a problem. As O’Mullane puts it, “There’s almost nothing not touching something.”

One of Rubin’s biggest challenges will be deblending: identifying and then separating things like stars and galaxies that appear to overlap. This has to be done carefully, using images taken through different filters to estimate how much of the brightness of a given pixel comes from each object.
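The core move can be illustrated with a toy one-dimensional example: treat the blended pixels as a linear combination of known source profiles and solve for each object’s flux. The profiles and noise below are invented, and Rubin’s production deblender is far more sophisticated.

import numpy as np

# A toy 1-D deblend: two overlapping sources with known (assumed) spatial
# profiles; solve for how much flux each contributes to the blended pixels.
rng = np.random.default_rng(0)
x = np.arange(40, dtype=float)

def gaussian(center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

profile_star = gaussian(17, 3.0)
profile_galaxy = gaussian(23, 6.0)
true_flux = np.array([120.0, 80.0])

pixels = true_flux @ np.vstack([profile_star, profile_galaxy])
pixels += rng.normal(0.0, 1.0, x.size)        # pixel noise

A = np.column_stack([profile_star, profile_galaxy])
flux, *_ = np.linalg.lstsq(A, pixels, rcond=None)
print(f"Recovered fluxes: {flux.round(1)}, true: {true_flux}")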


[Diagram: exploded view of the camera, with lens, shutter, filters, and the 3.2-gigapixel CCD focal plane labeled.]


At first, Rubin won’t have this problem. At each location, the camera will capture one 30-second exposure before moving on. As Rubin returns to each location every three or four days, subsequent exposures will be combined in a process called coadding. In a coadded image, each pixel represents all of the data collected from that location in every previous image, which results in a much longer effective exposure time. The camera may record only a few photons from a distant galaxy in each individual image, but a few photons per image added together over 825 images yields much richer data. By the end of Rubin’s 10-year survey, the coadding process will generate images with as much detail as a typical Hubble image, but over the entire southern sky. A few lucky areas called “deep drilling fields” will receive even more attention, with each one getting a staggering 23,000 images or more.
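The statistical payoff is easy to simulate. In the toy sketch below, only the 825-visit count comes from the article; the per-exposure signal and noise levels are invented, and signal-to-noise grows roughly as the square root of the number of stacked images.

import numpy as np

# Toy coadd: a faint source contributing a few photons per 30-second exposure,
# buried in noise. Only the 825-visit count comes from the article.
rng = np.random.default_rng(1)
n_visits = 825
signal = 3.0                # photons per exposure from a faint galaxy (assumed)
noise = 10.0                # per-exposure background noise (assumed)

exposures = signal + rng.normal(0.0, noise, size=(n_visits, 64))
coadd = exposures.mean(axis=0)   # equal-noise case: coadding reduces to a mean

print(f"Single-visit SNR: {signal / noise:.2f}")
print(f"Coadded SNR: {coadd.mean() / coadd.std():.1f} "
      f"(expected ~{signal / noise * n_visits ** 0.5:.1f})")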

Rubin will add every object that it detects to its catalog, and over time, the catalog will provide a baseline of the night sky, which the observatory can then use to identify changes. Some of these changes will be movement—Rubin may see an object in one place, and then spot it in a different place some time later, which is how objects like near-Earth asteroids will be detected. But the vast majority of the changes will be in brightness rather than movement.


[Diagram: Rubin’s circular field of view overlaid on the night sky, with the full moon for scale.]


Every image that Rubin collects will be compared with a baseline image, and any change will automatically generate a software alert within 60 seconds of when the image was taken. Rubin’s wide field of view means that there will be a lot of these alerts—on the order of 10,000 per image, or 10 million alerts per night. Other automated systems will manage the alerts. Called alert brokers, they ingest the alert streams and filter them for the scientific community. If you’re an astronomer interested in Type Ia supernovae, for example, you can subscribe to an alert broker and set up a filter so that you’ll get notified when Rubin spots one.
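Community brokers such as ALeRCE, ANTARES, Fink, and Lasair will sit between Rubin and subscribers. Below is a minimal sketch of what a subscriber-side filter might look like; the alert field names are invented, not the real alert schema.

# A hypothetical subscriber filter for a Rubin alert stream. Field names
# are illustrative; real brokers define their own schemas and query APIs.
def is_young_sn_ia(alert: dict) -> bool:
    return (
        alert.get("classification") == "SN Ia"
        and alert.get("probability", 0.0) > 0.8
        and alert.get("days_since_first_detection", 1e9) < 5
    )

stream = [
    {"id": 1, "classification": "SN Ia", "probability": 0.93,
     "days_since_first_detection": 2},
    {"id": 2, "classification": "variable star", "probability": 0.99,
     "days_since_first_detection": 41},
]
for alert in filter(is_young_sn_ia, stream):
    print(f"Notify subscriber about alert {alert['id']}")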

Many of these alerts will be triggered by variable stars, which cyclically change in brightness. Rubin is also expected to identify somewhere between 3 million and 4 million supernovae—that works out to over a thousand new supernovae for every night of observing. And the rest of the alerts? Nobody knows for sure, and that’s why the alerts have to go out so quickly, so that other telescopes can react to make deeper observations of what Rubin finds.


Managing Rubin’s Vast Data Output

After the data leaves Rubin’s camera, most of the processing will take place at the SLAC National Accelerator Laboratory in Menlo Park, Calif., over 9,000 kilometers from Cerro Pachón. It takes less than 10 seconds for an image to travel from the focal plane of the camera to SLAC, thanks to a 600-gigabit fiber connection from the summit to La Serena, and from there, a dedicated 100-gigabit line and a backup 40-gigabit line that connect to the Department of Energy’s science network in the United States. The 20 terabytes of data that Rubin will produce nightly makes this bandwidth necessary. “There’s a new image every 34 seconds,” O’Mullane tells me. “If I can’t deal with it fast enough, I start to get behind. So everything has to happen on the cadence of half a minute if I want to keep up with the data flow.”
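The link budget works out with margin to spare. In the sketch below, the two-bytes-per-pixel raw image size is an assumption; the link speed and the 20-terabyte nightly total come from the article.

# Why a dedicated 100-gigabit line keeps pace with a 34-second cadence.
pixels = 3.2e9
image_bytes = pixels * 2                  # assumption: ~2 bytes per pixel, raw
link_bps = 100e9                          # dedicated line to the U.S.

per_image_s = image_bytes * 8 / link_bps
print(f"One ~{image_bytes / 1e9:.1f} GB image: {per_image_s:.2f} s on the wire")

nightly_bytes = 20e12                     # "20 terabytes ... nightly"
print(f"Nightly total: {nightly_bytes * 8 / link_bps / 60:.0f} "
      "minutes of cumulative link time")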

At SLAC, each image will be calibrated and cleaned up, including the removal of satellite trails. Rubin will see a lot of satellites, but since the satellites are unlikely to appear in the same place in every image, the impact on the data is expected to be minimal when the images are coadded. The processed image is compared with a baseline image and any alerts are sent out, by which time processing of the next image has already begun.


[Image: thick cable bundles beneath the telescope, surrounded by blue scaffolding.]


As Rubin’s catalog of objects grows, astronomers will be able to query it in all kinds of useful ways. Want every image of a particular patch of sky? No problem. All the galaxies of a certain shape? A little trickier, but sure. Looking for 10,000 objects that are similar in some dimension to 10,000 other objects? That might take a while, but it’s still possible. Astronomers can even run their own code on the raw data.
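Such queries typically flow through standard Virtual Observatory interfaces. Here is a hypothetical example using pyvo, a widely used astronomy client; the endpoint URL, table, and column names are placeholders rather than Rubin’s real schema.

# Hypothetical catalog query using pyvo, a standard astronomy TAP client.
# The service URL, table, and columns are illustrative placeholders.
import pyvo

service = pyvo.dal.TAPService("https://example.org/rubin/tap")
query = """
    SELECT objectId, ra, dec, g_mag, r_mag
    FROM catalog.objects
    WHERE CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', 150.0, 2.2, 0.1)) = 1
      AND g_mag - r_mag > 0.7
"""
results = service.search(query)          # runs as an ADQL query on the server
print(results.to_table()[:10])           # first ten matching objects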

“Pretty much everyone in the astronomy community wants something from Rubin,” O’Mullane explains, “and so they want to make sure that we’re treating the data the right way. All of our code is public. It’s on GitHub. You can see what we’re doing, and if you’ve got a better solution, we’ll take it.”

One better solution may involve AI. “I think as a community we’re struggling with how we do this,” says O’Mullane. “But it’s probably something we ought to do—curating the data in such a way that it’s consumable by machine learning, providing foundation models, that sort of thing.”

The data management system is arguably as much of a critical component of the Rubin observatory as the telescope itself. While most telescopes make targeted observations that get distributed to only a few astronomers at a time, Rubin will make its data available to everyone within just a few days, which is a completely different way of doing astronomy. “We’ve essentially promised that we will take every image of everything that everyone has ever wanted to see,” explains Kevin Reil, Rubin observatory scientist. “If there’s data to be collected, we will try to collect it. And if you’re an astronomer somewhere, and you want an image of something, within three or four days we’ll give you one. It’s a colossal challenge to deliver something on this scale.”


[Animation: the camera’s automated filter-changing mechanism, alongside an image showing how each filter affects exposures of stars and galaxies.]


The more time I spend on the summit, the more I start to think that the science that we know Rubin will accomplish may be the least interesting part of its mission. And despite their best efforts, I get the sense that everyone I talk to is wildly understating the impact it will have on astronomy. The sheer volume of objects, the time domain, the 10 years of coadded data—what new science will all of that reveal? Astronomers have no idea, because we’ve never looked at the universe in this way before. To me, that’s the most fascinating part of what’s about to happen.

Reil agrees. “You’ve been here,” he says. “You’ve seen what we’re doing. It’s a paradigm shift, a whole new way of doing things. It’s still a telescope and a camera, but we’re changing the world of astronomy. I don’t know how to capture—I mean, it’s the people, the intensity, the awesomeness of it. I want the world to understand the beauty of it all.”


The Intersection of Science and Engineering

Because nobody has built an observatory like Rubin before, there are a lot of things that aren’t working exactly as they should, and a few things that aren’t working at all. The most obvious of these is the dome. The capacitors that drive it blew a fuse the day before I arrived, and the electricians are off the summit for the weekend. The dome shutter can’t open either. Everyone I talk to takes this sort of thing in stride—they have to, because they’ve been troubleshooting issues like these for years.

I sit down with Yousuke Utsumi, a camera operations scientist who exudes the mixture of excitement and exhaustion that I’m getting used to seeing in the younger staff. “Today is amazingly quiet,” he tells me. “I’m happy about that. But I’m also really tired. I just want to sleep.”

Just yesterday, Utsumi says, they managed to finally solve a problem that the camera team had been struggling with for weeks—an intermittent fault in the camera cooling system that only seemed to happen when the telescope was moving. This was potentially a very serious problem, and Utsumi’s phone would alert him every time the fault occurred, over and over again in the middle of the night. The fault was finally traced to a cable within the telescope’s structure that used pins that were slightly too small, leading to a loose connection.



Utsumi’s contract started in 2017 and was supposed to last three years, but he’s still here. “I wanted to see first photon,” he says. “I’m an astronomer. I’ve been working on this camera so that it can observe the universe. And I want to see that light, from those photons from distant galaxies.” This is something I’ve also been thinking about—those lonely photons traveling through space for billions of years, and within the coming days, a lucky few of them will land on the sensors Utsumi has been tending, and we’ll get to see them. He nods, smiling. “I don’t want to lose one, you know?”


[Image: the telescope’s interior machinery lit in blue and red.]


Rubin’s commissioning scientists have a unique role, working at the intersection of science and engineering to turn a bunch of custom parts into a functioning science instrument. Commissioning scientist Marina Pavlovic is a postdoc from Serbia with a background in the formation of supermassive black holes created by merging galaxies. “I came here last year as a volunteer,” she tells me. “My plan was to stay for three months, and 11 months later I’m a commissioning scientist. It’s crazy!”


[Image: technicians in clean suits handle a large metallic camera component in a laboratory.]


Pavlovic’s job is to help diagnose and troubleshoot whatever isn’t working quite right. And since most things aren’t working quite right, she’s been very busy. “I love when things need to be fixed because I am learning about the system more and more every time there’s a problem—every day is a new experience here.”

I ask her what she’ll do next, once Rubin is up and running. “If you love commissioning instruments, that is something that you can do for the rest of your life, because there are always going to be new instruments,” she says.

Before that happens, though, Pavlovic has to survive the next few weeks of going on sky. “It’s going to be so emotional. It’s going to be the beginning of a new era in astronomy, and knowing that you did it, that you made it happen, at least a tiny percent of it, that will be a priceless moment.”

“I had to learn how to calm down to do this job,” she admits, “because sometimes I get too excited about things and I cannot sleep after that. But it’s okay. I started doing yoga, and it’s working.”


From First Photon to First Light

My stay on the summit comes to an end on 14 April, just a day before first photon, so as soon as I get home I check in with some of the engineers and astronomers that I met to see how things went. Guillem Megias Homar manages the active optics system—232 actuators that flex the surfaces of the telescope’s three mirrors a few micrometers at a time to bring the image into perfect focus. Currently working on his Ph.D., he was born in 1997, one year after the Rubin project started.
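A standard way to run such a correction loop is through an influence matrix recording how each actuator deforms the optics. The toy sketch below invents the matrix and the measured error, keeping only the 232-actuator count from the article.

import numpy as np

# Toy active-optics correction: solve for actuator commands that cancel a
# measured wavefront error. The influence matrix here is random, purely for
# illustration; a real system uses calibrated optical response data.
rng = np.random.default_rng(2)
n_modes, n_actuators = 50, 232           # 232 actuators, per the article

influence = rng.normal(size=(n_modes, n_actuators))
wavefront_error = rng.normal(size=n_modes)

commands, *_ = np.linalg.lstsq(influence, -wavefront_error, rcond=None)
residual = wavefront_error + influence @ commands
print(f"Wavefront residual after correction: {np.linalg.norm(residual):.2e}")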

First photon, for him, went like this: “I was in the control room, sitting next to the camera team. We have a microphone on the camera, so that we can hear when the shutter is moving. And we hear the first click. And then all of a sudden, the image shows up on the screens in the control room, and it was just an explosion of emotions. All that we have been fighting for is finally a reality. We are on sky!” There were toasts (with sparkling apple juice, of course), and enough speeches that Megias Homar started to get impatient: “I was like, when can we start working? But it was only an hour, and then everything became much more quiet.”



Another newly released image showing a small section of the Rubin Observatory’s total view of the Virgo cluster of galaxies. Visible are bright stars in the Milky Way galaxy shining in the foreground, and many distant galaxies in the background.

NSF-DOE Rubin Observatory


“It was satisfying to see that everything that we’d been building was finally working,” Victor Krabbendam, project manager for Rubin construction, tells me a few weeks later. “But some of us have been at this for so long that first photon became just one of many firsts.” Krabbendam has been with the observatory full-time for the last 21 years. “And the very moment you succeed with one thing, it’s time to be doing the next thing.”


[Image: control-room staff watch a screen and smile; one covers their mouth with their hands.]


Since first photon, Rubin has been undergoing calibrations, collecting data for the first images that it’s now sharing with the world, and preparing to scale up to begin its survey. Operations will soon become routine, the commissioning scientists will move on, and eventually, Rubin will largely run itself, with just a few people at the observatory most nights.

But for astronomers, the next 10 years will be anything but routine. “It’s going to be wildly different,” says Krabbendam. “Rubin will feed generations of scientists with trillions of data points of billions of objects. Explore the data. Harvest it. Develop your idea, see if it’s there. It’s going to be phenomenal.”


Listen to a Conversation About the Rubin Observatory

As part of an experiment with AI storytelling tools, author Evan Ackerman—who visited the Vera C. Rubin Observatory in Chile for four days this past April—fed over 14 hours of raw audio from his interviews and other reporting notes into NotebookLM, an AI-powered research assistant developed by Google. The result is a podcast-style audio experience that you can listen to here. While the script and voices are AI-generated, the conversation is grounded in Ackerman’s original reporting, and includes many details that did not appear in the article above. Ackerman reviewed and edited the audio to ensure accuracy, and there are minor corrections in the transcript. Let us know what you think of this experiment in AI narration.

Video Friday: Jet-Powered Humanoid Robot Lifts Off

2025-06-21 00:30:03



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, SOUTH KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

This is the first successful vertical takeoff of a jet-powered flying humanoid robot, developed by Artificial and Mechanical Intelligence (AMI) at Istituto Italiano di Tecnologia (IIT). The robot lifted ~50 cm off the ground while maintaining dynamic stability, thanks to advanced AI-based control systems and aerodynamic modeling.

We will have much more on this in the coming weeks!

[Nature] via [IIT]

As a first step toward our mission of deploying general-purpose robots, we are pushing the frontiers of what end-to-end AI models can achieve in the real world. We’ve been training models and evaluating their capabilities for dexterous sensorimotor policies across different embodiments, environments, and physical interactions. We’re sharing capability demonstrations on tasks stressing different aspects of manipulation: fine motor control, spatial and temporal precision, generalization across robots and settings, and robustness to external disturbances.

[Generalist AI]

Thanks, Noah!

Ground Control Robotics is introducing SCUTTLE, our newest elongate multilegged platform for mobility anywhere!

[Ground Control Robotics]

Teleoperation has been around for a while, but what hasn’t been is precise, real-time force feedback. That’s where Flexiv steps in to shake things up. Now, whether you’re across the room or across the globe, you can experience seamless, high-fidelity remote manipulation with a sense of touch.

This sort of thing usually takes some human training, for which you’d be best served by robot arms with precise, real-time force feedback. Hmm, I wonder where you’d find those...?

[Flexiv]

The 1X World Model is a data-driven simulator for humanoid robots built with a grounded understanding of physics. It allows us to predict—or “hallucinate”—the outcomes of NEO’s actions before they’re taken in the real world. Using the 1X World Model, we can instantly assess the performance of AI models—compressing development time and providing a clear benchmark for continuous improvement.

[1X]

SLAPBOT is an interactive robotic artwork by Hooman Samani and Chandler Cheng, exploring the dynamics of physical interaction, artificial agency, and power. The installation features a robotic arm fitted with a soft, inflatable hand that delivers slaps through pneumatic actuation, transforming a visceral human gesture into a programmed robotic response.

I asked, of course, whether SLAPBOT slaps people, and it does not: “Despite its provocative concept and evocative design, SLAPBOT does not make physical contact with human participants. It simulates the gesture of slapping without delivering an actual strike. The robotic arm’s movements are precisely choreographed to suggest the act, yet it maintains a safe distance.”

[SLAPBOT]

Thanks, Hooman!

Inspecting the bowels of ships is something we’d really like robots to be doing for us, please and thank you.

[Norwegian University of Science and Technology] via [GitHub]

Thanks, Kostas!

H2L Corporation has unveiled a new product called “Capsule Interface,” which transmits whole-body movements and strength, enabling new shared experiences with robots and avatars. The company also released a product introduction video depicting a synchronization never before experienced by humans.

[H2L Corp.] via [RobotStart]

How do you keep a robot safe without requiring it to look at you? Radar!

[Paper] via [IEEE Sensors Journal]

Thanks, Bram!

We propose Aerial Elephant Trunk, an aerial continuum manipulator inspired by the elephant trunk, featuring a small-scale quadrotor and a dexterous, compliant tendon-driven continuum arm for versatile operation in both indoor and outdoor settings.

[Adaptive Robotics Controls Lab]

This video demonstrates a heavy weight lifting test using the ARMstrong Dex robot, focusing on a 40 kg bicep curl motion. ARMstrong Dex is a human-sized, dual-arm hydraulic robot currently under development at the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. Designed to perform tasks flexibly like a human while delivering high power output, ARMstrong Dex is capable of handling complex operations in hazardous environments.

[Korea Atomic Energy Research Institute]

Micro-robots that can inspect water pipes, diagnose cracks, and fix them autonomously—reducing leaks and avoiding expensive excavation work—have been developed by a team of engineers led by the University of Sheffield.

[University of Sheffield]

We’re growing in size, scale, and impact! We’re excited to announce the opening of our serial production facility in the San Francisco Bay Area, the very first purpose-built robotaxi assembly facility in the United States. More space means more innovation, production, and opportunities to scale our fleet.

[Zoox]

Watch multipick in action as our pickle robot rapidly identifies, picks, and places multiple boxes in a single swing of an arm.

[Pickle]

And now, this.

[Aibo]

Cargill’s Amsterdam Multiseed facility enlists Spot and Orbit to inspect machinery and perform visual checks, enhanced by all-new AI features, as part of their “Plant of the Future” program.

[Boston Dynamics]

This ICRA 2025 plenary talk is from Raffaello D’Andrea, entitled “Models are Dead, Long Live Models!”

[ICRA 2025]

Will data solve robotics and automation? Absolutely! Never! Who knows?! Let’s argue about it!

[ICRA 2025]

Making the Most of 1:1 Meetings With Your Boss

2025-06-20 03:45:10



This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!

I once had a manager at Meta who kept flip-flopping. We’d have our one-on-one meetings to align on priorities: whether I should focus on building new features or on fixing user-reported bugs.

But after a few days, our plans would suddenly change. Certain bugs would become the highest priority, especially if the order came from directors or VPs. I noticed a pattern where my manager would change his mind after speaking with a strong-willed project manager or some engineering leader up the chain.

I was left feeling confused and unsupported.

When this happens, how do you tell your manager to shape up? Is it even your responsibility to give feedback to your manager?

The 1:1 is a critical forum to share this kind of feedback. A 1:1 is a focused meeting between two people within the company, typically lasting 30 or 45 minutes. When done well, these meetings are a valuable tool for building trust and fostering career growth. In my experience, managers will have weekly or biweekly 1:1s with each of their reports. If you don’t have a regularly scheduled 1:1 with your manager, you’re missing out. Ask for one!

The effectiveness of a 1:1 depends on your preparation before the meeting. Here are a few ground rules I set with my reports and my own manager to make them as valuable as possible:

  • Write down the agenda in advance. This shows that you have put some thought into the meeting and, therefore, it shouldn’t be canceled. Keep a running doc of everything you’ve written down. It can be helpful for both you and your manager to refer back to prior discussions and action items.
  • Avoid status updates. Approach each 1:1 as a valuable opportunity to learn something or gain a new perspective. Feel free to write down status updates ahead of time, but minimize the time spent in the 1:1 just reviewing statuses. The conversation should focus more on emotions and concerns than on obvious facts.
  • Be vulnerable. One litmus test for the conversation is, “Could this have been shared in the broader team meeting?” If the answer is yes, don’t waste the valuable 1:1 time on that topic. The 1:1 should focus on the sticky human issues that inevitably come up in the workplace: losing motivation, feeling overwhelmed, or delivering difficult feedback, for example.

At Meta, I used the 1:1 time with my manager to share my concerns about the constantly shifting priorities between new features and user-reported bugs. The problem didn’t get resolved overnight, but at least he was aware of the issue. I felt heard, and we continued to monitor the situation as it improved.

What if your manager isn’t receptive to your feedback or concerns? In almost all cases, it’s not worth trying to “fix” your manager or your environment. There’s a clear power dynamic between you and your boss, and the energy spent on your manager is better spent on finding a new team or company altogether.

The 1:1 is a critical pillar for our career growth as engineers. Try out these tactics in your next 1:1 and let me know how it goes.

—Rahul

IEEE’s 5 New E-Books Provide Onramp to Engineering

Five new e-books from IEEE’s TryEngineering initiative provide an overview of topics including semiconductors, signal processing, oceanic engineering, and AI. As part of IEEE’s suite of pre-university resources, the free e-books are meant to introduce these complex technical topics to younger readers—the next generation of engineers.

Read more here.

In Dubai’s AI job market, your passport matters

More tech workers are moving to the UAE, which is now second only to the United States in attracting top AI talent, according to reporting from Rest of World. But as the country becomes an AI talent magnet, differences are emerging among workers based on where they’re from. While tech specialists from the West take top positions, engineers from developing nations often fill lower positions.

Read more here.

Record Number of IEEE Members Visit U.S. Congress to Talk Tech Policy

In this guest article, a technical program manager at Google reflects on his experience meeting with U.S. legislators this April. More than 300 IEEE representatives participated in the organization’s Congressional Visits Day to discuss federal funding, the STEM talent pipeline, and other policy issues.

Read more here.

Check Out IEEE’s Revamped Online Presence

2025-06-20 02:00:03



The newly designed IEEE website makes it easier than ever to learn about the organization and its offerings. IEEE incorporated feedback from members and site visitors to create its modern look and feel.

Throughout the site, the work of IEEE and its members is prominently highlighted to show how they are creating a better world and driving engineering forward.

“The new website is more visual, with video and other media to engage all visitors. It also showcases our global community’s commitment as a public charity advancing technology for the benefit of humanity,” says Sophia Muirhead, IEEE executive director and chief operating officer.

The website reflects IEEE’s commitment to delivering an engaging online experience that is more intuitive for its global community. The storytelling theme of the site highlights select quotes, testimonials, and member and volunteer stories from IEEE’s more than 486,000 members and 189,000 student members from 347 sections in 10 geographic regions.

Whether you’re looking for a humanitarian project to get involved in, finding an upcoming conference to attend, taking a continuing education course, or publishing a research paper, the new design makes resources easier to access.

Where to find courses, career resources, and more

The first thing you’ll see on the new site is a box with scrolling options. Power What’s Next for Tech describes what IEEE is, and it includes a link to the What We Do page, which gives an overview of the organization, including its mission, strategic plan, history, and offerings.

Using the arrows on the right side of the box, you can see the Building a Better World section, where visitors can learn about humanitarian initiatives such as IEEE MOVE and EPICS in IEEE, then Career Support and, finally, an option to join IEEE and be part of something bigger.

Scrolling down the home page, the next module, Happening Across IEEE, features upcoming conferences, the latest standards, new educational courses, ways to advance your career, and how to get involved with IEEE’s societies, councils, and communities.


The next section, the IEEE Is the Global Community of Technology Professionals module, has options to Find Your Path to learn about resources available for industry professionals, authors and researchers, students and young professionals, volunteers, new members, and retirees.

The following section, Latest Innovations, features videos and articles from publications including IEEE Spectrum and The Institute on cutting-edge technology engineers are working on, such as electronic tattoos.

Keep scrolling down and you’ll get to know IEEE members and their thoughts on what’s next for technologies such as artificial intelligence and quantum computing.

“This redesign marks a key milestone in IEEE’s digital transformation,” Muirhead says. “The use of rich media, video content, and dynamic storytelling features allows for deeper engagement with IEEE and understanding its various offerings.

“However, it is just the beginning. In the months ahead, we will continue to enhance the site with new features, updated content, and richer tools.”

A New BCI Instantly Synthesizes Speech

2025-06-20 00:00:04



By analyzing neural signals, a brain-computer interface (BCI) can now almost instantaneously synthesize the speech of a man who lost use of his voice due to a neurodegenerative disease, a new study finds.

The researchers caution it will still be a long time before such a device, which could restore speech to paralyzed patients, will find use in everyday communication. Still, the hope is this work “will lead to a pathway for improving these systems further—for example, through technology transfer to industry,” says Maitreyee Wairagkar, a project scientist at the University of California Davis’s Neuroprosthetics Lab.

A major potential application for brain-computer interfaces is restoring the ability to communicate to people who can no longer speak due to disease or injury. For instance, scientists have developed a number of BCIs that can help translate neural signals into text.

However, text alone fails to capture many key aspects of human speech, such as intonation, that help to convey meaning. In addition, text-based communication is slow, Wairagkar says.

Now, researchers have developed what they call a brain-to-voice neuroprosthesis that can decode neural activity into sounds in real time. They detailed their findings 11 June in the journal Nature.

“Losing the ability to speak due to neurological disease is devastating,” Wairagkar says. “Developing a technology that can bypass the damaged pathways of the nervous system to restore speech can have a big impact on the lives of people with speech loss.”

Neural Mapping for Speech Restoration

The new BCI mapped neural activity using four microelectrode arrays carrying a total of 256 electrodes. The scientists placed the arrays in three brain regions, chief among them the ventral precentral gyrus, which plays a key role in controlling the muscles underlying speech.

“This technology does not ‘read minds’ or ‘read inner thoughts,’” Wairagkar says. “We record from the area of the brain that controls the speech muscles. Hence, the system only produces voice when the participant voluntarily tries to speak.”

The researchers implanted the BCI in a 45-year-old volunteer with amyotrophic lateral sclerosis (ALS), the neurodegenerative disorder also known as Lou Gehrig’s disease. Although the volunteer could still generate vocal sounds, he was unable to produce intelligible speech on his own for years before the BCI.

The neuroprosthesis recorded the neural activity that resulted when the patient attempted to read sentences on a screen out loud. The scientists then trained a deep-learning AI model on this data to produce his intended speech.
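The study’s actual architecture is more involved, but the general shape of such a decoder can be sketched as a causal network mapping binned neural activity to acoustic frames; the layer sizes, feature choices, and mel-spectrogram target below are assumptions for illustration.

import torch
import torch.nn as nn

# Minimal sketch of a neural-to-speech decoder: binned activity from 256
# electrodes in, mel-spectrogram-style acoustic frames out. A separate
# vocoder (not shown) would turn those frames into audible speech.
class NeuralToSpeech(nn.Module):
    def __init__(self, channels=256, n_mels=80, hidden=512):
        super().__init__()
        self.rnn = nn.GRU(channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_mels)

    def forward(self, neural):          # neural: (batch, time, channels)
        out, _ = self.rnn(neural)       # unidirectional, so decoding is causal,
        return self.head(out)           # which low-latency streaming requires

model = NeuralToSpeech()
bins = torch.randn(1, 100, 256)         # 100 time bins of simulated activity
frames = model(bins)
print(frames.shape)                     # torch.Size([1, 100, 80])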

The researchers also trained a voice-cloning AI model on recordings made of the patient before his condition so the BCI could synthesize his pre-ALS voice. The patient reported that listening to the synthesized voice “made me feel happy, and it felt like my real voice,” the study notes.

[Video: Neuroprosthesis Reproduces a Man’s Speech. UC Davis]

In experiments, the scientists found that the BCI could detect key aspects of intended vocal intonation. They had the patient attempt to speak sets of sentences as either statements, which had no changes in pitch, or as questions, which involved rising pitches at the ends of the sentences. They also had the patient emphasize one of the seven words in the sentence “I never said she stole my money” by changing its pitch. (The sentence has seven different meanings, depending on which word is emphasized.) These tests revealed increased neural activity toward the ends of the questions and before emphasized words. In turn, this let the patient control his BCI voice enough to ask a question, emphasize specific words in a sentence, or sing three-pitch melodies.

“Not only what we say but also how we say it is equally important,” Wairagkar says. “Intonation of our speech helps us to communicate effectively.”

All in all, the new BCI could acquire neural signals and produce sounds with a delay of 25 milliseconds, enabling near-instantaneous speech synthesis, Wairagkar says. The BCI also proved flexible enough to speak made-up pseudo-words, as well as interjections such as “ahh,” “eww,” “ohh,” and “hmm.”

The resulting voice was often intelligible, but not consistently so. In tests where human listeners had to transcribe the BCI’s words, they understood what the patient said about 56 percent of the time, up from about 3 percent when he did not use the BCI.

[Image: neural recordings of the BCI participant displayed on screen. UC Davis]

“We do not claim that this system is ready to be used to speak and have conversations by someone who has lost the ability to speak,” Wairagkar says. “Rather, we have shown a proof of concept of what is possible with the current BCI technology.”

In the future, the scientists plan to improve the accuracy of the device—for instance, with more electrodes and better AI models. They also hope that BCI companies might start clinical trials incorporating this technology. “It is yet unknown whether this BCI will work with people who are fully locked in”—that is, nearly completely paralyzed, save for eye motions and blinking, Wairagkar adds.

Another interesting research direction is to study whether such speech BCIs could be useful for people with language disorders, such as aphasia. “Our current target patient population cannot speak due to muscle paralysis,” Wairagkar says. “However, their ability to produce language and cognition remains intact.” In contrast, she notes, future work might investigate restoring speech to people with damage to brain areas that produce speech, or with disabilities that have prevented them from learning to speak since childhood.

Guatemalan Engineer Ascends From Rural Roots to Ph.D.

2025-06-18 02:00:04



Although she is just now starting her career as a tech professional, Mayra Yucely Beb Caal has already overcome towering obstacles. The IEEE member sees her life as an example for other young people, demonstrating that they can succeed despite disadvantages they face due to their gender, ethnicity, language, or economic background.

Born in Cobán, the capital of Alta Verapaz in northern Guatemala, she grew up in a community far removed from the world of technology. But she attributes her success to having been steeped in the region’s cultural richness and her people’s unshakable resilience. The daughter of a single mother who was a schoolteacher, Caal says she spent her early years living with her aunts while her mother worked in distant towns for weeks at a time to provide for the family. In her community—mostly descendants of the indigenous Maya-Kekchi people—technology was rarely discussed. Pursuing a degree meant studying to become a physician, the most prestigious occupation anyone there was aware of.

No one imagined that a girl from Cobán would one day hold a doctorate in engineering or conduct cancer research in France.

On the path to her ambitious goals, Caal got a big assist from IEEE. She received a Gray scholarship, awarded by the IEEE Systems Council to students pursuing graduate studies in process control systems engineering, plant automation, or instrumentation measurement. The US $5,000 award supplemented the other scholarships that helped her study for her Ph.D.

Discovering robotics and mechatronics in high school

Caal was introduced to technology when, at age 14, she received a government scholarship to attend the Instituto Técnico de Capacitación y Productividad, a high school in Guatemala City. It was her first exposure to electronics, robotics, and mechatronics (an interdisciplinary field that combines mechanical engineering, electronics, computer science, and control systems)—subjects that weren’t taught in her local school. Caal was fascinated, but her family couldn’t afford the tuition at the private universities where she could earn a degree in those fields. That didn’t dissuade her.

Pursuing a mechatronics career despite gender barriers

She applied for a scholarship from the Gutiérrez Foundation, named for the founder of CMI, a Guatemala-based multinational company. The foundation’s scholarship covers full tuition, fees, and the cost of books for the duration of a recipient’s undergraduate studies.

In 2016 Caal earned a bachelor’s degree in mechatronics engineering at the Universidad del Valle de Guatemala, also in Guatemala City. There were few women in her class.

The job market was unwelcoming, however. Despite her credentials, employers often required five years of experience for entry-level positions and expressed a preference for male employees, she says. It took her six months to land her first job, as a mechanical maintenance supervisor near her hometown.

She held that job for six months before moving back to Guatemala City in search of better opportunities. She took a position as head of mechanical maintenance at Mayaprin, a company specializing in commercial printing services, but she wasn’t satisfied with her career trajectory.

Earning an engineering education abroad

Caal decided to return to school in 2018 to pursue a master’s degree in mechatronics and micromechatronics engineering. She received a scholarship from the Mundus Joint Master program, part of a European Commission–sponsored initiative that funds education, training, youth, and sport. Because the Mundus scholarship requires recipients to study at several universities, she took classes at schools in Europe and Africa, including École Nationale Supérieure de Mécanique et des Microtechniques, Nile University, and Universidad de Oviedo. Her studies focused on mechatronics and microelectronics, and the courses were taught in French, English, and Spanish.

The multilingual challenge was immense, she says. She recently had learned English, and French was completely new to her. Yet she persevered, driven by her goal of working on technology that could serve humanity.

She received a master’s degree from Universidad de Oviedo in 2020 and was accepted into a Ph.D. program at Université de Bourgogne Franche-Comté, in Besançon, France. Her doctoral studies were aided by the Gray scholarship.

Her research led to a full-time job last year as an R&D engineer focused on mechatronics and robotics at HyprView in Caen, France. The startup, founded in 2021, develops software to assist with medical data analysis and boost the performance of imaging tools.

Caal says she is part of a team that uses AI and automated systems to improve cancer detection. Although she has held the position for less than a year, she says she already feels she is contributing to public health through applied technology.

IEEE support and STEM mentorship

Through much of Caal’s journey, IEEE has played a critical role. As an undergraduate, she was vice president and then president of her university’s IEEE student branch. Her first international conference experience came from attending IEEE Region 9 conferences, which she says opened her eyes to the world of research, publishing, and the global engineering community.

She organized outreach efforts to local schools, conducting simple experiments to encourage girls to consider STEM careers. Her efforts were in direct opposition to longstanding gender norms in Guatemala. Caal was also an active member of the IEEE student branch at FEMTO-ST/Université de Bourgogne Franche-Comté.

Today, Caal continues to advise these student branches while advancing her career in France.

Language issues and gender bias remain obstacles: “As a young woman leading male engineers, I have repeatedly had to prove my competence in ways my male peers haven’t,” she says. But the challenges have only strengthened her resolve, she adds.

Eventually, she says, she hopes to return to Guatemala to help build a stronger research infrastructure there with sufficient career opportunities for tech professionals in industry and academia. She says she also wants to ensure that children in even the most rural, poverty-stricken schools have access to food, electricity, and the Internet.

Her mission is clear: “To use technology to serve a purpose, always aimed at improving lives.”

“I don’t want to create technology just for the sake of it,” she says. “I want it to mean something—to help solve real problems in society, like the ones I faced early on.”