2026-02-21 03:00:02

IEEE has enhanced its standing as a trusted, neutral authority on the role of technology in climate change mitigation and adaptation. Last year it became the first technical association to be invited to a U.N. Conference of the Parties on Climate Change.
IEEE representatives participated in several sessions at COP30, held from 11 to 20 November in Belém, Brazil. More than 56,000 delegates attended, including policymakers, technologists, and representatives from industry, finance, and development agencies.
Following the conference, IEEE helped host the selective International Symposium on Achieving a Sustainable Climate (ISASC). The International Telecommunication Union and IEEE hosted ISASC on 16 and 17 December at ITU’s headquarters in Geneva. Among the more than 100 attendees were U.N. agency representatives, diplomats, senior leaders from academia, and experts from government, industry, nongovernmental organizations, and standards development bodies.
Power and energy expert Saifur Rahman, the 2023 IEEE president, led IEEE’s delegation at both events. Rahman is the immediate past chair of IEEE’s Technology for a Sustainable Climate Matrix Organization, which coordinates, communicates, and amplifies the organization’s efforts.
IEEE first attended a COP in 2021.
“Over successive COPs, IEEE’s role has evolved from contributing individual technical sessions to being recognized as a trusted partner in climate action,” Rahman noted in a summary of COP30. “There is [a] growing demand for engineering insight, not just to discuss technologies but [also] to help design pathways for deployment, capacity-building, and long-term resilience.”
Joining Rahman at COP30 were IEEE Fellow Claudio Canizares and IEEE Member Filipe Emídio Tôrres.
Canizares is a professor of electrical and computer engineering at the University of Waterloo, in Ontario, Canada, and the executive director of the university’s sustainable energy institute.
Tôrres chairs the IEEE Centro-Norte Brasil Section (Brazil Chapter). An entrepreneur and a former professor, he is pursuing a Ph.D. in biomedical engineering at the University of Brasilia. He also represented the IEEE Young Professionals group while attending the conference.
In the Engineering for Climate Resilience: Water Planning, Energy Transition, Biodiversity session, Rahman showed a video from his 2024 visit to Shennongjia, China, where he monitored a clean energy project designed to protect endangered snub-nosed monkeys from human encroachment. The project integrates renewable energy, which helps preserve the forest and its wildlife.
Rahman also chaired a session at the Sustainable Development Goal Pavilion on balancing decarbonization efforts between industrialized and emerging economies.
Additionally, he participated in a joint panel discussion hosted by IEEE and the World Federation of Engineering Organizations on engineering strategies for climate resilience, including energy transition and biodiversity.
Rahman, Canizares, and Tôrres took part in a session on clean-tech solutions for a sustainable climate, hosted by the International Youth Nuclear Congress. The topics included fossil fuel–free electricity for communications in remote areas and affordable electricity solutions for off-grid areas.
The three also joined several panels organized by the IYNC that addressed climate resilience, career pathways in sustainability, and a mentoring program.
The IYNC hosted the Voices of Transition: Including Pathways to a Clean Energy Future session, for which Tôrres and Rahman were panelists. They discussed the need to include underrepresented and marginalized groups, which often get overlooked in projects that convert communities to renewable energy.
Rahman, Canizares, and Tôrres visited the COP Village, where they met several of the 5,000 Indigenous leaders participating in the conference and discussed potential partnerships and collaborations. Climate change has made the land where the Indigenous people live more susceptible to severe droughts and wildfires, particularly in the Amazon region.
Rahman and Tôrres took a field trip to the Federal University of Pará, where they met several faculty members and students and toured the LASSE engineering lab.
Tôrres, who says representing IEEE at COP30 was transformative, wrote a detailed report about the event.
“The experience reaffirmed my belief that engineering and technology, when combined with respect for cultural diversity, can play a critical role in shaping a more sustainable and equitable world,” he wrote. “It highlighted the importance of combining cutting-edge technological solutions with Indigenous wisdom and cultural knowledge to address the climate crisis.”
Rahman and Canizares give an overview of their COP30 experiences in an IEEE webinar.
“IEEE has a place at the table,” Rahman says in the video. “We want to showcase outside our comfort zone what IEEE can do. We go to all these global events so that our name becomes a familiar term. We are the first technical association organization ever to go to COP and talk about engineering.”
Canizares added that IEEE is now collaborating closely with the United Nations.
“This is an important interaction. And I think, moving forward, IEEE will become more relevant, particularly in the context of technology deployment,” he said. “As governments start technology deployments, they will see IEEE as a provider of solutions.”
Rahman was the general chair of the ISASC event, which focused on the delivery and deployment of clean energy. Among the presenters were IEEE members including Canizares, Paulina Chan, Surekha Deshmukh, Ashutosh Dutta, Tariq Durrani, Samina Husain, Bruce Kraemer, Bruno Meyer, Carlo Alberto Nucci, and Seizo Onoe.
Sessions were organized around six themes: energy transition, information and communication technology, financing, case studies, technical standards, and public-private collaborations. A detailed report includes the discussions, insights, and opportunities identified throughout ISASC.
Here are some key takeaways.
As part of ISASC, IEEE presented a technology assessment tool prototype. The web-based platform is designed to help policymakers, practitioners, and investors compare technology options against climate goals.
The tool can run a comparative analysis of sustainable climate technologies and integrate publicly available, expert-validated data.
The ISASC report concluded that by connecting engineering expertise with real-world deployment challenges, IEEE is working to translate global climate goals into measurable actions.
The discussions highlighted that the path forward lies less in inventing new technologies and more in aligning systems to deliver ones that already exist.
Summaries of COP30 and ISASC are available on the IEEE Technology for a Sustainable Climate website.
2026-02-21 02:00:02

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
So, humanoid robots are nearing peak human performance. I would point out, though, that this is likely very far from peak robot performance, which has yet to be effectively exploited, because it requires more than just copying humans.
[ Unitree ]
“The Street Dance of China”: Turning lightness into gravity, and rhythm into impact. This is a head-on collision between metal and beats. This Chinese New Year, watch PNDbotics Adam bring the heat with a difference.
[ PNDbotics ]
You had me at robot pandas.
[ MagicLab ]
NASA’s Perseverance rover can now precisely determine its own location on Mars without waiting for human help from Earth. This is possible thanks to a new technology called Mars Global Localization. This technology rapidly compares panoramic images from the rover’s navigation cameras with onboard orbital terrain maps. It’s done with an algorithm that runs on the rover’s Helicopter Base Station processor, which was originally used to communicate with the Ingenuity Mars Helicopter. In a few minutes, the algorithm can pinpoint Perseverance’s position to within about 10 inches (25 centimeters). The technology will help the rover drive farther autonomously and keep exploring.
[ NASA Jet Propulsion Laboratory ]
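The core idea behind map-based localization like this, matching a camera-derived terrain view against an onboard orbital map, can be illustrated with a toy normalized cross-correlation search. This is a simplified sketch, not NASA's actual algorithm; the synthetic terrain and patch sizes are invented for illustration.

```python
import numpy as np

def localize(rover_patch: np.ndarray, orbital_map: np.ndarray):
    """Find the offset in orbital_map that best matches rover_patch,
    using normalized cross-correlation (a toy stand-in for the real
    Mars Global Localization matching step)."""
    ph, pw = rover_patch.shape
    mh, mw = orbital_map.shape
    p = rover_patch - rover_patch.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(mh - ph + 1):
        for c in range(mw - pw + 1):
            w = orbital_map[r:r + ph, c:c + pw]
            w = w - w.mean()
            denom = np.sqrt((p ** 2).sum() * (w ** 2).sum())
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (p * w).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy example: embed a patch in a synthetic terrain map, then recover it.
rng = np.random.default_rng(0)
terrain = rng.random((64, 64))
patch = terrain[20:36, 30:46].copy()
pos, score = localize(patch, terrain)
print(pos)  # (20, 30)
```

In practice a flight system would use far more robust features and a coarse-to-fine search, but the principle of scoring candidate map positions against the observed view is the same.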
Legs? Where we’re going, we don’t need legs!
[ Paper ]
This is a bit of a tangent to robotics, but it gets a pass because of the cute jumping spider footage.
[ Berkeley Lab ]
Corvus One for Cold Chain is engineered to live and operate in freezer environments permanently, down to -20°F, while maintaining full flight and barcode scanning performance.
I am sure there is an excellent reason for putting a cold storage facility in the Mojave desert.
[ Corvus Robotics ]
The video documents the current progress made in the picking rate of the Shiva robot when picking strawberries. It first shows the previous status, then the further development, and finally the field test.
[ DFKI ]
Data powers an organization’s digital transformation, and ST Engineering MRAS is leveraging Spot to get a full view of critical equipment and facilities. Working autonomously, Spot collects information about machine health, and now, thanks to an integration of the Leica BLK ARC for reality capture, detailed and accurate point cloud data for their digital twin.
[ Boston Dynamics ]
The title of this video is “Get out and have fun!” Is that mostly what humanoid robots are good for right now, pretty much...?
[ Engine AI ]
ASTORINO is a modern 6-axis robot based on 3D printing technology. Programmable in AS-language, it facilitates the preparation of classes with ready-made teaching materials, is easy both to use and to repair, and gives the opportunity to learn and make mistakes without fear of breaking it.
[ Kawasaki ]
Can I get this in my living room?
[ Yaskawa ]
What does it mean to build a humanoid robot in seven months, and the next one in just five? This documentary takes you behind the scenes at Humanoid, a UK-based AI and robotics company building reliable, safe, and helpful humanoid robots. You’ll hear directly from our engineering, hardware, product, and other teams as they share their perspectives on the journey of turning physical AI into reality.
[ Humanoid ]
This IROS 2025 keynote is from Tim Chung, who is now at Microsoft, on “Catalyzing the Future of Human, Robot, and AI Agent Teams in the Physical World.”
The convergence of technologies—from foundation AI models to diverse sensors and actuators to ubiquitous connectivity—is transforming the nature of interactions in the physical and digital world. People have accelerated their collaborative connections and productivity through digital and immersive technologies, no longer limited by geography or language or access. Humans have also leveraged and interacted with AI in many different forms, with the advent of hyperscale AI models (i.e., large language models) forever changing (and at an ever-astonishing pace) the nature of human-AI teams, realized in this era of the AI “copilot.” Similarly, robotics and automation technologies now afford greater opportunities to work with and/or near humans, allowing for increasingly collaborative physical robots to dramatically impact real-world activities. It is the compounding effect of enabling all three capabilities, each complementary to one another in valuable ways, and we envision the triad formed by human-robot-AI teams as revolutionizing the future of society, the economy, and of technology.
[ IROS 2025 ]
This GRASP SFI talk is by Chris Paxton at Agility Robotics, on “How Close Are We To Generalist Humanoid Robots?”
With billions of dollars of funding pouring into robotics, general-purpose humanoid robots seem closer than ever. And certainly it feels like the pace of robotics is faster than ever, with multiple companies beginning large-scale deployments of humanoid robots. In this talk, I’ll go over the challenges still facing scaling robot learning, looking at insights from a year of discussions with researchers all over the world.
[ University of Pennsylvania GRASP Laboratory ]
This week’s CMU RI Seminar is from Jitendra Malik at UC Berkeley, on “Robot Learning, With Inspiration From Child Development.”
For intelligent robots to become ubiquitous, we need to “solve” locomotion, navigation and manipulation at sufficient reliability in widely varying environments. In locomotion, we now have demonstrations of humanoid walking in a variety of challenging environments. In navigation, we pursued the task of “Go to Any Thing” – a robot, on entering a newly rented Airbnb, should be able to find objects such as TV sets or potted plants. RL in simulation and sim-to-real have been workhorse technologies for us, assisted by a few technical innovations. I will sketch promising directions for future work.
2026-02-20 01:03:24

More money has been invested in AI than it cost to land on the moon. Spending on the technology this year is projected to reach up to $700 billion, almost double last year’s spending. Part of the impetus for this frantic outlay is a conviction among U.S. investors and policymakers that the country needs to “beat China.” Indeed, headlines have long cast AI development as a zero-sum rivalry between the U.S. and China, framing the technology’s advance as an arms race with a defined finish line. The narrative implies speed, symmetry, and a common objective.
But a closer look at AI development in the two countries shows they’re not even racing toward the same finish line. “The U.S. and China are running in very different lanes,” says Selina Xu, who leads China and AI policy research for Eric Schmidt, the tech investor, philanthropist, and former Google chief, in New York City. “The U.S. is doubling down on scaling” in pursuit of artificial general intelligence (AGI), Xu says, “while for China it’s more about boosting economic productivity and real-world impact.”
Lumping the U.S. and China onto a single AI scoreboard isn’t just inaccurate; it can distort policy and business decisions in harmful ways. “An arms race can become a self-fulfilling prophecy,” Xu says. “If companies and governments all embrace a ‘race to the bottom’ mentality, they will eschew necessary security and safety guardrails for the sake of being ahead. That increases the odds of AI-related crises.”
As machine learning advanced in the 2010s, prominent public figures such as Stephen Hawking and Elon Musk warned that it would be impossible to separate AI’s general-purpose potential from its military and economic implications, echoing Cold War–era frameworks for strategic competition. “An arms race is an easy way to think about this situation even if it’s not exactly right,” says Karson Elmgren, a China researcher at the Institute for AI Policy and Strategy, a think tank in San Francisco. Frontier labs, investors, and media benefit from simple, comparable progress metrics, like larger models, better benchmarks, and more computing power, so they favor and compound the arms race framing.
Artificial general intelligence is the implied “finish line” if AI is an arms race. But one of the many problems with an AGI finish line is that by its very nature, a machine superintelligence would be smarter than humans and therefore impossible to control. “If superintelligence were to emerge in a particular country, there’s no guarantee that that country’s interests are going to win,” says Graham Webster, a China researcher at Stanford University, in California.
An AGI finish line also assumes the U.S. and China are both optimizing for this goal and putting the majority of their resources towards it. This isn’t the case, as the two countries have starkly different economic landscapes.
After decades of rapid growth, China is now facing a grimmer reality. “China has been suffering through an economic slowdown for a mixture of reasons, from real estate to credit to consumption and youth unemployment,” says Xu, adding that the country’s leaders have been “trying to figure out what is the next economic driver that can get China to sustain its growth.”
Enter AI. Rather than pouring resources into speculative frontier models, Beijing has a pressing incentive to use the technology as a more immediate productivity engine. “In China we define AI as an enabler to improve existing industry, like healthcare, energy, or agriculture,” says AI policy researcher Liang Zheng, of Tsinghua University in Beijing, China. “The first priority is to use it to benefit ordinary people.”
To that end, AI investment in China is focused on embedding the technology into manufacturing, logistics, energy, finance, and public services. “It’s a long-term structural change, and companies must invest more in machines, software, and digitalization,” Liang says. “Even very small and medium enterprises are exploring use of AI to improve their productivity.”
China’s AI Plus initiative encourages using AI to boost efficiency. “Having a frontier technology doesn’t really move China towards an innovation-led developed economy,” says Kristy Loke, a fellow at MATS Research who focuses on China’s AI innovation and governance strategies. Instead, she says, “It’s really important to make sure that [these tools] are able to meet the demands of the Chinese economy, which are to industrialize faster, to do more smart manufacturing, to make sure they’re producing things in competitive processes.”
Automakers have embraced intelligent robots in “dark factories” with minimal human intervention; as of 2024, China had around five times more factory robots in use than the U.S. “We used to use human eyes for quality control and it was very inefficient,” says Liang. Now, computer vision systems detect errors and software predicts equipment failures, pausing production and scheduling just-in-time maintenance. Agricultural models advise farmers on crop selection, planting schedules, and pest control.
In healthcare, AI tools triage patients, interpret medical images, and assist diagnoses; Tsinghua is even piloting an AI “Agent Hospital” where physicians work alongside virtual clinical assistants. “In hospitals you used to have to wait a long time, but now you can use your agent to make a precise appointment,” Liang says. Many such applications use simpler “narrow AI” designed for specific tasks.
AI is also increasingly embedded across industries in the U.S., but the focus tends toward service-oriented and data-driven applications, leveraging large language models (LLMs) to handle unstructured data and automate communication. For example, banks use LLM-based assistants to help users manage accounts, find transactions, and handle routine requests; LLMs help healthcare professionals extract information from medical notes and clinical documentation.
“LLMs as a technology naturally fit the U.S. service-sector-based economy more so than the Chinese manufacturing economy,” Elmgren says.
The U.S. and China do compete more or less head-to-head in some AI-related areas, such as the underlying chips. The two have grappled to gain enough control over their supply chains to ensure national security, as recent tariff and export control fights have shown. “I think the main competitive element from a top level [for China] is to wriggle their way out of U.S. coercion over semiconductors. They want to have an independent capability to design, build, and package advanced semiconductors,” Webster says.
Military applications of AI are also a significant arena of U.S.–China competition, with both governments aiming to speed decision-making, improve intelligence, and increase autonomy in weapons systems. The U.S. Department of Defense launched its AI Acceleration Strategy last month, and China has explicitly integrated AI into its military modernization strategy under its policy of military-civil fusion. “From the perspective of specific military systems, there are incremental advantages that one side or the other can gain,” Webster says.
Despite China’s commitment to military and industrial applications, it has not yet picked an AI national champion. “After DeepSeek in early 2025 the government could have easily said, ‘You guys are the winners, I’ll give you all the money, please build AGI,’ but they didn’t. They see being ‘close enough’ to the technological frontier as important, but putting all eggs in the AGI basket as a gamble,” Loke says.
American companies are also still working with Chinese technology and workers, despite a slow uncoupling of the two economies. Though it may seem counterintuitive, more cooperation—and less emphasis on cutthroat competition—could yield better results for all. “For building more secure, trustworthy AI, you need both U.S. and Chinese labs and policymakers to talk to each other, to reach consensus on what’s off limits, then compete within those boundaries,” Xu says. “The arms race narrative also just misses the actual on-the-ground reality of companies co-opting each other’s approaches, the amount of research that gets exchanged in academic communities, the supply chains and talent that permeates across borders, and just how intertwined the two ecosystems are.”
2026-02-19 03:00:03

In the rapidly evolving world of engineering technology, professionals devote enormous energy to such tasks as mastering the latest frameworks, optimizing architectures, and refining machine learning models. It’s easy to let technical expertise become the sole measure of professional value. However, one of the most important skills an engineer can develop is the capacity to write and communicate effectively.
Whether you’re conducting research at a university or leading systems development projects at a global firm, your expertise can become impactful only when you share it in a way that others can understand and act upon. Without a clear narrative, even groundbreaking data or innovative designs can fail to gain traction, limiting their reach among colleagues and stakeholders, and in peer‑reviewed journals.
Writing is often labeled a “soft skill”—which can diminish its importance. In reality, communication is a core engineering competency. It lets us document methods, articulate research findings, and persuade decision-makers who determine whether projects move forward.
If your writing is dense, disorganized, or overloaded with technical jargon, the value of the underlying work can become obscured. A strong proposal might be dismissed not because the idea lacks merit but because the justification is difficult to follow.
Clear writing can strengthen the impact of your work. Poor writing can distract from the points you’re trying to make, as readers might not understand what you’re saying.
Technical writing differs from other forms of prose because readers expect information to follow predictable, logical patterns. Unclear writing can leave readers unsure of the author’s intent.
One of the most enduring frameworks for writing about technology in an understandable manner is the IMRaD structure: introduction, methods, results, and discussion.
More than just a template for academic papers, IMRaD is a road map for logical reasoning. Mastering the structure can help engineers communicate in a way that aligns with professional writing standards used in technical journals, so their work is better understood and more respected.
Despite technical communication’s importance, engineering curricula often offer little or no formal instruction in it.
Recognizing that gap, IEEE has expanded its role as a global knowledge leader by offering From Research to Publication: A Step-by-Step Guide to Technical Writing. The course is led by Traci Nathans-Kelly, director of the engineering communications program at Cornell.
Developed by IEEE Educational Activities and the IEEE Professional Communication Society, the learning opportunity goes beyond foundational writing skills. It addresses today’s challenges, such as the ethical use of generative AI in the writing workflow, the complexities of team-based authorship, and publishing strategies.
The program centers on core skill areas that can influence an engineer’s ability to communicate. Participants learn to master the IMRaD structure and learn advanced editing techniques to help strip away jargon, making complex ideas more accessible. In addition, the course covers strategic approaches to publishing work in high‑impact journals and improving a writer’s visibility within the technical community.
The course is available on the IEEE Learning Network. Participants earn professional development credit and a shareable digital badge. IEEE members receive a US $100 discount. Organizations can connect with an IEEE content specialist to offer the training to their teams.
2026-02-18 23:14:00

One day soon, a doctor might prescribe a pill that doesn’t just deliver medicine but also reports back on what it finds inside you—and then takes actions based on its findings.
Instead of scheduling an endoscopy or CT scan, you’d swallow an electronic capsule smaller than a multivitamin. As it travels through your digestive system, it could check tissue health, look for cancerous changes, and send data to your doctor. It could even release drugs exactly where they’re needed or snip a tiny biopsy sample before passing harmlessly out of your body.
This dream of a do-it-all pill is driving a surge of research into ingestible electronics: smart capsules designed to monitor and even treat disease from inside the gastrointestinal (GI) tract. The stakes are high. GI diseases affect tens of millions of people worldwide, including such ailments as inflammatory bowel disease, celiac disease, and small intestinal bacterial overgrowth. Diagnosis often involves a frustrating maze of blood tests, imaging, and invasive endoscopy. Treatments, meanwhile, can bring serious side effects because drugs affect the whole body, not just the troubled gut.
If capsules could handle much of that work—streamlining diagnosis, delivering targeted therapies, and sparing patients repeated invasive procedures—they could transform care. Over the past 20 years, researchers have built a growing tool kit of ingestible devices, some already in clinical use. These capsule-shaped devices typically contain sensors, circuitry, a power source, and sometimes a communication module, all enclosed in a biocompatible shell. But the next leap forward is still in development: autonomous capsules that can both sense and act, releasing a drug or taking a tissue sample.
That’s the challenge that our lab—the MEMS Sensors and Actuators Laboratory (MSAL) at the University of Maryland, College Park—is tackling. Drawing on decades of advances in microelectromechanical systems (MEMS), we’re building swallowable devices that integrate sensors, actuators, and wireless links in packages that are small and safe enough for patients. The hurdles are considerable: power, miniaturization, biocompatibility, and reliability, to name a few. But the potential payoff will be a new era of personalized and minimally invasive medicine, delivered by something as simple as a pill you can swallow at home.
The idea of a smart capsule has been around since the late 1950s, when researchers first experimented with swallowable devices to record temperature, gastric pH, or pressure inside the digestive tract. At the time, it seemed closer to science fiction than clinical reality, bolstered by pop-culture visions like the 1966 film Fantastic Voyage, where miniaturized doctors travel inside the human body to treat a blood clot.
One of the authors (Ghodssi) holds a miniaturized drug-delivery capsule that’s designed to release medication at specific sites in the gastrointestinal tract. Maximilian Franz/Engineering at Maryland Magazine
For decades, though, the mainstay of GI diagnostics was endoscopy: a camera on a flexible tube, threaded down the throat or up through the colon. These procedures are quite invasive and require patients to be sedated, which increases both the risk of complications and procedural costs. What’s more, it’s difficult for endoscopes to safely traverse the circuitous pathway of the small intestine. The situation changed in the early 2000s, when video-capsule endoscopy arrived. The best-known product, PillCam, looks like a large vitamin but contains a camera, LEDs, and a transmitter. As it passes through the gut, it beams images and videos to a wearable device.
Today, capsule endoscopy is a routine tool in gastroenterology; ingestible devices can measure acidity, temperature, or gas concentrations. And researchers are pushing further, with experimental prototypes that deliver drugs or analyze the microbiome. For example, teams from Tufts University, in Massachusetts, and Purdue University, in Indiana, are working on devices with dissolvable coatings and mechanisms to collect samples of liquid for studies of the intestinal microbiome.
Still, all those devices are passive. They activate on a timer or by exposure to the neutral pH of the intestines, but they don’t adapt to conditions in real time. The next step requires capsules that can sense biomarkers, make decisions, and trigger specific actions—moving from clever hardware to truly autonomous “smart pills.” That’s where our work comes in.
Since 2017, MSAL has been pushing ingestible devices forward with the goal of making an immediate impact in health care. The group built on the MEMS community’s legacy in microfabrication, sensors, and system integration, while taking advantage of new tools like 3D printing and materials like biocompatible polymers. Those advances have made it possible to prototype faster and shrink devices smaller, sparking a wave of innovation in wearables, implants, and now ingestibles. Today, MSAL is collaborating with engineers, physicians, and data scientists to move these capsules from lab benches to pharmaceutical trials.
As a first step, back in 2017, we set out to design sensor-carrying capsules that could reliably reach the small intestine and indicate when they reached it. Another challenge was that sensors that work well on the benchtop can falter inside the gut, where shifting pH, moisture, digestive enzymes, and low-oxygen conditions can degrade typical sensing components.
Our earliest prototype adapted MEMS sensing technology to detect abnormal enzyme levels in the duodenum that are linked to pancreatic function. The sensor and its associated electronics were enclosed in a biocompatible, 3D-printed shell coated with polymers that dissolved only at certain pH levels. This strategy could one day be used to detect biomarkers in secretions from the pancreas to detect early-stage cancer.
A high-speed video shows how a capsule deploys microneedles to deliver drugs into intestinal tissue. University of Maryland/Elsevier
That first effort with a passive device taught us the fundamentals of capsule design and opened the door to new applications. Since then, we’ve developed sensors that can track biomarkers such as the gas hydrogen sulfide, neurotransmitters such as serotonin and dopamine, and bioimpedance—a measure of how easily ions pass through intestinal tissue—to shed light on the gut microbiome, inflammation, and disease progression. In parallel, we’ve worked on more-active devices: capsule-based tools for controlled drug release and tissue biopsy, using low-power actuators to trigger precise mechanical movements inside the gut.
Like all new medical devices and treatments, ingestible electronics face many hurdles before they reach patients—from earning physician trust and insurance approval to demonstrating clear benefits, safety, and reliability. Packaging is a particular focus, as the capsules must be easy to swallow yet durable enough to survive stomach acid. The field is steadily proving safety and reliability, progressing from proof of concept in tissue, through the different stages of animal studies, and eventually to human trials. Every stage provides evidence that reassures doctors and patients—for example, showing that ingesting a properly packaged tiny battery is safe, and that a capsule’s wireless signals, far weaker than those of a cellphone, pose no health risk as they pass through the gut.
The gastrointestinal tract is packed with clues about health and disease, but much of it remains out of reach of standard diagnostic tools. Ingestible capsules offer a way in, providing direct access to the small intestine and colon. Yet in many cases, the concentrations of chemical biomarkers can be too low to detect reliably in early stages of a disease, which makes the engineering challenge formidable. What’s more, the gut’s corrosive, enzyme-rich environment can foul sensors in multiple ways, interfering with measurements and adding noise to the data.


Microneedle designs for drug-delivery capsules have evolved over the years. An early prototype [top] used microneedle anchors to hold a capsule in place. Later designs adopted molded microneedle arrays [center] for more uniform fabrication. The most recent version [bottom] integrates hollow microinjector needles, allowing more precise and controllable drug delivery. From top: University of Maryland/Wiley; University of Maryland/Elsevier; University of Maryland/ACS
Take, for example, inflammatory bowel disease, for which there is no standard clinical test. Rather than searching for a scarce biomarker molecule, our team focused on a physical change: the permeability of the gut lining, which is a key factor in the disease. We designed capsules that measure the intestinal tissue’s bioimpedance by sending tiny currents across electrodes and recording how the tissue resists or conducts those currents at different frequencies (a technique called impedance spectroscopy). To make the electrodes suitable for in vivo use, we coated them with a thin, conductive, biocompatible polymer that reduces electrical noise and keeps stable contact with the gut wall. The capsule finishes its job by transmitting its data wirelessly to our computers.
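The measurement principle can be illustrated with a toy model. The sketch below uses a simple single-dispersion tissue-equivalent circuit (extracellular resistance in parallel with an intracellular path through the cell membrane's capacitance) to show why low-frequency impedance drops when tight junctions loosen. All component values are illustrative assumptions, not the team's measured parameters.

```python
import math

def tissue_impedance(freq_hz, r_extra, r_intra, c_mem):
    """Complex impedance of a simple tissue-equivalent circuit:
    an extracellular (paracellular) resistance in parallel with
    an intracellular branch (resistance in series with membrane
    capacitance). Values are illustrative only."""
    w = 2 * math.pi * freq_hz
    z_branch = r_intra + 1.0 / (1j * w * c_mem)  # path through the cells
    return (r_extra * z_branch) / (r_extra + z_branch)

# A "leaky" gut lining with open tight junctions is modeled here as a
# lower paracellular resistance (1 kilohm vs. 5 kilohms when healthy).
for f in (1e2, 1e3, 1e4, 1e5, 1e6):
    z_healthy = tissue_impedance(f, r_extra=5e3, r_intra=1e3, c_mem=1e-8)
    z_leaky = tissue_impedance(f, r_extra=1e3, r_intra=1e3, c_mem=1e-8)
    print(f"{f:8.0f} Hz  |Z| healthy {abs(z_healthy):6.0f} ohm"
          f"   leaky {abs(z_leaky):6.0f} ohm")
```

Sweeping frequency this way is what gives impedance spectroscopy its diagnostic power: the two tissue states separate most clearly at low frequencies, where current is forced through the paracellular path.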
In our lab tests, the capsule performed impressively, delivering clean impedance readouts from excised pig tissue even when the sample was in motion. In our animal studies, it detected shifts in permeability triggered by calcium chelators, compounds that pry open the tight junctions between intestinal cells. These results suggest that ingestible bioimpedance capsules could one day give clinicians a direct, minimally invasive window into gut-barrier function and inflammation. We believe that ingestible diagnostics can serve as powerful tools—catching disease earlier, confirming whether treatments are working, and establishing a baseline for gut health.
Targeted drug delivery is one of the most compelling applications for ingestible capsules. Many drugs for GI conditions—such as biologics for inflammatory bowel disease—can cause serious side effects that limit both dosage and duration of treatment. A promising alternative is delivering a drug directly to the diseased tissue. This localized approach boosts the drug’s concentration at the target site while reducing its spread throughout the body, which improves effectiveness and minimizes side effects. The challenge is engineering a device that can both recognize diseased tissue and deliver medication quickly and precisely.
With other labs making great progress on the sensing side, we’ve devoted our energy to designing devices that can deliver the medicine. We’ve developed miniature actuators—tiny moving parts—that meet strict criteria for use inside the body: low power, small size, biocompatibility, and long shelf life.
Some of our designs use soft and flexible polymer “cantilevers” with attached microneedle systems that pop out from the capsule with enough force to release a drug, but without harming the intestinal tissue. While hollow microneedles can directly inject drugs into the intestinal lining, we’ve also demonstrated prototypes that use the microneedles for anchoring drug payloads, allowing the capsule to release a larger dose of medication that dissolves at an exact location over time.
In other experimental designs, we had the microneedles themselves dissolve after injecting a drug. In still others, we used microscale 3D printing to tailor the structure of the microneedles and control how quickly a drug is released—providing either a slow and sustained dose or a fast delivery. With this 3D printing, we created rigid microneedles that penetrate the mucosal lining and gradually diffuse the drug into the tissue, and soft microneedles that compress when the cantilever pushes them against the tissue, forcing the drug out all at once.
Tissue sampling remains the gold standard diagnostic tool in gastroenterology, offering insights far beyond what doctors can glean from visual inspection or blood tests. Capsules hold unique promise here: They can travel the full length of the GI tract, potentially enabling more frequent and affordable biopsies than traditional procedures. But the engineering hurdles are substantial. To collect a sample, a device must generate significant mechanical force to cut through the tough, elastic muscle of the intestines—while staying small enough to swallow.
Different strategies have been explored to solve this problem. Torsion springs can store large amounts of energy but are difficult to fit inside a tiny capsule. Electrically driven mechanisms may demand more power than current capsule batteries can provide. Magnetic actuation is another option, but it requires bulky external equipment and precise tracking of the capsule inside the body.
Our group has developed a low-power biopsy system that builds on the torsion-spring approach. We compress a spring and use adhesive to “latch” it closed within the capsule, then attach a microheater to the latch. When we wirelessly send current to the device, the microheater melts the adhesive on the latch, triggering the spring. We’ve experimented with tissue-collection tools, integrating a bladed scraper or a biopsy punch (a cylindrical cutting tool) with our spring-activated mechanisms; either of those tools can cut and collect tissue from the intestinal lining. With advanced 3D printing methods like direct laser writing, we can put fine, microscale edges on these miniature cutting tools that make it easier for them to penetrate the intestinal lining.
Storing and protecting the sample until the capsule naturally passes through the body is a major challenge, requiring both preservation of the sample and resealing the capsule to prevent contamination. In one of our designs, residual tension in the spring keeps the bladed scraper rotating, pulling the sample into the capsule and effectively closing a hatch that seals it inside.
Looking ahead, we expect to see the first clinical applications emerge in early-stage screening. Capsules that can detect electrochemical, bioimpedance, or visual signals could help doctors make sense of symptoms like vague abdominal pain by revealing inflammation, gut permeability, tumors, or bacterial overgrowth. They could also be adapted to screen for GI cancers. This need is pressing: The American Cancer Society reports that as of 2021, 41 percent of eligible U.S. adults were not up to date on colorectal cancer screening. What’s more, effective screening tools don’t yet exist for some diseases, such as small bowel adenocarcinoma. Capsule technology could make screening less invasive and more accessible.
Of course, ingestible capsules carry risks. The standard hazards of endoscopy still apply, such as the possibility of bleeding and perforation, and capsules introduce new complications. For example, if a capsule gets stuck in its passage through the GI tract, it could cause bowel obstruction and require endoscopic retrieval or even surgery. And concerns that are specific to ingestibles, including the biocompatibility of materials, reliable encapsulation of electronics, and safe battery operation, all demand rigorous testing before clinical use.
A microbe-powered biobattery designed for ingestible devices dissolves in water within an hour. Seokheun Choi/Binghamton University
Powering these capsules is a key challenge that must be solved on the path to the clinic. Most capsule endoscopes today rely on coin-cell batteries, typically silver oxide, which offer a safe and energy-dense source but often occupy 30 to 50 percent of the capsule’s volume. So researchers have investigated alternatives, from wireless power transfer to energy-harvesting systems. At the State University of New York at Binghamton, one team is exploring microbial fuel cells that generate electricity from probiotic bacteria interacting with nutrients in the gut. At MIT, researchers used the gastric fluids of a pig’s stomach to power a simple battery. In our own lab, we are exploring piezoelectric and electrochemical approaches to harvesting energy throughout the GI tract.
The next steps for our team are pragmatic ones: working with gastroenterologists and animal-science experts to put capsule prototypes through rigorous in vivo studies, then refining them for real-world use. That means shrinking the electronics, cutting power consumption, and integrating multiple functions into a single multimodal device that can sense, sample, and deliver treatments in one pass. Ultimately, any candidate capsule will require regulatory approval for clinical use, which in turn demands rigorous proof of safety and clinical effectiveness for a specific medical application.
The broader vision is transformative. Swallowable capsules could bring diagnostics and treatment out of the hospital and into patients’ homes. Whereas procedures with endoscopes require anesthesia, patients could take ingestible electronics easily and routinely. Consider, for example, patients with inflammatory bowel disease who live with an elevated risk of cancer; a smart capsule could perform yearly cancer checks, while also delivering medication directly wherever necessary.
Over time, we expect these systems to evolve into semiautonomous tools: identifying lesions, performing targeted biopsies, and perhaps even analyzing samples and applying treatment in place. Achieving that vision will require advances at the very edge of microelectronics, materials science, and biomedical engineering, bringing together capabilities that once seemed impossible to combine in something the size of a pill. These devices hint at a future in which the boundary between biology and technology dissolves, and where miniature machines travel inside the body to heal us from within.
2026-02-18 03:58:48

At CES 2026 in Las Vegas, Singapore-based startup Strutt introduced the EV1, a powered personal mobility device that uses lidar, cameras, and onboard computing for collision avoidance. Unlike manually steered powered wheelchairs, the EV1 assists with navigation in both indoor and outdoor environments—stopping or rerouting itself before a collision can occur.
Strutt describes its approach as “shared control,” in which the user sets direction and speed, while the device intervenes to avoid unsafe motion.
“The problem isn’t always disability,” says Strutt cofounder and CEO Tony Hong. “Sometimes people are just tired. They have limited energy, and mobility shouldn’t consume it.”
Building a mobility platform was not Hong’s original ambition. Trained in optics and sensor systems, he previously worked in aerospace and robotics. From 2016 to 2019, he led the development of lidar systems for drones at Shenzhen, China-based DJI, a leading manufacturer of consumer and professional drones. Hong then left DJI for a position as an assistant professor at Southern University of Science and Technology in Shenzhen—a school known for its research in robotics, human augmentation, sensors, and rehabilitation engineering.
However, he says, demographic trends around him proved hard to ignore. Populations in Asia, Europe, and North America are aging rapidly. More people are living longer, with limited stamina, slower reaction times, or balance challenges. So, Hong says he left academia to develop technology that would help people facing mobility limitations.
The EV1 combines two lidar units, two cameras, 10 time-of-flight depth sensors, and six ultrasonic sensors. Sensor data feeds into onboard computing that performs object detection and path planning.
“We need accuracy at a few centimeters,” Hong says. “Otherwise, you’re hitting door frames.”
Using the touchscreen interface, users can select a destination within the mapped environment. The onboard system calculates a safe route and guides the vehicle at a reduced speed of about 3 miles per hour. The rider can override the route instantly with joystick input. The system even supports voice commands, allowing the user to direct the EV1 to waypoints saved in its memory.
The user can say, for example, “Go to the fridge,” and it will chart a course to the refrigerator and go there, avoiding obstacles along the way.
The Strutt EV1 puts both joystick controls and a lidar view of the environment in front of the device’s user. Strutt
Driving the EV1 in manual mode, the rider retains full control, with vibration feedback warning of nearby obstacles. In “copilot” mode, the vehicle prevents direct collisions by stopping before impact. In “copilot plus,” it can steer around obstacles while continuing in the intended direction of travel.
“We don’t call it autonomous driving,” Hong says. “The user is always responsible and can take control instantly.”
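The three driving modes amount to an arbitration rule that sits between the joystick and the motors. The sketch below is a hypothetical, simplified version of such a shared-control arbiter; Strutt has not published its planner, and the mode names, threshold, and return values here are all illustrative assumptions.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"            # full user control; haptic warnings only
    COPILOT = "copilot"          # stop before impact
    COPILOT_PLUS = "copilot+"    # steer around, keep the intended heading

def arbitrate(mode, user_speed_mps, obstacle_dist_m, stop_dist_m=0.5):
    """Toy shared-control arbiter: the user always sets intent, and the
    device only intervenes when an obstacle is within stopping range.
    Returns (commanded_speed, steering_hint). Illustrative only."""
    if mode is Mode.MANUAL or obstacle_dist_m > stop_dist_m:
        return user_speed_mps, "hold"      # pass the user's command through
    if mode is Mode.COPILOT:
        return 0.0, "hold"                 # brake before the obstacle
    return user_speed_mps * 0.5, "detour"  # slow down and route around it

print(arbitrate(Mode.MANUAL, 1.2, 0.3))        # user keeps control
print(arbitrate(Mode.COPILOT, 1.2, 0.3))       # hard stop
print(arbitrate(Mode.COPILOT_PLUS, 1.2, 0.3))  # reroute at reduced speed
```

The key design property, matching Hong's description, is that the device never originates motion: it can only attenuate or redirect what the user commands, so an instant joystick override always wins.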
Hong says Strutt has also kept its users’ digital privacy in mind. All perception, planning, and control computations, he says, occur onboard the device. Sensor data is not transmitted unless the user chooses to upload logs for diagnostics. Camera and microphone activity is visibly indicated, and wireless communications are encrypted. Navigation and obstacle avoidance function without cloud connectivity.
“We don’t think of this as a wheelchair,” Hong says. “We think of it as an everyday vehicle.”
Strutt promotes the EV1’s use in both outdoor and indoor environments, offering high-precision sensing to navigate confined spaces. Strutt
To ensure that the EV1 could withstand years of shuttling a user back and forth inside their home and around their neighborhood, the Strutt team subjected the mobility vehicle to two million roller cycles—mechanical simulation testing that allows engineers to estimate how well the motors, bearings, suspension, and frame will hold up over time.
The EV1’s 600-watt-hour lithium iron phosphate battery provides 32 kilometers of range—enough for a full day of errands, indoor navigation, and neighborhood travel. A smaller 300-watt-hour version, designed to comply with airline lithium-battery limits, delivers 16 km. Charging from zero to 80 percent takes two hours.
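The quoted figures are internally consistent, as a quick back-of-the-envelope check shows: both packs imply the same energy consumption per kilometer, and the charge time implies a modest average charger power. The sketch below derives only from the numbers in this article.

```python
def wh_per_km(capacity_wh, range_km):
    """Implied energy consumption from the quoted pack size and range."""
    return capacity_wh / range_km

def avg_charge_power_w(capacity_wh, charge_fraction, hours):
    """Average charger power implied by a 0-to-`charge_fraction` charge time."""
    return capacity_wh * charge_fraction / hours

print(wh_per_km(600, 32))                 # standard pack: 18.75 Wh/km
print(wh_per_km(300, 16))                 # travel pack: same 18.75 Wh/km
print(avg_charge_power_w(600, 0.8, 2.0))  # 0-80% in 2 h implies 240 W average
```

That roughly 19 watt-hours per kilometer sits in the range typical of low-speed personal electric vehicles, and a 240-watt average draw is well within what an ordinary wall outlet supplies.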
The EV1 retails for US $7,500—a price that could place it outside the reach of people without deep pockets. For now, advanced sensors and embedded computing keep manufacturing cost high, while insurance reimbursement frameworks for AI-assisted mobility devices depend on where a person lives.
“A retail price of $7,500 raises serious equity concerns,” says Erick Rocha, communications and development coordinator at the Los Angeles-based advocacy organization Disability Voices United. “Many mobility device users in the United States rely on Medicaid,” the government insurance program for people with limited incomes. “Access must not be restricted to those who can afford to pay out of pocket.”
Medicaid coverage for high-tech mobility devices varies widely by state, and some states have rules that create significant barriers to approval (especially for non-standard or more specialized equipment).
Even in states that do cover mobility devices, similar types of hurdles often show up. Almost all states require prior approval for powered mobility devices, and the process can be time-consuming and documentation-heavy. Many states rigidly define what “medically necessary” means. They may require a detailed prescription describing the features of the mobility device and why the patient’s needs cannot be met with a simpler mobility aid such as a walker, cane, or standard manual wheelchair. Some states’ processes include a comprehensive in-person exam, documenting how the impairment described by the clinician limits activities of daily living such as toileting, dressing, bathing, or eating. Even if a person overcomes those hurdles, a state Medicaid program could deny coverage if a device doesn’t fit neatly into existing Healthcare Common Procedure Coding System billing codes.
“Sensor-assisted systems can improve safety,” Rocha says. “But the question is whether a device truly meets the lived, day-to-day realities of people with limited mobility.”
Hong says that Strutt, founded in 2023, is betting that falling sensor prices and advances in embedded processing now make commercial deployment of the EV1 feasible.