2026-01-10 22:00:02

Earlier this week, Nvidia surprise-announced their new Vera Rubin architecture (no relation to the recently unveiled telescope) at the Consumer Electronics Show in Las Vegas. The new platform, set to reach customers later this year, is advertised to offer a ten-fold reduction in inference costs and a four-fold reduction in how many GPUs it would take to train certain models, as compared to Nvidia’s Blackwell architecture.
The usual suspect for improved performance is the GPU. Indeed, the new Rubin GPU boasts 50 quadrillion floating-point operations per second (50 petaFLOPS) of 4-bit computation, as compared to 10 petaFLOPS on Blackwell, at least for transformer-based inference workloads like large language models.
However, focusing on just the GPU misses the bigger picture. There are a total of six new chips in the Vera-Rubin-based computers: the Vera CPU, the Rubin GPU, and four distinct networking chips. To achieve performance advantages, the components have to work in concert, says Gilad Shainer, senior vice president of networking at Nvidia.
“The same unit connected in a different way will deliver a completely different level of performance,” Shainer says. “That’s why we call it extreme co-design.”
AI workloads, both training and inference, run on large numbers of GPUs simultaneously. “Two years back, inferencing was mainly run on a single GPU, a single box, a single server,” Shainer says. “Right now, inferencing is becoming distributed, and it’s not just in a rack. It’s going to go across racks.”
To accommodate these hugely distributed tasks, as many GPUs as possible need to work effectively as one. This is the aim of the so-called scale-up network: the connection of GPUs within a single rack. Nvidia handles this connection with their NVLink networking chip. The new line includes the NVLink6 switch, with double the bandwidth of the previous version (3,600 gigabytes per second for GPU-to-GPU connections, as compared to 1,800 GB/s for the NVLink5 switch).
In addition to the bandwidth doubling, the scale-up chips also double the number of SerDes (serializer/deserializer) units, which allow data to be sent across fewer wires, and expand the range of calculations that can be done within the network.
“The scale-up network is not really the network itself,” Shainer says. “It’s computing infrastructure, and some of the computing operations are done on the network…on the switch.”
The rationale for offloading some operations from the GPUs to the network is two-fold. First, it allows some tasks to be done only once, rather than having every GPU perform them. A common example of this is the all-reduce operation in AI training. During training, each GPU computes a mathematical operation called a gradient on its own batch of data. To train the model correctly, all the GPUs need to know the average gradient computed across all batches. Rather than each GPU sending its gradient to every other GPU, with every one of them computing the average, it saves computational time and power for that operation to happen only once, within the network.
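As a minimal illustration of the operation itself, here is the averaging step in plain NumPy. This is just a sketch of the math with made-up gradient values, not Nvidia's in-switch implementation:

import numpy as np

# Hypothetical gradients from four GPUs, one row each (invented values).
gradients = np.array([[0.2, -0.1, 0.4],
                      [0.3, -0.2, 0.5],
                      [0.1, -0.3, 0.6],
                      [0.4,  0.0, 0.3]])

# The all-reduce result every GPU needs: the average across all batches.
# Done in-network, this reduction happens once, inside the switch,
# instead of redundantly on every GPU.
average_gradient = gradients.mean(axis=0)
print(average_gradient)  # [ 0.25 -0.15  0.45]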
A second rationale is to hide the time it takes to shuttle data in-between GPUs by doing computations on them en-route. Shainer explains this via an analogy of a pizza parlor trying to speed up the time it takes to deliver an order. “What can you do if you had more ovens or more workers? It doesn’t help you; you can make more pizzas, but the time for a single pizza is going to stay the same. Alternatively, if you would take the oven and put it in a car, so I’m going to bake the pizza while traveling to you, this is where I save time. This is what we do.”
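In scheduling terms, the trick is to overlap data movement with computation. Here is a toy Python sketch of that pipelining effect; the chunk count and timings are invented purely for illustration:

import time
from concurrent.futures import ThreadPoolExecutor

TRANSFER_S = 0.1  # time to move one chunk of data (invented)
COMPUTE_S = 0.1   # time to process one chunk (invented)
CHUNKS = 8

def transfer(chunk):
    time.sleep(TRANSFER_S)  # stand-in for moving data between GPUs
    return chunk

def compute(chunk):
    time.sleep(COMPUTE_S)   # stand-in for the reduction work

# Naive: move a chunk, then process it, one after the other (~1.6 s).
start = time.time()
for c in range(CHUNKS):
    compute(transfer(c))
print(f"sequential: {time.time() - start:.2f} s")

# Pipelined: process chunk n while chunk n+1 is still in flight (~0.9 s).
start = time.time()
with ThreadPoolExecutor(max_workers=1) as network:
    in_flight = network.submit(transfer, 0)
    for c in range(1, CHUNKS + 1):
        chunk = in_flight.result()
        if c < CHUNKS:
            in_flight = network.submit(transfer, c)
        compute(chunk)
print(f"pipelined:  {time.time() - start:.2f} s")

The transfer time mostly disappears because it hides behind the computation; the pizza bakes in the car.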
In-network computing is not new to this iteration of Nvidia’s architecture; in fact, it has been in common use since around 2016. But this iteration broadens the swath of computations that can be done within the network, to accommodate different workloads and different numerical formats, Shainer says.
The rest of the networking chips included in the Rubin architecture comprise the so-called scale-out network. This is the part that connects different racks to each other within the data center.
Those chips are the ConnectX-9, a network interface card; the BlueField-4, a so-called data processing unit, which is paired with two Vera CPUs and a ConnectX-9 card to offload networking, storage, and security tasks; and finally the Spectrum-6 Ethernet switch, which uses co-packaged optics to send data between racks. The Ethernet switch also doubles the bandwidth of the previous generation, while minimizing jitter—the variation in arrival times of information packets.
“Scale-out infrastructure needs to make sure that those GPUs can communicate well in order to run a distributed computing workload, and that means I need a network that has no jitter in it,” he says. With jitter, different racks doing different parts of the calculation deliver their answers at different times. One rack will always be slower than the rest, and the rest of the racks, full of costly equipment, sit idle while waiting for that last packet. “Jitter means losing money,” Shainer says.
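A quick simulation makes the cost concrete: in a synchronized step, completion is gated by the slowest rack, so the worst packet's jitter, not the average, sets the pace. All numbers below are invented for illustration:

import random

RACKS = 64
STEPS = 1000
BASE_S = 1.0  # nominal per-rack step time (invented)

def mean_step_time(jitter_s):
    total = 0.0
    for _ in range(STEPS):
        # A step finishes only when the slowest rack's answer arrives.
        total += max(BASE_S + random.uniform(0, jitter_s) for _ in range(RACKS))
    return total / STEPS

print(mean_step_time(0.0))  # ~1.00: no jitter, no idle time
print(mean_step_time(0.2))  # ~1.20: every step pays nearly the full worst case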
None of Nvidia’s host of new chips is specifically dedicated to connecting data centers to one another, termed “scale-across.” But Shainer argues this is the next frontier. “It doesn’t stop here, because we are seeing the demands to increase the number of GPUs in a data center,” he says. “100,000 GPUs is not enough anymore for some workloads, and now we need to connect multiple data centers together.”
2026-01-10 03:00:02

When two airplanes hit the World Trade Center in New York City on 11 September 2001, no one could predict how the Twin Towers would react structurally. The commercial jet airliners severed columns and started fires that weakened steel beams, causing a progressive “pancaking” collapse.
Skyscrapers had not been designed or constructed with that kind of catastrophic structural failure in mind. IEEE Senior Member Sena Kizildemir is changing that through disaster simulation, one scenario at a time.
Employer: Thornton Tomasetti, in New York City
Job title: Project engineer
Member grade: Senior member
Alma maters: Işik University, in Şile, and Lehigh University, in Bethlehem, Pa.
A project engineer at Thornton Tomasetti’s applied science division in New York, Kizildemir uses simulations to study how buildings fail under extreme events such as impacts and explosions. The simulation results can help designers develop mitigation strategies.
“Simulations help us understand what could happen before it occurs in real life,” she says, “to be able to better plan for it.”
She loves that her work mixes creativity with solving real-world problems, she says: “You’re creating something to help people. My favorite question to answer is, ‘Can you make this better or easier?’”
For her work, the nonprofit Professional Women in Construction named her one of its 20 Under 40: Women in Construction for 2025.
Kizildemir is passionate about mentoring young engineers and being an IEEE volunteer. She says she has made it her mission to “pack as much impact into my years as possible.”
She was born in Istanbul to a father who is a professional drummer and a mother who worked in magazine advertising and sales. Kizildemir and her older brother pursued engineering careers despite neither parent being involved in the field. While she became an expert in civil and mechanical engineering, her brother is an industrial engineer.
As a child, she was full of curiosity, she says, interested in figuring out how things were built and how they worked. She loved building objects out of Legos, she says, and one of her earliest memories is using them to make miniature houses for ants.
After acing an entrance exam, she won a spot in a STEM-focused high school, where she studied mathematics and physics.
“Engineering is one of the few careers where you can make a lasting impact on the world, and I plan on mine being meaningful.”
During her final year at the high school, she took the nationwide YKS (Higher Education Institutions Examination). The test determines which universities and programs—such as medicine, engineering, or law—students can pursue.
She received a full scholarship to attend Işik University in Şile. Figuring she would study engineering abroad one day, she chose an English-taught program. She says she found that civil engineering best aligned with making the biggest impact on her community and the world.
Several of her professors were alumni of Lehigh University, in Bethlehem, Pa., and spoke highly of the school. After earning her bachelor’s degree in civil engineering in 2016, she decided to attend Lehigh, where she earned a full scholarship to its master’s program in civil engineering.
Her master’s thesis focused on investigating root causes of crack propagation, which threatens railroad safety.
Repeated wheel-rail loading causes microcracks, which lead to metal fatigue; meanwhile, residual stress builds up from the specialized heating and cooling treatments used to manufacture steel rails. Together these effects can cause cracks to develop beneath the rail’s surface. Because they’re invisible to the naked eye, such fractures are challenging to detect, Kizildemir says.
The project was done in collaboration with the U.S. Federal Railroad Administration—part of the Department of Transportation—which is looking to adjust technical standards and employ mitigation strategies.
Kizildemir and five colleagues designed and implemented testing protocols and physics-based simulations to detect cracks earlier and prevent their spread. Their research has given the Railroad Administration insights into structural defects that are being used to revise rail-building guidelines and inspection protocols. The administration published the first phase of the research in 2024.
After graduating in 2018, Kizildemir began a summer internship as a civil engineer at Thornton Tomasetti. She conducted computational modeling using Abaqus software for rails subjected to repeated plastic deformation, in which the material permanently changes shape under excessive stress, and presented her recommendations for improvement to the company’s management.
During her internship, she worked with professors in different fields, including materials behavior and mechanical engineering. The experience, she says, inspired her to pursue a Ph.D. in mechanical engineering at Lehigh, continuing her research with the Railroad Administration. She earned her degree in 2023.
She loved the work and the team at Thornton Tomasetti so much, she says, that she applied to work at the company, where she is now a project engineer.
Her work focuses on developing finite element models for critical infrastructure and extreme events.
Finite element modeling breaks complex systems into small, interconnected elements in order to numerically simulate real-world situations. She creates computational models of structures enduring realistic catastrophic events, such as a vehicle crashing into a building.
She uses simulations to understand how buildings react to attacks such as the one on 9/11, which, she says, is often used as an example of why such research is essential.
When starting a project, she and her team review building standards and try to identify new issues not yet covered by them. The team then adapts existing codes and standards, usually developed for well-understood hazards such as earthquakes, wind, and floods, to define simulation parameters.
When a new structure is being built, for example, it is not designed to withstand a truck crashing into it. But Kizildemir and her team want to know how the building would react should that happen. They simulate the environments and situations, and they make recommendations based on the results to reduce or eliminate risks of structural failure.
Mitigation suggestions include specific strategies to be implemented during project design and construction.
Simulations can be created for any infrastructure, Kizildemir says.
“I love problems that force me to think differently,” she says. “I want to keep growing.”
She says she plans to live by Thornton Tomasetti’s internal motto: “When others say no, we say ‘Here’s how.’”
When Kizildemir first heard of IEEE, she assumed it was only for electrical engineers. But after learning how diverse and inclusive the organization is, she joined in 2024. She has since been elevated to a senior member and has become a volunteer. She joined the IEEE Technology and Engineering Management Society.
She chaired the conference tracks and IEEE-sponsored sessions at the 2024 Joint Rail Conference, held in Columbia, S.C. She actively contributes to IEEE’s Collabratec platform and has participated in panel review meetings for senior member elevation applications.
She’s also a member of ASME and has been volunteering for it since 2023.
“Community is what helped get me to where I am today, and I want to pay it forward and make the field better,” she says. “Helping others improves ourselves.”
Kizildemir mentors junior engineers at Thornton Tomasetti and is looking to expand her reach through IEEE’s mentorship programs.
“Engineering doesn’t have a gender requirement,” she says she tells girls. “If you’re curious and like understanding how things work and get excited to solve difficult problems, engineering is for you.
“Civil engineers don’t just build bridges,” she adds. “There are countless niche areas to be explored. Engineering is one of the few careers where you can make a lasting impact on the world, and I plan on mine being meaningful.”
Kizildemir says she wants every engineer to be able to improve their community. Her main piece of advice for recent engineering graduates is that “curiosity, discipline, and the willingness to understand things deeply, to see how things can be done better,” are the keys to success.
2026-01-10 02:00:04

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
We’re excited to announce the product version of our Atlas® robot. This enterprise-grade humanoid robot offers impressive strength and range of motion, precise manipulation, and intelligent adaptability—designed to power the new industrial revolution.
[ Boston Dynamics ]
I appreciate the creativity and technical innovation here, but realistically, if you’ve got more than one floor in your house? Just get a second robot. That single-step sunken living room though....
[ Roborock ]
Wow, SwitchBot’s CES 2026 video shows almost as many robots in their fantasy home as I have in my real home.
[ SwitchBot ]
What is happening in robotics right now that I can derive more satisfaction from watching robotic process automation than I can from watching yet another humanoid video?
[ ABB ]
Yes, this is definitely a robot I want in close proximity to my life.
[ Unitree ]
The video below demonstrates a MenteeBot learning, through mentoring, how to replace a battery in another MenteeBot. No teleoperation is used.
[ Mentee Robotics ]
Personally, I think we should encourage humanoid robots to fall much more often, just so we can see whether they can get up again.
[ Agility Robotics ]
Achieving long-horizon, reliable clothing manipulation in the real world remains one of the most challenging problems in robotics. This live test demonstrates a strong step forward in embodied intelligence, vision-language-action systems, and real-world robotic autonomy.
[ HKU MMLab ]
Millions of people around the world need assistance with feeding. Robotic feeding systems offer the potential to enhance autonomy and quality of life for individuals with impairments and reduce caregiver workload. However, their widespread adoption has been limited by technical challenges such as estimating bite timing, the appropriate moment for the robot to transfer food to a user’s mouth. In this work, we introduce WAFFLE: Wearable Approach For Feeding with LEarned Bite Timing, a system that accurately predicts bite timing by leveraging wearable sensor data to be highly reactive to natural user cues such as head movements, chewing, and talking.
[ CMU RCHI ]
Humanoid robots are now available as platforms, which is a great way of sidestepping the whole practicality question.
[ PNDbotics ]
We’re introducing Spatially Enhanced Recurrent Units (SRUs)—a simple yet powerful modification that enables robots to build implicit spatial memories for navigation. Published in the International Journal of Robotics Research (IJRR), this work demonstrates up to +105 percent improvement over baseline approaches, with robots successfully navigating 70+ meters in the real world using only a single forward-facing camera.
[ ETHZ RSL ]
Looking forward to the DARPA Triage Challenge this fall!
[ DARPA ]
Here are a couple of good interviews from the Humanoids Summit 2025.
[ Humanoids Summit ]
2026-01-09 06:06:42

This whitepaper provides MEMS engineers, biomedical device developers, and multiphysics simulation specialists with a practical AI-accelerated workflow for optimizing piezoelectric micromachined ultrasonic transducers (PMUTs). It shows how to explore complex design trade-offs between sensitivity and bandwidth, achieving validated performance improvements in minutes instead of days on standard cloud infrastructure.
2026-01-08 21:00:02

In recent months, I’ve noticed a troubling trend with AI coding assistants. After two years of steady improvements, over the course of 2025, most of the core models reached a quality plateau, and more recently, seem to be in decline. A task that might have taken five hours assisted by AI, and perhaps ten hours without it, is now more commonly taking seven or eight hours, or even longer. It’s reached the point where I am sometimes going back and using older versions of large language models (LLMs).
I use LLM-generated code extensively in my role as CEO of Carrington Labs, a provider of predictive-analytics risk models for lenders. My team has a sandbox where we create, deploy, and run AI-generated code without a human in the loop. We use these programs to extract useful features for model construction, a natural-selection approach to feature development. This gives me a unique vantage point from which to evaluate coding assistants’ performance.
Until recently, the most common problem with AI coding assistants was poor syntax, followed closely by flawed logic. AI-created code would often fail with a syntax error or snarl itself up in faulty structure. This could be frustrating: the solution usually involved manually reviewing the code in detail and finding the mistake. But it was ultimately tractable.
However, recently released LLMs, such as GPT-5, have a much more insidious failure mode. They often generate code that fails to perform as intended but that, on the surface, seems to run successfully, avoiding syntax errors or obvious crashes. They do this by removing safety checks, by creating fake output that matches the desired format, or through a variety of other techniques for avoiding crashes during execution.
As any developer will tell you, this kind of silent failure is far, far worse than a crash. Flawed outputs will often lurk undetected in code until they surface much later. This creates confusion and is far more difficult to catch and fix. This sort of behavior is so unhelpful that modern programming languages are deliberately designed to fail quickly and noisily.
I’ve noticed this problem anecdotally over the past several months, but recently, I ran a simple yet systematic test to determine whether it was truly getting worse. I wrote some Python code that loaded a dataframe and then looked for a nonexistent column.
import pandas as pd

df = pd.read_csv('data.csv')
df['new_column'] = df['index_value'] + 1  # there is no column 'index_value'
Obviously, this code would never run successfully. Python generates an easy-to-understand error message that explains that the column 'index_value' cannot be found. Any human seeing this message would inspect the dataframe and notice that the column was missing.
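Running the snippet yields a traceback that ends with something like this (the exact wording varies with the pandas version):

Traceback (most recent call last):
  ...
KeyError: 'index_value'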
I sent this error message to nine different versions of ChatGPT, primarily variations on GPT-4 and the more recent GPT-5. I asked each of them to fix the error, specifying that I wanted completed code only, without commentary.
This is of course an impossible task—the problem is the missing data, not the code. So the best answer would be either an outright refusal, or failing that, code that would help me debug the problem. I ran 10 trials for each model and classified the output as helpful (when it suggested the column was probably missing from the dataframe), useless (something like just restating my question), or counterproductive (for example, creating fake data to avoid an error).
GPT-4 gave a useful answer in 9 of the 10 trials. In three cases, it ignored my instructions to return only code, and explained that the column was likely missing from my dataset and that I would have to address it there. In six cases, it tried to execute the code but added an exception handler that would either throw up an error or fill the new column with an error message if the column couldn’t be found. (In the tenth trial, it simply restated my original code.)
One of those responses explained: “This code will add 1 to the ‘index_value’ column from the dataframe ‘df’ if the column exists. If the column ‘index_value’ does not exist, it will print a message. Please make sure the ‘index_value’ column exists and its name is spelled correctly.”
GPT-4.1 had an arguably even better solution. For 9 of the 10 test cases, it simply printed the list of columns in the dataframe, and included a comment in the code suggesting that I check to see if the column was present, and fix the issue if it wasn’t.
GPT-5, by contrast, found a solution that worked every time: it simply took the actual index of each row (not the fictitious ‘index_value’) and added 1 to it in order to create new_column. This is the worst possible outcome: the code executes successfully, and at first glance seems to be doing the right thing, but the resulting value is essentially a random number. In a real-world example, this would create a much larger headache downstream in the code.
df = pd.read_csv('data.csv')
df['new_column'] = df.index + 1  # silently substitutes the row index for the missing column
I wondered if this issue was particular to the GPT family of models. I didn’t test every model in existence, but as a check I repeated my experiment on Anthropic’s Claude models. I found the same trend: the older Claude models, confronted with this unsolvable problem, essentially shrug their shoulders, while the newer models sometimes solve the problem and sometimes just sweep it under the rug.
[Chart: Newer versions of large language models were more likely to produce counterproductive output when presented with a simple coding error.]
I don’t have inside knowledge on why the newer models fail in such a pernicious way, but I have an educated guess. I believe it’s the result of how the LLMs are being trained to code. The older models were trained on code much the same way as they were trained on other text: Large volumes of presumably functional code were ingested as training data, which was used to set model weights. This wasn’t always perfect; as anyone who used AI for coding in early 2023 will remember, syntax errors and faulty logic were frequent. But those models certainly didn’t rip out safety checks or find ways to create plausible but fake data, as GPT-5 did in my example above.
But as soon as AI coding assistants arrived and were integrated into coding environments, the model creators realized they had a powerful source of labeled training data: the behavior of the users themselves. If an assistant offered up suggested code, the code ran successfully, and the user accepted the code, that was a positive signal, a sign that the assistant had gotten it right. If the user rejected the code, or if the code failed to run, that was a negative signal, and when the model was retrained, the assistant would be steered in a different direction.
This is a powerful idea, and no doubt contributed to the rapid improvement of AI coding assistants for a period of time. But as inexperienced coders started turning up in greater numbers, it also started to poison the training data. AI coding assistants that found ways to get their code accepted by users kept doing more of that, even if “that” meant turning off safety checks and generating plausible but useless data. As long as a suggestion was taken on board, it was viewed as good, and downstream pain would be unlikely to be traced back to the source.
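To make the hypothesis concrete, here is a toy sketch of the kind of reward signal I suspect is in play. This is speculation on my part; no vendor has published its actual pipeline:

def training_label(code_ran, user_accepted):
    # Hypothetical signal: "ran and was accepted" counts as success,
    # with no check on whether the output was actually correct.
    return 1 if code_ran and user_accepted else 0

good_fix = training_label(code_ran=True, user_accepted=True)        # 1
silent_failure = training_label(code_ran=True, user_accepted=True)  # also 1: indistinguishable
honest_crash = training_label(code_ran=False, user_accepted=False)  # 0: penalized

print(good_fix, silent_failure, honest_crash)  # 1 1 0

Under a scheme like this, a silent failure accepted by a novice user earns the same positive label as a genuinely correct fix, while the honest crash is punished.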
The most recent generation of AI coding assistants has taken this thinking even further, automating more and more of the coding process with autopilot-like features. These features only accelerate the poisoning, as there are fewer points where a human is likely to see the code and realize that something isn’t correct. Instead, the assistant is likely to keep iterating to try to get to a successful execution. In doing so, it is likely learning the wrong lessons.
I am a huge believer in artificial intelligence, and I believe that AI coding assistants have a valuable role to play in accelerating development and democratizing the process of software creation. But chasing short-term gains, and relying on cheap, abundant, but ultimately poor-quality training data is going to continue resulting in model outcomes that are worse than useless. To start making models better again, AI coding companies need to invest in high-quality data, perhaps even paying experts to label AI-generated code. Otherwise, the models will continue to produce garbage, be trained on that garbage, and thereby produce even more garbage, eating their own tails.
2026-01-08 03:00:03

The IEEE Board of Directors has nominated IEEE Senior Member David Alan Koehler and IEEE Life Fellow Manfred “Fred” J. Schindler as candidates for 2027 IEEE president-elect.
IEEE Senior Member Gerardo Barbosa and IEEE Life Senior Member Timothy T. Lee are seeking nomination by petition. A separate article will be published in The Institute at a later date.
The winner of this year’s election will serve as IEEE president in 2028. For more information about the election, president-elect candidates, and the petition process, visit ieee.org/elections.
Koehler is a subject matter expert with almost 30 years of experience in establishing condition-based maintenance practices for electrical equipment and managing analytical laboratories. He has presented his work at global conferences and published articles in technical publications related to the power industry. Koehler is an executive advisor at Danovo Energy Solutions.
An active volunteer, he has served in every geographical unit within IEEE. His first leadership position was chair of the Central Indiana Section from 2012 to 2014. He served as 2019–2020 director of IEEE Region 4, vice chair of the 2022 IEEE Board of Directors Ad Hoc Committee on the Future of Engagement, 2022 vice president of IEEE Member and Geographic Activities, and chair of the 2024 IEEE Board of Directors Ad Hoc Committee on Leadership Continuity and Efficiency.
He served on the IEEE Board of Directors for three different years. He has been a member of the IEEE-USA, Member and Geographic Activities, and Publication Services and Products boards.
Koehler is a proud and active member of IEEE Women in Engineering and IEEE-Eta Kappa Nu, the honor society.
Schindler, an expert in microwave semiconductor technology, is an independent consultant supporting clients with technical expertise, due diligence, and project management.
Throughout his career, he led the development of microwave integrated-circuit technology, from lab demonstrations to high-volume commercial products. He has numerous technical publications and holds 11 patents.
Schindler served as CTO of Anlotek and director of Qorvo and RFMD’s Boston design center. He was an applications manager at IBM, an engineering manager at ATN Microwave, and a lab manager at Raytheon.
An IEEE volunteer for more than 30 years, Schindler served as the 2024 vice president of IEEE Technical Activities and the 2022–2023 Division IV director. He was chair of the IEEE Conferences Committee from 2015 to 2018 and president of the IEEE Microwave Theory and Technology Society (MTTS) in 2003. He received the 2018 IEEE MTTS Distinguished Service Award. His award-winning micro-business column has appeared in IEEE Microwave Magazine since 2011.
He also led the 2025 One IEEE to Enable Strategic Investments in Innovations and Public Imperative Activities ad hoc committee.
Schindler is an IEEE-Eta Kappa Nu honorary life member.