2026-04-27 21:00:01

Electric vehicles, whether they’re cars on the road or electric vertical take-off and landing (eVTOL) aircraft, are built around similar electric motors. But there are vital differences, including in component cost, mass, and redundancy.
Jon Wagner spent five years as the senior director of battery engineering for Tesla before joining California-based eVTOL developer Joby Aviation in 2017. He spoke with IEEE Spectrum about how engineering differs between cars and aircraft.
Jon Wagner leads power train and electronics at Joby Aviation.
How do eVTOL motors differ from car motors?
Jon Wagner: In general, ground transportation has a different focus on cost versus mass. You know, would you be willing to spend more on the parts in order to save a certain amount of mass? On a ground vehicle the trade-offs end at a certain point, where cost is dominant, whereas with aviation, the trade-offs between cost and mass go a lot deeper. And so for certain solutions eVTOL makers are willing to spend more money in order to enable either lighter weight or greater efficiency.
The other key difference is related to safety. In essence, we’re dealing with the same motor technologies for ground transportation and aviation right now, so the failure modes are similar. But of course, with aviation we have the desire for continued safe flight and landing, and that drives what you do in the design to mitigate those failures if they were to occur. In many cases in ground transportation, the mitigation for a failure is to pull over safely to the side of the road. In aviation, the mitigation is redundancy, because there’s not an option to pull over.
Is redundancy designed into EV motors?
Wagner: Typically, redundancy is not designed into electric vehicle drive systems solely for the purpose of redundancy. There are some cars now that have all-wheel drive—so there’s a motor on the front, a motor on the back—so as a secondary feature you get the redundancy. But it wasn’t done with the primary intent of having redundancy.
How does Joby’s eVTOL manufacturing compare to EV manufacturing?
Wagner: The most efficient way to run a large-scale engineering effort in a mature industry, such as automotive, is to break your system up into pieces that can be outsourced to suppliers who are going to do a really good job on each piece. The downside is that when you break a problem up into three pieces, you now have interface boundaries between each of these pieces, and those always create inefficiencies. We were able to design highly integrated solutions without taking that manufacturing penalty.
Are there any materials you’re really excited about?
Wagner: Permendur [a cobalt-iron alloy] typically costs in the neighborhood of 10 times as much as traditional motor steel. That’s significant, and it’s often not used in ground transportation because of that cost. It comes with small improvements in performance, but enough that for aviation it’s quite interesting.
Will electric aircraft catch on like ground EVs?
Wagner: I’ve always wanted to be a very forward thinker with respect to power-train. However, one of the things I’ve learned over the years is that power-train development has to come with a very healthy dose of patience. Developing a whole new type of power-train is a big endeavor, but it’s one that I’m very confident the aviation industry will undertake. We’re certainly undertaking it here at Joby, and we’ll see that broaden, I’m sure, with time.
This article appears in the May 2026 print issue as “Jon Wagner.”
2026-04-27 20:45:01

This sponsored article is brought to you by NYU Tandon School of Engineering.
The traditional approach to academic research goes something like this: Assemble experts from a discipline, put them in a building, and hope something useful emerges. Biology departments do biology. Engineering departments do engineering. Medical schools treat patients.
NYU is turning that model inside out. At its new Institute for Engineering Health, the organizing principle centers on disease states rather than traditional disciplines. Instead of asking “what can electrical engineers contribute to medicine?” they’re asking “what would it take to cure allergic asthma?” and then assembling whoever can answer that question, whether they’re immunologists, computational biologists, materials scientists, AI researchers, or wireless communications engineers.
Jeffrey Hubbell, NYU’s vice president for bioengineering strategy and professor of chemical and biomolecular engineering at NYU’s Tandon School of Engineering. New York University
The early results suggest they’re onto something. A chemical engineer and an electrical engineer collaborated to build a device that detects airborne threats — including disease pathogens — that’s now a startup. A visually impaired physician teamed with mechanical engineers to create navigation technology for blind subway riders. And Jeffrey Hubbell, the Institute’s leader, is advancing “inverse vaccines” that could reprogram immune systems to treat conditions from celiac disease to allergies — work that requires equal fluency in immunology, molecular engineering, and materials science.
The underlying problem these collaborations address is conceptual as much as organizational. In his field, Hubbell argues that modern medicine has optimized around a single strategy: developing drugs that block specific molecules or suppress targeted immune responses. Antibody technology has been the workhorse of this approach. “It’s really fit for purpose for blocking one thing at a time,” he says. The pharmaceutical industry has become extraordinarily good at creating these inhibitors, each designed to shut down a particular pathway.
But Hubbell asks a different question: Rather than inhibit one bad thing at a time, what if you could promote one good thing and generate a cascade that contravenes several bad pathways simultaneously? In inflammation, could you bias the system toward immunological tolerance instead of blocking inflammatory molecules one by one? In cancer, could you drive pro-inflammatory pathways in the tumor microenvironment that would overcome multiple immune-suppressive features at once?
This shift from inhibition to activation requires a fundamentally different toolkit — and a different kind of researcher. “We’re using biological molecules like proteins, or material-based structures — soluble polymers, supramolecular structures of nanomaterials — to drive these more fundamental features,” Hubbell explains. You can’t develop those approaches if you only understand biology, or only understand materials science, or only understand immunology. You need an understanding and a mastery of all three.
“There will be people doing AI, data science, computational science theory, people doing immunoengineering and other biological engineering, people doing materials science and quantum engineering, all really in close proximity to each other.” —Jeffrey Hubbell, NYU Tandon
Which logically leads to the question: How do you create researchers with that kind of cross-disciplinary depth?
The answer isn’t what you might expect. “There may have been a time when the objective was to have the bioengineer understand the language of biology,” Hubbell says. “But that time is long, long gone. Now the engineer needs to become a biologist, or become an immunologist, or become a neuroscientist.”
Hubbell isn’t talking about engineers learning enough biology to collaborate with biologists. He’s describing something more radical: training people whose disciplinary identity is genuinely ambiguous. “The neuroengineering students — it’s very difficult to know that they’re an engineer or a neuroscientist,” Hubbell says. “That’s the whole idea.”
His own students exemplify this. They publish in immunology journals, present at immunology conferences. “Nobody knows they’re engineers,” he says. But they bring engineering approaches — computational modeling, materials design, systems thinking — to immunological problems in ways that traditional immunologists wouldn’t.
The mechanism for creating these hybrid researchers is what Hubbell calls a “milieu.” “To learn it all on your own is hopeless,” he acknowledges, “but to learn it in a milieu becomes very, very efficient.”
NYU is expanding its facilities to include a science and technology hub designed to force encounters between people across various schools and disciplines who wouldn’t naturally cross paths. Tracey Friedman/NYU
NYU is making that milieu physical. The university has acquired a large building in Manhattan that will serve as its science and technology hub — a deliberate co-location strategy designed to force encounters between people across various schools and disciplines who wouldn’t naturally cross paths.
Juan de Pablo is the Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and Executive Dean of the NYU Tandon School of Engineering. Steve Myaskovsky, Courtesy of NYU Photo Bureau
“There will be people doing AI, data science, computational science theory, people doing immunoengineering and other biological engineering, people doing materials science and quantum engineering, all really in close proximity to each other,” Hubbell explains.
The strategy mirrors what Juan de Pablo, NYU’s Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and Executive Dean at the NYU Tandon School of Engineering, describes as organizing around “grand challenges” rather than traditional disciplines. “What drives the recruitment and the spaces and the people that we’re bringing in are the problems that we’re trying to solve,” he says. “Great minds want to have a legacy, and we are making that possible here.”
But physical proximity alone isn’t enough. The Institute is also cultivating what Hubbell calls an “explicit” rather than “tacit” approach to translation — thinking about clinical and commercial pathways from day one.
“It’s a terrible thing to solve a problem that nobody cares about,” Hubbell tells his students. To avoid that, the Institute runs “translational exercises” — group sessions where researchers map the entire path from discovery to deployment before launching multi-year research programs. Where could this fail? What experiments would prove the idea wrong quickly? If it’s a drug, how long would the clinical trial take? If it’s a computational method, how would you roll it out safely?
The new cross-institutional initiative represents a major investment in science and technology, and includes adding new faculty, state-of-the-art facilities, and innovative programs. NYU Tandon
The approach contrasts sharply with typical academic practice. “Sometimes academics tend to think about something for 20 minutes and launch a 5-year PhD program,” Hubbell says. “That’s probably not a good way to do it.” Instead, the Institute brings together people who have actually developed drugs, built algorithms, or commercialized devices — importing their hard-won experience into the planning phase before a single experiment is run.
The timing may be fortuitous. De Pablo notes that AI is compressing timelines dramatically. “What we thought was going to take 10 years to complete, we might be able to do in 5,” he says.
But he’s quick to note AI’s limitations. While tools like AlphaFold can predict how a single protein folds — a breakthrough of the last five years — biology operates at much larger scales. “What we really need to do now is design not one protein, but collections of them that work together to solve a specific problem,” de Pablo explains.
Hubbell agrees: “Biology is much bigger — many, many, many systems.” The liver and kidney are in different places but interact. The gut and brain are connected neurologically in ways researchers are just beginning to map. “AI is not there yet, but it will be someday. And that’s our job — to develop the data sets, the computational frameworks, the systems frameworks to drive that to the next steps.”
It’s a moment of unusual ambition. “At a time when we’re seeing some research institutions retrench a little bit and limit their ambitions,” de Pablo says, “we’re doing just the opposite. We’re thinking about what are the grand challenges that we want to, and need to, tackle.”
The bet is that the breakthroughs worth making can’t emerge from any single discipline working alone. They require collisions — sometimes planned, sometimes accidental — between people who speak different technical languages and are willing to develop a shared one. NYU is engineering those collisions at scale.
2026-04-27 18:00:01

This webinar covers power system modeling and simulation across multiple timescales, from quasi-static 8760 analysis through EMT studies, fault classification, and inverter-based resource grid integration.
What Attendees Will Learn
2026-04-25 02:00:02

When Yong Wang recently received one of the highest honors for early-career data visualization researchers, it marked a milestone in an extraordinary journey that began far from the world’s technology hubs.
Wang was born in a small farming village in southern China to parents with limited formal education. Today the IEEE member and associate editor of IEEE Transactions on Visualization and Computer Graphics is an assistant professor in the College of Computing and Data Science at Nanyang Technological University, in Singapore. He studies how people can employ data visualization techniques to get more out of large-scale datasets as well as advanced artificial intelligence techniques.
“Visualization helps people understand complex ideas,” he says. “If we design these tools well, they can make advanced technologies accessible to everyone.”
For his work in the field, the IEEE Computer Society visualization and graphics technical committee presented him with its 2025 Significant New Researcher Award. The recognition highlights his growing influence in fields including data visualization, human-computer interaction, and human-AI collaboration—areas becoming more important as the world generates more data than humans can easily interpret.
EMPLOYER
Nanyang Technological University, in Singapore
POSITION
Assistant professor of computing and data science
IEEE MEMBER GRADE
Member
ALMA MATERS
Harbin Institute of Technology in China; Huazhong University of Science and Technology in Wuhan, China; Hong Kong University of Science and Technology
“Visualization helps people understand complex ideas,” Wang says. “If we design these tools well, they can make advanced technologies accessible to everyone.”
Wang was born in a small farming village in southern China. China’s economy was still developing, and life in his village was modest. Most families in Hunan grew rice, vegetables, and fruit to support themselves.
Wang’s parents worked in agriculture too, and his father often traveled to cities to earn money working in a factory or on construction jobs. The extra income helped support the family and made it possible for Wang to attend college.
“I’m very grateful to my parents,” Wang says. “They never attended university, but they strongly supported my education.”
“If we build tools that help people understand information, then more people can participate in science and innovation. That’s the real power of visualization.”
Technology was scarce in the village, he says. Computers were almost nonexistent, and televisions were considered precious, expensive household possessions.
One childhood memory still makes him laugh: During a summer vacation, he and his brother spent so many hours playing video games on a simple console connected to the family’s television that the TV screen eventually burned out.
“My mother was very angry,” he recalls. “At that time, a TV was a very valuable thing.”
He says that despite never having used a laptop or experimented with electronic equipment, he was fascinated by the technologies he saw on TV shows.
His parents encouraged a practical career such as medicine or civil engineering, but he felt drawn to robotics and computing, he says.
“I didn’t really understand what computer science involved,” he says. “But from what I saw on TV, it looked exciting and advanced.”
He enrolled at Harbin Institute of Technology, in northeastern China. The esteemed university is known for its engineering programs. His major—automation—combined elements of electrical engineering, robotics, and control systems.
One of the defining experiences of his undergraduate years, he says, was a university robotics competition. Wang and his teammates designed a robot capable of autonomously navigating around obstacles.
The design was simple compared with professional systems, he acknowledges. But, he says, the experience was exhilarating. His team placed second, and Wang began to see engineering as both creative and collaborative.
He graduated with a bachelor’s degree in 2011, and then pursued a master’s degree in pattern recognition and image processing from the Huazhong University of Science and Technology, in Wuhan, China.
In 2014 he took a position as a research intern working at a technology company in Shenzhen, China.
That experience helped him clarify his future, he says: “I realized I didn’t enjoy doing repetitive work or simply following instructions. I wanted to explore ideas that interested me, and I wanted to conduct research.” The realization pushed him toward graduate school, he says.
He enrolled in the computer science Ph.D. program at the Hong Kong University of Science and Technology and earned the degree in 2018. He remained there as a postdoctoral researcher until 2020, when he moved to Singapore to join Singapore Management University as an assistant professor of computing and information systems. He moved over to Nanyang Technological University as an assistant professor in 2024.
His research focuses on a challenge facing nearly every business: how to make sense of the enormous amounts of data being generated.
He, his students, and collaborators have developed a series of approaches to recommend or automatically generate appropriate visualizations, including infographics.
It allows nontechnical people to create visualizations instead of hiring professional designers.
Another focus of Wang’s research is human-AI collaboration. AI systems can analyze data at enormous scale, but people still need to be the final decision-makers, he says.
Visualization helps bridge the gap between human intention and AI’s complex calculations by making the process an AI system uses to reach a result more transparent and understandable.
“If people understand how the AI system works,” Wang says, “they can collaborate with it more effectively.”
He recently explored how visualization techniques could help researchers understand quantum computing, a field where core concepts—such as superposition, where a bit can be in more than one state at a time—are abstract. In classical computing, the bit state is binary: It’s either 1 or 0. A quantum bit, or qubit, can be 1, 0, or both. The differences get more dizzying from there.
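The jump from a binary bit to a qubit can itself be made concrete with a few lines of code. The sketch below uses plain NumPy (not any quantum-computing library, and not Wang’s visualization tools) to show the standard textbook picture: a qubit’s state is a two-component vector of amplitudes, and superposition means both outcomes have nonzero probability until measurement.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit's state is a length-2
# complex vector (alpha, beta): amplitudes for the |0> and |1>
# outcomes, normalized so |alpha|^2 + |beta|^2 = 1.

ket0 = np.array([1.0, 0.0], dtype=complex)  # definitely 0
ket1 = np.array([0.0, 1.0], dtype=complex)  # definitely 1

# Equal superposition: "both 0 and 1" until the qubit is measured.
plus = (ket0 + ket1) / np.sqrt(2)

def measurement_probs(state):
    """Probabilities of reading out 0 or 1 when the qubit is measured."""
    return np.abs(state) ** 2

print(measurement_probs(ket0))  # [1. 0.]
print(measurement_probs(plus))  # [0.5 0.5]
```

Visualizing exactly this kind of amplitude-and-probability data, at the scale of many entangled qubits rather than one, is where the field quickly outgrows what a few print statements can convey.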
Visualization tools could help scientists monitor quantum systems and interpret quantum machine-learning models, he says.
“We live in an era of information explosions,” Wang says. “Huge amounts of data are generated, and it’s difficult for people to interpret all of it to make better business decisions.”
Data visualization offers a solution by turning complex information into images, patterns, and diagrams that people can more readily understand.
But many visualizations still must be designed manually by experts, Wang notes. It’s a time-consuming process that creates a bottleneck, he says.
His solution is to use large language models and multimodal systems that can generate text, images, video, and sensor data simultaneously and automate parts of the process.
One system developed by his research group lets users design complex infographics through natural-language instructions combined with simple interactions such as drawing on a touchscreen with a finger.
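The overall shape of such a pipeline can be sketched in a few lines. This toy example is not Wang’s system: the hand-written keyword rules below merely stand in for the large language model that would interpret the request in a real tool, and the output is a simplified declarative chart specification invented for illustration.

```python
# Toy "natural language -> chart specification" mapper. The keyword
# rules are a stand-in for an LLM; a real system would hand the spec
# to a rendering library to draw the actual visualization.

def chart_spec(request: str) -> dict:
    """Map a plain-English request to a simple declarative chart spec."""
    text = request.lower()
    if "over time" in text or "trend" in text:
        mark = "line"       # temporal data reads best as a line chart
    elif "share" in text or "proportion" in text:
        mark = "pie"        # part-of-whole comparisons
    elif "compare" in text:
        mark = "bar"        # categorical comparisons
    else:
        mark = "scatter"    # fallback for unrecognized requests
    return {"mark": mark, "request": request}

spec = chart_spec("Compare revenue across regions")
print(spec["mark"])  # bar
```

The point of the real research is to replace the brittle keyword rules with models that understand intent, so that a nonexpert’s sentence reliably becomes a well-designed chart.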
Teaching and mentoring students remain among the most meaningful parts of Wang’s career, he says.
Professional communities such as the IEEE Computer Society, he says, play a major role in helping him transform early-stage graduate students, unsure of which lines of inquiry to pursue, into independent researchers with a solid technical focus. Through conferences, publications, and technical committees, IEEE connects Wang with other researchers working in visualization, AI, and human-computer interaction, he says.
Those connections have helped him share ideas, collaborate, and stay up to date on innovations in the research community.
Receiving the Significant New Researcher award motivates him to continue pushing the field forward, he says.
Looking back, he says, the distance between his rural village in Hunan and an international research career still feels remarkable. But, he says, the journey reflects something larger about his chosen field: “If we build tools that help people understand information, then more people can participate in science and innovation.
“That’s the real power of visualization.”
2026-04-23 22:00:01

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.
The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.
We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.
We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.
The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.
We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.
Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.
So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also gives us guidance for protecting such systems in an era of powerful AI vulnerability-finding tools.
Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.
Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.
This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.
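The core of such a VulnOps loop is simple to state: replay each candidate finding against a disposable copy of the stack and keep only what reproduces. The sketch below is a hypothetical illustration of that triage shape; the `Finding` type and the stubbed `reproduces` check are invented for this example, and a real pipeline would actually execute the exploit rather than inspect a note string.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A candidate vulnerability reported by an automated scanner."""
    location: str  # e.g. file and line the scanner flagged
    note: str      # scanner's evidence for the report

def reproduces(finding: Finding) -> bool:
    """Stand-in for the expensive step: replaying the candidate exploit
    against a disposable test deployment. Stubbed here so the loop's
    shape is visible; a real check would run the exploit repeatedly."""
    return "reproduced" in finding.note

def triage(findings):
    """Split findings into confirmed vulnerabilities and false positives."""
    confirmed, false_positives = [], []
    for f in findings:
        (confirmed if reproduces(f) else false_positives).append(f)
    return confirmed, false_positives

candidates = [
    Finding("auth.c:210", "overflow, reproduced 3/3 runs"),
    Finding("api.py:88", "possible injection, never triggered"),
]
confirmed, rejected = triage(candidates)
print(len(confirmed), len(rejected))  # 1 1
```

Running this loop continuously against every build is what would turn AI bug-finding from a flood of unverified reports into a stream of confirmed, fixable defects.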
Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.
Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.
Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.
2026-04-23 21:00:01

Tom Burick has always considered himself a builder. Over the years he’s designed robots, constructed a vintage teardrop trailer, and most recently, led a group of students in building a full-scale replica of a pivotal 1940s computer.
Burick is a technology instructor at PS Academy in Gilbert, Ariz., a middle and high school for students with autism and other specialized learning needs. At the start of the 2025–26 school year, he began a project with his students to build a full-scale replica of the Electronic Numerical Integrator and Computer, or ENIAC, for the 80th anniversary of the historic computer’s construction. ENIAC was one of the world’s first programmable electronic computers. When it was built, it was about one thousand times as fast as other machines.
Before becoming a teacher, Burick owned a robotics company for a decade in the 2000s. But when a financial downturn forced him to close the business, he turned to teaching. “I had so many amazing people help me when I was young [who] really gave me their time and resources, and really changed the trajectory of my life,” Burick says. “I thought I need to pay that forward.”
As a young child in Latrobe, Pa., Burick watched the television show Lost in Space, which includes a robot character who protects the family. “He was the young boy’s best friend, and I was so captivated by that. I remember thinking to myself, I want that in my life. And that started that lifelong love affair with robotics and technology.”
He started building toy robots out of anything he could find, and in junior high school, he began adding electronics. “By early high school, I was building full-fledged autonomous, microprocessor-controlled machines,” he says. At age 15, he built a 150-pound steel firefighting robot, for which he won awards from IEEE and other organizations.
Burick kept building robots and reached out for help from local colleges and universities. He first got in touch with a student at Carnegie Mellon University, who invited him to visit campus. “My parents drove me down the next weekend, and he gave me a tour of the robotics lab. I was mesmerized. He sent me home with college textbooks and piles of metal and gears and wires,” Burick says. He would read the textbook a page at a time, reading it again and again until he felt he had an understanding of it. Then, to help fill gaps in his understanding, he got in touch with a robotics instructor at Saint Vincent College, in his hometown of Latrobe, who let him sit in on classes. Each of these adults, he says, “helped change the trajectory of my life.”
Toward the end of high school, Burick realized that college wouldn’t be the right environment for him. “I was drawn to real-world problem-solving rather than structured coursework, and I chose to continue along that path,” he says. Additionally, Burick has dyscalculia, which makes traditional mathematics more challenging for him. “It pushed me to develop alternative methods of engineering.”
The ENIAC replica Burick’s students built precisely matches what the original computer would have looked like before it was disassembled in the 1950s. Robert Gamboa
When he graduated, he worked in several tech jobs before starting his own company. In 2000, he opened a computer retail store and adjacent robotics business, White Box Robotics. The idea for the company came when Burick was building a “white box” PC from standard, off-the-shelf components, and realized there was no comparable product for robotics.
So, he started developing a modular, general-purpose platform that applied white box PC standards to mobile robots. “The robot’s chassis was like a box of Legos,” he says. You could click together two torsos to double its payload, switch out the drive system, or swap its head for a different set of sensors. He filed utility and design patents for the platform, called the 914 PC-Bot, and after merging with a Canadian defense robotics company called Frontline Robotics, started production. They sold about 200 robots in 17 countries, Burick says.
Then the 2008 financial crisis hit. White Box Robotics held on for a couple of years, shuttering in late 2010. “I got to live my life’s dream for 10 years,” he says. After closing White Box, “there was some soul searching” about what to do next. He recalled the impact his own mentors had, and decided to pay it forward by teaching.
In 2013, Burick started working in a vocational training program for young adults living with autism. The program didn’t have a technical arm, so he started one and ran it until 2019, when he was hired to be a technology instructor at PS Academy Arizona.
Burick and one of his students assemble the base for one of ENIAC’s three portable function tables, which contained banks of switches that stored numerical constants. Bri Mason
Burick feels he can connect with his students, because he is also neurodivergent. Throughout his childhood, he was told what he wasn’t able to do because of his dyscalculia diagnosis. “People tell you what it takes, but they never tell you what it gives,” Burick says.
In adulthood, he realized that some of his strengths are linked to dyscalculia, too, like strong 3D spatial reasoning. “I have this CAD program that runs in my head 24 hours a day,” he says. “I think the reason I was successful in robotics, truly, was because of the dyscalculia…. To me, [it] has always been a superpower.”
Whenever his students say something disparaging about living with autism, he shares his own experience. “You need to have maybe just a bit more tenacity than others, because there are parts of it you do have to fight through, but you come through with gifts and strengths,” he tells them.
And Burick’s classes aim to play to those strengths. “I didn’t want my technology program to feel like craft hour,” he says. Instead, through projects like the ENIAC replica, students can leverage traits many of them share, like the abilities to hyperfocus and to precisely repeat tasks.
Burick has taught his students about ENIAC for several years. While reading about it, he learned that the massive, 27-tonne computer was dismantled and partially destroyed after being decommissioned in 1955. Although a few of ENIAC’s 40 original panels are on display at museums, “there was no hope of ever seeing it together again. We wanted to give the world that experience,” Burick says.
He and his students started by learning about ENIAC, and even Burick was surprised by how complex the 80-year-old computer was. They built a one-twelfth scale model to help the students better understand what it looked like. Seeing the students light up, Burick became confident in their ability to move on to the full-scale model, and he started ordering supplies.
ENIAC was composed of 40 large metal panels arranged in a U-shape that housed its many vacuum tubes, resistors, capacitors, and switches. Twenty of the panels were accumulators with the same design, so the students started with these, then worked through smaller groupings of panels. The repeating panels brought symmetry to ENIAC, Burick says, but that repetition was also one of the main challenges of recreating it: If one part was slightly out of place, the next one would be too, and the mistake would compound.
The students installed 500 simulated vacuum tubes in each of the panels here, for a total of 18,000 vacuum tubes. Robert Gamboa
Once they constructed the panels, they added ENIAC’s three function tables, which stored numerical constants in banks of switches, then two punch-card machines. Finally, they installed 18,000 simulated vacuum tubes. In total, the project used nearly 300 square meters of thick-ream cardboard, 1,600 hot-glue-gun sticks, and 7 gallons of black paint.
The scale of the machine—and his students’ work—left Burick in awe. “By the time we were done, I felt like I was in a room full of scientists,” he says.
Previously, Burick’s students built an 8-foot-long drivable Tesla Cybertruck (“complete with a 400-watt stereo system and a subwoofer”), and he plans to keep the momentum going with another recreation—maybe from the Apollo moon missions.
“I go to work every day, and I feel passionate about robotics [and] technology. I get to share that passion with the students,” Burick says. “I get to feel what it’s like to be in the position of the people that helped me. It closes that loop, and I find that really rewarding.”