2025-05-20 04:51:17
A new computing paradigm—thermodynamic computing—has entered the scene. Okay, okay, maybe it’s just probabilistic computing by a new name. They both use noise (such as that caused by thermal fluctuations) instead of fighting it, to perform computations. But still, it’s a new physical approach.
“If you’re talking about computing paradigms, no, it’s this same computing paradigm,” as probabilistic computing, says Behtash Behin-Aein, the CTO and founder of probabilistic computing startup Ludwig Computing (named after Ludwig Boltzmann, a scientist largely responsible for the field of, you guessed it, thermodynamics). “But it’s a new implementation,” he adds.
In a recent publication in Nature Communications, New York-based startup Normal Computing detailed their first prototype of what they call a thermodynamic computer. They demonstrated that the device can harness noise to invert matrices and to perform Gaussian sampling, which underlies some AI applications.
Conventionally, noise is the enemy of computation. However, certain applications actually rely on artificially generated noise. And using naturally occurring noise can be vastly more efficient.
“We’re focusing on algorithms that are able to leverage noise, stochasticity, and non-determinism,” says Zachery Belateche, silicon engineering lead at Normal Computing. “That algorithm space turns out to be huge, everything from scientific computing to AI to linear algebra. But a thermodynamic computer is not going to be helping you check your email anytime soon.”
For these applications, a thermodynamic—or probabilistic—computer starts out with its components in some semi-random state. Then, the problem the user is trying to solve is programmed into the interactions between the components. Over time, these interactions allow the components to come to equilibrium. This equilibrium is the solution to the computation.
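To make that concrete, here is a minimal software sketch of the equilibrium principle, assuming a toy system simulated with Langevin dynamics; it illustrates the paradigm, not Normal Computing's actual hardware or algorithm. Components jostled by Gaussian noise and coupled through a matrix A settle into an equilibrium whose statistics encode the inverse of A, which is one way noise can be made to invert a matrix.

```python
import numpy as np

# Toy sketch (not Normal Computing's design): a noisy, coupled system whose
# discretized Langevin update has an equilibrium distribution proportional to
# exp(-x^T A x / 2). That distribution's covariance is A^{-1}, so averaging
# outer products of the settled state "reads out" the matrix inverse from noise.
rng = np.random.default_rng(0)

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])           # the "problem", programmed into the couplings
x = rng.normal(size=2)               # components start in a semi-random state
dt, burn_in, n_steps = 1e-2, 20_000, 200_000

outer_sum, kept = np.zeros((2, 2)), 0
for step in range(n_steps):
    noise = rng.normal(size=2) * np.sqrt(2.0 * dt)
    x = x - (A @ x) * dt + noise     # relax toward equilibrium under noise
    if step >= burn_in:
        outer_sum += np.outer(x, x)
        kept += 1

print("sampled estimate of A^-1:\n", outer_sum / kept)
print("exact A^-1:\n", np.linalg.inv(A))
```

The estimate sharpens as more post-burn-in samples are collected, mirroring the trade-off a physical thermodynamic computer would make between run time and precision.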
This approach is a natural fit for certain scientific computing applications that already rely on randomness, such as Monte Carlo simulations. It is also well suited to Stable Diffusion, the AI image-generation algorithm, and to a type of AI known as probabilistic AI. Surprisingly, it also appears to be well suited to some linear algebra computations that are not inherently probabilistic, which makes the approach more broadly applicable to AI training.
“Now we see with AI that paradigm of CPUs and GPUs is being used, but it’s being used because it was there. There was nothing else. Say I found a gold mine. I want to basically dig it. Do I have a shovel? Or do I have a bulldozer? I have a shovel, just dig,” says Mohammad C. Bozchalui, the CEO and co-founder of Ludwig Computing. “We are saying this is a different world which requires a different tool.”
Normal Computing’s prototype chip, which they termed the stochastic processing unit (SPU), consists of eight capacitor-inductor resonators and random noise generators. Each resonator is connected to each other resonator via a tunable coupler. The resonators are initialized with randomly generated noise, and the problem under study is programmed into the couplings. After the system reaches equilibrium, the resonator units are read out to obtain the solution.
“In a conventional chip, everything is very highly controlled,” says Gavin Crooks, a staff research scientist at Normal Computing. “Take your foot off the control a little bit, and the thing will naturally start behaving more stochastically.”
Although this was a successful proof of concept, the Normal Computing team acknowledges that the prototype is not scalable. They have since amended the design, getting rid of the tricky-to-scale inductors. They now plan to build the next version in silicon rather than on a printed circuit board, and they expect that chip to come out later this year.
How far this technology can be scaled remains to be seen. The design is CMOS-compatible, but there is a lot to be worked out before it can be used to solve large-scale real-world problems. “It’s amazing what they have done,” Bozchalui of Ludwig Computing says. “But at the same time, there is a lot to be worked to really take it from what is today to commercial product to something that can be used at the scale.”
Although probabilistic computing and thermodynamic computing are essentially the same paradigm, there is a cultural difference. The companies and researchers working on probabilistic computing almost exclusively trace their academic roots to Supriyo Datta’s group at Purdue University. The three cofounders of Normal Computing, however, have no ties to Purdue and come from backgrounds in quantum computing.
This results in the Normal Computing cofounders having a slightly different vision. They imagine a world where different kinds of physics are utilized for their own computing hardware, and every problem that needs solving is matched with the most optimal hardware implementation.
“We coined this term physics-based ASICs,” Normal Computing’s Belateche says, referring to application-specific integrated circuits. In their vision, a future computer will have access to conventional CPUs and GPUs, but also a quantum computing chip, a thermodynamic computing chip, and any other paradigm people might dream up. And each computation will be sent to an ASIC that uses the physics that’s most appropriate for the problem at hand.
2025-05-17 01:30:03
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Behind the scenes at DARPA Triage Challenge Workshop 2 at the Guardian Centers in Perry, Ga.
[ DARPA ]
Watch our coworker in action as he performs high-precision stretch routines enabled by 31 degrees of freedom. Designed for dynamic adaptability, this is where robotics meets real-world readiness.
[ LimX Dynamics ]
Thanks, Jinyan!
Featuring a lightweight design and continuous operation capabilities under extreme conditions, LYNX M20 sets a new benchmark for intelligent robotic platforms working in complex scenarios.
[ DEEP Robotics ]
The sound in this video is either excellent or terrible, I’m not quite sure which.
[ TU Berlin ]
Humanoid loco-manipulation holds transformative potential for daily service and industrial tasks, yet achieving precise, robust whole-body control with 3D end-effector force interaction remains a major challenge. Prior approaches are often limited to lightweight tasks or quadrupedal/wheeled platforms. To overcome these limitations, we propose FALCON, a dual-agent reinforcement-learning-based framework for robust force-adaptive humanoid loco-manipulation.
[ FALCON ]
An MRSD Team at the CMU Robotics Institute is developing a robotic platform to map environments through perceptual degradation, identify points of interest, and relay that information back to first responders. The goal is to reduce information blindness and increase safety.
[ Carnegie Mellon University ]
We introduce an eldercare robot (E-BAR) capable of lifting a human body, assisting with postural changes/ambulation, and catching a user during a fall, all without the use of any wearable device or harness. With a minimum width of 38 centimeters, the robot’s small footprint allows it to navigate the typical home environment. We demonstrate E-BAR’s utility in multiple typical home scenarios that elderly persons experience, including getting into/out of a bathtub, bending to reach for objects, sit-to-stand transitions, and ambulation.
[ MIT ]
Sanctuary AI had the pleasure of accompanying Microsoft to Hannover Messe, where we demonstrated how our technology is shaping the future of work with autonomous labor powered by physical AI and general-purpose robots.
[ Sanctuary AI ]
Watch how drywall finishing machines incorporate collaborative robots, and learn why Canvas chose the Universal Robots platform.
[ Canvas ] via [ Universal Robots ]
We’ve officially put a stake in the ground in Dallas–Fort Worth. Torc’s new operations hub is open for business—and it’s more than just a dot on the map. It’s a strategic launchpad as we expand our autonomous freight network across the southern United States.
[ Torc ]
This Stanford Robotics Center talk is by Jonathan Hurst at Agility Robotics, on “Humanoid Robots: From the Warehouse to Your House.”
How close are we to having safe, reliable, useful in-home humanoids? If you believe recent press, it’s just around the corner. Unquestionably, advances in AI and robotics are driving innovation and activity in the sector; it truly is an exciting time to be building robots! But what does it really take to execute on the vision of useful, human-centric, multipurpose robots? Robots that can operate in human spaces, predictably and safely? We think it starts with humanoids in warehouses, an unsexy but necessary beachhead market to our future with robots as part of everyday life. I’ll talk about why a humanoid is more than a sensible form factor, it’s inevitable; and I will speak to the excitement around a ChatGPT moment for robotics, and what it will take to leverage AI advances and innovation in robotics into useful, safe humanoids.
[ Stanford ]
2025-05-16 02:00:03
For more than three years, an IEEE Standards Association working group has been refining a draft standard for procuring artificial intelligence and automated decision systems, IEEE 3119-2025. It is intended to help procurement teams identify and manage risks in high-risk domains. Such systems are used by government entities involved in education, health, employment, and many other public sector areas. Last year the working group partnered with a European Union agency to evaluate the draft standard’s components and to gather information about users’ needs and their views on the standard’s value.
At the time, the standard included five processes to help users develop their solicitations and to identify, mitigate, and monitor harms commonly associated with high-risk AI systems.
Those processes were problem definition, vendor evaluation, solution evaluation, contract negotiation, and contract monitoring.
The EU agency’s feedback led the working group to reconsider the processes and the sequence of several activities. The final draft now includes an additional process: solicitation preparation, which comes right after the problem definition process. The working group believes the added process addresses the challenges organizations experience with preparing AI-specific solicitations, such as the need to add transparent and robust data requirements and to incorporate questions regarding the maturity of vendor AI governance.
The EU agency also emphasized that it’s essential to include solicitation preparation, which gives procurement teams additional opportunities to adapt their solicitations with technical requirements and questions regarding responsible AI system choices. Leaving space for adjustments is especially relevant when acquisitions of AI are occurring within emerging and changing regulatory environments.
Currently there are several internationally accepted standards for AI management, AI ethics, and general software acquisition. Those from the IEEE Standards Association and the International Organization for Standardization target AI design, use, and life-cycle management.
Until now, there has been no internationally accepted, consensus-based standard that focuses on the procurement of AI tools and offers operational guidance for responsibly purchasing high-risk AI systems that serve the public interest.
The IEEE 3119 standard addresses that gap. Unlike the AI management standard ISO 42001 and other certifications related to generic AI oversight and risk governance, IEEE’s new standard offers a risk-based, operational approach to help government agencies adapt traditional procurement practices.
Governments have an important role to play in the responsible deployment of AI. However, market dynamics and unequal AI expertise between industry and government can be barriers that discourage success.
One of the standard’s core goals is to better inform procurement leaders about what they are buying before they make high-risk AI purchases. IEEE 3119 defines high-risk AI systems as those that make or are a substantial factor in making consequential decisions that could have significant impacts on people, groups, or society. The definition is similar to the one used in Colorado’s 2024 AI Act, the first U.S. state-level law comprehensively addressing high-risk systems.
The standard’s processes, however, do complement ISO 42001 in many ways. The relationship between the two is illustrated below.
| IEEE 3119 clause | ISO/IEC 42001:2023 clause |
|---|---|
| 6 Problem definition | 4.1 Understanding the organization and its context; 4.2 Understanding the needs and expectations of interested parties; 6.1.4 AI system impact assessment |
| 7 Solicitation preparation process | 4.3 Determining the scope of the AI management system; 4.4 AI management system and its processes |
| 8 Vendor evaluation process | 5 Leadership (commitment, policies, roles, etc.); 6.1.2 AI risk assessment; 7.1 Resources; 7.2 Competence; 7.3 Awareness |
| 9 Solution evaluation process | 4.3 Determining the scope of the AI management system; 4.4 AI management system and its processes; 6.1 Actions to address risks and opportunities; 7.4 Communication; 7.5 Documented information |
| 10 Contract negotiation process | 6.1.3 AI risk treatment; 9.1 Monitoring, measurement, analysis, and evaluation; 10.2 Nonconformity and corrective action |
| 11 Contract monitoring process | 4.4 AI management system; 6.3 Planning of changes; 7.1 Resources; 7.2 Competence; 7.5.2 Creating and updating documented information; 7.5.3 Controlling documented information; 8.1 Operational planning and control; 8.2 AI risk assessment; 8.3 AI risk treatment; 8.4 AI system impact assessment; 9.1 Monitoring, measurement, analysis, and evaluation; 9.2 Internal audit; 9.3 Management review; 10.2 Nonconformity and corrective action |

Source: IEEE 3119-2025 Working Group
International standards, often characterized as soft law, are used to shape AI development and encourage international cooperation regarding its governance.
Hard laws for AI, or legally binding rules and obligations, are a work in progress around the world. In the United States, a patchwork of state legislation governs different aspects of AI, and the approach to national AI regulation is fragmented, with different federal agencies implementing their own guidelines.
Europe has led by passing the European Union’s AI Act, which began governing AI systems based on their risk levels when it went into effect last year.
But the world lacks regulatory hard laws with an international scope.
The IEEE 3119-2025 standard does align with existing hard laws. Due to its focus on procurement, the standard supports the high-risk provisions outlined in the EU AI Act’s Chapter III and Colorado’s AI Act. The standard also conforms to the proposed Texas HB 1709 legislation, which would mandate reporting on the use of AI systems by certain business entities and state agencies.
Because most AI systems used in the public sector are procured rather than built in-house, IEEE 3119 applies to commercial AI products and services that don’t require substantial modifications or customizations.
The standard is intended for:
The IEEE Standards Association has partnered with the AI Procurement Lab to offer the IEEE Responsible AI Procurement Training program. The course covers how to apply the standard’s core processes and adapt current practices for the procurement of high-risk AI.
The standard includes over 26 tools and rubrics across the six processes, and the training program explains how to use many of these tools. For example, the training includes instructions on how to conduct a risk-appetite analysis, apply the vendor evaluation scoring guide to analyze AI vendor claims, and create an AI procurement “risk register” tied to identified use-case risks and their potential mitigations. The training session is now available for purchase.
It’s still early days for AI integration. Decision makers don’t yet have much experience purchasing AI for high-risk domains or mitigating the associated risks. The IEEE 3119-2025 standard aims to help agencies build and strengthen their AI risk mitigation muscles.
2025-05-15 05:33:07
There’s a mathematical concept called the kissing number. Somewhat disappointingly, it’s got nothing to do with actual kissing. It enumerates how many spheres can touch (or “kiss”) a single sphere of equal size without crossing it. In one dimension, the kissing number is 2. In two dimensions, it’s 6 (think The New York Times’ spelling bee puzzle configuration). As the number of dimensions grows, the answer becomes less obvious: For most dimensionalities over 4, only upper and lower bounds on the kissing number are known. Now, an AI agent developed by Google DeepMind called AlphaEvolve has made its contribution to the problem, increasing the lower bound on the kissing number in 11 dimensions from 592 to 593.
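For a concrete picture of the definition (in two dimensions only, and nothing to do with AlphaEvolve’s 11-dimensional result), the short script below checks the familiar hexagonal arrangement: six unit circles placed around a central unit circle, each touching it without overlapping one another.

```python
import numpy as np

# Textbook illustration of the kissing number in 2D: six unit circles
# arranged hexagonally around a central unit circle at the origin.
angles = np.arange(6) * np.pi / 3
centers = 2 * np.column_stack([np.cos(angles), np.sin(angles)])

# Each neighbor "kisses" the central circle: center-to-center distance is
# exactly 2, the sum of the two unit radii.
assert np.allclose(np.linalg.norm(centers, axis=1), 2.0)

# No two neighbors overlap: every pairwise distance is at least 2.
for i in range(6):
    for j in range(i + 1, 6):
        assert np.linalg.norm(centers[i] - centers[j]) >= 2.0 - 1e-9

print("6 non-overlapping unit circles touch the central circle in 2D")
```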
This may seem like an incremental improvement on the problem, especially given that the upper bound on the kissing number in 11 dimensions is 868, so the unknown range is still quite large. But it represents a novel mathematical discovery by an AI agent, and challenges the idea that large language models are not capable of original scientific contributions.
And this is just one example of what AlphaEvolve has accomplished. “We applied AlphaEvolve across a range of open problems in research mathematics, and we deliberately picked problems from different parts of math: analysis, combinatorics, geometry,” says Matej Balog, a research scientist at DeepMind that worked on the project. They found that for 75 percent of the problems, the AI model replicated the already known optimal solution. In 20 percent of cases, it found a new optimum that surpassed any known solution. “Every single such case is a new discovery,” Balog says. (In the other 5 percent of cases, the AI converged on a solution that was worse than the known optimal one.)
The model also developed a new algorithm for matrix multiplication—the operation that underlies much of machine learning. A previous version of DeepMind’s AI model, called AlphaTensor, had already beaten the best previously known algorithm for multiplying 4-by-4 matrices, which dated to 1969. AlphaEvolve found a more general version of that improved algorithm.
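For context, the 1969 baseline mentioned above is Strassen’s algorithm, which multiplies matrices split into 2-by-2 blocks using seven block multiplications instead of the naive eight. The sketch below shows that classic construction, not AlphaTensor’s or AlphaEvolve’s newer algorithms, which follow the same spirit of trading extra additions for fewer multiplications.

```python
import numpy as np

# Strassen's 1969 construction: one level of 2x2 block splitting with
# 7 block multiplications (M1..M7) instead of 8.
def strassen_once(A, B):
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)

    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
assert np.allclose(strassen_once(A, B), A @ B)  # 7 block products, same result
```

Applied recursively, this trick pushes the asymptotic cost of matrix multiplication below the cubic scaling of the schoolbook method, which is why even small reductions in multiplication counts for fixed block sizes matter.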
DeepMind’s AlphaEvolve made improvements to several practical problems at Google. Google DeepMind
In addition to abstract math, the team applied the model to practical problems that Google faces every day. The AI was used to optimize data-center orchestration, gaining a 1 percent improvement; to aid the design of the next Google tensor processing unit; and to discover an improvement to a kernel used in Gemini training, leading to a 1 percent reduction in training time.
“It’s very surprising that you can do so many different things with a single system,” says Alexander Novikov, a senior research scientist at DeepMind who also worked on AlphaEvolve.
AlphaEvolve is able to be so general because it can be applied to almost any problem that can be expressed as code, and which can be checked by another piece of code. The user supplies an initial stab at the problem—a program that solves the problem at hand, however suboptimally—and a verifier program that checks how well a piece of code meets the required criteria.
Then, a large language model, in this case Gemini, comes up with other candidate programs to solve the same problem, and each one is tested by the verifier. From there, AlphaEvolve uses a genetic algorithm such that the “fittest” of the proposed solutions survive and evolve to the next generation. This process repeats until the solutions stop improving.
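Below is a heavily simplified sketch of that propose-verify-select loop. The names here (propose_variants, verifier, evolve) are illustrative stand-ins: in AlphaEvolve the proposals come from Gemini models editing real code, whereas this toy just perturbs a number so the example stays self-contained and runnable.

```python
import random

def verifier(candidate: float) -> float:
    """Score a candidate; higher is better. Toy objective: maximize -(x - 3)^2."""
    return -(candidate - 3.0) ** 2

def propose_variants(parent: float, n: int) -> list:
    """Stand-in for the LLM: perturb the parent to produce new candidates."""
    return [parent + random.gauss(0, 0.5) for _ in range(n)]

def evolve(initial: float, generations: int = 50, population: int = 20) -> float:
    survivors = [initial]
    for _ in range(generations):
        # Each survivor proposes offspring; the verifier scores everyone.
        candidates = survivors + [
            child for parent in survivors
            for child in propose_variants(parent, population // len(survivors))
        ]
        candidates.sort(key=verifier, reverse=True)
        survivors = candidates[:4]          # the "fittest" seed the next generation
    return survivors[0]

random.seed(0)
best = evolve(initial=0.0)
print(round(best, 3))   # converges toward 3.0, the verifier's optimum
```

The essential structure is the same as in AlphaEvolve: candidates that score well on the verifier seed the next generation, and the loop stops when improvements level off.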
AlphaEvolve uses an ensemble of Gemini large language models (LLMs) in conjunction with an evaluation code, all orchestrated by a genetic algorithm to optimize a piece of code. Google DeepMind
“Large language models came around, and we started asking ourselves, is it the case that they are only going to add what’s in the training data, or can we actually use them to discover something completely new, new algorithms or new knowledge?” Balog says. This research, Balog claims, shows that “if you use the large language models in the right way, then you can, in a very precise sense, get something that’s provably new and provably correct in the form of an algorithm.”
AlphaEvolve comes from a long lineage of DeepMind’s models, going back to AlphaZero, which stunned the world by learning to play chess, Go, and other games better than any human player without using any human knowledge—just by playing the game and using reinforcement learning to master it. Another math-solving AI based on reinforcement learning, AlphaProof, performed at the silver-medalist level on the 2024 International Math Olympiad.
For AlphaEvolve, however, the team broke from the reinforcement learning tradition in favor of the genetic algorithm. “The system is much simpler,” Balog says. “And that actually has consequences, that it’s much easier to set up on a wide range of problems.”
The team behind AlphaEvolve hopes to evolve their system in two ways.
First, they want to apply it to a broader range of problems, including those in the natural sciences. To pursue this goal, they are planning to open up an early access program for interested academics to use AlphaEvolve in their research. It may be harder to adapt the system to the natural sciences, as verification of proposed solutions may be less straightforward. But, Balog says, “We know that in the natural sciences, there are plenty of simulators for different types of problems, and then those can be used within AlphaEvolve as well. And we are, in the future, very much interested in broadening the scope in this direction.”
Second, they want to improve the system itself, perhaps by coupling it with another DeepMind project: the AI coscientist. This AI also uses an LLM and a genetic algorithm, but it focuses on hypothesis generation in natural language. “They develop these higher-level ideas and hypotheses,” Balog says. “Incorporating this component into AlphaEvolve-like systems, I believe, will allow us to go to higher levels of abstraction.”
These prospects are exciting, but for some they may also sound menacing—for example, AlphaEvolve’s optimization of Gemini training may be seen as the beginning of recursively self-improving AI, which some worry would lead to a runaway intelligence explosion referred to as the singularity. The DeepMind team maintains that that is not their goal, of course. “We are excited to contribute to advancing AI that benefits humanity,” Novikov says.
2025-05-14 20:00:03
It’s not a new problem: From early telephones to modern cellphones, everyday liquids have frequently conflicted with devices that must stay dry. Consumers often take the blame when leaks and spills inevitably occur.
Rachel Plotnick, an associate professor of cinema and media studies at Indiana University Bloomington, studies the relationship between technology and society. Last year, she spoke to IEEE Spectrum about her research on how people interact with buttons and tactile controls. In her new book, License to Spill: Where Dry Devices Meet Liquid Lives (The MIT Press, 2025), Plotnick explores the dynamic between everyday wetness and media devices through historical and contemporary examples, including cameras, vinyl records, and laptops. This adapted excerpt looks back at analog telephones of the 1910s through 1930s, the common practices that interrupted service, and the “trouble men” who were sent to repair phones and reform messy users.
Mothers never liked to blame their babies for failed telephone service. After all, what harm could a bit of saliva do? Yet in the early decades of the 20th century, reports of liquid-gone-wrong with telephones reached the pages of popular women’s magazines and big-city newspapers as evidence of basic troubles that could befall consistent service. Teething babies were particularly called out. The Boston Daily Globe in 1908 recounted, for instance, how a mother only learned her lesson about her baby’s cord chewing when the baby received a shock—or “got stung”—and the phone service went out. These youthful oral fixations rarely caused harm to the chewer, but were “injurious” to the telephone cord.
License to Spill is Rachel Plotnick’s second book. Her first, Power Button: A History of Pleasure, Panic, and the Politics of Pushing (The MIT Press, 2018), explores the history and politics of push buttons. The MIT Press
As more Americans encountered telephones in the decades before World War II, those devices played a significant role in daily life. That daily life was filled with wet conditions, not only teething babies but also “toy poodles, the ever-present spittoon, overshoes…and even people talking while in the bathtub,” according to a 1920 article from the journal Telephony. Painters washed ceilings, which dripped; telephones sat near windows during storms; phone cords came in contact with moist radiators. A telephone chief operator who handled service complaints recounted that “a frequent combination in interior decoration is the canary bird and desk telephone occupying the same table. The canary bird includes the telephone in his morning bath,” thus leading to out-of-order service calls.
Within the telephone industry, consensus built around liquids as a hazard. As a 1913 article on telephone service stated ominously, “Water is one of the worst enemies.” At the time, cords were typically made from silk tinsel and could easily corrode from wetness, while any protective treatment tended to make them too brittle. But it wasn’t an elemental force acting alone or fragile materials that bothered phone workers. Rather, the blame fell on the abusing consumer—the “energetic housewife” who damaged wiring by scrubbing her telephone with water or cleaning fluid, and men in offices who dangerously propped their wet umbrellas against the wire. Wetness lurked everywhere in people’s spaces and habits; phone companies argued that one could hardly expect proper service under such circumstances—especially if users didn’t learn to accommodate the phone’s need for dryness.
In telephony’s infancy, though, users didn’t always make the connection between liquidity and breakdown and might not even notice the wetness, at least in a phone company’s estimation.
This differing appraisal of liquids caused problems when telephone customers expected service that would not falter and directed outrage at their provider when outages did occur. Consumers even sometimes admitted to swearing at the telephone receiver and haranguing operators. Telephone company employees, meanwhile, faced intense scrutiny and pressure to tend to telephone infrastructures. “Trouble” took two forms, then, in dealing with customers’ frustration over outages and in dealing with the damage from the wetness itself.
Telephone breakdowns required determinations about the outage’s source. “Trouble men” and “trouble departments” hunted down the probable cause of the damage, which meant sussing out babies, sponges, damp locations, spills, and open windows. If customers wanted to lay blame at workers’ feet in these moments, then repairers labeled customers as abusers of the phone cord. One author attributed at least 50 percent of telephone trouble to cases where “someone has been careless or neglectful.” Trouble men employed medical metaphors to describe their work, as in “he is a physician, and he makes the ills that the telephone is heir to his life study.”
Stories about this investigative work abounded. They typically emphasized the user’s ignorance and established the trouble man as the voice of reason, as in the case of an ill-placed wet umbrella leaned up against the telephone wiring. It didn’t seem to occur to the telephone worker that the umbrella user simply didn’t notice the umbrella’s positioning. Phone companies thus tried to make wetness a collective problem—for instance, by taking out newspaper announcements that commented on how many households lost power in a particular storm due to improper umbrella habits.
Even if a consumer knew the cord had gotten wet, they didn’t necessarily blame it as the cause of the outage. The repairer often used this as an opportunity to properly socialize the user about wetness and inappropriate telephone treatment. These conversations didn’t always go well: A 1918 article in Popular Science Monthly described an explosive argument between an infuriated woman and a phone company employee over a baby’s cord habits. The permissive mother and teething child had become emblematic of misuse, a photograph of them appearing in Bell Telephone News in 1917 as evidence of common trouble that a telephone (and its repairer) might encounter. However, no one blamed the baby; telephone workers unfailingly held mothers responsible as “bad” users.
Teething babies and the mothers that let them play with phone cords were often blamed for telephone troubles. The Telephone Review/License to Spill
Repair work often involved special tools meant to identify the source of the outage. Not unlike a doctor relying upon an X-ray to visualize and interpret a patient’s body, the trouble man relied on an apparatus known as the Telefault to evaluate breakages. The repairer attached an exploring coil to a telephone receiver and then generated an intermittent current that, when sent out over the malfunctioning wire, allowed him to hear the source of the fault. This wasn’t always an easy process, but linemen nevertheless recommended the Telefault through testimonials and articles. The machine and trouble man together functioned as co-testers of wetness, making everyday life’s liquidity diagnosable and interpretable.
Armed with such a tool, repairers glorified their own expertise. One wire chief was celebrated as the “original ‘find-out artist’” who could determine a telephone’s underlying troubles even in tricky cases. Telephone company employees leveraged themselves as experts who could attribute wetness’s causes to—in their estimation—uneducated (and even dimwitted) customers, who were often female. Women were often the earliest and most engaged phone users, adopting the device as a key mechanism for social relations, and so they became an easy target.
Phone repairers were constructing everyday life as a problem for uninterrupted service; untamed mouths, clumsy hands, and wet umbrellas all stood at odds with connectivity.
Though the phone industry and repairers were often framed as heroes, troubleshooting took its toll on overextended phone workers, and companies suffered a financial burden from repairs. One estimate by the American Telephone and Telegraph Company found that each time a company “clear[ed] wet cord trouble,” it cost a dollar. Phone companies portrayed the telephone as a fragile device that could be easily damaged by everyday life, aiming to make the subscriber a proactively “dry” and compliant user.
Telephone workers also quantified the cost of moisture incidents that impaired good service. According to an investigation conducted by an Easton, Pa., central office employee, a baby chewing on a cord could lead to 1 hour and 45 minutes of lost service, while a spilled pitcher of water would cause a whopping 8-hour outage. Other quantifications related to spilled whisky, mustard, wet hands, and mops. In a cheeky summary of this work, a reporter reminded readers that the investigator did not recommend “doing away with babies, sponges and wet bouquets” but rather offered his statistics “as an educational hint to keep the telephone cord away from dampness.”
Everyday sources of wetness, including mops and mustard, could cause hours of phone interruption. Telephony/License to Spill
A blossoming accessory market also emerged, which focused on moving phones away from sources of moisture. The telephone bracket, for example, clamped onto a desk and, like a “third arm” or “human arm,” would “hold [the phone] out of your way when not in use; brings it where you want it at a touch.” The Equipoise Telephone Arm was used in offices and on ships as a sort of worker’s appendage. One company’s advertisements promised that the Equipoise could prevent liquid messes—like overturned inkstands—and could stop cords from getting tangled or impeding one’s work.
Although telephone companies put significant effort into reforming their subscribers, the increasing pervasiveness of telephony began to conflict with these abstinent aims. Thus, a new technological solution emerged that put the burden on moisture-proofing the wire. The Stromberg-Carlson Telephone Manufacturing Co. of Rochester, N.Y., began producing copper wire that featured an insulating enamel, two layers of silk, the company’s moisture-proof compound, and a layer of cotton. Called Duratex, the cord withstood a test in which the manufacturer submerged it in water for 48 hours. In its advertising, Stromberg-Carlson warned that many traditional cords—even if they seemed to dry out after wetting—had sustained interior damage so “gradual that it is seldom noticed until the subscriber complains of service.”
Western Electric, another manufacturer of liquid-friendly cords, claimed its moisture-proof and “hard-knock proof” cord could handle “rough” conditions and wore its coating like the Charles Dickens character Tony Weller in The Pickwick Papers, with his many layers of clothing. The product’s hardiness would allow the desk telephone to “withstand any climate,” even one hostile to communication technology.
Telephone companies that deployed these cords saw significant cost benefits. A report from Bell Telephone noted that in 1919, when it installed 1,800,000 of these protected cords, it began saving US $90,000 per year (about $1.6 million in today’s dollars). By 1926, that same report concluded, the company had saved $400,000. But something else significant had shifted in this transition that involved far more than developing a moisture-proof solution. The cultural balance tilted from encouraging consumers to behave properly to insulating these media technologies from their everyday circumstances.
This subtle change meant that the burden to adapt fell to the device rather than the user. As telephone wires began to “penetrate everywhere,” they were imagined as fostering constant and unimpeded connectivity that not even saliva or a spilled drink could interrupt. The move to cord protection was not accompanied by a great deal of fanfare, however. As part of telephone infrastructure, cords faded into the background of conversations.
Excerpted from License to Spill by Rachel Plotnick. Reprinted with permission from The MIT Press. Copyright 2025.
2025-05-14 02:00:04
By 2030, there will be a global shortage of 85 million workers, many of them in technical fields, according to the World Economic Forum. Many industries that need to employ technical workers will be impacted by the shortage, which is projected to cost them up to US $8.5 trillion in unrealized revenue.
Many technical roles now require university degrees. However, as companies consider how to overcome the worker shortage, some are reevaluating their higher education requirements for certain roles requiring specialized skills.
Those jobs might include technician, electrician, and programmer, along with other positions that compose the skilled technical workforce, as described by SRI International’s Center for Innovation Strategy and Policy.
Positions that don’t require higher education widen the pool of candidates.
Even if they eliminate the need for a degree, organizations will still need to rely on some kind of credential to ensure that job candidates have the skills necessary to do the job. One option is the skills-based microcredential.
Microcredentials are issued when learners prove mastery of a specific skill. Unlike traditional university degrees and course certificates, microcredential programs are not based on successfully completing a full learning program. Instead, a student might earn multiple microcredentials in a single program based on demonstrated skills. A qualified instructor, using an assessment instrument, determines whether a learner has acquired the skill and earned the credential.
The IEEE microcredentials program offers standardized credentials in collaboration with training organizations and universities seeking to provide skills-based credentials outside formal degree programs. IEEE, as the world’s largest technical professional organization, has decades of experience offering industry-relevant credentials and expertise in global standardization.
IEEE microcredentials are industry-driven professional credentials that focus on needed skills. The program allows technical learning providers to supply credentials that bear the IEEE logo. The logo on a microcredential signals to employers that the instruction has been independently vetted and that the institution is qualified to issue the credential. Credentials issued through the IEEE program include certificates and digital badges.
Training providers that want to offer standardized microcredentials can apply to the program to become approved. A committee reviews the applications to ensure that providers are credible, offer training within IEEE’s fields of interest, have qualified instructors, and have well-defined assessments.
Once a provider is approved, IEEE will work with it to benchmark the credentialing needs for each course, including identifying the skills to be recognized, designing the microcredentials, and creating a credential-issuing process. Upon the learner’s successful completion of the program, IEEE will issue the microcredentials on behalf of the training provider.
Microcredentials are stackable; students can earn them from different programs and institutions to demonstrate their growing skill set. The microcredentials can be listed on résumés and CVs and shared on LinkedIn and other professional networking websites.
All IEEE microcredentials that a learner earns are stored within a secure digital wallet for easy reference. The wallet also provides information about the program that issued each credential.