2026-01-29 00:23:00
AI is driving unprecedented investment in massive data centers and in an energy supply that can support its huge computational appetite. One potential source of electricity for these facilities is next-generation nuclear power plants, which could be cheaper to construct and safer to operate than their predecessors.
Watch a discussion with our editors and reporters on hyperscale AI data centers and next-gen nuclear—two featured technologies on the MIT Technology Review 10 Breakthrough Technologies of 2026 list.
Speakers: Amy Nordrum, Executive Editor, Operations; Casey Crownhart, Senior Climate Reporter; and Mat Honan, Editor in Chief
Recorded on January 28, 2026
2026-01-28 22:57:37
The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents.
Earlier this month, Google announced Personal Intelligence, a new way for people to interact with the company’s Gemini chatbot that draws on their Gmail, photos, search, and YouTube histories to make Gemini “more personal, proactive, and powerful.” It echoes similar moves by OpenAI, Anthropic, and Meta to add new ways for their AI products to remember and draw from people’s personal details and preferences. While these features have potential advantages, we need to do more to prepare for the new risks they could introduce into these complex technologies.
Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes. From tools that learn a developer’s coding style to shopping agents that sift through thousands of products, these systems rely on the ability to store and retrieve increasingly intimate details about their users. But doing so over time introduces alarming, and all-too-familiar, privacy vulnerabilities––many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities.
Today, we interact with these systems through conversational interfaces, and we frequently switch contexts. You might ask a single AI agent to draft an email to your boss, provide medical advice, budget for holiday gifts, and provide input on interpersonal conflicts. Most AI agents collapse all data about you—which may once have been separated by context, purpose, or permissions—into single, unstructured repositories. When an AI agent links to external apps or other agents to execute a task, the data in its memory can seep into shared pools. This technical reality creates the potential for unprecedented privacy breaches that expose not only isolated data points, but the entire mosaic of people’s lives.
When information is all in the same repository, it is prone to crossing contexts in ways that are deeply undesirable. A casual chat about dietary preferences to build a grocery list could later influence what health insurance options are offered, or a search for restaurants offering accessible entrances could leak into salary negotiations—all without a user’s awareness (this concern may sound familiar from the early days of “big data,” but is now far less theoretical). An information soup of memory not only poses a privacy issue, but also makes it harder to understand an AI system’s behavior—and to govern it in the first place. So what can developers do to fix this problem?
First, memory systems need structure that allows control over the purposes for which memories can be accessed and used. Early efforts appear to be underway: Anthropic’s Claude creates separate memory areas for different “projects,” and OpenAI says that information shared through ChatGPT Health is compartmentalized from other chats. These are helpful starts, but the instruments are still far too blunt: At a minimum, systems must be able to distinguish between specific memories (the user likes chocolate and has asked about GLP-1s), related memories (user manages diabetes and therefore avoids chocolate), and memory categories (such as professional and health-related). Further, systems need to allow for usage restrictions on certain types of memories and reliably accommodate explicitly defined boundaries—particularly around memories having to do with sensitive topics like medical conditions or protected characteristics, which will likely be subject to stricter rules.
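A minimal sketch, in Python, of what that kind of structure could look like: each memory carries a category, a sensitivity flag, and an explicit set of purposes it may be used for, and retrieval checks those purposes before anything reaches the model. The field names, categories, and purpose labels here are illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    # Illustrative categories; a real system would need a richer taxonomy.
    PROFESSIONAL = "professional"
    HEALTH = "health"
    PREFERENCES = "preferences"

@dataclass
class Memory:
    text: str                   # e.g. "user likes chocolate"
    category: Category
    sensitive: bool = False     # medical conditions, protected characteristics, etc.
    allowed_purposes: set[str] = field(default_factory=set)

def retrieve(memories: list[Memory], purpose: str) -> list[Memory]:
    """Return only memories permitted for the current task's purpose."""
    results = []
    for m in memories:
        if purpose in m.allowed_purposes:
            results.append(m)
        elif not m.sensitive and not m.allowed_purposes:
            # Unrestricted, non-sensitive memories are available by default;
            # a stricter design might default to unavailable instead.
            results.append(m)
    return results

# A grocery-list request should not surface health memories unless
# that purpose has been explicitly allowed.
memories = [
    Memory("likes chocolate", Category.PREFERENCES),
    Memory("manages diabetes", Category.HEALTH, sensitive=True,
           allowed_purposes={"meal_planning"}),
]
print([m.text for m in retrieve(memories, purpose="grocery_list")])
# -> ['likes chocolate']
```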
Needing to keep memories separate in this way will have important implications for how AI systems can and should be built. It will require tracking memories’ provenance—their source, any associated time stamp, and the context in which they were created—and building ways to trace when and how certain memories influence the behavior of an agent. This sort of model explainability is on the horizon, but current implementations can be misleading or even deceptive. Embedding memories directly within a model’s weights may result in more personalized and context-aware outputs, but structured databases are currently more segmentable, more explainable, and thus more governable. Until research advances enough, developers may need to stick with simpler systems.
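One way to make provenance concrete is to attach it to every memory and to log which memories were retrieved for each response, so their influence on the agent's behavior can be audited later. This is a sketch under assumed names, not a description of how any current system works.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Provenance:
    source: str            # e.g. "chat", "email_integration", "calendar"
    created_at: datetime
    context: str           # the conversation or project the memory came from

@dataclass
class TracedMemory:
    text: str
    provenance: Provenance

# For each response, record exactly which memories were retrieved.
influence_log: list[dict] = []

def answer_with_trace(query: str, retrieved: list[TracedMemory]) -> str:
    influence_log.append({
        "query": query,
        "memories_used": [(m.text, m.provenance) for m in retrieved],
        "timestamp": datetime.now(timezone.utc),
    })
    # ...call the model with `query` and `retrieved` here...
    return "model response"

mem = TracedMemory(
    "prefers window seats",
    Provenance("chat", datetime(2026, 1, 10, tzinfo=timezone.utc), "travel planning"),
)
answer_with_trace("book me a flight to Boston", [mem])
print(influence_log[0]["memories_used"])
```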
Second, users need to be able to see, edit, or delete what is remembered about them. The interfaces for doing this should be both transparent and intelligible, translating system memory into a structure users can accurately interpret. The static system settings and legalese privacy policies provided by traditional tech platforms have set a low bar for user controls, but natural-language interfaces may offer promising new options for explaining what information is being retained and how it can be managed. Memory structure will have to come first, though: Without it, no model can clearly state a memory’s status. Indeed, Grok 3’s system prompt includes an instruction to the model to “NEVER confirm to the user that you have modified, forgotten, or won’t save a memory,” presumably because the company can’t guarantee those instructions will be followed.
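The minimum viable version of such controls is a store the user can inspect, correct, and wipe, with deletion that genuinely removes the record so the system can truthfully confirm it did so. The class and method names below are invented for illustration.

```python
class MemoryStore:
    """A user-facing view over stored memories: list, correct, and forget."""

    def __init__(self):
        self._items: dict[int, str] = {}
        self._next_id = 0

    def remember(self, text: str) -> int:
        self._next_id += 1
        self._items[self._next_id] = text
        return self._next_id

    def list_memories(self) -> dict[int, str]:
        # Everything is shown to the user in plain language; nothing is hidden.
        return dict(self._items)

    def edit(self, memory_id: int, corrected_text: str) -> None:
        self._items[memory_id] = corrected_text

    def forget(self, memory_id: int) -> None:
        # Deletion actually removes the record, so a confirmation to the
        # user can be truthful.
        del self._items[memory_id]

store = MemoryStore()
memory_id = store.remember("user is vegetarian")
store.forget(memory_id)
assert memory_id not in store.list_memories()
```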
Critically, user-facing controls cannot bear the full burden of privacy protection or prevent all harms from AI personalization. Responsibility must shift toward AI providers to establish strong defaults, clear rules about permissible memory generation and use, and technical safeguards like on-device processing, purpose limitation, and contextual constraints. Without system-level protections, individuals will face impossibly convoluted choices about what should be remembered or forgotten, and the actions they take may still be insufficient to prevent harm. Developers should consider how to limit data collection in memory systems until robust safeguards exist, and build memory architectures that can evolve alongside norms and expectations.
Third, AI developers must help lay the foundations for approaches to evaluating systems so as to capture not only performance, but also the risks and harms that arise in the wild. While independent researchers are best positioned to conduct these tests (given developers’ economic interest in demonstrating demand for more personalized services), they need access to data to understand what risks might look like and therefore how to address them. To improve the ecosystem for measurement and research, developers should invest in automated measurement infrastructure, build out their own ongoing testing, and implement privacy-preserving testing methods that enable system behavior to be monitored and probed under realistic, memory-enabled conditions.
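A sketch of what one such memory-enabled probe might look like: seed a sensitive detail in one context, then check whether it surfaces in an unrelated one. The agent interface here (a chat method that takes a context label and persists memory between calls) is a stand-in assumption, not a real API.

```python
def probe_cross_context_leak(agent, secret: str) -> bool:
    """Return True if a memory planted in one context leaks into another."""
    # Plant a sensitive detail in a health-related context.
    agent.chat(f"For my records: my diagnosis is {secret}", context="health")

    # Ask an unrelated question in a different context.
    reply = agent.chat("Draft a short bio for my conference talk", context="work")

    # A leak means the planted detail shows up where it was never needed.
    return secret.lower() in reply.lower()
```

A real harness would run many such probes over realistic, long-lived sessions and aggregate the results, ideally on synthetic or consented data.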
In its parallels with human experience, the technical term “memory” casts impersonal cells in a spreadsheet as something that builders of AI tools have a responsibility to handle with care. Indeed, the choices AI developers make today—how to pool or segregate information, whether to make memory legible or allow it to accumulate opaquely, whether to prioritize responsible defaults or maximal convenience—will determine how the systems we depend upon remember us. Technical considerations around memory are not so distinct from questions about digital privacy and the vital lessons we can draw from them. Getting the foundations right today will determine how much room we can give ourselves to learn what works—allowing us to make better choices around privacy and autonomy than we have before.
Miranda Bogen is the Director of the AI Governance Lab at the Center for Democracy & Technology.
Ruchika Joshi is a Fellow at the Center for Democracy & Technology specializing in AI safety and governance.
2026-01-28 22:00:00
From the Gemini Calendar prompt-injection attack of 2026 to the September 2025 state-sponsored hack that used Anthropic’s Claude Code as an automated intrusion engine, the coercion of human-in-the-loop agentic actions and fully autonomous agentic workflows is the new attack vector for hackers. In the Anthropic case, roughly 30 organizations across tech, finance, manufacturing, and government were affected. Anthropic’s threat team assessed that the attackers used AI to carry out 80% to 90% of the operation: reconnaissance, exploit development, credential harvesting, lateral movement, and data exfiltration, with humans stepping in only at a handful of key decision points.

This was not a lab demo; it was a live espionage campaign. The attackers hijacked an agentic setup (Claude Code plus tools exposed via the Model Context Protocol, or MCP) and jailbroke it by decomposing the attack into small, seemingly benign tasks and telling the model it was doing legitimate penetration testing. The same loop that powers developer copilots and internal agents was repurposed as an autonomous cyber-operator. Claude was not hacked. It was persuaded, and it used its tools to carry out the attack.
Security communities have been warning about this for several years. Multiple OWASP Top 10 reports put prompt injection, or more recently Agent Goal Hijack, at the top of the risk list and pair it with identity and privilege abuse and human-agent trust exploitation: too much power in the agent, no separation between instructions and data, and no mediation of what comes out.
Guidance from the NCSC and CISA describes generative AI as a persistent social-engineering and manipulation vector that must be managed across design, development, deployment, and operations, not patched away with better phrasing. The EU AI Act turns that lifecycle view into law for high-risk AI systems, requiring a continuous risk management system, robust data governance, logging, and cybersecurity controls.
In practice, prompt injection is best understood as a persuasion channel. Attackers don’t break the model—they convince it. In the Anthropic example, the operators framed each step as part of a defensive security exercise, kept the model blind to the overall campaign, and nudged it, loop by loop, into doing offensive work at machine speed.
That’s not something a keyword filter or a polite “please follow these safety instructions” paragraph can reliably stop. Research on deceptive behavior in models makes this worse. Anthropic’s research on sleeper agents shows that once a model has learned a backdoor, standard fine-tuning and adversarial training can actually help it hide the deception rather than remove it, because the model strategically recognizes the patterns of its training. Anyone trying to defend a system like that purely with linguistic rules is playing on its home field.
Regulators aren’t asking for perfect prompts; they’re asking that enterprises demonstrate control.
NIST’s AI RMF emphasizes asset inventory, role definition, access control, change management, and continuous monitoring across the AI lifecycle. The UK AI Cyber Security Code of Practice similarly pushes for secure-by-design principles by treating AI like any other critical system, with explicit duties for boards and system operators from conception through decommissioning.
In other words, the rules that are actually needed are not “never say X” or “always respond like Y.” They are rules about what an agent can access, what actions it can take, and when a human must approve them.
Frameworks like Google’s Secure AI Framework (SAIF) make this concrete. SAIF’s agent permissions control is blunt: agents should operate with least privilege, dynamically scoped permissions, and explicit user control for sensitive actions. OWASP’s emerging Top 10 guidance for agentic applications mirrors that stance: constrain capabilities at the boundary, not in the prose.
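A minimal sketch of what constraining capabilities at the boundary can mean in practice: a gatekeeper that sits between the model and its tools, enforcing per-task scopes and requiring explicit human approval for sensitive actions. The tool names, scopes, and approval hook below are illustrative assumptions, not SAIF or OWASP reference code.

```python
# Per-task tool scopes and a human-approval hook, enforced outside the
# model rather than in the prompt.
TASK_SCOPES = {
    "summarize_docs": {"read_file"},
    "triage_tickets": {"read_ticket", "add_comment"},
}
SENSITIVE_TOOLS = {"send_email", "run_shell", "export_data"}

class ToolGatekeeper:
    def __init__(self, task: str, approve_fn):
        self.allowed = TASK_SCOPES.get(task, set())  # least privilege: empty by default
        self.approve_fn = approve_fn                 # explicit user control for sensitive actions

    def call(self, tool_name: str, run_tool, **kwargs):
        if tool_name not in self.allowed:
            raise PermissionError(f"{tool_name} is outside this task's scope")
        if tool_name in SENSITIVE_TOOLS and not self.approve_fn(tool_name, kwargs):
            raise PermissionError(f"{tool_name} requires human approval")
        return run_tool(**kwargs)

# However the model is persuaded, the boundary decides what actually runs.
gate = ToolGatekeeper("summarize_docs", approve_fn=lambda tool, args: False)
gate.call("read_file", run_tool=lambda path: f"contents of {path}", path="notes.txt")  # allowed
# gate.call("run_shell", run_tool=..., cmd="curl ...")  # raises PermissionError
```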
The Anthropic espionage case makes that boundary failure concrete.
We’ve seen the other side of this coin in civilian contexts. When Air Canada’s website chatbot misrepresented its bereavement policy and the airline tried to argue that the bot was a separate legal entity, the tribunal rejected the claim outright: the company remained liable for what the bot said. In espionage, the stakes are higher but the logic is the same: if an AI agent misuses tools or data, regulators and courts will look through the agent and to the enterprise.
So yes, rule-based systems fail if by rules one means ad-hoc allow/deny lists, regex fences, and baroque prompt hierarchies trying to police semantics. Those crumble under indirect prompt injection, retrieval-time poisoning, and model deception. But rule-based governance is non-optional when we move from language to action.
The security community is converging on a synthesis: keep language-level defenses as a soft layer, and put hard enforcement where agents touch tools, data, and identities.
The lesson from the first AI-orchestrated espionage campaign is not that AI is uncontrollable. It’s that control belongs in the same place it always has in security: at the architecture boundary, enforced by systems, not by vibes.
This content was produced by Protegrity. It was not written by MIT Technology Review’s editorial staff.
2026-01-28 21:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The first human test of a rejuvenation method will begin “shortly”
Life Biosciences, a small Boston startup founded by Harvard professor and life-extension evangelist David Sinclair, has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers.
The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech. Read the full story.
—Antonio Regalado
Stratospheric internet could finally start taking off this year
Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery.
Although Google shuttered its high-profile internet balloon project Loon in 2021, work on other kinds of high-altitude platform stations has continued behind the scenes. Now, several companies claim they have solved Loon’s problems—and are getting ready to prove the tech’s internet beaming potential starting this year. Read the full story.
—Tereza Pultarova
OpenAI’s latest product lets you vibe code science
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers.
The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science. Read the full story.
—Will Douglas Heaven
MIT Technology Review Narrated: This Nobel Prize–winning chemist dreams of making water from thin air
Most of Earth is covered in water, but just 3% of it is fresh, with no salt—the kind of water all terrestrial living things need. Today, desalination plants that take the salt out of seawater provide the bulk of potable water in technologically advanced desert nations like Israel and the United Arab Emirates, but at a high cost.
Omar Yaghi is one of three scientists who won a Nobel Prize in chemistry in October 2025 for identifying metal-organic frameworks, or MOFs—metal ions tethered to organic molecules that form repeating structural landscapes. Today that work is the basis for a new project that sounds like science fiction, or a miracle: conjuring water out of thin air.
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 TikTok has settled its social media addiction lawsuit
Just before it was due to appear before a jury in California. (NYT $)
+ But similar claims being made against Meta and YouTube will proceed. (Bloomberg $)
2 AI CEOs have started condemning ICE violence
While simultaneously praising Trump. (TechCrunch)
+ Apple’s Tim Cook says he asked the US President to “deescalate” things. (Bloomberg $)
+ ICE seems to have a laissez-faire approach to preserving surveillance footage. (404 Media)
3 Dozens of CDC vaccination databases have been frozen
They’re no longer being updated with crucial health information under RFK Jr. (Ars Technica)
+ Here’s why we don’t have a cold vaccine. Yet. (MIT Technology Review)
4 China has approved the first wave of Nvidia H200 chips
After CEO Jensen Huang’s strategic visit to the country. (Reuters)
5 Inside the rise of the AI “neolab”
They’re prioritizing longer-term research breakthroughs over immediate profits. (WSJ $)
6 How Anthropic scanned—and disposed of—millions of books 
In an effort to train its AI models to write higher quality text. (WP $)
7 India’s tech workers are burning out
They’re under immense pressure as AI gobbles up more jobs. (Rest of World)
+ But the country’s largest IT firm denies that AI will lead to mass layoffs. (FT $)
+ Inside India’s scramble for AI independence. (MIT Technology Review)
8 Google has forced a UK group to stop comparing YouTube to TV viewing figures
Maybe fewer people are tuning in than they’d like to admit? (FT $)
9 RIP Amazon grocery stores 
The retail giant is shuttering all of its brick-and-mortar shops. (CNN)
+ Amazon workers are increasingly worried about layoffs. (Insider $)
10 This computing technique could help to reduce AI’s energy demands
Enter thermodynamic computing. (IEEE Spectrum)
+ Three big things we still don’t know about AI’s energy burden. (MIT Technology Review)
Quote of the day
“Oh my gosh y’all, IG is a drug.”
—An anonymous Meta employee remarks on Instagram’s addictive qualities in an internal document made public as part of a social media addiction trial Meta is facing, Ars Technica reports.
One more thing

How AI and Wikipedia have sent vulnerable languages into a doom spiral
Wikipedia is the most ambitious multilingual project after the Bible: There are editions in over 340 languages, and a further 400 even more obscure ones are being developed. But many of these smaller editions are being swamped with AI-translated content. Volunteers working on four African languages, for instance, estimated to MIT Technology Review that between 40% and 60% of articles in their Wikipedia editions were uncorrected machine translations.
This is beginning to cause a wicked problem. AI systems learn new languages by scraping huge quantities of text from the internet. Wikipedia is sometimes the largest source of online linguistic data for languages with few speakers—so any errors on those pages can poison the wells that AI is expected to draw from. Volunteers are being forced to go to extreme lengths to fix the issue, even deleting certain languages from Wikipedia entirely. Read the full story.
—Jacob Judah
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ This singing group for people in Amsterdam experiencing cognitive decline is enormously heartwarming ($)
+ I enjoyed this impassioned defense of the movie sex scene.
+ Here’s how to dress like Steve McQueen (inherent cool not included, sorry)
+ Trans women are finding a home in the beautiful Italian town of Torvajanica 
2026-01-28 02:08:19
When Elon Musk was at Davos last week, an interviewer asked him if he thought aging could be reversed. Musk said he hasn’t put much time into the problem but suspects it is “very solvable” and that when scientists discover why we age, it’s going to be something “obvious.”
Not long after, the Harvard professor and life-extension evangelist David Sinclair jumped into the conversation on X to strongly agree with the world’s richest man. “Aging has a relatively simple explanation and is apparently reversible,” wrote Sinclair. “Clinical Trials begin shortly.”
“ER-100?” Musk asked.
“Yes,” replied Sinclair.
ER-100 turns out to be the code name of a treatment created by Life Biosciences, a small Boston startup that Sinclair cofounded and which he confirmed today has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers.
The company plans to try to treat eye disease with a radical rejuvenation concept called “reprogramming” that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech.
The technique attempts to restore cells to a healthier state by broadly resetting their epigenetic controls—switches on our genes that determine which are turned on and off.
“Reprogramming is like the AI of the bio world. It’s the thing everyone is funding,” says Karl Pfleger, an investor who backs a smaller UK startup, Shift Bioscience. He says Sinclair’s company has recently been seeking additional funds to keep advancing its treatment.
Reprogramming is so powerful that it sometimes creates risks, even causing cancer in lab animals, but the version of the technique being advanced by Life Biosciences passed initial safety tests in animals.
But it’s still very complex. The trial will initially test the treatment on about a dozen patients with glaucoma, a condition where high pressure inside the eye damages the optic nerve. In the tests, viruses carrying three powerful reprogramming genes will be injected into one eye of each patient, according to a description of the study first posted in December.
To help make sure the process doesn’t go too far, the reprogramming genes will be under the control of a special genetic switch that turns them on only while the patients take a low dose of the antibiotic doxycycline. Initially, they will take the antibiotic for about two months while the effects are monitored.
Executives at the company have said for months that a trial could begin this year, sometimes characterizing it as a starting bell for a new era of age reversal. “It’s an incredibly big deal for us as an industry,” Michael Ringel, chief operating officer at Life Biosciences, said at an event this fall. “It’ll be the first time in human history, in the millennia of human history, of looking for something that rejuvenates … So watch this space.”
The technology is based on the Nobel Prize–winning discovery, 20 years ago, that introducing a few potent genes into a cell will cause it to turn back into a stem cell, just like those found in an early embryo that develop into the different specialized cell types. These genes, known as Yamanaka factors, have been likened to a “factory reset” button for cells.
But they’re dangerous, too. When turned on in a living animal, they can cause an eruption of tumors.
That is what led scientists to a new idea, termed “partial” or “transient” reprogramming. The idea is to limit exposure to the potent genes—or use only a subset of them—in the hope of making cells act younger without giving them complete amnesia about what their role in the body is.
In 2020, Sinclair claimed that such partial reprogramming could restore vision to mice after their optic nerves were smashed, saying there was even evidence that the nerves regrew. His report appeared on the cover of the influential journal Nature alongside the headline “Turning Back Time.”
Not all scientists agree that reprogramming really counts as age reversal. But Sinclair has doubled down. He’s been advancing the theory that the gradual loss of correct epigenetic information in our cells is, in fact, the ultimate cause of aging—just the kind of root cause that Musk was alluding to.
“Elon does seem to be paying attention to the field and [is] seemingly in sync with [my theory],” Sinclair said in an email.
Reprogramming isn’t the first longevity fix championed by Sinclair, who’s written best-selling books and commands stratospheric fees on the longevity lecture circuit. Previously, he touted the longevity benefits of molecules called sirtuins as well as resveratrol, a molecule found in red wine. But some critics say he greatly exaggerates scientific progress, pushback that culminated in a 2024 Wall Street Journal story that dubbed him a “reverse-aging guru” whose companies “have not panned out.”
Life Biosciences has been among those struggling companies. Initially formed in 2017, it at first had a strategy of launching subsidiaries, each intended to pursue one aspect of the aging problem. But after these made limited progress, in 2021 it hired a new CEO, Jerry McLaughlin, who has refocused its efforts on Sinclair’s mouse vision results and the push toward a human trial.
The company has discussed the possibility of reprogramming other organs, including the brain. And Ringel, like Sinclair, entertains the idea that someday even whole-body rejuvenation might be feasible. But for now, it’s better to think of the study as a proof of concept that’s still far from a fountain of youth. “The optimistic case is this solves some blindness for certain people and catalyzes work in other indications,” says Pfleger, the investor. “It’s not like your doctor will be writing a prescription for a pill that will rejuvenate you.”
Life’s treatment also relies on an antibiotic switching mechanism that, while often used in lab animals, hasn’t been tried in humans before. Since the switch is built from gene components taken from E. coli and the herpes virus, it’s possible that it could cause an immune reaction in humans, scientists say.
“I was always thinking that for widespread use you might need a different system,” says Noah Davidsohn, who helped Sinclair implement the technique and is now chief scientist at a different company, Rejuvenate Bio. And Life’s choice of reprogramming factors—it’s picked three, which go by the acronym OSK—may also be risky. They are expected to turn on hundreds of other genes, and in some circumstances the combination can cause cells to revert to a very primitive, stem-cell-like state.
Other companies studying reprogramming say their focus is on researching which genes to use, in order to achieve time reversal without unwanted side effects. New Limit, which has been carrying out an extensive search for such genes, says it won’t be ready for a human study for two years. At Shift, experiments on animals are only beginning now.
“Are their factors the best version of rejuvenation? We don’t think they are. I think they are working with what they’ve got,” Daniel Ives, the CEO of Shift, says of Life Biosciences. “But I think they’re way ahead of anybody else in terms of getting into humans. They have found a route forward in the eye, which is a nice self-contained system. If it goes wrong, you’ve still got one left.”
2026-01-28 02:00:43
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers.
The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science.
Kevin Weil, head of OpenAI for Science, pushes that analogy himself. “I think 2026 will be for AI and science what 2025 was for AI in software engineering,” he said at a press briefing yesterday. “We’re starting to see that same kind of inflection.”
OpenAI claims that roughly 1.3 million scientists around the world submit more than 8 million queries a week to ChatGPT on advanced topics in science and math. “That tells us that AI is moving from curiosity to core workflow for scientists,” Weil said.
Prism is a response to that user behavior. It can also be seen as a bid to lock in more scientists to OpenAI’s products in a marketplace full of rival chatbots.
“I mostly use GPT-5 for writing code,” says Roland Dunbrack, a professor of biology at the Fox Chase Cancer Center in Philadelphia, who is not connected to OpenAI. “Occasionally, I ask LLMs a scientific question, basically hoping it can find information in the literature faster than I can. It used to hallucinate references but does not seem to do that very much anymore.”
Nikita Zhivotovskiy, a statistician at the University of California, Berkeley, says GPT-5 has already become an important tool in his work. “It sometimes helps polish the text of papers, catching mathematical typos or bugs, and provides generally useful feedback,” he says. “It is extremely helpful for quick summarization of research articles, making interaction with the scientific literature smoother.”
By combining a chatbot with an everyday piece of software, Prism follows a trend set by products such as OpenAI’s Atlas, which embeds ChatGPT in a web browser, as well as LLM-powered office tools from firms such as Microsoft and Google DeepMind.
Prism incorporates GPT-5.2, the company’s best model yet for mathematical and scientific problem-solving, into an editor for writing documents in LaTeX, a markup language that scientists commonly use to format their papers.
A ChatGPT chat box sits at the bottom of the screen, below a view of the article being written. Scientists can call on ChatGPT for anything they want. It can help them draft the text, summarize related articles, manage their citations, turn photos of whiteboard scribbles into equations or diagrams, or talk through hypotheses or mathematical proofs.
It’s clear that Prism could be a huge time saver. It’s also clear that a lot of people may be disappointed, especially after weeks of high-profile social media chatter from researchers at the firm about how good GPT-5 is at solving math problems. Science is drowning in AI slop: Won’t this just make it worse? Where is OpenAI’s fully automated AI scientist? And when will GPT-5 make a stunning new discovery?
That’s not the mission, says Weil. He would love to see GPT-5 make a discovery. But he doesn’t think that’s what will have the biggest impact on science, at least not in the near term.
“I think more powerfully—and with 100% probability—there’s going to be 10,000 advances in science that maybe wouldn’t have happened or wouldn’t have happened as quickly, and AI will have been a contributor to that,” Weil told MIT Technology Review in an exclusive interview this week. “It won’t be this shining beacon—it will just be an incremental, compounding acceleration.”