
A world-renowned, independent media company whose insight, analysis, reviews, interviews and live events explain the newest technologies and their commercial, social and political impact.

RSS preview of the blog of MIT Technology Review

How Pokémon Go is giving delivery robots an inch-perfect view of the world

2026-03-10 21:47:26

Pokémon Go was the world’s first augmented-reality megahit. Released in 2016 by the Google spinout Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. From Chicago to Oslo to Enoshima, players hit the streets in the urgent hope of catching a Jigglypuff or a Squirtle or (with a huge amount of luck) an ultra-rare Galarian Zapdos hovering just out of reach, superimposed on the everyday world.

In short, we’re talking about a huge number of people pointing their phones at a huge number of buildings. “Five hundred million people installed that app in 60 days,” says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out in May last year. According to the video-game firm Scopely, which bought Pokémon Go from Niantic at the same time, the game still drew more than 100 million players in 2024, eight years after it launched. 

Now Niantic Spatial is using that vast and unparalleled trove of crowdsourced data—images of urban landmarks tagged with super-accurate location markers taken from the phones of hundreds of millions of Pokémon Go players around the world—to build a kind of world model, a buzzy new technology that grounds the smarts of LLMs in real environments. 

The company’s latest product is a model that it says can pinpoint your location on a map to within a few centimeters, based on a handful of snapshots of the buildings or other landmarks in view. The firm wants to use it to help robots navigate with greater precision in places where GPS is unreliable.

In the first big test of its technology, Niantic Spatial has just teamed up with Coco Robotics, a startup that deploys last-mile delivery robots in a number of cities across the US and Europe. “Everybody thought that AR was the future, that AR glasses were coming,” says McClendon. “And then robots became the audience.”

From Pikachu to pizza delivery

Coco Robotics deploys around 1,000 flight-case-size robots—built to carry up to eight extra-large pizzas or four grocery bags—in Los Angeles, Chicago, Jersey City, Miami, and Helsinki. According to CEO Zach Rash, the robots have made more than half a million deliveries to date, covering a few million miles in all weather conditions.

But to compete with human couriers, Coco’s robots, which trundle along sidewalks at around five miles per hour, must be as reliable as possible. “The best way we can do our job is by arriving exactly when we told you we were going to arrive,” says Rash. And that means not getting lost.

The problem Coco faces is that it cannot rely on GPS, which can be weak in cities because radio signals bounce off buildings and interfere with each other. “We do deliveries in a lot of dense areas with high-rises and underpasses and freeways, and those are the areas where GPS just never really works,” says Rash. 

“The urban canyon is the worst place in the world for GPS,” says McClendon. “If you look at that blue dot on your phone, you’ll often see it drift 50 meters, which puts you on a different block going a different direction on the wrong side of the street.” That’s where Niantic Spatial comes in. 

For the last few years, Niantic Spatial has been taking the data collected from players of Pokémon Go and Ingress (Niantic’s previous phone-based AR game, launched in 2013) and building a visual positioning system, technology that tells you where you are based on what you can see. “It turns out that getting Pikachu to realistically run around and getting Coco’s robot to safely and accurately move through the world is actually the same problem,” says John Hanke, CEO of Niantic Spatial.

“Visual positioning is not a very new technology,” says Konrad Wenzel at ESRI, a company that develops digital mapping and geospatial analysis software. “But it’s obvious that the more cameras we have out there, the better it becomes.” 

Niantic Spatial has trained its model on 30 billion images captured in urban environments. In particular, the images are clustered around hot spots—places that served as important locations in Niantic’s games that players were encouraged to visit, such as Pokémon battle arenas. “We had a million-plus locations around the world where we can locate you precisely,” says McClendon. “We know where you’re standing within several centimeters of accuracy and, most importantly, where you’re looking.”

The upshot is that for each of those million locations, Niantic Spatial has many thousands of images taken in more or less the same place but from different angles, at different times of day, and in different weather conditions. Each of those images comes with detailed metadata that pinpoints where in space the phone was at the time it captured the image, including which way the phone was facing, which way up it was, whether or not it was moving, how fast and in which direction, and more.   

The firm has used this data set to train a model to predict exactly where it is by taking into account what it is looking at—even for locations other than those million hot spots, where good sources of image and location data are scarcer.
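As a rough illustration of how such a system can work, here is a toy version of visual positioning: a query image's features are matched against a database of photos tagged with the pose of the camera that took them, and the estimated location is a similarity-weighted average of the best matches. Everything below is invented for illustration (the three-number "descriptors", the coordinates, the `localize` helper); Niantic Spatial's actual model is far more sophisticated than this sketch.

```python
import math

# Toy database: each entry pairs an image "descriptor" (here just a short
# feature vector) with the pose metadata recorded when the photo was taken.
# The first two entries represent photos of the same Chicago landmark; the
# third is a photo taken in Oslo. All values are illustrative.
DATABASE = [
    {"desc": [0.9, 0.1, 0.3], "lat": 41.8827, "lon": -87.6233, "heading": 90.0},
    {"desc": [0.8, 0.2, 0.4], "lat": 41.8828, "lon": -87.6234, "heading": 95.0},
    {"desc": [0.1, 0.9, 0.7], "lat": 59.9139, "lon": 10.7522, "heading": 180.0},
]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def localize(query_desc, k=2):
    """Estimate position as the similarity-weighted average of the top-k matches.

    A real system would also estimate orientation (which way the camera faces),
    which is why the database stores a heading for each photo.
    """
    ranked = sorted(DATABASE, key=lambda e: cosine(query_desc, e["desc"]), reverse=True)
    top = ranked[:k]
    weights = [cosine(query_desc, e["desc"]) for e in top]
    total = sum(weights)
    lat = sum(w * e["lat"] for w, e in zip(weights, top)) / total
    lon = sum(w * e["lon"] for w, e in zip(weights, top)) / total
    return lat, lon

# A query descriptor close to the two Chicago photos localizes to Chicago.
lat, lon = localize([0.85, 0.15, 0.35])
```

The key property the sketch shares with the real thing: the more photos of a place you have, from more angles and conditions, the more reliably a new photo of that place can be pinned to a position.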

In addition to GPS, Coco’s robots, which are fitted with four cameras, will now use this model to try to figure out where they are and where they are headed. The robots’ cameras are hip-height and point in all directions at once, so their viewpoint is a little different from a Pokémon Go player’s, but adapting the data was straightforward, says Rash. 

Rival companies use visual positioning systems too. For example, Starship Technologies, a robot delivery firm founded in Estonia in 2014, says its robots use their sensors to build a 3D map of their surroundings, plotting the edges of buildings and the position of streetlights. 

But Rash is betting that Niantic Spatial’s tech will give Coco an edge. He claims it will allow his robots to position themselves in the correct pickup spots outside restaurants, making sure they don’t get in anybody’s way, and stop just outside the customer’s door instead of a few steps away, as sometimes happened in the past.

A Cambrian explosion in robotics 

When Niantic Spatial started work on its visual positioning system, the idea was to apply it to augmented reality, says Hanke. “If you are wearing AR glasses and you want the world to lock in to where you’re looking, then you need some method for doing that,” he says. “But now we’re seeing a Cambrian explosion in robotics.”

Some of those robots may need to share spaces with humans—spaces such as construction sites and sidewalks. “If robots are ever going to assimilate into that environment in a way that’s not disruptive for human beings, they’re going to have to have a similar level of spatial understanding,” says Hanke. “We can help robots find exactly where they are when they’ve been jostled and bumped.”

The Coco Robotics partnership is the start. What Niantic Spatial is putting in place, says Hanke, are the first pieces of what he calls a living map: a hyper-detailed virtual simulation of the world that changes as the world changes. As robots from Coco and other firms move about the world, they will provide new sources of map data, feeding into more and more detailed digital replicas of the world. 

But the way Hanke and McClendon see it, maps are not only becoming more detailed; they are being used more and more by machines. That shifts what maps are for. Maps have long been used to help people locate themselves in the world. As they moved from 2D to 3D to 4D (think of real-time simulations, such as digital twins), the basic principle hasn’t changed: Points on the map correspond to points in space or time.

And yet maps for machines may need to become more like guidebooks, full of information that humans take for granted. Companies like Niantic Spatial and ESRI want to add descriptions that tell machines what they’re actually looking at, with every object tagged with a list of its properties. “This era is about building useful descriptions of the world for machines to comprehend,” says Hanke. “The data that we have is a great starting point in terms of building up an understanding of how the connective tissue of the world works.”

There is a lot of buzz about world models right now—and Niantic Spatial knows it. LLMs may seem like know-it-alls, but they have very little common sense when it comes to interpreting and interacting with everyday environments. World models aim to fix that. Some firms, such as Google DeepMind and World Labs, are developing models that generate virtual fantasy worlds on the fly, which can then be used as training dojos for AI agents. 

Niantic Spatial says it is coming at the problem from a different angle. Push map-making far enough and you’ll end up capturing everything, says McClendon: “I’m very focused on trying to re-create the real world. We’re not there yet, but we want to be there.”

Prioritizing energy intelligence for sustainable growth

2026-03-10 21:00:00

Loudoun County, Virginia, once known for its pastoral scenery and proximity to Washington, DC, has earned a more modern reputation in recent years: The area has the highest concentration of data centers on the planet.

Ten years ago, these facilities powered email and e-commerce. Today, thanks to the meteoric rise in demand for AI-infused everything, local utility Dominion Energy is working hard to keep pace with surging power demands. The pressure is so acute that Dulles International Airport is constructing the largest airport solar installation in the country, a highly visible bid to bolster the region’s power mix.

Data center campuses like Loudoun’s are cropping up across the country to accommodate an insatiable appetite for AI. But this buildout comes at an enormous cost. In the US alone, data centers consumed roughly 4% of national electricity in 2024. Projections suggest that figure could stretch to 12% by 2028. To put this in perspective, a single 100-megawatt data center consumes roughly as much electricity as 80,000 American homes. Data centers being built today are gearing up for gigawatt scale, enough to power a mid-sized city.
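The household comparison holds up as back-of-envelope arithmetic. The sketch below assumes the facility draws its full 100 MW around the clock and uses an assumed average of 10,800 kWh per year for a US home; both figures are illustrative, not from the article.

```python
# Back-of-envelope check of the "100 MW data center ~ 80,000 homes" comparison.
# Assumptions (illustrative): the facility draws its full rated power
# continuously, and an average US household uses ~10,800 kWh per year.
FACILITY_MW = 100
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
HOME_KWH_PER_YEAR = 10_800

facility_kwh_per_year = FACILITY_MW * 1_000 * HOURS_PER_YEAR  # MW -> kW
equivalent_homes = facility_kwh_per_year / HOME_KWH_PER_YEAR

print(round(equivalent_homes))  # -> 81111, i.e. on the order of 80,000 homes
```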

For enterprise leaders, energy costs associated with AI and data infrastructure are quickly becoming both a budget concern and a potential bottleneck on growth. Meeting this moment calls for a capability most organizations are only beginning to develop: energy intelligence. The emerging discipline refers to understanding where, when, and why energy is consumed, and using that insight to optimize operations and control costs.

These efforts stand to address both immediate financial pressures and longer-term reputational risks, as communities like Loudoun County grow increasingly concerned about the energy demands associated with nearby data center development.

In December 2025, MIT Technology Review Insights conducted a survey of 300 executives to understand how companies are thinking about energy intelligence today, as well as where they’re anticipating challenges in the future.

Here are five of our most notable findings:

  • Energy intelligence is becoming a universal business priority. One hundred percent of executives surveyed expect the ability to measure and strategically manage power consumption to become an important business metric in the next two years.
  • AI workloads are already driving measurable cost increases, and the surge is just beginning. Two-thirds of executives (68%) report their companies have faced energy cost increases of 10% or more in the past 12 months due to AI and data workloads. Nearly all respondents (97%) anticipate their organization’s AI-related energy consumption will increase over the next 12-18 months.
  • Mounting costs are the top energy-related threat to AI innovation. Half of executives (51%) rank rising costs as the single greatest energy-related risk to their digital and AI initiatives. Most companies currently tracking and attempting to optimize data center energy consumption are motivated by cost management.
  • Organizations are responding through infrastructure optimization and energy-efficient partnerships. To address mounting energy demands, three in four leaders (74%) are optimizing existing infrastructure, while 69% are partnering with energy-efficient cloud and storage providers. More than half are also implementing AI workload scheduling (61%) and investing in more efficient hardware (56%).
  • Closing the measurement gap is the next frontier. Most enterprises still lack the granular data needed for true energy intelligence. The gap is especially pronounced for companies relying on third-party cloud providers and managed services for their data compute and storage needs: 71% say that is where their rising consumption-based costs originate, yet those providers’ energy metrics are often opaque.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: AI’s role in the Iran war, and an escalating legal fight

2026-03-10 20:55:32

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI is turning the Iran conflict into theater 

Much of the spotlight on AI in the Iran conflict has focused on models like Claude helping the US military decide where to strike. But a wave of “vibe-coded” intelligence dashboards—and the ecosystem surrounding them—reflect a new role that AI is playing in wartime: mediating information, often for the worse. 

These sorts of intelligence tools have much promise. Yet there are real reasons to be suspicious of their data feeds. Read the full story.

—James O’Donnell 

This story is from The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday. 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Anthropic has sued the US government  
The AI firm wants to stop the Pentagon from blacklisting it. (Reuters) 
+ The White House is preparing a new executive order to weed out the company’s technology. (Axios) 
+ Defense experts are alarmed. (CNBC) 
+ Google and OpenAI staff have filed a legal brief backing Anthropic against Trump. (Wired $) 
+ The company’s stance won many supporters. (MIT Technology Review) 

2 GPS jamming has become a crucial battleground in the Middle East  
The interference is endangering—and protecting—ships and planes. (BBC) 
+ Signal jamming has made navigating the Strait of Hormuz even more difficult. (Bloomberg) 
+ Quantum navigation offers a potential solution. (MIT Technology Review)  

3 A tech journalist found his AI clone editing for Grammarly 
It’s providing AI-generated feedback “inspired by” real writers without their consent. (Platformer) 
+ Could ChatGPT do the jobs of journalists and copywriters? (MIT Technology Review) 

4 Nvidia plans to launch an open-source platform for AI agents  
It’s already pitching the “NemoClaw” product to enterprise software firms. (Wired $) 
+ But don’t let the AI agent hype get ahead of reality. (MIT Technology Review) 
 
5 A startup wants to launch a space mirror that reflects sunlight onto Earth 
Reflect Orbital reckons it could power solar panels at night. Scientists are appalled. (NYT) 

6 Yann LeCun’s AI startup has raised over $1bn in Europe’s largest seed round  
Meta’s former chief AI scientist plans to build systems that “understand the world.” (Bloomberg) 

7 Hinge’s CEO insists the app doesn’t rate users’ attractiveness 
Jackie Jantos’ strategy has helped Hinge defy the decline in dating apps. (FT $) 
+ AI companions are stealing hearts—and it’s getting weird. (New Yorker $) 
+ It’s surprisingly easy to fall into a relationship with a chatbot. (MIT Technology Review) 

8 “AI psychosis” could be afflicting your loved ones  
If so, here’s how you can help them. (404 Media) 
+ One solution: AI should be able to “hang up” on you. (MIT Technology Review) 

9 Nintendo is suing Trump over illegal tariffs 
The gaming giant has joined a lawsuit seeking over $200 billion in refunds. (Ars Technica) 

10 Bio-tech is turning ancient poop into a map of lost civilizations  
Molecular sensors are finding human traces where physical ruins have vanished. (Nature)    

Quote of the day 

“I don’t think any of us, whether it’s me or Dario [Amodei], Sam Altman, or Elon Musk, has any legitimacy to decide for society what is a good or bad use of AI.”

—Yann LeCun gives Wired his take on Anthropic’s spat with the Pentagon. 

One More Thing 

This giant microwave may change the future of war 


Armed forces are hunting for a weapon that disables drones en masse—and they want it fast. 

One solution focuses on microwaves: high-powered electronic devices that push out kilowatts of power to zap the circuits of a drone as if it were the tinfoil you forgot to take off your leftovers when you heated them up. 

Defense tech startup Epirus may have the winning formula. The company has developed a cutting-edge, cost-efficient drone zapper that’s sparking the interest of the US military. And drones are just one of its targets. Read the full story.

—Sam Dean 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ Werner Herzog’s magnificent movie about Africa’s ghost elephants has arrived on Disney+ and Hulu. 
+ A “city killer” asteroid won’t hit Earth after all. Phew.  
+ The Met is publishing high-definition 3D scans of over 100 iconic works. 
+ Marty and Doc from Back to the Future are still BFFs in real life. 

Top image credit: MIT TECHNOLOGY REVIEW (ILLUSTRATION) | PHOTO OF MISSILE (US NAVY), AI-GENERATED IMAGE OF RUBBLE VIA X, SCREENSHOTS VIA WORLDMONITOR, GLOBALTHREATMAP 

Send asteroids to [email protected].  

You can follow me on LinkedIn. Thanks for reading! 
 
 

—Thomas  

How AI is turning the Iran conflict into theater

2026-03-09 23:11:01

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

“Anyone wanna host a get together in SF and pull this up on a 100 inch TV?” 

The author of that post on X was referring to an online intelligence dashboard following the US-Israel strikes against Iran in real time. Built by two people from the venture capital firm Andreessen Horowitz, it combines open-source data like satellite imagery and ship tracking with a chat function, news feeds, and links to prediction markets, where people can bet on things like who Iran’s next “supreme leader” will be (the recent selection of Mojtaba Khamenei left some bettors with a payout). 

I’ve reviewed over a dozen other dashboards like this in the last week. Many were apparently “vibe-coded” in a couple of days with the help of AI tools, including one that got the attention of a founder of the intelligence giant Palantir, the platform through which the US military is accessing AI models like Claude during the war. Some were built before the conflict in Iran, but nearly all of them are being advertised by their creators as a way to beat the slow and ineffective media by getting straight to the truth of what’s happening on the ground. “Just learned more in 30 seconds watching this map than reading or watching any major news network,” one commenter wrote on LinkedIn, responding to a visualization of Iran’s airspace being shut down before the strikes.

Much of the spotlight on AI and the Iran conflict has rightfully been on the role that models like Claude might be playing in helping the US military make decisions about where to strike. But these intelligence dashboards and the ecosystem surrounding them reflect a new role that AI is playing in wartime: mediating information, often for the worse.

There’s a confluence of factors at play. AI coding tools mean people don’t need much technical skill to assemble open-source intelligence anymore, and chatbots can offer fast, if dubious, analysis of it. The rise in fake content leaves observers of the war wanting the sort of raw, accurate analysis normally accessible only to intelligence agencies. Demand for these dashboards is also driven by real-time prediction markets that promise financial rewards to anyone sufficiently informed. And the fact that the US military is using Anthropic’s Claude in the conflict (despite its designation as a supply chain risk) has signaled to observers that AI is the intelligence tool the pros use. Together, these trends are creating a new kind of AI-enabled wartime circus that can distort the flow of information as much as it clarifies it.

As a journalist, I believe these sorts of intelligence tools have a lot of promise. While many of us know that real-time data on shipping routes or power outages exist, it’s a powerful thing to actually see it all assembled in one place (though using it to watch a war unfold while you munch on popcorn and place bets turns the war into perverse entertainment). But there are real reasons to think that these sorts of raw data feeds are not as informative as they may feel. 

Craig Silverman, a digital investigations expert who teaches investigative techniques, has been keeping a log of these dashboards (he’s up to 20). “The concern,” he says, “is there’s an illusion of being on top of things and being in control, where all you’re really doing is just pulling in a ton of signals and not necessarily understanding what you’re seeing, or being able to pull out true insights from it.” 

One problem has to do with the quality of the information. Many dashboards feature “intel feeds” with AI-generated summaries of complex, ever-changing news events. These can introduce inaccuracies. By design, the data is not especially curated. Instead, the feeds just display everything at once, with a map of strike locations in Iran next to the prices of obscure cryptocurrencies. 

Intelligence agencies, on the other hand, pair data feeds with people who can offer expertise and historical context. They also, of course, have access to proprietary information that doesn’t show up on the open web. 

The implicit promise from the people building and selling this sort of information pipeline about the Iran conflict is that AI can be a great democratizing force. There’s a secret feed of information that only the elites have had access to, the thinking goes, but now AI can bring it to everyone to do with what they wish, whether that’s simply to be more informed or to make bets on nuclear strikes. But an abundance of information, which AI is undeniably good at assembling, does not come with the accuracy or context required for real understanding. Intelligence agencies do this in-house; good journalism does the same work for the rest of us.

It is, by the way, hard to overstate the connection this all has with betting markets. The dashboard created by the pair at Andreessen Horowitz has a scrolling list of bets being made on the prediction platform Kalshi (which Andreessen Horowitz has invested in). Other dashboards link to Polymarket, offering bets on whether the US will strike Iraq or when Iran’s internet will return.

AI has also long made it cheaper and easier to spread fake content, and that problem is on full display during the Iran conflict: last week the Financial Times found a slew of AI-generated satellite imagery spreading online. 

“The emergence of manipulated or outright fake satellite imagery is really concerning,” Silverman says. The average person tends to see such imagery as very trustworthy. The spread of such fakes could erode confidence in one of the most important pieces of evidence used to show what’s actually happening in the war. 

The result is an ocean of AI-enabled content—dashboards, betting markets, photos both real and fake—that makes this war harder, not easier, to comprehend.

The usability imperative for securing digital asset devices

2026-03-09 22:00:00

When Tony Fadell started working on the iPod, usability often trumped security. The result was an iterative process. Every time someone would find a security weakness or a way to hack the device, the development group would iterate to add measures and fix the issues. Yet, flaws would frequently be found, and the secure design of the product became a moving target.

But when it came to designing a device specifically for security purposes, there could be no iterative process after rolling it out: Security had to be the number one priority. 

“As you develop these things, you’re a victim of your own development speed,” says Fadell, who developed Ledger Stax, a signing device for securing digital assets, and is now a board member at digital asset security firm Ledger. “If you introduced these features and functions without the proper review, and now customers are demanding security, you’ll realize that you should have designed it differently from the start, and it’s very hard to undo what you’ve already done.”

A critical aspect of designing secure technology, however, must be ease of use too. Without it, it is all too easy for users to make a mistake or use an unsafe workaround that undermines device protections. Think of a Post-it note stuck to a monitor, or some variation of “123456” or “admin” for a password.

With digital asset security devices like signers—more commonly called “wallets”—such errors could lead to seriously detrimental outcomes. If, for example, a user’s private key falls into the wrong hands, bad actors can use it to steal their digital assets. Estimates suggest that around 20% of all Bitcoin—worth around $355 billion—is inaccessible to its owners, likely in many cases because they lost their private keys.

In the past, crypto devices have been notoriously difficult to use. As cryptocurrency becomes ever more popular, valuable, and mainstream—attracting greater attention from criminals as the stakes rise—designers and engineers are prioritizing both security and usability when developing digital asset devices, drawing on in-depth research to iterate.

The three components of security

Strong security models for devices like signers, which are used to secure blockchain transactions, require three major components: first, a secure operating system; second, a secure element that binds the software to the hardware; and third, a secure user interface. Each needs to be frequently tested by researchers and white-hat hackers to simulate real-world attacks and improve product resilience and usability.

The first two elements focus on securing the device software and hardware. Secure software has always been a problem, but one that has improved over the last decade, as security architectures and processes have been refined. Meanwhile, hardware security components have become widely available—from trusted platform modules on computers to secure enclaves in smartphones—allowing digital information to essentially be locked to a device.

For crypto signers, hardware must provide encryption capabilities. And the security of the software must be frequently tested. Ledger, for example, has a secure OS and a Secure Element that handles encryption primitives, and a secure display that prevents device takeover.

Security and usability working hand in hand

Asset recovery is a major consideration when designing signers. If recovery options are not easy to use, an owner could lose access. But if recovery processes are not secure enough, attackers could exploit the system. With SIM swapping attacks, for example, attackers can tap into a mobile communications channel used for account recovery and “recover” a victim’s password to steal their assets.

In the digital-asset ecosystem, the creation of the seed phrase, a sequence of 12 to 24 words that acts as a master passphrase for wallets, is an example of improving usability and security at once. Known more formally as Bitcoin Improvement Proposal 39 (BIP-39), the approach gives users a master password to unlock their hierarchical deterministic (HD) wallets. 
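The arithmetic behind those word counts comes directly from the BIP-39 design: a checksum of one bit per 32 bits of entropy is appended, and the result is split into 11-bit indices into a 2,048-word list. A minimal sketch (the function name is ours):

```python
def bip39_word_count(entropy_bits):
    """Number of mnemonic words for a given entropy size under BIP-39.

    BIP-39 appends a checksum of entropy_bits / 32 bits to the entropy,
    then splits the whole bit string into 11-bit indices, each selecting
    one word from a fixed 2,048-word list (2**11 == 2048).
    """
    if entropy_bits not in (128, 160, 192, 224, 256):
        raise ValueError("BIP-39 allows 128 to 256 bits in 32-bit steps")
    checksum_bits = entropy_bits // 32
    return (entropy_bits + checksum_bits) // 11

# 128 bits of entropy -> 12 words; 256 bits -> 24 words,
# which is where the "12 to 24 words" range comes from.
assert bip39_word_count(128) == 12
assert bip39_word_count(256) == 24
```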

There is a lot of creative tension between the security team and the UX team in striking the proper balance between convenience and safety, Fadell says, referring to Ledger’s security research team, the Donjon. “We mock things up, we prototype things from a UX/UI perspective, we walk through it, then we walk the Donjon team through it,” Fadell explains. “We push back and forth to find the absolute optimal solution to balance the two.” 

Through the research the Donjon team has conducted, Ledger designed its Recovery Key—an NFC-based physical card to back up your 24 words—to be both user-friendly and secure. “What we did, as a first in the industry, was include an NFC card,” says Fadell. “Instead of only writing it down, you can also have an NFC card called a Recovery Key. You can have multiple Recovery Keys and store them in a lockbox, a safety deposit box, or give them to someone you trust for safekeeping.”

A number of government initiatives are working to regulate this balance between security and usability. This includes the US Cybersecurity and Infrastructure Security Agency’s Secure by Design, which aims to build cybersecurity into the design and manufacture of technology products. And the UK’s National Cyber Security Centre’s Software Security Code of Practice, which outlines security principles expected of all organizations that develop or sell software. 

Enterprise security presents distinct challenges

Embedding usability and security into devices for companies adds further complexity as businesses need features such as multi-signature capabilities to protect against single points of failure, whether from external attacks or internal bad actors. 

Security design can take these requirements into account, with secure governance using multiple signatures (multisig), hardware security modules (HSMs) for key storage, trusted display systems, and other usable security capabilities.
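At its core, the multisig rule described above is a threshold check: a transaction proceeds only when enough distinct authorized parties have approved it. The sketch below shows just that gating logic; it assumes cryptographic verification of each signature happens elsewhere, and all names are illustrative.

```python
def transaction_approved(signers, authorized, threshold):
    """M-of-N gate: approve only if at least `threshold` distinct
    authorized parties are among the signers.

    `signers` is the set of parties whose signatures verified; actual
    signature verification is assumed to happen before this check.
    """
    valid = set(signers) & set(authorized)
    return len(valid) >= threshold

# Illustrative 2-of-3 policy: any two of the three officers must sign.
AUTHORIZED = {"cfo", "ciso", "treasury-ops"}

assert transaction_approved({"cfo", "ciso"}, AUTHORIZED, threshold=2)
assert not transaction_approved({"cfo"}, AUTHORIZED, threshold=2)
assert not transaction_approved({"cfo", "intruder"}, AUTHORIZED, threshold=2)
```

The set intersection is the point: a compromised outsider’s signature, or the same insider signing twice, never counts toward the threshold, which is exactly the “attack vector is not just one person” property Fadell describes.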

These technologies are critically important for companies that play roles in the blockchain ecosystem. Failure to establish robust security measures can have dire consequences. In 2024, for example, unknown cybercriminals made off with more than $300 million worth of assets from DMM Bitcoin, leading the Japanese cryptocurrency platform to close six months later. Japan’s Financial Services Agency discovered severe risk management issues, including inadequate oversight, lack of independent audits, and poor security practices.

For companies, allowing a multi-stage process that involves a required number of stakeholders is critical, says Fadell. “It’s making sure that the attack vector is not just one person, and so you need to support multiple people with multiple factors on all of their devices as well,” he says. “It gets to be a real combinatoric problem.”

R&D to stay one step ahead 

To keep up with requirements and offer strong security with improved visibility, crypto firms need to invest in research and development, Fadell says. Attack labs, such as Ledger Donjon, can conduct real-world testing on specific enterprise security requirements and create scenarios to educate both management and workers about the potential threats. 

Such research and development can support device designers and engineers in their never-ending mission to balance security measures with usability, so that digital asset devices can help users safeguard their holdings in a constantly evolving crypto and cyber landscape.

Learn more about how to secure digital assets in the Ledger Academy.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: murky AI surveillance laws, and the White House cracks down on defiant labs

2026-03-09 21:57:44

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Is the Pentagon allowed to surveil Americans with AI?

The ongoing public feud between the Department of Defense and the AI company Anthropic has raised a deep and still unanswered question: Does the law actually allow the US government to conduct mass surveillance on Americans?

Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA’s collection of bulk metadata from the phones of Americans, the US is still navigating a gap between what ordinary people think and what the law allows. 

Today, the legal complexity has a new edge: AI is supercharging surveillance—and our laws haven’t caught up. Read the full story.

—Michelle Kim

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The White House has tightened its AI rules amid the Anthropic spat
New guidelines require companies to allow “any lawful” use of their models. (FT $)
+ London’s mayor has slammed Trump’s treatment of Anthropic and invited the firm to expand in the city. (BBC)

2 A satellite firm has stopped sharing imagery after exposing Iranian strikes
Planet Lab said it wants to stop “adversarial actors” from using the data. (Ars Technica)
+ AI is turbocharging the conflict in Iran. (WSJ $)
+ War is adding a brutal new element to the country’s internet issues. (Wired $)

3 The OpenAI-Anthropic feud is getting messy
The Pentagon contract controversy has intensified a deeply personal animosity between the founders. (NYT $)
+ Sam Altman and Dario Amodei’s rivalry could reshape the future of AI. (WSJ $) 
+ OpenAI’s robotics lead has quit over concerns about surveillance and “lethal autonomy.” (TechCrunch)
+ The company’s DoD “compromise” has brought Anthropic’s fears to life. (MIT Technology Review)

4 Staff at Block are outraged over the company’s “AI layoffs” 
They’re pushing back against Jack Dorsey’s bullishness on AI. (The Guardian)
+ They’ve also cast doubt on the payroll savings. (Gizmodo)
+ It’s not the first case of fears over AI taking everyone’s jobs. (MIT Technology Review)

5 Data center “man camps” are springing up in Texas
Aimed at luring workers to help build the centers, they will offer free steaks and golf simulators. (Bloomberg $)

6 The OpenClaw craze is sparking a rally in Chinese tech stocks
Shares surged after government agencies and tech leaders promoted the AI agent. (Bloomberg $)
+ Why is China falling so hard for it? (SCMP)

7 AI-generated videos are altering our relationship to nature
And could lead to “distorted expectations” of animal behavior. (NYT $)
+ AI slop could form a new kind of pop culture. (MIT Technology Review)

8 A rogue AI agent freed itself to mine crypto in secret
The model escaped its sandbox to start a side hustle in digital currency. (Axios)
+ AI agents are also starting to harass people. (MIT Technology Review)

9 In a first, a spacecraft has changed an asteroid’s orbit around the sun
The feat was a test of Earth’s future defenses. (Engadget)

10 How the Furby brought creepy-cute robotics into playtime   
A new show traces the legacy of the surprisingly high-tech toy. (The Verge)

Quote of the day

“I wanted to approach the whole situation with love.”

—Block cofounder and CEO Jack Dorsey tells Wired why he wore a hat with the word ‘Love’ on it during a meeting where he laid off 40% of his workforce. 

One more thing


Geoffrey Hinton tells us why he’s now scared of the tech he helped build

Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he’s stepped down to focus on concerns he now has about AI.

Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ De La Soul’s Tiny Desk concert is a masterclass in joy and grief, proving their “Daisy Age” philosophy is timeless.
+ These original Disney concepts of beloved characters are a portal into an alternate childhood.
+ This square phone traverses two decades of nostalgia by rotating into a Game Boy AND a BlackBerry.
+ A newly discovered Rembrandt shows the Old Masters still have new tricks to reveal.