2026-01-06 18:00:00
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Sometimes AI feels like a niche topic to write about, but then the holidays happen, and I hear relatives of all ages talking about cases of chatbot-induced psychosis, blaming rising electricity prices on data centers, and asking whether kids should have unfettered access to AI. It’s everywhere, in other words. And people are alarmed.
Inevitably, these conversations take a turn: AI is having all these ripple effects now, but if the technology gets better, what happens next? That’s usually when they look at me, expecting a forecast of either doom or hope.
I probably disappoint, if only because predictions for AI are getting harder and harder to make.
Despite that, MIT Technology Review has, I must say, a pretty excellent track record of making sense of where AI is headed. We’ve just published a sharp list of predictions for what’s next in 2026 (where you can read my thoughts on the legal battles surrounding AI), and the predictions on last year’s list all came to fruition. But every holiday season, it gets harder and harder to work out the impact AI will have. That’s mostly because of three big unanswered questions.
For one, we don’t know if large language models will continue getting incrementally smarter in the near future. Since this particular technology is what underpins nearly all the excitement and anxiety in AI right now, powering everything from AI companions to customer service agents, its slowdown would be a pretty huge deal. Such a big deal, in fact, that we devoted a whole slate of stories in December to what a new post-AI-hype era might look like.
Number two, AI is pretty abysmally unpopular among the general public. Here’s just one example: Nearly a year ago, OpenAI’s Sam Altman stood next to President Trump to excitedly announce a $500 billion project to build data centers across the US in order to train larger and larger AI models. The pair either did not guess or did not care that many Americans would staunchly oppose having such data centers built in their communities. A year later, Big Tech is waging an uphill battle to win over public opinion and keep on building. Can it win?
The response from lawmakers to all this frustration is terribly confused. Trump has pleased Big Tech CEOs by moving to make AI regulation a federal rather than a state issue, and tech companies are now hoping to codify this into law. But the crowd that wants to protect kids from chatbots ranges from progressive lawmakers in California to the increasingly Trump-aligned Federal Trade Commission, each with distinct motives and approaches. Will they be able to put aside their differences and rein AI firms in?
If the gloomy holiday dinner table conversation gets this far, someone will say: Hey, isn’t AI being used for objectively good things? Making people healthier, unearthing scientific discoveries, better understanding climate change?
Well, sort of. Machine learning, an older form of AI, has long been used in all sorts of scientific research. One branch, called deep learning, forms part of AlphaFold, a Nobel Prize–winning tool for protein prediction that has transformed biology. Image recognition models are getting better at identifying cancerous cells.
But the track record for chatbots built atop newer large language models is more modest. Technologies like ChatGPT are quite good at analyzing large swathes of research to summarize what’s already been discovered. But some high-profile reports that these sorts of AI models had made a genuine discovery, like solving a previously unsolved mathematics problem, were bogus. They can assist doctors with diagnoses, but they can also encourage people to diagnose their own health problems without consulting doctors, sometimes with disastrous results.
This time next year, we’ll probably have better answers to my family’s questions, and we’ll have a bunch of entirely new questions too. In the meantime, be sure to read our full piece forecasting what will happen this year, featuring predictions from the whole AI team.
2026-01-06 00:00:00
When business leaders talk about digital transformation, their focus often jumps straight to cloud platforms, AI tools, or collaboration software. Yet, one of the most fundamental enablers of how organizations now work, and how employees experience that work, is often overlooked: audio.
As Genevieve Juillard, CEO of IDC, notes, the shift to hybrid collaboration made every space, from corporate boardrooms to kitchen tables, meeting-ready almost overnight. In the scramble, audio quality often lagged, creating what research now shows is more than a nuisance. Poor sound can alter how speakers are perceived, making them seem less credible or even less trustworthy.
“Audio is the gatekeeper of meaning,” stresses Juillard. “If people can’t hear clearly, they can’t understand you. And if they can’t understand you, they can’t trust you, and they can’t act on what you said. And no amount of sharp video can fix that.” Without clarity, comprehension and confidence collapse.
For Shure, which has spent a century advancing sound technology, the implications extend far beyond convenience. Chris Schyvinck, Shure’s president and CEO, explains that ineffective audio undermines engagement and productivity. Meetings stall, decisions slow, and fatigue builds.
“Use technology to make hybrid meetings seamless, and then be clear on which conversations truly require being in the same physical space,” says Juillard. “If you can strike that balance, you’re not just making work more efficient, you’re making it more sustainable, you’re also making it more inclusive, and you’re making it more resilient.”
When audio is prioritized on equal footing with video and other collaboration tools, organizations can gain something rare: frictionless communication. That clarity ensures the machines listening in, from AI transcription engines to real-time translation systems, can deliver reliable results.
The research from Shure and IDC highlights two blind spots for leaders. First, buying decisions too often privilege price over quality, with costly consequences in productivity and trust. Second, organizations underestimate the stress poor sound imposes on employees, intensifying the cognitive load of already demanding workdays. Addressing both requires leaders to view audio not as a peripheral expense but as core infrastructure.
Looking ahead, audio is becoming inseparable from AI-driven collaboration. Smarter systems can already filter out background noise, enhance voices in real time, and integrate seamlessly into hybrid ecosystems.
“We should be able to provide improved accessibility and a more equitable meeting experience for people,” says Schyvinck.
For Schyvinck and Juillard, the future belongs to companies that treat audio transformation as an integral part of digital transformation, building workplaces that are more sustainable, equitable, and resilient.
This episode of Business Lab is produced in partnership with Shure.
Full Transcript
Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.
This episode is produced in partnership with Shure.
As companies continue their journeys toward digital transformation, audio modernization is an often overlooked but key component of success. Clear audio is imperative not only for quality communication, but also for brand equity, both for internal and external stakeholders and even the company as a whole.
Two words for you: audio transformation.
My guests today are Chris Schyvinck, President and CEO at Shure. And Genevieve Juillard, CEO at IDC.
Welcome Chris and Genevieve.
Chris Schyvinck: It’s really nice to be here. Thank you very much.
Genevieve Juillard: Yeah, thank you so much for having us. Great to be here.
Megan Tatum: Thank you both so much for being here. Genevieve, we could start with you. Let’s start with some history perhaps for context. How would you describe the evolution of audio technology and how use cases and our expectations of audio have evolved? What have been some of the major drivers throughout the years and more recently, perhaps would you consider the pandemic to be one of those drivers?
Genevieve: It’s interesting. If you go all the way back to 1976, Norman Macrae of The Economist predicted that video chat would actually kill the office, that people would just work from home. Obviously, that didn’t happen then, but the core technology for remote collaboration has actually been around for decades. But until the pandemic, most of us only experienced it in very specific contexts. Offices had dedicated video conferencing rooms, and most ran on expensive proprietary systems. And then almost overnight, everything, including literally the kitchen table, had to be AV-ready. The cultural norms shifted just as fast. Before the pandemic, it was perfectly fine to keep your camera off in a meeting, and now that’s seen as disengaged or even rude. That change is what normalized video conferencing and hybrid meetings.
But in a rush to equip a suddenly remote workforce, we hit two big problems: supply chain disruptions and a massive spike in demand. High-quality gear was hard to get, so low-quality audio and video became the default. And here’s a key point. We now know from research that audio quality matters more than video quality for meeting outcomes. You can run a meeting without video, but you can’t run a meeting without clear audio. Audio is the gatekeeper of meaning. If people can’t hear clearly, they can’t understand you. And if they can’t understand you, they can’t trust you and they can’t act on what you said. And no amount of sharp video can fix that.
Megan: Oh, true. It’s fascinating, isn’t it? And Chris, Shure and IDC recently released some research titled “The Hidden Influencer: Rethinking Audio Could Impact Your Organization Today, Tomorrow, and Forever.” The research highlighted the importance of audio that Genevieve’s talking about in today’s increasingly virtual world. What did you glean from those results, and did anything surprise you?
Chris: Yeah, well, the research certainly confirmed a lot of hunches we’ve had through the years. When you think about a company like Shure that’s been doing audio for 100 years, we just celebrated that anniversary this year.
Megan: Congratulations.
Chris: Our legacy business is more in the music and performance arena. And so it’s just what Genevieve said: yeah, you can have a performance and look at somebody, but that’s like 10% of it, right? 90% is hearing that person sing, perform, and talk. We’ve always, of course, from our perspective, understood that clean, clear, crisp audio is what is needed in any setting. When you translate what’s happening on the stage into a meeting or collaboration space at a corporation, we’ve thought that that is just equally as important.
And we always had this hunch that if people don’t have the good audio, they’re going to have fatigue, they’re going to get a little disengaged, and the whole meeting is going to become quite unproductive. The research just really amplified that hunch for us because it really depicted the fact that people not only get kind of frustrated and disengaged, they might actually start to distrust what the other person with bad audio is saying or just cast it in a different light. And the degree to which that frustration becomes almost personal was very surprising to us. Like I said, it validated some hunches, but it really put an exclamation point on it for us.
Megan: And Genevieve, based on the research results, I understand that IDC pulled together some recommendations for organizations. What is it that leaders need to know and what is the biggest blind spot for them to overcome as well?
Genevieve: The biggest blind spot is this. If your microphone has poor audio quality, like Chris said, people will literally perceive you as less intelligent and less trustworthy. And by the way, that’s not an opinion. It’s what the science says. Yet when we surveyed first-time business buyers, the number one factor they used to choose audio gear was price. For repeat buyers, however, the top factor flipped to audio quality. My guess is they learned the lesson the hard way. The second blind spot, to Chris’s point, is the stress that bad audio creates. Poor sound forces your brain to work harder to decode what’s being said. That’s a cognitive load, and it creates stress. And over a full day of meetings, that stress adds up. Now, we don’t have long-term studies yet on the effects, but we do know that prolonged stress is something that every company should be working to reduce.
Good audio lightens that cognitive load. It keeps people engaged and it levels the playing field, whether you’re in a room or halfway across the world. And here’s one that’s often overlooked: bad audio can sabotage AI transcription tools. As AI becomes more and more central to everyday work, that starts to become really critical. If your audio isn’t clear, the transcription won’t be accurate. And there’s a world of difference between working with, for example, the consulting department and the insulting department, and that is an actual example from the field.
The bottom line is you fix the audio, you cut friction, you save time, and you make meetings more productive.
Megan: I mean, it’s just a huge game changer, isn’t it, really? I mean, and given that, Chris, in your experience across industries, are audio technologies being included in digital transformation strategies and also artificial intelligence implementation? Do we need a separate audio transformation perhaps?
Chris: Well, like I mentioned earlier, yes, people tend to initially focus on that visual platform, but increasingly the attention to audio is really coming into focus. And I’d hate to tear apart audio as a separate sort of strategy, because at the same time, we, as an audio expert, are trying to really seamlessly integrate audio into the rest of the ecosystem. It really does need to be put on an equal footing with the rest of the components in that ecosystem. And to Genevieve’s point, as we see audio and video systems with more AI functionalities, real-time translation, voice recognition, being able to attribute who said what in a meeting and take action items, it’s all, I think, starting to elevate the importance of that clear audio. It’s got to be part of a really comprehensive collaboration plan that helps a company figure out what its whole digital transformation is about, with audio put on equal footing with the rest of the components in that system.
Megan: Yeah, absolutely. And in the broader landscape, Genevieve, in terms of discussing the importance of audio quality, what have you noticed across research projects about the effects of good and bad audio, not only from that company perspective, but from employee and client perspectives as well?
Genevieve: Well, let’s start with employees.
Megan: Sure.
Genevieve: Bad audio adds friction you don’t need, we’ve talked about this. When you’re straining to hear or make sense of what’s being said, your brain is burning energy on decoding instead of contributing. That frustration, it builds up, and by the end of the day, it hurts productivity. From a company perspective, the stakes get even higher. Meetings are where decisions happen or at least where they’re supposed to happen. And if people can’t hear clearly, decisions get delayed, mistakes creep in, and the whole process slows down. Poor audio doesn’t just waste time, it chips away at the ability to move quickly and confidently. And then there’s the client experience. So whether it’s in sales, customer service, or any external conversation, poor audio can make you sound less credible and even less trustworthy. Again, that’s not my opinion. That’s what the research shows. So that’s quite a big risk when you’re trying to close a deal or solve a major problem.
The takeaway is good audio, it matters, it’s a multiplier. It makes meetings more productive and it can help decisions happen faster and client interactions be stronger.
Megan: It’s just so impactful, isn’t it, in so many different ways. I mean, Chris, how are you seeing these research results reflected as companies work through digital and AI transformations? What is it that leaders need to understand about what is involved in audio implementation across their organization?
Chris: Well, like I said earlier, I do think that audio is finally maybe getting its place in the spotlight a little bit, up there with our cousins over on the video side. Audio, it’s not just a peripheral aspect anymore. It’s a very integral part of that sort of comprehensive collaboration plan I was talking about earlier. And we think about how we can contribute solutions that are really easier for our end users to use, because if you create something complicated, well, we were talking about the days gone by of walking into a room with a very complicated system, where you need to find the right person who knows how to run it. Increasingly, you just need to have some plug-and-play kind of solutions. We’re thinking about a more sustainable strategy for our solutions where we make really high-quality hardware. We’ve done that for a hundred years now. People will come up to me and tell the story of the SM58 microphone they bought in 1980 and how they’re still using it every day.
We know how to do that part of it. If somebody is willing to make that investment upfront, put some high-quality hardware into their system, then we are getting to the point now where updates can be handled via software downloads or cloud connectivity. And just really being able to provide sort of a sustainable solution for people over time.
More broadly in our industry, we’re collaborating with other industry partners to go in that direction: make something that’s very simple for anybody to walk into a room, or onto their individual at-home setup, and do something pretty simple. And I think we have the right industry groups, the right industry associations, that can help make sure that the ecosystems have the proper standards, the right kind of ways to make sure everything is interoperable within a system. We’re all kind of heading in that direction with that end user in mind.
Megan: Fantastic. And when the internet of things was emerging, efforts began to create sort of these data ecosystems. It seems there’s an argument to be made that we need audio ecosystems as well. I wonder, Chris, what might an audio ecosystem look like, and what would be involved in implementation?
Chris: Well, I think it does have to be part of that bigger ecosystem I was just talking about where we do collaborate with others in industry and we try to make sure that we’re all playing by the kind of same set of rules and protocols and standards and whatnot. And when you think about compatibility across all the devices that sit in a room or sit in your, again, maybe your at home setup, making sure that the audio quality is as good as it can be, that you can interoperate with everything else in the system. That’s just become very paramount in our day-to-day work here. Your hardware has to be scalable like I just alluded to a moment ago. You have to figure out how you can integrate with existing technologies, different platforms.
We were joking when we came into this session that when you’re going from the platform at your company, maybe you’re on Teams and you go into a Zoom setting or you go into a Google setting, you really have to figure out how to adapt to all those different sort of platforms that are out there. I think the ecosystem that we’re trying to build, we’re trying to be on that equal footing with the rest of the components in that system. And people really do understand that if you want to have extra functionalities in meetings and you want to be able to transcribe or take notes and all of that, that audio is an absolutely critical piece.
Megan: Absolutely. And speaking of all those different platforms and use cases that audio is so relevant to, Genevieve, that goes back to this idea that in audio, one size does not fit all and needs may change. How can companies also plan their audio implementations to be flexible enough to meet current needs and to be able to grow with future advancements?
Genevieve: I’m glad you asked this question. Even years after the pandemic, many companies are still trying to get the balance right between remote and in-office work, and how to support it. But even if a company has a strict return-to-office in-person policy, the reality is that hybrid work still isn’t going away for that company. They may have teams across cities or countries, and clients and external stakeholders will have their own office preferences that they have to adapt to. Supporting hybrid work is actually becoming more important, not less. And our research shows that companies are leaning into, not away from, hybrid setups. About one third of companies are now redesigning or resizing office spaces every single year. For large organizations with multiple sites and staggered leases, that’s a moving target. It’s really important that they have audio solutions that can work before, during, and after all of those changes that they’re constantly making. And so that’s where flexibility becomes really important. Companies need to buy not just for right now, but for the future.
And so here’s IDC’s kind of pro tip, which is make sure as a company that you go with a provider that offers top-notch audio quality and also has strong partnerships and certifications with the big players in communications technology, because that will save you money in the long run. Your systems will stay compatible, your investments will last longer, and you won’t be scrambling when that next shift happens.
Megan: Of course. And speaking of building for the future, as companies begin to include sustainability in their company goals, Chris, I wonder how can audio play a role in those sustainability efforts and how might that play into perhaps the return on investment in building out a high-quality audio ecosystem?
Chris: Well, I totally agree with what Genevieve just said in terms of hybrid work is not going anywhere. You get all of those big headlines that talk about XYZ company telling people to get back into the office. And I saw a fantastic piece of data just last week that showed the percent of in-office hours of the American workers versus out-of-office remote kind of work. It has basically been flatlined since 2022. This is our new way of working. And of course, like Genevieve mentioned, you have people in all these different locations. And in a strange way, living through the pandemic did teach us that we can do some things by not having to hop on an airplane and travel to go somewhere. Certainly that helps with a more sustainable strategy over time, and you’re saving on travel and able to get things done much more quickly.
And then from a product offering perspective, I’ll go back to the vision I was painting earlier, where we and others in our industry see that we can create great, solid hardware platforms. We’ve done it for decades, and with the advancements around AI, all of our software that enables products, and everything else that has happened in the last decade or so, we can get enhancements, additions, and new functionality to people in simpler ways on existing hardware. I think we’re all careening down this path of having a much more sustainable ecosystem for all collaboration. It’s really quite an exciting time, and that pays off: for any company implementing a system, their ROI is going to be much better in the long run.
Megan: Absolutely. And Genevieve, what trends around sustainability are you seeing? What opportunities do you see for audio to play into those sustainability efforts going forward?
Genevieve: Yeah, similar to Chris. In some industries, there’s still a belief that the best work happens when everyone’s in the same room. And yes, face-to-face time is really important for building relationships, for brainstorming, for closing big deals, but it does come at a cost: the carbon footprint of daily commutes, the sales visits, the constant business travel. And then there’s the basic consideration, as we’ve talked about, of just pure practicality. The good news is that with the right AV setup, especially high-quality audio, many of those interactions can happen virtually without losing effectiveness, as Chris said, and our research shows it.
Our research shows that virtual meetings can be just as productive as in-person ones, and every commute or flight you avoid, of course makes a measurable sustainability impact. I don’t think, personally, that the takeaway is replace all in-person meetings, but instead it’s to be intentional. Use technology to make hybrid meetings seamless, and then be clear on which conversations truly require being in the same physical space. If you can strike that balance, you’re not just making work more efficient, you’re making it more sustainable, you’re also making it more inclusive, and you’re making it more resilient.
Megan: Such an important point. And let’s close with a future forward look, if we can. Genevieve, what innovations or advancements in the audio field are you most excited to see to come to fruition, and what potential interesting use cases do you see on the horizon?
Genevieve: I’m especially interested in how AI and audio are converging. We’re now seeing AI that can identify and isolate human voices in noisy environments. For example, right now, there are some jets flying overhead. It’s very loud in here, but I suspect you may not even know that that’s happening.
Megan: We can’t hear a thing. No.
Genevieve: Right. That technology, it’s pulling voices forward so that conversations like ours are crystal clear. And that’s a big deal, especially as companies invest more and more in AI tools, especially for translating, transcribing, and summarizing meetings. But as we’ve talked about before, AI is only as good as the audio it hears. If the sound is poor or a word gets misheard, the meaning can shift entirely. And sometimes that’s just inconvenient, or it can even be funny. But in really high-stakes settings, like healthcare for example, a single mis-transcribed word can have serious consequences. So that’s why our position is that high-quality audio is critical and necessary for making AI-powered communication accurate, trustworthy, and useful, because when the input is clean, the output can actually live up to its promise.
Megan: Fantastic. And Chris, finally, what are you most excited to see developed? What advancements are you most looking forward to seeing?
Chris: Well, I really do believe that this is one of the most exciting times that I know I’ve lived through in my career. Just the pace of how fast technology is moving, the sudden emergence of all things AI. I was actually in a roundtable session of CEOs yesterday from lots of different industries, and the facilitator was talking about change management internally in companies as you’re going through all of these technology shifts and some of the fear that people have around AI and things like that. And the facilitator asked each of us to give one word that describes how we’re feeling right now. And the first CEO that went used the word dread. And that absolutely floored me because you enter into these eras with some skepticism and trying to figure out how to make things work and go down the right path. But my word was truly optimism.
When I look at all the ways that we are able to deliver better audio to people more quickly, there’s so many opportunities in front of us. We’re working on things outside of AI like algorithms that Genevieve just mentioned that filter out the bad sounds that you don’t want entering into a meeting. We’ve been doing that for quite a long time now. There’s also opportunities to do real time audio improvements, enhancements, make audio more personal for people. How do they want to be able to very simply, through voice commands perhaps, adjust their audio? There shouldn’t have to be a whole lot of techie settings that come along with our solutions.
We should be able to provide improved accessibility and a little bit more equitable meeting experience for people. And we’re looking at technology solutions around immersive audio. How can you maybe feel like you’re a bit more engaged in the meeting, kind of creating some realistic virtual experiences, if you will. There’s just so many opportunities in front of us, and I can just picture a day when you walk into a room and you tell the room, “Hey, call Genevieve. We’re going to have a meeting for an hour, and we might need to have Megan on call to come in at a certain time.”
And all of this will just be very automatic, very seamless, and we’ll be able to see each other and talk at the same time. And this isn’t years away. This is happening really, really quickly. And I do think it’s a really exciting time for audio and just all together collaboration in our industry.
Megan: Absolutely. Sounds like there’s plenty of reason to be optimistic. Thank you both so much.
That was Chris Schyvinck, President and CEO at Shure. And Genevieve Juillard, CEO at IDC, whom I spoke with from Brighton, England.
That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. And if you enjoy this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
2026-01-05 21:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Welcome to Kenya’s Great Carbon Valley: a bold new gamble to fight climate change
In June last year, startup Octavia Carbon began running a high-stakes test in the small town of Gilgil in south-central Kenya. It’s harnessing some of the excess energy generated by vast clouds of steam under the Earth’s surface to power prototypes of a machine that promises to remove carbon dioxide from the air in a manner that the company says is efficient, affordable, and—crucially—scalable.
The company’s long-term vision is undoubtedly ambitious—it wants to prove that direct air capture (DAC), as the process is known, can be a powerful tool to help the world keep temperatures from rising to ever more dangerous levels.
But DAC is also a controversial technology, unproven at scale and wildly expensive to operate. On top of that, Kenya’s Maasai people have plenty of reasons to distrust energy companies. Read the full story.
—Diana Kruzman
This article is also part of the Big Story series: MIT Technology Review’s most important, ambitious reporting. The stories in the series take a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of them here.
AI Wrapped: The 14 AI terms you couldn’t avoid in 2025
If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.
If that’s left you feeling a little confused, fear not. Our writers have taken a look back over the AI terms that dominated the year, for better or worse. Read the full list.
MIT Technology Review’s most popular stories of 2025
2025 was a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more.
As the new year begins, we wanted to give you a chance to revisit some of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Washington’s battle to break up Big Tech is in peril
A string of judges has declined to force tech giants to spin off key assets. (FT $)
+ Here’s some of the major tech litigation we can expect in the next 12 months. (Reuters)
2 Disinformation about the US invasion of Venezuela is rife on social media
And the biggest platforms don’t appear to be doing much about it. (Wired $)
+ Trump shared a picture of captured president Maduro on Truth Social. (NYT $)
3 Here’s what we know about Big Tech’s ties to the Israeli military
AI is central to its military operations, and giant US firms have stepped up to help. (The Guardian)
4 Alibaba’s AI tool is detecting cancer cases in China
PANDA is adept at spotting pancreatic cancer, which is typically tough to identify. (NYT $)
+ How hospitals became an AI testbed. (WSJ $)
+ A medical portal in New Zealand was hacked last week. (Reuters)
5 This Discord community supports people recovering from AI-fueled delusions
They say reconnecting with fellow humans is an important step forward. (WP $)
+ The looming crackdown on AI companionship. (MIT Technology Review)
6 Californians can now demand data brokers delete their personal information
Thanks to a new tool—but there’s a catch. (TechCrunch)
+ This California lawmaker wants to ban AI from kids’ toys. (Fast Company $)
7 Chinese peptides are flooding into Silicon Valley
The unproven drugs promise to heal injuries, improve focus, and reduce appetite—and American tech workers are hooked. (NYT $)
8 Alaska’s court system built an AI assistant to navigate probate
But the project has been plagued by delays and setbacks. (NBC News)
+ Inside Amsterdam’s high-stakes experiment to create fair welfare AI. (MIT Technology Review)
9 These ghostly particles could upend how we think about the universe
The standard model of particle physics may have a crack in it. (New Scientist $)
+ Why is the universe so complex and beautiful? (MIT Technology Review)
10 Sick of the same old social media apps?
Give these alternative platforms a go. (Insider $)
Quote of the day
“Just an unbelievable amount of pollution.”
—Sharon Wilson, a former oil and gas worker who tracks methane releases, tells the Guardian what a thermal imaging camera pointed at xAI’s Colossus data center has revealed.
One more thing

How aging clocks can help us understand why we age—and if we can reverse it
Wrinkles and gray hairs aside, it can be difficult to know how well—or poorly—someone’s body is truly aging. A person who develops age-related diseases earlier in life, or has other biological changes associated with aging, might be considered “biologically older” than a similar-age person who doesn’t have those changes. Some 80-year-olds will be weak and frail, while others are fit and active.
Over the past decade, scientists have been uncovering new methods of looking at the hidden ways our bodies are aging. And what they’ve found is changing our understanding of aging itself. Read the full story.
—Jessica Hamzelou
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ You heard it here first: 2026 is the year of cabbage (yes, cabbage).
+ Darts is bigger than ever. So why are we still waiting for the first great darts video game? 
+ This year’s CES is already off with a bang, courtesy of an essential, cutting-edge vibrating knife.
+ At least one good thing came out of that Stranger Things finale—streams of Prince’s excellent back catalog have soared.
2026-01-05 19:04:46
MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
In an industry in constant flux, sticking your neck out to predict what’s coming next may seem reckless. (AI bubble? What AI bubble?) But for the last few years we’ve done just that—and we’re doing it again.
How did we do last time? We picked five hot AI trends to look out for in 2025, including what we called generative virtual playgrounds, a.k.a. world models (check: From Google DeepMind’s Genie 3 to World Labs’s Marble, tech that can generate realistic virtual environments on the fly keeps getting better and better); so-called reasoning models (check: Need we say more? Reasoning models have fast become the new paradigm for best-in-class problem solving); a boom in AI for science (check: OpenAI is now following Google DeepMind by setting up a dedicated team to focus on just that); AI companies that are cozier with national security (check: OpenAI reversed position on the use of its technology for warfare to sign a deal with the defense-tech startup Anduril to help it take down battlefield drones); and legitimate competition for Nvidia (check, kind of: China is going all in on developing advanced AI chips, but Nvidia’s dominance still looks unassailable—for now at least).
So what’s coming in 2026? Here are our big bets for the next 12 months.
The last year shaped up as a big one for Chinese open-source models. In January, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small firm in China could do with limited resources. By the end of the year, “DeepSeek moment” had become a phrase frequently tossed around by AI entrepreneurs, observers, and builders—an aspirational benchmark of sorts.
It was the first time many people realized they could get a taste of top-tier AI performance without going through OpenAI, Anthropic, or Google.
Open-weight models like R1 allow anyone to download a model and run it on their own hardware. They are also more customizable, letting teams tweak models through techniques like distillation and pruning. This stands in stark contrast to the “closed” models released by major American firms, where core capabilities remain proprietary and access is often expensive.
As a result, Chinese models have become an easy choice. Reports by CNBC and Bloomberg suggest that startups in the US have increasingly recognized and embraced what they can offer.
One popular group of models is Qwen, created by Alibaba, the company behind China’s largest e-commerce platform, Taobao. Qwen2.5-1.5B-Instruct alone has 8.85 million downloads, making it one of the most widely used pretrained LLMs. The Qwen family spans a wide range of model sizes alongside specialized versions tuned for math, coding, vision, and instruction-following, a breadth that has helped it become an open-source powerhouse.
Other Chinese AI firms that were previously unsure about committing to open source are following DeepSeek’s playbook. Standouts include Zhipu’s GLM and Moonshot’s Kimi. The competition has also pushed American firms to open up, at least in part. In August, OpenAI released its first open-source model. In November, the Allen Institute for AI, a Seattle-based nonprofit, released its latest open-source model, Olmo 3.
Even amid growing US-China antagonism, Chinese AI firms’ near-unanimous embrace of open source has earned them goodwill in the global AI community and a long-term trust advantage. In 2026, expect more Silicon Valley apps to quietly ship on top of Chinese open models, and look for the lag between Chinese releases and the Western frontier to keep shrinking—from months to weeks, and sometimes less.
—Caiwei Chen
The battle over regulating artificial intelligence is heading for a showdown. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to handcuff states from keeping the growing industry in check. In 2026, expect more political warfare. The White House and states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulations, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.
Under Trump’s executive order, states may fear being sued or starved of federal funding if they clash with his vision for light-touch regulation. Big Democratic states like California—which just enacted the nation’s first frontier AI law, requiring companies to publish safety-testing details for their AI models—will take the fight to court, arguing that only Congress can override state laws. But states that can’t afford to lose federal funding, or fear getting in Trump’s crosshairs, might fold. Still, expect to see more state lawmaking on hot-button issues, especially where Trump’s order gives states a green light to legislate. With chatbots accused of triggering teen suicides and data centers sucking up more and more energy, states will face mounting public pressure to push for guardrails.
In place of state laws, Trump promises to work with Congress to establish a federal AI law. Don’t count on it. Congress failed to pass a moratorium on state legislation twice in 2025, and we aren’t holding out hope that it will deliver its own bill this year.
AI companies like OpenAI and Meta will continue to deploy powerful super-PACs to support political candidates who back their agenda and target those who stand in their way. On the other side, super-PACs supporting AI regulation will build their own war chests to counter. Watch them duke it out at next year’s midterm elections.
The further AI advances, the more people will fight to steer its course, and 2026 will be another year of regulatory tug-of-war—with no end in sight.
—Michelle Kim
Imagine a world in which you have a personal shopper at your disposal 24-7—an expert who can instantly recommend a gift for even the trickiest-to-buy-for friend or relative, or trawl the web to draw up a list of the best bookcases available within your tight budget. Better yet, they can analyze a kitchen appliance’s strengths and weaknesses, compare it with its seemingly identical competition, and find you the best deal. Then once you’re happy with their suggestion, they’ll take care of the purchasing and delivery details too.
But this ultra-knowledgeable shopper isn’t a clued-up human at all—it’s a chatbot. This is no distant prediction, either. Salesforce recently said it anticipates that AI will drive $263 billion in online purchases this holiday season. That’s some 21% of all orders. And experts are betting on AI-enhanced shopping becoming even bigger business within the next few years. By 2030, agentic commerce will generate between $3 trillion and $5 trillion annually, according to research from the consulting firm McKinsey.
Unsurprisingly, AI companies are already heavily invested in making purchasing through their platforms as frictionless as possible. Google’s Gemini app can now tap into the company’s powerful Shopping Graph data set of products and sellers, and can even use its agentic technology to call stores on your behalf. Meanwhile, back in November, OpenAI announced a ChatGPT shopping feature capable of rapidly compiling buyer’s guides, and the company has struck deals with Walmart, Target, and Etsy to allow shoppers to buy products directly within chatbot interactions.
Expect plenty more of these kinds of deals to be struck within the next year as consumer time spent chatting with AI keeps on rising, and web traffic from search engines and social media continues to plummet.
—Rhiannon Williams
I’m going to hedge here, right out of the gate. It’s no secret that large language models spit out a lot of nonsense. Short of monkeys-and-typewriters luck, LLMs won’t discover anything by themselves. But they still have the potential to extend the bounds of human knowledge.
We got a glimpse of how this could work in May, when Google DeepMind revealed AlphaEvolve, a system that used the firm’s Gemini LLM to come up with new algorithms for solving unsolved problems. The breakthrough was to combine Gemini with an evolutionary algorithm that checked its suggestions, picked the best ones, and fed them back into the LLM to make them even better.
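The loop described above (propose with an LLM, score the proposals with an automatic evaluator, keep the winners, and feed them back) can be sketched in a few lines of Python. This toy version is my own illustration, not DeepMind's code: a random mutation function stands in for Gemini's suggestions, and a simple numeric objective stands in for the real evaluator.

```python
import random

def evaluate(candidate):
    # Automatic evaluator: a toy objective to maximize.
    # In AlphaEvolve this would be a real check, e.g. a benchmark score.
    return -sum((x - 3.0) ** 2 for x in candidate)

def llm_propose(parent):
    # Stand-in for the LLM: perturb the current best candidate.
    # A real system would prompt a model with the parent and its score.
    return [x + random.gauss(0, 0.5) for x in parent]

def evolve(seed, generations=200, children=8):
    # Keep the best candidate seen so far; each generation, ask the
    # "proposer" for variants and keep any that score higher.
    best, best_score = seed, evaluate(seed)
    for _ in range(generations):
        for _ in range(children):
            child = llm_propose(best)
            score = evaluate(child)
            if score > best_score:
                best, best_score = child, score
    return best, best_score

random.seed(0)
best, score = evolve([0.0, 0.0, 0.0])
```

Swapping `llm_propose` for a call to an actual model, and `evaluate` for a compiler, test suite, or benchmark, gives the basic shape of the published systems.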
Google DeepMind used AlphaEvolve to come up with more efficient ways to manage power consumption by data centers and Google’s TPU chips. Those discoveries are significant but not game-changing. Yet. Researchers at Google DeepMind are now pushing their approach to see how far it will go.
And others have been quick to follow their lead. A week after AlphaEvolve came out, Asankhaya Sharma, an AI engineer in Singapore, shared OpenEvolve, an open-source version of Google DeepMind’s tool. In September, the Japanese firm Sakana AI released a version of the software called ShinkaEvolve. And in November, a team of US and Chinese researchers revealed AlphaResearch, which they claim improves on one of AlphaEvolve’s already better-than-human math solutions.
There are alternative approaches too. For example, researchers at the University of Colorado Denver are trying to make LLMs more inventive by tweaking the way so-called reasoning models work. They have drawn on what cognitive scientists know about creative thinking in humans to push reasoning models toward solutions that are more outside the box than their typical safe-bet suggestions.
Hundreds of companies are spending billions of dollars looking for ways to get AI to crack unsolved math problems, speed up computers, and come up with new drugs and materials. Now that AlphaEvolve has shown what’s possible with LLMs, expect activity on this front to ramp up fast.
—Will Douglas Heaven
For a while, lawsuits against AI companies were pretty predictable: Rights holders like authors or musicians would sue companies that trained AI models on their work, and the courts generally found in favor of the tech giants. AI’s upcoming legal battles will be far messier.
The fights center on thorny, unresolved questions: Can AI companies be held liable for what their chatbots encourage people to do, as when they help teens plan suicides? If a chatbot spreads patently false information about you, can its creator be sued for defamation? If companies lose these cases, will insurers shun AI companies as clients?
In 2026, we’ll start to see the answers to these questions, in part because some notable cases will go to trial (the family of a teen who died by suicide will bring OpenAI to court in November).
At the same time, the legal landscape will be further complicated by President Trump’s executive order from December—see Michelle’s item above for more details on the brewing regulatory storm.
No matter what, we’ll see a dizzying array of lawsuits in all directions (not to mention some judges even turning to AI amid the deluge).
—James O’Donnell
2026-01-02 19:00:00
The Italian neurosurgeon Sergio Canavero has been preparing for a surgery that might never happen. His idea? Swap a sick person’s head—or perhaps just the brain—onto a younger, healthier body.
Canavero caused a stir in 2017 when he announced that a team he advised in China had exchanged heads between two corpses. But he never convinced skeptics that his technique could succeed, or that a procedure on a live person was imminent, as he claimed. The Chicago Tribune labeled him the “P.T. Barnum of transplantation.”
Canavero withdrew from the spotlight. But the idea of head transplants isn’t going away. Instead, he says, the concept has recently been getting a fresh look from life-extension enthusiasts and stealth Silicon Valley startups.
It’s been rocky. After he began publishing his surgical ideas a decade ago, Canavero says, he got his “pink slip” from the Molinette Hospital in Turin, where he’d spent 22 years on staff. “I’m an out-of-the-establishment guy. So that has made things harder, I have to say,” he says.
In his telling, no other solution to aging is on the horizon. “It’s become absolutely clear over the past years that the idea of some incredible tech to rejuvenate elderly people—happening in some secret lab, like Google—is really going nowhere,” he says. “You have to go for the whole shebang.”
He means getting a new body, not just one new organ. Canavero has an easy mastery of English idioms and an unexpected Southern twang. He says that’s due to a fascination with American comics as a child. “For me, learning the language of my heroes was paramount,” he says. “So I can shoot the breeze.”
Canavero is now an independent investigator and has advised entrepreneurs who want to create brainless human clones as a source of DNA-matched organs that wouldn’t get rejected by a recipient’s immune system. “I can tell you there are guys from top universities involved,” he says.
Combining the necessary technologies, like reliably precise surgical robots and artificial wombs to grow the clones, is going to be complex and very, very expensive. Canavero lacks the funds to take his plans further, but he believes “the money is out there” for a commercial moonshot project: “What I say to the billionaires is ‘Come together.’ You will all have your own share, plus make yourselves immortal.”
2026-01-02 19:00:00
My daughter introduced me to El Estepario Siberiano’s YouTube channel a few months back, and I have been obsessed ever since. The Spanish drummer (real name: Jorge Garrido) posts videos of himself playing supercharged cover versions of popular tracks, hitting his drums with such jaw-dropping speed and technique that he makes other pro drummers shake their heads in disbelief. The dozens of reaction videos posted by other musicians are a joy in themselves.

Garrido is up-front about the countless hours that it took to get this good. He says he sat behind his kit almost all day, every day for years. At a time when machines appear to do it all, there’s a kind of defiance in that level of human effort. It’s why my favorites are Garrido’s covers of electronic music, where he out-drums the drum machine. Check out his version of Skrillex and Missy Elliot’s “Ra Ta Ta” and tell me it doesn’t put happiness in your heart.
Watching Sora videos of Michael Jackson stealing a box of chicken nuggets or Sam Altman biting into the pink meat of a flame-grilled Pikachu has given me flashbacks to an Ed Atkins exhibition at Tate Britain I saw a few months ago. Atkins is one of the most influential and unsettling British artists of his generation. He is best known for hyper-detailed CG animations of himself (pore-perfect skin, janky movement) that play with the virtual representation of human emotions.

In The Worm we see a CGI Atkins make a long-distance call to his mother during a covid lockdown. The audio is from a recording of an actual conversation. Are we watching Atkins cry or his avatar? Our attention flickers between two realities. “When an actor breaks character during a scene, it’s known as corpsing,” Atkins has said. “I want everything I make to corpse.” Next to Atkins’s work, generative videos look like cardboard cutouts: lifelike but not alive.
What’s it like to be a pet? Australian author Laura Jean McKay’s debut novel, The Animals in That Country, will make you wish you’d never asked. A flu-like pandemic leaves people with the ability to hear what animals are saying. If that sounds too Dr. Dolittle for your tastes, rest assured: These animals are weird and nasty. A lot of the time they don’t even make any sense.

With everybody now talking to their computers, McKay’s book resets the anthropomorphic trap we’ve all fallen into. It’s a brilliant evocation of what a nonhuman mind might contain—and a meditation on the hard limits of communication.