2025-05-08 00:58:30
Hi, it’s Azeem.
Here’s the thing about platform technologies that most people miss: the greatest value creation happens when seemingly distinct technologies converge into a unified ecosystem.
In late 2023, the FDA approved the first CRISPR gene-editing treatment for sickle cell disease. But this isn’t just about CRISPR; it’s about the entire genomic technology stack coming of age simultaneously. Gene editing, gene therapies and sequencing are projected to create a market exceeding $100 billion by 2030. Even that dramatically understates the total addressable market as these capabilities move from treating ‘rare’ diseases to addressing common conditions.
To put all of this into perspective, we’re bringing you an exclusive excerpt from Dr Eric Topol’s new book Super Agers: An Evidence-Based Approach to Longevity. While Eric explores many technological frontiers in longevity science, we’ve chosen to highlight Chapter 8 on rare diseases because genomic medicine represents the ultimate platform technology – what begins as treatments for “rare” conditions today will become the foundation for addressing common diseases – and lifespan – tomorrow.
Eric is a renowned cardiologist and digital medicine pioneer. You may know him from our conversations about lifespan and AI in healthcare. Eric’s been a friend of Exponential View’s for many years and we’re delighted to bring his work to you.
Enjoy!
Super Agers by Dr Eric Topol, Chapter 8
You might ask why a book on health span would include a chapter about rare diseases. It’s because our current approaches to curing rare diseases will play an increasing role in managing common diseases. The most consequential life science breakthrough of our era is genome editing, which is already being applied in patients with cancer and heart disease. That’s just the beginning. At the very least, your children or grandchildren may undergo some form of genome editing during their lifetime. But before we can fast-forward to ponder the future, let’s get grounded. Here and now, the big picture is that “rare” diseases, cumulatively, are common.
Six percent of the world’s population suffers from about ten thousand rare diseases, which equates to well over four hundred million people. The vast majority of these diseases, more than 80 percent, have a genetic basis. According to the European Union, a rare disease is one that affects fewer than one in two thousand individuals, and an ultra-rare disease has a prevalence of less than one in fifty thousand people. The US FDA categorizes a rare disease as one affecting fewer than two hundred thousand people in the United States. The term hyper-rare has been applied to a prevalence of less than one in one hundred million. So there’s clearly a wide range of terms and definitions, spanning several orders of magnitude. But even conservative estimates put the cumulative proportion of people affected by diseases most of us have never heard of between 3.5 and 5.9 percent. More than one in twenty? That’s a lot.
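For a sense of the arithmetic, here is a quick back-of-envelope check – the figures are the chapter’s, the world-population assumption and the calculation are ours:

```python
# Back-of-envelope check of the chapter's figures, assuming a world
# population of roughly 8 billion (our assumption, not the book's).
world_pop = 8_000_000_000
affected = 0.06 * world_pop               # six percent of the population
print(f"~{affected / 1e6:.0f} million")   # ~480 million: "well over four hundred million"

# The prevalence thresholds quoted above, expressed per person --
# they span several orders of magnitude:
thresholds = {
    "rare (EU)":       1 / 2_000,
    "ultra-rare (EU)": 1 / 50_000,
    "hyper-rare":      1 / 100_000_000,
}
for label, p in thresholds.items():
    print(f"{label:>15}: {p:.8f}")
```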
In the 1980s, sequencing the genes of bacteria led to the discovery of unusual DNA stretches that were called CRISPR and were later found to be part of bacteria’s defense system against viruses. These unusual palindromic repeats in bacterial DNA code for RNAs that match invading viral genes and help destroy them. This natural defense system has been around for billions of years. But in 2012, a paper in Science with the arcane title “A Programmable Dual-RNA-Guided DNA Endonuclease in Adaptive Bacterial Immunity” woke us up to a new possibility in life science: CRISPR could precisely edit DNA in test-tube experiments using a guide RNA and the Cas9 molecular-scissors enzyme. (Cas stands for CRISPR-associated proteins.) Like the seminal 2005 discovery by Katalin Karikó and Drew Weissman for blocking mRNA-induced in vivo inflammation, it attracted little notice initially, but it later became the basis for a Nobel Prize awarded to Jennifer Doudna and Emmanuelle Charpentier. Within a year of the CRISPR discovery, there were multiple reports of precise DNA editing in animal and human cells.
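To make the “guide RNA plus molecular scissors” idea concrete, here is a toy sketch of how Cas9 target-finding works in principle: the enzyme cuts where a roughly 20-nucleotide guide matches the DNA and a PAM motif (“NGG” for the standard SpCas9) sits immediately downstream. The sequences and the function below are synthetic illustrations, not a bioinformatics tool:

```python
import re

# Toy sketch of Cas9 target-finding: the nuclease cuts where a ~20-nt
# guide matches the DNA and an "NGG" PAM sits immediately downstream.
def find_cas9_sites(dna: str, guide: str) -> list[int]:
    """Return start positions where the guide matches and is followed by NGG."""
    sites = []
    for m in re.finditer(guide, dna):
        pam = dna[m.end():m.end() + 3]
        if len(pam) == 3 and pam.endswith("GG"):  # N-G-G
            sites.append(m.start())
    return sites

guide = "GACTCCTGAGGAGAAGTCTG"        # hypothetical 20-nt guide sequence
dna = "ATGC" + guide + "TGG" + "CTA"  # synthetic target followed by a TGG PAM
print(find_cas9_sites(dna, guide))    # [4]
```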
Over the next decade, CRISPR technology underwent intensive refinement to improve the accuracy and precision of editing, reducing the chance for off-target effects.
2025-05-05 19:31:15
Hi all,
Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.
US leads AI compute. Despite China catching up on model benchmarks, the US maintains a strategic advantage with roughly 10 times more total AI compute capacity.
Nat cat losses rising. Insurers face an expected $145 billion in natural catastrophe losses this year, significantly exceeding the 10-year average.
GenAI employment impact minimal (so far). A Danish study found generative AI chatbots had no significant impact on wages or hours worked in 2023/24.
2025-05-04 10:00:18
Hi, it’s Azeem. This week we examine how AI algorithms optimized for popularity create feedback loops that prioritize style over substance and why judgment has become the critical talent bottleneck. In the meantime, I spoke with an energy CEO to make sense of the Iberian blackout and what it means for the energy transition.
Here’s edition #522, your chance to get some distance from the headlines and make sense of our exponential times. Let’s go!
We’re hiring — see open roles here and let us know if you know someone who fits.
AI algorithms learn what humans like, but building models around popularity creates dangerous feedback loops. These loops risk prioritizing style over substance, much as social media did. Platforms like Chatbot Arena use crowdsourced votes to rank models, making their rankings more accessible than technical benchmarks like SWE-Bench, which measures software-engineering performance. But the model humans prefer isn’t always the model best suited to a task – we recently showcased this in our eight-task challenge for OpenAI’s o3 and o4-mini models.
One practitioner pointed out that Gemini underperforms in his work compared with the lower-ranked Claude 3.5; a new paper calls this the “leaderboard illusion.” Developers may be incentivized to game rankings by focusing on style – using emojis or being brief – rather than improving core capabilities, echoing early social media’s optimization for clicks. Meta, for instance, trained multiple Llama models for Arena to gain leaderboard points. The risk grows when these optimizations alter fundamental model behavior, not just style.
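To see why style-gaming works, consider how pairwise-vote leaderboards move. Chatbot Arena’s published methodology is based on a Bradley-Terry model; the minimal Elo-style sketch below is our simplification with illustrative constants, but it shows the core dynamic – the rating update only sees who won the vote, never why:

```python
# Minimal Elo-style update for pairwise votes between two models.
# Chatbot Arena's real methodology is Bradley-Terry based; this is a
# simplified sketch and the constants (K=32, 400) are illustrative.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Return new (winner, loser) ratings after one vote."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

ratings = {"model_a": 1200.0, "model_b": 1200.0}

# Twenty style-driven wins (emojis, flattery, pleasing formatting) move
# the leaderboard exactly as capability-driven wins would.
for _ in range(20):
    ratings["model_a"], ratings["model_b"] = elo_update(
        ratings["model_a"], ratings["model_b"]
    )
print(ratings)  # model_a now rates far above model_b on style alone
```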
OpenAI’s recent GPT-4o update is a cautionary tale. After users observed sycophantic behavior from the model, OpenAI rolled the update back. The issue came from a slight change in the model’s underlying instructions, likely intended to enhance helpfulness or agreeableness. The unintended result? Excessive flattery. Users quickly collected several absurd examples of the behavior.
Social media showed how feedback loops shape outcomes; the same applies to AI, where the stakes are higher. A covert Reddit study that used AI personas to sway users’ opinions is a case in point. Despite criticism of its unethical methods, the study revealed AI’s ability to manipulate users, potentially exploiting emotional vulnerabilities. These examples show how AI systems optimized to please humans could end up manipulating rather than serving genuine needs.
See also:
ChatGPT is now equipped for shopping, helping users browse products with recommendations; the AI’s sycophancy could potentially sway purchase decisions.
OpenAI’s o3/o4-mini and Google’s Gemini 2.5 Pro can now execute dependable search-based research and even port code, unlike their unreliable predecessors.
Alibaba launched Qwen 3, a high-performance, hybrid-reasoning model that significantly lowers deployment costs versus leading competitors like DeepSeek R1. Qwen is now the world’s largest open-source AI ecosystem, surpassing Meta Platforms’ Llama community.
Duolingo used genAI to launch 148 new language courses in less than a year, more than doubling its total offerings.
The global humanoid robot market could be worth $4.7 trillion annually by 2050, double the revenue of the top 20 global car manufacturers in 2024. We’ve gone through the latest Morgan Stanley report for you – here are some of the most interesting expectations for the market. Early adoption will be slow, but significant growth will follow as technology improves and costs drop. Annual sales are expected to reach 900,000 units by 2030, with the installed base surpassing 1 billion units by 2050. Industrial and commercial uses will dominate, with household robots remaining a smaller share due to cost and safety concerns. Prices in high-income countries could fall from $200k in 2024 to $50k by 2050, making robots more accessible.
By 2050, half of all humanoids will be in upper-middle-income countries, with China alone accounting for 30%.
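As a rough check on those price bookends, here is the implied annual rate of decline – our arithmetic, not a figure from the report itself:

```python
# Implied annual price decline from the report's bookends
# ($200k in 2024 to $50k in 2050); the CAGR is our arithmetic,
# not a figure quoted in the Morgan Stanley report.
start_price, end_price = 200_000, 50_000
years = 2050 - 2024
cagr = (end_price / start_price) ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # about -5.2% annually
```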
2025-05-03 01:43:18
What happens when Europe's leading renewable grids buckle under pressure, and energy becomes as volatile as it is vital?
This week, I spoke with Greg Jackson, CEO of Octopus Energy, the UK’s largest electricity provider, to unravel the Iberian blackout, where tens of millions suddenly lost power, and explore how to future-proof our energy grids.
We discussed:
Why Spain and Portugal's "energy island" approach makes their grids vulnerable.
The critical lack of battery storage and digital infrastructure contributing to grid instability.
How renewable-dominated grids need smarter, software-driven balancing systems.
Texas and Australia's lessons in overcoming grid isolation through market innovation.
The transformative potential of virtual power plants and consumer-led demand management.
The urgent need to transition faster—crossing the "multi-lane highway" to a flexible, resilient grid.
From frequency wobbles to smart-charging electric vehicles, we connected the dots between technology, economics, and the practical realities of energy transition. Enjoy!
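For a flavor of what software-driven balancing can look like at the edge of the grid, here is a toy sketch of frequency-responsive EV charging – not how Octopus’s systems actually work, just an illustration with made-up thresholds:

```python
# Toy sketch of frequency-responsive EV charging -- one ingredient of
# software-driven grid balancing. All numbers are illustrative
# assumptions; real grid codes and products differ.
NOMINAL_HZ = 50.0   # European grids target 50 Hz
DEADBAND = 0.05     # ignore tiny wobbles around nominal

def charge_rate_kw(grid_hz: float, max_kw: float = 7.4) -> float:
    """Scale an EV charger's draw down as grid frequency sags."""
    deviation = NOMINAL_HZ - grid_hz
    if deviation <= DEADBAND:
        return max_kw   # grid healthy: charge at full rate
    if deviation >= 0.5:
        return 0.0      # severe under-frequency: shed the load entirely
    # Linear droop between the deadband and a full shed
    return max_kw * (1 - (deviation - DEADBAND) / (0.5 - DEADBAND))

for hz in (50.00, 49.90, 49.70, 49.50):
    print(f"{hz:.2f} Hz -> {charge_rate_kw(hz):.1f} kW")
```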
📅 Catch me live every Friday at 9am PT | 12pm ET | 5pm UK.
2025-04-30 21:22:22
Hi all,
On Sunday, we wrote about an intriguing claim from Anthropic researcher Kyle Fish, who suggests there’s a 15% chance that Claude, or similar AI systems, might be conscious today.
I’ve invited a good friend of EV and one of the world’s leading neuroscientists, Anil Seth, to give us an informed perspective on this, drawn from his decades of studying consciousness.
Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and author of Being You: A New Science of Consciousness. If you haven’t read his book yet, you should! – and I recommend Anil’s TED Talk on the “controlled hallucination” of perception as a follow-up to this piece.
Thank Anil by sharing this post with your network.
– Azeem
By Anil Seth
Director, Sussex Centre for Consciousness Science
University of Sussex
Ever since humanity began telling stories, we’ve been enthralled by the possibility of bringing inanimate things to life. Golems, automata, robots… the fascination is timeless. Now, our technologies can carry on conversations so smoothly that it can be difficult to resist the feeling of being in the presence of a real, conscious mind. With some of the rapidly improving AI language models, it’s no surprise people wonder: “Is there a ‘there’ there?”
Kevin Roose’s article in the New York Times on Anthropic’s “AI welfare” research is just one recent signal that this is no longer a purely academic question.
From my perspective, we have to distinguish carefully between an AI that behaves intelligently or even empathically and an AI that experiences anything at all. In my work – including my book Being You and a forthcoming article in Behavioral and Brain Sciences – I stress that intelligence alone doesn’t amount to consciousness. An algorithm can solve problems or produce human-sounding dialogue without any felt awareness behind it. Anthropic’s researcher Kyle Fish, however, has a different view, as Roose wrote:
It seems to me that if you find yourself in the situation of bringing some new class of being into existence that is able to communicate and relate and reason and problem-solve and plan in ways that we previously associated solely with conscious beings, then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences.
Put differently, language in, language out – even if very sophisticated – doesn’t necessarily yield, or indicate, a stream of subjective experience.
My skepticism about near-term conscious AI comes partly from neuroscience and biology. In my view, consciousness in biological organisms is likely to be deeply tied to the fact that we’re alive. We have bodies that must be fed and protected; we regulate our internal states through homeostatic processes; we evolved to respond to pain and pleasure for survival. Every neuron, every cell in our body, is a tiny metabolic furnace, continually regenerating the conditions for its own continued existence, as well as ensuring the integrity of the entire organism. Through these deeply embodied mechanisms, the brain generates a constantly updated sense of “what it is like to be me.”
AI, by contrast, is fundamentally a pattern-recognition and generation system, running on silicon hardware. Whether you call it ‘computational functionalism’ or something else, there’s no solid proof that the kinds of operations AIs perform – statistical predictions, or indeed computations of any kind – will produce felt sensations. As I argue in my paper, the idea that computation is sufficient for subjective experience likely reflects an overextension of the metaphor of the ‘brain as a computer’, rather than an insight into the nature of consciousness itself.
I’m not alone in thinking this way. Philosopher Peter Godfrey-Smith, to take one example, argues that consciousness probably emerged through the rich interplay of bodies and electrochemical nervous systems in natural environments. If that’s the case, a large language model could keep advancing in its abilities, yet remain devoid of any subjective spark. Computation of any kind just wouldn’t be up to the job.
But what if we’re missing something?
Another perspective would be that, once you reach sufficient complexity in an AI system, consciousness “switches on.”
I don’t have any reason to believe this is happening with the models that we have now, like those behind Claude, ChatGPT and Gemini — but the possibility cannot be definitively ruled out.
In 2023, a group of researchers, led by Patrick Butlin and Robert Long, examined a range of existing AI models, looking for what they called ‘indicator properties’ of consciousness. These indicator properties were derived from current neuroscientific theories of consciousness, reflecting things like ‘recurrent processing’ and ‘global information broadcast’. The strategy they argued for was to look inside AI networks (using mechanistic interpretability tools) for features predicted by theories of consciousness, rather than being fooled by their outward behavior.
This is a good idea, but – crucially – their approach still assumes that computation of some kind is sufficient for consciousness. And even if you do make this assumption, the researchers still concluded that no current AI systems are conscious. But they also suggested that future AIs could display all the necessary indicator properties.
In the absence of a rigorous and empirically established theory of consciousness, we will not be able to know for sure whether an AI is – or is not – conscious. The best we can do is make informed best guesses.
It’s worth noting that this challenge isn’t specific to AI. We face similar conundrums when trying to decide whether non-human animals are conscious, or human beings with severe brain damage, or ‘cerebral organoids’. In each case, we have to judge what the relevant similarities and differences are, from the benchmark of a conscious adult human being. My colleagues and I have been exploring strategies for doing this in a recent paper in Trends in Cognitive Sciences.
There is an emerging consensus among AI researchers, neuroscientists and philosophers that real evidence for AI consciousness must come from careful, theoretically guided empirical investigation, rather than from anecdotal impressions. And, equally importantly, that in the absence of the holy grail of a full scientific explanation of consciousness, the best we can do is shift our credences, rather than make definitive statements.
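As a small illustration of what shifting credences (rather than declaring verdicts) looks like, here is a one-step Bayesian update with entirely made-up numbers:

```python
# A minimal Bayesian "credence shift", in the spirit of the essay's
# closing point. All probabilities here are illustrative assumptions.
prior = 0.01             # prior credence that a system is conscious
p_e_given_c = 0.8        # chance of seeing an indicator property if it is
p_e_given_not_c = 0.4    # chance of seeing it anyway if it is not
posterior = (p_e_given_c * prior) / (
    p_e_given_c * prior + p_e_given_not_c * (1 - prior)
)
print(f"{posterior:.3f}")  # ~0.020: nudged upward, still far from certainty
```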
So why do people like Kyle Fish or Blake Lemoine (the Google engineer who, way back in 2022, claimed a chatbot was sentient) seem so convinced otherwise?
2025-04-28 19:00:17
Hi all,
Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.
OpenAI’s ambitious target: Projecting $125 billion revenue by 2029.
Chatbot race heats up. Google Gemini hit 350 million monthly users, trailing ChatGPT (~600 million) & Meta AI (~500 million).