2025-02-21 21:41:23
Hi friends 👋,
Happy Friday and welcome back to our 132nd Weekly Dose of Optimism. This isn’t a Chocolate Factory, but all this Gene stuff is about to get a lot Wilder. If that makes zero sense to you now, keep reading.1
Let’s get to it.
Are you incurring huge tax bills when you sell your appreciated stocks, or holding too much in one or two stocks? The Cache Exchange Fund, benchmarked to the Nasdaq-100, is built to diversify your stocks while deferring taxes. Everyone from engineers to CEOs has turned to Cache to manage their concentrated stocks.
New investors are onboarded bi-weekly, with the next fund closing on Feb 28th.
Learn more about the Cache Exchange Fund.
(1) Functionally blind man gets sight back after gene therapy
Fergal Bowers for RTE via Irish Friend of the Dose Will O’Brien
A 31-year-old Sligo man who was functionally blind has got his sight back, after being treated with a new gene therapy at the Mater University Hospital in Dublin. He is the first patient to receive the ground-breaking ocular gene therapy treatment 'Luxturna' in Ireland.
Would you look at that!
You know the world is becoming a better place when you have to use like the fifth best joke about a blind person regaining sight to intro a story because you’ve used the other top four jokes already to cover other stories about a blind person regaining sight.
A functionally blind Irish man regained much of his vision after receiving Luxturna, a groundbreaking ocular gene therapy, at Mater University Hospital in Dublin. He had previously suffered from a rare inherited retinal dystrophy for over a decade but experienced significant improvement within weeks of treatment. The procedure involves injecting a modified virus carrying a functional copy of the faulty gene into the retina (which doesn’t sound super pleasant tbh), enabling cells to produce the missing enzyme needed for vision. Professor David Keegan, the lead surgeon, called this a major step in precision medicine, with a national registry now being developed to identify more patients.
I can’t wait to see what happens next.
(2) Rare genetic disorder treated in womb for the first time
Smriti Mallapaty for Nature
A two-and-a-half-year-old girl shows no signs of a rare genetic disorder, after becoming the first person to be treated for the motor-neuron condition while in the womb. The child’s mother took the gene-targeting drug during late pregnancy, and the child continues to take it.
More cool gene therapy stuff.
For the first time, a fetus was successfully treated for spinal muscular atrophy (SMA), a severe genetic disorder that typically leads to fatal motor neuron degeneration in infancy. The child’s mother took risdiplam, a gene-targeting drug, during the final six weeks of pregnancy, and the child has continued treatment post-birth. Now nearly three years old, the child shows no signs of the disease, which is unprecedented for cases this severe. Hell yeah!
This breakthrough suggests that early, in-utero intervention could significantly improve outcomes for SMA, which remains a leading genetic cause of infant mortality. The idea originated from the parents (talk about being good parents!), who had lost a previous child to the disease, and the FDA approved the treatment for this one case (go…FDA?). This is a one-of-one case, but if replicated, this approach could shift how we treat diseases before birth.
(3) Genome modeling and design across all domains of life with Evo 2
From Arc Institute
Evo 2 learns from DNA sequence alone to accurately predict the functional impacts of genetic variation—from noncoding pathogenic mutations to clinically significant BRCA1 variants—without task-specific finetuning. Applying mechanistic interpretability analyses, we reveal that Evo 2 autonomously learns a breadth of biological features, including exon–intron boundaries, transcription factor binding sites, protein structural elements, and prophage genomic regions.
If you think curing blindness and rare genetic disorders in the womb with genetic therapy is crazy, then oh baby, you better strap the fuck in, because the next decade is going to be wild.
Evo 2 is a new AI model from a team at the Arc Institute that learns the “language” of DNA, allowing it to predict the effect of mutations, design new sequences, and influence gene expression.
The model is trained on 9.3 trillion DNA base pairs across all domains of life. It predicts whether mutations are harmful or benign without specific training on human disease data, outperforming specialized models and accurately handling noncoding mutations—suggesting it has internalized DNA’s fundamental principles. That’s right: Evo 2 has learned the core rules of DNA on its own, allowing it to predict genetic effects across all life forms without being specifically trained on human disease data.
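To make “predicting the effect of mutations” a little more concrete, here’s a minimal sketch of how likelihood-based variant scoring with a DNA language model generally works. To be clear: the `sequence_log_likelihood` interface and the threshold below are hypothetical stand-ins for illustration, not Evo 2’s actual API.

```python
# Illustrative sketch: score a single-base variant by asking a DNA language
# model how "surprised" it is by the mutated sequence vs. the reference.
# The model interface here (sequence_log_likelihood) is a hypothetical
# stand-in, not Evo 2's actual API.

def score_variant(model, reference: str, position: int, alt_base: str) -> float:
    """Log-likelihood ratio of mutant vs. reference; lower = more suspicious."""
    mutant = reference[:position] + alt_base + reference[position + 1:]
    return (model.sequence_log_likelihood(mutant)
            - model.sequence_log_likelihood(reference))

# Hypothetical usage:
# delta = score_variant(model, brca1_window, position=1234, alt_base="T")
# if delta < -10:  # made-up threshold
#     print("likely deleterious")
```

The point from the paper is that one general model can do this kind of scoring across coding and noncoding DNA alike, with no disease-specific fine-tuning step.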
Evo 2 can create entire DNA sequences, like those for mitochondria and bacteria, in a way that looks natural. It can also control how genes are turned on or off by influencing DNA structure. As a test, researchers used it to encode Morse code into DNA, showing its potential for designing custom genetic systems.
One more time for those in the back: like God and/or evolution, Evo 2 can create entire DNA sequences.
Better yet, Evo 2 is fully open-source, making genome design more accessible and increasing the pace of innovation in bioengineering. In the future, we won’t just study biology, we’ll code it.
(4) Accelerating scientific breakthroughs with an AI co-scientist
Juraj Gottweis, Google Fellow, and Vivek Natarajan, Research Lead
We introduce AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.
If it wasn’t yet abundantly clear already, the pace of scientific research and discovery is about to absolutely take off. To that end, this week Google introduced AI co-scientist, a multi-agent system designed to assist researchers by generating novel hypotheses, research plans, and experimental protocols.
This isn’t just a powerful LLM that churns through existing research and summarizes the literature. AI co-scientist mimics the scientific method, using specialized agents to iteratively refine ideas and improve output quality, allowing it to generate novel research ideas. It operates like a scientist: self-critique, ranking, feedback loops. And not just any scientist, a successful scientist.
So far, early testing has shown promising results in biomedical research, things like drug repurposing and the discovery of new disease targets. It has also independently arrived at mechanisms of antimicrobial resistance that human researchers had already found but not yet published, validating its potential as a scientific collaborator. Meaning, AI co-scientist came to a similar conclusion as human researchers, but without exposure to their unpublished work. It scientific method-ed its way to the same conclusion.
If AGI looks something like automated expert level and breakthrough research, then boy, AGI feels closer and closer. What a great time to be a human.
(5) Interferometric single-shot parity measurement in InAs–Al hybrid devices
From Microsoft Azure Quantum (explained in the NYT here)
We implement a single-shot interferometric measurement of fermion parity in indium arsenide–aluminium heterostructures with a gate-defined superconducting nanowire…The large capacitance shift and long poisoning time enable a parity measurement with an assignment error probability of 1%.
The four primary states of matter: solid, liquid, gas, and topological qubit.
That’s according to new research out of Microsoft, which claims to have developed this new physical state that can be harnessed to power quantum computing. A topological qubit is a special kind of qubit that stores information in the way certain particles move and interact, making it much more stable and resistant to errors than regular qubits. There’s that error reduction bit again. Any time you read about advancements in quantum computing, it generally is about some novel way of error reduction.
So what exactly did Microsoft do? The team developed a device that can measure the state of exotic quasiparticles called Majorana Zero Modes (MZMs). MZMs act as their own antiparticles and can store quantum information in a way that makes it resistant to errors. Using a superconducting nanowire and quantum capacitance measurements, the team detected these states quickly and with high accuracy, making errors only 1% of the time. The measured states remained stable for over a millisecond, long enough to be useful for computation.
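For intuition on what a 1% “assignment error” means in a single-shot measurement, here’s a toy simulation of our own (not Microsoft’s data or method): each parity state produces a noisy readout signal, and the error rate is how often a single sample lands on the wrong side of the decision threshold. All numbers below are invented for illustration.

```python
# Toy model of single-shot readout: each parity state gives a noisy signal,
# and the assignment error is how often one sample falls on the wrong side
# of the threshold. Numbers are invented for illustration only.
import random

def simulate_assignment_error(mu_even=0.0, mu_odd=3.0, sigma=0.65,
                              shots=100_000, seed=0):
    rng = random.Random(seed)
    threshold = (mu_even + mu_odd) / 2
    errors = 0
    for _ in range(shots):
        parity_is_odd = rng.random() < 0.5
        signal = rng.gauss(mu_odd if parity_is_odd else mu_even, sigma)
        errors += (signal > threshold) != parity_is_odd
    return errors / shots

print(f"assignment error ~ {simulate_assignment_error():.2%}")  # ~1% with these made-up numbers
```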
Any way you cut it, this research is a pretty big advancement on the road towards useful quantum computing. That said, there is still some skepticism around whether the observed signals truly come from MZMs in a topological phase or from more conventional quantum states that mimic their behavior. Without definitive proof that these are genuine MZMs, it’s still a breakthrough, but the path towards scalable, useful quantum computing is much less clear.
Microsoft CEO Satya Nadella went on Dwarkesh to talk about his company’s quantum breakthrough, AI video game creation, and path to smarter and smarter machines.
The whole thing is worth a watch, but the part we agree with most is Satya’s assertion that you shouldn’t believe AGI is here because a lab tells you it is, but when GDP starts growing at 10% a year. Again - just a really great time to be a human.
(6) Big Week for the Robots, Too
Packy here. Not only are we adding a sixth story, but it’s a two-for-one. Massive week. We didn’t even mention Grok 3, which is very cool but not cool enough to make the cut. But we would be remiss if we didn’t mention the robots.
On Thursday, the Polish company Clone released a video of its very human / Westworld-looking Protoclone hanging in midair and twitching to a dark cinematic track by Ludwig Göransson. Creepy, but still pretty amazing, especially given that PitchBook says Clone has only raised $7.1 million, which should at least put to rest, once and for all, the joke that Polish people are stupid, right?
On the other end of the funding spectrum, Figure, which is rumored to be raising $1.5 billion at a $38 billion pre-money valuation (I would not be described as the most valuation sensitive investor in the world and I think Tech is Going to Get Much Bigger than people expect, including humanoid robots eventually, but that valuation beggars even my formidable belief) released a mindblowing video of two of its Figures running on its new Helix Vision-Language-Action model.
Once again, a bit creepy watching them communicate without verbally communicating (robots talking to each other would be skeuomorphic and inefficient, for our comfort only, but its absence is certainly discomforting). BUT watching the one Figure hand the bag of cheese to the other Figure was one of those moments that makes you realize “Oh, shit, the future is going to be even more different than I expected because while it makes total sense, I’d just never pictured multiple robots working with each other.”
All of this — the AI breakthroughs, the robots — can seem a little bit scary from the perspective of a human. But personally, I hate diseases and I don’t particularly like putting away my groceries, and I for one welcome our robot underlings.
BONUS: Packy on Sourcery with Molly O’Shea
Packy went on Molly O’Shea’s podcast Sourcery last week. They covered a bunch — scaling the newsletter, his approach to writing, and the types of companies that most interest him today. Give the pod a listen, or if Packy is not your cup of tea, then at least check out Molly’s work and give her a follow.
Have a great weekend y’all.
Thanks to Cache for sponsoring! We’ll be back in your inbox next week.
Thanks for reading,
Packy + Dan
PM Writes: This makes zero sense to me.
2025-02-20 21:51:39
Welcome to the 307 newly Not Boring people who have joined us since last week! If you haven’t subscribed, join 240,367 smart, curious folks by subscribing here:
Today’s Not Boring is brought to you by… Rox
Rox is on a mission to empower every business owner to secure and grow their revenue.
What does that mean? Since Rox is going to be sticking around with us as a 2025 presenting sponsor, I figured I’d introduce you to the man behind the agent swarm, Rox CEO and co-founder Ishan Mukherjee, so that he can tell you in his own words.
You can check out our full conversation here.
Ishan will tell you that best sellers at the best companies, like OpenAI, Ramp, and MongoDB, trust Rox to power their revenue. By combining warehouse-native architecture with swarms of AI agents that keep you up to speed on your top customers, Rox helps top sellers do much more in less time.
With no cost (and no pain) to start, there’s no reason not to see what happens to your business’ sales when your sellers have their own Rox Agent Swarms supporting them.
Hi friends 👋,
Happy Thursday!
Over the past month, a name I bucket as part of tech's legacy keeps coming up in discussions about its future.
Larry Ellison is the richest tech person we know the least about. You know who Elon impregnated, what the Latin on Zuck’s shirt means in English, and whose bosom graces the bow of Bezos’ boat, but you might not even know what Ellison’s company, Oracle, actually does.
I know someone who knows.1
As soon as I saw Larry Ellison at the center of Stargate, my mind went back to a conversation with Chroma co-founder and my erstwhile Anton Teaches Packy AI co-host Anton Troynikov in his San Francisco office two years ago. He was right in the middle of everything at the dawn of the AI Age, so I asked him what was exciting him the most, of all of the incredibly exciting AI things to be excited by.
“Business Process Automation,” Anton replied. “Have you read Softwar?”
So I DM’ed Anton and asked him to answer a question that’s been on my mind: “How the heck is Larry Ellison still relevant?” Normally, we ask “Why now?” Today, we ask, “How, now?!”
Note: read the footnotes.
Let’s get to it.
A Guest Post by Anton Troynikov
On the first full day of Donald Trump’s second presidency, the returning President assembled three men in the Oval Office for the announcement of a $500 billion plan to ensure America’s AI supremacy: The Stargate Project.
Two of them, Masa and Sama, were exactly the men you’d expect. The third was Larry Ellison.
Trump introduced Ellison as “the CEO of Everything,” which sounds like an instance of the hyperbolic tendency that the two men share, but is directionally true. The 80-year-old2 once again finds himself back at the center of the future. Or, rather, has put himself back in the center of the future.
Ellison’s Oracle (Oracle Corp, NYSE: ORCL) is a 48-year-old, $500 billion company, which appeared seemingly out of nowhere to become one of the four players behind the biggest AI investment of the decade.
In 2024, Oracle’s stock rose over 50%, beating Amazon (+17%), Apple (+18%), Google (+7%), and Microsoft (+9%). Ellison, the company’s founder and still its CTO, is the 4th richest man in the world.
And he just keeps showing up in the middle of everything.
Last summer, Ellison’s son, David, took over Paramount through a merger with his company, Skydance, made possible in large part by a $6 billion contribution by Ellison père.
In November, America’s top-ranked quarterback recruit, Bryce Underwood, flipped his commitment from LSU to Michigan. Ellison, whose 5th wife Jolin is a Michigan grad, picked up the check.
In January, the same week Stargate was announced, when President Trump was figuring out how to keep TikTok running in America under American ownership, rumors swirled that Oracle was his preferred buyer. Which makes sense, because all the kids love the Oracle Suite of Products.
What gives?
The last time most people in tech heard about Oracle was when it sued Google over the Java API in 2010, a fight that eventually led to a US Supreme Court decision holding that Google’s reimplementation of the Java API was fair use.
This kind of jockeying over IP, especially in the mid-2010s when open source, open standards, and open platforms were all the rage, cemented Oracle’s (not entirely undeserved) reputation as a company more interested in litigation and vendor lock-in than innovation.
Indeed, by the 2010s Oracle looked to be losing. After leading the way in enterprise databases and integrated software for years, competing fiercely for market share, Oracle missed three successive waves in computing: the internet, mobile, and finally, the cloud.
Oracle kept earning money from its extensive enterprise and government contracts, which were locked in for years, and hung on to relevance by using its distribution advantage to ship Oracle versions of emerging platforms and software to existing customers. Still, by 2016 things looked pretty grim. Oracle had ceded much of the cloud to AWS, and Microsoft had started to muscle in with its own relationships with Fortune 500s and governments.
While still a titan, Oracle’s lack of innovation seemed to predict a steady slide into irrelevance. Yet over the last few years, the company has made a stunning comeback, somehow indispensable to the biggest players.
The straightforward business reason is that Oracle effectively positioned itself through massive investment in GPU datacenters to become one of the go-to players for training large language models, especially among its government and large enterprise customer base. The early miss on cloud also turned out to be an advantage, as AWS, Microsoft, and Google chose Oracle as a neutral service provider to help run their AI-focused datacenters.3
Given the enormous investments in AI being made across the board, Oracle’s positioning has allowed it to capture a lot of value doing what it does best: large, long-term sales and services contracts with big players. The 90s are back.
But the straightforward business reason is narratively boring. Instead, it’s much more interesting to understand Oracle as an extension of Larry Ellison’s will.
So who is Larry Ellison? Despite increasing public interest (for various reasons) in the emerging class of powerful tech billionaires, Lawrence Joseph Ellison seems to remain relatively quiet. Reportedly a close friend and mentor to Elon Musk, the 80-year-old Ellison has led a colorful life, though has lately stayed out of the media spotlight.
To the extent that Oracle is an extension of Ellison’s will, Oracle’s fortunes reflect his leadership. My favorite history of the company, “Softwar”, gives the impression that the lifecycle of Oracle has gone as follows:
Larry Ellison has an insight which leads to a breakthrough initiative which has the potential to reposition Oracle to the forefront of the industry, completely bypassing the competition.
It works.
Larry Ellison checks out to go sailing or play tennis or something for like a year.
Oracle gets into trouble. New entrants and existing competitors are eating away at its market share, and Oracle is losing head-to-head.4
GOTO 1.
For example, in the early 90s, Oracle faced stiff competition from several fast-growing software companies. At the time, enterprise software was heavily customized by the end-user and by third-party integrators, who would munge together several ‘best of breed’ packages to create frankensteinian hodge-podges of software, one solution per organizational function or process. It was a nightmare for everyone but the vendors and integrators, who made money either way.
Rather than competing on features or cost, attempting to create more best-of-breed products, or paying ever-fatter sales commissions that ate away at margins, Ellison chose to tell his customers that they were doing it wrong.5 Software shouldn’t be built to fit business processes, he said; business processes must adapt to software. Vertically integrated platforms were the future.6
Getting everything from one vendor meant you didn’t need customization or integration, things would just work. Yes, it meant changing the way you were used to doing things, but that’s what it took to get the most out of software. It just so happened that the vendor investing the most into this vertical strategy was Oracle. The move to vertically integrated platforms locked in Oracle’s distributional advantage for another decade.7 Between 1990 and 2001, Oracle’s revenue went from $196M to $10 billion.
Tacking before competitors may have come naturally to Ellison, a lifelong sailing enthusiast. In the 2010 America’s Cup, Oracle-sponsored USA-17 defeated the Swiss Alinghi 5. This gave the winning team the right to decide the class of boat for the 2013 Cup. Making an already expensive and dangerous sport far more expensive, and far more dangerous, Team Oracle chose wingsail catamarans, with hydrofoils.
These boats could literally sail faster than the wind. The change was far from incremental; sailing teams were pushed to their absolute limit in terms of cost and training. A crew member from Artemis Racing died during a training run. Team Oracle USA went on to win the 2013 America’s Cup. (Fun fact: I watched them do it from the Presidio, and it was incredible to behold).8
Even Softwar says more about Ellison than what’s in the text alone. It’s the only biography or business history I’ve ever read where the subject inserted his own footnotes,9 many of which contradict the author. For example:
Author:
Ellison ‘showed up in shorts and a pink tank top, holding a glass of carrot juice,' recalls former Oracle vice president Mark Barrenechea of his first meeting with the man.
Ellison Footnote:
LE writes: I met Mark out on my deck just as I arrived back home from the gym. I was wearing a black cotton tank top with a Sayonara logo on it. I have never owned, nor would I ever wear, a pink tank top. This is very important.
One can only imagine the kind of ruthless negotiation over completely non-standard terms in publishing that Ellison must have conducted.
Larry Ellison is out to win, in a way that goes beyond playing the game on the field. When Ellison is threatened with falling behind, he changes the rules to turn weakness into strength. When he’s ahead, he changes the rules to put himself even further ahead.10 11
Oracle seemed to ‘come out of nowhere’ because Ellison saw how to pivot a miss in cloud into a winning position in the next computing wave, AI. Now Oracle finds itself exactly where it's most comfortable: selling expensive, long-term contracts to big enterprises and governments. The 90s are indeed back, and Larry Ellison wrote the playbook.
Ellison took a losing position, pivoted, caught up, and made it look like that’s what he meant to do all along. So when Trump introduced Ellison as 'the CEO of Everything' in the Oval Office, it wasn't hyperbole. It was recognition that once again, Larry Ellison had changed the rules of the game until they benefited Larry Ellison.
The only surprise is that we're still surprised.
Thanks to Anton for teaching me once again. Go follow him on X. And thanks to Rox for supporting Not Boring. Go get your sellers an Agent Swarm.
That’s all for today. We’ll be back in your inbox tomorrow with a jam-packed Weekly Dose.
Thanks for reading,
Packy
AT writes: Oracle is best known for its Database software, which runs a lot of Fortune 500, large enterprise, and government workloads. In 1977 Oracle was founded to commercialize the relational database (at the time, databases were mostly custom-built for a given application, and hierarchical / ontological). Ironically, the relational database was invented by researchers at IBM, making this, along with the PC operating system it handed to Microsoft, one of two times IBM failed to capture the enormous value created by a technology it pioneered. Oracle also makes enterprise applications (CRM, ERP, HCM and other very ‘enterprise’ acronyms) which work with its database, cloud versions of those applications, cloud compute infra which runs those applications, servers, storage systems, and other hardware to run the software and infra, and consulting to charge money for making it all work together for your business.
PM writes: Ellison was born on the exact same day – August 17, 1944 – as the Liberation of Saint-Malo depicted in the book and movie All the Light We Cannot See.
PM writes: This is true. Because Oracle wasn’t seen as a threat to the core cloud businesses of the Big 3, they were willing to partner with OCI (Oracle Cloud Infrastructure) to give customers access to more GPUs. Enterprises could run AI workloads on Oracle’s high-performance GPU infrastructure while still keeping their core cloud operations within AWS, Azure, or Google. As one example, Microsoft and Oracle announced an expanded cloud partnership in 2023 that deeply integrated OCI into Azure, allowing joint customers to access Oracle’s AI infrastructure while using Azure as their primary cloud.
PM writes: It’s wild how often Oracle screwed up and recovered. It feels like half the footnotes in Softwar are “yeah that was one of the most costly mistakes I [Ellison] ever made.”
PM writes: This was after Ellison let his lieutenants convince him that Oracle, too, should be in the best-of-breed integration business (after its foray into building all of its own applications failed the first time). It was a disaster. Ellison about-faced, and in an almost Chamath-like way completely memory holed his fuck up and told everyone that the way he had just been doing things, which is the way they were still doing things BECAUSE ELLISON TOLD THEM TO, was totally wrong.
PM writes: Love me some vertical integration… and synchronicities. The day Anton sent me a draft of this piece, my Readwise surfaced a highlight from Softwar, which I read five years ago, in which Ellison says: “If Detroit ran like Silicon Valley, nobody would sell cars—just parts. Customers would have to figure out which were the ‘best parts’ — a Honda engine, a Ford transmission, a BMW chassis, GM electrical system — and buy them and try to assemble them into a working car. Good luck. I know it sounds crazy, but that’s how companies put together business systems today.”
PM writes: Ellison is a lot of things, but he’s not dumb. Softwar, again: “If that was a stroke of luck for Oracle, what wasn’t was Ellison’s decision, to the horror of many colleagues and customers, to abandon all further development of client/server-based applications and concentrate the firm’s entire engineering effort on building for the new computing architecture of the Internet. While rivals in the apps business, such as the German powerhouse SAP and PeopleSoft, talked up the Internet and put a web front-end on some of their products, Ellison went much further. Oracle was the first established software firm to risk everything on the new paradigm. His rationale was simple: Oracle could never hope to be number one in enterprise applications as long as client/server prevailed—it was fated always to play second fiddle to SAP, whose strength in the enterprise apps market almost matched Oracle’s dominance in databases. By getting to the Internet first, assuming that the software could be made to work, Ellison would force Oracle’s competitors to become followers, gaining vital time-to-market over them.”
PM writes: We really should not be surprised at Ellison’s current comeback. His Oracle Team USA came back from an 8-1 deficit in the best of 17 series against Emirates Team New Zealand. Best of 17 is the same thing as “first to 9.” ETNZ only had to win one more race. OTUSA had to win eight straight. OTUSA won eight straight. It was One of the Greatest Comebacks in Sports History. What the team figured out mid-way was how to foil upwind - essentially to be able to lift out of the water even when sailing against the wind, which meant reduced drag and much faster speeds. ETNZ couldn’t adapt because they had designed their boat and strategy around downwind foiling, which was their innovation and what allowed them to jump out to that 8-1 lead. Fucking Larry Ellison, once again, figured out how to take advantage of a competitor’s early lead and use it against them. Counter-positioning on the water or something. Unbelievable. He’s better when he’s behind.
PM writes: you see what we did there?
PM writes: No joke, just last week my Perplexity app sent me a notification: “Ellison Calls for Governments to Unify Data.” At the World Governments Summit in Dubai last week, he called for governments to consolidate all national data into unified platforms for artificial intelligence consumption. Coincidentally, platforms for unified data are exactly the kind of thing Oracle might be able to provide! HE’S TRYING TO CHANGE THE RULES TO BENEFIT ORACLE AGAIN. HE CAN’T KEEP GETTING AWAY WITH THIS!
LE writes (I imagine): Yes I can. Watch me.
2025-02-14 21:28:18
Hi friends 👋,
Happy Friday and welcome back to our 131st Weekly Dose of Optimism. Usually my X feed is filled with all of the cool things we cover in the Weekly Dose, but this week the algo was feeding me non-stop Eagles content. I am OK with that. Go Birds! But don’t worry, it wasn’t too hard to find five stories (outside of the Eagles) that had me optimistic about our future.
Let’s get to it.
Today’s Weekly Dose is brought to you by… Ramp, Rox, and Vanta
Packy here. You read a lot (often here) about AI doing the things humans don’t want to do so that we can spend our time doing the things we do want to do. But what does that mean in practice?
Vanta automates compliance & streamlines security reviews so you can sell to enterprise.
Rox’s agent swarms support the best sellers so they can focus on the human side of selling.
Ramp saves your finance team time so they can get strategic with your company’s money.
Importantly, all three are giving me the time and space to make Not Boring’s 2025 essays deeper and better than ever. Say thanks, sign up, and go give yourself time back: win/win/win.
(1) Observation of an ultra-high-energy cosmic neutrino with KM3NeT
The KM3NeT Collaboration in Nature
Here we report an exceptionally high-energy event observed by KM3NeT, the deep-sea neutrino telescope in the Mediterranean Sea4, which we associate with a cosmic neutrino detection.
Bang!
The KM3NeT deep-sea telescope has detected the highest-energy neutrino ever recorded, estimated at 220 million billion electronvolts—far beyond anything seen before. A neutrino is a tiny, nearly invisible particle that passes through almost everything without interacting. The discovery was made when KM3NeT, still under construction, detected a muon—a secondary particle created by a neutrino hitting nearby rock or seawater—leaving a bright trail in the detector. This confirms that ultra-high-energy neutrinos exist and opens a new window into the most extreme environments in the universe.
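For a sense of scale, converting that energy into everyday units is simple arithmetic (our own back-of-the-envelope, not a figure from the paper):

```python
# 220 "million billion" electronvolts = 220 PeV, converted to joules.
EV_IN_JOULES = 1.602176634e-19
energy_ev = 220e15            # 220 petaelectronvolts
energy_joules = energy_ev * EV_IN_JOULES
print(f"{energy_joules:.3f} J")  # ~0.035 J packed into a single subatomic particle
```

That’s on the order of a ping-pong ball dropped from about a meter, carried by one particle far smaller than an atom.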
Scientists believe this neutrino could have been generated by a powerful cosmic accelerator, such as a supermassive black hole or a gamma-ray burst, or it might be a rare “cosmogenic” neutrino, produced when ultra-high-energy cosmic rays interact with the cosmic microwave background left over from the Big Bang. Or more simply put, scientists don’t yet know how this neutrino was generated. The finding challenges current models and suggests more undiscovered high-energy particles could be out there. As KM3NeT expands, researchers hope to pinpoint the neutrino’s origin and uncover new cosmic secrets.
(2) Early detection of pancreatic cancer by a high-throughput protease-activated nanosensor assay
Mira et al in Science
Montoya Mira et al. designed a rapid, noninvasive assay based on a fluorescently labeled protease-sensitive peptide coupled to a magnetic nanosensor to detect protease activity from a small sample of blood. The assay, termed “PAC-MANN-1,” was optimized to detect all stages of PDAC with better performance than the clinical biomarker CA 19-9.
Call this test PAC-MANN Jones, because it’s providing shutdown coverage of pancreatic cancer. (This was not a crossover I ever anticipated).
Researchers developed a blood test, called PAC-MANN, that detects pancreatic cancer (PDAC) by measuring specific enzyme activity in the blood. The test, using nanosensors, identifies a cancer-associated protease with high accuracy, distinguishing PDAC from both healthy individuals and those with other pancreatic diseases.
In a study of 356 patients, the test showed 98% specificity and 73% sensitivity, with improved performance when combined with the existing CA 19-9 biomarker. As we’ve talked about before, early detection is the closest thing we have to a cure for cancer. And PAC-MANN’s ability to provide early detection of PDAC could mean significantly higher survival rates.
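To see what those numbers would mean in practice, here’s the standard screening arithmetic. The 1-in-1,000 prevalence below is an assumption we picked for illustration; it is not a number from the study.

```python
# What 73% sensitivity / 98% specificity imply for a screened population.
# Prevalence is an assumed illustrative value, not from the paper.
sensitivity = 0.73
specificity = 0.98
prevalence = 0.001  # assumption: 1 in 1,000 people screened actually has PDAC

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)
print(f"Positive predictive value ~ {ppv:.1%}")  # ~3.5% at this assumed prevalence
```

Modest positive predictive value at general-population prevalence is the usual caveat for any cancer screen; in higher-risk groups, where prevalence is higher, the same sensitivity and specificity translate into much better odds that a positive result is real.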
(3) Reindustrialization from First Principles: Cost Physics
Denver Rayburn
Today, we stand at another such moment. Intelligence, automation, and cheap abundant energy are converging to rewrite the economics of production. Beyond industrial or trade policy, something more profound is changing: the underlying cost physics of manufacturing (coined with inspiration from Coogan’s Law.) And unlike previous technological revolutions, this transformation may be permanent – creating a lasting shift that uniquely favors American manufacturing.
America’s role in the world has changed many times in its history.
Hell, America’s role in the world has changed many times over the last 6 weeks.
More and more folks believe that America has to, and can, once again become the manufacturing powerhouse it once was. Denver Rayburn, in this excellent primer, lays out his thinking on how America may be favorably positioned to take on that role once again.
Rayburn argues that the convergence of AI, automation, and cheap energy is permanently reshaping manufacturing economics and eliminating the labor cost arbitrage that drove manufacturing out of the U.S. in the first place. In a world where labor cost is no longer the primary input cost, a country that has abundant and cheap energy, the capital markets to finance an automation rollout, and is a leader in artificial intelligence finds itself in a very strong position. Thankfully, the U.S. checks all of those boxes.
It’s kind of scary to think how much of U.S. foreign and domestic policy really hinges on a) whether or not this transition is actually underway and b) how well positioned the U.S. is in this transition. Let’s hope Rayburn is right and that both the government and private sector take advantage of this unique opportunity in history to rebuild America as a manufacturing hub.
(4) Anduril takes over Microsoft's $22 billion US Army headset program
By Reuters
Palmer Luckey-founded defense tech startup Anduril will take over the development and production of Microsoft's mixed-reality headset program for the U.S. Army, the companies said on Tuesday. Anduril will assume control over production, as well as future hardware and software development and delivery timelines for the Integrated Visual Augmentation System (IVAS) project, the companies said.
“Whatever you are imagining, however crazy you imagine I am, multiply it by ten and then do it again. I am back, and I am only getting started.” - Palmer Luckey on X, February 11th 2025.
Anduril is taking over development and production of Microsoft’s $22B mixed-reality headset program for the U.S. Army’s Integrated Visual Augmentation System (IVAS). IVAS is a military AR/VR headset designed to enhance soldiers' situational awareness, decision-making, and battlefield effectiveness. An AR/VR headset for the military…Palmer Luckey was born to win this contract.
Some specifics on the deal: Anduril is taking over production, hardware and software development, and delivery timelines for the program from Microsoft. Microsoft Azure will remain the preferred cloud provider for IVAS and Anduril’s AI-driven defense technologies. The total IVAS contract is worth $22B over a ten year period and in 2025 the Army requested $255 million of that budget to procure 3,162 IVAS systems.
This is another massive win for Anduril, which has been on an absolute tear as of late. Mega factories, mega rounds, mega contracts, etc, etc. We believe Anduril being on an absolute tear is a good thing not only for Anduril, the military, and the American people, but for the entire world. Peace through strength. And increasingly, no company is contributing more to the velocity of our nation’s strength than Anduril.
(5) Quantum-computing technology that makes qubits from atoms wins mega investment
Elizabeth Gibney for Nature
QuEra, an academic spin-out company that uses atoms and lasers to encode quantum bits or ‘qubits’, announced on 11 February that it has raised US$230 million in funding — one of the largest single investments in any quantum firm so far.
QuEra, a Harvard and MIT spin-out, has raised $230 million—one of the largest single investments in a quantum company—signaling neutral atom quantum computing as a major player in the field. Neutral atom quantum computing traps rubidium atoms with lasers and encodes quantum information in their electron energy levels, offering scalable and stable qubits without the need for extreme cooling. Precisely controlling individual atoms had historically been the approach’s weakness, but a 2019 breakthrough in laser-based techniques significantly improved accuracy.
Practical quantum computing requires tens of thousands of qubits to reach utility, and we’re still a ways off from that. But with major companies like Google and IBM investing billions into R&D and a wave of newly funded startups introducing novel architectures, the race for quantum is heating up. As neutral atom systems close the performance gap with more established technologies, competition in the field is shifting from feasibility to scalability, and QuEra’s funding signals growing confidence in this approach as a contender for the future of quantum computing.
Have a great weekend y’all.
Thanks to Ramp, Rox, and Vanta for sponsoring! We’ll be back in your inbox next week.
Thanks for reading,
Packy + Dan
2025-02-12 21:45:37
Welcome to the 811 newly Not Boring people who have joined us since our last essay! If you haven’t subscribed, join 240,060 smart, curious folks by subscribing here:
Today’s Weekly Dose is brought to you by… Vanta
As a startup founder, finding product-market fit is your top priority.
But landing bigger customers requires SOC 2 or ISO 27001 compliance—a time-consuming process that pulls you away from building and shipping.
That’s where Vanta comes in.
By automating up to 90% of the work needed for SOC 2, ISO 27001, and more, Vanta gets you compliant fast. Vanta opens the door to growth.
It works. Over 9,000 companies like Atlassian, Factory, and Chili Piper streamline compliance with Vanta’s automation and trusted network of security experts. Whether you’re closing your first deal or gearing up for growth, Vanta makes compliance easy.
It’s time to grow your ARR, and Vanta can help. Get $1,000 off Vanta here:
Hi friends 👋,
Happy Wednesday!
I’ve been spending a lot of time thinking about the Big Question I’m trying to answer in my work and life. This essay, like many, isn’t meant to provide an answer, but to share how I’m thinking through it in real-time.
As more knowledge workers spend their days prompting LLMs, it’s become popular to argue that asking good questions is becoming more valuable. What is less obvious but I think more interesting is that it will expose how little we actually care about answers, and in turn, what makes questions so valuable in the first place.
Let’s get to it.
Writers all know a secret that I want to share with you, because whether you consider yourself a writer or not, you’re writing a story, too:
Questions reshape your reality.
We all think what we want is answers. We don’t, actually. Answers are dead things. Questions are animating. What we want is great questions.
By the time a question gets its answer, all of the juice has been squeezed. The answer is the pulp and pith. Answers are static things. Questions are kinetic. I like watching the NBA Draft more than I like watching any single NBA game. The Draft creates questions. Games are answers.
Actually, fuck answers, the more I think about it. The best questions don’t have answers. The point of the question isn’t to find an answer. The best questions are organizing principles, magnets, ways of seeing.
Writers know this. Staring at a blank page with no question to answer is felt proof of Sartre’s observation: “Humans are condemned to be free.” (Maybe it had to be a writer who made that observation.) Fire up a blank page with a question in mind, however, and the world becomes grownup Blue’s Clues. You’re reading something or talking to someone and it’s almost as if you see a blue paw print and hear little kids shouting “A clue! A clue!” whenever something you might be able to use appears.
What’s true in writing, I’m arguing, is true (maybe increasingly so) with life.
The most important question you can ask yourself, if you don’t know your question already, is “What question am I trying to answer?” Like with your life. This one actually should have, if not an answer, at least a working hypothesis. But since the answer is a question, we’ll let it slide.
Writers get to do this over and over again, each time we work on a new piece, in silico. If I am writing an essay on networking, my brain tingles every time I come across some bit of information that might be useful for explaining networking. It is unbelievable to experience the first few times; tidbits I would have skimmed right over, suppressing a yawn, in normal times become engrossing when they help answer my question.
For a writer, though, each essay is a specific question in service, hopefully, of a larger question, and it’s that larger question that is relevant to even those of you who don’t consider yourself writers, because it’s about how you write the story of your life. This is the question that shapes you in vivo. The one that animates you.
This kind of question is like a mission, but with infinite paths – equally driving but endlessly explorable. A mission is almost too much like an answer, after which you autopilot in pursuit. A question branches. A mission is a wonderful thing, but more people have questions (and more of them).
Over the past few months, I’ve been obsessed with becoming a better writer. That is not my main question - How can I become a better writer? - but exists in service of it. Sub-questions are important, too, like side quests, and are made pointier by the existence of your Big Question. So I’ve tried to read a lot of great essays and profiles – David Foster Wallace’s David Lynch Keeps His Head and Roger Federer as a Religious Experience, Gay Talese’s Frank Sinatra Has a Cold, and Tad Friend’s The Mind of Marc Andreessen and Sam Altman’s Manifest Destiny – and I’ve read some books on writing by the greats in the particular literary nonfiction form in which I want to write: John McPhee’s Draft No. 4 and Robert Caro’s Working among them. In the latter, Caro, whom you might recognize from The Power Broker and The Years of Lyndon Johnson, writes:
One of the reasons I believed I had become an author in the first place was to find out how things really worked and to explain those workings, and, as my focus had narrowed to politics, that reason had become to explain how political power really worked. And during the few years I had been a reporter I had convinced myself, in part because of the easy gratifications that go with the journalist’s life–the bylines; the gratitude or the wary respect or fear that the subjects of your articles had for you; the awareness of friends or neighbors of what you were doing; the feeling that you were at the center of the action–that I was succeeding in doing what I had set out to do.
But the more I thought about Robert Moses’ career, the more I understood that I had been deluding myself.
That is a question-haver right there. Extrinsic praise crumbles in the face of the question.
Caro’s question is: How does political power really work in America?
Once he asks it, it takes the wheel. Caro is willing to go broke to answer it.
For example: one day in the 1960s, his wife Ina sold their home so that he wouldn’t have to, because it was obvious to both of them that they needed to. And despite being warned by those around Robert Moses, the subject of the only book that Caro wrote while broke, unknown, and powerless, that no one over whom Moses held any power (which was a lot of people) would speak with him about the book, and certainly Moses himself would not, Caro just spoke with whoever he could, even four layers removed from the man, until, eventually, Moses caved, impressed by the younger Robert’s doggedness: “[Sidney M. Shapiro, Moses’ aide] had said that ‘RM,’ learning of my stubbornness despite his strictures, had concluded that at last someone had come along who was going to write the book whether he cooperated or not.” The book became The Power Broker.
Caro asks plenty of specific questions, ones with specific answers, in The Power Broker. Exactly how much did Moses build? How did he physically shape New York City? How did Moses use Authorities as his source of power? What was his impact on people who lived in his way?
Each of these, though, was in service of the larger question of how political power really works. It’s this question that drove him to ask all of the others, even at great personal cost.
This question literally shaped his life, how he spent it during the months that turned into years writing The Power Broker, and how he would spend it afterwards, when he’d won a Pulitzer and could write about whatever he wanted. Caro had been planning on writing a biography of New York City Mayor Fiorello LaGuardia – Caro lived in New York, and after writing about Moses, he knew New York – but he realized that if he really wanted to answer the question How does political power really work in America? he would need to study someone who shaped America on the national level. So he chose Lyndon B. Johnson, and he has been writing about power through the LBJ lens for over a half century since.
Do you see what I mean? The question shaped everything about him – where he and his wife Ina lived, what they read, who they spoke with, and the words he wrote.
Nothing in Caro’s life would have been the same if he’d asked a different question.
There is nothing unique, from that perspective at least, about Robert Caro. What is perhaps unique is that he found a question so powerful to him that he allowed it to fully shape his life. Had he chosen another question he cared about less, he presumably would not have let it dictate where he lived or how he’s spent every day for fifty-one years (and counting). He also wouldn’t be Robert Caro; I certainly wouldn’t be writing about him right now.
But I think that for most of us there exists a question that could impact each of us similarly, and I know that most of us have not spent nearly enough time asking what that question is.
Let’s test that: what’s your question?
—
Often, almost as a way of catching my breath, I’ll feed what I’ve written to Claude and ask for feedback. I did just that. After some specific suggestions, Claude wrote, “The piece is developing a compelling argument about how the right question can generate almost supernatural persistence.”
That is funny. I will explain why.
—
It’s Saturday morning (as I write this). I’ve been noodling this piece for a few days, and I put fingers to keys on it for the first time yesterday (Friday) afternoon. I went fingers up at around 5pm in order to get back and meet Puja and the kids for dinner near our house. It was a sharply cold but clear and beautiful afternoon, so I got off of the N at Atlantic and instead of transferring to the R for the last leg, I walked the 13 minutes to the restaurant. Since listening to The Telepathy Tapes, I have been meditating every day, as I shared in The Return of Magic. Since getting into meditation, I’ve started appreciating light, planes, and clouds more than I have in the past. So on my walk yesterday, looking up at the light, planes, and clouds, I asked the universe for a sign. This wasn’t nearly as weird or dramatic as it sounds, it was almost a throwaway thought as I was walking sans headphones with mental time to kill, and the dream scenario would have been a glowing orb or something but I didn’t actually expect that (or anything) to happen. But then I looked up and to my right and I saw this:
That looks like a question mark, right? While I’m in the middle of writing a piece about questions? Weird, at least. You gotta give me that.
To be fair, that is most likely a coincidence, potentially a hallucination, and only possibly a synchronicity – a meaningful coincidence that suggests an underlying pattern or connection beyond normal causality. Carl Jung coined the term.
I wrote about synchronicities in The Return of Magic, too. About how one Friday night, I was watching Jesse Michels’ American Alchemy interview with Riz Virk in which they talk about simulation theory, Philip K. Dick, simulacrum, and glitches in the Matrix, and then I asked the “simulators” to help me pick a book. My eyes went to a book I forgot I owned: The Simulacra by Philip K. Dick. Weird, at least. You gotta give me that.
So last night, another Friday night, this question of how to write about questions – about how questions are like more flexible missions – bumping around in my brain, I decided to shut my brain off and watch the new American Alchemy, this one with Communion author Whitley Strieber. Then two hours in, this exchange happens:
Whitley: Stanton was the most driven human being I've ever known. He would go anywhere, he would do anything, he would not stop... And he was very coy about whether or not he had been activated - you see a UFO up close or you have a close encounter experience, you're activated. Most people that happens to… a lot of people don't remember, a lot of people blow it off, but there are certain people who just will not stop. They will not stop.
Jesse: So the UFO almost installs in them some sort of drive or mission to disclosure or an insatiable curiosity.
Whitley: It installs in them a burning question, and they just cannot let it go.
Come on. That is the thesis of this essay: burning questions as motivating force and organizing principle. Remove the UFOs and Whitley is describing Robert Caro.
Does that seem like a synchronicity to you? At least a coincidence, right?
I mean, even if you’re skeptical, even if you think I’m fooling myself because I want to believe and that I just noticed “burning question” because I’m in the middle of asking questions about questions… AHA! Well that’s exactly the point I’m trying to make.
The questions we ask reshape our reality.
If I were not asking myself a question about questions, there is no way that, 2:04:20 into an interview at 9:45pm on a Friday - past my bedtime - I would have even registered that sentence. But since I was asking myself about questions, that sentence jumped right into my brain and, from there, sent tingles down my spine. A clue! A clue!
This is what questions do: they pull the right stuff into your brain. The late, great strategy professor Clayton Christensen apparently told Jason Fried (lightly paraphrased by Fried, emphasis mine):
Questions are places in your mind where answers fit. If you haven’t asked the question, the answer has nowhere to go. It hits your mind and bounces right off. You have to ask the question — you have to want to know — in order to open up the space for the answer to fit.1
Your mind can’t fit everything. Questions filter out the things you don’t care to fit and find a place for the things you do. So having a question becomes more important in proportion to the amount of information available to you, which these days is way too much. A question keeps you from getting sucked into the algorithmic noise, and surfaces some signal at least when you inevitably occasionally do. Algorithms are braindamaging btw because they shoot at you a stream of answers you never asked for, answers with no right place to fit, like a wetware DDoS attack.
But questions do more than just filter; filtering is too passive a thing. Questions attract, drive, and distinguish.
We talked a little bit about attracting. This is what happens to me when I’m writing a piece. Suddenly, I discover useful tidbits without even looking for them.
We talked about how they drive, too. Asking a question doesn’t just make information appear. It pushes you to seek out any information that might help you get closer (asymptotically, never really getting there, because that’s when the fun ends) to an answer. This is the story of Robert Caro. This is why he waded through piss and feces in East Tremont and why he and Ina moved to the Hill Country. Having a question that you truly want to answer gives you what Graham Duncan calls a “hungry mind,” which he says, correctly, is very hard to fake. Questions are what send you in search of old, out-of-print niche publications instead of scrolling the same garbage everyone else is. They’re what push you to uncover informational alpha – to call the person who knew the person who knew the person, in case they might have a nugget. We say curiosity is one of the most important traits in a person; questions are curiosity made tangible.
We haven’t really talked about distinguishing, but having a question does that too.
I told you that in trying to become a better writer, I’ve been studying Caro, McPhee, and DFW. All three are masters of literary nonfiction, even if DFW would have identified more with his fiction. The three are entirely, completely different writers in a way that I never bothered to appreciate until I asked myself the question “How can I become a better writer?” in service of a bigger question I can feel but am still trying to pin down concretely. They are different writers because they ask different questions.
Caro asks: How does political power really work in America? and attempts to answer by studying just two Americans who wielded power most impactfully. Really, since 1965, sixty years ago, he’s written two books (one over four, soon to be five, volumes).
McPhee asks something entirely different! On the surface, it might look like he’s asking, “How do humans interact with and shape the natural world, and how does it shape us?" or something. But I think he’s really asking "How can words capture the full depth, structure, and texture of reality?"
Asking an entirely different question has created an entirely different life! Where Caro has produced two stories, McPhee has published 34 books and written more than 100 articles for The New Yorker. That is the output of someone who is working on the craft itself. What’s more, McPhee has been teaching writing at Princeton University since 1975.
Caro uses writing in service of his subject. McPhee uses subjects in service of his writing.
DFW asked a much different and more personally urgent question: “How can we live a meaningful, non-bullshitty life in a world increasingly full of irony, entertainment, and distraction?”
His novels, like Infinite Jest and The Pale King, are long (long) explorations of the question; his profiles, like David Lynch Keeps His Head and Roger Federer as a Religious Experience, examine rare individuals, whose strangeness and mastery, respectively, suggest an answer.
Incidentally, in both of those essays, two of my favorites, DFW explicitly names the sub-question he plans to pursue in order to handle the particular subject in a way that others – others with different Big Questions than DFW, or with no Big Questions at all – would not.
In D.L.K.H.H., he writes:
So the obvious "Hollywood insider"-type question w/r/t Lost Highway is whether the movie will rehabilitate Lynch's reputation. For me, though, a more interesting question ended up being whether David Lynch really gives a shit about whether his reputation is rehabilitated or not. The impression I get from rewatching his movies and from hanging around his latest production is that he really doesn't. This attitude – like Lynch himself, like his work – seems to me to be both grandly admirable and sort of nuts.
In R.F.A.A.R.E., he actually names the idea we talked about earlier about answers not being that interesting…
Journalistically speaking, there is no hot news to offer you about Roger Federer. He is, at 25, the best tennis player currently alive. Maybe the best ever. Bios and profiles abound. “60 Minutes” did a feature on him just last year. Anything you want to know about Mr. Roger N.M.I. Federer — his background, his home town of Basel, Switzerland, his parents’ sane and unexploitative support of his talent, his junior tennis career, his early problems with fragility and temper, his beloved junior coach, how that coach’s accidental death in 2002 both shattered and annealed Federer and helped make him what he now is, Federer’s 39 career singles titles, his eight Grand Slams, his unusually steady and mature commitment to the girlfriend who travels with him (which on the men’s tour is rare) and handles his affairs (which on the men’s tour is unheard of), his old-school stoicism and mental toughness and good sportsmanship and evident overall decency and thoughtfulness and charitable largess — it’s all just a Google search away. Knock yourself out.
…before writing that what he wanted to write about was the transcendent experience of watching Federer play:
This present article is more about a spectator’s experience of Federer, and its context. The specific thesis here is that if you’ve never seen the young man play live, and then do, in person, on the sacred grass of Wimbledon, through the literally withering heat and then wind and rain of the ’06 fortnight, then you are apt to have what one of the tournament’s press bus drivers describes as a “bloody near-religious experience.” It may be tempting, at first, to hear a phrase like this as just one more of the overheated tropes that people resort to to describe the feeling of Federer Moments. But the driver’s phrase turns out to be true — literally, for an instant ecstatically — though it takes some time and serious watching to see this truth emerge.
Understanding DFW’s Big Question, do you see why he asks the specific sub-questions he does in each of these profiles?
I guess the point here is: do you see what I mean w/r/t answers just not being that interesting, and the questions you ask shaping your reality? No one else would write Federer like DFW did. McPhee, writing about a tennis match in Levels of the Game, one of his most famous books, writes an entirely different piece! It’s analytical and structured where DFW’s is subjective and ecstatic. They are both excellent. They both answer entirely different questions.
In a New Yorker profile on Wallace published the year after his death by suicide, D.T. Max describes the particular tragedy of losing a genius mid-question:
The sadness over Wallace’s death was also connected to a feeling that, for all his outpouring of words, he died with his work incomplete. Wallace, at least, never felt that he had hit his target. His goal had been to show readers how to live a fulfilled, meaningful life. “Fiction’s about what it is to be a fucking human being,” he once said.
I don’t know. One of the things it is to be a human being is to pursue a question so Big that you could pursue it for a Bryan Johnson Lifetime and never get your answer.
How many total lives have been spent questioning the meaning of life? Are we closer to an answer? Is there an answer?
Most of us spend most of our working lives trying to find specific answers to other people’s questions. No two ways about it: that skill is becoming commoditized.
There has never been a better time, however, to be someone with a burning question to answer.
Pinning down your question isn’t easy – I’m still wrestling with mine – but it’s worth every second you can give to it. It will shape everything else you do.
Take some time this weekend – better yet, play hooky, take a snow day or a sick day – and just think about the question you’d be happy spending a decade or six trying to answer. It’ll light up your world more than any answer could.
Long questions, short answers.
Thanks to Claude for editing and to Vanta for supporting Not Boring. Go support them so I can keep asking questions.
That’s all for today. We’ll be back in your inbox on Friday with the Weekly Dose.
Thanks for reading,
Packy
I was going to take the piece in a different direction from here because I thought the string of connections from Christensen to Rumelt to Graham Duncan was clever, but now that I’m trying to be a better writer, I decided to cut it. It didn’t earn its word count. Having read too much DFW, however, I decided to put it here IYI.
Christensen was in the right place to make that observation. As a professor, he must have seen thousands of kids fumble the opportunity to learn from THE CLAYTON CHRISTENSEN because they were there to get a degree, not answer a question. One of my great regrets is not having had the questions I have now when I went to school, a magical place designed to let you explore all of them. One of my great joys is that my son Dev wants to build worlds and approaches almost everything through the lens of “Will this help me build a world?” I can see the question opening up places in his mind. My job now is to protect that question-led curiosity at all costs. But Christensen doubtlessly saw so many kids whose driving question was “Will this be on the test?” that the question-as-place insight became obvious.
As a strategy professor, he would have experienced the opposite: the value of asking the right question. A friend of Richard Rumelt, my other favorite strategy professor, once observed to him that “it looks to me as if there is really only one question you’re asking in each case: What’s going on here?” Rumelt writes that the observation was “instantly and obviously correct. A great deal of strategy work is trying to figure out what is going on. Not just deciding what to do, but the more fundamental problem of comprehending the situation.” Not just getting the answer, but the more fundamental problem of asking the right question.
I came across that Rumelt anecdote in Graham Duncan’s essay, What’s going on here, with this human?, which I re-read after listening to Graham’s fantastic conversation with Patrick O’Shaughnessy.
Patrick named the episode “The Talent Whisperer,” and what becomes clear throughout the conversation is that Graham has crafted a successful career and life by asking the question: “How do you see someone, including yourself, clearly?”
Graham has made an art of understanding people – Patrick introduced him by saying that his “reputation was as the most discerning people picker on Wall Street” – and in his essay, he writes that one of the best ways to do that is to see what kinds of questions they ask:
It also helps to have the candidate you’re trying to see clearly ask you questions. Questions have very high signal value compared to most anything else you can get from a candidate… I write down each question and sometimes respond with “I’ll answer, but first I’m curious, why did you ask that?” I’m looking for the felt sense of a “hungry mind” based on the way their questions flow. That’s very hard to fake.
I am picturing myself, many moons ago, in the last few minutes of many Wall Street interviews, the portion when they ask, “So what questions do you have for me?”, asking dumb questions like “What’s your favorite part about the culture at Deutsche Bank?” or something equally stupid because I didn’t actually care enough. I left finance a few years later when I realized that I had friends who loved finance, and that they would always beat me.
Another way of putting that, in the context of questions, is that they actually wanted to answer the question, “How do you pull money from the markets when so many other smart people are competing to do the same thing?” So when they got put in a room with someone who did that professionally, they asked hungrier specific questions in order to better answer their big question. As Graham wrote, it is very hard to fake what your mind is hungry for.
This is why it is so important to figure out the big question you’re trying to answer. Putting yourself in situations that feed your mind what it’s hungry for lets you outwork everyone else without even realizing you’re working. You’re just trying to answer your question!
Each person’s mind is hungry for different things (different nutritious things, at least), and each person’s big question will be different. There are things that I find absolutely fascinating that possibly only 37 people in the world would care about. Like when I was interviewing Anil and Sunil for the Deep Dive on Meter, I was ravenous for more information on how they chose which architecture to bet on and why they “selfishly” replace all of their hardware whenever they release new hardware. That was so much fun for me, because one of my sub-questions is, “How can Vertical Integrators beat incumbents by building better, faster, cheaper products?” and there were two of the best I’d ever met at doing just that willing to spend their time to help me answer it.
2025-02-07 21:25:34
Hi friends 👋,
Happy Friday and welcome back to our 130th Weekly Dose of Optimism. Just a darn solid week here at the Weekly Dose — some good breakthrough scientific research and some important AI launches. We’ve come to expect these types of weeks, but we cannot take them for granted. Yes, the world is chaotic right now. But the world is also good. Got to remember that.
Let’s get to it.
Today’s Weekly Dose is brought to you by…Create
Dan here: As some of you know, my main gig is running a creatine brand called Create. We popularized the creatine gummy over the last couple of years, but now offer a variety of creatine-forward products. Believe it or not, we have over 250,000 unique customers just 2 years after launch. Wild.
The TL;DR on creatine is that it was once this highly stigmatized supplement (bodybuilders, quasi-steroid, etc etc…you know the stigma) but is now generally viewed as a highly-effective, research-backed, and safe compound that really anyone can benefit from. What to expect: meaningful changes in strength, body composition, recovery, and energy levels. I personally take about 15-20g/day across gummies & powders to maximize creatine’s physical and cognitive benefits.
You can get 30% off an order of Create using code: notboring30
(1) OLIG2 mediates a rare targetable stem cell fate transition in sonic hedgehog medulloblastoma
Desai et al in Nature
Targeting this rare OLIG2-driven proliferative programme with a small molecule inhibitor, CT-179, dramatically attenuates early tumour formation and tumour regrowth post-therapy, and significantly increases median survival in vivo.
Some very good news to start the week. We may have discovered how to stop childhood tumors before they even begin to form.
Researchers found that the protein Olig2 is a key driver of medulloblastoma (a deadly pediatric brain cancer) growth by activating dormant cancer stem cells and turning them into rapidly dividing tumor cells. Blocking Olig2 with the drug CT-179 keeps these stem cells in a dormant state, preventing both initial tumor formation and relapse after treatment. In short, CT-179 might be able to prevent pediatric brain tumors from forming in the first place and from coming back after treatment.
This approach is different from current therapies that target fast-growing tumor cells but fail to stop the cancer from returning. By focusing on the root of the disease—the stem cells that fuel tumor regrowth—this strategy could lead to more effective, long-lasting treatments for medulloblastoma. If you’re interested in going deeper, @vittorio has a full breakdown of the research on X.
(2) Introducing deep research
From OpenAI
Deep research is a specialized AI capability designed to perform in-depth, multi-step research using data on the public web. It’s fine-tuned on the upcoming OpenAI o3 reasoning model and can autonomously search for and read information from diverse online sources. This enables it to create thorough, documented, and clearly cited reports on complex topics.
If you read the research paper from Nature above…good on ya. But if you’re anything like me, you likely copy + pasted the highly technical research into ChatGPT and prompted the model to give you a relatively short summary intended for an intelligent, but not expert, audience. ChatGPT has been good at this type of summary work for a while. Now, it’s starting to get better at multi-step, complex research: not just summarizing, but discovering, synthesizing, and reasoning about the content, then adapting its plan as it uncovers more information. It can take over 5 minutes for deep research to return its answer as it works through the information and develops a thoughtful response. The response, then, is a fully cited research report at the level of an expert researcher.
The use cases for deep research are plentiful — imagine an expert-level report on any topic generated in a matter of minutes. Not AI slop or an excerpt from Wikipedia, but an in-depth, highly technical (if necessary), and thoughtfully reasoned research report. That’s useful in scientific research, advanced data analysis, investing, law, national security…or really any type of knowledge work that requires complex thinking. Ultimately, the goal for a tool like deep research is to produce novel concepts and contribute to the research itself, but we’re not there yet. The current iteration is about discovering, synthesizing, and reasoning through existing information and presenting it in a way that can be understood.
(3) NanoCas
Rauch et al
The compact size of NanoCas, in combination with robust nuclease editing, opens the door for single-AAV editing of non-liver tissues in vivo, including the use of newer editing modalities such as reverse transcriptase (RT) editing, base editing, and epigenetic editing.
What is this, CRISPR for ants?!
NanoCas is a newly designed tiny CRISPR system, about one-third the size of standard CRISPR (Cas9), making it small enough to fit into a single AAV delivery vector. An AAV delivery vector is a harmless virus used to deliver genetic material into cells for gene therapy. The big breakthrough with NanoCas is that its ability to fit into a single AAV allows gene editing to reach muscle and heart tissues, not just the liver.
In monkeys, NanoCas successfully edited 30% of muscle cells, the first time a single-AAV CRISPR system has done this. It also fixed genes linked to cholesterol and muscular dystrophy in mice and monkeys, showing promise for treating muscle diseases. A modified version made NanoCas even more effective by improving how well it binds to DNA. NanoCas’ small size and strong performance could unlock potential gene editing treatments for muscle, the heart, and the brain. Lucas Harrington had a great breakdown of the research on X.
(4) Replit Agent
From Replit
Last week, we launched Replit Agent, our AI system that can create and deploy applications. Now, with only a few sentences and a few minutes, you can take an application from idea to deployment.
Make an app for that.
The gap between idea and execution keeps shrinking. This week, Not Boring portfolio company Replit launched Replit Agent. Basically, you can prompt Replit Agent with as little as a few words to build an app for you, and voilà, you have the app. It takes your prompt and configures your development environment, installs dependencies, and executes code. If you’re not technical but have tried to code before, you’re likely familiar with the challenges of first setting up the coding environment and then actually deploying the code into production. Replit Agent makes all of that easy, and oh yeah, does the actually hard thing of writing the code as well. I believe even Packy (!) was able to spin up a pretty solid meditation app all while brushing his teeth.
The idea here is to lower the barrier to entry on coding as far as possible. Don’t know how to code? No problem. Don’t know how to set up a development environment? Why would you? In that same vein, Replit Agent is now available on mobile devices and completely free. You can be on the subway, or on the toilet, or walking to work, and develop an app on your phone. And of course, in Replit fashion, you can always dig deeper from there — investigating the code, upgrading the design, customizing the UX, etc, etc.
(5) Super Bowl LIX
Blow the whistle!
It’s Super Bowl weekend and our hometown Philadelphia Eagles are once again competing for the Lombardi Trophy. I am surprised I was able to get any work done this week, frankly. I love football, I love the Eagles, and I love the city of Philadelphia. So this is a big week!
In a world that’s constantly changing, it’s rare to have a true interest that’s lasted now for over 20 years. It’s comforting and makes you feel grounded. That’s how many people feel about football, regardless of where you’re from. But there’s something about Philly and the Eagles that’s a bit different, at least in my very biased opinion.
Perhaps it’s due to the city itself: Philly is often overlooked and wedged between New York and DC. It’s one of the few big cities in our country where the people that live here are from here. It’s filled with history dating back to our country’s founding and prides itself on its blue collar work ethic. It’s racially and economically diverse, but on Sundays we all bleed green. Perhaps it’s the franchise: so close for so many years, but never quite able to push it over the goal line. It has ownership that embraces the city and invests in the team as if it were the city’s most valuable asset. It attracts talented players that know, going in, they’re playing for more than a paycheck. Perhaps it’s the players: this century, we’ve been blessed by big personalities like Donovan McNabb, Terrell Owens, and DeSean Jackson. Certified Football Guys like Jason Kelce, Chris Long, and Brandon Graham. And truly elite talents like Fletcher Cox, Lane Johnson, and Saquon Barkley.
I am lucky to be an Eagles fan. Go Birds.
Plus, speaking of Saquon:
When worlds collide. Our favorite finance operations platform, Ramp, and our favorite football player, Saquon Barkley, teamed up on this Super Bowl ad. This feels like micro ad targeting.
Packy here: if you love Saquon and Not Boring and you hate doing expenses, get your business on Ramp:
Ramp: The Official Business Card of Not Boring and Saquon Barkley. 🦅
We’ll be back in your inbox next week. In the meantime, Go Birds!
Thanks for reading,
Dan + Packy
2025-01-31 21:24:30
Hi friends 👋,
Happy Friday and welcome back to our 129th Weekly Dose of Optimism. So much going on in the world right now: a flurry of executive orders and action coming out of the White House, the AI industry (maybe) being turned on its head, and a nightmarish airplane accident occurring at an airport I am sure many of us have flown in-and-out of many times. The world can be a chaotic place, but it’s a good place.
Let’s get to it.
Today’s Weekly Dose is brought to you by our 2025 Presenting Sponsor… Vanta
Open doors to next-level growth with Vanta
As a startup founder, finding product-market fit is your top priority.
But landing bigger customers requires SOC 2 or ISO 27001 compliance—a time-consuming process that pulls you away from building and shipping.
That’s where Vanta comes in.
By automating up to 90% of the work needed for SOC 2, ISO 27001, and more, Vanta gets you compliant fast—opening doors to next-level growth opportunities.
Over 9,000 companies like Atlassian, Factory, and Chili Piper streamline compliance with Vanta’s automation and trusted network of security experts. Whether you’re closing your first deal or gearing up for growth, Vanta makes compliance easy.
Let’s 10x the Not Boring audience’s ARR in 2025. Get $1,000 off Vanta here:
(1) XB-1 First Supersonic Flight
From Boom Supersonic
The successful first supersonic flight of Boom’s demonstrator aircraft, XB-1, took place on January 28 2025 at the Mojave Air & Space Port in California. Boom designed, built, and flew the world’s first independently developed supersonic jet—the first civil supersonic jet made in America.
We are once again a Supersonic Civilization.
On Tuesday, Boom Supersonic’s XB-1 demonstrator aircraft broke the sound barrier during a test flight at Mojave Air & Space Port. The XB-1, piloted by Chief Test Pilot Tristan "Geppetto" Brandenburg (great name), reached an altitude of 35,290 feet and accelerated to Mach 1.122 (750 mph), marking the first time a privately developed civil supersonic jet has achieved supersonic speeds.
Boom’s XB-1 demonstrator going supersonic is a major milestone for the company’s ultimate mission of making commercial air travel supersonic. The company’s planned Overture airliner aims to carry 60-80 passengers and travel at speeds of Mach 1.7, or about 1,300 mph. Overture should be able to travel twice as fast over water and roughly 20% faster over land than traditional commercial airliners. That would cut the time it takes to fly from New York to London from about seven hours to approximately three and a half hours. The company has already secured 130 pre-orders for its supersonic airliners from the likes of American Airlines, United, and Japan Airlines.
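If you want a rough sanity check on that time savings, here’s a quick back-of-the-envelope sketch. The route distance and subsonic cruise speed below are assumptions for illustration, not Boom’s figures; only the Mach 1.7 / ~1,300 mph number comes from the paragraph above.

```python
# Rough, illustrative flight-time arithmetic for the NY-to-London claim.
# ROUTE_MILES and SUBSONIC_MPH are assumed values for this sketch.

ROUTE_MILES = 3_460      # assumed NY-London distance
SUBSONIC_MPH = 560       # assumed typical airliner cruise speed
OVERTURE_MPH = 1_300     # Mach 1.7 over water, per the article

subsonic_hours = ROUTE_MILES / SUBSONIC_MPH   # ~6.2 h in cruise
overture_hours = ROUTE_MILES / OVERTURE_MPH   # ~2.7 h in cruise

print(f"subsonic cruise: ~{subsonic_hours:.1f} h, Overture cruise: ~{overture_hours:.1f} h")
# Add taxi, climb, descent, and routing, and you land near the article's
# "about seven hours" vs. "approximately three and a half hours."
```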
Boom.
(2) DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
From DeepSeek
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities.
If I had a nickel for every DeepSeek hot take I’ve read over the past week, I could train DeepSeek.
We don’t really have a hot take, but I think there are probably some takeaways from the whole ordeal:
It turns out you can train a best-in-class AI model for way less than you’ve had to in the past. DeepSeek is marketing that it spent $5.5M to train R1, but that is likely an underestimation of the true costs, to say nothing of the costs it was able to forgo by building on (and training on) the models that came before it. Whatever the true cost, it was surprisingly cheap, and that’s going to drive costs down in the rest of the industry.
Open sourcing leads to commoditization and, likely, a faster pace of innovation. Going forward, there will still be frontier breakthroughs, but those breakthroughs will likely be quickly copied and open sourced. That’ll drive down the prices to use AI models, which will mean we can use AI to do more things that otherwise wouldn’t be economically viable.
We’ll likely see Jevons Paradox play out at massive scale. Jevons Paradox is the idea that increasing efficiency in resource use often leads to higher overall consumption of that resource, rather than a reduction. Everyone is freaking out about how DeepSeek impacts future Nvidia demand, but Jevons would lead you to believe that as training AI models gets more cost efficient, the demand for chips will actually increase (there’s a toy numeric sketch of this after the takeaways). This is especially true for Nvidia, which now derives a much higher percentage of its revenue from inference: as models get cheaper and AI usage increases, inference needs will increase ~linearly. The training business may be in some trouble, but that trouble may be offset by a massive wave of demand for inference.
Sam Altman is now basically running a DTC brand at OpenAI. In DTC, it’s hard to differentiate on product, and the channels for acquiring new customers are becoming saturated and less efficient. On the product side, once there are signs of traction, your product gets copycatted out the wazoo — both from legitimate competitors and Chinese dupers. On the acquisition front, over time the value accrues to whoever is able to generate attention and demand. In traditional DTC, that’s Meta, Google, and Amazon. It’s not yet clear where value will accrue in AI user acquisition. Ultimately, the only way to differentiate or win long-term is to build a brand that people trust or feel compelled to talk about. Sam, if you need some advice on running a DTC brand, feel free to shoot me a text.
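To make the Jevons point concrete, here’s the toy sketch promised above. Every number is made up purely for illustration; the only claim it encodes is the one from the takeaway: if efficiency gains grow usage faster than they cut per-unit cost, total compute demand goes up.

```python
# A rough back-of-the-envelope sketch of Jevons Paradox applied to AI compute.
# All numbers here are illustrative assumptions, not real figures.

def total_compute_demand(cost_per_query: float, queries: float) -> float:
    """Total compute spend is roughly cost-per-query times query volume."""
    return cost_per_query * queries

# Before: expensive inference, modest usage (made-up units).
before = total_compute_demand(cost_per_query=1.0, queries=1_000_000)

# After a DeepSeek-style efficiency gain: cost per query falls 10x,
# but cheaper AI gets used in far more places, so volume grows 20x.
after = total_compute_demand(cost_per_query=0.1, queries=20_000_000)

print(f"before: {before:,.0f}  after: {after:,.0f}")
# after > before: the efficiency gain increased total consumption,
# which is the Jevons argument for chip demand rising, not falling.
```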
(3) Science Corp. Aims to Plant Ideas in Brains with New Device
Ashlee Vance for Core Memory
Near the end of last year, Science revealed its work on technology that makes it possible to fuse large quantities of lab-grown neurons with an animal’s brain. To do this, Science has built a device that preserves the manufactured neurons in a gel. It then takes out part of an animal’s skull and places its device atop the animal’s brain. In the days that follow, the neurons in the device begin to develop wiring that stretches out from Science’s hardware and into the brain, giving the animal access to extra stores of mental horsepower.
Science Corp got the Ashlee Vance treatment.
The company, led by Neuralink cofounder Max Hodak, is pioneering a new kind of brain-computer interface (BCI) that fuses lab-grown neurons with an animal's brain using a neuron-rich gel device (see above) placed atop the skull. Unlike traditional implants that target specific brain regions, its "biohybrid neural interfaces" aim to integrate seamlessly, providing general-purpose cognitive enhancement without deep brain penetration or genetic modifications.
The company’s early experiments in mice suggest that it could help repair neural damage by rerouting signals around compromised tissue, and early use cases will focus on repairing damage to the brain, but according to Hodak, “The really interesting stuff is how do we go from, like, closing your eyes and imagining a scene to copying that to a computer. You can just imagine a sound and then download it. We think that may be possible.”
Go Duke.
(Sidenote: it’s great to have Ashlee Vance unleashed with his new media company, Core Memory. On Wednesday, he dropped a profile on HudZah — the Waterloo student who used AI to make a nuclear fusor in his house — that just makes you excited for the future and envious of the youths. What a time to be alive.)
(4) Engineered heart muscle allografts for heart repair in primates and humans
Jebran et al in Nature
In the heart failure model, evidence for EHM allograft-enhanced target heart wall contractility and ejection fraction, which are measures for local and global heart support, was obtained. Histopathological and gadolinium-based perfusion magnetic resonance imaging analyses confirmed cell retention and functional vascularization. Arrhythmia and tumour growth were not observed. The obtained feasibility, safety and efficacy data provided the pivotal underpinnings for the approval of a first-in-human clinical trial on tissue-engineered heart repair. Our clinical data confirmed remuscularization by EHM implantation in a patient with advanced heart failure.
One personal story. One macro stat.
Personal: When I was two years old, I had open heart surgery to repair an atrial septal defect. It was a pretty massive surgery to conduct on a very small body, and was quite risky back in the mid-90s. I’ve been scared of and interested in heart stuff ever since.
Macro: Heart disease is the leading cause of death in the U.S., killing one person every 33 seconds—more than all forms of cancer combined. Turns out the heart is a very, very important organ. So advancements in cardiovascular research can have a massive impact. Which is why this research in Nature caught my eye.
Researchers created engineered heart muscle (EHM) grafts from stem cells to repair failing hearts. In initial trials on monkeys, these grafts integrated well, improved heart function, and showed no major side effects. A human trial also showed successful heart remuscularization in a patient with advanced heart failure. The ability to successfully and scalably repair failing hearts offers a potential alternative to the 10,000+ full heart transplants that occur each year globally.
Great news for anyone with a heart. Personally, I’m pumped.
(5) "New Space" (Frontier Film Trailer)
Friend of the newsletter, Jason Carman, is releasing his first feature-length film “New Space” tomorrow evening. The movie is nearly 90 minutes long, features over 20 interviews with space leaders from industry and government, and chronicles the story of space today. If the full-length version is anything like the trailer, I won’t be able to look away. Seriously, watch that trailer and try not to get pumped up. Follow the S3 YouTube channel and turn on your notis for Jason on X so you don’t miss the drop.
At Not Boring, we’ve written extensively on what’s happening in the space industry today. We cover it weekly and Packy has written multiple short books on the industry and its emergent players over the years. Reading all of that (here, here, here, and here) is great, but watching it all come to life in an epic 90 minute video may be even cooler.
Bonus:
Meter is building the "internet utility"—a vertically integrated networking company that aims to make setting up and managing internet infrastructure as seamless as turning on electricity or water. This is one of the most impressive companies you’ve never heard of. And Packy covered it like the super thoughtful schizo that he is in a mind bending 100 page deep dive.
Pairs well with a little weekend morning and a big coffee.
Double Bonus: Substack Market Forecast Summit
Packy here. At 11:30am this morning, I will be talking about VC from the perspective of Solo GPs as part of Substack’s Market Forecast Summit.
You can join us (free) LIVE in the Substack app at the link here. You’ll need the app anyway to hear Patrick Collison talk U.S. technological competitiveness this afternoon!
Thanks to Vanta for sponsoring! No essay next week, so we’ll be back in your inbox next Friday.
Thanks for reading,
Dan + Packy