2025-05-13 21:00:50
US global dominance in science was no accident, but a product of a far-seeing partnership between public and private sectors to boost innovation and economic growth.
Since 20 January, US science has been upended by severe cutbacks from the administration of US President Donald Trump. A series of dramatic reductions in grants and budgets — including the US National Institutes of Health (NIH) slashing reimbursements of indirect research costs to universities from around 50% to 15% — and deep cuts to staffing at research agencies have sent shock waves throughout the academic community.
These cutbacks put the entire US research enterprise at risk. For more than eight decades, the United States has stood unrivalled as the world’s leader in scientific discovery and technological innovation. Collectively, US universities spin off more than 1,100 science-based start-up companies each year, leading to countless products that have saved and improved millions of lives, including heart and cancer drugs, and the mRNA-based vaccines that helped to bring the world out of the COVID-19 pandemic.
These breakthroughs were made possible mostly by a robust partnership between the US government and universities. This system emerged as an expedient wartime design to fund weapons research and development (R&D) in universities. It has fuelled US innovation, national security and economic growth.
But, today, this engine is being sabotaged in the Trump administration’s attempt to purge research programmes in areas it doesn’t support, such as climate change and diversity, equity and inclusion, and to rein in campus protests. But the broader cuts are also dismantling the very infrastructure that made the United States a scientific superpower. At best, US research is at risk from friendly fire; at worst, it’s political short-sightedness.
Researchers mustn’t be complacent. They must communicate the difference between eliminating ideologically objectionable programmes and undermining the entire research ecosystem. Here’s why the US research system is uniquely valuable, and what stands to be lost.
The backbone of US innovation is a close partnership between government, universities and industry. It is a well-calibrated ecosystem: federally funded research at universities drives scientific advancement, which in turn spins off technology, patents and companies. This system emerged in the wake of the Second World War, rooted in the vision of US presidential science adviser Vannevar Bush and a far-sighted Congress, which recognized that US economic and military strength hinge on investment in science (see ‘Two systems’).
It need not have been this way. Before the Second World War, the United Kingdom led the world in many scientific domains, but its focus on centralized government laboratories rather than university partnerships stifled post-war commercialization. By contrast, the United States channelled wartime research funds into universities, enabling breakthroughs that were scaled up by private industry to drive the nation’s post-war economic boom. This partnership became the foundation of Silicon Valley and the aerospace, nuclear and biotechnology industries.
The US government remains the largest source of academic R&D funding globally — with a budget of US$201.9 billion for federal R&D in the financial year 2025. Out of this pot, more than two dozen research agencies direct grants to US universities, totalling $59.7 billion in 2023, with the NIH and the US National Science Foundation (NSF) receiving the most.
The agencies do this for a reason: they want professors at universities to do research for them. In exchange, the agencies get basic research from universities that moves science forward, or applied research that creates prototypes of potential products. By partnering with universities, the agencies get more value for money and quicker innovation than if they did all the research themselves.
This is because universities can leverage their investments from the government with other funds that they draw in. For example, in 2023, US universities received $27.7 billion from charitable donations, $6.2 billion in industrial collaborations, $6.7 billion from non-profit organizations, $5.4 billion from state and local government and $3.1 billion from other sources — boosting the $59.7 billion up to $108.8 billion (see ‘US research ecosystem’). This external money goes mostly to creating research labs and buildings that, as any campus visitor has seen, are often named after their donors.
Source: US Natl Center for Science and Engineering Statistics; US Congress; US Natl Venture Capital Assoc; AUTM; Small Business Administration
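As a quick arithmetic check of the figures above, here is a minimal sketch in Python that simply totals the 2023 funding sources quoted in the text; the numbers are taken directly from the paragraph and the rounding is approximate.

```python
# Minimal sketch: total the 2023 university R&D funding mix described above
# (figures in billions of US dollars, taken from the text).
funding = {
    "federal agencies": 59.7,
    "charitable donations": 27.7,
    "industrial collaborations": 6.2,
    "non-profit organizations": 6.7,
    "state and local government": 5.4,
    "other sources": 3.1,
}

total = sum(funding.values())
federal_share = funding["federal agencies"] / total

print(f"Total academic R&D funding: ${total:.1f} billion")  # ~$108.8 billion
print(f"Federal share of the total: {federal_share:.0%}")    # ~55%
```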
Thus, federal funding for science research in the United States is decentralized. It supports mostly curiosity-driven basic science, but also prizes innovation and commercial applicability. Academic freedom is valued and competition for grants is managed through peer review. Other nations, including China and those in Europe, tend to have more-centralized and bureaucratic approaches.
But what makes the US ecosystem so powerful is what then happens to the university research: it’s the engine for creating start-ups and jobs. In 2023, US universities licensed 3,000 patents, 3,200 copyrights and 1,600 other licences to technology start-ups and existing companies. Universities also spin off more than 1,100 science-based start-ups each year, which lead to countless products.
Since the 1980 Bayh–Dole Act, US universities have been able to retain ownership of inventions that were developed using federally funded research (see go.nature.com/4cesprf). Before this law, any patents resulting from government-funded research were owned by the government, so they often went unused.
Closing the loop, these technology start-ups also get a yearly $4-billion injection in seed-funding grants from the same government research agencies. Venture capital adds a whopping $171 billion to scale those investments.
It all adds up to a virtuous circle of discovery and innovation.
A crucial but under-appreciated component of this US research ecosystem is the indirect-cost reimbursement system, which allows universities to maintain the facilities and administrative support necessary for cutting-edge research. Critics often misunderstand the function of these funds, assuming that universities can spend this money on other areas, such as diversity, equity and inclusion programmes. In reality, they fund essential infrastructure: laboratory space, compliance with safety regulations, data storage and administrative support that allows principal investigators to focus on science rather than paperwork. Without this support, universities cannot sustain world-class research.
Reimbursing universities for indirect costs began during the Second World War, and it was as ground-breaking as the weapons development itself. Unlike in a typical fixed-price contract, the government did not set requirements for university researchers to meet or specifications to design their research to. It asked them to do research and, if the research looked like it might solve a military problem, to build a prototype that could be tested. In return, the government paid the researchers for their direct and indirect research costs.
Vannevar Bush (right) led the US Office of Scientific Research and Development during the Second World War. Credit: Bettmann/Getty
At first, the government reimbursed universities for indirect costs at a flat rate of 25% of direct costs. Unlike businesses, universities had no profit margin, so indirect-cost recovery was their only way to pay for and maintain their research infrastructure. By the end of the war, some universities had agreed on a 50% rate. The rate is applied to direct costs, so that a principal investigator will be able to spend two-thirds of a grant on direct research costs and the rest will go to the university for indirect costs. (A common misconception is that indirect-cost rates are a percentage of the total grant, for example a 50% rate meaning that half of the award goes to overheads.)
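As a worked example of that arithmetic, here is a minimal sketch in Python; the award size is hypothetical and the 50% rate is the on-campus rate discussed above.

```python
# Illustrative arithmetic for indirect-cost rates (award size is hypothetical).
# A negotiated rate is applied to direct costs, not to the total award.
award = 900_000          # total grant, in dollars (hypothetical)
indirect_rate = 0.50     # 50% negotiated indirect-cost rate

# direct + (rate * direct) = award  =>  direct = award / (1 + rate)
direct_costs = award / (1 + indirect_rate)
indirect_costs = award - direct_costs

print(f"Direct research costs: ${direct_costs:,.0f}")   # $600,000 (two-thirds of the award)
print(f"Indirect costs:        ${indirect_costs:,.0f}") # $300,000 (one-third of the award)
```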
After the Second World War, the US Office of Naval Research (ONR) began negotiating indirect-cost rates with universities on the basis of actual institutional expenses. Universities had to justify their overhead costs (administration, facilities, utilities) to receive full reimbursement. The ONR formalized financial auditing processes to ensure that institutions reported indirect costs accurately. This led to the practice of negotiating indirect-cost rates, which is still used today.
Since then, the reimbursement process has been tweaked to prevent gaming the system, but has remained essentially the same. Universities negotiate their indirect-cost rates with either the US Department of Health and Human Services (HHS) or the ONR. Most research-intensive universities receive rates of 50–60% for on-campus research. Private foundations often have a lower rate (10–20%), but tend to have wider criteria for what can be considered a direct cost.
In 2017, the first Trump administration attempted to impose a 10% cap on indirect costs for NIH research. Some in the administration viewed such costs as a form of bureaucratic bloat and argued that research universities were profiting from inflated overhead rates.
Congress rejected this and later added language in the annual funding bill that essentially froze most rates at their 2017 levels. This provision is embodied in section 224 of the Consolidated Appropriations Act of 2024, which has been extended twice and is still in effect.
In February, however, the NIH slashed its indirect reimbursement rate to an arbitrary 15% (see go.nature.com/4cgsndz). That policy is currently being challenged in court.
If the policy is ultimately allowed to proceed, the consequences will be immediate. Billions of dollars of support for research universities will be gone. In anticipation, some research universities are already scaling back their budgets, halting lab expansions and reducing graduate-student funding. This will mean fewer start-ups being founded, with effects on products, services, jobs, taxes and exports.
The ripple effects of Trump’s cuts to US academia are spreading, and one area in which there will be immediate ramifications is the loss of scientific talent. The United States has historically been the top destination for international researchers, thanks to its well-funded universities, innovation-driven economy and opportunities for commercialization.
US-trained scientists — many of whom have historically stayed in the country to launch start-ups or contribute to corporate R&D — are being actively recruited by foreign institutions, particularly in China, which has ramped up its science investments. China has expanded its Thousand Talents Program, which offers substantial financial incentives to researchers willing to relocate. France and other European nations are beginning to design packages to attract top US researchers.
Erosion of the US scientific workforce will have long-term consequences for its ability to innovate. If the country dismantles its research infrastructure, future transformative breakthroughs — whether in quantum computing, cancer treatment, autonomy or artificial intelligence — will happen elsewhere. The United States runs the risk of becoming dependent on foreign scientific leadership for its own economic and national-security needs.
History suggests that, once a nation loses its research leadership, regaining it is difficult. The United Kingdom never reclaimed its pre-war dominance in technological innovation. If current trends continue, the same fate might await the United States.
University research is not merely an academic concern — it is an economic and strategic imperative. Policymakers must recognize that federal R&D investments are not costs but catalysts for growth, job creation and national security.
Policymakers need to reaffirm the United States’ commitment to scientific leadership. If the country fails to act now, the consequences will be felt for generations. The question is no longer whether the United States can afford to invest in research. It is whether it can afford not to.
2025-04-15 21:00:52
Prior to WWII, the U.S. was a distant second in science and engineering. By the time the war was over, U.S. science and engineering had blown past the British and has led the world for 85 years.
It happened because two very different people were the science advisors to their nations’ leaders. Each had radically different views on how to use their country’s resources to build advanced weapon systems. Post-war, it meant Britain’s early lead was ephemeral, while the U.S. built the foundation for a science and technology innovation ecosystem that led the world – until now.
The British – Military Weapons Labs
When Winston Churchill became the British prime minister in 1940, he had at his side his science advisor, Professor Frederick Lindemann, his friend for 20 years. Lindemann headed up the physics department at Oxford and was the director of the Oxford Clarendon Laboratory. Already at war with Germany, Britain’s wartime priorities focused on defense and intelligence technology projects, e.g. weapons that used electronics, radar, physics, etc. – a radar-based air defense network called Chain Home, airborne radar on night fighters, and plans for a nuclear weapons program – the MAUD Committee which started the British nuclear weapons program code-named Tube Alloys. And their codebreaking organization at Bletchley Park was starting to read secret German messages – the Enigma – using the earliest computers ever built.
As early as the mid 1930s, the British, fearing Nazi Germany, developed prototypes of these weapons using their existing military and government research labs. The Telecommunications Research Establishment built early-warning radar, critical to Britain’s survival during the Battle of Britain, and electronic warfare to protect British bombers over Germany. The Admiralty Research Lab built sonar and anti-submarine warfare systems. The Royal Aircraft Establishment was developing jet fighters. The labs then contracted with British companies to manufacture the weapons in volume. British government labs viewed their universities as a source of talent, but the universities themselves had no role in weapons development.
Under Churchill, Professor Lindemann influenced which projects received funding and which were sidelined. Lindemann’s WWI experience as a researcher and test pilot on the staff of the Royal Aircraft Factory at Farnborough gave him confidence in the competence of British military research and development labs. His top-down, centralized approach with weapons development primarily in government research labs shaped British innovation during WW II – and led to its demise post-war.
The Americans – University Weapons Labs
Unlike Britain, the U.S. lacked a science advisor. It wasn’t until June 1940 that Vannevar Bush – ex-MIT dean of engineering and President of the Carnegie Institution – told President Franklin Roosevelt that World War II would be the first war won or lost on the basis of advanced technology: electronics, radar, physics, etc.
Unlike Lindemann, Bush had a 20-year-long contentious history with the U.S. Navy and a dim view of government-led R&D. Bush contended that the government research labs were slow and second rate. He convinced the President that while the Army and Navy ought to be in charge of making conventional weapons – planes, ships, tanks, etc. — scientists from academia could develop better advanced technology weapons and deliver them faster than Army and Navy research labs. And he argued the only way the scientists could be productive was if they worked in a university setting in civilian-run weapons labs run by university professors.
To the surprise of the Army and Navy Service chiefs, Roosevelt agreed to let Bush build exactly that organization to coordinate and fund all advanced weapons research.
(While Bush had no prior relationship with the President, Roosevelt had been the Assistant Secretary of the Navy during World War I and like Bush had seen first-hand its dysfunction. Over the next four years they worked well together. Unlike Churchill, Roosevelt had little interest in science and accepted Bush’s opinions on the direction of U.S. technology programs, giving Bush sweeping authority.)
In 1941, Bush upped the game by convincing the President that, in addition to research, the development, acquisition and deployment of these weapons also ought to be done by professors in universities. There they would be tasked to develop military weapons systems and solve military problems to defeat Germany and Japan. (The weapons were then manufactured in volume by U.S. corporations – Western Electric, GE, RCA, Dupont, Monsanto, Kodak, Zenith, Westinghouse, Remington Rand and Sylvania.) To do this Bush created the Office of Scientific Research and Development (OSR&D).
OSR&D headquarters divided the wartime work into 19 “divisions,” 5 “committees,” and 2 “panels,” each solving a unique part of the military war effort. There were no formal requirements.
Staff at OSR&D worked with their military liaisons to understand the most important military problems, and each OSR&D division then came up with solutions. These efforts spanned an enormous range of tasks – the development of advanced electronics, radar, rockets, sonar, new weapons like the proximity fuse, Napalm and the Bazooka, new drugs such as penicillin and cures for malaria, plus chemical warfare and nuclear weapons.
Each division was run by a professor hand-picked by Bush. And they were located in universities – MIT, Harvard, Johns Hopkins, Caltech, Columbia and the University of Chicago all ran major weapons systems programs. Nearly 10,000 scientists and engineers, professors and their grad students received draft deferments to work in these university labs.
(Prior to World War 2, science in U.S. universities was primarily funded by companies interested in specific research projects. But funding for basic research came from two non-profits: The Rockefeller Foundation and the Carnegie Institution. In his role as President of the Carnegie Institution Bush got to know (and fund!) every top university scientist in the U.S. As head of Physics at Oxford, Lindemann viewed other academics as competitors.)
The Americans – Unlimited Dollars
What changed U.S. universities, and the world forever, was government money. Lots of it. Prior to WWII most advanced technology research in the U.S. was done in corporate innovation labs (GE, AT&T, Dupont, RCA, Westinghouse, NCR, Monsanto, Kodak, IBM, et al.) Universities had no government funding (except for agriculture) for research. Academic research had been funded by non-profits, mostly the Rockefeller and Carnegie foundations and industry. Now, for the first time, U.S. universities were getting more money than they had ever seen. Between 1941 and 1945, OSR&D gave $9 billion (in 2025 dollars) to the top U.S. research universities. This made universities full partners in wartime research, not just talent pools for government projects as was the case in Britain.
The British – Wartime Constraints
Wartime Britain had very different constraints. First, England was under daily attack. They were being bombed by air and blockaded by submarines, so it was logical that they focused on a smaller set of high-priority projects to counter these threats. Second, the country was teetering on bankruptcy. It couldn’t afford the broad and deep investments that the U.S. made. (Illustrated by their abandonment of their nuclear weapons programs when they realized how much it would cost to turn the research into industrial scale engineering.) This meant that many other areas of innovation—such as early computing and nuclear research—were underfunded compared to their American counterparts.
Post War – Britain
Churchill was voted out of office in 1945. With him went Professor Lindemann and the coordination of British science and engineering. Britain would be without a science advisor until Churchill returned for a second term (1951–55) and brought Lindemann back with him.
The end of the war led to extreme downsizing of the British military including severe cuts to all the government labs that had developed Radar, electronics, computing, etc.
Post-war Britain was financially exhausted, and austerity limited its ability to invest in large-scale innovation. There were no post-war plans for government follow-on investments. The differing economic realities of the U.S. and Britain also played a key role in shaping their innovation systems. The United States had an enormous industrial base, abundant capital, and a large domestic market, which enabled large-scale investment in research and development. In Britain, a socialist government came to power. Churchill’s successor, Labour’s Clement Attlee, dissolved the British empire and nationalized banking, power and light, transport, and iron and steel, all of which reduced competition and slowed technological progress.
While British research institutions like Cambridge and Oxford remained leaders in theoretical science, they struggled to scale and commercialize their breakthroughs. For instance, Alan Turing’s and Tommy Flowers’s pioneering work on computing at Bletchley Park didn’t turn into a thriving British computing industry – unlike in the U.S., where companies like ERA, Univac, NCR and IBM built on their wartime work.
Without the same level of government support for dual-use technologies or commercialization, and with private capital absent for new businesses, Britain’s post-war innovation ecosystem never took off.
Post War – The U.S.
Meanwhile, in the U.S., universities and companies realized that the wartime government funding for research had been an amazing accelerator for science, engineering, and medicine. Everyone, including Congress, agreed that the U.S. government should continue to play a large role in funding it. In 1945, Vannevar Bush published a report, “Science, The Endless Frontier,” advocating for government funding of basic research in universities, colleges, and research institutes. Congress argued over how best to organize federal support of science.
By the end of the war, OSR&D funding had taken technologies that had been just research papers or considered impossible to build at scale and made them commercially viable – computers, rockets, radar, Teflon, synthetic fibers, nuclear power, etc. Innovation clusters formed around universities like MIT and Harvard, which had received large amounts of OSR&D funding (MIT’s Radiation Lab or “Rad Lab” employed 3,500 civilians during WWII and developed and built 100 radar systems deployed in theater), or around professors who ran one of the OSR&D divisions – like Fred Terman at Stanford.
When the war ended, the Atomic Energy Commission spun out of the Manhattan Project in 1946 and the military services took back advanced weapons development. In 1950 Congress set up the National Science Foundation to fund all basic science in the U.S. (except for Life Sciences, a role the new National Institutes of Health would assume.) Eight years later DARPA and NASA would also form as federal research agencies.
Ironically, Vannevar Bush’s influence would decline even faster than Professor Lindemann’s. When President Roosevelt died in April 1945 and Secretary of War Stimson retired in September 1945, all the knives came out from the military leadership Bush had bypassed in the war. His arguments on how to reorganize OSR&D made more enemies in Congress. By 1948 Bush had retired from government service. He would never again play a role in the U.S. government.
Divergent Legacies
Britain’s focused, centralized model using government research labs was created in a struggle for short-term survival. It achieved brilliant breakthroughs but lacked the scale, integration and capital needed to dominate in the post-war world.
The U.S. built a decentralized, collaborative ecosystem, one that tightly integrated massive government funding of universities for research and prototypes while private industry built the solutions in volume.
A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying the cost of their salaries, it also gave universities money for the researchers’ facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S., causing other countries to complain of a “brain drain.”
Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies. Collectively, they spin out over 1,100 science-based startups each year, which lead to countless products and tens of thousands of new jobs. This university/government ecosystem became the blueprint for modern innovation ecosystems for other countries.
Summary
By the end of the war, the U.S. and British innovation systems had produced radically different outcomes. Both systems were shaped by the experience and personalities of their nations’ science advisors.
2024-10-22 21:00:16
In March 2022 I wrote a description of the Quantum Technology Ecosystem. I thought this would be a good time to check in on the progress of building a quantum computer and explain more of the basics.
Just as a reminder, quantum technologies are used in three very different and distinct markets: Quantum Computing, Quantum Communications, and Quantum Sensing and Metrology. If you don’t know the difference between a qubit and a cue ball (I didn’t), read the tutorial here.
Summary
We talk a lot about qubits in this post. As a reminder, a qubit is short for a quantum bit. It is a quantum computing element that leverages the principle of superposition (that quantum particles can exist in many possible states at the same time) to encode information via one of four methods: spin, trapped atoms and ions, photons, or superconducting circuits.
Incremental Technical Progress
As of 2024, there are seven different approaches being explored to build physical qubits for a quantum computer. The most mature currently are Superconducting, Photonics, Cold Atoms, and Trapped Ions. Other approaches include Quantum Dots, Nitrogen-Vacancy Centers in Diamond, and Topological qubits. All these approaches have incrementally increased the number of physical qubits.
These multiple approaches are being tried because there is no consensus on the best path to building logical qubits. Each company believes that its technology approach will lead to a path to scale to a working quantum computer.
Every company currently hypes the number of physical qubits it has working. By itself, this is a meaningless indicator of progress toward a working quantum computer. What matters is the number of logical qubits.
Reminder – Why Build a Quantum Computer?
One of the key misunderstandings about quantum computers is that they are faster than current classical computers on all applications. That’s wrong. They are not. They are faster on a small set of specialized algorithms. These special algorithms are what make quantum computers potentially valuable. For example, running Grover’s algorithm on a quantum computer can search unstructured data faster than a classical computer. Further, quantum computers are theoretically very good at minimization, optimization and simulation – think optimizing complex supply chains, energy states to form complex molecules, financial models (looking at you, hedge funds), etc.
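As a rough illustration of Grover’s quadratic speedup, here is a back-of-the-envelope comparison in Python of query counts for an idealized unstructured search; it ignores error correction and all real-world circuit overhead, so treat it as a sketch of the scaling, not a performance claim.

```python
# Back-of-the-envelope query counts for unstructured search of N items
# (idealized: no error correction, no circuit overhead).
import math

for n_items in (1_000_000, 1_000_000_000):
    classical = n_items / 2                       # expected queries for a linear scan
    grover = (math.pi / 4) * math.sqrt(n_items)   # ~optimal number of Grover iterations
    print(f"N = {n_items:>13,}: classical ~ {classical:,.0f} queries, "
          f"Grover ~ {grover:,.0f} queries")
```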
However, while all of these algorithms might have commercial potential one day, no one has yet come up with a use for them that would radically transform any business or military application. Except for one – and that one keeps people awake at night. It’s Shor’s algorithm for integer factorization – factorization being a problem that underlies much of today’s public-key cryptography.
The security of today’s public-key cryptography systems rests on the assumption that breaking keys a thousand or more bits long is practically impossible. It requires factoring large numbers into primes (e.g., RSA) or solving discrete logarithms over elliptic curves (e.g., ECDSA, ECDH) or finite fields (DSA) – problems that can’t be solved by any type of classical computer, regardless of how large. Shor’s factorization algorithm can crack these codes if run on a quantum computer. This is why NIST has been encouraging the move to Post-Quantum / Quantum-Resistant Codes.
How many physical qubits do you need for one logical qubit?
Thousands of logical qubits are needed to create a quantum computer that can run these specialized applications. Each logical qubit is constructed out of many physical qubits. The question is, how many physical qubits are needed? Herein lies the problem.
Unlike traditional transistors in a microprocessor, which once manufactured always work, qubits are unstable and fragile. They can pop out of a quantum state due to noise, decoherence (when a qubit interacts with the environment), crosstalk (when a qubit interacts with a physically adjacent qubit), and imperfections in the materials making up the quantum gates. When that happens, errors occur in quantum calculations. So to correct for those errors you need lots of physical qubits to make one logical qubit.
So how do you figure out how many physical qubits you need?
You start with the algorithm you intend to run.
Different quantum algorithms require different numbers of qubits. Some algorithms (e.g., Shor’s prime factoring algorithm) may need >5,000 logical qubits (the number may turn out to be smaller as researchers figure out how to implement the algorithm with fewer logical qubits).
Other algorithms (e.g., Grover’s algorithm) require fewer logical qubits for trivial demos but need thousands of logical qubits to see an advantage over linear search running on a classical computer. (See here, here and here for other quantum algorithms.)
Measure the physical qubit error rate.
Therefore, working out the number of physical qubits you need to make a single logical qubit starts with measuring the physical qubit error rate (gate error rates, coherence times, etc.). Different technical approaches (superconducting, photonics, cold atoms, etc.) have different error rates and causes of errors unique to the underlying technology.
Current state-of-the-art quantum qubits have error rates that are typically in the range of 1% to 0.1%. This means that on average one out of every 100 to one out of 1000 quantum gate operations will result in an error. System performance is limited by the worst 10% of the qubits.
Choose a quantum error correction code
To recover from the error-prone physical qubits, quantum error correction encodes the quantum information into a larger set of physical qubits that is resilient to errors. The surface code is the most commonly proposed error-correction code. A practical surface code uses hundreds of physical qubits to create a logical qubit. Quantum error correction codes get more efficient the lower the error rates of the physical qubits. When errors rise above a certain threshold, error correction fails, and the logical qubit becomes as error-prone as the physical qubits.
The Math
To factor a 2048-bit number using Shor’s algorithm with a 10^-2 (1% per physical qubit) error rate:
If you could reduce the error rate by a factor of 10 – to 10^-3 (0.1% per physical qubit):
In reality, there is another 10% or so of ancillary physical qubits needed for overhead. And no one yet knows the error rate in wiring multiple logical qubits together via optical links or other technologies.
(One caveat to the math above: it assumes that every technical approach (Superconducting, Photonics, Cold Atoms, Trapped Ions, et al.) will require hundreds of physical qubits of error correction to make each logical qubit. There is always a chance a breakthrough could create physical qubits that are inherently stable, so that the number of error-correction qubits needed drops substantially. If that happens, the math changes dramatically for the better and quantum computing becomes much closer.)
Today, the best anyone has done is to create 1,000 physical qubits.
We have a ways to go.
Advances in materials science will drive down error rates
As the math above shows, regardless of the technology used to create physical qubits (Superconducting, Photonics, Cold Atoms, Trapped Ions, et al.), reducing errors in qubits can have a dramatic effect on how quickly a quantum computer can be built. The lower the physical qubit error rate, the fewer physical qubits needed in each logical qubit.
The key to this is materials engineering. To make a system of hundreds of thousands of qubits work, the qubits need to be uniform and reproducible. For example, decoherence errors are caused by defects in the materials used to make the qubits. For superconducting qubits that requires uniform film thickness, controlled grain size, and low roughness. Other technologies require low loss and uniformity. All of the approaches to building a quantum computer require engineering exotic materials at the atomic level – resonators using tantalum on silicon, Josephson junctions built out of magnesium diboride, transition-edge sensors, superconducting nanowire single-photon detectors, etc.
Materials engineering is also critical in packaging these qubits (whether it’s superconducting or conventional packaging) and to interconnect 100s of thousands of qubits, potentially with optical links. Today, most of the qubits being made are on legacy 200mm or older technology in hand-crafted processes. To produce qubits at scale, modern 300mm semiconductor technology and equipment will be required to create better defined structures, clean interfaces, and well-defined materials. There is an opportunity to engineer and build better fidelity qubits with the most advanced semiconductor fabrication systems so the path from R&D to high volume manufacturing is fast and seamless.
There are likely only a handful of companies on the planet that can fabricate these qubits at scale.
Regional research consortiums
Two U.S. states, Illinois and Colorado, are vying to be the center of advanced quantum research.
Illinois Quantum and Microelectronics Park (IQMP)
Illinois has announced the Illinois Quantum and Microelectronics Park initiative, in collaboration with DARPA’s Quantum Proving Ground (QPG) program, to establish a national hub for quantum technologies. The State approved $500M for a “Quantum Campus” and has received $140M+ from DARPA with the state of Illinois matching those dollars.
Elevate Quantum
Elevate Quantum is the quantum tech hub for Colorado, New Mexico, and Wyoming. The consortium was awarded $127 million from the federal and state governments – $40.5 million from the Economic Development Administration (part of the Department of Commerce), $77 million from the State of Colorado, and $10 million from the State of New Mexico.
(The U.S. has a National Quantum Initiative (NQI) to coordinate quantum activities across the entire government see here.)
Venture capital investment, FOMO, and financial engineering
Venture capital has poured billions of dollars into quantum computing, quantum sensors, quantum networking and quantum tools companies.
However, regardless of the amount of money raised, corporate hype, PR spin, press releases and public offerings, no company is remotely close to having a quantum computer, or even close to running any commercial application substantively faster than on a classical computer.
So why all the investment in this area?
Often, companies in a “hot space” (like quantum) can go public and sell shares to retail investors who have almost no knowledge of the space other than the buzzword. If the stock price can stay high for 6 months the investors can sell their shares and make a pile of money regardless of what happens to the company.
The track record so far of quantum companies who have gone public is pretty dismal. Two of them are on the verge of being delisted.
Here are some simple questions to ask companies building quantum computers:
Lessons Learned
- Lots of companies
- Lots of investment
- Great engineering occurring
- Improvements in quantum algorithms may add as much (or more) to quantum computing performance as hardware improvements
- The winners will be the ones who master materials engineering and interconnects
- Jury is still out on all bets
Update: the kind folks at Applied Materials pointed me to the original 2012 Surface Codes paper. They pointed out that the math should look more like:
Still pretty far away from the 1,000 qubits we currently can achieve.
For those so inclined…
The logical qubit error rate P_L is P_L = 0.03 (p/p_th)^((d+1)/2), where p_th ~ 0.6% is the error rate threshold for surface codes, p the physical qubit error rate, and d is the size of the code, which is related to the number of the physical qubits: N = (2d – 1)^2.
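As a rough illustration, here is a minimal Python sketch that applies this scaling law to estimate the number of physical qubits per logical qubit; the target logical error rate of 10^-12 is an illustrative assumption, not a figure from the post.

```python
# Minimal sketch using the surface-code scaling law quoted above:
#   P_L = 0.03 * (p / p_th) ** ((d + 1) / 2),  with p_th ~ 0.6% and N = (2d - 1)**2
# It estimates how many physical qubits one logical qubit needs to hit a target
# logical error rate. The 1e-12 target below is an illustrative assumption.

P_TH = 0.006  # surface-code error-rate threshold (~0.6%), from the formula above

def physical_qubits_per_logical(p, target_logical_error, max_distance=199):
    """Smallest (odd) code distance d whose logical error rate meets the target."""
    if p >= P_TH:
        return None  # above threshold: error correction does not help
    for d in range(3, max_distance + 1, 2):
        logical_error = 0.03 * (p / P_TH) ** ((d + 1) / 2)
        if logical_error <= target_logical_error:
            return d, (2 * d - 1) ** 2
    return None

for p in (1e-3, 1e-4):  # physical qubit error rates to compare
    result = physical_qubits_per_logical(p, target_logical_error=1e-12)
    if result:
        d, n = result
        print(f"p = {p:.0e}: distance d = {d}, ~{n} physical qubits per logical qubit")
    else:
        print(f"p = {p:.0e}: above threshold, the surface code cannot help")
```

With these (assumed) numbers, a 0.1% physical error rate needs on the order of a few thousand physical qubits per logical qubit, and a 0.01% rate needs a few hundred, which is the point of the materials-engineering argument below.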
See the plot below for P_L versus N at different physical qubit error rates, for reference.
2024-10-08 22:25:00
This article first appeared in First Round Review.
“Only the Paranoid Survive”
Andy Grove – Intel CEO 1987-1998
I just had an urgent “can we meet today?” coffee with Rohan, an ex-student. His three-year-old startup had been slapped with a notice of patent infringement from a Fortune 500 company. “My lawyers said defending this suit could cost $500,000 just for discovery, and potentially millions of dollars if it goes to trial. Do you have any ideas?”
The same day, I got a text from Jared, a friend who’s running a disruptive innovation organization inside the Department of Defense. He just learned that their incumbent R&D organization has convinced leadership they don’t need any outside help from startups or scaleups.
Sigh….
Rohan and Jared have learned three valuable lessons:
It’s a reminder that innovators need to be better prepared for all the possible ways incumbents sabotage innovation.
Innovators often assume that their organizations and industry will welcome new ideas, operating concepts and new companies. Unfortunately, the world does not unfold like business school textbooks.
Whether you’re a new entrant taking on an established competitor, or you’re trying to stay scrappy while operating within a bigger company, here’s what you need to know about how incumbents will try to stand in your way – and what you can do about it.
Entrepreneurs versus Saboteurs
Startups and scaleups outside of companies or government agencies want to take share of an existing market, or displace existing vendors. Or if they have a disruptive technology or business model, they want to create a new capability or operating concept – even creating a new market.
As my student Rohan just painfully learned, the incumbent suppliers and existing contractors want to kill these new entrants. They have no intention of giving up revenue, profits and jobs. (In government, additional saboteurs can include Congressional staffers, Congressmen and lobbyists, as these new entrants threaten campaign contributions and jobs in local districts.)
Intrapreneurs versus Saboteurs
Innovators inside of companies or government agencies want to make their existing organization better, faster, more effective, more profitable, more responsive to competitive threats or to adversaries. They might be creating or advocating for a better version of something that exists. Or perhaps they are trying to create something disruptive that never existed before.
Inside these commercial or government organizations there are people who want to kill innovation (as my friend Jared just discovered). These can be managers of existing programs, or heads of engineering or R&D organizations who are feeling threatened by potential loss of budget and authority. Most often, budgets and headcount are zero-sum games so new initiatives threaten the status quo.
Leaders of existing organizations often focus on the success of their department or program rather than the overall good of the organization. And at times there are perverse incentives as some individuals are aligned with the interests of incumbent vendors rather than the overall good of the company or government agency.
How Do Incumbents Kill Innovation?
Rohan and Jared were each dealing with one form of innovation sabotage. Incumbents use a variety of ways to sabotage and kill innovative ideas inside organizations and new companies outside them. Most of the time, innovators have no idea what just hit them. And those who do – like Rohan and Jared – have no game plan in place to respond.
Here are the most common methods of sabotage that I’ve seen, followed by a few suggestions on how to prepare and defend against them.
Founders and Innovators should expect that existing organizations and companies will defend their turf – ferociously.
There is no magic bullet I could have offered Rohan or Jared to defend against every possible move an incumbent might make. However, if they had realized that incumbents wouldn’t welcome them, they (and you) might have considered the suggestions below on how to prepare for innovation saboteurs.
In both government and commercial markets:
Jared is still trying to get senior leadership to understand that the clock is ticking, and internal R&D efforts and current budget allocation won’t be sufficient or timely. He’s building a larger coalition for change, but the inertia for the status quo is overwhelming.
Rohan’s company was lucky. After months of scrambling (and tens of thousands of dollars), they ended up buying a patent portfolio from a defunct startup and were able to use it to convince the Fortune 500 company to drop their lawsuit.
I hope they both succeed.
What have you found to be effective in taking on incumbents?
2024-10-06 02:44:35
I got a call from an ex-student asking me “how do you know when you found product market fit?”
There’s been lots of words written about it, but no actual recordings of the moment.
I remembered I had saved this 90-second, 26-year-old audio file because this is when I knew we had found it at Epiphany.
The speaker was the Chief Financial Officer of a company called Visio, which was subsequently acquired by Microsoft.
I played it for her and I think it provided some clarity.
It’s worth a listen.
If you can’t hear the audio click here
2024-09-17 21:00:59
Finding a customer for your product in the Department of Defense is hard: Who should you talk to? How do you get their attention?
How do you know if they have money to spend on your product?
It almost always starts with a Program Executive Office.
The Department of Defense (DoD) no longer owns all the technologies, products and services to deter or win a war – e.g. AI, autonomy, drones, biotech, access to space, cyber, semiconductors, new materials, etc.
Today, a new class of startups is attempting to sell these products to the Defense Department. Amazingly, there is no single DoD-wide phone book available to startups listing who to call in the Defense Department.
So I wrote one.
Think of the PEO Directory linked below as a “Who buys in the government?” phone book.
The DoD buys hundreds of billions of dollars of products and services per year, and nearly all of these purchases are managed by Program Executive Offices. A Program Executive Office may be responsible for a specific program (e.g., the Joint Strike Fighter) or for an entire portfolio of similar programs (e.g., the Navy Program Executive Office for Digital and Enterprise Services). PEOs define requirements and their Contracting Officers buy things (handling the formal purchasing, issuing requests for proposals (RFPs), and signing contracts with vendors.) Program Managers (PMs) work with the PEO and manage subsets of the larger program.
Existing defense contractors know who these organizations are and have teams of people tracking budgets and contracts. But startups? Most startups don’t have a clue where to start.
This is a classic case of information asymmetry and it’s not healthy for the Department of Defense or the nascent startup defense ecosystem.
That’s why I put this PEO Directory together.
This first version of the directory lists 75 Program Executive Offices and their Program Executive Officers and Program/Project Managers.
Each Program Executive Office is headed by a Program Executive Officer who is a high ranking official – either a member of the military or a high ranking civilian – responsible for the cost, schedule, and performance of a major system, or portfolio of systems, some worth billions of dollars.
Below is a summary of 75 Program Executive Offices in the Department of Defense.
You can download the full 64-page document of Program Executive Offices and Officers with all 602 names here.
Caveats
Do not depend on this document for accuracy or completeness.
It is likely incomplete and contains errors.
Military officers typically change jobs every few years.
Program Offices get closed and new ones opened as needed.
This means this document was out of date the day it was written. Still, it represents an invaluable starting point for startups looking to work with the DoD.
How to Use The PEO Directory As Part of A Go-To-Market Strategy
While it’s helpful to know what Program Executive Offices exist and who staffs them, it’s even better to know where the money is, what it’s being spent on, and whether the budget is increasing, decreasing, or remaining the same.
The best place to start is by looking through an overview of the entire defense budget here. Then search for those programs in the linked PEO Directory. You can get an idea of whether a given program has billions or millions of dollars behind it.
Next, take a look at the budget documents released by the DoD Comptroller – particularly the P-1 (Procurement) and R-1 (R&D) budget documents.
Combining the budget document with this PEO directory helps you narrow down which of the 75 Program Executive Offices and 500+ program managers to call on.
With some practice you can translate the topline, account, or Program Element (PE) Line changes into a sales Go-To-Market strategy, or at least a hypothesis of who to call on.
Armed with the program descriptions (they’re full of jargon and 9-12 months out of date), the Excel download here, and the Appendix here, you can identify targets for sales calls with the DoD where your product has the best chance of fitting in.
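As an illustrative sketch only, here is how that filtering step might look in Python with pandas; the file name ("r1_budget_export.csv"), the column names, and the keyword list are all assumptions for the example, since a real P-1 or R-1 export will need its own column mapping.

```python
# Hypothetical sketch: turn a budget export into a prioritized call list.
# File name, column names, and keywords are assumptions for illustration.
import pandas as pd

budget = pd.read_csv("r1_budget_export.csv")  # one row per Program Element (PE) line

# Keep PE lines that are growing year over year and mention your technology area.
growing = budget[budget["fy2025_request"] > budget["fy2024_enacted"]]
relevant = growing[growing["title"].str.contains("autonomy|unmanned|sensor",
                                                 case=False, na=False)]

# Sort by absolute dollar growth to decide which PEOs to call on first.
relevant = relevant.assign(growth=relevant["fy2025_request"] - relevant["fy2024_enacted"])
print(relevant.sort_values("growth", ascending=False)[["pe_number", "title", "growth"]].head(10))
```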
The people and organizations in this list change more frequently than the money.
Knowing the people is helpful only after you understand their priorities — and money is the best proxy for that.
Future Work
Ultimately we want to give startups not only who to call on and who has the money, but also which Program Offices are receptive to new entrants – which have converted to portfolio management, which have tried OTA contracts, and which are doing something novel with metrics or outcomes.
Going forward this project will be kept updated by the Stanford Gordian Knot Center for National Security Innovation.
In the meantime send updates, corrections and comments to [email protected]
Credit Where Credit Is Due
Clearly, the U.S. government intends to communicate this information. They have published links to DoD organizations here, even listing DoD social media accounts. But the list is fragmented and irregularly updated. Consequently, this type of directory has not existed in a usable format – until now.