2025-06-17 21:00:04
The videos and PowerPoints embedded in this post are best viewed on steveblank.com
We just finished our 10th annual Hacking for Defense class at Stanford.
What a year.
Hacking for Defense, now in 70 universities, has teams of students working to understand and help solve national security problems. At Stanford this quarter the 8 teams of 41 students collectively interviewed 1,106 beneficiaries, stakeholders, requirements writers, program managers, industry partners, etc. – while simultaneously building a series of minimum viable products and developing a path to deployment.
This year’s problems came from the U.S. Army, U.S. Navy, CENTCOM, Space Force/Defense Innovation Unit, the FBI, IQT, and the National Geospatial-Intelligence Agency.
We opened this year’s final presentations session with inspiring remarks by Joe Lonsdale on the state of defense technology innovation and a call to action for our students. During the quarter, guest speakers in the class included former National Security Advisor H.R. McMaster; former Secretary of Defense Jim Mattis; John Cogbill, Deputy Commander of the 18th Airborne Corps; Michael Sulmeyer, former Assistant Secretary of Defense for Cyber Policy; and John Gallagher, Managing Director of Cerberus Capital.
“Lessons Learned” Presentations
At the end of the quarter, each of the eight teams gave a final “Lessons Learned” presentation along with a 2-minute video to provide context about their problem. Unlike traditional demo days or Shark Tanks, which are “Here’s how smart I am, and isn’t this a great product, please give me money,” the Lessons Learned presentations tell the story of each team’s 10-week journey and hard-won learning and discovery. For all of them it’s a roller-coaster narrative describing what happens when a team discovers that everything it thought it knew on day one was wrong, and how it eventually got it right.
While all the teams used the Mission Model Canvas, Customer Development and Agile Engineering to build Minimum Viable Products, each of their journeys was unique.
This year we had the teams add two new slides at the end of their presentation: 1) tell us which AI tools they used, and 2) their estimate of progress on the Technology Readiness Level and Investment Readiness Level.
Here’s how they did it and what they delivered.
Team Omnyra – improving visibility into AI-generated bioengineering threats.
If you can’t see the team Omnyra summary video click here
If you can’t see the Omnyra presentation click here
These are “Wicked” Problems
Wicked problems are complex problems – ones with multiple moving parts, where the solution isn’t obvious and there is no definitive formula. The types of problems our Hacking For Defense students work on fall into this category. They are often ambiguous. Teams start with a problem from a sponsor, and not only is the solution unclear, but figuring out how to acquire and deploy it is also complex. Most often students find that, in hindsight, the problem was a symptom of a more interesting and complex problem – and that acquisition of solutions in the Department of Defense is unlike anything in the commercial world. And the stakeholders and institutions often have different relationships with each other – some are collaborative, some have pieces of the problem or solution, and others might have conflicting values and interests.
The figure shows the types of problems Hacking for Defense students encounter, with the most common ones shaded.
Team HydraStrike – bringing swarm technology to the maritime domain.
If you can’t see the HydraStrike summary video click here.
If you can’t see the HydraStrike presentation click here
Mission-Driven Entrepreneurship
This class is part of a bigger idea – Mission-Driven Entrepreneurship. Instead of students or faculty coming in with their own ideas, we ask them to work on societal problems, whether they’re problems for the State Department or the Department of Defense, for non-profits/NGOs, for the oceans and climate, or for anything the students are passionate about. The trick is we use the same Lean LaunchPad / I-Corps curriculum – and the same class structure, experiential and hands-on – driven this time by a mission model, not a business model. (The National Science Foundation and the Common Mission Project have helped promote the expansion of the methodology worldwide.)
Mission-driven entrepreneurship is the answer to students who say, “I want to give back. I want to make my community, country or world a better place, while being challenged to solve some of the toughest problems.”
Team HyperWatch – tracking hypersonic threats.
If you can’t see the HyperWatch video click here
If you can’t see the HyperWatch presentation click here
It Started With An Idea
Hacking for Defense has its origins in the Lean LaunchPad class I first taught at Stanford in 2011. I observed that teaching case studies and/or how to write a business plan as a capstone entrepreneurship class didn’t match the hands-on chaos of a startup. Furthermore, there was no entrepreneurship class that combined experiential learning with the Lean methodology. Our goal was to teach both theory and practice. The same year we started the class, it was adopted by the National Science Foundation to train Principal Investigators who wanted to get a federal grant for commercializing their science (an SBIR grant.) The NSF observed, “The class is the scientific method for entrepreneurship. Scientists understand hypothesis testing” and relabeled the class as the NSF I-Corps (Innovation Corps). I-Corps became the standard for science commercialization for the National Science Foundation, National Institutes of Health and the Department of Energy, to date training 3,051 teams and launching 1,300+ startups.
Team ChipForce – Securing U.S. dominance in critical minerals.
If you can’t see the ChipForce video click here
If you can’t see the ChipForce presentation click here
Note: After briefing the Department of Commerce, the ChipForce team members were offered jobs with the department.
Origins Of Hacking For Defense
In 2016, brainstorming with Pete Newell of BMNT and Joe Felter at Stanford, we observed that students in our research universities had little connection to the problems their government was trying to solve or the larger issues civil society was grappling with. As we thought about how we could get students engaged, we realized the same Lean LaunchPad/I-Corps class would provide a framework to do so. That year we launched both Hacking for Defense and Hacking for Diplomacy (with Professor Jeremy Weinstein and the State Department) at Stanford. The Department of Defense adopted and scaled Hacking for Defense across 60 universities, while Hacking for Diplomacy has been taught at Georgetown, James Madison University, Rochester Institute of Technology, the University of Connecticut and now Indiana University, sponsored by the Department of State Bureau of Diplomatic Security (see here).
Team ArgusNet – instant geospatial data for search and rescue.
If you can’t see the ArgusNet video click here
If you can’t see the ArgusNet presentation click here
Goals for Hacking for Defense
Our primary goal for the class was to teach students Lean Innovation methods while they engaged in national public service.
In the class we saw that students could learn about the nation’s threats and security challenges while working with innovators inside the Department of Defense (DoD) and Intelligence Community (IC). At the same time, the experience would introduce the sponsors – those innovators inside the DoD and IC – to a methodology that could help them understand and better respond to rapidly evolving threats. We wanted to show that if we could get teams to rapidly discover the real problems in the field using Lean methods, and only then articulate the requirements to solve them, defense acquisition programs could operate with speed and urgency and deliver timely and needed solutions.
Finally, we wanted to familiarize students with the military as a profession and help them better understand its expertise and its proper role in society. We hoped it would also show our sponsors in the Department of Defense and Intelligence Community that civilian students can make a meaningful contribution to problem understanding and rapid prototyping of solutions to real-world problems.
Team NeoLens – AI-powered troubleshooting for military mechanics.
If you can’t see the NeoLens video click here
If you can’t see the NeoLens presentation click here
Go-to-Market/Deployment Strategies
The initial goal of the teams is to ensure they understand the problem. The next step is to see if they can find mission/solution fit (the DoD equivalent of commercial product/market fit). But most importantly, the class teaches the teams about the difficult and complex path of getting a solution into the hands of a warfighter/beneficiary. Who writes the requirement? What’s an OTA? What’s the color of money? What’s a Program Manager? Who owns the current contract? …
Team Omnicomm – improving the quality, security and resiliency of communications for special operations units.
If you can’t see the Omnicomm video click here
If you can’t see the Omnicomm presentation click here
Mission-Driven in 70 Universities and Continuing to Expand in Scope and Reach
What started as a class is now a movement.
From its beginning with our Stanford class, Hacking for Defense is now offered in over 70 universities in the U.S., as well as in the UK as Hacking for the MOD and in Australia. In the U.S. the course is a program of record supported by Congress. H4D is sponsored by the Common Mission Project, the Defense Innovation Unit (DIU), and the Office of Naval Research (ONR). Corporate partners include Boeing, Northrop Grumman and Lockheed Martin.
Steve Weinstein started Hacking for Impact (Non-Profits) and Hacking for Local (Oakland) at U.C. Berkeley, and Hacking for Oceans at both Scripps and UC Santa Cruz, as well as Hacking for Climate and Sustainability at Stanford. Jennifer Carolan started Hacking for Education at Stanford.
Team Strom – simplified mineral value chain.
If you can’t see the Strom video click here
If you can’t see the Strom presentation click here
What’s Next For These Teams?
When they graduate, the Stanford students on these teams have their pick of jobs in startups, companies, and consulting firms. This year, seven of our teams applied to the Defense Innovation Unit accelerator – the DIU Defense Innovation Summer Fellows Program – Commercialization Pathway. Seven were accepted. This further reinforced our thinking that Hacking for Defense has turned into a pre-accelerator, preparing students to transition their learning from the classroom to deployment.
See the teams present in person here
It Takes A Village
While I authored this blog post, this class is a team project. The secret sauce of the success of Hacking for Defense at Stanford is the extraordinary group of dedicated volunteers supporting our students in so many critical ways.
The teaching team consisted of myself and:
Our teaching assistants this year were Joel Johnson, Rachel Wu, Evan Twarog, Faith Zehfuss, and Ethan Hellman.
31 Sponsors, Business and National Security Mentors
The teams were assisted by the originators of their problems – the sponsors.
Thanks to all!
2025-06-10 21:00:25
The videos embedded in this post are best viewed on steveblank.com
International Policy students will be spending their careers in an AI-enabled world. We wanted our students to be prepared for it. This is why we’ve adopted and integrated AI in our Stanford national security policy class – Technology, Innovation and Great Power Competition.
Here’s what we did, how the students used it, and what they (and we) learned.
Technology, Innovation and Great Power Competition is an international policy class at Stanford (taught by me, Eric Volmar and Joe Felter.) The course provides future policy and engineering leaders with an appreciation of the geopolitics of the U.S. strategic competition with great power rivals and the role critical technologies are playing in determining the outcome.
This course includes all that you would expect from a Stanford graduate-level class in the Masters in International Policy – comprehensive readings, guest lectures from current and former senior policy officials/experts, and deliverables in the form of written policy papers. What makes the class unique is that this is an experiential policy class. Students form small teams and embark on a quarter-long project that gets them out of the classroom to:
The class combines multiple teaching tools.
Rationale for AI
In using this quarter to introduce AI, we had three things going for us: 1) by fall 2024 AI tools were good and getting exponentially better, 2) Stanford had set up an AI Playground enabling students to use a variety of AI tools (ChatGPT, Claude, Perplexity, NotebookLM, Otter.ai, Mermaid, Beautiful.ai, etc.), and 3) many students were already using AI in classes, but what they were allowed to do was usually ambiguous.
Policy students have to read reams of documents weekly. Our hypothesis was that our student teams could use AI to ingest and summarize content, identify key themes and concepts across the content, provide in-depth analysis of critical content sections, and then synthesize and structure their key insights and apply them to their specific policy problem. They did all that, and much, much more.
While Joe Felter and I had arm-waved “we need to add AI to the class,” Eric Volmar was the real AI hero on the teaching team. As an AI power user, Eric was most often ahead of our students on AI skills. He threw down a challenge to the students to continually use AI creatively and told them that they would be graded on it. He pushed them hard on AI use in office hours throughout the quarter. The results below speak for themselves.
If you’re not familiar with these AI tools in practice, it’s worth watching these one-minute videos.
Team OSC
Team OSC was trying to answer the question: what is the appropriate level of financial risk for the U.S. Department of Defense to take when providing loans or loan guarantees in technology industries?
The team started using AI to do what we had expected – summarizing the stack of weekly policy documents using Claude 3.5. And like all teams, their unexpected use of AI was to create new leads for their stakeholder interviews. They found that they could ask AI to give them a list of leaders who were involved in similar programs, or who were involved in their program’s initial stages of development.
See how Team OSC summarized policy papers here:
If you can’t see the video click here
Claude was also able to create a list of leaders within the Department of Energy Title 17 credit programs, EXIM, DFC, and other federal credit programs that the team should interview. In addition, it created a list of leaders within the Congressional Budget Office and the Office of Management and Budget who would be able to provide insights. See the demo here:
If you can’t see the video click here
The team also used AI to transcribe podcasts. They noticed that key leaders of the organizations their problem came from had produced podcasts and YouTube videos. They used Otter.ai to transcribe these. That provided additional context for when they did interview them and allowed the team to ask insightful new questions.
If you can’t see the video click here
Note the power of fusing AI with interviews. The interviews ground the knowledge in the team’s lived experience.
The team came up with a use case the teaching team hadn’t thought of – using AI to critique the team’s own hypotheses. The AI not only gave them criticism but supported it with links from published scholars. See the demo here:
If you can’t see the video click here
Another use the teaching team hadn’t thought of was using Mermaid AI to create graphics for their weekly presentations. See the demo here:
If you can’t see the video click here
The surprises from this team kept coming. Their last was using Beautiful.ai to generate PowerPoint presentations. See the demo here:
If you can’t see the video click here
For all teams, using AI tools was a learning/discovery process all its own. By and large, students were unfamiliar with most of the tools on day one.
Team OSC suggested that students start using AI tools early in the quarter and experiment with tools like ChatGPT and Otter.ai. Tools with steeper learning curves, like Mermaid, should be adopted at the very start of the project to train their models.
Team OSC AI tools summary: AI tools are not perfect, so make sure to cross-check summaries, insights and transcriptions for accuracy and relevance. Be really critical of their outputs. The biggest takeaway is that AI works best when paired with human effort.
Team FAAST
The FAAST team was trying to understand how the U.S. could improve and scale the DoE FASST program in the urgent context of great power competition.
Team FAAST started using AI to do what we had expected, summarizing the stack of weekly policy documents they were assigned to read and synthesizing interviews with stakeholders.
One feature of ChatGPT this team appreciated, and one that matters for a national security class, was the temporary chat feature: data they entered would not be used to train OpenAI’s models. See the demo below.
If you can’t see the video click here
The team used AI to do a few new things we didn’t expect – generating emails to stakeholders and creating interview questions. During the quarter the team used ChatGPT, Claude, Perplexity, and NotebookLM. By the end of the 10-week class their use of AI had expanded to include simulating interviews. They gave ChatGPT specific instructions on who they wanted it to act like, and it provided personalized and custom answers. See the example here.
If you can’t see the video click here
Learning-by-doing was a key part of this experiential course. The big idea is that students learn both the method and the subject matter together. By learning it together, you learn both better.
Finally, they used AI to map stakeholders, get advice on their next policy move, and asked ChatGPT to review their weekly slides (by screenshotting the slides and putting them into ChatGPT and asking for feedback and advice.)
The FAAST team AI tool summary: ChatGPT was especially good with images or screenshots and multi-step tasks, and when they wanted to use custom instructions, as they did for the stakeholder interviews. Claude was more conversational and human in its writing, so they used it when sending emails. Perplexity was better for research because it provides citations – you can access the web and actually get directed to the source it is citing. NotebookLM was something they tried out, but it was not as successful. It was a cool tool that allowed them to summarize specific policy documents into a podcast, but the summaries were often pretty vague.
Team NSC Energy
Team NSC Energy was working on a National Security Council problem, “How can the United States generate sufficient energy to support compute/AI in the next 5 years?”
At the start of the class, the team used ChatGPT to summarize their policy papers and generate tailored interview questions, while Claude was used to synthesize research for background understanding. Because ChatGPT occasionally hallucinated information, by the end of the class they were cross-validating the summaries with Perplexity Pro.
The team also used ChatGPT and Mermaid to organize their thoughts and determine who they wanted to talk to. ChatGPT generated code to paste into the Mermaid flowchart organizer. Mermaid has its own syntax, so ChatGPT’s help meant the team didn’t have to learn the language.
See the video of how Team NSC Energy used ChatGPT and Mermaid here:
If you can’t see the video click here
Team Alpha Strategy
The Alpha Strategy team was trying to discover whether the U.S. could use AI to create a whole-of-government decision-making factory.
At the start of class, Team Alpha Strategy used ChatGPT-4o for policy document analysis and summary, as well as for stakeholder mapping. However, they discovered that going one by one through countless articles was time consuming, so the team pivoted to using NotebookLM for document search and cross-analysis. See the video of how Team Alpha Strategy used NotebookLM here:
If you can’t see the video click here
The other tools the team used were custom GPTs to build stakeholder maps and diagrams and to organize interview notes. There is a wide variety of specialized GPTs; one the team found really helpful was a scholar GPT.
See the video of how Team Alpha Strategy used custom GPTs:
If you can’t see the video click here
Like other teams, Alpha Strategy used ChatGPT to summarize their interview notes and to create flow charts to paste into their weekly presentations.
Team Congress
The Congress team was exploring the question, “If the Department of Defense were given economic instruments of power, which tools would be most effective in the current techno-economic competition with the People’s Republic of China?”
As other teams found, Team Congress first used ChatGPT to extract key themes from hundreds of pages of readings each week and from press releases, articles, and legislation. They also used it for mapping and diagramming – to identify potential relationships between stakeholders, or to creatively suggest alternate visualizations.
When Team Congress wasn’t able to reach their sponsor, a member of the Defense Modernization Caucus, in the initial two weeks of the class, they (much like Team OSC) used AI tools to pretend to be the sponsor. Once they realized its utility, they continued to do mock interviews using AI role play.
The team also used customized models of ChatGPT, but found that the number of documents they could upload was limited – and they had a lot of content. So they used retrieval-augmented generation, which takes in a user’s query, matches it against relevant sources in their knowledge base, and feeds those sources back to the model to ground its output. See the video of how Team Congress used retrieval-augmented generation here:
If you can’t see the video click here
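For readers who haven’t seen the pattern in code, here is a minimal sketch of what retrieval-augmented generation looks like, assuming a toy in-memory knowledge base and simple keyword-overlap scoring. The document names and text below are invented for illustration; a real setup would use embeddings, a vector store, and a chat model API rather than this hand-rolled retriever.

```python
# Minimal RAG sketch: retrieve the most relevant sources for a query,
# then stuff them into the prompt so the model answers from those sources.
knowledge_base = {
    "chips_act.txt": "The CHIPS and Science Act funds domestic semiconductor manufacturing...",
    "export_controls.txt": "Export controls restrict the sale of advanced chips to the PRC...",
    "defense_production.txt": "The Defense Production Act lets DoD prioritize industrial capacity...",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k documents whose text shares the most words with the query."""
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), name, text)
        for name, text in knowledge_base.items()
    ]
    scored.sort(reverse=True)
    return [(name, text) for _, name, text in scored[:k]]

def build_prompt(query: str) -> str:
    """Compose a prompt that asks the model to answer only from the retrieved sources."""
    sources = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

print(build_prompt("Which economic tools restrict advanced chips going to the PRC?"))
```

The point of the pattern is the same one the team describes: the model’s answer is grounded in the team’s own documents rather than in whatever the base model happens to remember.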
Team NavalX
The NavalX team was learning how the U.S. Navy could expand its capabilities in Intelligence, Surveillance, and Reconnaissance (ISR) operations on general maritime traffic.
Like all teams, they used ChatGPT to summarize and extract key points from long documents, organize their interview notes, and define technical terms associated with their project. In this video, note their use of prompting to guide ChatGPT to format their notes.
See the video of how Team NavalX used tailored prompts for formatting interview notes here:
If you can’t see the video click here
They also asked ChatGPT to role-play a critic of their argument and solution so that they could find the weaknesses. They also began uploading many interviews at once and asking Claude to find themes or ideas in common that they might have missed on their own.
Here’s how the NavalX team used Perplexity for research.
If you can’t see the video click here
Like other teams, the NavalX team discovered you can customize ChatGPT by telling it how you want it to act.
If you can’t see the video click here
Another surprising insight from the team is that you can use ChatGPT to tell you how to write better prompts for itself.
If you can’t see the video click here
In summary, Team NavalX used Claude to translate texts from Mandarin, and found that ChatGPT was best for writing tasks, Perplexity best for research tasks, Claude best for reading tasks, and NotebookLM best for summarization.
Lessons Learned
- Integrating AI into this class took a dedicated instructor with a mission to create a new way to teach using AI tools
- The result was that AI vastly enhanced and accelerated the learning of all teams
- It acted as a helpful collaborator
- Fusing AI with stakeholder interviews was especially powerful
- At the start of the class students were familiar with a few of these AI tools
- By the end of the class they were fluent in many more of them
- Most teams invented creative use cases
- All Stanford classes we now teach – Hacking for Defense, Lean LaunchPad, Entrepreneurship Inside Government – have AI integrated as part of the course
- Next year’s AI tools will be substantively better
2025-05-13 21:00:50
US global dominance in science was no accident, but a product of a far-seeing partnership between public and private sectors to boost innovation and economic growth.
Since 20 January, US science has been upended by severe cutbacks from the administration of US President Donald Trump. A series of dramatic reductions in grants and budgets — including the US National Institutes of Health (NIH) slashing reimbursements of indirect research costs to universities from around 50% to 15% — and deep cuts to staffing at research agencies have sent shock waves throughout the academic community.
These cutbacks put the entire US research enterprise at risk. For more than eight decades, the United States has stood unrivalled as the world’s leader in scientific discovery and technological innovation. Collectively, US universities spin off more than 1,100 science-based start-up companies each year, leading to countless products that have saved and improved millions of lives, including heart and cancer drugs, and the mRNA-based vaccines that helped to bring the world out of the COVID-19 pandemic.
These breakthroughs were made possible mostly by a robust partnership between the US government and universities. This system emerged as an expedient wartime design to fund weapons research and development (R&D) in universities. It has fuelled US innovation, national security and economic growth.
But, today, this engine is being sabotaged in the Trump administration’s attempt to purge research programmes in areas it doesn’t support, such as climate change and diversity, equity and inclusion, and to rein in campus protests. But the broader cuts are also dismantling the very infrastructure that made the United States a scientific superpower. At best, US research is at risk from friendly fire; at worst, it’s political short-sightedness.
Researchers mustn’t be complacent. They must communicate the difference between eliminating ideologically objectionable programmes and undermining the entire research ecosystem. Here’s why the US research system is uniquely valuable, and what stands to be lost.
The backbone of US innovation is a close partnership between government, universities and industry. It is a well-calibrated ecosystem: federally funded research at universities drives scientific advancement, which in turn spins off technology, patents and companies. This system emerged in the wake of the Second World War, rooted in the vision of US presidential science adviser Vannevar Bush and a far-sighted Congress, which recognized that US economic and military strength hinge on investment in science (see ‘Two systems’).
It need not have been this way. Before the Second World War, the United Kingdom led the world in many scientific domains, but its focus on centralized government laboratories rather than university partnerships stifled post-war commercialization. By contrast, the United States channelled wartime research funds into universities, enabling breakthroughs that were scaled up by private industry to drive the nation’s post-war economic boom. This partnership became the foundation of Silicon Valley and the aerospace, nuclear and biotechnology industries.
The US government remains the largest source of academic R&D funding globally — with a budget of US$201.9 billion for federal R&D in the financial year 2025. Out of this pot, more than two dozen research agencies direct grants to US universities, totalling $59.7 billion in 2023, with the NIH and the US National Science Foundation (NSF) receiving the most.
The agencies do this for a reason: they want professors at universities to do research for them. In exchange, the agencies get basic research from universities that moves science forward, or applied research that creates prototypes of potential products. By partnering with universities, the agencies get more value for money and quicker innovation than if they did all the research themselves.
This is because universities can leverage their investments from the government with other funds that they draw in. For example, in 2023, US universities received $27.7 billion from charitable donations, $6.2 billion in industrial collaborations, $6.7 billion from non-profit organizations, $5.4 billion from state and local government and $3.1 billion from other sources — boosting the $59.7 billion up to $108.8 billion (see ‘US research ecosystem’). This external money goes mostly to creating research labs and buildings that, as any campus visitor has seen, are often named after their donors.
Source: US Natl Center for Science and Engineering Statistics; US Congress; US Natl Venture Capital Assoc; AUTM; Small Business Administration
Thus, federal funding for science research in the United States is decentralized. It supports mostly curiosity-driven basic science, but also prizes innovation and commercial applicability. Academic freedom is valued and competition for grants is managed through peer review. Other nations, including China and those in Europe, tend to have more-centralized and bureaucratic approaches.
But what makes the US ecosystem so powerful is what then happens to the university research: it’s the engine for creating start-ups and jobs. In 2023, US universities licensed 3,000 patents, 3,200 copyrights and 1,600 other licences to technology start-ups and existing companies. Such firms spin off more than 1,100 science-based start-ups each year, which lead to countless products.
Since the 1980 Bayh–Dole Act, US universities have been able to retain ownership of inventions that were developed using federally funded research (see go.nature.com/4cesprf). Before this law, any patents resulting from government-funded research were owned by the government, so they often went unused.
Closing the loop, these technology start-ups also get a yearly $4-billion injection in seed-funding grants from the same government research agencies. Venture capital adds a whopping $171 billion to scale those investments.
It all adds up to a virtuous circle of discovery and innovation.
A crucial but under-appreciated component of this US research ecosystem is the indirect-cost reimbursement system, which allows universities to maintain the facilities and administrative support necessary for cutting-edge research. Critics often misunderstand the function of these funds, assuming that universities can spend this money on other areas, such as diversity, equity and inclusion programmes. In reality, they fund essential infrastructure: laboratory space, compliance with safety regulations, data storage and administrative support that allows principal investigators to focus on science rather than paperwork. Without this support, universities cannot sustain world-class research.
Reimbursing universities for indirect costs began during the Second World War, and broke ground, just as the weapons development did. Unlike in a typical fixed-price contract, the government did not set requirements for university researchers to meet or specifications for them to design their research to. It asked them to do research and, if the research looked like it might solve a military problem, to build a prototype they could test. In return, the government paid the researchers for their direct and indirect research costs.
Vannevar Bush (right) led the US Office of Scientific Research and Development during the Second World War. Credit: Bettmann/Getty
At first, the government reimbursed universities for indirect costs at a flat rate of 25% of direct costs. Unlike businesses, universities had no profit margin, so indirect-cost recovery was their only way to pay for and maintain their research infrastructure. By the end of the war, some universities had agreed on a 50% rate. The rate is applied to direct costs, so that a principal investigator will be able to spend two-thirds of a grant on direct research costs and the rest will go to the university for indirect costs. (A common misconception is that indirect-cost rates are a percentage of the total grant, for example a 50% rate meaning that half of the award goes to overheads.)
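A worked example makes the distinction concrete (the grant size here is hypothetical; the 50% rate mirrors the one discussed above):

$$
\underbrace{\$100{,}000}_{\text{direct costs}} \;+\; \underbrace{0.5 \times \$100{,}000}_{\text{indirect costs}} \;=\; \$150{,}000 \ \text{total award},
\qquad
\frac{\$100{,}000}{\$150{,}000} = \tfrac{2}{3} \approx 67\%.
$$

So with a 50% negotiated rate, roughly two-thirds of the award pays for the research itself and one-third for the infrastructure behind it – not half.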
After the Second World War, the US Office of Naval Research (ONR) began negotiating indirect-cost rates with universities on the basis of actual institutional expenses. Universities had to justify their overhead costs (administration, facilities, utilities) to receive full reimbursement. The ONR formalized financial auditing processes to ensure that institutions reported indirect costs accurately. This led to the practice of negotiating indirect-cost rates, which is still used today.
Since then, the reimbursement process has been tweaked to prevent gaming the system, but has remained essentially the same. Universities negotiate their indirect-cost rates with either the US Department of Health and Human Services (HHS) or the ONR. Most research-intensive universities receive rates of 50–60% for on-campus research. Private foundations often have a lower rate (10–20%), but tend to have wider criteria for what can be considered a direct cost.
In 2017, the first Trump administration attempted to impose a 10% cap on indirect costs for NIH research. Some in the administration viewed such costs as a form of bureaucratic bloat and argued that research universities were profiting from inflated overhead rates.
Congress rejected this and later added language in the annual funding bill that essentially froze most rates at their 2017 levels. This provision is embodied in section 224 of the Consolidated Appropriations Act of 2024, which has been extended twice and is still in effect.
In February, however, the NIH slashed its indirect reimbursement rate to an arbitrary 15% (see go.nature.com/4cgsndz). That policy is currently being challenged in court.
If the policy is ultimately allowed to proceed, the consequences will be immediate. Billions of dollars of support for research universities will be gone. In anticipation, some research universities are already scaling back their budgets, halting lab expansions and reducing graduate-student funding. This will mean fewer start-ups being founded, with effects on products, services, jobs, taxes and exports.
The ripple effects of Trump’s cuts to US academia are spreading, and one area in which there will be immediate ramifications is the loss of scientific talent. The United States has historically been the top destination for international researchers, thanks to its well-funded universities, innovation-driven economy and opportunities for commercialization.
US-trained scientists — many of whom have historically stayed in the country to launch start-ups or contribute to corporate R&D — are being actively recruited by foreign institutions, particularly in China, which has ramped up its science investments. China has expanded its Thousand Talents Program, which offers substantial financial incentives to researchers willing to relocate. France and other European nations are beginning to design packages to attract top US researchers.
Erosion of the US scientific workforce will have long-term consequences for its ability to innovate. If the country dismantles its research infrastructure, future transformative breakthroughs — whether in quantum computing, cancer treatment, autonomy or artificial intelligence — will happen elsewhere. The United States runs the risk of becoming dependent on foreign scientific leadership for its own economic and national-security needs.
History suggests that, once a nation loses its research leadership, regaining it is difficult. The United Kingdom never reclaimed its pre-war dominance in technological innovation. If current trends continue, the same fate might await the United States.
University research is not merely an academic concern — it is an economic and strategic imperative. Policymakers must recognize that federal R&D investments are not costs but catalysts for growth, job creation and national security.
Policymakers need to reaffirm the United States’ commitment to scientific leadership. If the country fails to act now, the consequences will be felt for generations. The question is no longer whether the United States can afford to invest in research. It is whether it can afford not to.
2025-04-15 21:00:52
Prior to WWII the U.S. was a distant second in science and engineering. By the time the war was over, U.S. science and engineering had blown past the British and went on to lead the world for 85 years.
It happened because two very different people were the science advisors to their nation’s leaders. Each had radically different views on how to use their country’s resources to build advanced weapon systems. Post war, it meant Britain’s early lead was ephemeral while the U.S. built the foundation for a science and technology innovation ecosystem that led the world – until now.
The British – Military Weapons Labs
When Winston Churchill became the British prime minister in 1940, he had at his side his science advisor, Professor Frederick Lindemann, his friend for 20 years. Lindemann headed up the physics department at Oxford and was the director of the Oxford Clarendon Laboratory. Already at war with Germany, Britain focused its wartime priorities on defense and intelligence technology projects – weapons that used electronics, radar, physics, etc.: a radar-based air defense network called Chain Home, airborne radar on night fighters, and plans for a nuclear weapons program – the MAUD Committee, which started the British program code-named Tube Alloys. And its codebreaking organization at Bletchley Park was starting to read secret German messages – the Enigma – using some of the earliest computing machinery ever built.
As early as the mid-1930s, the British, fearing Nazi Germany, developed prototypes of these weapons using their existing military and government research labs. The Telecommunications Research Establishment built early-warning radar, critical to Britain’s survival during the Battle of Britain, and electronic warfare systems to protect British bombers over Germany. The Admiralty Research Lab built sonar and anti-submarine warfare systems. The Royal Aircraft Establishment was developing jet fighters. The labs then contracted with British companies to manufacture the weapons in volume. British government labs viewed their universities as a source of talent, but the universities had no role in weapons development.
Under Churchill, Professor Lindemann influenced which projects received funding and which were sidelined. Lindemann’s WWI experience as a researcher and test pilot on the staff of the Royal Aircraft Factory at Farnborough gave him confidence in the competence of British military research and development labs. His top-down, centralized approach with weapons development primarily in government research labs shaped British innovation during WW II – and led to its demise post-war.
The Americans – University Weapons Labs
Unlike Britain, the U.S. lacked a science advisor. It wasn’t until June 1940 that Vannevar Bush, ex-MIT dean of engineering and president of the Carnegie Institution, told President Franklin Roosevelt that World War II would be the first war won or lost on the basis of advanced technology – electronics, radar, physics, etc.
Unlike Lindemann, Bush had a 20-year-long contentious history with the U.S. Navy and a dim view of government-led R&D. Bush contended that the government research labs were slow and second rate. He convinced the President that while the Army and Navy ought to be in charge of making conventional weapons – planes, ships, tanks, etc. — scientists from academia could develop better advanced technology weapons and deliver them faster than Army and Navy research labs. And he argued the only way the scientists could be productive was if they worked in a university setting in civilian-run weapons labs run by university professors.
To the surprise of the Army and Navy Service chiefs, Roosevelt agreed to let Bush build exactly that organization to coordinate and fund all advanced weapons research.
(While Bush had no prior relationship with the President, Roosevelt had been the Assistant Secretary of the Navy during World War I and like Bush had seen first-hand its dysfunction. Over the next four years they worked well together. Unlike Churchill, Roosevelt had little interest in science and accepted Bush’s opinions on the direction of U.S. technology programs, giving Bush sweeping authority.)
In 1941, Bush upped the game by convincing the President that, in addition to research, the development, acquisition and deployment of these weapons also ought to be done by professors in universities. There they would be tasked to develop military weapons systems and solve military problems to defeat Germany and Japan. (The weapons were then manufactured in volume by U.S. corporations – Western Electric, GE, RCA, DuPont, Monsanto, Kodak, Zenith, Westinghouse, Remington Rand and Sylvania.) To do this Bush created the Office of Scientific Research and Development (OSR&D).
OSR&D headquarters divided the wartime work into 19 “divisions,” 5 “committees,” and 2 “panels,” each solving a unique part of the military war effort. There were no formal requirements.
Staff at OSR&D worked with their military liaisons to understand what the most important military problems were, and then each OSR&D division came up with solutions. These efforts spanned an enormous range of tasks – the development of advanced electronics, radar, rockets, sonar, new weapons like the proximity fuse, napalm and the bazooka, new drugs such as penicillin and cures for malaria, chemical warfare, and nuclear weapons.
Each division was run by a professor hand-picked by Bush. And they were located in universities – MIT, Harvard, Johns Hopkins, Caltech, Columbia and the University of Chicago all ran major weapons systems programs. Nearly 10,000 scientists and engineers, professors and their grad students received draft deferments to work in these university labs.
(Prior to World War 2, science in U.S. universities was primarily funded by companies interested in specific research projects. But funding for basic research came from two non-profits: The Rockefeller Foundation and the Carnegie Institution. In his role as President of the Carnegie Institution Bush got to know (and fund!) every top university scientist in the U.S. As head of Physics at Oxford, Lindemann viewed other academics as competitors.)
Americans – Unlimited Dollars
What changed U.S. universities, and the world forever, was government money. Lots of it. Prior to WWII most advanced technology research in the U.S. was done in corporate innovation labs (GE, AT&T, Dupont, RCA, Westinghouse, NCR, Monsanto, Kodak, IBM, et al.) Universities had no government funding (except for agriculture) for research. Academic research had been funded by non-profits, mostly the Rockefeller and Carnegie foundations and industry. Now, for the first time, U.S. universities were getting more money than they had ever seen. Between 1941 and 1945, OSR&D gave $9 billion (in 2025 dollars) to the top U.S. research universities. This made universities full partners in wartime research, not just talent pools for government projects as was the case in Britain.
The British – Wartime Constraints
Wartime Britain had very different constraints. First, England was under daily attack. It was being bombed from the air and blockaded by submarines, so it was logical that it focused on a smaller set of high-priority projects to counter these threats. Second, the country was teetering on bankruptcy. It couldn’t afford the broad and deep investments that the U.S. made. (This is illustrated by its abandonment of its nuclear weapons program when it realized how much it would cost to turn the research into industrial-scale engineering.) This meant that many other areas of innovation – such as early computing and nuclear research – were underfunded compared with their American counterparts.
Post War – Britain
Churchill was voted out of office in 1945. With him went Professor Lindemann and the coordination of British science and engineering. Britain would be without a science advisor until Churchill returned for a second term (1951–55) and brought Lindemann back with him.
The end of the war led to extreme downsizing of the British military including severe cuts to all the government labs that had developed Radar, electronics, computing, etc.
Post-war Britain was financially exhausted, and austerity limited its ability to invest in large-scale innovation. There were no post-war plans for government follow-on investments. The differing economic realities of the U.S. and Britain also played a key role in shaping their innovation systems. The United States had an enormous industrial base, abundant capital, and a large domestic market, which enabled large-scale investment in research and development. In Britain, a socialist government came to power. Churchill’s successor, Labour’s Clement Attlee, dissolved the British Empire and nationalized banking, power and light, transport, and iron and steel, all of which reduced competition and slowed technological progress.
While British research institutions like Cambridge and Oxford remained leaders in theoretical science, they struggled to scale and commercialize their breakthroughs. For instance, Alan Turing’s and Tommy Flowers’ pioneering work on computing at Bletchley Park didn’t turn into a thriving British computing industry – unlike in the U.S., where companies like ERA, Univac, NCR and IBM built on their wartime work.
Without the same level of government support for dual-use technologies or commercialization, and with private capital absent for new businesses, Britain’s post-war innovation ecosystem never took off.
Post War – The U.S.
Meanwhile, in the U.S., universities and companies realized that the wartime government funding for research had been an amazing accelerator for science, engineering, and medicine. Everyone, including Congress, agreed that the U.S. government should continue to play a large role in funding it. In 1945, Vannevar Bush published a report, “Science, The Endless Frontier,” advocating for government funding of basic research in universities, colleges, and research institutes. Congress debated how best to organize federal support of science.
By the end of the war, OSR&D funding had taken technologies that had been just research papers, or were considered impossible to build at scale, and made them commercially viable – computers, rockets, radar, Teflon, synthetic fibers, nuclear power, etc. Innovation clusters formed around universities like MIT and Harvard, which had received large amounts of OSR&D funding (MIT’s Radiation Lab or “Rad Lab” employed 3,500 civilians during WWII and developed and built 100 radar systems deployed in theater), or around professors who ran one of the OSR&D divisions – like Fred Terman at Stanford.
When the war ended, the Atomic Energy Commission spun out of the Manhattan Project in 1946 and the military services took back advanced weapons development. In 1950 Congress set up the National Science Foundation to fund all basic science in the U.S. (except for Life Sciences, a role the new National Institutes of Health would assume.) Eight years later DARPA and NASA would also form as federal research agencies.
Ironically, Vannevar Bush’s influence would decline even faster than Professor Lindemann’s. When President Roosevelt died in April 1945 and Secretary of War Stimson retired in September 1945, all the knives came out from the military leadership Bush had bypassed in the war. His arguments on how to reorganize OSR&D made more enemies in Congress. By 1948 Bush had retired from government service. He would never again play a role in the U.S. government.
Divergent Legacies
Britain’s focused, centralized model using government research labs was created in a struggle for short-term survival. They achieved brilliant breakthroughs but lacked the scale, integration and capital needed to dominate in the post-war world.
The U.S. built a decentralized, collaborative ecosystem, one that tightly integrated massive government funding of universities for research and prototypes while private industry built the solutions in volume.
A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying their salaries, it also gave universities money for the researchers’ facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S., causing other countries to complain of a “brain drain.”
Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies. Collectively, they spin out over 1,100 science-based startups each year, which lead to countless products and tens of thousands of new jobs. This university/government ecosystem became the blueprint for modern innovation ecosystems for other countries.
Summary
By the end of the war, the U.S. and British innovation systems had produced radically different outcomes. Both systems were shaped by the experience and personality of each nation’s science advisor.
2024-10-22 21:00:16
In March 2022 I wrote a description of the Quantum Technology Ecosystem. I thought this would be a good time to check in on the progress of building a quantum computer and explain more of the basics.
Just as a reminder, quantum technologies are used in three very different and distinct markets: Quantum Computing, Quantum Communications, and Quantum Sensing and Metrology. If you don’t know the difference between a qubit and a cue ball (I didn’t), read the tutorial here.
Summary –
We talk a lot about qubits in this post. As a reminder, a qubit is short for a quantum bit. It is a quantum computing element that leverages the principle of superposition (that quantum particles can exist in many possible states at the same time) to encode information via one of four methods: spin, trapped atoms and ions, photons, or superconducting circuits.
Incremental Technical Progress
As of 2024 there are seven different approaches being explored to build physical qubits for a quantum computer. The most mature currently are Superconducting, Photonics, Cold Atoms, and Trapped Ions. Other approaches include Quantum Dots, Nitrogen-Vacancy Centers in Diamond, and Topological qubits. All these approaches have incrementally increased the number of physical qubits.
These multiple approaches are being tried because there is no consensus on the best path to building logical qubits. Each company believes that its technology approach will provide a path to scale to a working quantum computer.
Every company currently hypes the number of physical qubits it has working. By itself this number is a meaningless indicator of progress toward a working quantum computer. What matters is the number of logical qubits.
Reminder – Why Build a Quantum Computer?
One of the key misunderstandings about quantum computers is that they are faster than current classical computers on all applications. That’s wrong. They are not. They are faster on a small set of specialized algorithms, and these special algorithms are what make quantum computers potentially valuable. For example, running Grover’s algorithm on a quantum computer can search unstructured data faster than a classical computer. Further, quantum computers are theoretically very good at minimization / optimization / simulation – think optimizing complex supply chains, energy states to form complex molecules, financial models (looking at you, hedge funds), etc.
However, while all of these algorithms might have commercial potential one day, no one has yet come up with a use for them that would radically transform any business or military application. Except for one – and that one keeps people awake at night. It’s Shor’s algorithm for integer factorization – an algorithm that can break much of today’s public-key cryptography.
The security of today’s public-key cryptography systems rests on the assumption that breaking keys of a thousand or more bits is practically impossible. It requires factoring large numbers into their prime factors (e.g., RSA) or solving discrete logarithms over elliptic curves (e.g., ECDSA, ECDH) or finite fields (DSA) – problems that can’t be solved with any type of classical computer, regardless of how large. Shor’s factorization algorithm can crack these codes if run on a quantum computer. This is why NIST has been encouraging the move to Post-Quantum / Quantum-Resistant codes.
How many physical qubits do you need for one logical qubit?
Thousands of logical qubits are needed to create a quantum computer that can run these specialized applications. Each logical qubit is constructed out of many physical qubits. The question is, how many physical qubits are needed? Herein lies the problem.
Unlike traditional transistors in a microprocessor, which once manufactured always work, qubits are unstable and fragile. They can pop out of a quantum state due to noise, decoherence (when a qubit interacts with the environment), crosstalk (when a qubit interacts with a physically adjacent qubit), and imperfections in the materials making up the quantum gates. When that happens, errors occur in quantum calculations. So to correct for those errors you need many physical qubits to make one logical qubit.
So how do you figure out how many physical qubits you need?
You start with the algorithm you intend to run.
Different quantum algorithms require different numbers of qubits. Some algorithms (e.g., Shor’s prime factoring algorithm) may need >5,000 logical qubits (the number may turn out to be smaller as researchers think of how to use fewer logical qubits to implement the algorithm.)
Other algorithms (e.g., Grover’s algorithm) require fewer logical qubits for trivial demos but need thousands of logical qubits to see an advantage over linear search running on a classical computer. (See here, here and here for other quantum algorithms.)
Measure the physical qubit error rate.
Therefore, determining the number of physical qubits you need to make a single logical qubit starts with measuring the physical qubit error rate (gate error rates, coherence times, etc.). Different technical approaches (superconducting, photonics, cold atoms, etc.) have different error rates and causes of errors unique to the underlying technology.
Current state-of-the-art physical qubits have error rates typically in the range of 1% to 0.1%. This means that, on average, one out of every 100 to one out of every 1,000 quantum gate operations will result in an error. System performance is limited by the worst 10% of the qubits.
Choose a quantum error correction code
To recover from the error-prone physical qubits, quantum error correction encodes the quantum information into a larger set of physical qubits that is resilient to errors. The surface code is the most commonly proposed error correction code. A practical surface code uses hundreds of physical qubits to create a logical qubit. Quantum error correction codes get more efficient the lower the error rates of the physical qubits. When errors rise above a certain threshold, error correction fails, and the logical qubit becomes as error-prone as the physical qubits.
The Math
To factor a 2048-bit number using Shor’s algorithm with a 10⁻² error rate (1% per physical qubit):
If you could reduce the error rate by a factor of 10 – to 10⁻³ (0.1% per physical qubit):
In reality, another 10% or so of ancillary physical qubits are needed for overhead. And no one yet knows the error rate in wiring multiple logical qubits together via optical links or other technologies.
(One caveat to the math above: it assumes that every technical approach (Superconducting, Photonics, Cold Atoms, Trapped Ions, et al.) will require hundreds of physical qubits of error correction to make each logical qubit. There is always a chance a breakthrough could create physical qubits that are inherently stable, so that the number of error correction qubits needed drops substantially. If that happens, the math changes dramatically for the better and quantum computing becomes much closer.)
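To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. It uses the commonly cited surface-code heuristic that the logical error rate scales roughly as 0.1 × (p/p_th)^((d+1)/2), with a threshold p_th of about 1%, and that a distance-d logical qubit needs about 2d² physical qubits. The 10⁻¹² logical error budget and the 5,000-logical-qubit count are assumptions chosen for illustration, not a resource estimate for any particular machine.

```python
# Rough, illustrative surface-code estimate only -- not a real resource count.
import math

def surface_code_estimate(p_phys, p_logical_target=1e-12,
                          p_threshold=1e-2, prefactor=0.1):
    """Return (code distance d, physical qubits per logical qubit)."""
    ratio = p_phys / p_threshold
    if ratio >= 1:
        raise ValueError("at or above the ~1% threshold - error correction fails")
    # Solve prefactor * ratio**((d+1)/2) <= p_logical_target for the distance d.
    d = math.ceil(2 * math.log(p_logical_target / prefactor) / math.log(ratio) - 1)
    if d % 2 == 0:
        d += 1                   # code distance is conventionally odd
    return d, 2 * d * d          # ~d^2 data qubits + ~d^2 measurement qubits

# Assumed: ~5,000 logical qubits (Shor-scale) and a 1e-12 logical error budget.
for p in (1e-2, 1e-3):
    try:
        d, per_logical = surface_code_estimate(p)
        print(f"p={p:.1%}: distance {d}, ~{per_logical} physical per logical, "
              f"~{5_000 * per_logical:,} physical qubits total")
    except ValueError as err:
        print(f"p={p:.1%}: {err}")
```

Under these assumptions, a 0.1% physical error rate lands in the “hundreds of physical qubits per logical qubit” range mentioned above and millions of physical qubits overall, while a 1% error rate simply fails – which is why driving down error rates matters so much.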
Today, the best anyone has done is to create 1,000 physical qubits.
We have a ways to go.
Advances in materials science will drive down error rates
As the math above shows, regardless of the technology used to create physical qubits (superconducting, photonics, cold atoms, trapped ions, et al.), reducing qubit errors has a dramatic effect on how quickly a quantum computer can be built. The lower the physical qubit error rate, the fewer physical qubits are needed in each logical qubit.
The key to this is materials engineering. To make a system of hundreds of thousands of qubits work, the qubits need to be uniform and reproducible. For example, decoherence errors are caused by defects in the materials used to make the qubits. For superconducting qubits that requires uniform film thickness, controlled grain size, and controlled roughness. Other technologies require low-loss, uniform materials. All of the approaches to building a quantum computer require engineering exotic materials at the atomic level – resonators using tantalum on silicon, Josephson junctions built out of magnesium diboride, transition-edge sensors, Superconducting Nanowire Single Photon Detectors, etc.
Materials engineering is also critical in packaging these qubits (whether it’s superconducting or conventional packaging) and in interconnecting hundreds of thousands of qubits, potentially with optical links. Today, most qubits are made on legacy 200mm or older technology in hand-crafted processes. To produce qubits at scale, modern 300mm semiconductor technology and equipment will be required to create better-defined structures, clean interfaces, and well-defined materials. There is an opportunity to engineer and build higher-fidelity qubits with the most advanced semiconductor fabrication systems so that the path from R&D to high-volume manufacturing is fast and seamless.
There are likely only a handful of companies on the planet that can fabricate these qubits at scale.
Regional research consortiums
Two U.S. states, Illinois and Colorado, are vying to be the center of advanced quantum research.
Illinois Quantum and Microelectronics Park (IQMP)
Illinois has announced the Illinois Quantum and Microelectronics Park initiative, in collaboration with DARPA’s Quantum Proving Ground (QPG) program, to establish a national hub for quantum technologies. The state approved $500M for a “Quantum Campus” and has received $140M+ from DARPA, with Illinois matching those dollars.
Elevate Quantum
Elevate Quantum is the quantum tech hub for Colorado, New Mexico, and Wyoming. The consortium was awarded $127M from federal and state governments – $40.5M from the Economic Development Administration (part of the Department of Commerce), $77M from the State of Colorado, and $10M from the State of New Mexico.
(The U.S. has a National Quantum Initiative (NQI) to coordinate quantum activities across the entire government – see here.)
Venture capital investment, FOMO, and financial engineering
Venture capital has poured billions of dollars into quantum computing, quantum sensors, quantum networking and quantum tools companies.
However, regardless of the amount of money raised, corporate hype, PR spin, press releases, and public offerings, no company is remotely close to having a quantum computer that can run any commercial application substantively faster than a classical computer.
So why all the investment in this area?
Often, companies in a “hot space” (like quantum) can go public and sell shares to retail investors who have almost no knowledge of the space other than the buzzword. If the stock price can stay high for six months, the early investors can sell their shares and make a pile of money regardless of what happens to the company.
The track record so far of quantum companies who have gone public is pretty dismal. Two of them are on the verge of being delisted.
Here are some simple questions to ask companies building quantum computers:
Lessons Learned
- Lots of companies
- Lots of investment
- Great engineering occurring
- Improvements in quantum algorithms may add as much (or more) to quantum computing performance as hardware improvements
- The winners will be the ones who master materials engineering and interconnects
- Jury is still out on all bets
Update: the kind folks at Applied Materials pointed me to the original 2012 Surface Codes paper. They pointed out that the math should look more like:
Still pretty far away from the 1,000 qubits we currently can achieve.
For those so inclined…
The logical qubit error rate P_L is P_L = 0.03 (p/p_th)^((d+1)/2), where p_th ~ 0.6% is the error rate threshold for surface codes, p is the physical qubit error rate, and d is the code distance, which is related to the number of physical qubits by N = (2d - 1)^2.
See the plot below of P_L versus N for different physical qubit error rates for reference.
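For readers who want to recreate something like that plot, here is a short sketch using the formula above; it assumes numpy and matplotlib are available, and the error rates plotted are example values.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot P_L versus N using the formula above:
#   P_L = 0.03 * (p / p_th) ** ((d + 1) / 2),  with N = (2d - 1)**2
p_th = 0.006                      # surface-code threshold (~0.6%)
distances = np.arange(3, 61, 2)   # odd code distances
N = (2 * distances - 1) ** 2      # physical qubits per logical qubit

for p in (1e-3, 5e-4, 1e-4):      # example physical qubit error rates
    P_L = 0.03 * (p / p_th) ** ((distances + 1) / 2)
    plt.loglog(N, P_L, marker="o", label=f"p = {p:g}")

plt.xlabel("Physical qubits per logical qubit (N)")
plt.ylabel("Logical error rate (P_L)")
plt.legend()
plt.grid(True, which="both", alpha=0.3)
plt.show()
```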
2024-10-08 22:25:00
This article first appeared in First Round Review.
“Only the Paranoid Survive”
Andy Grove – Intel CEO 1987-1998
I just had an urgent “can we meet today?” coffee with Rohan, an ex-student. His three-year-old startup had been slapped with a notice of patent infringement from a Fortune 500 company. “My lawyers said defending this suit could cost $500,000 just for discovery, and potentially millions of dollars if it goes to trial. Do you have any ideas?”
The same day, I got a text from Jared, a friend who’s running a disruptive innovation organization inside the Department of Defense. He just learned that their incumbent R&D organization has convinced leadership they don’t need any outside help from startups or scaleups.
Sigh….
Rohan and Jared have learned three valuable lessons:
It’s a reminder that innovators need to be better prepared for all the possible ways incumbents sabotage innovation.
Innovators often assume that their organizations and industry will welcome new ideas, operating concepts and new companies. Unfortunately, the world does not unfold like business school textbooks.
Whether you’re a new entrant taking on an established competitor, or you’re trying to stay scrappy while operating within a bigger company, here’s what you need to know about how incumbents will try to stand in your way – and what you can do about it.
Entrepreneurs versus Saboteurs
Startups and scaleups outside of companies or government agencies want to take share of an existing market, or displace existing vendors. Or if they have a disruptive technology or business model, they want to create a new capability or operating concept – even creating a new market.
As my student Rohan just painfully learned, the incumbent suppliers and existing contractors want to kill these new entrants. They have no intention of giving up revenue, profits and jobs. (In the government, additional saboteurs can include Congressional staffers, Congressmen and lobbyists, as these new entrants threaten campaign contributions and jobs in local districts.)
Intrapreneurs versus Saboteurs
Innovators inside of companies or government agencies want to make their existing organization better, faster, more effective, more profitable, more responsive to competitive threats or to adversaries. They might be creating or advocating for a better version of something that exists. Or perhaps they are trying to create something disruptive that never existed before.
Inside these commercial or government organizations there are people who want to kill innovation (as my friend Jared just discovered). These can be managers of existing programs, or heads of engineering or R&D organizations who are feeling threatened by potential loss of budget and authority. Most often, budgets and headcount are zero-sum games so new initiatives threaten the status quo.
Leaders of existing organizations often focus on the success of their department or program rather than the overall good of the organization. And at times there are perverse incentives as some individuals are aligned with the interests of incumbent vendors rather than the overall good of the company or government agency.
How Do Incumbents Kill Innovation?
Rohan and Jared were each dealing with one form of innovation sabotage. Incumbents use a variety of ways to sabotage and kill innovative ideas inside organizations and new companies outside them. Most of the time innovators have no idea what just hit them. And those that do – like Rohan and Jared – have no game plan in place to respond.
Here are the most common methods of sabotage that I’ve seen, followed by a few suggestions on how to prepare and defend against them.
Founders and Innovators should expect that existing organizations and companies will defend their turf – ferociously.
There is no magic bullet I could have offered Rohan or Jared to defend against every possible move an incumbent might make. However, if they had realized that incumbents wouldn’t welcome them, they (and you) might have considered the suggestions below on how to prepare for innovation saboteurs.
In both government and commercial markets:
Jared is still trying to get senior leadership to understand that the clock is ticking, and that internal R&D efforts and current budget allocations won’t be sufficient or timely. He’s building a larger coalition for change, but the inertia of the status quo is overwhelming.
Rohan’s company was lucky. After months of scrambling (and tens of thousands of dollars), they ended up buying a patent portfolio from a defunct startup and were able to use it to convince the Fortune 500 company to drop their lawsuit.
I hope they both succeed.
What have you found to be effective in taking on incumbents?