Why Science Needs a Marketplace
How we can make scientific progress less accidental.
The Hidden Architecture of Discovery
In 1928, Alexander Fleming discovered penicillin while studying how antiseptics affected bacterial growth. He wasn’t looking for antibiotics, but when a stray mold contaminated one of his petri dishes, he noticed it had killed the surrounding bacteria, and recognized what it meant.
If Fleming were working today, that discovery might never have survived. His finding wouldn’t fit a grant’s stated aims, “accidental antibacterial molds” would fail peer review, and the experiment might be written off as a failed milestone.
This tension between what science funds and how science actually works is everywhere. The laws of thermodynamics emerged not from theoretical physics, but from engineers trying to make steam engines more efficient. The World Wide Web began as a simple data-sharing tool at CERN. CRISPR started as research on bacterial immunity, decades before it became a tool for rewriting genes.
In each case, there was an asymmetry between intent and outcome, discovery and recognition, capability and impact. The people who made the discovery weren’t always those who could recognize or apply it. Yet our funding system is designed for symmetry: it rewards researchers for doing what they said they would do.
Serendipity is treated as an accident to be celebrated after the fact, not a feature to be designed for. We revere it in hindsight, while building systems that suppress it in real time.
Serendipity Needs a System
Serendipity is not the work of a lone genius; it is an emergent property of a connected system. Most breakthroughs depend on a chain of people, those who make a discovery and those who later recognize its broader significance. For that chain to function, the system itself has to be well-networked, with ideas able to move freely across disciplines, institutions, and time. Science today is not built for that.
Funders with narrow missions or metrics often see discoveries that fall outside their focus as failures. A project can produce a world-changing insight and still be penalized for missing its stated goal. This bias toward alignment may serve accountability, but it works against progress.
Serendipity also suffers from latency. The time between discovery and recognition can span decades, and most potential connections are never made. AI systems that can map relationships across papers and patents may shorten that gap, but not if the scientific enterprise itself remains siloed. As long as cross-disciplinary work is treated as exceptional, aligning attention, capital, and expertise will remain slow and accidental.
To make serendipity a feature of the system rather than its byproduct, we need to design for connection, not coincidence.
Lab Survival as Local Optimization
For many investigators, keeping a lab alive requires a continuous stream of grants to support students, staff, and facilities. Faced with the choice between proposing a safe, fundable project or a bold, uncertain one, most choose survival.
This is local optimization at the expense of systemic progress. A lab that must plan two years ahead can’t afford to chase a surprising lead with no guarantee of publishable output, especially in a culture that discourages negative results. Review panels prefer linear narratives: hypotheses that yield results, results that yield publications. Deviation from that pattern can become a reputational liability.
Over time, the logic compounds. Scientists internalize caution, funders select for predictability, and institutions reward throughput over imagination. The system optimizes for grant efficiency, not scientific progress.
Part of the problem is rooted in the time it takes to write grants, the uncertainty about who might be interested in funding a given idea, and the long wait for a decision. If we gave investigators the ability to post a range of ideas, both ambitious and safer, reviewed by relevant experts and visible to multiple funders in real time, we could collapse both the timeline and the uncertainty.
Is Research Funding a Commodity or a Brand?
Is a dollar from the National Science Foundation the same as a dollar from a retired engineer with a personal fascination for frontier science?
On paper, yes. Both buy time, talent, and ideas. But in practice, they carry very different weights. An NSF grant comes wrapped in institutional credibility, peer validation, and a network of experts that signals quality. A dollar from an individual donor, however generous, enters the system without that signaling power. Scientists know this instinctively: not all money is created equal.
That’s because research funding isn’t just capital. It also carries non-monetary value: reputation, access, mentorship, narrative weight. Funders aren’t only financing experiments; they’re curating stories of impact for their constituencies. An NSF award says “this work serves national purpose.” A philanthropic grant says “this aligns with conviction.” Each funder defines its own brand of legitimacy.
But this brand logic also fragments the system. Each institution optimizes for its own measure of success, creating silos of prestige rather than a coherent signal of collective progress.
Treating money as a partial commodity, i.e. separating the capital from the brand, could make the system more fluid. If funding flowed through shared, transparent infrastructure, each dollar could act as a price signal of intent rather than identity. It would let public, private, and philanthropic funders co-invest in missions rather than compete for credit, and it would make capital itself a clearer signal of what society wants to achieve.
And there’s more than one way to deploy that capital. A unified marketplace could enable multiple layers of participation: direct funding of individual projects, portfolio-level investment across themes or institutions, and pooled programs like advanced market commitments or challenge prizes. These mechanisms already exist, but in isolation; connecting them through a shared infrastructure would let different kinds of funders coordinate risk, sequence interventions, and sustain long-term missions without waiting for a single agency to act.
Why Funders Can’t Find the Right Scientists (and Vice Versa)
Most research funding still operates like a radio broadcast: a call for proposals goes out, and whoever happens to be tuned in responds.
The problem is that the same listeners always hear the signal. As a funder leading BARDA DRIVe, I saw firsthand that our ability to attract respondents to our requests for proposals depended almost entirely on getting the attention of our target audience, and we had no real way to find that audience. Yes, we’d always get hundreds of submissions for every call, but were they the ideal submissions given our goals? Absolutely not.
This is one of two ways funders find scientists: open solicitations. The other is through relationships and networks. But those networks are finite, shaped by geography, institutions, and reputation. They work for the well-connected and exclude the rest.
Researchers face a mirror problem. Most grant seekers have only a partial understanding of who might be interested in their work. They write proposals to known programs, not to the full landscape of potential funders whose missions or geographies might align. The serendipity problem makes this worse: a researcher often has no way of knowing what impact their work might ultimately have.
Both sides operate in the dark: funders can’t see the full range of ideas, and scientists can’t see the full range of capital.
Toward a Market for Discovery
A marketplace of sufficient scale for science could address many of the structural failures in how we fund and organize discovery: the silos between disciplines, the opacity of funding, and the inefficiencies that make serendipity a matter of luck rather than design.
At its core, such a marketplace would function as a coordination layer connecting all forms of capital (public, philanthropic, corporate, and civic) and letting them flow dynamically toward the full range of research we need.
For researchers, it would mean a single submission reaching many funders, with shared diligence and transparent feedback instead of opaque rejections. For funders, it would provide a living map of proposals, collaborations, and outcomes. For society, it would mean more shots on goal, more bold ideas tested instead of buried. And future “customers” of science, such as companies, governments, and communities, could express their needs directly in the system, allowing research supply and demand to finally meet.
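To make the coordination layer concrete, the idea of one submission reaching many funders can be pictured as a simple matching function over shared, machine-readable tags. The sketch below is purely illustrative, under the assumption of a hypothetical tag-based schema; the names Proposal, Funder, and match are inventions for this example, not any real system’s API.

```python
# Toy sketch of a marketplace coordination layer: a proposal tagged by
# topic is matched against every funder whose stated mission overlaps it,
# so one submission is visible to many funders at once.
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    tags: set          # topics the work touches

@dataclass
class Funder:
    name: str
    mission_tags: set  # topics the funder has declared interest in

def match(proposal, funders, min_overlap=1):
    """Return funder names whose missions overlap the proposal's tags,
    ranked by the size of the overlap (a crude price signal of intent)."""
    scored = [(len(proposal.tags & f.mission_tags), f.name) for f in funders]
    return [name for score, name in sorted(scored, reverse=True)
            if score >= min_overlap]

funders = [
    Funder("public-agency", {"antibiotics", "public-health"}),
    Funder("family-foundation", {"microbiology", "antibiotics"}),
    Funder("deeptech-fund", {"quantum-sensing"}),
]
proposal = Proposal("Antibacterial properties of stray molds",
                    {"microbiology", "antibiotics"})

# Both mission-aligned funders see the proposal; the unrelated one does not.
print(match(proposal, funders))
```

The point of the sketch is not the matching logic, which is deliberately trivial, but the inversion it represents: instead of a researcher guessing which program to petition, the declared interests of every participating funder become queryable infrastructure.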
A marketplace could also help diversify who collaborates and who gets funded. Instead of relying on reputation or proximity, scientists could find partners across institutions and disciplines based on shared problems and complementary skills. This would lower the cost of coordination and expand the surface area for innovation.
Importantly, scale matters, especially for funders and performers of “misfit” or “edgy” science. A sufficiently large community would let funders express their preferences openly, let researchers shoot for the moon, and make it easier for both sides to find one another.
The deeper value of a marketplace, though, is the data and insight it would generate. Today we have almost no visibility into who funds what, why, and with what outcomes. We don’t know how collaborations form, how ideas move between fields, or where progress stalls. Making that information machine-readable and interoperable would create an entirely new layer of intelligence about science itself: a feedback system that lets us see, measure, and improve how science works, one in which coordination, pluralism, and imagination reinforce one another.
For every Alexander Fleming, there are hundreds who notice something strange and move on. A marketplace could change that.
This essay is part of an opening four-part series exploring how science can work better to solve our most pressing problems, through new forms of coordination, capital, and intelligence.
Part 1: Rebuilding the Architecture of Science
Part 2: The AI Program Manager
Part 3: Why Science Needs a Marketplace
Part 4: From Grants to Portfolios: Index Logic for Science

