The Convergence of AI & Decentralization will Catalyze the Deinstitutionalization of Science

http://www.triplicate.xyz

Triplicate Research 01

I originally published this post with Triplicate.

Introduction

One of the interesting things about operating at the intersection of very disparate disciplines is that you see convergences in ways that few people can. I believe my time in basic science, translational research, and crypto has led to a rather unique perspective that might provide insight into a future many don’t see coming. This intersection is a very small space. While DeSci has been slowly gaining mindshare, that mindshare isn’t proportionately split between scientists and crypto enthusiasts – it skews heavily towards the latter.

I’ve spent the better part of the past decade trying to make the process of scientific discovery and translation more efficient. My work as co-founder of Molecule, BIO, and VitaDAO (see Decentralized Science, DeSci) tackled the problem of inefficiency and adverse outcomes in translational science through the lens of incentives. My current work with Triplicate expands that lens to include identifying technologies that will shape the future of scientific discovery. Prior to all of this, my experiences at Columbia and the NIH dramatically influenced my views about the efficiency of scientific institutions. While I still believe incentives are the primary lever we have to improve and influence the institution of science, and therefore, human progress, there are several other important factors at play that require reconciliation. This essay explores those topics. 

Of all the possible futures that lie ahead for humanity, a few are becoming increasingly probable. Those at the poles are perhaps the most interesting for the human imagination and tend to dominate the conversation. While it’s easy to imagine all the ways that technological progress can go wrong, I’m particularly interested in understanding and optimizing how it can go right. In his article "Machines of Loving Grace," Dario Amodei outlines the bull case for a techno-optimist utopia as a result of powerful AI disrupting science. While this represents one low-probability scenario, I am adopting the default view that this is the world we want to make more probable.

The world is a chaotic system. When analyzing such systems, the goal is to identify the critical variables that disproportionately (see the Pareto principle) shape outcomes while acknowledging the uncertainty inherent to any prediction. I’m primarily interested in three convergent forces that act as critical variables in shaping humanity’s near-term future: science, artificial intelligence, and cryptographic systems. While their independent effects might be understood by many, the way they interact is non-obvious and, in my opinion, will have a disproportionate effect on our world. Specifically, I’m interested in the novel, emergent properties of the science-AI-crypto convergence that will have an outsized impact on the institution of science itself. Like Amodei's framework of "factors complementary to intelligence," we should consider both the accelerants and constraints that will determine how these technologies evolve and interact.

While the universe of possibilities at the science-AI-crypto intersection is vast, there are three discrete subthemes that I’m particularly interested in:

  • The acceleration of scientific discovery through AI-powered research and decentralized collaboration
  • The deinstitutionalization of science through decentralized systems
  • The emergence of new organizational and funding models that accelerate scientific progress

We should be clear-eyed about the possibilities, limitations, and risks associated with each of these. I believe the most profound changes to our world often come not from raw technological capability, but from our collective decisions about how to apply it. I want to focus this article on the optimistic. I’d like to examine how these forces are likely to interact, what futures they make possible, and ultimately how we can work to realize their positive potential while safeguarding against their risks.

Framework

This piece is split into two distinct parts. In the interest of holding the reader’s attention, I’m going to start by describing my vision for the transformation of science driven by crypto and AI. This section is called “The Transformation of Scientific Discovery”. After that, I’m going to look more deeply at how we arrived at the current moment and describe everything that has informed my worldview – that part is called “The Problem Space”. I’d recommend reading both, but you could stop at the end of “The Transformation of Scientific Discovery” and have a perfectly clear sense of my thesis about how AI and crypto are going to lead to the deinstitutionalization of science.

The Transformation of Scientific Discovery

Let’s start by individually analyzing the three forces I outlined above.

The acceleration of scientific discovery through AI-powered research and decentralized collaboration

Scientific discovery is currently rate-limited on many fronts – intelligence, idea generation, funding, resources, lab space, the physical world itself. However, I believe the primary rate limits on progress are more bureaucratic than physical. Most researchers are dissatisfied with the amount of time they spend seeking funding and attempting to publish their work relative to the time they spend performing science, but they are willing to do so because otherwise, they wouldn’t be able to do science at all. While the institutionalization of science has led to clear pathways for funding, it has also bottlenecked the field.

I believe this is about to change dramatically as a function of powerful AI. When I speak about powerful AI, I’m adopting Dario Amodei’s definition. The most important properties to understand for our purposes are that powerful AI:

  • is smarter than a Nobel Laureate across multiple disparate fields
  • has all the interfaces available to humans working virtually and is multimodal
  • can perform tasks that take a long period of time to complete (days, weeks)
  • can absorb information and generate actions at 10-100x human speed
  • can generate infinite copies/agents and is rate-limited primarily by the physical world – compute, energy, physical limits

It doesn’t take much imagination to understand the ways that this technology will radically upend our institutions when you consider how our institutions operate. The sheer amount of latency between applying for a grant, performing work, and publishing is entirely incompatible with this model. I personally believe the geo-restrictive and centralized model of universities and research institutions – which made a lot of sense before the internet and globalization – functionally breaks down in a world where you have access to this level of intelligence from your laptop. Instead, a new paradigm emerges that is more conducive to independent scientists and looser groupings/affiliations forming via the internet – let’s refer to this broadly as the decentralization of research. In this world, with enough compute resources, anyone can compete in the scientific arena armed with natural language and a computer. Science is no longer for scientists alone – or rather, everyone becomes a scientist.

Virtual space becomes the primary place where science occurs, and collaboration via the internet replaces most in-person work. We won’t require a human-legible and mechanistic understanding of fields like biology to make progress in them anymore – instead, we will simulate biology, chemistry, and physics to a much more sophisticated degree. Our computers will interface with automated and fully integrated physical laboratories when necessary.

The deinstitutionalization of science through decentralized systems

It’s hard to imagine how large research institutions maintain their relevance. Concentration of resources – like money and talent – does provide an edge for those that can quickly adapt. Unfortunately, universities are historically slow to adapt, largely as a function of the aforementioned bureaucracy. They are unlikely to compete with leaner organizations that carry little administrative overhead.

My best guess would be that the smartest large organizations leverage their endowments to acquire large amounts of compute to attract the highest profile scientists. These organizations will survive and grow. The scientists there will have an outsized impact working on large scale projects with abundant resources. This is somewhat at odds with the fact that many academic researchers are currently starved for compute, but we are still early.

Most will find an easier time doing what they love outside of academia’s hallowed halls. Agency and autonomy are going to be much more valuable in the world of tomorrow. Universities have always loved and depended on cheap labor and indentured servitude (yes, postdocs). In this new world, the academic underclass will have to learn to rely on themselves as opposed to institutions, as the surviving universities discover an even cheaper source of labor – intelligent computers.

When I refer to deinstitutionalization, I’m speaking relative to our current understanding of scientific institutions – large, centralized research organizations that exist in meatspace and have dominated scientific progress for the last century or so. Decentralization proposes an alternative (or complement, depending on how radical you are) to the large research university model.

Blockchains enable a broad design space here. Decentralized organizations and protocols are not controlled by a single actor – rather, they are operated via a network of actors that provide something to the network in exchange for some reward or governance. In the case of proof-of-work blockchains, that thing has historically been compute. In proof-of-stake blockchains, it’s a currency or token. In DAOs, it is often providing capital or time in exchange for governance in the form of tokens. I believe that these models are much more applicable to our future. With a simple transaction and little overhead, you can access resources.
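To make the mechanics concrete, here is a minimal sketch (in Python, with purely illustrative names – this is not any existing protocol) of the primitive these models share: contribute a resource to the network, receive governance weight, and vote on how pooled resources are allocated.

```python
from collections import defaultdict

class ResourceDAO:
    """Toy model: contributions mint governance weight, which decides allocations."""

    def __init__(self):
        self.governance = defaultdict(float)  # member -> voting weight
        self.treasury = 0.0                   # pooled resources, in a common unit

    def contribute(self, member: str, amount: float):
        """Capital, compute, or time goes in; governance weight comes out."""
        self.treasury += amount
        self.governance[member] += amount  # 1 unit contributed = 1 unit of weight

    def vote(self, ballots: dict) -> str:
        """Each member's ballot counts in proportion to their weight."""
        tally = defaultdict(float)
        for member, choice in ballots.items():
            tally[choice] += self.governance[member]
        return max(tally, key=tally.get)

dao = ResourceDAO()
dao.contribute("alice", 100.0)  # e.g., capital
dao.contribute("bob", 40.0)     # e.g., compute, priced in the same unit
print(dao.vote({"alice": "fund-project-a", "bob": "fund-project-b"}))
# alice's larger stake carries the vote: fund-project-a
```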

The emergence of new organizational and funding models that accelerate scientific progress

Organizations like Prime Intellect engage in decentralized training and envision a world where comparatively large compute clusters are accessible to anyone with a great idea, independent of university affiliation. They are commodifying compute by enabling individuals and organizations to contribute it from anywhere in the world via a decentralized network, with incentives for doing so. It may be the case that the largest compute clusters in the world end up decentralized and open.
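Prime Intellect’s actual training stack is far more sophisticated, but the core idea of decentralized training can be sketched in a few lines: independent nodes each take gradient steps on local data, then periodically synchronize by averaging parameters over the network. A toy version, assuming honest nodes and a trivially simple model:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_step(params: float, data: np.ndarray, lr: float = 0.1) -> float:
    """One toy gradient step on a node's private data (least squares toward its mean)."""
    return params - lr * (params - data.mean())

# Three independent nodes, each holding private data and a copy of the model
node_data = [rng.normal(loc=mu, scale=1.0, size=100) for mu in (1.0, 2.0, 3.0)]
params = [0.0 for _ in node_data]

for _ in range(50):
    # Each node trains locally on its own hardware...
    params = [local_step(p, d) for p, d in zip(params, node_data)]
    # ...then the network synchronizes by averaging parameters
    avg = sum(params) / len(params)
    params = [avg for _ in node_data]

print(round(avg, 2))  # converges near the global mean (~2.0), as if trained centrally
```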

Decentralized autonomous organizations (DAOs) come to the forefront here too. In this organizational model, individuals can pool capital, resources, talent, and compute via onchain (blockchain-based) organizations in exchange for incentives and rewards. Few are doing this effectively so far, but again, we are early.

In the model of decentralized science, you can imagine the following:

  • Individuals pool resources – compute, capital, talent, data – via onchain organizations in exchange for ownership and governance
  • These individuals, now bound by a common structure (a DAO, for example) vote on how to allocate those resources
  • Scientists apply directly for funding
  • The DAO receives something in exchange for its contribution (data, status, ownership, financial upside, the result itself)
  • Scientists use funding to access resources - compute, virtual lab space, agents, data
  • Scientists oversee a large number of agents to conduct their work
  • Agents can do everything from hypothesis generation to experimental execution with humans-in-the-loop – we arrive at something akin to closed-loop science (see the sketch after this list)
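That last bullet deserves a sketch. Below is a deliberately toy, hypothetical rendering of the closed loop – every object and method name is a placeholder, not a real API: the agent proposes, a human gates the decision, an automated lab executes, and the result feeds the next iteration.

```python
def closed_loop_science(agent, scientist, lab, knowledge: list, rounds: int = 3) -> list:
    """Toy closed loop: agents generate and run experiments; a human gates each step."""
    for _ in range(rounds):
        hypothesis = agent.propose_hypothesis(knowledge)   # agent: idea generation
        if not scientist.approve(hypothesis):              # human-in-the-loop checkpoint
            continue
        protocol = agent.design_experiment(hypothesis)     # agent: experimental design
        result = lab.execute(protocol)                     # automated/cloud lab runs it
        knowledge.append(agent.interpret(result))          # findings seed the next round
    return knowledge
```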

The trend here towards radical deinstitutionalization mirrors what we are observing with large financial institutions as a function of cryptocurrencies. Middlemen are being disintermediated and control is shifting hands. We are living through the largest transfer of wealth in human history and a flight to risk assets as many people lose faith in large, centralized powers and tire of the challenges and overhead that come with them.

There is another way that things could go. While academic research institutions are losing relevance, another form of centralized actor – startups and corporations – is filling the gap: think OpenAI, Anthropic, Google, and Microsoft. It’s entirely possible that nationally funded research institutes are supplanted by corporations leading the charge of scientific progress. I do believe this would be net worse for humanity, given that the nation-state apparatus enables some level of democratic control whereas corporations do not. I firmly believe the decentralized model provides the most upside for humanity.

When I think about decentralized and cryptographic systems, the thing that excites me most is that we have a sandbox environment to construct incentives. Crypto on its own is an agnostic technology that can be leveraged in many ways. Given its flexibility, it has historically been used to construct perhaps even more broken incentives than the status quo. The crypto space is plagued by PvP dynamics, zero sum games, and bad actors largely due to an utter lack of regulation and enforcement. That said, I still believe it may be the only real hope we have to align the future in a positive sum way.

The model above is one example, but there is an infinite variety of others. It may be that energy is abundant and cheap and thus, so is compute. It may be that individuals begin to patronize scientists more directly. One can imagine a world where individuals with resources share those resources with individuals who can better use them, in exchange for some upside. That upside can be impact-oriented, financial, reputational, status-driven, etc. There are no rules.

I’m particularly interested in a peer-to-peer model of scientific progress with respect to funding and collaboration. Crypto is one way to enable this. I look forward to a world where a smart person has a good idea, and anyone can quickly do something to help enable them. In the context of the world described above, this could unlock a lot of innovation. I think we should be building resource coordination networks and systems to ensure that the smartest people (and smartest agents, or some combination of both) have access to what they need. We could see hundreds or thousands of years of scientific progress compressed into an extremely narrow timeframe.

Of course, this world might have a lot of downsides too, both for scientists and the general population. It may be so hypercompetitive and fast moving that one doesn’t have much of an incentive to share their work. Blockchains can help here too – as immutable ledgers, they are exceptionally well-suited to help establish provenance of ideas and distribute rewards accordingly. It may be that the progress being made is radically destabilizing and that humans become rather irrelevant. But for now, we will focus on the positive and optimistic.
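A minimal sketch of how such provenance could work, assuming nothing more exotic than a hash function: fingerprint the research artifact locally, publish the fingerprint (plus author and timestamp) to any immutable ledger, and anyone can later verify the artifact against that record without trusting an intermediary. The filenames below are illustrative.

```python
import hashlib
import time

def provenance_record(path: str, author: str) -> dict:
    """Content-address a research artifact; the digest is what gets written onchain."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "author": author, "timestamp": int(time.time())}

def verify(path: str, record: dict) -> bool:
    """Anyone can recompute the hash and check it against the ledger entry."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["sha256"]

# record = provenance_record("preprint_v1.pdf", "dr_smith")
# ...publish the record to a blockchain; later: verify("preprint_v1.pdf", record)
```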

One final note, particularly on translational science - the current intellectual property system is probably going to be radically upended in a world where data is ingested en masse and fed to large systems to generate novel insights. As we move increasingly towards agentic models of progress, it becomes rather difficult to attribute anything to a single actor. The extent to which translational and commercial science are going to change in the new world will be the subject of a future Triplicate Research essay.

The vision laid out above is the world we want to enable at Triplicate – one where access to funding, tools, resources, and data is abundant, and scientists are rate-limited only by the physical world. I personally believe that our scientific institutions are failing us at every level. Below, I’d like to dedicate quite some time to understanding how and why.

The Problem Space

Science and the institution of science are two separate things that often have less in common than one might think. When I say “science”, I’m referring to the scientific method and the process of reducing uncertainty. The institution of science refers to the set of organizations, practices, and structures that exist to - ideally - enable the scientific method. Concerningly, scientists working within the current institution of science face enormous barriers to doing science. 

While funding is at an all-time high, and technology is more advanced than ever before, there are structural challenges that make doing science rather cumbersome and difficult. For all the innovation it has enabled, the institutionalization of science has had massive downsides, all well-known to any academic who spends a disproportionate amount of time competing for funding, publications, and jobs - and not much doing science. I would go a step further and argue that the institutionalization of science has led to society placing scientists on a pedestal and the commonly held view that science is only for scientists. Here, I share the view of Imran Khan, Chief Executive of the British Science Association, who argues that science is "too important to just be left to scientists alone". 

So how did we get here and what is broken? Research is largely confined to centralized institutions, reproducibility remains painfully low, and incentive structures prioritize publication counts over real impact. Some of the most important limitations are summarized below:

The Current Limitations

The traditional scientific establishment faces several critical constraints:

Centralization and Access

  • High barriers to entry limit participation to established institutions
  • Research funding concentrated in traditional grants and venture capital
  • Critical data and tools often siloed within institutional walls
  • Talent restricted by geographic and institutional boundaries

Incentive Misalignment

  • "Publish or perish" culture prioritizing quantity over quality
  • Grant funding favoring incremental, "safe" research over bold exploration
  • Limited rewards for reproduction or validation of existing work
  • Publication bias toward positive results

Efficiency Bottlenecks

  • Manual, time-intensive experimental processes
  • Limited ability to process and synthesize vast amounts of research data
  • Slow peer review and publication cycles
  • Difficulty in coordinating large-scale collaborative efforts

At face value, we have a system that was designed for a world of physical papers, geographic boundaries, and limited data processing capabilities, now entrenched in bureaucratic structures with little incentive to evolve. Compounding the issue, this system is straining under the weight of exponentially increasing complexity (which it is failing to integrate) while clinging to incentive structures that actively impede progress.

The bullet points above fail to convey what is really going on. To drive the point home, I will synthesize all of the above into a caricature of the modern-day academic scientist.

A Day in The Life of Dr. Smith

Let’s call our academic researcher Dr. Smith. Dr. Smith is a senior PI who oversees a small laboratory with three postdocs. Dr. Smith relies on funding from the university and external grant sources – like the NIH – to run his laboratory. Similarly, he relies on postdocs as a cheap source of labor to run the experiments. To attract that funding – and thus, postdocs – his laboratory needs to be productive and impactful. Impact is measured via a metric called impact factor, which is disproportionately high in certain journals, like Nature and Cell, given their broad readership. Thus, Dr. Smith’s priorities are as follows:

  • Attract funding and postdocs
  • Win grants to pay the postdocs
  • Publish work in a high impact journal
  • Repeat the cycle

This cycle repeats ad infinitum and the incentives are quite interesting. Dr. Smith’s incentive is to attract money and talent (notably separate from doing great science). His postdocs’ incentives are similar, but perhaps more perverse given their less certain future. They have been training continuously for the better part of 10 years or more without much of a salary and without any assurance of a job. Their ability to secure a job, ideally a tenure track position (8% success rate) also depends on their publishing in a high impact journal. Their primary motivation is to work on a topic with a high likelihood of being interesting to Nature or Cell.

Many go to extraordinary lengths to secure their publication. They take up practices like p-hacking, selectively publishing experiments that confirm their hypothesis, and ignoring those that don’t. Their negative results are not published – often they are not discussed much at all. Sometimes, individuals fabricate results or rig experiments in order to publish. If they don’t, they might not have a job next year.
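To see why selective publication corrodes the literature, consider a minimal simulation: every hypothesis tested below is truly null, yet if only “significant” results get published, the published record consists entirely of false positives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

experiments, n, published = 1000, 30, 0
for _ in range(experiments):
    # Both groups are drawn from the SAME distribution: the true effect is zero
    control = rng.normal(0, 1, n)
    treatment = rng.normal(0, 1, n)
    _, p = stats.ttest_ind(control, treatment)
    if p < 0.05:  # selective publication: only "significant" results see daylight
        published += 1

print(f"{published}/{experiments} null experiments published as positive findings")
# ~50/1000 reach significance by chance alone – and those are all the literature sees
```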

All of this distracts from doing good science. It has also resulted in a replication crisis and a significant amount of our scientific knowledge being built on a shaky foundation. This is a story unfolding in almost every academic laboratory in the world.

A Brief History of Scientific Centralization 

How did it get so bad? It starts with the centralization and institutionalization of research.

The centralization of science in research institutes wasn’t an accident - it was a rational response to the material and social constraints of the industrial era. Understanding this is key to understanding how the system must change in response to the current era. 

In early modernity, science emerged from three distinct traditions – the individual natural philosopher (think Galileo), the Royal Society model of gentleman scientists, and the university system. The massive shift towards institutionalization in the 19th and 20th centuries was driven by several factors:

Economics of Physical Infrastructure

  • Modern science required increasingly expensive equipment
  • Shared facilities enabled economies of scale
  • Concentrated funding could support long-term research programs
  • Specialized environments (clean rooms, particle accelerators) needed dedicated spaces

Knowledge Density Benefits

  • Physical proximity enabled rapid information exchange
  • Informal conversations sparked new collaborations
  • Mentorship happened organically
  • Tacit knowledge transferred through apprenticeship

Quality Control Mechanisms

  • Institutional reputation created accountability
  • Peer review could happen in real-time
  • Standards could be maintained and enforced
  • Experimental protocols could be directly verified

The period from 1940 to 1970 represented the ideal form of centralized science, producing achievements like the Manhattan Project, Bell Labs, and an unprecedented amount of technological advance. But this came with high costs. It resulted in cultural narrowing and a standardization of career paths. It fostered institutional conformity that discouraged radical thinking. It built prestigious institutions that became self-reinforcing gatekeepers. Downstream, these led to resource misallocation and innovation bottlenecks.

Today’s scientific enterprise restricts the flow of ideas, talent, and capital for the benefit of the institution. What started as an attempt to concentrate resources and enable collaboration in a capex-intensive world has evolved into a gatekeeping apparatus that often optimizes for self-preservation at the cost of innovation. Centralization is not just about physical resources or funding, but about who gets to participate in the scientific conversation. We do not live in a 1940s-era world. The reality today is profoundly different, and we need to update our priors.

Resources are not distributed based on merit or potential. The majority are concentrated in a few geographical niches that reward those in proximity. While this model fit the preglobalized world well, it fails us today.

Show Me The Incentive, And I’ll Show You… the Publication

Perverse incentives abound: the "publish or perish" paradigm of academic science has created a system where career advancement depends more on publication metrics than research quality or impact. Researchers are leaving academia en masse, tired of spending an inordinate amount of time competing for grants and churning out papers for little job security and money rather than conducting actual research.

This dynamic is exacerbated by a grant funding system that creates its own fatalistic cycle. Researchers spend countless hours writing and rewriting grants with an 80% rejection rate, while conservative review panels favor safe, incremental work over potentially transformative projects. The system creates a cruel catch-22: researchers need funding to generate preliminary data but need preliminary data to secure funding. Meanwhile, short grant cycles discourage the kind of long-term, systematic research programs that often yield breakthrough discoveries. They also fail to reward risk. The median age of first-time recipients of R01 grants, the most common and sought-after form of NIH funding, is 42, while the median age of all recipients is 52. More people over 65 are funded with research grants than those under 35. Young people are at a disadvantage, yet historically, many of the most important scientific contributions were made by people in their 20s and 30s.

The institutional architecture of science has basically calcified. What began as a rational response to the material constraints of the industrial era has evolved into an entrenched system that concentrates resources in elite institutions, creates high barriers to entry, and optimizes for self-preservation rather than innovation. This has profound consequences for humanity: in drug development, over 50% of preclinical research cannot be reproduced, negative results rarely see the light of day, and critical research outputs remain locked behind paywalls and proprietary databases. The path from discovery to real-world impact has become a treacherous "valley of death," where promising research often dies before reaching practical application.

The Data and Discovery Bottleneck

The exponential growth in scientific data generation has created an unexpected bottleneck in modern research: while we can produce more data than ever before, our capacity to extract meaningful insights from this deluge lags painfully behind. Modern tools like single-cell sequencing and high-throughput screening generate vast datasets, but as "A Future History of Biomedical Progress" observes, we face "a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on." The sheer volume of data has, ironically, often obscured rather than illuminated our path to understanding.

This challenge manifests most clearly in the increasingly complex task of integration across different scales and modalities of scientific inquiry. Researchers must synthesize evidence across genomics, proteomics, and imaging while simultaneously considering molecular, cellular, and organismal scales. As Dr. Shelby notes in "Updating priors for AI in bio," this creates a particularly acute problem in biology, where "the most valuable data is the most scarce, most expensive, and can't significantly contribute to developing better drugs by a startup (given the 3-7 years to get to clinic)."

At the heart of these challenges lies a profound but rarely acknowledged constraint: our insistence on human-legible, mechanistic understandings of biology. The conventional approach of reducing complex systems into conceptual primitives that humans can reason about has yielded important insights, but may be fundamentally misaligned with biology's inherent complexity. Scientists can only process a tiny fraction of available literature, while increasingly complex systems defy intuitive understanding.

The Convergent Solution

Back to the solution space. These three core problems – centralization, broken incentives, and data interpretability – are systematically addressed by the introduction of cryptographic systems and AI into the broader scientific enterprise.

Crypto and decentralized systems provide a potential alternative to centralized systems, enabling individuals to come together around shared priorities via the internet. The networks they form may replace universities in the future. By distributing validation work across a global network of researchers, incentivized through token systems and supported by community-driven quality control, we can create more robust verification processes. This approach, combined with open access to protocols and data, could transform how we coordinate large-scale scientific efforts. Crypto provides a sandbox to design new incentives from scratch, rewarding behaviors that foster the best possible outcomes. Given it is an agnostic technology, doing so will require careful thought and planning. Incentive design will be the subject of a future Triplicate Research article. This is an important and wide-open design space.
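As one hypothetical example of such an incentive design (illustrative only, not an existing protocol): fund a replication bounty, let independent labs stake reports on whether a result reproduces, and split the pot among those who land on the consensus outcome.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicationBounty:
    """Toy mechanism: reward validators whose reported outcome matches consensus."""
    pot: float
    reports: dict = field(default_factory=dict)  # validator -> bool (did it replicate?)

    def report(self, validator: str, replicated: bool):
        self.reports[validator] = replicated

    def settle(self) -> dict:
        votes = list(self.reports.values())
        consensus = votes.count(True) > len(votes) / 2  # simple majority
        winners = [v for v, r in self.reports.items() if r == consensus]
        return {v: self.pot / len(winners) for v in winners}

bounty = ReplicationBounty(pot=3000.0)
bounty.report("lab_a", True)
bounty.report("lab_b", True)
bounty.report("lab_c", False)
print(bounty.settle())  # {'lab_a': 1500.0, 'lab_b': 1500.0} – consensus reporters split the pot
```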

AI provides a direct answer to the data bottleneck and interpretability question. If the scaling hypothesis holds and we end up with AGI in the next decade, we will likely be able to leverage these systems to integrate our total body of scientific knowledge and create entirely new discoveries. We will no longer be dependent on a human-legible and mechanistic understanding of science to make progress in it. Although that may sound counterintuitive and scary, it’s necessary to compress a lot of progress into a short time frame.

The result of all of this will be the deinstitutionalization of science as we know it, and I think for the best. This will shift how research is conducted and who gets to participate. While the centralization of science in research institutes was once a rational response to constraints, we now occupy a different world. Just as Ramanujan upended mathematics despite being outside the academic establishment, the next generation of breakthrough insights may come from passionate founders and online collectives working at the periphery of traditional science. Optimistically, this achieves much more than democratizing access or improving reproducibility – it may unleash latent scientific potential that exists beyond institutional walls. By combining AI-powered research tools, cryptographic systems for trust and incentives, and decentralized collaboration networks, we can create an environment where curiosity and capability, not institutional affiliation, determine one's ability to contribute to human knowledge. The future of science lies not in further strengthening our existing institutions, but in empowering a global community of explorers driven by wonder and enabled by technology to tackle humanity's greatest challenges.