Dec 12, 2021

Building ecosystems with grant programs

Now that I’m helping some blockchain ecosystem projects, part of that is issuing grants. So I thought I’d share some of the great practices I’ve personally seen in and out of Web 3:

  • early-stage/high-growth VC programs in London backed by private investors. Stuff that made London’s startup scene kick off.
  • the European Commission’s Horizon 2020 program, which deployed €80 billion over the last decade, building the Europe-wide startup ecosystem
  • grant programs for commercialising novel engineering technologies in Sub-Saharan Africa
  • the Polygon Ecosystem DAO grant program to attract new projects that will grow Polygon by another 100 million users

Scalable & repeatable results, with resilience against bad actors

The European Commission’s grant system delivers repeatable results, and is relatively resistant to malicious actors like insiders with conflicts of interest. It’s stood the test of time, and works in widely different contexts. These are characteristics many ecosystem DAOs aspire to.

While the European Commission seems bureaucratic and hierarchical (especially when compared to a DAO), its grant process is surprisingly flat and scalable. Plus, it worked. The Horizon 2020 program built out a diverse, cross-European startup ecosystem from a blank canvas.

How it works

The repeatability comes from a standardised evaluation. Each application is assessed on 3 factors (which I paraphrase here to simplify the bureaucratic language):

  1. Concept - Is it viable?
  2. Outcomes - Will it achieve the intended outcomes?
  3. Capability - Can the team do it?

Each is scored out of 5, with each number having a specific definition in relation to the grant criteria.

For each grant round, a new set of independent evaluators is invited to participate from industry and academia. 200 evaluators literally fill a floor of an office tower in Brussels. All evaluation is done by these external experts, and Commission employees act only as stewards of the process.

Each application is scored separately by three independent evaluators. Then the three meet to discuss it, and must come to consensus on each score, writing a few bullet points to justify the final score in each category. This gives the process resilience against malicious actors who might tamper as insider evaluators, or try to use “external incentives” with Commission employees. (While the overall process is heavy, the scoring system and consensus mechanisms within it are very efficient, and we can borrow from them.)

From there, the top-rated applications are shortlisted for discussion in a plenary with all evaluators for that grant round.
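For anyone wanting to borrow this, here’s a minimal sketch of that scoring and consensus standard as a data structure. The three criteria come straight from above; the class names, the shortlist threshold, and everything else in the code are my own illustrative assumptions, not the Commission’s actual system.

    from dataclasses import dataclass, field

    CRITERIA = ("concept", "outcomes", "capability")   # each scored 1-5

    @dataclass
    class Evaluation:
        evaluator: str
        scores: dict                                   # criterion -> int in 1..5

    @dataclass
    class Application:
        title: str
        evaluations: list = field(default_factory=list)
        consensus: dict = field(default_factory=dict)  # criterion -> (score, bullets)

        def add_evaluation(self, ev):
            # Exactly three independent evaluators score each application.
            if len(self.evaluations) >= 3:
                raise ValueError("already has three evaluators")
            self.evaluations.append(ev)

        def record_consensus(self, criterion, score, bullets):
            # The consensus score is agreed in discussion, not averaged:
            # the three evaluators converge and justify it in writing.
            assert criterion in CRITERIA and 1 <= score <= 5
            self.consensus[criterion] = (score, bullets)

        def total(self):
            return sum(score for score, _ in self.consensus.values())

    def shortlist(applications, threshold=12):
        # Top-rated applications go to the plenary; a cutoff of 12/15
        # is my assumption, not the Commission's rule.
        return [a for a in applications
                if len(a.consensus) == len(CRITERIA) and a.total() >= threshold]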

This process is rather opaque, but it’s fair, consistent, and has delivered great results.

Drawbacks

Grant applications are a one-shot deal. You write your big pitch, throw it in and hope for the best.

Even with complex projects detailed over hundreds of pages, feedback is limited to the few bullet points written by the evaluators. This cuts off opportunities for promising projects to improve. Many of them would still achieve the goals of the grant program if they had better feedback and follow-on support.

A significant “application-support” industry has also grown up around this system – people who know how to write a successful proposal to the Commission.

In the rounds I participated in, the final shortlist of projects was polluted by organisations known for getting grants and doing very little with them. They were just good at winning the money and then doing the minimum to keep it. There were always clear winners that were worthy, but it seemed so wasteful to let any dead weight get funded while great projects were rejected simply because they weren’t expert application writers.

Other grant programs have found ways to sidestep these problems, so let’s move on.

The personal touch

I’ve just started with Polygon Ecosystem’s grant program, but I’m already seeing a lot of great practices.

The evaluation team is small, and very communicative.

Evaluators can pick up applications that are relevant to their experience, and they respond quickly and personally. This makes the application process more of a feedback loop, where the applicants can start with less imposing applications and learn from their evaluators as they go.

This allows the team to have more adaptable selection criteria, necessary in the highly dynamic world of crypto. And because the team communicates amongst itself, there’s still a sufficient level of consistency in the evaluation process as a whole.

Quality creation, not quality control

Capital Enterprise (CapEnt) and the London Co-Investment Fund played a critical role in creating the London technology ecosystem. Capital Enterprise enabled the UK’s top-tier universities to rapidly expand their commercialisation rates, mainly driven by students launching successful companies. Later, they ran the London Co-Investment Fund, which rapidly grew the headcount of the top-performing private high-growth VC funds. The result was London becoming a global startup capital, rivalling San Francisco and New York.

Most grant review processes are designed to filter out bad applications, controlling which ones progress to the later stages. At the early stages of an ecosystem, this is a mistake. CapEnt used the same mechanics but flipped the paradigm: their evaluations were used to identify potential, and activate it. The focus was on quality creation, rather than quality control.

Another way to look at this is to ask: are we using our raw material to its best potential? When a technology ecosystem is at its early stages, the raw material is smart, motivated people. Since best practices are yet to be established, looking only at the quality of their proposals is too myopic, and leads to false negatives: teams and projects that would have succeeded get rejected. Quality creation means treating proposal evaluation as the starting point of a support relationship, whether or not a funding request is approved at that moment.

In early stages, this approach leads to a much higher success rate for the overall program, particularly in terms of ROI. When you help smart people succeed, they become the support network for their peers and the next batch.

Attracting the best-performing projects requires non-financial support

Turning to startup investment models, let’s look at the top-performing funds, the ones that outperform the others by many orders of magnitude, like Y Combinator, AngelPad, Techstars and Seedcamp. (The last two were my clients.) While the average startup accelerator loses money and invests at the idea stage, the top tier invests in real companies and typically converts $10-30M funds into $5B+ portfolios.

Investing in early-stage blockchain projects is similar to these in several ways:

  • Investing for blockbusters - overall success is achieved on a blockbuster model, where a small fraction of investments succeed, but succeed so well that they produce the outsized ROIs for the whole portfolio (see the sketch after this list).
  • Funding growth inflections - the best-performing portfolios succeed by attracting projects that are later-stage, ready to scale, or already at their inflection point.
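To see why the blockbuster model dominates, here’s some back-of-the-envelope maths. Every number below is a made-up but plausible assumption, not data from any fund named above.

    # A hypothetical $20M fund spread across 100 projects.
    fund_size = 20_000_000
    n_projects = 100
    cheque = fund_size / n_projects                    # $200k per project

    # Assumed outcomes: 90 projects return nothing, 8 return a modest 3x,
    # and 2 blockbusters return 500x on their cheques.
    outcomes = [0.0] * 90 + [3.0] * 8 + [500.0] * 2

    portfolio_value = sum(m * cheque for m in outcomes)
    print(f"${portfolio_value:,.0f} ({portfolio_value / fund_size:.1f}x the fund)")
    # -> $204,800,000 (10.2x the fund)
    # The two blockbusters contribute $200M of the $204.8M: they are the return.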

If a project is showing signs of being a blockbuster, they already have access to funding. Waving a bit more money in front of them isn’t going to get their attention. What attracts them is solving their immediate challenges and fast-forwarding them to the next level. This comes down to a relevant mentor and alumni network that can open the right doors, and share the right practical experience.

Pushing support to a “post-grant” stage sets up the entire fund for failure, as the blockbusters have different challenges than the mediocre. Leaving support until later forces the fund to invest in the needs of the mediocre, thereby attracting more mediocre projects later.

Starting out by surfacing the challenges of the projects with the highest potential attracts them, and the resulting support network is then marketed to attract similarly high-calibre applicants.

Co-developed roadmaps

How do you make the most of great applicant teams that have high potential but need guidance to get there?

Breaking apart the grant payments into milestones is part of the answer. That reduces the investment in failed projects, but it still leaves those projects to fail when they could have taken off with a course-correction.
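As a minimal sketch, milestone-gated payments can be as simple as this. The structure is generic; the grantee, milestones and amounts are hypothetical examples, not any program’s actual terms.

    from dataclasses import dataclass

    @dataclass
    class Milestone:
        description: str
        amount: int          # tranche released when this milestone is approved
        approved: bool = False

    @dataclass
    class Grant:
        grantee: str
        milestones: list
        disbursed: int = 0

        def approve(self, index):
            # Release a tranche only after the milestone is reviewed and approved.
            m = self.milestones[index]
            if m.approved:
                raise ValueError("milestone already paid out")
            m.approved = True
            self.disbursed += m.amount
            return m.amount

        def remaining(self):
            # Unapproved tranches stay with the program, capping losses on
            # stalled projects - though, as noted, this alone won't save them.
            return sum(m.amount for m in self.milestones if not m.approved)

    grant = Grant("example-dao-tooling", [
        Milestone("testnet prototype", 20_000),
        Milestone("security review passed", 30_000),
        Milestone("mainnet launch and docs", 50_000),
    ])
    grant.approve(0)
    print(grant.disbursed, grant.remaining())   # 20000 80000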

An informal practice used in high-growth accelerators is providing experienced advisors to applicants to co-develop their milestones and roadmap.

This is particularly effective with early-stage technologies and fast-changing markets. The Africa Prize for Engineering needed to support projects deploying novel technologies in rapidly-changing markets. The dynamic environment was similar to what we see now with blockchain, and there was no way to dictate uniform roadmaps for every project we wanted to support. Co-developing milestones with mentors became a key practice, allowing the projects to benefit from directing themselves with expert advice, so their grant pitches were more focused on their ability to execute.

Co-developing milestones builds on the previous practices mentioned. It uses the granter’s non-financial resources to create quality applicants rather than just filter out the weak ones. It creates personal relationships between granters and grantees, allowing the grant program to build a knowledge advantage in the space. It also increases visibility of bad actors early, so they get filtered out.

The big picture is meaningful support that attracts the best applicants.

These practices are what I’ve seen skew the selection process toward the best performers, which is the bread-and-butter of any grant-giving organisation.

What am I up to these days?

I’m a new parent, and prioritising my attention on our new rhythms as a family.

Work-wise, I’m trekking along at a cozy pace, doing stuff that doesn’t require meetings :)

I have a few non-exec/advisory roles for engineering edu programs. I’m also having fun making a few apps, going deep with zero-knowledge cryptography, and have learned to be a pretty good LLM prompt engineer.

In the past, I've designed peer-learning programs for Oxford, UCL, Techstars, Microsoft Ventures, The Royal Academy Of Engineering, and Kernel, careering from startups to humanitech and engineering. I also played a role in starting the Lean Startup methodology, and the European startup ecosystem. You can read about this here.

Contact me

Books & collected practices

  • Peer Learning Is - a broad look at peer learning around the world, and how to design peer learning to outperform traditional education
  • Mentor Impact - researched the practices used by the startup mentors that really make a difference
  • DAOistry - practices and mindsets that work in blockchain communities
  • Decision Hacks - early-stage startup decisions distilled
  • Source Institute - skunkworks I founded with open peer learning formats and ops guides, and our internal guide on decentralised teams