Keyword and Evaluator Selection for EIC Accelerator Applications (SME Instrument)
The EIC Accelerator blended financing (formerly SME Instrument Phase 2, grant and equity) allows all applying startups and Small- and Medium-sized Enterprises (SMEs) to add keywords on the platform which are used to select expert evaluators (read: AI Tool Review). In the past, this feature was a black box, since professional writers and consultancies did not know how different evaluators would grade an application or whether it made a difference at all (read: Re-Inventing the EIC Accelerator). The common approach was to select the keywords that most closely reflect the project (e.g. battery technology, machine learning, biomass) and hope for the best. While this remains a sound approach, this article presents an opinion on how keywords could be selected to maximize the chances of a successful submission.

Evaluator Pool and Keywords

The total evaluator pool contains thousands of experts who are selected based on availability and, importantly, the keywords entered on the platform. These keywords are chosen from a dropdown list in which each parent keyword contains multiple child keywords, and a total of three parent-child keyword pairs are selected for a project in a specific order. In addition, free keywords can be added to supplement the initial selection. There are usually multiple options when selecting keywords: an AI-battery startup could lead with Energy, followed by Battery and then Machine Learning, or it could reverse this order. But what if the market is PropTech, or real estate in particular, because the project provides energy storage solutions for backup systems in commercial buildings? Then the keywords could also focus on the real estate industry, certain customer segments (e.g. utility companies) or similar aspects. There are many options to choose from but, until now, it has been unknown how they affect the evaluation of an application, since trial and error was hindered by the non-transparent evaluations, the randomness of reviews and the scarcity of deadlines in 2020.

Evaluators' Feedback

The European Innovation Council (EIC) has introduced a feedback feature into the evaluation process which allows reviewers to leave detailed comments for the applicants. While their identity and background remain unknown to the applicant, the specific comments often reveal the angle from which an evaluator is looking at the innovation. If the reviewer has a scientific perspective, a technical view or is embedded in the target industry, the comments will often focus on that aspect. For better or worse, the type of evaluator can have a significant impact on how the proposal is reviewed. A study of multiple Step 1 evaluations makes it evident that evaluators hold very different perspectives: the same aspect of a project can be praised by one evaluator and criticised by another within the same evaluation, which makes the viewpoint, and not just the quality of the project, critical. From experience, positive Step 1 reviews often praised the impact, feasibility and vision of the project when evaluators saw a strong potential for disruption, while critical reviews tended to focus on isolated technical or commercial aspects.

A Different Approach

Instead of asking oneself "Which keywords describe my project best?", it seems to be a better approach to ask: "What background does an evaluator need in order to be most impressed by this project?"
Very often, a machine-learning scientist might not be impressed by a certain AI application, while someone from the industry it targets would immediately see the benefit and take a positive view. But the opposite can also be true: if the industry impact is harder to grasp than the cutting-edge nature of the technology, a scientist may come away with a better impression than an industry participant. The aim of selecting evaluators should be to pick experts who will understand the company's vision and view the innovation in a positive light. What should be avoided are reviewer reactions such as: "The back-end is sophisticated, follows a unique approach and disrupts a market, but I do not think it is cutting-edge enough from a scientific perspective" or "The product is scientifically sound, but how will you convince me to buy it?" Especially when it comes to software solutions, there can be purists who neglect the EIC's focus on industry disruption and new business models just to criticise an isolated aspect of the project.

Conclusion

It makes sense to think carefully about the keywords chosen prior to submission and to make sure that the likely background of an evaluator matches the scope and focus of the application. This is not a proven method of obtaining favourable evaluators, but it can clearly influence the evaluation result. Every professional writer has seen applications whose evaluations are contradictory and lack consensus. Often, the reason is obvious from the evaluators' comments, and it always comes down to their perspective as defined by their background. Unfortunately, this approach will likely be short-lived. The EIC is already collecting keywords throughout Step 1 of the EIC Accelerator, and manually selecting additional keywords seems redundant at this stage. Still, as long as the selection of evaluators can be influenced, it should be done carefully.