Key takeaways

  • Most rejections are not about technology quality. They are about narrative, market analysis and team credibility.
  • The five most common failure patterns appear across all EU instruments, from EIC Accelerator to Horizon Europe collaborative projects.
  • Evaluator feedback on rejected applications is specific and actionable, yet it is consistently ignored at reapplication.
  • A weak Impact section poisons the entire proposal: doubts generated there carry into every subsequent section.
  • The jury interview (where applicable) is where the final decision is made, yet it is the phase in which applicants invest the least preparation time.

There is a persistent belief in the innovation ecosystem that EU funding applications are rejected because the technology was not good enough, or because the programme is too competitive, or because the evaluators did not understand the project. In the vast majority of cases I have reviewed, none of these explanations is accurate.

The technology was often solid. The competition was fierce but surmountable. And evaluators, who are domain experts with real-world experience, understood the project perfectly well, which is precisely why they could see the gaps that the applicants had glossed over.

What I consistently observe is that the same five failure patterns appear across instruments, applicant types and technology sectors. They are not random. They are predictable, diagnosable, and fixable with enough lead time.

The five failure patterns

01
Most common failure
The market analysis is built top-down from a report, not bottom-up from reality

The sentence I read most often in rejected proposals is some version of: "The global market for X is projected to reach Y billion by Z." This number is then divided by a percentage that feels aspirational rather than derived, and the result is presented as the company's addressable opportunity. Evaluators have read thousands of proposals structured this way. It signals one thing: the applicant does not actually know their market. A genuinely market-literate team knows who specifically buys, at what price point, through which channel, and with what purchase decision timeline. They have talked to those buyers. They can name them. The TAM figure from a market research report is a starting point for framing, not a substitute for customer understanding.

Fix: rebuild your market analysis from the bottom up. Start from your actual or target customers: how many organisations of this type exist in your primary geography, what budget do they allocate to this problem, and what fraction of them are reachable in your timeframe. Then layer the market report figure on top as context. This approach takes longer but produces numbers that hold up under the scrutiny of a thirty-five-minute jury session.
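The bottom-up arithmetic described above can be made explicit. A minimal sketch, where every figure is a hypothetical placeholder to illustrate the structure of the calculation, not data from any real market:

```python
# Hypothetical bottom-up market sizing. All numbers are illustrative
# placeholders; replace them with figures you can defend customer by customer.

target_orgs = 4_000          # organisations of the target type in the primary geography
annual_budget_eur = 25_000   # budget each allocates to this problem per year
reachable_fraction = 0.15    # share realistically reachable in the project timeframe

bottom_up_opportunity = target_orgs * annual_budget_eur * reachable_fraction
print(f"Bottom-up addressable opportunity: EUR {bottom_up_opportunity:,.0f}")
# → Bottom-up addressable opportunity: EUR 15,000,000
```

The point of writing it this way is that each input is individually defensible in a jury session: the organisation count can be sourced, the budget can be evidenced from customer conversations, and the reachable fraction follows from your sales capacity. A top-down TAM figure offers none of those handholds.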

02
Second most common
Zero evidence of demand, with no apparent concern from the team

There is a version of this failure that is straightforward: no customers, no pilots, no letters of intent, nothing. The proposal describes a problem that exists in theory, a solution that works in the lab, and a market that will surely materialise. Evaluators are not convinced by "will" and "should". A more insidious version is when evidence of demand exists but is not presented: a team that has run pilots, spoken to dozens of potential customers and received verbal commitments, but writes the proposal as though none of this happened, because they assume the technology speaks for itself. It does not. In both cases, the evaluator's read is the same: this team has not validated whether anyone will actually pay for this.

Fix: document every signal of demand you have, no matter how early stage. A paying pilot, even at a reduced rate, is worth more than a thousand market report citations. A letter of intent from a named potential customer, even non-binding, demonstrates that someone outside the founding team finds this problem worth solving. If you genuinely have no demand evidence yet, that is important information too: it means the project may be at an earlier stage than the instrument requires.

03
Third most common
The competitive analysis either ignores competitors or dismisses them

"We have no direct competitors" is the sentence that makes evaluators most sceptical. It means one of two things: either the market does not yet exist (in which case the demand section is also weak), or the competitive analysis has not been done seriously. Both are problems. The alternative failure mode is the opposite: naming competitors, then asserting without evidence that the company's solution is superior across every dimension. Evaluators are often experts in the same sector. They know the competitors. They know their strengths. A competitive analysis that is obviously cherry-picked or superficial damages credibility across the entire proposal.

Fix: treat the competitive analysis as an opportunity to demonstrate domain expertise, not as a box to check. Name the real alternatives, including indirect substitutes and the "do nothing" option. Be specific about where your solution is genuinely better, and honest about where it is not yet. A well-argued competitive positioning that acknowledges trade-offs is far more convincing than a table full of green checkmarks next to your company's name.

04
Fourth most common
The team section reads like a CV collection, not an execution argument

The Implementation dimension of EU proposals is frequently treated as administrative: list the team members, attach their CVs, summarise their academic credentials. What evaluators are actually trying to determine is whether this specific group of people, with these specific backgrounds and this specific dynamic, can execute a plan of this ambition in this timeframe. That is a different question from "are these people qualified?" A collection of impressive individual CVs does not answer it. What the section needs to demonstrate is: why this team, why now, why together. What has each person already built or delivered that is relevant? Where are the gaps and how will they be closed? Who has done this before?

Fix: rewrite the team section as an argument, not a directory. Lead with the collective track record that is most relevant to the project's execution. If a team member previously scaled a B2B SaaS company from zero to ten million in revenue, that belongs in the opening, not buried in an appendix CV. Name the missing skills explicitly and explain the hiring plan. Evaluators respect honesty about gaps far more than proposals that pretend the team is complete when it obviously is not.

05
Fifth most common
The proposal does not demonstrate why public funding is necessary

This failure is subtle and often invisible to the applicant. EU funding instruments are designed to support projects that cannot be financed, or cannot be financed at the required speed or scale, through private channels. This is called additionality. A proposal that reads as though the company could easily raise the money from a VC, or that has already raised significant private capital, struggles to answer the implicit question every evaluator asks: why does this need public money? The opposite failure also exists: proposing a project so early-stage or so far from commercial application that the grant instrument is obviously the wrong fit.

Fix: articulate explicitly why private financing is insufficient or unavailable for this specific project at this specific stage. The most credible arguments are: the technological risk is too high for private investors at this TRL; the time to market is too long for standard VC return expectations; the application sector (defence, health, deep infrastructure) requires validation that private capital will not fund alone. Make this argument concrete, not generic.

The feedback loop that most applicants break

For instruments like the EIC Accelerator, the European Commission provides written evaluator feedback on every rejected full application. This feedback is specific, detailed, and directly actionable. It tells you, in the evaluators' own words, what was weak and why.

In my experience, this feedback is the single most underused resource in the EU funding ecosystem. The most common response to a rejection is a combination of frustration and rationalisation: the evaluators did not understand, the scoring was unfair, the competition was too strong. This may occasionally be true. But in the majority of cases, the feedback is accurate, and reapplications that do not substantially address it are evaluated by people who notice that nothing has changed.

A pattern I have observed repeatedly: a team receives feedback that their market analysis is weak and their competitive positioning is vague. They spend the next three months improving the technical sections of the proposal (which were already strong) and resubmit with the same market analysis lightly rephrased. The outcome is the same. The feedback was a map. They did not use it.

What a fixable proposal looks like

The proposals that succeed after an initial rejection share a common characteristic: the team treated the feedback as a diagnostic, not as a verdict. They did the work the feedback implied: usually market research, customer validation, or competitive analysis. Then they rewrote the proposal. The proposal improved because the underlying understanding improved, not because the sentences were better crafted.

There is a structural reason why this matters more than writing quality. Evaluators read proposals under time pressure, often in parallel batches. What they are doing is pattern recognition: does this proposal have the markers of a fundable project? The markers are not literary. They are evidential: specific customer names, defensible numbers, a team that has done this before, a competitive map that shows genuine understanding of the space. When those markers are present, the proposal reads as credible even if it is not perfectly written. When they are absent, no amount of polished prose compensates.

The one question worth asking before you apply

Before investing eight to sixteen weeks in a full application, there is a single diagnostic question worth sitting with: if a sceptical investor who knows your sector asked you to justify every number and assumption in this proposal in a thirty-minute conversation, how would that go?

If the answer is "fine", meaning you can defend the market sizing from the bottom up, name the customers behind the demand evidence, explain exactly why your team is the right one for this project, and articulate why you need public funding rather than private capital, then you are probably ready to apply. If there are sections you would want to avoid or hand-wave through, those are exactly where the evaluators will focus.

Next step

Want an outside view on your application before you submit?

A structured review of your proposal from someone who has sat on the evaluation panels can identify the gaps before they become rejections. Let us talk about where your application stands.

Book a call