Talk to anyone who has been through a Notified Body CER review under EU MDR, and you will hear the same exhausting story. Months of work, neat compliance with MEDDEV 2.7/1 Rev.4 structure, an immaculate table of contents, careful traceability to MDR Annex XIV — and then a list of findings that all boil down to the same root cause. The science is not strong enough.
This is the awkward truth of the modern Clinical Evaluation Report. Most CERs do not fail because they look wrong. They fail because the evidence behind the words does not hold up to scrutiny. Industry surveys and the working notes that Team-NB has been publishing since 2021 keep pointing to the same set of recurring deficiencies: thin literature searches, generic state-of-the-art sections, equivalence claims that do not survive a careful read, weak critical appraisal, and a disconnect between PMCF data and the conclusions drawn from it. None of these are formatting problems. They are scientific problems.
If you are a Regulatory or Quality Manager at a small or mid-sized medtech company in Europe, the implication is uncomfortable. The compliance work you already pay for — whether to an internal regulatory team, a consulting firm, or both — was largely built to ensure your CER ticks the boxes. That is necessary but not sufficient. The boxes are easy. The science is hard, and Notified Bodies have spent the last three years getting much more rigorous about looking inside the boxes, not just at their labels.
This article unpacks the scientific evidence problem in CERs. Where it actually shows up, why it persists, what strong evidence looks like in practice, and the kind of expertise — often missing from regulatory teams — that closes the gap. If you have ever received a Notified Body finding that started with "The evidence presented is insufficient to support…", read on.
The two failure modes of a Clinical Evaluation Report
A useful way to think about CER quality is to separate two failure modes that are routinely conflated: format failures and evidence failures. They have different root causes and different fixes.
| Type of failure | What it looks like | Who usually owns the fix |
|---|---|---|
| Format failure | Missing Annex XIV section, broken cross-references to GSPR, unclear traceability between claims and supporting documents, version control issues, inconsistent terminology. | Regulatory Affairs specialist, technical writer. |
| Evidence failure | Literature search not reproducible, equivalence rationale unconvincing, state-of-the-art generic and outdated, critical appraisal missing or shallow, PMCF data inadequately integrated, statistical methods inappropriate. | Subject-matter scientist with domain expertise and literature appraisal skills. |
Format failures are visible at a first read. They are caught by checklists, peer review, or any competent regulatory consultant. They get fixed quickly because everyone agrees on what good looks like — the document either has the required section or it does not.
Evidence failures are different. They are visible only when a reviewer with subject-matter knowledge sits down with the document and the cited references and starts asking questions. Is this systematic review really systematic? Does the cited paper actually support the claim being made? Is the equivalent device actually equivalent on the three dimensions required by MDCG 2020-5? Is this statistical analysis appropriate for the data structure? These are not checklist questions. They are scientific questions, and most regulatory teams are not built to answer them.
What Notified Body findings actually say
Reading through anonymized findings collected from medtech companies in 2023-2026, a pattern emerges. The same five themes show up in nearly every CER that receives a deficiency letter on clinical evidence.
1. Literature searches that are not reproducible
The CER will say something like "A comprehensive search of PubMed and Embase was conducted." A Notified Body reviewer will look for the search strategy: the exact query strings, the date range, the inclusion and exclusion criteria, the screening protocol, the PRISMA-style flow diagram, the number of records identified and excluded at each stage. Often this is missing or presented in a way that cannot be replicated by an independent reader. The reviewer cannot tell whether the search captured all relevant evidence or missed half of it, so the entire downstream analysis is suspect.
2. Equivalence justifications that overreach
MDCG 2020-5 sets a high bar for clinical, technical, and biological equivalence. The clinical dimension requires the same clinical condition and intended purpose, similar severity and stage of disease, the same site in the body, and a similar population. Technical equivalence requires similar design, specifications, and principles of operation under similar conditions of use. Biological equivalence requires the same materials or substances in contact with the same human tissues or body fluids, with similar contact duration and release characteristics. The recurring failure is presenting an equivalence claim as though it satisfies these conditions when a careful reader can identify multiple discrepancies. A reviewer who knows the device class will spot this quickly. Without scientific authorship, the gaps are invisible to the writer.
3. State-of-the-art that reads like a textbook
The state-of-the-art section of a CER is supposed to position your device against current alternatives, current standards of care, current scientific understanding of the underlying condition. Done well, it shows the Notified Body that you understand the field you are operating in and have a defensible rationale for the design choices you made. Done badly, it reads like a generic introduction copied from a review article, with no specific engagement with where the field has moved in the last 3-5 years. Notified Body reviewers see hundreds of these per year and recognize the difference immediately.
4. Critical appraisal that is missing or pro forma
Every clinical evidence source cited in a CER should be critically appraised — assessed for relevance to your device, methodological quality, applicability to your intended use, and weight of evidence. The frameworks exist (Oxford Centre for Evidence-Based Medicine levels, SIGN checklists, CASP tools), but in many CERs the appraisal is reduced to a single column in a table where every entry says "high quality" with no rationale. A Notified Body cannot verify that the conclusions of the CER are supported by the cited evidence if the evidence itself has not been honestly appraised.
5. PMCF integration as an afterthought
Post-Market Clinical Follow-up is supposed to feed back into the CER on a defined cycle. The recurring failure is a CER that references the PMCF plan in a single paragraph, with no clear loop between findings from PMCF activities and updates to the clinical evaluation conclusions. The Notified Body wants to see that your CER is a living document, not a one-time deliverable. Without scientific writing skill, this integration is hard to do well.
Why the evidence problem persists
These deficiencies are well documented. MDCG guidance addresses each of them directly. Notified Bodies have been flagging them consistently for years. So why does the problem persist?
The honest answer is that the people writing most CERs are not the right people to produce the scientific content. The dominant pattern in medtech regulatory work is to delegate the CER to a Regulatory Affairs consultant or an internal RA function. RA specialists are excellent at structure, traceability, format compliance, submission packaging, and Notified Body interaction. Those are demanding skills in their own right. But they are not the same skills as conducting a systematic literature search, critically appraising biomedical evidence, or writing scientifically defensible analysis of clinical performance.
Asking a Regulatory Affairs consultant to do all of this is asking them to operate well outside their training. Some do it adequately by working with subject-matter experts on a project-by-project basis. Many do not, because the project scope and budget were not designed to include scientific authorship. The CER ships, the Notified Body reviews, the findings come back, and everyone is surprised.
What strong scientific evidence looks like in a CER
It is easier to describe the gap by showing what good looks like. A scientifically strong CER does not need to be longer or more impressive. It needs to be auditable, honest, and specific.
A reproducible search strategy
The literature search is documented in enough detail that an independent reader could reproduce it. Database names, query strings, date of search, date range, language filters, inclusion and exclusion criteria, PRISMA-style flow diagram, number of records at each stage of screening, justification for exclusions. If grey literature was searched, the sources are named. If a vendor of literature search services was used, the methodology is still documented.
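As an illustration of what "auditable" means in practice, the screening counts can be kept as structured data and checked for internal consistency before they are drawn as a PRISMA diagram. The sketch below is illustrative only; the numbers, stage names, and exclusion reasons are invented for the example and are not taken from any guidance document or real search.

```python
# Illustrative only: a minimal tally of PRISMA-style screening counts,
# with sanity checks that the numbers a reviewer will re-add actually reconcile.
# All figures below are placeholders, not data from any real search.

flow = {
    "records_identified": 412,        # PubMed + Embase + grey literature, before deduplication
    "duplicates_removed": 87,
    "records_screened": 325,          # title/abstract screening
    "records_excluded_screening": 248,
    "full_text_assessed": 77,
    "full_text_excluded": {           # every exclusion carries a stated reason
        "wrong population": 19,
        "wrong device or comparator": 23,
        "no relevant outcomes": 11,
        "case report / insufficient sample": 6,
    },
    "studies_included": 18,
}

assert flow["records_identified"] - flow["duplicates_removed"] == flow["records_screened"]
assert flow["records_screened"] - flow["records_excluded_screening"] == flow["full_text_assessed"]
assert flow["full_text_assessed"] - sum(flow["full_text_excluded"].values()) == flow["studies_included"]

print(f'{flow["studies_included"]} studies carried forward into critical appraisal')
```

The point is not the tooling; it is that every number in the flow diagram can be traced and re-added by an independent reader, which is exactly what a Notified Body reviewer will attempt.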
A specific, current state-of-the-art
The state-of-the-art section is tied to actual current literature, not generic textbook material. It identifies what alternatives exist, how they perform, where the gaps are, and how the subject device positions against this landscape. It uses recent references (last 5 years where relevant) and acknowledges where the evidence base is weak rather than overstating it. It engages with current clinical practice guidelines specific to the indication.
An honest equivalence analysis
Equivalence to an existing device is justified on each of the three dimensions specified by MDCG 2020-5, with technical evidence, not assertion. Where equivalence is partial, the gap is acknowledged and additional evidence is provided to close the gap. Where equivalence cannot be fully demonstrated, the document says so and pivots to a different evidentiary strategy — clinical investigation, well-justified literature evidence, or post-market data with appropriate weighting.
Critical appraisal with reasoning
Each piece of clinical evidence is appraised against an explicit framework. The output is not a single column of "high quality" labels but a documented assessment of methodology, applicability, and weight. Where evidence is downweighted, the reason is stated. Where it carries strong weight, the rationale is also stated. The CER reader can follow the logic from individual studies up to the overall conclusions.
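To make the contrast with a single "high quality" column concrete, here is a minimal sketch of what one appraisal entry might look like when the rationale is carried alongside the rating. The field names and the example study are invented for illustration; a real matrix would follow whichever framework (OCEBM, SIGN, CASP) the CER declares.

```python
# Illustrative only: an appraisal row that carries its reasoning,
# rather than a bare "high quality" label. Field names and the entry are invented.

from dataclasses import dataclass

@dataclass
class AppraisalEntry:
    citation: str          # first author, year
    design: str            # e.g. RCT, prospective cohort, registry
    evidence_level: str    # level assigned under the declared framework
    applicability: str     # how closely population/indication match the subject device
    weight: str            # weight given in the overall conclusions: high / moderate / low
    rationale: str         # the reasoning a Notified Body reviewer can follow

entries = [
    AppraisalEntry(
        citation="Example et al., 2022",
        design="prospective multicentre cohort",
        evidence_level="OCEBM level 2",
        applicability="same indication, broader age range than the subject device",
        weight="moderate",
        rationale="Adequate follow-up and relevant endpoints, but single-arm design and "
                  "partial overlap with the intended population justify downweighting.",
    ),
]

# A trivial completeness check: every cited study must carry a stated rationale.
missing = [e.citation for e in entries if not e.rationale.strip()]
assert not missing, f"Appraisal rationale missing for: {missing}"
```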
Integrated PMCF
The PMCF plan is presented as part of the clinical evaluation cycle, with explicit feedback loops back into CER revisions. Trigger conditions for an unscheduled CER update are spelled out. Vigilance data, complaints, and PMS data are tied to specific elements of the clinical conclusions, not just attached as appendices.
The role of the Scientific Advisor
If the gap is in scientific content, the question is who has the expertise to fill it. The answer in many companies has been to muddle through with whoever is available — a Regulatory Affairs consultant doing their best on literature, a clinical advisor doing one-time reviews, the CEO writing the state-of-the-art at midnight before submission. None of this scales, and none of it produces consistently high-quality CERs.
The role that closes the gap is a Scientific Advisor: a credentialed expert (typically a PhD, an MD, or both) whose primary function is to produce the scientific content behind regulatory documents. Not the format. Not the submission strategy. The science.
A Scientific Advisor working on a CER will own deliverables like the structured literature search, the state-of-the-art writing, the equivalence analysis, the critical appraisal matrix, the biostatistical input on clinical data, and the science portion of the PMCF integration. They work in collaboration with the Regulatory Affairs specialist, who continues to own structure, format compliance, traceability, GSPR mapping, and Notified Body interaction. Together they produce a CER that is both compliant and scientifically defensible — which is what passes Notified Body review.
| Dimension | Regulatory Affairs Specialist | Scientific Advisor |
|---|---|---|
| Primary expertise | MDR/IVDR, MEDDEV, ISO 13485, ISO 14971, submission strategy. | Biomedical science, study design, biostatistics, literature appraisal, clinical writing. |
| Typical background | Industry experience in medtech, regulatory training, often RAC certified. | PhD or MD with research publications and subject-matter depth in the device area. |
| CER deliverables | Document structure, GSPR mapping, traceability, format compliance, NB interaction. | Literature search, state-of-the-art, equivalence analysis, critical appraisal, scientific narrative, biostatistics. |
| Failure mode if absent | Format and process failures, missed submission requirements, NB miscommunication. | Evidence quality failures, weak scientific reasoning, deficiency letters on clinical content. |
When to bring in a Scientific Advisor
Not every CER project needs equal scientific authorship. The triggers below indicate when this expertise will measurably improve outcomes.
- Before a first MDR certification submission — you only get one chance to make a first impression with a Notified Body. Strong scientific authorship up front prevents an expensive cycle of findings and rework.
- After Notified Body deficiencies on clinical evidence — if your CER has been flagged for evidence quality, the issue rarely fixes itself with a redraft by the same team. Bringing in scientific expertise is often the fastest route to acceptance.
- Periodic CER updates — for higher-risk classes, CERs are updated at least every 1-2 years. A Scientific Advisor on the update cycle ensures the literature and state-of-the-art remain current and the PMCF integration stays meaningful.
- Before a CECP (Clinical Evaluation Consultation Procedure) — for Class III implantable devices and certain Class IIb active devices intended to administer or remove a medicinal product, the CECP subjects your CER to expert panel review. Scientific rigor is non-negotiable.
- When entering a new indication — expanding a CE-marked device into a new indication often requires substantial new clinical evidence work. Scientific authorship is part of the cost of expansion.
- When the equivalence claim is doing heavy lifting — if you are relying on equivalence to an existing device for most of your clinical evidence, the rigor of that equivalence analysis will determine your outcome. Get this right.
How to select a Scientific Advisor
Not all PhDs make good Scientific Advisors. The role requires a specific combination of skills that not everyone with the right credential has.
- Verifiable academic credentials — PhD or equivalent doctorate in a relevant biomedical discipline. Public ORCID, peer-reviewed publications, evidence of methodological training.
- Domain match — expertise should overlap with your device area. A Scientific Advisor with a strong record in cardiovascular devices is not automatically the right person for an in vitro diagnostic for infectious disease.
- Scientific writing capability — the ability to produce structured, defensible scientific prose at the level needed for regulatory documents. This is a learned skill, distinct from research writing.
- Literature appraisal experience — comfort with systematic search methodology, PRISMA framework, appraisal tools (SIGN, CASP, GRADE).
- Biostatistical literacy — able to evaluate sample size, choice of test, handling of missing data, multiple comparison corrections. R, SPSS, or equivalent (see the sketch after this list).
- Independence and integrity — no conflicts of interest with the device or sponsoring company. Willing to flag weaknesses in the evidence rather than rationalize them.
- Honest scope — a good Scientific Advisor will tell you what they cannot do (typically: regulatory submission strategy, QMS implementation, Notified Body interaction) rather than overpromise.
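To make the biostatistics point concrete, here is a minimal sketch of the kind of check a Scientific Advisor should be able to run routinely: re-deriving the per-group sample size implied by a study's own stated assumptions. The effect size, alpha, and power are placeholder values, not drawn from any real study, and the sketch assumes Python with the statsmodels package rather than R or SPSS.

```python
# Illustrative sketch: re-deriving a per-group sample size from a study's
# stated assumptions (placeholder values; requires the statsmodels package).

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # standardized mean difference the study claims to detect
    alpha=0.05,        # two-sided significance level
    power=0.80,        # target power
    ratio=1.0,         # equal allocation between arms
)
print(f"Required per-group sample size: {n_per_group:.0f}")

# If the published study enrolled far fewer participants than this,
# its negative findings should be downweighted in the appraisal.
```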
Common misconceptions
"Our regulatory consultant handles all of this"
They may. They may not. Ask them specifically who is conducting the literature search and writing the state-of-the-art. If the answer is vague or if the same person is responsible for format, content, and submission, ask how they are getting the science right. The best regulatory consultants will tell you they subcontract scientific writing to specialists, and that the budget covers that. The less capable ones will tell you they handle it themselves, and it will show in the result.
"We are too small for this"
Small medtech companies are exactly the ones that benefit most from external scientific authorship. Large companies have in-house medical writers and clinical evaluation departments. Small companies have a CEO who writes the CER on weekends. The cost of a strong Scientific Advisor on a focused engagement is small compared to the cost of a delayed CE certification or a deficiency response cycle.
"It is just a literature review"
It is not just a literature review. The literature review is one component. Strong scientific input also produces the state-of-the-art section, equivalence justification, critical appraisal matrices, statistical analysis plans, and the scientific narrative that ties everything together. Treating it as "just literature" is part of why so many CERs fail.
"AI tools will solve this"
Large language models can accelerate parts of literature screening and produce drafts of routine sections. They cannot produce the judgment required for equivalence analysis, critical appraisal, or weighing conflicting evidence. AI is a force multiplier for someone who already has the expertise. It is not a replacement for the expertise itself, and submitting AI-generated content as your scientific input is unlikely to survive Notified Body scrutiny.
What changes when you fix this
Companies that integrate scientific authorship into their CER process report a few consistent outcomes. Notified Body findings on clinical evidence drop sharply — not to zero, but the recurring pattern of deficiency letters on literature, state-of-the-art, and equivalence becomes the exception rather than the rule. Time from submission to certification compresses because the back-and-forth on evidence quality is reduced. PMCF activities become more useful because they are designed by people who understand what data will close which gaps. And internal teams free up to focus on what they do best instead of trying to be experts in a domain they were not trained for.
None of this is magic. It is just putting the right expertise on the right work.
Bottom line
If your CER process is dominated by Regulatory Affairs format work and the science is being squeezed into whatever capacity is left over, the next Notified Body review will probably tell you so. The fix is not more hours from the same team. It is bringing in scientific authorship as a distinct, complementary role — a Scientific Advisor who owns the evidence work the way your RA specialist owns the format and submission work.
This is not a new idea. Large consulting firms have been quietly staffing CERs this way for years, charging accordingly. What is changing is that smaller medtech companies are recognizing they need the same approach and are finding ways to access scientific expertise without enterprise-scale budgets. That is the shift to watch over the next 18 months as MDR transition pressure peaks.
If you have just received a Notified Body finding on clinical evidence and are not sure where to start, the answer is usually the same: get a critical second read on your CER from someone outside your regulatory pipeline who has the credentials and the time to do it carefully. The cost of that read is small. The cost of guessing wrong about your evidence is not.
Frequently asked questions
What is the most common reason a CER is rejected by a Notified Body?
Insufficient or inadequately appraised clinical evidence is consistently the top finding category. The deficiencies cluster around unfocused literature searches, weak equivalence rationale, generic state-of-the-art sections, and missing or shallow critical appraisal of cited evidence.
Is a Regulatory Affairs consultant enough to write a strong CER?
Not in most cases. A regulatory consultant ensures correct structure, compliance with MEDDEV 2.7/1 Rev.4, and traceability to MDR Annex XIV. Producing the scientific content itself requires distinct subject-matter expertise. Effective CER teams combine both roles.
What is a Scientific Advisor in medtech?
A Scientific Advisor is a credentialed expert — typically holding a PhD, MD, or both — who provides the scientific content behind regulatory documents. Deliverables include structured literature reviews, state-of-the-art writing, equivalence analyses, performance evaluation drafting, and biostatistical input. They complement rather than replace a Regulatory Affairs specialist.
When should a medtech company engage a Scientific Advisor?
Common trigger points include first MDR certification submissions, response to Notified Body deficiencies on clinical evidence, scheduled CER updates, preparation for a CECP, expansion into a new indication, and any project where an equivalence claim is doing significant evidentiary work.
How much does scientific advisory work typically cost?
Scoped engagements like a structured literature review for a CER typically range from 1,500 to 4,000 EUR. A full scientific contribution to a Class IIa or IIb CER — literature, state-of-the-art, equivalence, critical appraisal — is in the range of 6,000 to 15,000 EUR, depending on complexity. For comparison, a full CER from a traditional consultancy is typically 15,000 to 40,000 EUR for the same risk class. The scientific work is roughly 30-50% of the cost.
What credentials should I look for in a Scientific Advisor?
PhD or equivalent doctorate in a biomedical discipline. Public ORCID. Peer-reviewed publications, ideally in the device or condition area. Demonstrated literature appraisal experience. Familiarity with biostatistical tools. Evidence of scientific writing capability. Independence from conflicts of interest with the sponsor.
Related reading
- Clinical Evaluation Report under EU MDR: a practical guide
- EU MDR & EUDAMED 2026: action plan for small manufacturers
- Biosensor regulatory pathway: EU MDR, IVDR & FDA classification guide
- IVDR transition guide for small labs & startups
Try the free MDR Classification tool
Before you start working on clinical evidence, you need to know which class your device falls under. Our free MDR Classification Wizard walks through all 22 rules of Annex VIII and produces a defensible classification report — no signup required.