Single-Proportion Meta-Analysis: How to Pool Prevalence and Interpret a Forest Plot

A single-proportion meta-analysis pools a proportion from each study into one overall estimate. This is the method behind questions like: What is the prevalence of a complication? What proportion of patients respond to a treatment? What is the incidence of an outcome in a specific population?

This guide explains how to run a single-proportion meta-analysis step by step, how to choose a transformation, how to handle zero events, and how to interpret the forest plot like a reviewer.

Helpful related reads: How to Build a PubMed Search Strategy, How to Critically Appraise a Study, How to Choose Outcomes and Define Endpoints.

When single-proportion meta-analysis is the right tool

Use this approach when each study contributes one proportion, such as:

  • Prevalence of a condition in a defined group
  • Incidence of an outcome over a defined follow-up window
  • Event rates (complications, mortality) within a timeframe
  • Response rates to an intervention when there is no comparator arm

If you have two groups (intervention vs control), you usually want a comparative meta-analysis (risk ratio, odds ratio, mean difference) instead of a single-proportion model.

Step 1: Extract the correct data (events and totals)

For each study, you need:

  • x: number of events
  • n: total sample size
  • time window: for incidence, define follow-up consistently (30 days, 1 year, etc.)

Be strict with definitions. Mixing different endpoint definitions or different follow-up windows will inflate heterogeneity and make the pooled estimate less meaningful.
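As a minimal illustration of the extraction step, the data can be kept as simple (events, total) pairs per study. The study names and counts below are hypothetical:

```python
# Hypothetical extraction sheet: study -> (events x, total n).
studies = {
    "Study A": (4, 120),
    "Study B": (11, 340),
    "Study C": (0, 85),   # zero-event study: keep it in the dataset
}

for name, (x, n) in studies.items():
    print(f"{name}: x={x}, n={n}, raw proportion = {x / n:.4f}")
```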

Step 2: Decide fixed vs random effects (most prevalence questions use random)

In prevalence and incidence meta-analyses, studies often differ in setting, population severity, and diagnostic criteria. That usually means true effects vary between studies, so a random-effects model is often more appropriate than fixed-effect.

Fixed-effect can be reasonable when studies are extremely similar in design and population, but that is less common in real-world prevalence questions.
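The difference between the two models is easiest to see in the study weights: fixed-effect weights each study by 1/vᵢ (the inverse of its within-study variance), while random-effects weights by 1/(vᵢ + τ²). A sketch with made-up variances on a transformed scale:

```python
# Hypothetical within-study variances on a transformed scale.
variances = [0.08, 0.05, 0.12, 0.20]
tau2 = 0.04  # assumed between-study variance (estimated in a real analysis)

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

fixed_w = normalize([1 / v for v in variances])            # w_i = 1 / v_i
random_w = normalize([1 / (v + tau2) for v in variances])  # w_i = 1 / (v_i + tau2)

print("fixed: ", [round(w, 3) for w in fixed_w])
print("random:", [round(w, 3) for w in random_w])
# Random-effects weights are flatter: tau2 dilutes the dominance
# of the largest (lowest-variance) studies.
```

This is why a random-effects forest plot often shows more even study weights than a fixed-effect one fitted to the same data.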

Step 3: Choose a transformation (this is where many analyses go wrong)

Proportions near 0 or 1 have skewed variances. Transformations stabilize variance and improve model behavior. Common choices include:

  • Logit transformation: often a strong default for proportions, especially when not extreme.
  • Freeman-Tukey double arcsine: historically popular for rare events; it accommodates zeros without a correction, but back-transformed estimates can be less intuitive to interpret.
  • Arcsine: older option, used less often now.

Practical guidance: if your event rates include many zeros or are very close to 0 or 1, consider a transformation designed for extremes. If the event rates are moderate, logit often performs well.
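To make the two main options concrete, here is a minimal Python sketch of both transformations, using the standard formulas (the logit variance is the usual delta-method approximation):

```python
import math

def logit_transform(x, n):
    """Logit of a proportion; variance via the delta method.
    Undefined when x == 0 or x == n (needs a continuity correction)."""
    p = x / n
    return math.log(p / (1 - p)), 1 / x + 1 / (n - x)

def freeman_tukey(x, n):
    """Freeman-Tukey double-arcsine transform; accommodates x == 0 naturally."""
    t = math.asin(math.sqrt(x / (n + 1))) + math.asin(math.sqrt((x + 1) / (n + 1)))
    return t, 1 / (n + 0.5)

print(logit_transform(11, 340))   # works: 0 < x < n
print(freeman_tukey(0, 85))       # works even with zero events
```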

Step 4: Handle zero-event studies correctly

Zero events are common in rare outcomes. Your approach depends on the transformation and software. Options include:

  • Use a transformation that can accommodate zeros without unstable variance
  • Use a small continuity correction (for example, add 0.5) when required, but document it
  • Do not drop zero-event studies by default, because that biases results upward

Always report how zero-event studies were handled. Reviewers look for this.
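A hedged sketch of the continuity-correction option (the 0.5 value and the logit scale are illustrative, not prescriptive; apply the correction only to studies that need it, and report it):

```python
import math

def logit_with_correction(x, n, cc=0.5):
    """Logit transform with a continuity correction applied only
    when x == 0 or x == n. Document any correction you apply."""
    if x == 0 or x == n:
        x_adj, n_adj = x + cc, n + 2 * cc
    else:
        x_adj, n_adj = x, n
    p = x_adj / n_adj
    var = 1 / x_adj + 1 / (n_adj - x_adj)
    return math.log(p / (1 - p)), var

# The zero-event study stays in the analysis instead of being dropped:
print(logit_with_correction(0, 85))
print(logit_with_correction(11, 340))  # unchanged: no correction needed
```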

Step 5: Read heterogeneity like a reviewer (I² is not the whole story)

Heterogeneity is expected in prevalence meta-analysis. You will usually see:

  • I²: the percentage of variability across studies not explained by sampling error
  • τ² (tau-squared): the estimated between-study variance (central to random-effects weighting)

High I2 does not automatically mean your analysis is wrong. It may mean the population truly differs between studies. Your job is to interpret and explore it, not hide it.
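These quantities are cheap to compute by hand, which makes them easier to sanity-check. A sketch using made-up transformed estimates (yi) and variances (vi), with the DerSimonian-Laird estimator for the between-study variance:

```python
# Hypothetical transformed estimates (e.g. logits) and their variances.
yi = [-3.4, -2.9, -3.8, -2.5]
vi = [0.10, 0.06, 0.15, 0.09]

w = [1 / v for v in vi]
ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)

# Cochran's Q: weighted squared deviations from the fixed-effect mean
Q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, yi))
df = len(yi) - 1

# I^2: share of variability beyond sampling error (floored at 0)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird estimate of tau^2
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

print(f"Q = {Q:.2f}, I2 = {I2:.1f}%, tau2 = {tau2:.4f}")
```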

Step 6: How to interpret the forest plot (quick but accurate)

A forest plot in single-proportion meta-analysis shows:

  • Each study proportion (x/n) with a confidence interval
  • Study weights (driven by within-study variance, plus the between-study variance under random effects)
  • The pooled estimate (diamond) with its confidence interval

Interpretation checklist:

  1. Range: are study estimates clustered or widely spread?
  2. Overlap: do confidence intervals overlap substantially?
  3. Outliers: does one study sit far away from the rest?
  4. Pooled meaning: does the pooled estimate represent a clinically coherent population?

If the studies are extremely diverse, the pooled estimate can become less clinically meaningful. In that case, subgroup analysis and narrative interpretation become essential.
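One practical detail when reading the plot: if the analysis ran on a transformed scale (for example, logit), the diamond must be back-transformed to a proportion before interpretation. A sketch with a hypothetical pooled estimate and standard error:

```python
import math

# Hypothetical pooled logit estimate and its standard error.
pooled_logit, se = -3.05, 0.18

def inv_logit(t):
    """Back-transform a logit to a proportion."""
    return 1 / (1 + math.exp(-t))

lo, hi = pooled_logit - 1.96 * se, pooled_logit + 1.96 * se
print(f"pooled prevalence: {inv_logit(pooled_logit):.3f} "
      f"(95% CI {inv_logit(lo):.3f} to {inv_logit(hi):.3f})")
```

Note that the confidence limits are back-transformed directly, so the interval on the proportion scale is asymmetric around the point estimate.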

Step 7: Subgroup analysis and meta-regression (use with discipline)

To explore heterogeneity, define subgroups before analysis when possible:

  • Adults vs pediatrics
  • High-risk vs low-risk settings
  • Different diagnostic criteria
  • Different eras (older vs newer practice)

Meta-regression can be useful, but it needs enough studies to avoid unstable conclusions. Treat it as exploratory unless prespecified and adequately powered.

Step 8: Sensitivity analyses that strengthen credibility

Recommended sensitivity checks:

  • Remove studies at high risk of bias and compare pooled results
  • Try an alternate transformation (logit vs Freeman-Tukey) and compare
  • Leave-one-out analysis to see if one study drives the estimate
  • Restrict to studies with consistent endpoint definitions

If results are stable across these checks, your conclusions become much stronger.
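Leave-one-out is straightforward to script once the transformed estimates are in hand. A minimal sketch with hypothetical logit-scale estimates (fixed-effect pooling is used here purely for brevity; a real analysis would re-estimate τ² for each subset):

```python
# Hypothetical transformed estimates and variances (logit scale).
yi = [-3.4, -2.9, -3.8, -2.5, -3.1]
vi = [0.10, 0.06, 0.15, 0.09, 0.07]

def fixed_pool(ys, vs):
    """Inverse-variance weighted mean of the estimates."""
    w = [1 / v for v in vs]
    return sum(wi * y for wi, y in zip(w, ys)) / sum(w)

full = fixed_pool(yi, vi)
for i in range(len(yi)):
    ys = yi[:i] + yi[i + 1:]
    vs = vi[:i] + vi[i + 1:]
    print(f"without study {i + 1}: pooled = {fixed_pool(ys, vs):.3f} "
          f"(full = {full:.3f})")
```

If dropping a single study moves the pooled estimate substantially, report that study and discuss why it differs.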

Reporting: what reviewers expect

At minimum, report:

  • Databases searched, full search strategy, and date searched
  • Eligibility criteria and endpoint definitions
  • Model choice (fixed vs random), transformation used, and zero-event handling
  • Heterogeneity statistics (I², τ²) and how heterogeneity was explored
  • Sensitivity analyses

Outbound references for methods and reporting: Cochrane Handbook and PRISMA.

Internal workflow tip: keep extraction and analysis connected

If your platform supports it, link your extracted event counts directly to the analysis module. That reduces copy-paste errors and keeps an audit trail. In SciTrack, manage your review workflow here: Systematic Reviews Workspace.

Common mistakes (and how to avoid them)

  • Mixing definitions: combine only comparable endpoints and windows.
  • Dropping zero-event studies: this biases pooled prevalence upward.
  • Ignoring heterogeneity: explore it with subgroups and sensitivity checks.
  • Overinterpreting pooled estimate: if studies are too diverse, focus on ranges and context.

Conclusion

Single-proportion meta-analysis is powerful when done carefully. Extract clean event data, choose an appropriate transformation, use random-effects when populations vary, interpret heterogeneity honestly, and support conclusions with sensitivity analyses. When you do this well, your forest plot becomes a credible summary rather than a misleading average.
