MECIR Manual
Synthesizing the results of included studies (C61-C73)

Cochrane Interactive Learning (CIL): module 6 - analysing the data

  Standard | Rationale and elaboration | Resources
C61 Combining different scales Mandatory  
  If studies that used different scales are combined, ensure that higher scores for continuous outcomes all have the same meaning for any particular outcome; explain the direction of interpretation; and report when directions are reversed. Sometimes higher scores on a scale reflect a ‘better’ outcome, and sometimes lower scores do. Meaningless (and misleading) results arise when effect estimates with opposite clinical meanings are combined.

 

Cochrane Training resource: analysing continuous outcomes
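The direction-alignment step above can be sketched as sign-flipping before pooling. This is an illustration only, not part of the standard; the study names and mean differences are invented:

```python
# Sketch: aligning effect directions before pooling (hypothetical data).
# Mean differences from three invented studies; in study "B" higher
# scale scores mean a *better* outcome, so its sign must be flipped so
# that negative values mean "improvement" for every study.
study_md = {"A": -2.1, "B": 1.4, "C": -0.8}
higher_is_better = {"A": False, "B": True, "C": False}

aligned = {
    name: (-md if higher_is_better[name] else md)
    for name, md in study_md.items()
}
# After alignment, all three estimates share one clinical direction,
# and the reversal for study "B" should be reported explicitly.
```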

C62 Ensuring meta-analyses are meaningful Mandatory  
  Undertake (or display) a meta-analysis only if participants, interventions, comparisons and outcomes are judged to be sufficiently similar to ensure an answer that is clinically meaningful. Meta-analyses of very diverse studies can be misleading, for example where studies use different forms of control. Clinical diversity does not necessarily indicate that a meta-analysis should not be performed. However, authors must be clear about the underlying question that all studies are addressing.

 

Cochrane Training resources: intro to meta-analysis and exploring heterogeneity

C63 Assessing statistical heterogeneity Mandatory  
  Assess the presence and extent of between-study variation when undertaking a meta-analysis. The presence of heterogeneity affects the extent to which generalizable conclusions can be formed. It is important to identify heterogeneity in case there is sufficient information to explain it and offer new insights. Authors should recognize that there is much uncertainty in measures such as I² and Tau² when there are few studies; thus, the use of simple thresholds to diagnose heterogeneity should be avoided.

See Handbook (version 6), Section 10.10.2

Cochrane Training resource: exploring heterogeneity
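The quantities mentioned above (I² and Tau²) can be made concrete with a small sketch, computing Cochran's Q, I² and the DerSimonian–Laird Tau² from per-study effect estimates and variances. This is illustrative only; the function name and all numbers are hypothetical, and with few studies these estimates are themselves highly uncertain, which is exactly why the standard warns against simple thresholds:

```python
import math  # not strictly needed here, kept for downstream use


def dersimonian_laird(effects, variances):
    """Q statistic, I^2 (as %) and DerSimonian-Laird tau^2 from
    per-study effect estimates and their variances, using
    inverse-variance weights. Assumes at least two studies.
    Illustrative sketch only."""
    w = [1.0 / v for v in variances]
    pooled_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect pool.
    q = sum(wi * (yi - pooled_fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Scaling constant for the method-of-moments tau^2 estimator.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    return q, i2, tau2
```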

C64 Addressing missing outcome data Highly desirable  
  Consider the implications of missing outcome data from individual participants (due to losses to follow-up or exclusions from analysis). Incomplete outcome data can introduce bias. In most circumstances, authors should follow the principles of intention-to-treat analyses as far as possible (this may not be appropriate for adverse effects or if trying to demonstrate equivalence). Risk of bias due to incomplete outcome data is addressed in the Cochrane 'risk-of-bias' tool. However, statistical analyses and careful interpretation of results are additional ways in which the issue can be addressed by review authors. Imputation methods can be considered (accompanied by, or in the form of, sensitivity analyses).

See Handbook (version 6), Section 10.12.1

Cochrane Training resources: assessing RoB included studies and RoB 2.0 webinar

C65 Addressing skewed data Highly desirable  
  Consider the possibility and implications of skewed data when analysing continuous outcomes. Skewed data are sometimes not summarized usefully by means and standard deviations. While statistical methods are approximately valid for large sample sizes, skewed outcome data can lead to misleading results when studies are small.

See Handbook (version 6), Section 10.5.3

Cochrane Training resource: analysing continuous outcomes
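One rough screen for skew, when only means and SDs are reported and the outcome cannot fall below some bound (e.g. length of stay cannot be negative), checks whether the mean sits less than about two SDs above that bound. This is a heuristic sketch only, not MECIR text and not a substitute for inspecting the data; the helper name is hypothetical:

```python
def skew_flag(mean, sd, lower_bound=0.0):
    """Rough screen for skew in a variable that cannot fall below
    `lower_bound`: if the mean is less than two SDs above the bound,
    a roughly symmetric distribution would have to cross the bound,
    suggesting the data are skewed. Heuristic sketch only."""
    return (mean - lower_bound) < 2 * sd
```

For example, a reported length of stay of 5 days with SD 4 would be flagged, since 5 - 2×4 falls below zero.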

C66 Addressing studies with more than two groups Mandatory  
  If multi-arm studies are included, analyse multiple intervention groups in an appropriate way that avoids arbitrary omission of relevant groups and double-counting of participants. Excluding relevant groups decreases precision and double-counting increases precision spuriously; both are inappropriate and unnecessary. Alternative strategies include combining intervention groups, separating comparisons into different forest plots and using network meta-analysis.

See Handbook (version 6), Section 6.2.9 and Chapter 11.

Cochrane Training resource: analysing non-standard data & study designs
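The "combining intervention groups" strategy mentioned above uses the standard pooled-group algebra for N, mean and SD. A sketch with a hypothetical helper (the arm values in the usage below are invented):

```python
import math


def combine_arms(n1, m1, sd1, n2, m2, sd2):
    """Pool two intervention arms into one group (combined N, mean,
    SD) so a shared control is compared once, avoiding double-counting.
    Standard pooled-group algebra; illustrative sketch only."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    # Pooled variance: within-arm variation plus between-arm mean spread.
    var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2
           + n1 * n2 / n * (m1 - m2) ** 2) / (n - 1)
    return n, m, math.sqrt(var)
```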

C67 Comparing subgroups Mandatory  
  If subgroup analyses are to be compared, and there are judged to be sufficient studies to do this meaningfully, use a formal statistical test to compare them. Concluding that there is a difference in effect in different subgroups on the basis of differences in the level of statistical significance within subgroups can be very misleading.

See Handbook (version 6), Section 10.11.3.1

Cochrane Training resources: exploring heterogeneity and common interpretation errors
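For the common two-subgroup case, one formal comparison is a z test on the difference between the subgroup estimates, rather than contrasting which subgroup reached "significance". A simplified sketch; the function name and all inputs are hypothetical:

```python
import math


def subgroup_test(est1, se1, est2, se2):
    """Formal comparison of two subgroup estimates: z test on their
    difference. Returns the interaction z and a two-sided p-value.
    Simplified illustrative sketch (two subgroups only)."""
    z = (est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # Two-sided normal p-value via the complementary error function.
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p
```

Identical subgroup estimates give p = 1; a large difference relative to the combined standard error gives a small p, which is the evidence that the effects actually differ between subgroups.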

C68 Interpreting subgroup analyses Mandatory  
  If subgroup analyses are conducted, follow the subgroup analysis plan specified in the protocol without undue emphasis on particular findings. Selective reporting, or over-interpretation, of particular subgroups or particular subgroup analyses should be avoided. This is a problem especially when multiple subgroup analyses are performed. This does not preclude the use of sensible and honest post hoc subgroup analyses.

 

See Handbook (version 6), Section 10.11.5.2

Cochrane Training resources: exploring heterogeneity and common interpretation errors

 

C69 Considering statistical heterogeneity when interpreting the results Mandatory  
  Take into account any statistical heterogeneity when interpreting the results, particularly when there is variation in the direction of effect. The presence of heterogeneity affects the extent to which generalizable conclusions can be formed. If a fixed-effect analysis is used, the confidence intervals ignore the extent of heterogeneity. If a random-effects analysis is used, the result pertains to the mean effect across studies. In both cases, the implications of notable heterogeneity should be addressed. It may be possible to understand the reasons for the heterogeneity if there are sufficient studies.

See Handbook (version 6), Section 10.10.3

Cochrane Training resource: exploring heterogeneity
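The fixed-effect versus random-effects distinction above can be sketched with inverse-variance pooling: setting Tau² to zero gives the fixed-effect result, while a positive Tau² widens the interval to reflect between-study variation. A simplified sketch with hypothetical numbers, not the full method:

```python
import math


def pool(effects, variances, tau2=0.0):
    """Inverse-variance pooled estimate with a 95% CI.
    tau2=0 reproduces the fixed-effect analysis; tau2>0 gives the
    random-effects analysis, whose wider interval reflects
    between-study variation. Illustrative sketch only."""
    w = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)
```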

C70 Addressing non-standard designs Mandatory  
  Consider the impact on the analysis of clustering, matching or other non-standard design features of the included studies. Cluster-randomized trials, cross-over trials, studies involving measurements on multiple body parts, and other designs need to be addressed specifically, since a naive analysis might underestimate or overestimate the precision of the study. Failure to account for clustering is likely to overestimate the precision of the study, that is, to give it confidence intervals that are too narrow and a weight that is too large. Failure to account for correlation is likely to underestimate the precision of the study, that is, to give it confidence intervals that are too wide and a weight that is too small.

See Handbook (version 6), Section 6.2.1

Cochrane Training resource: non-standard study designs
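The clustering point above can be illustrated with the standard design-effect approximation, shrinking a cluster-randomized arm's sample size by 1 + (m − 1) × ICC so that a naive analysis does not overstate precision. The helper name and numbers are hypothetical; this is an approximate adjustment, not the only valid approach:

```python
def effective_sample_size(n, avg_cluster_size, icc):
    """Adjust a cluster-randomized arm's sample size by the design
    effect 1 + (m - 1) * ICC, where m is the average cluster size and
    ICC the intracluster correlation. Standard approximation;
    illustrative sketch only."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n / design_effect
```

For example, 300 participants in clusters of 30 with an ICC of 0.1 carry roughly the information of 77 independently randomized participants.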

C71 Sensitivity analysis Highly desirable  
  Use sensitivity analyses to assess the robustness of results, such as the impact of notable assumptions, imputed data, borderline decisions and studies at high risk of bias. It is important to establish whether results are robust, since this determines whether the conclusions should be strengthened or weakened.

See Handbook (version 6), Section 10.14

Cochrane Training resource: exploring heterogeneity

C72 Interpreting results Mandatory  
  (Do not describe results as statistically significant or non-significant. Interpret the confidence intervals and their width.) Focus interpretation of results on estimates of effect and their confidence intervals, avoiding use of a distinction between “statistically significant” and “statistically non-significant”. Authors commonly mistake a lack of evidence of effect for evidence of a lack of effect.

See Handbook (version 6), Section 15.3.1

Cochrane Training resource: common interpretation errors

CIL: module 7 - interpreting the findings

C73 Investigating risk of bias due to missing results Highly desirable  
  Consider the potential impact of non-reporting biases on the results of the review or the meta-analyses it contains. There is overwhelming evidence of non-reporting biases of various types. These can be addressed at various points in the review. A thorough search, and attempts to obtain unpublished results, might minimize the risk. Analyses of the results of included studies, for example using funnel plots, can sometimes help determine the possible extent of the problem, as can attempts to identify study protocols, which should be a routine feature of Cochrane Reviews.

See Handbook (version 6), Section 13.4

Cochrane Training resources: small study effects & reporting biases

CIL: module 7 - interpreting the findings
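The funnel-plot analyses mentioned above are often summarized with an Egger-type regression of the standardized effect on precision; an intercept far from zero hints at small-study effects. A hypothetical sketch that returns the intercept only (formal inference needs the t test from the full regression; all inputs are invented):

```python
def egger_intercept(effects, ses):
    """Egger-type funnel asymmetry screen: ordinary least-squares
    regression of the standardized effect (effect/SE) on precision
    (1/SE), returning the intercept. An intercept far from zero hints
    at small-study effects. Illustrative sketch only."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Centered sums for the OLS slope, then back out the intercept.
    sxy = sum(xi * yi for xi, yi in zip(x, y)) - n * mx * my
    sxx = sum(xi * xi for xi in x) - n * mx * mx
    slope = sxy / sxx
    return my - slope * mx
```

A symmetric set of results (effects proportional to nothing but chance around a common value) yields an intercept near zero; such screens have low power with few studies and are only one part of assessing non-reporting bias.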

 

Section info
Contact
methods@cochrane.org
Describe change
C73: Investigating reporting biases. Consider the potential impact of reporting biases on the results of the review or the meta-analyses it contains. Changed to: Investigating risk of bias due to missing results. Consider the potential impact of non-reporting biases on the results of the review or the meta-analyses it contains. Rationale and elaboration: There is overwhelming evidence of reporting biases of various types. These can be addressed at various points in the review. A thorough search, and attempts to obtain unpublished results, might minimize the risk. Analyses of the results of included studies, for example using funnel plots, can sometimes help determine the possible extent of the problem, as can attempts to identify study protocols, which should be a routine feature of Cochrane Reviews. Changed to: There is overwhelming evidence of non-reporting biases of various types. These can be addressed at various points in the review. A thorough search, and attempts to obtain unpublished results, might minimize the risk. Analyses of the results of included studies, for example using funnel plots, can sometimes help determine the possible extent of the problem, as can attempts to identify study protocols, which should be a routine feature of Cochrane Reviews.

8/8/2019
C61 column 4: Handbook 9.2.3.2 -changed to- BLANK
C62 column 4: See Handbook 9.1.4 -changed to- BLANK
C63 column 4: See Handbook 9.5.2 -changed to- See Handbook (version 6), Section 10.10.2
C64 column 3: Risk of bias tool -changed to- 'risk-of-bias' tool
C64 column 4: See Handbook 16.2 -changed to- See Handbook (version 6), Section 10.12.1
C65 column 4: See Handbook 9.4.5.3 -changed to- See Handbook (version 6), Section 10.5.3
C66 column 3: and using multiple treatments meta-analysis. -changed to- and using network meta-analysis.
C66 column 4: See Handbook 7.7.3.8, 16.5.4 -changed to- See Handbook (version 6), Section 6.2.9 and Chapter 11.
C67 column 4: See Handbook 9.6.3.1 -changed to- See Handbook (version 6), Section 10.11.3.1
C68 column 4: See Handbook 9.6.5.2 -changed to- See Handbook (version 6), Section 10.11.5.2
C69 column 4: See Handbook 9.5.4 -changed to- See Handbook (version 6), Section 10.10.3
C70 column 3: of the study, i.e., to give it (x2) -changed to- of the study, that is, to give it (x2)
C70 column 4: see Handbook 9.3, 16.3, 16.4 -changed to- See Handbook (version 6), Section 6.2.1
C71 column 4: see Handbook 9.7 -changed to- See Handbook (version 6), Section 10.14
C72 column 2: Interpret a statistically non-significant P value (e.g. larger than 0.05) as a finding of uncertainty unless confidence intervals are sufficiently narrow to rule out an important magnitude of effect. -changed to- (Do not describe results as statistically significant or non-significant. Interpret the confidence intervals and their width.)
Focus interpretation of results on estimates of effect and their confidence intervals, avoiding use of a distinction between “statistically significant” and “statistically non-significant".
C72 column 4: See Handbook 12.4.2, 12.7.4 -changed to- See Handbook (version 6), Section 15.3.1
C73 column 4: See Handbook 10.1, 10.2 -changed to- See Handbook (version 6), Section 13.4

26/09/2019: inserted links to Handbook (version 6)
Change date
26 September 2019