Why meta-analysis

In addition, combining primary studies with varying sample sizes and patient populations increases generalizability, allowing the results of a meta-analysis to be applied to a wider population than those of any individual study.

Appropriately examining the heterogeneity between individual studies allows the testing of novel hypotheses that have not been proposed in previous studies [17]. As meta-analysis summarizes currently existing knowledge, it may help identify areas that lack adequate evidence, thereby producing new research questions. Meta-analysis overcomes the problems and biases of traditional narrative reviews through a more transparent and objective process that includes a systematic methodological approach.

Weaknesses of meta-analysis

The methodological weaknesses of meta-analysis are listed in Table 2. In addition, the limitations of meta-analysis, as well as suggestions for addressing them, have been described [8, 13, 14, 18].

One number cannot summarize a research field

Summarizing a large amount of varying information in a single number is a controversial aspect of meta-analysis [19], as doing so ignores the fact that treatment effects may vary from study to study. However, a meta-analysis does not simply report a summary effect; it generalizes results while accounting for differences among the primary studies. If there is substantial heterogeneity, the focus should shift from the summary effect to the heterogeneity itself.

Meta-analysis provides a variety of tools to assess the pattern of heterogeneity and, possibly, to explain it. Meta-analysis should be avoided if studies are too heterogeneous to be comparable, as the meta-analytical results may be meaningless and true effects may be obscured.

However, meta-analyses, by their very nature, address broader questions than individual studies. Therefore, it can be said that a meta-analysis is similar to asking a question about fruits, for which both apples and oranges can contribute valuable information.

Meta-analysis includes a set of criteria for determining which studies to analyze. Hence, meta-analysis should be based on strict criteria regarding the quality of the studies to be included. When the available studies are flawed, a meta-analysis may employ sensitivity analyses to identify the influence of study biases.

Heterogeneity

In meta-analysis, heterogeneity refers to the degree of dissimilarity in the results of individual studies [2].

The main assumption underlying meta-analysis is that the included studies are homogeneous in terms of populations, interventions, controls, and outcomes. Assessing the heterogeneity between primary studies is an important step in conducting a meta-analysis [21]. If there is substantial heterogeneity, the focus of the analysis should be on exploring and understanding its sources.
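The standard tools for this assessment, not named in the text above, are Cochran's Q statistic and the I² index. The following is a minimal sketch (the function name and example numbers are ours), assuming study-level effect estimates and their variances have already been extracted:

```python
def cochran_q_i2(effects, variances):
    """Cochran's Q and the I^2 index, using inverse-variance
    (fixed-effect) weights for the pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of total variation due to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical example: five consistent studies -> low Q, I^2 of 0%
effects = [0.42, 0.38, 0.45, 0.40, 0.43]
variances = [0.01, 0.02, 0.015, 0.012, 0.018]
q, i2 = cochran_q_i2(effects, variances)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")  # → Q = 0.18, I^2 = 0.0%
```

I² expresses the percentage of total variation across studies that is due to heterogeneity rather than chance; here the five hypothetical effects agree closely, so Q falls below its degrees of freedom and I² is 0%.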

Meta-analysis examines the existence of heterogeneity among primary studies and analyzes the variance in their results [2]. Subgroup analyses and meta-regression are used to explore the sources of heterogeneity. However, if there is a considerable amount of heterogeneity, it may not be appropriate to pool the data in a meta-analysis.

Publication bias

Studies that report positive effects tend to be published more frequently than those that do not, and studies that report no significant results usually remain unpublished [22].
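A common way to probe for this bias, although not described in the text, is to examine funnel-plot asymmetry with an Egger-type regression of standardized effect on precision. The sketch below (with hypothetical numbers) returns only the regression intercept and omits the usual significance test on it:

```python
def egger_intercept(effects, std_errors):
    """Simple OLS of standardized effect (effect/SE) on precision (1/SE).
    An intercept far from zero suggests funnel-plot asymmetry, one
    symptom of publication bias. Simplified: no t-test on the intercept."""
    y = [e / se for e, se in zip(effects, std_errors)]
    x = [1.0 / se for se in std_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx

# Hypothetical symmetric literature: the same effect at every precision
print(egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4]))
# Hypothetical asymmetric literature: the small study reports a larger effect
print(egger_intercept([0.5, 0.5, 0.8], [0.1, 0.1, 0.5]))
```

In the second call, the least precise study reports an inflated effect, which pulls the intercept above zero; a formal application would test whether the intercept differs significantly from zero.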

As meta-analysis includes only published studies, it may overestimate the actual magnitude of an effect [22].

Not all variables are comparable

Some variables have no comparable measure for meta-analysis.

Therefore, it may sometimes be necessary to construct new variables that represent comparable concepts, or to restrict the analyses to common elements.

Meta-analysis can disagree with randomized trials

The main reason for discrepancies between meta-analyses and randomized trials is that meta-analyses are based on heterogeneous and often small studies. The subjects in the individual studies may differ substantially with respect to diagnostic criteria, comorbidities, severity of disease, and geographic region. In contrast, in large randomized controlled trials, the target population is more limited.

However, a meta-analysis that is conducted appropriately may provide valuable complementary information.

Meta-analysis cannot overcome subjectivity

Meta-analysis relies on shared subjectivity rather than objectivity. There is often a certain amount of subjectivity in deciding how similar studies should be before it is appropriate to combine them. Every form of analysis, including narrative reviews, requires such subjective decisions; in a meta-analysis, however, these decisions are always explicitly stated.

Meta-analysis deals only with the main effects

Meta-analysis deals with the main effects, and its results can be generalized to the target population. However, the effects of interactions may also be examined through moderator analysis. Moreover, meta-analysis provides a more objective appraisal of the evidence than a narrative review and attempts to minimize bias through a systematic methodological approach. Meta-analysis provides a more precise estimate of the effect size and increases the generalizability of the results of individual studies.


To evaluate the translational potential of basic research, the validity of evidence must first be assessed, usually by examining the approach taken to collect and evaluate the data. Studies in the basic sciences are broadly grouped as hypothesis-generating and hypothesis-driven. The former tend to be small-sampled proof-of-principle studies and are typically exploratory and less valid than the latter.

An argument can even be made that studies that report novel findings fall into this group as well, since their findings remain subject to external validation prior to being accepted by the broader scientific community.

In contrast, hypothesis-driven studies build upon what is known or strongly suggested by earlier work. These studies can also validate prior experimental findings with incremental contributions. Although such studies are often overlooked and even dismissed due to a lack of substantial novelty, their role in the external validation of prior work is critical for establishing the translational potential of findings.

Another dimension to the validity of evidence in the basic sciences is the selection of the experimental model. The human condition is near-impossible to recapitulate in a laboratory setting; experimental models can therefore only approximate it. For these reasons, the best quality evidence comes from evaluating the performance of several independent experimental models. This is accomplished through systematic approaches that consolidate evidence from multiple studies, thereby filtering the signal from the noise and allowing for side-by-side comparison.

While systematic reviews can be conducted to accomplish a qualitative comparison, meta-analytic approaches employ statistical methods that enable hypothesis generation and testing. When a meta-analysis in the basic sciences is hypothesis-driven, it can be used to evaluate the translational potential of a given outcome and to provide recommendations for subsequent translational and clinical studies. Alternatively, if meta-analytic hypothesis testing is inconclusive, or if exploratory analyses are conducted to examine sources of inconsistency between studies, novel hypotheses can be generated and subsequently tested experimentally.

Figure 2 summarizes this proposed framework.

Figure 2. Schematic of the proposed hierarchy of translational potential in basic research.

The first stage of any review involves formulating a primary objective in the form of a research question or hypothesis.

Reviewers must explicitly define the objective of the review before starting the project; this reduces the risk of data dredging, in which reviewers assign meaning to significant findings only after the fact. Secondary objectives may also be defined; however, caution is needed because the search strategies formulated for the primary objective may not entirely encompass the body of work required to address the secondary objective.

Depending on the purpose of a review, reviewers may choose to undertake a rapid or a systematic review. While the meta-analytic methodology is similar for systematic and rapid reviews, the scope of literature assessed tends to be significantly narrower for rapid reviews, permitting the project to proceed faster.

Systematic reviews involve comprehensive search strategies that enable reviewers to identify all relevant studies on a defined topic (DeLuca et al.). Meta-analytic methods then permit reviewers to quantitatively appraise and synthesize outcomes across studies to obtain information on statistical significance and relevance.

Systematic reviews of basic research data have the potential to produce information-rich databases which allow extensive secondary analysis. To comprehensively examine the pool of available information, search criteria must be sensitive enough not to miss relevant studies. Truncations, wildcards, and proximity operators can also help refine a search strategy by including spelling variations and different wordings of the same concept (Ecker and Skelly). Search strategies can be validated using a selection of expected relevant studies.

If the search strategy fails to retrieve even one of the selected studies, it requires further optimization. This process is iterated, updating the search strategy at each step, until the strategy performs at a satisfactory level (Finfgeld-Connett and Johnson). Therefore, the initial stage of sifting through the library to select relevant studies is time-consuming (it may take 6 months to 2 years) and prone to human error.
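The benchmark-validation loop just described reduces to a set comparison; a minimal sketch, with hypothetical study identifiers and strategy names:

```python
def validate_search(retrieved_ids, benchmark_ids):
    """Check that a search strategy retrieves every study in a
    pre-selected benchmark set; return the studies it missed."""
    return sorted(set(benchmark_ids) - set(retrieved_ids))

# Hypothetical iteration: broaden the strategy until nothing is missed
strategies = {
    "title-only": {"pmid1", "pmid3"},
    "title+abstract+MeSH": {"pmid1", "pmid2", "pmid3", "pmid4", "pmid5"},
}
benchmark = {"pmid1", "pmid2", "pmid3"}
for name, hits in strategies.items():
    missed = validate_search(hits, benchmark)
    status = "OK" if not missed else f"missed {missed}"
    print(f"{name}: {status}")
```

A strategy is accepted only when `validate_search` returns an empty list for the benchmark set; each earlier iteration shows the reviewer exactly which benchmark studies the current search terms fail to retrieve.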

At this stage, it is recommended to include at least two independent reviewers to minimize selection bias and related errors.

Nevertheless, systematic reviews have the potential to provide the highest-quality quantitative evidence synthesis, directly informing experimental and computational basic, preclinical, and translational studies. The goal of the rapid review, as the name implies, is to decrease the time needed to synthesize information. Rapid reviews are a suitable alternative to systematic approaches if reviewers prefer to get a general idea of the state of the field without an extensive time investment.

Search strategies are constructed by increasing search specificity, thus reducing the number of irrelevant studies identified by the search at the expense of comprehensiveness (Haby et al.). The strength of a rapid review is its flexibility to adapt to the needs of the reviewer, which results in a lack of standardized methodology (Mattivi and Buchberger). Common shortcuts made in rapid reviews are: (i) narrowing search criteria, (ii) imposing date restrictions, (iii) conducting the review with a single reviewer, (iv) omitting expert consultation, (v) restricting the language (i.e., English only), (vi) foregoing the iterative process of searching and search term selection, (vii) omitting quality checklist criteria, and (viii) limiting the number of databases searched (Ganann et al.). These shortcuts limit the initial pool of studies returned from the search, thus expediting the selection process, but also potentially result in the exclusion of relevant studies and the introduction of selection bias.

There is a consensus that rapid reviews do not sacrifice quality or synthesize misrepresentative results (Haby et al.), and they are a viable alternative when parameters for computational modeling need to be estimated. While systematic and rapid reviews rely on different strategies to select the relevant studies, the statistical methods used to synthesize data from either type of review are identical. When the literature search is complete (the date on which articles were retrieved from the databases needs to be recorded), articles are extracted and stored in a reference manager for screening.

Before study screening, the inclusion and exclusion criteria must be defined to ensure consistency in study identification and retrieval, especially when multiple reviewers are involved. The critical steps in screening and selection are: (1) removing duplicates, (2) screening for relevant studies by title and abstract, and (3) inspecting full texts to ensure they fulfill the eligibility criteria.

There are several reference managers available, including Mendeley and Rayyan, the latter developed specifically to assist with the screening stage of systematic reviews. Reference managers often have deduplication functions; however, these can be tedious and error-prone (Kwon et al.). A protocol for faster and more reliable deduplication in EndNote has recently been proposed (Bramer et al.). The selection of articles should be sufficiently broad not to be dominated by a single lab or author.
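As an illustration of what the deduplication step involves, the sketch below matches records first by DOI and then by a crudely normalized title; real reference managers additionally compare authors, year, and journal (the records and function names here are ours):

```python
import re

def normalize(title):
    """Crude normalization: lowercase, strip punctuation and whitespace."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records):
    """Keep the first record per DOI when present, otherwise per
    normalized title. A simplified stand-in for reference-manager
    deduplication."""
    seen_doi, seen_title, unique = set(), set(), []
    for rec in records:
        doi = rec.get("doi")
        title_key = normalize(rec["title"])
        if (doi and doi in seen_doi) or title_key in seen_title:
            continue  # duplicate of an earlier record
        if doi:
            seen_doi.add(doi)
        seen_title.add(title_key)
        unique.append(rec)
    return unique

records = [
    {"title": "ATP content in osteoblasts", "doi": "10.1000/x1"},
    {"title": "ATP Content in Osteoblasts.", "doi": "10.1000/x1"},  # duplicate by DOI
    {"title": "ATP content in osteoblasts", "doi": None},           # duplicate by title
    {"title": "Calcium signalling in osteocytes", "doi": None},
]
print(len(deduplicate(records)))  # → 2
```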

In basic research articles, it is common to find data sets that are reused by the same group in multiple studies. Therefore, additional precautions should be taken when deciding whether to include multiple studies published by a single group. At the end of the search, screening, and selection process, the reviewer obtains a complete list of eligible full-text manuscripts.

The entire screening and selection process should be reported in a PRISMA diagram, which maps the flow of information throughout the review according to prescribed guidelines published elsewhere (Moher et al.). Figure 3 provides a summary of the workflow of search and selection strategies, using the OB [ATP]ic rapid review and meta-analysis as an example.

Figure 3. Example of the rapid review literature search. (A) Development of the search parameters to find literature on the intracellular ATP content in osteoblasts.

It is advisable to predefine analytic strategies before data extraction and analysis. However, the availability of reported effect measures and study designs will often influence this decision. Reviewers may aim to estimate the absolute mean difference (absolute effect), the normalized mean difference, the response ratio, or the standardized mean difference (e.g., Hedges' g). In basic research, it is common for a single study to present variations of the same observation (e.g., the same outcome measured under several experimental conditions). In such cases, each point may be treated as an individual observation, or common outcomes within a study can be pooled by taking the mean weighted by the sample size.
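The within-study pooling just mentioned, a sample-size-weighted mean, can be sketched as follows (the numbers are hypothetical):

```python
def pool_within_study(observations):
    """Pool repeated observations of the same outcome within one study
    into a single mean weighted by sample size, with the combined n.
    observations: list of (mean, n) pairs."""
    total_n = sum(n for _, n in observations)
    pooled_mean = sum(mean * n for mean, n in observations) / total_n
    return pooled_mean, total_n

# Hypothetical study reporting the same outcome in three experiments
obs = [(10.0, 6), (12.0, 6), (8.0, 12)]
mean, n = pool_within_study(obs)
print(mean, n)  # → 9.5 24
```

The larger third experiment (n = 12) pulls the pooled mean toward 8.0, which is exactly the behavior sample-size weighting is meant to produce.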

Conversion to a common representation is often required for comparison across studies; the appropriate experimental parameters and calibrations then need to be extracted from the studies. Some parameters, such as the cell-related parameters available in the BioNumbers database (Milo et al.), can be approximated by reviewers; others must be obtained from the primary studies themselves.

In many cases, reviewers may only be able to decide on a suitable effect size measure after data extraction is complete. It is regrettably common to encounter unclear or incomplete reporting, especially for the sample sizes and uncertainties. Reviewers may choose to reject studies with such problems due to quality concerns or to employ conservative assumptions to estimate missing data.

For example, if it is unclear whether a study reports the standard deviation or the standard error of the mean, the value can be assumed to be a standard error, which yields a more conservative (larger) estimate of the variance. If a study does not report uncertainties but is deemed important because it focuses on a rare phenomenon, imputation methods have been proposed to estimate the missing uncertainty terms (Chowdhry et al.). If a study reports a range of sample sizes, reviewers should extract the lowest value.
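These conservative conventions are simple to encode; the sketch below implements the two rules stated above (the function names are ours):

```python
import math

def conservative_sd(value, n, reported_as_se_unclear=True):
    """If it is unclear whether a study reports SD or SE, assume SE:
    the implied SD (= SE * sqrt(n)) is larger, giving the study a
    conservatively wider variance and hence a lower weight."""
    return value * math.sqrt(n) if reported_as_se_unclear else value

def conservative_n(sample_size):
    """If a range of sample sizes is reported (e.g., 'n = 6-8'),
    take the lowest value."""
    if isinstance(sample_size, (tuple, list)):
        return min(sample_size)
    return sample_size

print(conservative_sd(2.0, 9))  # → 6.0 (ambiguous value treated as SE)
print(conservative_n((6, 8)))   # → 6
```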

Strategies to handle missing data should be predefined and thoroughly documented. In addition to identifying the relevant primary parameters, a priori defined study-level characteristics with the potential to influence the outcome, such as species, cell type, or specific methodology, should be identified and collected in parallel with data extraction.

This information is valuable in subsequent exploratory analyses and can provide insight into influential factors through between-study comparison. Formal quality assessment allows the reviewer to appraise the quality of identified studies and to make informed and methodical decisions regarding the exclusion of poorly conducted studies.

In general, based on an initial evaluation of the full texts, each study is scored to reflect its overall quality and scientific rigor. Several quality-related characteristics have been described (Sena et al.). We also suggest that reviewers of basic research studies assess (viii) the objective alignment between the study in question and the meta-analytic project. This involves noting whether the outcome of interest was the primary study objective or was reported as a supporting or secondary outcome, which may not receive the same experimental rigor and is subject to expectation bias (Sheldrake). Additional quality criteria specific to the experimental design may be included at the discretion of the reviewer.

Once study scores have been assembled, study-level aggregate quality scores are determined by summing the number of satisfied criteria; reviewers can then evaluate how outcome estimates and heterogeneity vary with study quality. Significant variation arising from poorer-quality studies may justify study omission in subsequent analysis. The next step is to compile the meta-analytic data set, which reviewers will use in subsequent analysis.

For each study, the complete dataset, which includes the parameters required to estimate the target outcome, study characteristics, and the data necessary for unit conversion, needs to be extracted. Data in basic research are commonly reported in tabular or graphical form. Reviewers can accurately extract tabular data from the text or tables. However, graphical data often must be extracted from the graph directly, using time-consuming and error-prone methods.

The Data Extraction Module in MetaLab was developed to facilitate systematic and unbiased data extraction: reviewers provide study figures as inputs, then specify reference points that are used to calibrate the axes and extract the data (Figures 4A,B).

Figure 4. MetaLab data extraction procedure is accurate, unbiased, and robust to the quality of data presentation. (A,B) Example of graphical data extraction using MetaLab. (A) Original figure (Bodin et al.). (B) Extracted data with error terms.

(C-F) Validation of the MetaLab data-extraction module. (C) Synthetic datasets were constructed using randomly generated data coordinates and marker sizes.

(E) Data extraction was unbiased, as evaluated from the distribution of percent errors between true and extracted values. To validate the performance of the MetaLab Data Extraction Module, we generated figures using synthetic data points plotted with varying marker sizes (Figure 4C). Bias was absent, with a mean percent error close to zero, and data marker size did not contribute to the extraction error. These data demonstrate that graphical data can be reliably extracted using MetaLab.
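The axis calibration underlying this kind of extraction is, at its core, a linear mapping fixed by two reference points per axis. A minimal sketch with hypothetical pixel coordinates (MetaLab's actual implementation may differ, e.g., to support logarithmic axes):

```python
def make_axis_calibration(pixel_ref, data_ref):
    """Build a linear pixel->data mapping for one axis from two
    reference points, as when calibrating axes before digitizing a
    figure. pixel_ref and data_ref are pairs (p0, p1) and (d0, d1)."""
    (p0, p1), (d0, d1) = pixel_ref, data_ref
    scale = (d1 - d0) / (p1 - p0)
    return lambda p: d0 + (p - p0) * scale

# Hypothetical figure: x-axis ticks 0 and 100 sit at pixels 40 and 440
to_x = make_axis_calibration((40, 440), (0.0, 100.0))
# y-axis ticks 0 and 50 sit at pixels 360 and 60 (pixel y grows downward)
to_y = make_axis_calibration((360, 60), (0.0, 50.0))
print(to_x(240), to_y(210))  # → 50.0 25.0
```

The inverted pixel coordinate system of the y-axis (pixel y grows downward) is handled automatically, because the scale factor simply comes out negative.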

Basic science often focuses on natural processes and phenomena characterized by complex relationships between a series of inputs and outputs. The results are commonly explained by an accepted model of the relationship, such as the Michaelis-Menten model of enzyme kinetics, which involves two parameters: Vmax, the maximum rate, and Km, the substrate concentration at which the rate is half of Vmax.

For meta-analysis, model parameters characterizing complex relationships are of interest because they allow direct comparison of different multi-observational datasets. However, study-level outcomes for complex relationships often (i) lack consistency in reporting and (ii) lack estimates of uncertainty for the model parameters. The study-level data can be fitted to a model using conventional fitting methods, in which the model parameter error terms depend on the goodness of fit and the number of available observations.

Alternatively, a Monte Carlo simulation approach (Cox et al.) can be used to propagate study-level uncertainties into the model parameter estimates (Figure 5).

Figure 5. Model parameter estimation with the Monte Carlo error propagation method.

(A) Study-level data taken from the ATP release meta-analysis. (B) Assuming a sigmoidal model, parameters were estimated using the MetaLab Fit Model module by randomly sampling data from distributions defined by the study-level data; model parameters were estimated for each set of sampled data. (C) Final model using parameters estimated from the simulations.

(D) Distributions of parameters estimated for a given dataset are unimodal and symmetrical. It is critical for reviewers to ensure the data are consistent with the model, such that the estimated parameters sufficiently capture the information conveyed in the underlying study-level data. In general, reliable model fittings are characterized by normal parameter distributions (Figure 5D) and a high goodness of fit, as quantified by R².

The advantage of the Monte Carlo approach is that it works as a black-box procedure that does not require complex error propagation formulas, allowing correlated and independent parameters to be handled without additional consideration.
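As an illustration of the idea, the sketch below applies Monte Carlo error propagation to the Michaelis-Menten model mentioned earlier, using a closed-form Lineweaver-Burk linearization rather than MetaLab's iterative sigmoidal fitting; all data are synthetic and the function names are ours. Each iteration resamples every observation from Normal(mean, SD) and refits; the spread of the resulting parameter estimates serves as the propagated uncertainty.

```python
import random
import statistics

def fit_mm(substrate, rates):
    """Closed-form Michaelis-Menten fit via the Lineweaver-Burk
    linearization: 1/v = (Km/Vmax)(1/S) + 1/Vmax."""
    x = [1.0 / s for s in substrate]
    y = [1.0 / v for v in rates]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    km = slope * vmax
    return vmax, km

def monte_carlo_fit(substrate, means, sds, n_iter=2000, seed=1):
    """Resample each observation from Normal(mean, sd), refit, and
    summarize the resulting parameter distributions."""
    rng = random.Random(seed)
    vmaxs, kms = [], []
    for _ in range(n_iter):
        sampled = [max(1e-9, rng.gauss(m, s)) for m, s in zip(means, sds)]
        vmax, km = fit_mm(substrate, sampled)
        vmaxs.append(vmax)
        kms.append(km)
    return (statistics.median(vmaxs), statistics.stdev(vmaxs),
            statistics.median(kms), statistics.stdev(kms))

# Synthetic data generated from Vmax = 10, Km = 2 with 5% errors
S = [0.5, 1, 2, 4, 8, 16]
v_true = [10 * s / (2 + s) for s in S]
sds = [0.05 * v for v in v_true]
vmax_med, vmax_sd, km_med, km_sd = monte_carlo_fit(S, v_true, sds)
print(vmax_med, vmax_sd, km_med, km_sd)
```

With the noise-free means the fit recovers Vmax = 10 and Km = 2 exactly; the Monte Carlo medians stay close to these values while the standard deviations quantify how the 5% measurement errors propagate into each parameter.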

The absolute effect size, computed as a mean outcome or an absolute difference from baseline, is the simplest measure; it is independent of variance and retains information about the context of the data (Baguley). However, using the absolute effect size requires authors to report on a common scale or to provide conversion parameters. In cases where a common scale is difficult to establish, a scale-free measure, such as a standardized, normalized, or relative measure, can be used.

Standardized mean differences, such as Hedges' g or Cohen's d, report the outcome as the size of the effect: the difference between the means of the experimental and control groups relative to the overall variance (the pooled and weighted standard deviation of the combined groups). The standardized mean difference, along with odds and risk ratios, is widely used in meta-analysis of clinical studies (Vesterinen et al.).

However, the standardized measure is rarely used in basic science, since study outcomes are commonly a defined measure, sample sizes are small, and variances are highly influenced by experimental and biological factors. Other measures that are better suited to basic science are the normalized mean difference, which expresses the difference between the outcome and baseline as a proportion of the baseline (alternatively called the percentage difference), and the response ratio, which reports the outcome as a proportion of the baseline.
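The three measures discussed here can be computed as follows (a minimal sketch; the Hedges' g form includes the usual small-sample correction factor J = 1 - 3/(4(n1+n2) - 9)):

```python
import math

def hedges_g(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Standardized mean difference with Hedges' small-sample correction."""
    pooled_sd = math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2)
                          / (n_e + n_c - 2))
    d = (mean_e - mean_c) / pooled_sd
    j = 1 - 3 / (4 * (n_e + n_c) - 9)  # small-sample correction factor
    return d * j

def normalized_mean_difference(mean_e, mean_c):
    """Difference from baseline as a proportion of the baseline."""
    return (mean_e - mean_c) / mean_c

def response_ratio(mean_e, mean_c):
    """Outcome as a proportion of the baseline."""
    return mean_e / mean_c

# Hypothetical experiment (12 +/- 2, n = 10) vs control (10 +/- 2, n = 10)
print(round(hedges_g(12, 2, 10, 10, 2, 10), 3))  # → 0.958
print(normalized_mean_difference(12, 10))        # → 0.2
print(response_ratio(12, 10))                    # → 1.2
```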

All of the discussed measures are included in MetaLab (Table 2). The goal of any meta-analysis is to provide an outcome estimate that is representative of all study-level findings. One important feature of meta-analysis is its ability to incorporate information about the quality and reliability of the primary studies by weighing larger, better-reported studies more heavily.

The two quantities of interest are the overall estimate and the measure of the variability in this estimate. The choice of a weighting scheme dictates how study-level variances are pooled to estimate the variance of the weighted mean.

The weighting scheme thus significantly influences the outcome of meta-analysis, and if poorly chosen, potentially risks over-weighing less precise studies and generating a less valid, non-generalizable outcome. Thus, the notion of defining an a priori analysis protocol has to be balanced with the need to assure that the dataset is compatible with the chosen analytic strategy, which may be uncertain prior to data extraction.

We provide strategies to compute and compare different study-level and global outcomes and their variances. To generate valid estimates of cumulative knowledge, studies are weighed according to their reliability. This conceptual framework, however, deteriorates if reported measures of precision are themselves flawed.

The most commonly used measure of precision is the inverse variance, which is a composite of the total variance and the sample size, such that studies with larger sample sizes and lower experimental errors are considered more reliable and are weighed more heavily. Inverse-variance weighting schemes are valid when (i) the sampling error is random and (ii) the reported effects are homoscedastic, i.e., have equal variance.

When assumption (i) or (ii) is violated, sample-size weighting can be used as an alternative.
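A minimal sketch of the two weighting schemes discussed above, inverse-variance (fixed-effect) pooling and its sample-size-weighted alternative (the function names and numbers are ours):

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance (fixed-effect) pooling: studies with lower
    variance receive proportionally more weight."""
    weights = [1.0 / v for v in variances]
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_var = 1.0 / total_w  # variance of the weighted mean
    return pooled, pooled_var

def sample_size_meta(effects, sample_sizes):
    """Sample-size weighting: an alternative when variance-based
    weights are unreliable (e.g., heteroscedastic effects)."""
    total_n = sum(sample_sizes)
    return sum(n * e for n, e in zip(sample_sizes, effects)) / total_n

# Hypothetical studies: the most precise study dominates the pooled mean
effects = [0.50, 0.30, 0.40]
variances = [0.010, 0.040, 0.020]
print(fixed_effect_meta(effects, variances))
```

Note that the pooled variance (1 / sum of weights) is always smaller than any single study's variance, which is the formal sense in which meta-analysis yields a more precise estimate than its individual studies.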

Meta-analysis in applied and basic research

Pharmaceutical companies use meta-analysis to gain approval for new drugs, with regulatory agencies sometimes requiring a meta-analysis as part of the approval process.

Where does meta-analysis fit in the research process?

Publications

Many journals encourage researchers to submit systematic reviews and meta-analyses that summarize the body of evidence on a specific question, and this approach is replacing the traditional narrative review.
