The Importance of Sample Size for Reproducibility of tDCS Effects

2016 
Cheap, easy to apply, well tolerated, capable of altering cortical excitability, and suitable for testing causal hypotheses: these are attributes that have made transcranial direct current stimulation (tDCS) a highly popular research tool in cognitive neuroscience. Since its reintroduction over 15 years ago by Nitsche and Paulus (2000), the number of publications reporting tDCS results has risen exponentially (a Scopus® literature search indicates over 500 such journal articles published in 2015 alone). Recently, however, the efficacy of tDCS in altering cognitive performance has been called into question, in particular among healthy participants, but also in certain clinical samples (Horvath et al., 2015; Hill et al., 2016; Mancuso et al., 2016). A number of empirical studies have reported being unable to detect any facilitatory effects of anodal tDCS or inhibitory effects of cathodal tDCS on various cognitive processes (e.g., Wiethoff et al., 2014; Minarik et al., 2015; Sahlem et al., 2015; Horvath et al., 2016; Vannorsdall et al., 2016). In fact, in a recent meta-analysis, Horvath et al. (2015) argue that in young, healthy participants there is no effect of tDCS on cognition whatsoever, whereas other meta-analyses do find specific modulation of cognitive processes by tDCS, although these effects appear to be rather weak (Hill et al., 2016; Mancuso et al., 2016). In a recent commentary, the field of tDCS research was even called an area of bad science (Underwood, 2016) in desperate need of further meticulous evaluation.

Although the reported effects are somewhat inconsistent, recent work by Cason and Medina (2016) suggests no evidence of p-hacking (strategic testing and analysis procedures intended to increase the likelihood of obtaining significant effects) in tDCS research. However, Cason and Medina (2016) find the average statistical power of tDCS studies to be below 50%. One potential reason for the reported inconsistencies might therefore be that sample sizes are very small in most tDCS studies (including those from our research group). While this issue is not specific to tDCS (in fact, Button et al., 2013, estimate the median statistical power in neuroscience as a whole to be only 21%), it could lead to weaker effects often going undetected, and consequently to meta-analyses suggesting small or no efficacy of tDCS.

In addition, assessing the real effect of tDCS is further complicated by potential publication bias (the file drawer problem), which leads to over-reporting of significant tDCS findings. That is, a publication bias favoring studies with significant effects might inflate the reported efficacy of tDCS. Thus, depending on which studies are included in systematic reviews and meta-analyses (i.e., findings published in peer-reviewed journals; unpublished null effects; null effects reported as an additional finding in papers whose actual focus is another, significant, effect; etc.), small sample sizes in tDCS research could lead to both under- and overestimation of tDCS efficacy. Some current meta-analyses (e.g., Mancuso et al., 2016), however, include an estimate of publication bias, for example using the "trim and fill" procedure, in which funnel plots are used to determine whether the literature included in the meta-analysis is biased toward studies with significant effects; the overall effect size can then be adjusted accordingly.
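To make the logic of trim and fill concrete, the following is a minimal Python sketch of a simplified, one-sided version of the procedure (based on Duval and Tweedie's L0 estimator), using hypothetical effect sizes and standard errors; it is an illustration of the idea, not the implementation used in the meta-analyses cited above.

```python
import numpy as np

def fixed_effect_mean(y, se):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    w = 1.0 / np.asarray(se) ** 2
    return np.sum(w * y) / np.sum(w)

def trim_and_fill(y, se, max_iter=20):
    """Simplified one-sided trim-and-fill (L0 estimator), assuming the
    missing studies sit on the left (small-effect) side of the funnel."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    n = len(y)
    k0 = 0
    for _ in range(max_iter):
        keep = np.argsort(y)[: n - k0]            # trim the k0 largest effects
        theta = fixed_effect_mean(y[keep], se[keep])
        centered = y - theta
        ranks = np.argsort(np.argsort(np.abs(centered))) + 1
        t_n = ranks[centered > 0].sum()           # rank sum of positive deviations
        k0_new = max(0, round((4 * t_n - n * (n + 1)) / (2 * n - 1)))
        if k0_new == k0:
            break
        k0 = k0_new
    # fill: mirror the k0 most extreme studies around the trimmed estimate
    filled = np.argsort(y)[n - k0:]
    y_all = np.concatenate([y, 2 * theta - y[filled]])
    se_all = np.concatenate([se, se[filled]])
    return k0, fixed_effect_mean(y_all, se_all)

# hypothetical, asymmetric set of study effects (e.g., Hedges' g) and SEs
effects = np.array([0.9, 0.7, 0.6, 0.55, 0.5, 0.45, 0.4, 0.35, 0.3])
ses = np.array([0.40, 0.35, 0.30, 0.28, 0.25, 0.22, 0.20, 0.18, 0.15])
k0, adjusted = trim_and_fill(effects, ses)
print(f"estimated missing studies: {k0}, adjusted pooled effect: {adjusted:.2f}")
```

The adjusted pooled effect is typically smaller than the unadjusted one when the funnel plot is asymmetric, which is why accounting for publication bias tends to shrink estimated tDCS efficacy.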
Taking publication bias into account, it becomes evident that the efficacy of tDCS is rather weak (Mancuso et al., 2016). As indicated by the considerable inconsistency in the literature on the efficacy of the stimulation, the field of tDCS research is clearly affected by the replication crisis that we also find in psychology and the neurosciences in general (Button et al., 2013; Open Science Collaboration, 2015). But how can the efficacy of tDCS be estimated if it is unclear how many unsuccessful experimental attempts end up in the file drawer? As discussed above, one possibility is to adjust for publication bias in meta-analyses. Another strategy is pre-registering tDCS studies and reporting their outcomes regardless of whether the results are significant, whether in peer-reviewed journals or on platforms such as the Open Science Framework (https://osf.io); this can yield more accurate estimates of efficacy. Moreover, providing open access to the acquired data (open data) allows researchers to pool raw data from experiments with small samples but similar experimental designs, and thereby to overcome the problem of underpowering, an issue that appears so fundamental in tDCS research.

Therefore, to investigate the effect of sample size on tDCS efficacy and to contribute to increased research transparency, we designed a simple, pre-registered study (https://osf.io/eb9c5/?view_only=2743a0c4600943c998c2c37fbfb25846) with a sufficiently large number of young, healthy volunteers, estimated with an a priori power analysis (see the first sketch below). Furthermore, we make all of the acquired data publicly available. In a choice reaction time (CRT) task, participants underwent either anodal or cathodal tDCS applied to the sensorimotor cortex. Jacobson et al. (2012) suggest that anodal-excitation and cathodal-inhibition (AeCi) effects are quite straightforward in the motor domain with tDCS over the sensorimotor cortex, whereas in other cognitive domains AeCi effects do not seem particularly robust. Since we stimulated the sensorimotor cortex, we decided to contrast anodal with cathodal tDCS (instead of sham stimulation) to obtain the largest possible effect. We expected anodal stimulation to result in faster response times than cathodal tDCS, in accordance with findings by Garcia-Cossio et al. (2015). To demonstrate the importance of sample size for finding the predicted effect, random samples of different sizes were drawn from the data pool and tested statistically; this way, the probability of identifying the predicted effect was obtained as a function of sample size (see the second sketch below).
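For context, an a priori power analysis of the kind mentioned above takes only a few lines; the sketch below uses statsmodels with an assumed medium effect size (Cohen's d = 0.5), alpha = .05, and power = .80, which are illustrative assumptions rather than the values of the pre-registered protocol.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Required per-group n for an independent-samples t-test
# (anodal vs. cathodal group), under the assumed parameters.
n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05,
                                          power=0.80,
                                          alternative='two-sided')
print(f"required participants per group: {math.ceil(n_per_group)}")  # 64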
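Likewise, the subsampling procedure described in the last step can be sketched as follows, here with simulated reaction times standing in for the actual data pool; the group means, standard deviations, and pool sizes are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def detection_rate(anodal, cathodal, n, n_draws=5000, alpha=0.05):
    """Proportion of random subsamples of size n per group in which
    anodal RTs are significantly faster than cathodal RTs."""
    hits = 0
    for _ in range(n_draws):
        a = rng.choice(anodal, size=n, replace=False)
        c = rng.choice(cathodal, size=n, replace=False)
        _, p = stats.ttest_ind(a, c)
        hits += (p < alpha) and (a.mean() < c.mean())
    return hits / n_draws

# simulated pooled RTs (ms), assuming a 20 ms anodal advantage
anodal = rng.normal(450, 60, size=50)
cathodal = rng.normal(470, 60, size=50)
for n in (10, 20, 30, 40):
    print(f"n = {n:2d}: detection rate = {detection_rate(anodal, cathodal, n):.2f}")
```

Plotting the detection rate against n makes the underpowering problem visible directly: with small subsamples, even a true effect in the pooled data is detected in only a minority of draws.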