A recent review suggested that there is little evidence that tDCS can modulate cognitive processing. However, when the meta-analyses are performed properly, there is indeed evidence that tDCS works (specifically for language tasks).

Transcranial direct current stimulation (tDCS) is an increasingly popular technique for modulating neural activity by passing low levels of electrical current through the scalp. Of course, an important question is whether tDCS actually affects neural activity or cognitive measures.

Earlier this year a review from Horvath and colleagues (2015) reported that there was no evidence for tDCS affecting cognitive function. The authors' conclusions were clear:

Of the 59 analyses conducted, tDCS was found to not have a significant effect on any – regardless of inclusion laxity. This includes no effect on any working memory outcome or language production task.

and

Our quantitative review does not support the idea that tDCS generates a reliable effect on cognition in healthy adults. Reasons for and limitations of this finding are discussed. This work raises important questions regarding the efficacy of tDCS, state-dependency effects, and future directions for this tool in cognitive research.

These conclusions garnered a decent amount of attention (for example, Neuroskeptic, The New Yorker, The New York Times, The Economist, New Scientist), because they suggest that a popular neuroscience technique may not actually be effective. However (as noted in some comments), some of the authors' choices about which studies to include were questionable (or at least subjective), leading to speculation that perhaps tDCS wasn't dead in the water after all.

Amy Price and Roy Hamilton at Penn went a step further. [Disclosure: Amy is a collaborator, and this is how I heard about this work.] In a letter to the editor of Brain Stimulation (Price & Hamilton, 2015), they explain some of their difficulties in interpreting the Horvath et al. data:

We initially attempted to replicate the effect sizes (i.e., standard mean difference values; SMD values) for the individual studies in the language section based on the methods outlined by Horvath and colleagues. In doing so, we identified numerous problems in the way that the authors selected the behavioral data, which were not apparent based on the information provided in the methods section.
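For readers unfamiliar with the measure, the SMD that Price and Hamilton tried to replicate is conventionally computed as the difference between group means divided by the pooled standard deviation (Cohen's d). Here is a minimal sketch in Python; the accuracy scores are made up for illustration and are not data from any of the included studies:

```python
from statistics import mean, stdev

def smd(active, sham):
    """Standardized mean difference (Cohen's d) between two groups,
    using the pooled standard deviation as the denominator."""
    n1, n2 = len(active), len(sham)
    s1, s2 = stdev(active), stdev(sham)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(active) - mean(sham)) / pooled

# Hypothetical accuracy scores for active vs. sham stimulation:
active = [0.82, 0.79, 0.88, 0.85, 0.80]
sham = [0.75, 0.78, 0.73, 0.77, 0.74]
print(round(smd(active, sham), 2))
```

Even this simple formula leaves room for discrepancies between analysts: which behavioral measure to extract, which time point, and whether to apply a small-sample correction (Hedges' g) can all change the resulting effect size, which is exactly why transparent methods sections matter here.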

Amy and Roy also list a number of difficulties with the analysis approach taken by Horvath and colleagues, including:

Unclear or arbitrary inclusion of studies

Insufficient power (only one of the meta-analyses reported in the main text included more than five studies)

Inconsistent data selection between studies

Horvath wrote a reply (Horvath, 2015) in which he reports re-doing some of the analyses based on Amy and Roy's comments. To paraphrase: "Price and Hamilton made some mistakes. On other points, they were right, and several of our original numbers were incorrect. We have now fixed them and the conclusions are the same."

I have not gone through the included studies in detail. However, at least two things strike me:

In his reply, Horvath emphasizes that in the original article they said "these analyses must be interpreted with caution". That's all well and good, but that advice was not followed particularly closely in the title or highlights of their own article.

It is worrying that several of the original numbers were incorrect in the first place, and showing that a subset of the conclusions does not change does not give me confidence that the others would hold up.

However, these letters and replies can easily turn into arguing without resolution. More helpfully, Amy fully re-did the meta-analysis of the language studies (Price et al., 2015). In their approach (which they argue amends the shortcomings of Horvath et al.), Price et al. report significant effects of tDCS on behavior:

We first conducted the main meta-analysis across the accuracy measures in the language studies (outlined in Table 1) in order to test for any effect of tDCS across the experiments. This approach generalized across behavioral measures for verbal fluency and novel word learning, and included both online and offline measures. This analysis revealed a significant effect from single-session tDCS on accuracy measures in language (t=3.255, p=0.002; Figure 1).
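To give a sense of what a test like this involves: one simple way to ask whether there is an overall effect across studies is a one-sample t-test on the per-study effect sizes. I don't know the exact model Price et al. fit, and the effect-size values below are purely illustrative, so treat this as a conceptual sketch rather than a reproduction of their analysis:

```python
from statistics import mean, stdev
from math import sqrt

def one_sample_t(effects, mu0=0.0):
    """t statistic testing whether the mean study effect size differs from mu0."""
    n = len(effects)
    return (mean(effects) - mu0) / (stdev(effects) / sqrt(n))

# Hypothetical per-study SMDs (illustrative only, not the published values):
effects = [0.42, 0.15, 0.60, 0.33, 0.05, 0.51, 0.27]
t_stat = one_sample_t(effects)  # compare against a t distribution with n-1 df
print(round(t_stat, 2))
```

With so few studies per analysis, the degrees of freedom are tiny and single studies can swing the result, which is one concrete reason the power and inclusion-criteria complaints above have teeth.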

The work from Amy and Roy is a nice demonstration that the statistical approaches and inclusion criteria used in meta-analyses are important to consider, and for reviewers to carefully evaluate. In the extreme, it might suggest that the original review was sufficiently flawed that its conclusions can be disregarded. At the very least, Amy and Roy's work suggests that tDCS is likely effective at modulating language processing (despite assertions to the contrary by Horvath et al.).

So, what can we conclude? On balance I am convinced there is evidence for tDCS being able to affect cognitive processing, particularly for language tasks.

A positive lesson from all of this is that having detailed methods sections that allow others to understand what we have done is incredibly important for moving the discussion forward. I also hope that others take on this issue and try meta-analyses in language, and in other domains that Amy and Roy didn't cover. It's tedious but important work, particularly when there is skepticism about the efficacy of a new technique.

Finally, if journal page limits prevent authors from explaining important methodological details, these can be put online in numerous ways (including GitHub, Figshare, and The Winnower), which would help others replicate and extend analyses.