A few weeks ago, I posted an article about bad statistics in some fMRI work to my gchat status. The authors’ train of logic was as follows: the reliability of measurements in fMRI experiments should be no higher than about 0.7 (where reliability is the test-retest consistency). Since the observable correlation between two variables is capped by the reliability with which each is measured, this puts a ceiling on the correlations such studies can report. Given that ceiling, a surprising number of studies reported suspiciously large correlations. The authors believe this arises because certain researchers select voxels that cross some threshold of correlation between behavior and brain activity, then use only those voxels for the reported analysis. Worryingly, this biased voxel selection inflates across-subject correlations and can even produce significant effects from pure noise. It is a non-independence error: one may simply be selecting the noise that happens to exhibit the effect being searched for.
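To see why this is worrying, here is a minimal simulation of the selection procedure (not anyone’s actual pipeline; the sample size, voxel count, and threshold are illustrative numbers I made up). Every “voxel” is pure noise, yet selecting voxels by their correlation with behavior and then reporting the correlation of the selected voxels yields impressively large values:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 20    # hypothetical sample size
n_voxels = 10000   # hypothetical number of voxels

# Pure noise: brain "activity" is completely unrelated to behavior.
behavior = rng.standard_normal(n_subjects)
activity = rng.standard_normal((n_voxels, n_subjects))

# Pearson correlation of every voxel with behavior.
b = (behavior - behavior.mean()) / behavior.std()
a = (activity - activity.mean(axis=1, keepdims=True)) / activity.std(axis=1, keepdims=True)
corrs = (a @ b) / n_subjects

# Non-independent analysis: keep only voxels whose correlation crosses
# an arbitrary threshold, then report their mean correlation.
selected = corrs[corrs > 0.5]
print(f"{selected.size} noise voxels crossed the threshold")
print(f"mean correlation among selected voxels: {selected.mean():.2f}")
```

The reported mean is necessarily above 0.5 even though the true correlation of every voxel is exactly zero, because the selection step and the reported statistic use the same data.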
Some of the criticized authors have posted a rebuttal defending their methods. Most of their counterarguments rest on the assurance that they perform multiple-comparisons corrections, and that the procedure described above is therefore okay. Many of the rest, mystifyingly, are more about defending ‘social neuroscience’ and suggest that the paper offers a biased view of how fMRI studies are performed. It surely doesn’t do that (remember how I said that only a subset of researchers seemed to be using the flawed method?), but they are pretty defensive anyway. It’s a funny read because they are clearly very peeved throughout the whole rebuttal.
Then the original authors offered a rebuttal to the rebuttal. I don’t feel confident enough in my knowledge of statistics to properly evaluate it, but it certainly seems reasonable. For instance, choosing a threshold necessarily truncates the noise distribution, and correcting for multiple comparisons does not seem like it would fix that problem at all. Would anyone out there with more knowledge than me like to weigh in on this?
The moral of the story? Be careful with statistics! It is a real shame that graduate students are not required to pass a rigorous year-long series of statistics courses. Statistics underlies all of our data analysis and is one of the most misunderstood parts of science. Hopefully this back-and-forth will spark a good discussion of the relevant statistical techniques so the problem isn’t propagated any further.