Wednesday, July 27, 2011

How not to argue in science

I'd like to expand a little on yesterday's post. I'm beginning to better understand what constitutes proper debate of a scientific work. Whether the following logic is actually practiced by most researchers is questionable, but this is nevertheless an interesting and important point.

Data from a study provides information that does one of three fairly obvious things to a conclusion: it increases, decreases, or leaves unaffected the likelihood that the conclusion is correct. I place emphasis on the word likelihood because no conclusion can be demonstrated to be correct with 100% certainty, and I doubt that conclusions can be proven false with certainty either.
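To make this concrete, here is a minimal sketch of how new data can raise, lower, or leave unchanged the probability that a conclusion is correct, using Bayes' rule. The numbers are purely illustrative and not taken from any real study.

```python
# A minimal Bayesian-update sketch (illustrative numbers only).
# New data shifts the probability that a conclusion is correct,
# given what we believed beforehand.

def update(prior, p_data_if_true, p_data_if_false):
    """Return the posterior probability that the conclusion is correct
    after observing the data."""
    numerator = p_data_if_true * prior
    evidence = p_data_if_true * prior + p_data_if_false * (1 - prior)
    return numerator / evidence

prior = 0.5  # start out agnostic about the conclusion

# Data that is more expected if the conclusion is true raises the probability:
print(update(prior, 0.8, 0.2))  # 0.8

# Data that is more expected if the conclusion is false lowers it:
print(update(prior, 0.2, 0.8))  # 0.2

# Data that is equally expected either way leaves it unchanged:
print(update(prior, 0.5, 0.5))  # 0.5
```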

I think that this, the likelihood that a conclusion is correct given all available information, along with the competency with which an experiment was performed, are the two things genuinely open to debate within science. The debate becomes unscientific when researchers and journal reviewers commit the following errors:
  1. Assigning too much weight to prior information, thereby judging another work's results to be less likely correct than they perhaps should be (see the sketch after this list).
  2. As a corollary to the first point, workers would be in error if they did not properly balance the weighting of all prior information. For example, the media have been chastised by some for giving climate change skeptics as much attention in their coverage as proponents, since the proportion of scientists who dispute climate change is significantly smaller than the proportion who accept it as a real occurrence.
  3. Assuming that a finding is false given prior information or prejudices. If one accepts that a finding cannot be shown to be false, only to be highly unlikely, then arguing to reject a journal article simply because it contradicts previous findings is itself fallacious. The wider scientific community, with its more balanced range of opinions, should be a better judge of the likelihood that the claims are real.
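As a rough illustration of the first error, an excessively confident prior can swamp even fairly strong evidence. Again, the numbers here are invented purely for illustration.

```python
# Same Bayes update as in the earlier sketch, repeated so this runs on its own.
def update(prior, p_data_if_true, p_data_if_false):
    return p_data_if_true * prior / (
        p_data_if_true * prior + p_data_if_false * (1 - prior))

# The evidence is identical in both cases; only the weight given to
# prior belief changes.
p_data_if_true = 0.9
p_data_if_false = 0.1

# A reviewer who is merely skeptical (prior = 0.3) is largely persuaded:
print(update(0.3, p_data_if_true, p_data_if_false))   # ~0.79

# A reviewer who assigns overwhelming weight to prior information
# (prior = 0.01) remains nearly unmoved by the same data:
print(update(0.01, p_data_if_true, p_data_if_false))  # ~0.08
```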
Of course, if these errors were corrected, they could very well lead to many more published works, which would in turn dilute the field. As a result, grants may be harder to obtain (since they are in part based on published works) and the dissemination of knowledge would become greatly impaired; there would simply be too much information to analyze.