Data from a study does one of three fairly obvious things to a conclusion: it increases, decreases, or leaves unchanged the likelihood that the conclusion is correct. I place emphasis on the word likelihood because no conclusion can be demonstrated correct with 100% certainty, and I highly doubt that conclusions can be proven false with certainty either.
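One way to make this three-way split precise is Bayes' theorem: for a conclusion $C$ and new data $D$,

$$
P(C \mid D) = \frac{P(D \mid C)\,P(C)}{P(D)}.
$$

The data increase the likelihood of the conclusion when $P(D \mid C) > P(D)$, decrease it when $P(D \mid C) < P(D)$, and leave it unchanged when the two are equal, which matches the three cases above.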
I think that this likelihood (that a conclusion is correct given all available information) and the competence with which an experiment was performed are the two things legitimately open to debate within science. The debate becomes unscientific when researchers and journal reviewers commit the following errors:
- Assigning too much weight to prior information, thus rating the likelihood that another work's results are correct lower than it perhaps should be.
- As a corollary to the first point, researchers err if they fail to weight all prior information properly. For example, the media have been chastised by some for giving climate change skeptics the same attention in their coverage as proponents, since the proportion of scientists who dispute climate change is far smaller than the proportion who accept it as real.
- Assuming that a finding is false based on prior information or prejudice. If one accepts that a finding can never be proven false, only judged highly unlikely, then arguing to reject a journal article because it contradicts previous findings is itself fallacious (see the numerical sketch after this list). The wider scientific community, with its more balanced range of opinions, should be a better judge of the likelihood that the claims are real.
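As a rough numerical sketch of the first and third errors (all numbers here are hypothetical, chosen only for illustration), here is how an overconfident prior swamps new evidence under Bayes' rule:

```python
# Hypothetical illustration of how an overconfident prior swamps new evidence.
# All numbers are invented for demonstration.

def posterior(prior, lik_if_true, lik_if_false):
    """Bayes' rule for a binary claim: P(claim is true | evidence)."""
    numerator = lik_if_true * prior
    return numerator / (numerator + lik_if_false * (1 - prior))

# Suppose a study reports evidence ten times more likely if the claim is true.
lik_true, lik_false = 0.9, 0.09

open_minded = posterior(0.5, lik_true, lik_false)    # prior: genuinely undecided
dismissive = posterior(0.001, lik_true, lik_false)   # prior: already all but rejected

print(f"open-minded reviewer: {open_minded:.2f}")    # ~0.91
print(f"dismissive reviewer:  {dismissive:.2f}")     # ~0.01
```

The same evidence barely moves the reviewer whose prior is effectively zero; a finding judged against such a prior is being treated as false rather than merely unlikely, which is exactly the fallacy described above.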