There's a cool talk posted at the blog Measure of Doubt, given recently by one of the blog's authors, Julia Galef. The talk concerns the idea of a straw Vulcan, an idealized character modeled on Star Trek's race of ultra-logical humanoids. Galef argues that the Vulcans base their actions and decisions on a logic that's popularly perceived as rational but in fact is not. She defines rationality in two related ways: 1) a method for obtaining an accurate view of reality, and 2) a method for achieving one's goals. To make her argument, she presents five beliefs about Vulcan behavior that are commonly held to be rational and then gives examples from both Star Trek and real life where this behavior violates her definition of rationality.
I particularly like the second and third items on her list—never making a decision on incomplete information and never relying on intuition—because I find that these are common mistakes that scientists make. For example, suppose a graduate student wishes to set up an experiment that he or she is unsure will work. The student may take one of two courses of action (really there are three, the third being a combination of the first two). The first is to try the experiment and see if the outcome is desirable. The second is to carry out a number of calculations to determine whether the desired outcome will be produced, and then perform the experiment. The fallacy occurs when the student plans too much, wasting time on arduous calculations when simply running the experiment would have taken less time. This is a case of failing to act simply because he or she did not have complete knowledge of whether the experiment would work in the first place.
The example above is irrational by Galef's definition because, in all likelihood, the graduate student would have liked to obtain a yes-or-no answer to the question "does the experiment work?" in as little time as possible, and sometimes that means running the experiment before fully understanding what the outcome will be. Of course, it takes intuition to know when it's time to put down the pen and paper and do the actual lab work, and that's why it's rational to rely on intuition.
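To make the trade-off concrete, here is a minimal sketch of the time comparison I have in mind. Everything in it is hypothetical: the function name, the day counts, and the assumption that the calculation must still be followed by the experiment when it predicts success are all made up purely for illustration, not anything from Galef's talk.

```python
def expected_days(strategy, p_success, experiment_days=2.0, calculation_days=5.0):
    """Expected time to a yes-or-no answer under each (hypothetical) strategy."""
    if strategy == "experiment_first":
        # One run of the experiment settles the question either way.
        return experiment_days
    if strategy == "calculate_first":
        # Always pay for the calculation; run the experiment only if the
        # calculation predicts success (with probability p_success).
        return calculation_days + p_success * experiment_days
    raise ValueError(f"unknown strategy: {strategy}")

for p in (0.2, 0.5, 0.8):
    print(p,
          expected_days("experiment_first", p),
          expected_days("calculate_first", p))
```

With these made-up numbers, just running the experiment reaches an answer sooner no matter how likely success is, which is the sense in which insisting on complete information beforehand can waste time.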
In a sense, these arguments depend strongly on Galef's definition of rationality, but I see no reason why this isn't a good definition to work with.