Wednesday, November 30, 2011

Being rational with science

There's a cool talk posted at the blog Measure of Doubt that was given recently by one of the blog's authors, Julia Galef. The talk concerns the idea of a straw Vulcan, a straw-man character based on Star Trek's race of ultra-logical humanoids. Galef argues that the Vulcans base their actions and decisions on a logic that's popularly perceived as rational when it is in fact not, because she defines rationality in one of two related ways: 1) a method for obtaining an accurate view of reality, and 2) a method for achieving one's goals. To make her argument, she presents five beliefs about Vulcan behavior that are commonly held to be rational and then gives examples from both Star Trek and real life where this behavior violates her definition of rationality.

I particularly like the second and third items on her list—never making a decision on incomplete information and never relying on intuition—because these are common mistakes that scientists make. For example, suppose a graduate student wishes to set up an experiment that he or she is unsure will work. The student may take one of two courses of action (really three, the third being a combination of the first two). The first is to try the experiment and see if the outcome is desirable. The second is to carry out a number of calculations to determine whether the desired outcome will be produced, and only then perform the experiment. The fallacy occurs when the student plans too much and wastes time on arduous calculations when simply running the experiment would have taken less time. This is a case of failing to act simply because he or she did not possess complete knowledge of whether the experiment would work in the first place.

The example above is irrational by Galef's definition because, in all likelihood, the graduate student would have liked to have obtained a yes-or-no answer to the question "does the experiment work?" in as little time as possible, and sometimes this means running the experiment before fully understanding what the outcome would be. Of course, it takes intuition to determine when it's time to put down the pen and paper and do the actual lab work, and that's why it's rational to rely on intuition.

In a sense, these arguments depend strongly on Galef's definition of rationality, but I see no reason why this isn't a good definition to work with.

Tuesday, November 29, 2011

Momentum and position—it's all you need

I've had a bit of downtime recently, some of which I've spent thumbing through my undergrad QM book, Griffiths' Introduction to Quantum Mechanics, out of curiosity.

In the very first chapter he makes the statement, "The fact is, all classical dynamical variables can be expressed in terms of position and momentum." To be honest, I never fully appreciated this as an undergrad, and if I did, it certainly did not leave enough of an impression to remain in my memory.

Is this bit of knowledge a common oversight in the education of physics students, or something that simply went unnoticed by me? Furthermore, how important is it to the development of a student's understanding of physics?

Friday, November 18, 2011

Deconvolution—I want it all!

A groupmate and I were discussing practical deconvolution of an instrument's response from data the other day. After some searching on the internet, he came across the following words of advice:
When it comes to deconvolution, don't be greedy.
This is actually pretty good advice. In theory, the idea works great. Fourier transform the data, divide by the instrument's transfer function, then inverse Fourier transform to get the deconvolved data. But with real data, you risk amplifying the noise or introducing artifacts, especially with FFT-based methods. This last point is pertinent if your instrument's impulse response is very short compared with the span of the data. So use some caution; just because you can deconvolve, that doesn't mean you should.

Thursday, November 17, 2011

Google Scholar Citations is up

This morning I received my notice that Google Scholar Citations, Google's new online tool for tracking your own publications, citations, etc., was up and running.

I haven't played around with it much, but I'm impressed that it automatically found everything I've produced that's on the web, with only one error: it mistakenly concluded that I had authored a religious text.

I'm not quite so certain that it will be useful. After all, the number of citations my papers generate doesn't improve the quality of my work. Still, it's fun to track these statistics so easily, even if doing so only satisfies my own curiosity.

Wednesday, November 16, 2011

It's (not) just all a bunch of hippie crap

I think that we physical scientists possess a bit of hubris when it comes to our perceived understanding of nature. Its roots lie in the view that the laws of physics govern everything and the rest is just details. Unfortunately, this hubris is shared by many of my physicist, engineer, and chemist colleagues. My issue is not, however, with their assurance that nature can be reduced to a set of relatively simple laws; rather, it is with their perception of the social sciences (and some natural sciences) as pseudo- or unscientific.

I witnessed this type of hubris many times as an undergrad where fields such as political science, economics, and sociology were labeled not as social science but as humanities (I went to an engineering school). Most of my classmates enjoyed their required classes in the humanities since they were a nice break from their challenging engineering courses. And perhaps this is the point where the hubris begins: as a belief that the humanities are somehow "easier."

It bothers me when some of my grad school friends roll their eyes or make air quotes with their fingers as they sarcastically refer to work in the humanities as science. This perception of the humanities as unscientific does not survive a comparison with the logical structure of any science. Hypotheses are formed, observations are made, and conclusions are drawn from the observations and prior knowledge, just as in the physical sciences. If there are differences, they lie not in this structure but in the scales of measure involved; the physical sciences tend to employ more quantitative measures than the social sciences, but this by no means makes one field more scientific than another.

On a positive note, I don't see this arrogance in most of my grad school friends. I just wanted to explore why a few of them dismiss the humanities as unscientific so I can better defend my position in the future.

And if I'm being hypocritical through my continued use of the word "humanities" rather than "social sciences," it's only because I could not think of a better word to use. Ultimately, it's all science to me ;)


Wednesday, November 9, 2011

My relationship with curve fitting

My understanding of curve fitting has changed a lot since I took statistics in high school. Back then it was simply an exercise that produced a line through trivial data that my classmates and I had collected. The function for the line could predict the outcome of future experiments, and therein lay its usefulness.

In college, its importance increased when I learned how to extract physically meaningful quantities from the fitting parameters. The fit was a tool for extracting information from the noise of experimental randomness. It became more complex as well—the types of models with which I could fit the data grew far beyond simple lines to include Gaussians, decaying exponentials, and many other transcendental equations. The importance of curve fitting at this point in my education lay beyond simple prediction; it produced for me the reality that lay behind the noise, and this reality was encoded in the values of the fit parameters. Curve fitting had become absolute and always revealed the true physics behind some process.

Now, after four and a half years of graduate school, I've learned that the human element in curve fitting is paramount. I no longer see it as the purely objective tool that I did before I received my B.S. The moment of change occurred when I realized that the results of a fit can be rendered meaningless simply by adding too many parameters to the model (cf. this post from Dr. Ross McKenzie, where he notes a paper in Nature containing a fit of a 17-parameter model to 20-odd data points). If, as the saying attributed to von Neumann goes, one can fit an elephant using only four parameters (and wiggle its trunk with five), then clearly any other model, including one that a scientist is arguing for in a paper, can be made to "explain" data if it possesses enough free parameters. Furthermore, the initial values for the fitting procedure can change the outcome, since the routine may settle on a local minimum in the solution space. Therefore, an educated guess performed by an informed human is a critical element of any curve fitting routine.
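The 17-parameters-to-20-points situation is easy to reproduce. In this numpy sketch (the data are synthetic, invented for illustration), a 17-parameter polynomial happily "explains" 20 noisy points drawn from a straight line, yet its predictions outside the data range are nonsense:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
x = np.linspace(0.0, 9.0, 20)
y = 2.0 * x + 1.0 + 0.5 * rng.standard_normal(20)  # truth: a line, y = 2x + 1

# Two-parameter model (the "right" one) vs. a 17-parameter polynomial.
line = Polynomial.fit(x, y, deg=1)
wiggly = Polynomial.fit(x, y, deg=16)

# The over-parameterized model always "fits" at least as well on the data
# it was tuned to...
resid_line = np.linalg.norm(line(x) - y)
resid_wiggly = np.linalg.norm(wiggly(x) - y)

# ...but extrapolates absurdly (the true value at x = 15 is 31).
pred_line = line(15.0)
pred_wiggly = wiggly(15.0)
```

Nothing about the small residual of the 17-parameter fit tells you its parameters mean anything; the comparison only becomes honest once you ask the model to predict something it was not tuned to.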

My experiences with curve fitting in graduate school have completely transformed my opinion of its value. It certainly no longer appears to me as an absolute tool. I'm also much more careful when assessing conclusions in papers that employ some sort of regression since I've personally experienced many of its pitfalls.

I think it's incredibly important to make undergraduates aware that curve fitting goes beyond a simple exercise of plugging data into a computer and clicking "Go." Both intuition about the physics that generated the data and the ability to make objective judgements about the value of a model are crucial to making sound conclusions. What is the variability in the parameters with the range of data included in the fit? Do the parameters represent physical quantities or are they used to simply facilitate further calculations? What is the degree of confidence in the fit parameters? Are there too many free parameters in the model? Is the original data logarithmic, and, if so, was the fit performed on a logarithmic or linear scale? All of these questions and more should be addressed before presenting results based on a fitting procedure.
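The local-minimum pitfall in particular can be seen without any fitting library at all. Even for a one-parameter model y = sin(ωx), the sum-of-squares error landscape is riddled with local minima in ω, so a descent-based routine started far from the truth will settle in the wrong valley (the data below are synthetic, invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(x.size)  # true frequency: 2.0

# Sum-of-squares error as a function of the single fit parameter omega.
omegas = np.linspace(0.1, 6.0, 600)
sse = np.array([np.sum((np.sin(w * x) - y) ** 2) for w in omegas])

# Count interior local minima of the error landscape.
interior = (sse[1:-1] < sse[:-2]) & (sse[1:-1] < sse[2:])
n_minima = int(np.count_nonzero(interior))

# The global minimum sits near omega = 2, but it is far from the only valley.
best = omegas[np.argmin(sse)]
```

A gradient-following routine handed an initial guess near one of the other valleys would converge, report a small residual, and still be wrong—which is exactly why the informed initial guess matters.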

Tuesday, November 8, 2011

The best abstract ever?

By way of my advisor, concerning the recently found anomaly in the neutrino speed measurement at CERN:


As far as abstracts go, it does succinctly summarize their findings ;)

Monday, November 7, 2011

Understanding the generalized Stokes-Einstein equation

Mason and Weitz published a paper in 1995 about a technique for extracting bulk material parameters from dynamic light scattering measurements on complex fluids. That is, they established a mathematical relationship between the fluctuations of scattered light intensity from a colloidal suspension and the shear moduli of the complex fluid as a whole.

One primary assumption in this derivation is the equivalence of the frequency-dependent viscosity to a so-called memory function:

ς(s) = 6πa η(s),

where η(s) is the Laplace frequency-dependent viscosity, ς(s) is the memory function, and a is the particle radius. As a special case, the memory function of a purely viscous fluid is a delta function in time (so ς(s) is a constant in the Laplace domain), since such a fluid does not store energy (i.e., it possesses no elasticity). Substituting this into the well-known Stokes-Einstein equation leads to a relation between the colloidal particles' mean-squared displacement ⟨Δr²(t)⟩ (measured by dynamic light scattering) and the complex shear modulus of the fluid, G*(ω) (after conversion to the Fourier frequency domain):

G*(ω) = k_B T / [π a · iω · ℱ{⟨Δr²(t)⟩}]
The authors note at the end of the paper that it's unknown why light scattering techniques should produce the shear modulus of the fluid, since they measure elements along the diagonal of the system's linear response tensor, whereas the shear moduli are contained in the off-diagonal elements.
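A quick sanity check of the generalized Stokes-Einstein relation in the Laplace domain, G(s) = k_B T / [π a s ⟨Δr²(s)⟩]: a purely viscous fluid should come out with G(s) = ηs, i.e. no elastic part. The particle radius, viscosity, and temperature below are invented for illustration:

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0           # temperature, K
eta = 1.0e-3        # Pa*s, water-like viscosity (illustrative)
a = 0.5e-6          # m, probe particle radius (illustrative)

# Stokes-Einstein diffusion coefficient of the probe sphere.
D = kB * T / (6 * np.pi * eta * a)

# Purely viscous fluid: <dr^2(t)> = 6 D t, whose Laplace transform is 6 D / s^2.
s = np.logspace(0, 4, 50)   # Laplace frequencies, 1/s
msd_laplace = 6 * D / s**2

# Generalized Stokes-Einstein relation.
G = kB * T / (np.pi * a * s * msd_laplace)

# G(s)/s should return the input viscosity at every frequency.
eta_recovered = G / s
```

The algebra collapses because D itself carries a factor k_B T / (6πηa), so the relation hands back exactly the viscosity that generated the particle motion—the self-consistency that motivates the whole technique.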

They also note (with explanations I don't quite understand) that "...the light scattering may not provide a quantitatively exact measure of the elastic moduli; nevertheless, as our results show, the overall trends are correctly captured, and the agreement is very good." (emphasis mine)

Thursday, November 3, 2011

Question everything

As you may know, my major field is optics, which concerns the study and application of light. Throughout my studies I've been constantly amazed that Maxwell's electromagnetic theory of light, which has been around since the late 1800s, still contains features that have not been settled or that have simply been overlooked by scientists. One such feature is the disagreement between Minkowski's and Abraham's descriptions of the momentum carried by an electromagnetic wave.

In a 2010 PRA Rapid Communication, Chaumet et al. expand on earlier work by Hinds and Barnett that examines the force on a dipole in a time-varying (i.e. pulsed) plane wave. This force is written in full as
F_j = P_k ∂_j E_k + ε_jkl (dP_k/dt) B_l
where P_j is a component of the dipole moment, E is the electric field, B is the magnetic induction, and ε is the Levi-Civita tensor (repeated indices are summed). The first term relates to both the radiation pressure and the gradient force. The second term, according to Hinds and Barnett, is usually absent in laser trapping and cooling texts because it is proportional to the time derivative of the Poynting vector, which is zero in common cooling setups. This term is responsible for the repulsion of systems such as a two-level atom from the leading edge of the wave when the first term alone predicts an attraction.
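A quick numerical check of how the two terms interact (not the Hinds-Barnett two-level-atom calculation itself): for a dipole with a static, real polarizability α riding a 1D pulsed plane wave, E_x = f(t − z/c) and B_y = E_x/c, the two contributions to F_z are equal and opposite. Dropping the second term is therefore not a small correction—it changes the answer qualitatively. The pulse shape and α below are invented for illustration:

```python
import numpy as np

alpha = 1.0e-3   # static, real polarizability (illustrative units)
c = 1.0          # speed of light (natural units for this demo)

# Pulse f(u) with u = t - z/c: Gaussian envelope on a carrier, and its derivative.
u = np.linspace(-10.0, 10.0, 2001)
f = np.exp(-0.5 * (u / 2.0) ** 2) * np.cos(5.0 * u)
fp = np.gradient(f, u)  # df/du

# For a plane wave E_x = f(u): dE_x/dz = -fp/c and dE_x/dt = fp, with P_x = alpha*E_x.
term1 = alpha * f * (-fp / c)   # P_k d_j E_k      ->  P_x dE_x/dz
term2 = alpha * fp * (f / c)    # (dP/dt x B)_z    ->  (dP_x/dt) B_y

F_z = term1 + term2  # cancels identically for a dispersionless dipole
```

With an instantaneous polarizability the pulse pushes and pulls the dipole in perfect balance; the net repulsion from the leading edge appears only once the dipole's dynamic (dispersive) response is included, which is exactly why the second term cannot be discarded for pulsed fields.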

Works like this make me wary of blindly using formulas when performing calculations since it reminds me that a theory may not be complete or its assumptions explicit when presented to a niche audience.

Wednesday, November 2, 2011

No deep insights for today

Well, I just returned from a great wedding in Columbus this past weekend (no, not my own), which means I've been incredibly busy catching up at school... again. I wanted to write a post on curve fitting since I've been involved with the task in my data analysis lately, but I just haven't had the time to flesh out a coherent post.

So instead, I leave you with the URL of a cool new website from Ben Goldacre: http://nerdydaytrips.com/. The site is a user-fed collection of short day trips that might appeal to the more—ahem—academic of us. There seems to be a nice garden near me in Lake Wales, FL called Bok Tower. Perhaps if I can get a free weekend I'll pay it a visit.

And did anyone see the Buckeyes game last Saturday? I think we have a future with Braxton Miller.