Thursday, December 29, 2011

So I've been busy...

Yes, I realize that I missed a regular post day. However, I've been home drinking lots of coffee and beer and eating creamed cabbage, creamed potatoes, creamed, well, everything. So I humbly apologize that my thoughts have shifted to gastronomy this week. I should also mention that I will likely miss next week's post.

In the meantime, since I have been thinking much about food, I direct your attention to the Maillard reaction, one of the most important chemical reactions in cooking! If you like toast, roasted vegetables, or browned steaks (I'm vegetarian, but fondly remember this part of my past), you have this non-enzymatic reaction to thank.

Happy holidays!

Wednesday, December 21, 2011

The scientific method is not universal

I've often heard others quip that physicists worry too little about the details, whereas chemists worry too much. Recently I've come to better understand the attitudes that give rise to statements such as these.

If you've read other posts on this blog, you've probably realized that I most closely associate with experimental physicists. When I approach a problem, I form a model in my mind that makes intuitive sense. Then, I make the model more rigorous through pictures and equations. Throughout this step, I take extra care to ensure that the model parameters can be easily measured, an approach which no doubt adds a certain flavor to my models. My experiments then become realizations of the model to confirm its predictive power. Sometimes, modeling occurs after an experiment, and, almost always, the process jumps back to an earlier step but proceeds with more refinement.

What I've learned to appreciate is that this process flow varies between individuals in different scientific fields. For example, theoretical physicists don't place as much emphasis on how easy it is to measure a model's parameters, but their theories often extend over a wider range of phenomena. As another example, field biologists must take much more care in preparing an experiment than a physicist in the lab due to costly resources, limited time, and small sample sets. All of these factors leave their mark on the steps in the scientific method.

This understanding is valuable because multidisciplinary research is becoming vital to solving many scientific problems. Before we can help each other, scientists must learn to appreciate and understand our differences; otherwise, we'll never take proper account of the details.

Tuesday, December 20, 2011

A "wow" moment

Every once in a while I come across a scientific image and say, "wow." It happened yesterday when I saw the image from this article in last Wednesday's Nature (the image should not require a subscription).

The full image is from this article and compares the size of a tiny, parasitic wasp to two single-celled paramecia.

Wednesday, December 14, 2011

Harmful errors in science journalism

Popular science reporting is incredibly important for making current research known to the lay public in a concise and simplified manner. It often acts as the go-between for channeling information from scientists and engineers to the people who could benefit from the research. Unfortunately, translating new concepts from the jargon of specialists into language accessible to untrained readers poses challenges, and the science in these articles is often erroneous. This is a rather well-known problem and the reason why I read such articles with skepticism.

Two different friends sent me this article recently from MIT News reporting on the development of a camera for generating "Trillion-frame-per-second video." Armed with my usual dose of skepticism, I glanced quickly through it and the accompanying YouTube video. I think the significance of the work is a bit overstated [1], but the article sufficiently addresses its limitations and applications. Unfortunately, in describing the operation of a streak camera, the author Larry Hardesty notes that
"The aperture of the streak camera is a narrow slit. Particles of light — photons — enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving photons more than it does early-arriving ones."
Electric fields do not deflect photons; they deflect moving electrons. What is likely happening inside the camera is that photons scattered from the bottle strike a photocathode, which ejects electrons via the photoelectric effect. These electrons are in turn deflected by the time-varying field within the camera to different points on the detector. Electric fields can be modulated with RF equipment up to a few hundred GHz, which roughly corresponds to a few tenths of a trillion oscillations of the field per second. I believe that this is how the noted number was obtained.
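To illustrate the time-to-space conversion described above, here is a toy model of my own (not from the article; the linear sweep and the sweep rate are arbitrary assumptions) showing how a ramping deflection field turns an electron's arrival time into a position on the detector:

```python
# Toy model of a streak camera's time-to-space mapping.
# Assumptions (mine, for illustration only): the deflection field ramps
# linearly during the sweep, and optics are ideal.

def streak_position(arrival_time_ps, sweep_rate_um_per_ps=50.0):
    """Map an electron's arrival time (ps) to its deflected position (um).

    Because the transverse field grows during the sweep, late-arriving
    electrons are deflected farther than early ones, so position along
    the sweep axis encodes time.
    """
    return sweep_rate_um_per_ps * arrival_time_ps

# Two electrons arriving 1 ps apart land 50 um apart on the detector:
early = streak_position(0.0)
late = streak_position(1.0)
print(late - early)  # 50.0
```

In a real streak camera the sweep is only approximately linear over a finite window, but the essential idea, that a time-varying field trades temporal resolution for spatial resolution, is captured by this mapping.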

So what's the big deal? The point of the article, after all, is to inform a general audience about a newly developed camera—the details of its operation should not matter. Here are a few reasons why errors such as these are harmful:
  1. Many people who read these types of articles are students or young people who aspire to be scientists. To provide incorrect information hurts their education by establishing false ideas within their minds.
  2. It's dishonest. Whether or not the author knew he had made a mistake, the article should have been reviewed by one of the scientists before its publication [2]. Eventually, people could learn to associate dishonesty with science, and this is clearly undesirable.
  3. It's distracting and draws attention away from the article's main message. I'll admit that I had made up my mind that the research was flawed after I saw the error. Only a second, careful reading revealed that nothing about the research was in error.
This article is just a minor example of an all-too-common problem with science journalism. Unfortunately, I've seen much worse mistakes. I wonder how many times I've been fooled by a false proposition in a piece of reporting.

[1] It images processes that are stationary for at least one hour. For example, it cannot image a turtle that walks through its field of view. Dr. Velten's quote at the beginning of the article might be placed in better context by noting that while few things are too fast for the camera, many things are too slow for it.

[2] I was once quoted several times in an article about my research that I did not realize had been published until I chanced upon it while Googling my name, nine months after it appeared in print. I received no word whatsoever from the author that it had appeared in the Orlando Sentinel, and I was misquoted as having stated that our work existed only in theory. If the author had asked me to review the article prior to its publication, this error would not have been made.

Monday, December 12, 2011

An optical method for finding exoplanets

This morning I read an Optics Letter from 2005 entitled "Optical Vortex Coronagraph" that described an optical system for detecting an exoplanet orbiting a star whose light could be up to 1e8 times brighter than the planet's reflected light.

The system is detailed below. In a traditional coronagraph (i.e. one not employing a vortex phase mask), the mask in focal plane FP1 is an opaque block of very small angular extent that transmits no light. Because the image of the star at which the system is pointed is formed in plane FP1, the star's light is filtered out of the final image by this mask. The Lyot stop in plane PP2 then blocks the starlight that is diffracted by the mask. The intensity collected in plane FP3 is therefore dominated by any point source near the star, e.g. an exoplanet.
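The traditional block-mask version can be modeled numerically with a pair of Fourier transforms between pupil and focal planes. The sketch below is my own toy simulation (the grid size, aperture, mask, and Lyot-stop radii are arbitrary assumptions, not values from the paper); it shows that only a small fraction of the on-axis starlight survives to the final image plane:

```python
import numpy as np

# Toy model of a traditional (opaque-block) coronagraph. My own sketch;
# all dimensions are illustrative assumptions, not from the Optics Letter.

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)

pupil = (r < 64).astype(float)        # entrance pupil (plane PP1)
occulter = (r > 6).astype(float)      # opaque block over the star (FP1)
lyot_stop = (r < 50).astype(float)    # undersized Lyot stop (PP2)

def to_focal(field):
    """Propagate pupil plane -> focal plane (centered 2D FFT)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

def to_pupil(field):
    """Propagate focal plane -> pupil plane (centered inverse FFT)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

fp1 = to_focal(pupil)                 # star image forms in FP1
pp2 = to_pupil(fp1 * occulter)        # diffracted starlight rings the pupil
fp3 = to_focal(pp2 * lyot_stop)      # final image in FP3

# Fraction of the on-axis starlight surviving to the final image plane:
leak = (np.abs(fp3) ** 2).sum() / (np.abs(fp1) ** 2).sum()
print(f"on-axis starlight leakage: {leak:.4f}")
```

An off-axis planet would be modeled by a tilted input wavefront; its image forms away from the occulter in FP1 and so passes through largely unattenuated, which is the whole point of the instrument.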


What is not clear to me is why replacing the block in FP1 with a vortex phase mask improves the performance of the coronagraph. Mathematical arguments are presented, but I find an intuitive explanation lacking.

Wednesday, December 7, 2011

How best to view the internet as a learning tool

I work on some projects that require knowledge of basic biology, such as cell structure, biochemistry, and laboratory technique. However, I was trained as a physicist and engineer, and, as a result, have had an extremely limited education in the biological sciences. For example, my last biology class was anatomy during my junior year in high school.

The internet has been essential in bringing me up to speed in these topics. I've put resources such as MIT's OpenCourseWare and the independent OpenWetWare to good use. Companies such as Invitrogen provide valuable tutorials and explanations on laboratory practices as well. The best part about these resources is that I can find exactly the information that I need to know when I need to know it.

I believe that very few doubt the usefulness of the web as a learning tool, but how to use it as such is certainly a topic of debate. Based on my own experiences, I think that internet learning is best used as an independent collection of bits of knowledge that are accessed as needed.

Let's break this definition down into parts. By independent, I mean that the value of internet resources is determined by the individual who needs to know something. A catalog of optical parameters of semiconductor materials will likely serve little purpose to a field biologist. The downside to this is that the web must contain an exhaustive amount of knowledge to be useful to everyone. If there's a possibility that someone may wish to know something, then it must already be contained on the web [1].

"Bits of knowledge" makes intuitive sense, but a formal definition may not exist. If I wish to know how to stain a cell using immunofluorescence, is each step considered a bit of knowledge, or is the entirety of the process considered one "chunk?" I don't think that this detail is particularly relevant to my discussion, but it is interesting to think about how one may quantify knowledge [2].

Finally, the ability to access knowledge as needed makes it efficient. The human brain can only hold on to a limited amount of data. Some details are best stored on machines; otherwise, numerous human specialists would be required to perform complex tasks, each one intimate with one small part of the task. In my graduate work, I can learn about cytoskeletal filaments as needed, or my advisor could hire a cell biologist to advise me on a small number of issues. The first option is decidedly cheaper. In addition, ease of access is important, and it spans topics such as mobile devices, bringing the internet to developing countries, and search algorithms.

So, in my opinion, internet learning is best utilized as a user-valued collection of information that is accessed accordingly. Communication through the internet, such as e-mail correspondence with teachers, is important and is compatible with my definition, since I do not put limits on how knowledge is delivered. Failure to properly use the internet as a learning tool usually comes from poor access (e.g. bad search engine algorithms) or from a user improperly identifying what they need to know. In the last case, the success of internet learning cannot be determined by machines; like many things, it boils down to the human element.

[1] I can't get the thought of the internet as a causal knowledge database out of my head right now, since it can only contain knowledge that has already been generated. It will never contain knowledge from the future, unless, perhaps, new knowledge can be generated from data it already holds, but that opens the question of the definition of knowledge.

[2] Information theory comes to mind here. The information content of a symbol is quantified as the logarithm of the number of possible symbols it could have been, so a message's content grows with both its length and its alphabet size.
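To make that footnote concrete, here is a small example of my own using the standard Shannon measure for equally likely symbols:

```python
import math

# For an alphabet of N equally likely symbols, each symbol carries
# log2(N) bits, so a uniform message of L symbols carries L * log2(N) bits.

def bits_per_symbol(alphabet_size):
    """Information content of one symbol drawn uniformly from the alphabet."""
    return math.log2(alphabet_size)

def message_information(length, alphabet_size):
    """Total information in a message of independent, uniform symbols."""
    return length * bits_per_symbol(alphabet_size)

print(bits_per_symbol(2))            # 1.0 -- a fair coin flip is one bit
print(message_information(8, 26))    # an 8-letter uniform string: ~37.6 bits
```

When the symbols are not equally likely, the right quantity is the Shannon entropy, a probability-weighted average of these logarithms, which is one answer to the question of how to quantify a "bit of knowledge."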