Sunday, March 28, 2010

A hierarchy of concepts

Richard Feynman, in his Lectures on Physics, had a habit of discussing both the philosophical and practical issues of the science that he taught. One such issue was the idea that waves could possess particle-like properties, such as momentum and position. As he notes in his Lectures, Vol. 3,

"Only measurable quantities are important to physics. This is false. We need to extend current concepts to unknown areas and then test these concepts. It was not wrong for classical physicists to extend their ideas of momentum and position to quantum particles. They were doing real science, so long as they then checked their assumptions."

What I find valuable in this statement is the idea that we come to understand new phenomena by applying concepts from processes and things that are already well understood. So what if a wave didn't traditionally possess momentum or a position? The two classical constructs of wave and particle could at least be used to further our understanding of quantum entities, which possess both wave-like and particle-like properties but do not act entirely like either.

The same idea, I think, can be applied to teaching. First find a concept that students are familiar with, then show how this concept can be extended to describe a new phenomenon. To be self-consistent and complete, however, the discussion must also cover where the concept fails to describe the phenomenon; momentum and position obviously can't describe the quantum interference of particles. In this manner, knowledge of the world is built up from a patchwork of prior understanding.

Addendum
I found this statement, from E. T. Jaynes's Probability Theory: The Logic of Science, particularly enlightening:
"When a data set is mutilated (or, to use the common euphemism, ‘filtered’) by processing according to false assumptions, important information in it may be destroyed irreversibly. As some have recognized, this is happening constantly from orthodox methods of detrending or seasonal adjustment in econometrics. However, old data sets, if preserved unmutilated by old assumptions, may have a new lease on life when our prior information advances."

Sunday, March 14, 2010

A big theory for tiny things

Much of the theory of plasmonics is derived from Maxwell's electromagnetic theory, a theory that works incredibly well for systems that are very large compared both to the atoms making up the media and to the wavelength of the light. Plasmonics, however, deals with phenomena that occur in nanoscale devices, which are often smaller than the wavelength of the light and may contain only a few thousand atoms. In the case of surface-enhanced Raman scattering, even single-molecule detection is possible.
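
As a concrete example of the kind of effective property involved (the standard Drude model for a metal's permittivity, chosen here purely as an illustration), one often writes

    \varepsilon(\omega) = \varepsilon_\infty - \frac{\omega_p^2}{\omega^2 + i\gamma\omega}, \qquad \omega_p^2 = \frac{n e^2}{\varepsilon_0 m}

where n is the conduction-electron density, \gamma is a damping rate, and m is the electron mass. Every quantity here is a bulk average over enormous numbers of atoms; nothing in the formula knows that a nanostructure may contain only a few thousand of them.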

I wonder if researchers in plasmonics are ever unsettled by the fact that they are using a theory based on effective material properties (permittivity and permeability) to describe media that are too small for the effective-material picture to strictly hold. This state of affairs suggests that we should expect deviations from the predictions of Maxwell's equations in plasmonics experiments. But does this undermine the credibility of the results of these experiments? Consider this question: how does a researcher sell her novel plasmonics device or experiment to reviewers and grant committees if it is expected not to work as, well, expected?

Taking a more positive viewpoint, perhaps this is what makes a field exciting for experimentalists. Here is a class of phenomena for which no complete theory exists; experiments are needed to establish the working principles from which a theory can evolve.

Monday, March 8, 2010

Guidelines for writing

I've been thinking a lot about effective writing and communication in academia over the past few weeks, which is why this post from Ross McKenzie's blog grabbed my attention. He has also posted links to other guidelines in the past, such as this one for writing good Physical Review Letters (PRL) papers.

I think it would be a good exercise to create my own list of guidelines for writing papers; for that matter, it might be a good exercise for anyone in academia. Having these ideas down in print, rather than vaguely defined in my mind, might make them more concrete and help me better define what I mean by the quality of a paper.

Saturday, March 6, 2010

A purpose for every project

A popular term in the field of optical sensing right now is "task-specific sensing." It is a system-design paradigm in which the relationships between a system's components are optimized for the purpose of the system as a whole, rather than making each component perform as well as possible on its own. For example, a system that only needs to detect an object in its field of view does not need a lens design that minimizes aberrations and maximizes spatial resolution. Instead, the scene simply has to be imaged onto the sensor in a way that facilitates efficient image processing by the software. In other words, the relationships between the optics, electronics, and software should be optimized toward the goal of detecting an object, not seeing it clearly.
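
To caricature the difference (a toy sketch of my own; the designs, numbers, and scoring are entirely made up, not drawn from the task-specific-sensing literature), grade two one-pixel "cameras" not on resolution but on the only task that matters, deciding whether an object is present:

    # Toy comparison: component-wise merit vs. task-specific merit.
    import random

    random.seed(0)

    def measure(design, scene):
        # One-pixel "image": total light from the part of the scene
        # that falls inside the design's field of view.
        visible = scene[: int(design["field_of_view"] * len(scene))]
        return sum(visible)

    def detect(signal, threshold=2.0):
        # Declare an object present if the pixel is bright enough.
        return signal > threshold

    def task_score(design, scenes, truths):
        # Task-specific figure of merit: detection accuracy.
        correct = sum(detect(measure(design, s)) == t
                      for s, t in zip(scenes, truths))
        return correct / len(scenes)

    def make_scene(object_present):
        # Ten patches of dim background; an object adds a bright spot.
        scene = [random.uniform(0.0, 0.3) for _ in range(10)]
        if object_present:
            scene[random.randrange(10)] += 3.0
        return scene

    truths = [random.random() < 0.5 for _ in range(500)]
    scenes = [make_scene(t) for t in truths]

    sharp  = {"field_of_view": 0.4, "resolution": 100}  # "better" lens
    coarse = {"field_of_view": 1.0, "resolution": 10}   # "worse" lens

    print("sharp :", task_score(sharp, scenes, truths))   # ~0.7
    print("coarse:", task_score(coarse, scenes, truths))  # ~0.97

The low-resolution, wide-field design wins on the task score, even though any component-wise metric (the unused "resolution" entry) would rank it last.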

Nature has been performing task-specific design for a long time. The compound eye of a fly has very poor resolution, since each bump on the eye acts as a single lens coupled to a sensing structure. Fortunately for the fly, it does not need to see well to find food. It does, however, need to avoid predators if it wishes to stay alive. The fly's eye has an extremely large field of view, so it can see things such as flyswatters coming at it from many different angles.

The task-specific paradigm has also led me to think about how research projects in academia are valued. There is a very common notion that any research is good research. However, if a project creates some device or accomplishes some goal with no particular application in mind, the idea of task-specific sensing might suggest that the simplest and least costly approach would have been not to do the research at all, since no need for it existed. I'll pose the question like this: which should come first, the need for a researched solution, or the solution itself?

Of course, one can argue that research performed without a particular need may eventually find its uses, but I think the question is still a valid one to ask before placing value on a piece of research.