Wednesday, December 1, 2010

More absolutes

Yesterday I wrote on the significance of relative and absolute measurements, concluding that relative measurements are in some sense less arbitrary than absolute ones because they do not depend on the definition of a physical constant. I claimed that this dependence is the practical problem with absolute measurements.

Further reflection has led me to believe that this is not the practical problem that has presented itself many times over during the course of my studies. Rather, the practical problem is that an absolute measurement is incredibly sensitive to the manner in which it is performed. Physical theories rarely account for the limitations of a real measurement, such as finite integration times and nonlinearities in the measuring devices. It is extremely difficult to extract a theory's parameter from a measurement performed under realistic constraints.

Fortunately, any condition that ruins the agreement between a parameter obtained from a measurement and one obtained from calculation will be present amongst many different measurements. The effects of these conditions will essentially "divide out" under comparison, leaving only the signature of any independent variables that were changed between the measurements. Therein lies the strength of conclusions drawn from relative measurements.
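A toy numerical sketch of this "dividing out" (all numbers hypothetical, purely for illustration): a detector with an unknown gain corrupts every absolute reading, but the gain cancels in the ratio of two readings taken under the same conditions.

```python
# Toy model: an unknown detector gain multiplies every reading.
# The absolute values are corrupted, but ratios between runs are not.

def measure(true_value, gain):
    """An 'absolute' measurement corrupted by an unknown detector gain."""
    return gain * true_value

unknown_gain = 1.25                     # hypothetical systematic, identical in both runs
signal_a = measure(10.0, unknown_gain)  # run with the independent variable at setting A
signal_b = measure(20.0, unknown_gain)  # same conditions, setting B

print(signal_a)             # 12.5 -- each absolute reading is off by 25%
print(signal_b / signal_a)  # 2.0 -- the relative measurement recovers the true factor
```

The cancellation only works because the systematic is common to both runs; anything that changes between the runs survives the ratio, which is exactly the point.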

Tuesday, November 30, 2010

Absolutes vs. relatives

Though my advisor has stressed this for the entirety of my grad school career, I today finally appreciated the significance of relative measurements over absolute ones.

An absolute measurement is one in which a value is extracted from a data set that is physically important in a particular model. A relative measurement, on the other hand, is one that extracts the effect of varying a parameter amongst two or more data sets.

Model-specific parameters are obtained from absolute measurements, usually by curve fitting. Relative measurements, on the other hand, establish relationships between two variables. For example, the reading on a scale will increase in proportion to the mass added on top of it. From this observation, one can infer that weight is linear in mass. A constant, namely the acceleration due to gravity at the Earth's surface, is needed to obtain the absolute value of the weight from a single measurement.
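A minimal sketch of that scale example, with made-up, noise-free readings: the relative result is the proportionality between reading and mass, and only identifying the fitted slope with a constant (here, g) turns it into an absolute weight.

```python
# Hypothetical scale readings (newtons) versus added mass (kilograms).
masses = [0.5, 1.0, 1.5, 2.0]
readings = [4.9, 9.8, 14.7, 19.6]   # made-up, noise-free data

# Least-squares slope through the origin: the relative observation is
# "reading is proportional to mass"; calling the slope g makes it absolute.
slope = sum(m * r for m, r in zip(masses, readings)) / sum(m * m for m in masses)
print(slope)   # ~9.8, the acceleration due to gravity in m/s^2
```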

The practical problem with absolute measurements is that they require certain standards to have any significance. At the start of graduate school, I would often puzzle over why a parameter from a curve-fitting routine differed from theory. I would vary different parameters in my calculation and struggle in vain to determine which independent quantity I had measured "wrongly." I failed to realize, however, that each measurement was made against some standard. A time is measured relative to an internal clock in a circuit; a length is measured relative to a ruler; a mass is measured relative to a scale, which was itself calibrated against some mass standard.

From the above it seems that measurement itself is a relative process, and as such a measurement cannot be "wrong." If standards differ between two measurements, the measured variable will differ as well, and no one can say which measurement produced the "correct" value. Both conclusions are correct so long as they are logically consistent with the measurements from which they are derived.

I am aware of the definitions of the second and other fundamental quantities, but the definitions are simply agreed to based upon the precision of the measurement that produced the standard. They are arbitrary.

If I have to assert anything from this, it is that I value relative measurements above absolute measurements in scientific papers. Relative measurements reveal physical truths, whereas absolute ones tell us how well data fit into some theory.

I hope to write more on this in the future once my thoughts have more fully materialized.

Wednesday, October 20, 2010

A physical standard for time

I've now completed Chapter 2 of Cook's "The Observational Foundations of Physics," my current lunchtime reading. In this chapter, Cook describes a thought experiment in which a beam of caesium atoms is polarized by a strong magnetic field, then enters a region where a strong RF field is applied. Following the RF region, the beam passes through another magnetic field such that atoms whose magnetic dipole moments are not flipped by the RF field are deflected into a beam block. Those atoms that do undergo an electronic transition that is accompanied by a flip of the magnetic dipole moment reach a detector that reports the intensity of the beam. A feedback mechanism adjusts the frequency of the RF field so that the beam intensity at the detector is maximized; in this way, a quantum standard of time is established through the frequency of the RF field that maximizes the atomic beam intensity (you may note the similarity to the Stern-Gerlach apparatus).
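The feedback loop can be caricatured in a few lines (an idealized, hypothetical line shape and a naive hill-climbing servo, not the real apparatus):

```python
# Idealized caesium-clock servo (hypothetical numbers and line shape).
TRANSITION = 9_192_631_770.0   # Hz, the caesium hyperfine frequency

def intensity(freq):
    """Detector signal: maximal when the RF frequency hits the transition."""
    width = 100.0  # Hz, assumed linewidth
    return 1.0 / (1.0 + ((freq - TRANSITION) / width) ** 2)

# Naive feedback: step the RF frequency uphill while the detected
# beam intensity keeps improving.
freq, step = TRANSITION - 1000.0, 1.0
while intensity(freq + step) > intensity(freq):
    freq += step

print(freq)   # settles at the transition frequency, which defines the time standard
```

Once the loop locks, the RF frequency is the standard; asking whether that frequency "drifts in time" is meaningless within the scheme, since it is what time is measured against.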

Cook then proceeds to argue for his thesis, namely that the experiments and observations that are available to us dictate the form of our physical theories. He starts first with the theory. The time-evolution of the caesium atoms is described by the Schroedinger equation. This equation contains a first order time derivative which is a consequence of the wavefunction containing all information about the system at any one point in time. If only one initial condition is required to establish the wavefunction, then it must be first order in time (this is in contrast to the wave equation which is second order in time and whose solution requires an initial condition on the wavefunction and its derivative).

Cook next mathematically defines the operations of the experiment described above, postulating that the two magnetic states of the atoms are described by stationary states of a wavefunction. Using only mathematical arguments derived from the nature of the experiment, he obtains the form of the equations governing the time evolution of the system: the wavefunction evolves under a first-order time derivative. This suggests that how we perform experiments determines the form of our theories. The time standard need not be quantum in nature, as he repeats the argument for a classical, mechanical oscillator. Again he stresses that once the time standard is set, it is meaningless to ask whether or not its period remains invariant with time, since the standard defines time itself. He does recognize that differences arise between identical standard-setting apparatus when they are spatially separated, owing to the geometry of spacetime.

These are some of the thoughts I had while reading this chapter:
  1. Many times physical theories are developed first and then experiments follow that verify their predictions. Does this fact weaken Cook's argument that experiments shape our theories? If the purpose of theory is to predict experimental outcomes, then why argue for the reverse? Which came first, the chicken or the egg?
  2. Cook was careful to explain that his arguments are based on a physical world that is independent of a subjective observer. Still, I wonder how the human perception of time can be reconciled with these arguments. As stated earlier, it makes no sense to ask whether or not the time standard is invariant within the context of observation and theory. But a human can perceive large changes in the period of a slow mechanical oscillator. What is it that acts as an internal time standard for a subjective observer and can it be related to the physical standard?
  3. Cook only obtains the form of the equations of motion for the systems he describes. On the other hand, the theories give meaning to his unspecified parameters, such as energy and the unit of electronic charge. What determines how these mental concepts are developed? Energy is a relatively easy concept to understand. Was this why the fathers of thermodynamics used it as a core concept in physics as opposed to some other mental construct?

Wednesday, October 13, 2010

Creativity in academia

I recently read this very interesting article that is a followup to the author's original book "Hackers," a look into the subculture of the computer geeks who laid the foundation for today's computer-based society. Two of the common qualities of these influential tech giants are their obsessive drive for quality and their playful creativity. Indeed, many modern companies, such as Google, go to great lengths to foster creativity in their employees by giving them freedom and resources to work on side projects and time to think about new products. The idea, I think, is to keep employees' minds fresh and slightly unfocused so that inspiration strikes more often to the company's benefit.

A similar and equally interesting article came out recently on Talking Philosophy's blog in which the author, Benjamin S. Nelson, discusses the creative process itself in relation to a man, John Kanzius, who invented a radio frequency generator to both attack cancer cells and split water molecules (awesome!). Philosophers, starting with Poincaré, have broken the creative process into four successive steps: preparation, incubation, illumination, and verification. I will take the meaning of these steps to be self-evident; I only wish to note my belief that creative environments strive to improve the preparation and incubation steps so that illumination happens more often and with better results.

This being said, I wonder why such environments are not fostered in academia. Graduate students are frequently overburdened with menial tasks: grading papers, acting as teaching assistants, attending class, writing portions of grant reports, attending frequent group meetings, and staying up-to-date on the relevant literature. Add to this exercise, chores, and the hope for a meaningful social life, and one can quickly see that this lifestyle does not support creative solutions to research problems. These other tasks are not without benefits, but if the resources of the mind are constantly employed by a menagerie of simple duties, what room is left for ideas to incubate?

I think academia could really benefit from adopting some of the creative strategies that many companies now use to better the quality of their products. What do you think?

Note: In college there was a video that was often shown in our engineering business classes from some evening tabloid (Dateline or something similar) which followed a company's process for developing a new and improved shopping cart. I can't remember the name of the show or the company, but it is highly relevant here. Does anyone know what I'm talking about?

Update: Found part of the video. The company's name is IDEO, and it focuses on innovative design. Their take on the creative process is very characteristic of the stance that some newer companies are taking.

Friday, October 8, 2010

Let's be clear about what I mean

In Section 1.5 of "The Observational Foundations of Physics," Cook poses this question:

Why is it that mathematics appears as almost essential to physics, is it because the world is made that way, a notion that goes back to the Pythagoreans, or is it because we choose to study those aspects of the world that can be put into mathematical form... or do we bend the world to make it conform to our mathematics?

I'm not so sure that these questions can be answered by investigating the relationship between mathematics and observations as Cook proposes. Rather, the questions seem best dealt with in terms of language and meaning. What does Cook really mean when he asks whether or not the world was "made" to be mathematical? What are the "aspects" of the world that we study; are they objects or ideas? In what way do we "bend" the world?

I think Cook (and myself) may be constrained by the language in which the questions are posed. If that is the case, then it seems reasonable to assert that language plays a significant role in the formation of scientific hypotheses and consequently in science itself.

Wednesday, October 6, 2010

Notes from "The Observational Foundations of Physics"

Section 1.3, Measurements and Standards, is a continuation of the setup for the arguments for Cook's thesis on how measurement affects the logical structure of physics. First, Cook states that the equations of physics are simply relationships between physical states or quantities. These relationships are congruent to the relationships between observations. I am a bit unclear as to what congruent means here, but aside from that the setup so far seems fairly obvious.

He continues onto a more lengthy discussion of the role that standards play in measurement. Every measurement consists of comparing some quantity to a standard quantity. When measuring the length of an object, for example, one simply compares the object's length to the length of a ruler (the standard). Our system of standards plays a significant role in shaping the nature of physical theories.

What was very surprising is that, traditionally, standards for all physical measurements can be derived from four independent standards: length, mass, time, and current. These standards have since been replaced by other physical constants and quantities, but the number of independent standards has remained the same. For example, the unit of length is now derived from the speed of light in free space together with a unit of time, which in turn comes from a frequency standard based on a certain atomic process.

The standard of voltage comes from the standard of frequency and the Josephson effect with help from another fundamental constant, the ratio of Planck's constant to the unit of electronic charge. Mass currently (as of the book's publishing) escapes a relation to the standard of frequency, but it's conceivable that it could be related to energy, voltage, and current through the quantum Hall effect.
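For concreteness, the Josephson relation ties a voltage step directly to a frequency through the constant K_J = 2e/h. A rough sketch (the drive frequency below is a made-up example, and K_J uses the conventional 1990 value):

```python
# Josephson effect: a junction driven at frequency f develops quantized
# voltage steps V_n = n * f / K_J, where K_J = 2e/h.
K_J = 483_597.9e9   # Hz/V, conventional value (K_J-90)

f = 70e9            # Hz, hypothetical microwave drive frequency
n = 1               # first constant-voltage step

V = n * f / K_J     # volts
print(V)            # roughly 1.45e-4 V, i.e. about 145 microvolts per step
```

A measured frequency thus becomes a voltage standard; no mechanical artifact is involved at all.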

The shift from mechanical standards to electronic and quantum standards has greatly increased the precision with which we can measure physical quantities. It has also changed the nature of our physical theories, Cook claims.

Thursday, September 23, 2010

Lunchtime reading: The Observational Foundations of Physics

I have started reading "The Observational Foundations of Physics" by Sir Allan Cook during my lunch breaks. The book's purpose, as Cook states in the first sentence of Section 1.1, "is to attempt to unravel some ways in which the practice of physics determines the form and content of physics and physical theory." In other words, Cook wishes to understand how the practices found in physics affect physical theories and the practices themselves. It is as if there existed a feedback loop such that performing experiments changed not simply the theory used to describe a phenomenon but the nature of theory itself.

Further in Section 1.1, he poses these questions that are central to his analysis:
  1. "Why should physics be so effective, and what does that tell us about the world of physics and our ways of gaining knowledge of it?"
  2. "Is there a real world that exists independently of whether I or anyone else is looking at it, or are all the ideas I have about a world external to me just the construction of my mind?"
He defers a thorough answer to the second question until the end of the book, but does offer that he believes that most physicists, while working at the bench or on a computer, act as if an external world existed independent of their attention.

Section 1.2 deals with observations and sets out many of the premises of his arguments. Observation and experiment are taken to be equivalent. Observations also have two aspects, objective and subjective. Of the subjective aspect, only the communal nature of observation is of consequence to his arguments: science is a social construct, and scientists hold great influence over one another, such that the act of observation is never truly independent of people other than the experimenter.

Cook goes to some length to explain that physics is empirical, "with observation primary and theory secondary," but he concedes that observation can rarely be performed without some theory underlying the act of observing. He gives the example of reading a voltage from a digital multimeter. The direct observation is of figures on an LCD readout, the output of numerous electronic circuits that respond to the potential difference between two probes, which in turn relates to the potential-energy difference of electrons between two points in a circuit. Of course, electrons are theoretical constructs. The theories underlying an observation can in some ways assure an experimenter that the results are telling us something of the real world and are not subject to extraneous errors or misinterpretations. For simplicity, an observation is defined as the operations that lead to a measurement and result in "raw data." The data is considered "raw" regardless of the complexity of the measurement.

Finally, theories are models of observations, not a model of the real world itself. "I take a theory to be a mathematical realisation of an abstract system that has properties corresponding to those of a set of observations... It is in that sense that I take a theory to be a model of the world of observations, with the implication that there is a more fundamental correspondence than just giving the right answers..." Theory is an abstraction of the real world, not vice versa.

Friday, September 3, 2010

Consistency vs. Accuracy

Here's an interesting footnote from Chap. 3 of Goodman's Introduction to Fourier Optics:
"The fact that one theory is consistent and the other is not does not necessarily mean that the former is more accurate than the latter."
The footnote is in reference to the Kirchhoff diffraction integral which was derived under two inconsistent assumptions for the boundary conditions on the field. Despite these inconsistencies, the theory gives a very good prediction for the diffracted field far from a large aperture.

Kirchhoff's theory is also a good demonstration that mathematical consistency and exactness do not guarantee that a theory makes good predictions or can be used to calculate physical quantities. Experimentation must validate a theory's ability to do so.

Wednesday, September 1, 2010

Osmotic fun

In my readings on how to properly maintain cell cultures underneath a microscope, I came across the entry for osmotic pressure on Wikipedia. In the introduction, a thought experiment is described as such:
In order to visualize this effect, imagine a U shaped clear tube with equal amounts of water on each side, separated by a membrane at its base that is impermeable to the sugar molecules (made from dialysis tubing). Sugar has been added to the water on one side. The height of the water on each side will change proportional to the pressure of the solutions.
Just like my post on a proposed experiment dealing with entropic elasticity in rubber bands, this looks like an interesting and easy demonstration to perform. That is, if I have time to devise the setup (I'm also still working on the entropic elasticity experiment and a demonstration of Schlieren photography for CREOL's student group's outreach program, CAOS).
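As a back-of-the-envelope check on the demonstration, the van 't Hoff relation Π = cRT estimates the osmotic pressure, and Π = ρgΔh converts it into a column-height difference (the concentration below is a made-up example):

```python
# van 't Hoff estimate of the height difference in the U-tube demonstration.
R = 8.314    # J/(mol K), gas constant
T = 298.0    # K, room temperature
c = 1.0      # mol/m^3 (1 mmol/L of sugar) -- hypothetical concentration

pressure = c * R * T             # osmotic pressure, Pa
rho, g = 1000.0, 9.8             # water density (kg/m^3), gravity (m/s^2)
height = pressure / (rho * g)    # equivalent water-column height, m

print(pressure)   # ~2478 Pa
print(height)     # ~0.25 m -- even a dilute solution gives a visible column
```

The surprisingly large height for such a dilute solution is what makes this a good classroom demonstration.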

Thursday, August 26, 2010

Cells = Organisms?

I just finished reading this brief article about the history and (somewhat) current state of cell culture engineering. My academic background has been almost entirely focused in the physical sciences, so I am continuously amazed by the body of common knowledge that exists in the life sciences.

The article states that the methods used to maintain cell cultures are very consistent and reproducible. However, one of the main limitations to maintaining cell cultures over a long period of time is that the individual cells adapt to their artificial environment and begin to take on traits that differ from those of the primary cultures. I was not expecting this; I am used to the idea that organisms adapt to external stimuli. Of course, an adaptation by an organism could be argued to be caused by changes at the (sub)cellular level, but still I am amazed that the individual and basic units of life behave much like the larger organisms they conspire to form, despite their much smaller degree of complexity.

I also found it interesting that massive vats of cultures are used to produce many therapeutic drugs and biochemicals. This too could be considered common sense, but for a humble physicist who remains naive on the subject of biology and medicine it is very much different than the common image of drugs being harvested by dutiful lab technicians from individual Petri dishes.

Sunday, August 22, 2010

I wished to live deliberately

An article in the New York Times about a seven day rafting and hiking trip taken by a few neuroscientists has been making its way around my circle of friends lately. This article in part inspired my previous post and takes a scientific (as opposed to philosophical) look at the relaxing effects of a vacation and time spent away from distractions, namely those caused by technology.

Interesting excerpts from the article and my thoughts:
The study indicates that learning centers in the brain become taxed when asked to process information, even during the relatively passive experience of taking in an urban setting. By extension, some scientists believe heavy multitasking fatigues the brain, draining it of the ability to focus...

Behavioral studies have shown that performance suffers when people multitask. These researchers are wondering whether attention and focus can take a hit when people merely anticipate the arrival of more digital stimulation.
This suggests that some neuroscientists believe that the total amount of mental processing power available during a given time is limited and quantifiable. I interpret the second paragraph as meaning that the brain is capable of performing a limited number of tasks at a time, and when a person is anticipating inputs, the act of anticipation takes away from the available processing power.

The comparison of the brain to a PC might be a dangerous one. Our ideas and preconceptions about how a PC works might cloud our understanding of the actual working principles of the brain.

“To the extent you have less working memory, you have less space for storing and integrating ideas and therefore less to do the reasoning you need to do.”

Neuroscientists quantify this processing power in terms of something called working memory. This could be analogous to RAM in a PC.

Mr. Kramer says he wants to look at whether the benefits to the brain — the clearer thoughts, for example — come from the experience of being in nature, the exertion of hiking and rafting, or a combination.

It is not clear what specific aspects of a vacation can lead to more attentive thought. A parent who has to herd three children through Disney World is not likely to experience the same benefits of a vacation as someone who went hiking for a few days with one or two other adults.

Even without knowing exactly how the trip affected their brains, the scientists are prepared to recommend a little downtime as a path to uncluttered thinking. As Mr. Kramer puts it: “How many years did we prescribe aspirin without knowing the exact mechanism?”

There's really nothing new in terms of the perceived benefits of such a vacation. The real challenge is correlating changes in the brain---electrical activity, blood flow, etc.---to feelings of relief and increased attentiveness.

I also wondered about one particular point while reading the article. The author speaks of performing better, or reaching our "cognitive potential." In what sense is our thinking made better by removing an abundance of digital stimuli? While the quality of any one particular project may suffer if a person is inundated with digital information, could not an increase in total work output make up for it (if such an increase actually exists)?

Friday, August 20, 2010


I'm back from my trip to Hawaii (specifically, Kauai). It was an amazing trip, the first half of which consisted of a hike along the Kalalau Trail to the Na Pali Coast. Before I set out, I established a rule for myself: I would allow my mind to wander and think on any topic freely and without effort. In this way I hoped to let my thoughts cycle through both my conscious and unconscious and eventually settle into some logical structure. Immediately prior to the hike I had been focusing entirely upon my candidacy exam for the better part of two months and had been having difficulty processing any new information. Vacations are a great time to let things settle in one's brain and make room for more knowledge.

So why talk about any of this? I did at least come to one philosophical realization that I think is worth mentioning. I have for a long time felt that the feeling of complete and total relaxation that accompanies camping after a long day of backpacking is made possible by the extreme effort that a backpacker puts into a hike. In other words, to truly relax one must really work hard. On this particular hike, I realized that there is a reciprocal relationship here: to do quality work, one must really relax. Like I said above, if one's thoughts aren't allowed to settle, then one can't really make the best of his or her time spent working.

Wednesday, July 28, 2010

You spin me right round

Ben Goldacre always has interesting things to say about the science behind health care and the pharmaceutical industry. In one of his recent posts he writes about a research project that examined 72 trials with negative results, i.e., trials in which the investigated drug or treatment did not produce the desired effect. Of these trials, he notes that only 9 gave any figures in their abstracts and that 28 gave no numerical results at all.

What was in the reports was "spin," or the authors' attempts to cast the results in a positive light. To prevent this, he says, trials are supposed to be registered before they are performed so that their intended purpose cannot be changed. Additionally, there are guidelines that dictate what must be included in a report. These rules, however, are more akin to suggestions, since they are not enforced.

Can such a system be implemented in the physical sciences? I don't think so. Often, we're actually learning about the topic as we proceed through the research. No amount of preparation can allow us to establish a hypothesis sufficient for inclusion in a detailed report before we undertake the experiment. Hypotheses, I feel, are best constructed concurrent with an experiment. And as for report guidelines? Well, anyone who has had to deal with reviewers' critiques of their papers will tell you that there is rarely any consensus about what makes a good report.

I suppose that one could argue that a grant proposal tries to satisfy this purpose, but I can't say that I'm experienced enough to comment one way or another on it.

Thursday, July 15, 2010

On science and faith

Here is an interesting article from Talking Philosophy Magazine. The author discusses the similarities and differences between religious faith and scientific faith. I believe it is often taken for granted that much of what we know about the natural world does not exist in the strict sense; all that we truly know is the outcome of an experiment. Theoretical models, such as the concept of protons and electrons or the theory of gravity, create entities or concepts that don't actually exist in the same way that a ball or a dog exists. They are simply mental constructs used to explain repeated experimental outcomes and predict future behavior.

Of course, one can always argue that these constructs are "true" in the sense that they are predictive and can be tested as opposed to religious concepts. But I'm not so certain that their predictive powers and testability prove that they exist. "Truth" and "existence" seem to be two separate ideas here. So, in some sense, all of us, whether religious or not, believe in things that don't exist.

Thursday, July 1, 2010

Fun with thermodynamics

Admittedly, the thought experiment I'm about to describe is easily explained by thermodynamics. Despite this, I puzzled over it for a while, since it is very counter-intuitive, at least to me.

Suspend a weight from an elastic band so it is stretched (only slightly) beyond its equilibrium point. Now, heat the band with a hair dryer. Does the weight move up or down?

Give up? It moves upward. Do you know why?

I plan to verify this experimentally at some point.

Sunday, June 27, 2010

If you swim after eating, your stomach will cramp

As a student of the physical sciences, the importance of experimentation for determining the true principles behind many natural phenomena is impressed upon me on a near daily basis. However, I am becoming increasingly convinced that carefully designed experiments are even more important for the social sciences.

Within the social sciences, there are (to my untrained eye at least) few theories to predict the behavior of individuals or groups. Furthermore, their behavior is often influenced greatly by the interests of other groups. For example, McDougall's Born to Run contains a chapter about the drastic increase in foot and knee injuries that followed the development of the athletic shoe in the 1970s. Despite an enormous amount of evidence that running shoes are the cause of many running-related injuries, companies such as Nike create a "false truth" for the public: the more cushioned (and the more expensive) a running shoe is, the better it is for your feet and knees. Though this is a misconception perpetrated by a company in the field of sports medicine, the idea carries over quite easily to the social sciences (see Levitt's Freakonomics). Thus, common wisdom in the social sciences can be attributed to a lack of predictive power and to conflicting interests.

The importance of these fields to society is enormous when compared to the physical sciences. After all, if the common wisdom is wrong in the physical sciences, the general public is likely to be affected by not having a new iPod or smart phone until the misconception is discovered and the science is applied to new technologies. However, if misconceptions exist in the social sciences, large groups of people could go without health care, school curricula could be poorly engineered by state governments (New Math, anyone?), and governments could be buried by incredible deficits.

Thus, carefully designed and controlled experiments in the social sciences, and really any science, are important for everyone. Without them, the truth might remain buried in speculation and deception.

Friday, June 4, 2010

Like cures like?

Yesterday some fellow CREOL students and I visited a high school in Sanford to discuss our roles as graduate students and to demonstrate to the students some basic scientific principles behind our research. The high school is a special school administered by Seminole County for students who have been expelled from normal public high schools. The idea (at least as I understand it) is that placing students with similar behavioral problems in the same setting will allow them to receive more attention from teachers, since they are no longer overshadowed by the well-performing students. Of course, the obvious objection to such a school is that packing many students with disciplinary histories into the same classroom will prevent everyone from learning effectively, since the teachers will struggle to maintain control.

After speaking with one of the teachers, the consensus seemed to be that the system was working and that the students were more eager to learn (on average) than they were at a normal institution. Specifically, she cited the personal attention that the students receive as a major cause of their better performance. Of course, the school still has a wealth of disciplinary issues, but if a few students come out better for it, then I suppose the school has served some good utilitarian purpose.

Keeping with a utilitarian discussion, it would be worthwhile to consider the cost per student that is paid by the government (and indirectly by taxpayers) to run such a school. Suppose only a small percentage of the students actually perform better academically at this school after having been expelled from a normal public high school. Would the additional costs of running this school justify the improvement in the education of this small percentage?

To be honest, I'm not quite sure what my opinion is on the matter. However, I sincerely respect the teachers, both here and at all schools, who have to deal with both the duty of educating the youth and the need to maneuver through an often hostile bureaucratic system of school administration.

Monday, May 31, 2010

Language leads to scientific understanding

I am currently working on my Ph.D. candidacy report. The topic is on optical sensing and manipulation of cells. While brainstorming for the abstract, I wrote the following expression: "Organisms are organized hierarchically..."

Of course I immediately realized that "Organisms are organized" sounds redundant. But it did make me notice the connection between the two words and their common root. Life is built from interdependent structures that form higher levels of organized complexity; from a reductionist standpoint, it is a system of organization built on lower-level systems of organization. Hence, we have the word organism.

This little personal epiphany probably impresses only me, but I think it serves as a reminder to everyone that language can very often lead to a deeper understanding of a topic. We need only look to the literal meaning behind an object or phenomenon's name to connect it with a more familiar idea.

What's also interesting about this particular example is that the meaning of the word "organism" suggests that, when the term was coined, people already understood that lifeforms possess some sort of hierarchical structure. The word's origin dates to around 1650. Not surprisingly, this is roughly the same time that Robert Hooke began looking at cells through microscopes. So not only can the literal meaning of a phenomenon's name lead to a more intuitive understanding of the phenomenon, but so too can the name's etymology place the idea within a historical context.

Friday, May 28, 2010

One more note about numerics

A friend of mine pointed out that numerics are very useful in fabrication since they allow one to predict the behavior of a material or device without having to perform costly trials in the lab. I completely agree that this is another strength of simulations.

A demonstration of this idea can be found in this Nature paper, where the authors performed calculations of a crystal's surface free energy before fabricating it. These calculations enabled them to develop titanium dioxide crystals with high surface reactivity for use in solar cells and photocatalysis. Without the numerics, countless experimental trials would likely have been required to grow the crystals.

(I came across this paper via Ross McKenzie's condensed matter blog)

Thursday, May 27, 2010

The purpose of numerics

Optics is a field to which numerical simulations contribute greatly. For example, Maxwell's equations can be solved analytically for only a very small range of geometries. Routines such as the finite-difference time-domain (FDTD) method, the coupled dipole approximation (CDA), and the T-matrix method have been developed to solve specific problems in the propagation and scattering of electromagnetic waves.
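To give a flavor of what such routines actually compute, here is a minimal sketch of a 1-D FDTD update loop in vacuum. This is my own toy construction for illustration, not any production algorithm; real solvers add absorbing boundaries, material models, and a careful stability analysis.

```python
import numpy as np

# Bare-bones 1-D FDTD in normalized units (a toy sketch, not a production solver).
# With the Courant number c*dt/dx = 1, the vacuum update in 1-D is exact.
nx, nt = 200, 150
ez = np.zeros(nx)   # electric field on the Yee grid
hy = np.zeros(nx)   # magnetic field, staggered half a cell from ez

for n in range(nt):
    # update H from the spatial difference (curl) of E
    hy[:-1] += ez[1:] - ez[:-1]
    # update E from the spatial difference (curl) of H
    ez[1:] += hy[1:] - hy[:-1]
    # soft source: inject a Gaussian pulse at one grid point
    ez[50] += np.exp(-((n - 30) / 10.0) ** 2)

# After the loop, the injected pulse has propagated away from the source point.
print(np.abs(ez).max())
```

Even this tiny loop hints at why the method is popular: the update rules are local differences on a grid, so complicated geometries enter simply by letting the material coefficients vary from cell to cell.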

I find that numerics such as these serve a large number of purposes in research. Some groups publish entire journal papers about the design of a certain algorithm. Other papers deal with numerical simulations of a phenomenon, oftentimes without carrying out a physical experiment because of limits on money, time, and practicality. Still others use numerics as a check against experimental data.

Are all of these good reasons to use numerics in research, or are some better than others? I ask because I tend to give more credit to papers that back up a physical experiment with numerics than to papers containing only a simulation study. In the second example in the paragraph above, I mentioned that some papers perform numerical studies because a physical experiment is not practical. If, however, the goal of science is to discern the workings of the natural world, then I think these types of studies contribute little to our body of scientific knowledge, since no knowledge about the real world is gained. A simulation is, after all, subject to the bounds and constraints of a model, and models are often far from accurate representations of the world.

If, however, we (the scientific community) only assign merit to studies that use numerics to verify experimental data, then it seems to me that the role of simulations becomes greatly diminished to the point that they are unnecessary. After all, data is presumably collected by honest and accurate means. Any other reproduction of the data, numerical or otherwise, could be seen as mere redundancy. Under what context then, should numerics be used in research?

My personal opinion is that they are fantastic aids in helping one understand a problem and predict outcomes from experiments. This, I think, is a subtle but important point. A good scientific theory predicts the results of an experiment; a good simulation will do the same. Under this context I think simulations can be put to good and valid use within research.

Saturday, May 8, 2010

Guidelines for writing, pt. 1

I've spent some time thinking about it, so I suppose I had better get started. These are the first few points I've thought of for my personal list of writing guidelines for a scientific paper. I've tried to focus on the issues that I find most problematic when reading other papers. Some are technical while others are cosmetic. Of course, I can't claim that the following suggestions are without their own flaws, so judge them critically and apply them in any manner you see fit. In no particular order:

  1. Figure captions should be able to stand alone. Do not include abbreviations or references to the text.
  2. Make all your points in the introduction, figures and their captions, and conclusion. Use the body of the text to add detail and repeat your main points.
  3. Eliminate redundant and unnecessary words. My favorite example from papers on light scattering is the term "material system" for describing matter that interacts with light. Either "material" or "system" can work, but using both just takes up space.
  4. Do not list the paper's section titles and their descriptions as is commonly done in the last paragraph of the introduction. Almost every experimental paper (at least in optics and physics) follows the same format of introduction/theory/experiment/results/conclusion, so an outline of the paper is rarely needed by the reader.
  5. Use descriptive titles for sections. Compare "Theory" to "Model for Partially Developed Speckle."
  6. Use one tense throughout. I prefer past tense since you are reporting what procedure you followed and the results you obtained to the scientific community, not experiments that you are currently doing.
  7. Do not refer to previous publications to describe experimental setups.
Feel free to add your own. I will be adding more in time.

Tuesday, May 4, 2010

But that makes it sound obvious

There is a brief article in Nature Physics addressing the issue of poor writing skills in scientific journal papers and other communications. The article mentions that technical writing is a skill students are assumed to possess at the beginning of their research careers. I think it's fairly obvious that this is very often a false assumption. The article also mentions the common misunderstanding among scientists that verbose and complicated sentences convey a deeper understanding on the part of the authors. In reality, however, this practice only befuddles readers and acts as a barrier to the proliferation of one's research.

I find it interesting that several experts suggest a linear style of writing journal papers in which ideas are presented in series. This reminds me of how a manual is written. I suppose journal papers should ultimately communicate the steps that were taken to obtain a given set of data and their corresponding analyses.

The article can be found here.

Wednesday, April 14, 2010

One last comment

My advisor offered some advice to me recently about collaborative work, and I think it's the toughest advice to follow for a stubborn scientist (myself being very much included in this category). He said that you need to listen to the experts in the other fields and not assume that you can know everything.

Monday, April 12, 2010

Please give my draft a meaning

I often find myself reviewing drafts of manuscripts and presentations for my colleagues. Usually I end up reviewing a first draft. Consequently, the writing is almost always incoherent and does not feel like a "whole" document. I fix many grammatical and mechanical flaws but find myself unable to critique the most important part of the document: the overall message itself. I believe it is precisely because I am reviewing a first draft that I cannot perform this essential review function.

A major emphasis of my composition classes in college was that writing is a long process with many steps. A document will go through a large number of changes and will only superficially resemble any drafts that led to its completion. Proper placement of peer review in the writing process is essential to maximizing its effectiveness; if utilized too early, it reduces the quality of a finished work and costs valuable time.

Tuesday, April 6, 2010

Cosmic collaboration

This month's issue of Nature Physics contains an interesting article about astronomy's move from the realm of individual efforts to the collective findings of large teams of scientists. While the article is cautionary in tone and recommends that astronomers reground themselves in the nuances of experimental astronomy, I did find some interesting comments that complement my previous post on collaborative research.

For one, the article criticizes a recent trend in astronomy papers whereby authors neglect error bars or even the data upon which their conclusions are drawn. The reason, as I understood the article, is that the scientists are largely unfamiliar with how the data were collected or with what can contribute to the error in general. This is a risk of collaborative research projects more broadly: eventually, material will arise from a party with no direct interest in the paper being written, and as a result this material will slip by the critical eye of the authors.

Another issue I see is that, according to the article, students and post-docs have become mere data slaves. I think that collaborative efforts can in some way reduce scientists to a tool for performing the mechanical tasks that belong to their field of specialization. I think that this dehumanization of the role of a scientist can squelch creativity and ruin the spirit of scientific pursuits.

The article I am referring to can be found here.

Sunday, April 4, 2010

Collaborative efforts

This weekend was CREOL's annual Optics Day, an educational outreach event for students and adults of all educational levels. The purpose of Optics Day is to educate the public about the benefits of optics and photonics-based technologies and how their lives are shaped by the science that we do. In addition to the usual day-long event, the SPIE student chapter hosted a small symposium for graduate students from nearby universities, such as Florida Atlantic University and the University of North Carolina, Charlotte.

During the panel discussion at the symposium I asked the speakers a question about collaborative research efforts. All the panelists agreed that diversity and interdisciplinary research are very important to science today, especially given that many fields have become highly specialized and esoteric to scientists outside of them. However, I was interested to know what all the researchers in a collaborative effort need to have in common to produce good research.

Dr. Michael Bass, from CREOL, suggested that personalities have to be compatible. This includes work ethic, vision, and the individual desires of the collaborators. Dr. Alex Vitkin, from the University of Toronto, suggested that possessing knowledge of a wide range of topics, such as that gained from a physics education, is very important for communicating with the other researchers. However, he cautioned that we cannot be generalists; we must specialize in one area. Otherwise, we risk not being able to contribute to the effort.

It seems to me that in order to contribute to interdisciplinary research, I might wish to have one niche area that I can claim a specialization in.

Sunday, March 28, 2010

A hierarchy of concepts

Richard Feynman, in his Lectures on Physics, had a habit of discussing both the philosophical and practical issues of the science that he taught. One such issue was the idea that waves could possess particle-like properties, such as momentum and position. As he notes in Vol. 3 of the Lectures,

"'Only measurable quantities are important to physics.' This is false. We need to extend current concepts to unknown areas and then test these concepts. It was not wrong for classical physicists to extend their ideas of momentum and position to quantum particles. They were doing real science, so long as they then checked their assumptions."

What I believe is of value in this statement is the idea that understanding new phenomena is achieved by applying concepts from already well-understood processes and things. So what if a wave didn't traditionally possess momentum or a position? These two concepts (waves and particles) could at least be used to further our understanding of quantum entities, which possess both wave and particle-like properties but do not act entirely like one or the other of these classical constructs.

The same idea, I think, can be applied in teaching. First find a concept that students are familiar with, then show how this concept can be extended to describe a new phenomenon. However, to be self-consistent and complete, a discussion of how the concept fails to completely describe the phenomenon is required as well. Momentum and position obviously can't describe quantum interference of particles. In this manner, a knowledge of the world is built up from a patchwork of prior understanding.

I found this statement particularly enlightening:
"When a data set is mutilated (or, to use the common euphemism, ‘filtered’) by processing according to false assumptions, important information in it may be destroyed irreversibly. As some have recognized, this is happening constantly from orthodox methods of detrending or seasonal adjustment in econometrics. However, old data sets, if preserved unmutilated by old assumptions, may have a new lease on life when our prior information advances."
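The warning in that quote is easy to demonstrate numerically. Below is a small toy example of my own construction (not from the quoted source): the slow drift in the data is itself the signal of interest, yet routine linear detrending erases it irreversibly.

```python
import numpy as np

# Toy example (my own construction): "filtering" by detrending destroys
# exactly the information a later analysis might have needed.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
drift = 0.5 * t                       # the slow trend IS the signal of interest
data = drift + rng.normal(0.0, 0.1, t.size)

# Orthodox detrending: fit a straight line and subtract it.
slope, intercept = np.polyfit(t, data, 1)
detrended = data - (slope * t + intercept)

# The drift is gone for good; nothing in `detrended` can recover it later.
print(abs(detrended.mean()))   # residuals are centered on zero
print(detrended.std())         # essentially only the noise remains
```

If instead the raw data had been archived, a later analyst with better prior information could model the drift directly, which is exactly the "new lease on life" the quote describes.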

Sunday, March 14, 2010

A big theory for tiny things

Much of the theory for plasmonics is derived from Maxwell's electromagnetic theory, a theory which works incredibly well for systems that are very large compared to both the atoms that make up the media and the wavelength of the light. However, plasmonics deals with phenomena that occur in nanoscale devices, which are often smaller than the wavelength of the light and can contain only a few thousand atoms. In the case of surface-enhanced Raman scattering, single-molecule detection is even possible.

I wonder if researchers in plasmonics are ever unsettled by the fact that they are using a theory based on the concept of effective material properties (permittivity and permeability) to describe media that aren't quite effective materials. This state of affairs suggests that we should expect deviations from the predictions of Maxwell's equations in plasmonics experiments, but does this take away from the credibility of the results of these experiments? Consider this question: how does a researcher sell her novel plasmonics device or experiment to reviewers and grant committees if it is expected not to work as, well, expected?

Taking a more positive viewpoint, perhaps this is what makes a field exciting for experimentalists. Here is a phenomenon for which a complete theory does not exist. Experiments are needed to establish the working principles from which a theory can evolve.

Monday, March 8, 2010

Guidelines for writing

I've been thinking a lot about effective writing and communication within academia the past few weeks, which is why this post from Ross McKenzie's blog grabbed my attention. He's also posted other links to guidelines in the past, such as this one for writing good Physical Review Letters (PRL) papers.

I think that it would be a good exercise to create my own list of guidelines when writing papers. Or, for that matter, it might be a good exercise for anyone in academia to do so. Having these ideas down in print, as opposed to remaining vaguely defined in my mind, might make them more concrete and allow me to better define what I mean by the quality of a paper.

Saturday, March 6, 2010

A purpose for every project

A popular term in the field of optical sensing right now is "task-specific sensing." It is a system design paradigm in which the relationships between a system's components are optimized towards the purpose of the system. This is opposed to the idea of making the components perform as efficiently as possible on their own. For example, a system that only needs to detect an object in its field of view does not need to have a lens design that reduces aberrations and increases spatial resolution. Instead, a scene simply has to be imaged onto the sensor in a way that facilitates efficient image processing by the software. In other words, the relationships between the optics, electronics, and software should be optimized towards the goal of detecting an object, not seeing it clearly.

Nature has been performing task-specific design for a long time. The compound eye of a fly has very poor resolution since each bump on the eye acts as a single lens that couples to a sensing structure. Fortunately for the fly, it does not need to see well to find food. It does however need to avoid predators if it wishes to remain alive. The fly's eye has an extremely large field of view so that it can see things such as flyswatters coming at it from many different angles.

The task-specific paradigm has also led me to think about how to value research projects in academia. There is a very common notion that any research is good research. However, if a project creates some device or accomplishes some goal with no particular application in mind, then the idea of task-specific sensing might suggest that the simplest and least costly approach was not to have done the research at all since no need for it existed. I'll pose the question like this: which should come first, the need for a researched solution, or the solution itself?

Of course, the argument exists that research performed without a particular need may eventually find its uses, but I think that my question is still a valid one to ask before placing value upon research.

Sunday, February 28, 2010

A few questions about presenting

I was fortunate enough to speak with Dr. Jean-luc Doumont over dinner Friday night following a talk he gave on effective presentations at CREOL. The man is a truly gifted speaker, and if you are at all interested in improving your technical (or otherwise) communication skills, I highly recommend that you read his blog or look at his book, though I have not read it. I have been following his blog for a short while now and have to admit that it is one of the reasons I decided to start writing my own. More precisely, I am trying to practice the methods he espouses through my own writing.

What I found most interesting from our conversation is his belief that his principled approach to presenting is universal, i.e. it applies to many people in many occupations and settings. Given that the culture and content underlying a business or academic field can be drastically different from another, I find it very intriguing that the same skill set can be effectively employed in different settings.

For example, a presentation on new data at a conference on astrophysics will surely contain very little information that is pertinent to topics discussed at a meeting of marketing executives, yet his methods for communicating the presenter's message can successfully be applied in both situations.

This may not seem too surprising since communication is a basic function of society. It does however bring a few questions to my mind:
  1. Is there more than one structured approach to accurately conveying information to an audience? If so, does it relate to Dr. Doumont's methods, and to what degree does it overlap with them? 
  2. Is there a better way to do it, or has Dr. Doumont described the best possible way to communicate a message to a large number of people? 
  3. Is his approach only applicable to writing and speaking, or does it apply to multimedia and mass marketing as well?

Saturday, February 20, 2010

Hello, world!

Or is it "Hello world!"? Or even, "hello world!"? Is the exclamation mark too much?

I hope that this is the first of many posts in my very first blog. What shall I write about? Honestly, I'm not too sure right now, but let's see what comes of this post...

I tend to be a hyperactive thinker. I notice a problem or oddity and turn it over in my head, much like I am untangling some knot. First I test this strand. Is it loose? No. Let me examine the other side. Ah, here's an end. I may give up for a time and unleash my subconscious so that I can return with a new approach. I may even lose interest with the issue entirely.

The key to solving a problem, as any good scientist knows, is to keep a careful record of observations, failed attempts, and insights into possible solutions. I hope for this blog to serve as my lab notebook to the less technical side of a scientific life. Think of it as a collection of my conclusions to the questions raised by the issues I encounter in my life as a scientist.

And for whom am I writing? Myself, primarily. With these keystrokes I begin sorting and shuffling what I have learned (or learnt?) during my years as a curious individual in an even more curious world.