Wednesday, March 28, 2012

Advisors should allow students to draw conclusions from their work

A question has been on my mind concerning ownership of an academic work's message, by which I mean the presentation of its conclusions and the impact the work may have on its field. The division of labor between a student and his or her advisor is typically as follows: the student does the manual labor that goes into a project, writes the bulk of the manuscript, makes the graphs, and in general does the "dirty work." The advisor provides the ideas and problems, offers guidance when the student runs into difficulties, and supplies the lab space and equipment that the student needs.

So who exactly should have more of a say in selling the work? Advisors (in my opinion) often overreach in their conclusions, owing both to a misunderstanding of the details of the work and to the need to acquire grant money. Students, on the other hand, typically undersell the work because they are too concerned about the accuracy of the details, even when the big picture remains correct. Furthermore, the research usually aligns with the advisor's career work, not the student's.

Though I am biased, I'm inclined to side with the students, since their position is more in line with the scientific spirit: students will more often argue honestly for their work and let their peers decide its value. It is also crushing to a student's morale to be shut down by the advisor at this stage of the process. Giving students little to no say in the presentation makes them feel that their creative faculties are unwanted and that they are merely performing mechanical work. I think this problem stems from advisors commonly viewing academia as a career rather than as a means to pursue science.

Note: This post is based partly on a discussion I've had with fellow students recently. All of us were complaining that we spend large amounts of time writing the drafts of manuscripts only to have the introduction and conclusions completely rewritten by the advisors. One friend quipped, "If the advisor wanted it written his way all along, why didn't he write it in the first place?"

Thursday, March 22, 2012

A point about negative temperatures

Negative temperatures occur when the derivative of entropy with respect to a system's energy is negative, i.e. when the entropy decreases as energy is added. As Daniel Schroeder points out in his Introduction to Thermal Physics, this can only occur when the total energy a system can hold is bounded, as in a two-state paramagnet. In more common systems, such as a gas in a container, the energy the system can absorb is practically unlimited, which is why negative temperatures are not observed in them.
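For concreteness, here is a small numerical sketch of this (my own illustration, not Schroeder's): for N two-state spins with n in the excited state and energy E = nε, the entropy is S/k = ln C(N, n), and the temperature follows from 1/T = dS/dE. The slope of S turns negative once more than half the spins are excited.

```python
import math

# Entropy (in units of k_B) of N two-state spins with n excited:
# S/k = ln C(N, n), computed with log-gamma to handle large N.
def entropy(N, n):
    return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

# Temperature (in units of epsilon/k_B) from 1/T = dS/dE with E = n*epsilon,
# estimated with a centered finite difference in n.
def temperature(N, n):
    dS_dE = (entropy(N, n + 1) - entropy(N, n - 1)) / 2.0
    return 1.0 / dS_dE

N = 1000
print(temperature(N, 250))  # fewer than half excited: T > 0
print(temperature(N, 750))  # more than half excited: T < 0
```

The entropy peaks at n = N/2; past that point, adding energy removes microstates, so dS/dE, and hence T, is negative.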

I think much of the confusion in learning thermodynamics comes from the fact that the common-sense notion of temperature differs sharply from its thermodynamic definition. Other ideas, such as force or work, do not contradict common sense quite so much and are more readily adopted.

Finally, Schroeder references this article for an experiment in which negative temperature was observed.

Wednesday, March 21, 2012

A focus on statistics

My biology colleagues know a lot about statistics. They routinely perform hypothesis tests, run ANOVAs, and apply Box-Cox transformations to be sure that the residuals of a fit to data are normally distributed. Ask many physicists and chemists about these tools and you'll likely be met with a blank stare. It could be argued that this points to a problem in physics and chemistry education, and to some extent I agree that we (physicists) are not well trained in the art of proper data analysis.

On the other hand (and based on my limited knowledge of the biological sciences), I wonder if biologists place too much emphasis on statistics. If a student's first thought in a data analysis is which type of regression is valid for the data, then I fear they may miss obvious trends that would answer that question for them.

My belief is that data analysis is best approached intuitively first and formally second. I'm also afraid that a biology curriculum that focuses on the technicalities of statistics may under-emphasize this "human aspect" of analysis. Likewise, there is a point where rigor must be included and I see many physical scientists unable to provide it.

Again, these thoughts are based on my own limited understanding of the biological sciences. There are biologists, physicists, and chemists who excel in all areas. But differences in curricula and courses may bias us towards one aspect or another when in reality good scientists are capable of both.

Monday, March 12, 2012

What is the origin of 1/f noise?

Though I've come across it numerous times, only today did I question the physical origin of the seemingly ubiquitous 1/f noise. I was both surprised and excited to learn that there is, as of now, no general theory of 1/f noise; it has been observed in signals of many kinds but eludes an interpretation independent of the signal's source.

In contrast, white noise and Brownian noise have well-understood theories and are related to each other by integration and differentiation: integrating white noise yields Brownian noise, and differentiating Brownian noise recovers white noise.
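This relationship is easy to verify numerically. Below is a quick sketch (my own, with arbitrary parameters): cumulatively summing white noise produces Brownian noise, and the fitted log-log slope of the power spectrum goes from roughly 0 to roughly -2. Shaping the white spectrum by f^(-1/2) gives the intermediate 1/f case.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
white = rng.standard_normal(n)  # flat power spectrum: S(f) ~ f^0
brown = np.cumsum(white)        # integrated white noise: S(f) ~ f^-2

# Approximate 1/f ("pink") noise by shaping the white spectrum so that
# the power falls off as 1/f.
spectrum = np.fft.rfft(white)
f = np.fft.rfftfreq(n)
spectrum[1:] /= np.sqrt(f[1:])
pink = np.fft.irfft(spectrum)

def spectral_slope(x):
    """Fit log10(power) vs log10(frequency) at low f; return the slope."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(len(x))
    mask = (freq > 0) & (freq < 0.1)  # stay where the power law holds
    return np.polyfit(np.log10(freq[mask]), np.log10(psd[mask]), 1)[0]

print(spectral_slope(white))  # near 0
print(spectral_slope(brown))  # near -2
print(spectral_slope(pink))   # near -1
```

Of course, the spectral trick generates a 1/f signal but says nothing about why so many physical sources produce one, which is exactly the open question.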

There are some references relating 1/f noise to self-organized criticality, but I'm going to refrain from commenting on this until I read more.

Thursday, March 8, 2012

CVs and résumés—when am I an expert?

I'm never quite certain when it's appropriate to add a particular skill or specialty to my CV or LinkedIn profile. For example, I've been working on stochastic differential equations lately and can now simulate them for a few special cases. Does this mean that I can honestly state that I have experience with stochastic differential equations on my CV?
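For what it's worth, the "special cases" I have in mind are things like the following Euler-Maruyama sketch of an Ornstein-Uhlenbeck process (the parameter values here are arbitrary, chosen only for illustration):

```python
import numpy as np

# Euler-Maruyama integration of the Ornstein-Uhlenbeck SDE
#     dX = -theta * X dt + sigma dW.
def euler_maruyama(theta, sigma, x0, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * dW
    return x

rng = np.random.default_rng(1)
path = euler_maruyama(theta=1.0, sigma=0.5, x0=0.0, dt=1e-2,
                      n_steps=50_000, rng=rng)
# After a burn-in, the sample variance should be near the known
# stationary value sigma^2 / (2 * theta) = 0.125.
print(path[5_000:].var())
```

Being able to write and sanity-check something like this is the level of experience in question.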

Logically speaking, I do have some experience, so the answer to the above question is "yes." However, I still have some reservations since I'm only familiar with such a small portion of the subject that I feel it would be dishonest to claim them as a job skill. The issue, then, is whether I possess a degree of competency with the subject that allows me to ethically place it as a specialty area on my CV.

I can think of a few ways to justify doing so. The first—and I know that this is a fallacious argument—is that many people add skill sets to their résumés or CVs that they are not entirely proficient in. If I am to compete against others for a job, then I should play by the same rules. Though this argument is logically flawed and morally questionable, it is simply one rule of the game.

Second, and much more sound, is that some areas of expertise, like stochastic differential equations, contain so much material that few people can claim that they are intimate with all of this knowledge. I could argue that my brief exposure to them has provided me with the resources to respond appropriately to a problem, i.e. I am now able to research its solution in a collection of references that I've already compiled. Without the prior experience, I would not have these resources and so I'm justified in claiming this subject as a skill. To put it another way, it's not so much the content of the subject matter but rather the ability to find appropriate solutions within its toolbox that counts. Of course, this detracts from individuals who really are experts in the subject matter, so this argument is morally questionable as well.

One final solution is simply to sort my skills by degree of competency, such as expert, proficient, and familiar. This lessens any moral ambiguity because I may still list skills with which I have limited experience. I have taken this approach on my CV and feel quite satisfied with it.