Thursday, December 29, 2011
Yes, I realize that I missed a regular post day. However, I've been home drinking lots of coffee and beer and eating creamed cabbage, creamed potatoes, creamed, well, everything. So, I humbly apologize that my thoughts have shifted to gastronomy this week. I should mention that I will likely miss next week's post as well.
In the meantime, since I have been thinking much about food, I direct your attention to the Maillard reaction, one of the most important chemical reactions in cooking! If you like toast, roasted vegetables, or browned steaks (I'm vegetarian, but fondly remember this part of my past), you have this non-enzymatic reaction to thank.
Happy holidays!
Wednesday, December 21, 2011
The scientific method is not universal
I've often heard others quip that physicists worry too little about the details, whereas chemists worry too much. Recently I've come to better understand the attitudes that give rise to statements such as these.
If you've read other posts on this blog, you've probably realized that I most closely associate with experimental physicists. When I approach a problem, I form a model in my mind that makes intuitive sense. Then, I make the model more rigorous through pictures and equations. Throughout this step, I take extra care to ensure that the model parameters can be easily measured, an approach which no doubt adds a certain flavor to my models. My experiments then become realizations of the model to confirm its predictive power. Sometimes, modeling occurs after an experiment, and, almost always, the process jumps back to an earlier step but proceeds with more refinement.
What I've learned to appreciate is that this process flow varies between individuals in different scientific fields. For example, theoretical physicists don't place as much emphasis on how easy it is to measure a model's parameters, but their theories often extend over a wider range of phenomena. As another example, field biologists must take much more care in preparing an experiment than a physicist in the lab due to costly resources, limited time, and small sample sets. All of these factors leave their mark on the steps in the scientific method.
This understanding is valuable because multidisciplinary research is becoming vital to solving many scientific problems. Before we can help each other, scientists must learn to appreciate and understand our differences; otherwise, we'll never take proper account of the details.
Tuesday, December 20, 2011
A "wow" moment
Every once in a while I come across a scientific image and say, "wow." It happened yesterday when I saw the image from this article in last Wednesday's Nature (the image should not require a subscription).
The full image is from this article and compares the size of a tiny, parasitic wasp to two single-celled paramecia.
Wednesday, December 14, 2011
Harmful errors in science journalism
Popular science reporting is incredibly important for making current research known to the lay public in a concise and simplified manner. It often acts as the go-between for channeling information from scientists and engineers to the people who could benefit from the research. Unfortunately, translating new concepts from the jargon of specialists to untrained individuals poses challenges, and often the science in these articles is erroneous. This is a rather well-known problem and the reason why I read such articles with skepticism.
Two different friends recently sent me this article from MIT News reporting on the development of a camera for generating "Trillion-frame-per-second video." Armed with my usual dose of skepticism, I glanced quickly through it and the accompanying YouTube video. I think the significance of the work is a bit overstated [1], but the article sufficiently addresses its limitations and applications. Unfortunately, in describing the operation of a streak camera, the author Larry Hardesty notes that
"The aperture of the streak camera is a narrow slit. Particles of light — photons — enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving photons more than it does early-arriving ones."
Electric fields do not deflect photons; they deflect moving electrons. What is likely happening inside the camera is that photons scattered from the bottle strike a photocathode, which ejects electrons. These electrons are then deflected by the time-varying field within the camera to different points on the detector. Electric fields can be modulated with RF equipment up to a few hundred GHz, which roughly corresponds to a few tenths of a trillion oscillations of the field per second. I believe that this is how the quoted number was obtained.
So what's the big deal? The point of the article, after all, is to inform a general audience about a newly developed camera—the details of its operation should not matter. Here are a few reasons why errors such as these are harmful:
- Many people who read these types of articles are students or young people who aspire to be scientists. Providing incorrect information hurts their education by establishing false ideas in their minds.
- It's dishonest. Whether the author knew he had made a mistake or not, the article should have been reviewed by one of the scientists before its publication [2]. Eventually, people could learn to associate dishonesty with science and this is clearly undesirable.
- It's distracting and draws attention away from the article's main message. I'll admit that I had made up my mind that the research was flawed after I saw the error. Only a second, careful reading revealed that nothing about the research was in error.
[1] It can only image processes that are stationary over at least one hour of acquisition time. For example, it cannot image a turtle that walks through its field of view. Dr. Velten's quote at the beginning of the article might be placed in better context by noting that while few things are too fast for the camera, many things are too slow for it.
[2] I was once quoted several times in an article about my research that I had not realized had been published until I chanced upon it while Googling my name, nine months after it appeared in print. I received no word whatsoever from the author that it had run in the Orlando Sentinel, and I was misrepresented as having stated that our work existed only in theory. If the author had asked me to review the article prior to its publication, this error would not have been made.
Monday, December 12, 2011
An optical method for finding exoplanets
This morning I read a 2005 Optics Letters paper entitled "Optical Vortex Coronagraph" that described an optical system for detecting exoplanets orbiting a star that may be up to 10^8 times brighter than the planet's reflected light.
The system is detailed below. In a traditional coronagraph (i.e. one not employing a vortex phase mask), the mask in focal plane FP1 is an opaque block of very small angular extent. Because the image of the star that the system is pointed at is formed in plane FP1, the star's light is removed from the final image by this mask. The Lyot stop in plane PP2 then blocks the starlight that is diffracted by the mask. The intensity collected in plane FP3 is therefore dominated by any point source near the star, e.g. an exoplanet.
What is not clear to me is why replacing the opaque block in FP1 with a vortex phase mask improves the performance of the coronagraph. Mathematical arguments are presented, but I find an intuitive explanation lacking.
Wednesday, December 7, 2011
How best to view the internet as a learning tool
I work on some projects that require knowledge of basic biology, such as cell structure, biochemistry, and laboratory technique. However, I was trained as a physicist and engineer, and, as a result, have had an extremely limited education in the biological sciences. For example, my last biology class was anatomy during my junior year in high school.
The internet has been essential in bringing me up to speed in these topics. I've put resources such as MIT's OpenCourseWare and the independent OpenWetWare to good use. Companies such as Invitrogen provide valuable tutorials and explanations on laboratory practices as well. The best part about these resources is that I can find exactly the information that I need to know when I need to know it.
I believe that very few doubt the usefulness of the web as a learning tool, but how to use it as a tool is certainly a topic of debate. Based on my own experiences, I think that internet learning is best used as an independent collection of bits of knowledge that are accessed as needed.
Let's break this definition down into parts. By independent, I mean that the value of internet resources is determined by the individual who needs to know something. A catalog of optical parameters of semiconductor materials will likely serve little purpose to a field biologist. The downside to this is that the web must contain an exhaustive amount of knowledge to be useful to everyone. If there's a possibility that someone may wish to know something, then it must be contained already on the web [1].
"Bits of knowledge" makes intuitive sense, but a formal definition may not exist. If I wish to know how to stain a cell using immunofluorescence, is each step considered a bit of knowledge, or is the entirety of the process considered one "chunk?" I don't think that this detail is particularly relevant to my discussion, but it is interesting to think about how one may quantify knowledge [2].
Finally, the ability to access knowledge as needed makes it efficient. The human brain can only hold on to a limited amount of data. Some details are best stored on machines; otherwise numerous human specialists would be required to perform complex tasks, each one intimate with one small part of the task. In my graduate work, I can learn about cytoskeletal filaments as needed, or my advisor could hire a cell biologist to consult on a small number of issues. The first option is decidedly cheaper. In addition, ease of access is important, and spans topics such as mobile devices, bringing the internet to developing countries, and search algorithms.
So, in my opinion, internet learning is best utilized as a user-valued collection of information that is accessed accordingly. Communication through the internet, such as e-mail correspondence with teachers, is important and is compatible with my definition since I do not put limits on how knowledge is delivered. Failure to properly use the internet as a learning tool usually comes from poor access (e.g. bad search engine algorithms) or a user improperly identifying what they need to know. In the last case, the success of internet learning cannot be determined by machines; like many things, it boils down to the human element.
[1] I can't get the thought of the internet as a causal knowledge database out of my head right now, since it can only contain knowledge that has already been generated. It will never contain knowledge from the future, unless, perhaps, new knowledge can be generated from data it already holds, but that opens the question of the definition of knowledge.
[2] Information theory comes to mind here. The information content of a signal is quantified as a logarithm of the number of symbols in the signal.
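For instance, Hartley's measure (my addition here, not part of the original footnote) assigns a message of n symbols drawn from an alphabet of size s the information content
\[ H = n \log_2 s \quad \text{bits}. \]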
Wednesday, November 30, 2011
Being rational with science
There's a cool talk posted at the blog Measure of Doubt that was given recently by one of the blog's authors, Julia Galef. The talk concerns the idea of a straw Vulcan, an idealized character based on Star Trek's race of ultra-logical humanoids. Galef argues that the Vulcans base their actions and decisions on a logic that's popularly perceived as rational, when it is in fact not. This is because she defines rationality in one of two related ways: 1) a method for obtaining an accurate view of reality, and 2) a method of achieving one's goals. To make her argument, she presents five beliefs about Vulcan behavior that are commonly held to be rational and then gives examples from both Star Trek and real life where this behavior has violated her definition of rationality.
I particularly like the second and third items on her list—never making a decision on incomplete information and never relying on intuition—because I find that these are common mistakes that scientists make. For example, suppose a graduate student wishes to set up an experiment that he or she is unsure will work. The student may take one of two courses of action (really there are three, the third being a combination of the first two). The first is to try the experiment and see if the outcome is desirable. The second is to carry out a number of calculations to determine if the desired outcome will be produced, and then perform the experiment. The fallacy occurs when the student attempts to plan too much and wastes time on arduous calculations when the experiment itself would have consumed less time. This is a case of failing to act simply because he or she did not possess complete knowledge of whether the experiment would work in the first place.
The example above is irrational by Galef's definition because, in all likelihood, the graduate student would have liked to have obtained a yes-or-no answer to the question "does the experiment work?" in as little time as possible, and sometimes this means running the experiment before fully understanding what the outcome would be. Of course, it takes intuition to determine when it's time to put down the pen and paper and do the actual lab work, and that's why it's rational to rely on intuition.
In a sense, these arguments depend strongly on Galef's definition of rationality, but I see no reason why this isn't a good definition to work with.
Tuesday, November 29, 2011
Momentum and position—it's all you need
I've had a bit of downtime recently, which has led me to thumb through my undergrad QM book, Griffiths' Introduction to Quantum Mechanics, out of curiosity.
In the very first chapter he makes the statement "The fact is, all classical dynamical variables can be expressed in terms of position and momentum." To be honest, I never fully realized this as an undergrad, and if I did, it certainly did not leave such an impression on me to have remained in my memory.
Is this bit of knowledge a common oversight in the education of physics students, or something that simply went unnoticed by me? Furthermore, how important is it to the development of a student's understanding of physics?
Friday, November 18, 2011
Deconvolution—I want it all!
A groupmate and I were discussing practical deconvolution of an instrument's response from data the other day. After some searching on the internet, he came across the following words of advice:
"When it comes to deconvolution, don't be greedy."
This is actually pretty good advice. In theory, the idea works great: Fourier transform the data, divide by the instrument's transfer function, then inverse Fourier transform to get the deconvolved data. But with real data, you risk amplifying the noise or introducing artifacts, especially with FFT-based methods. This last point is pertinent if your instrument's impulse response is very short compared with the span of the data. So use some caution; just because you can deconvolve doesn't mean you should.
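To make the point concrete, here is a minimal sketch (my own toy example, not from the post or from the advice my groupmate found) comparing naive FFT deconvolution with a Wiener-style regularized version; the signal, kernel, and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "true" signal and a short instrument impulse response
t = np.linspace(0, 1, 512)
true_signal = np.exp(-((t - 0.5) / 0.05) ** 2)   # a Gaussian feature
kernel = np.exp(-t / 0.01)
kernel /= kernel.sum()                           # normalized, fast-decaying response

# Simulated measurement: (circular) convolution plus a little noise
measured = np.fft.irfft(np.fft.rfft(true_signal) * np.fft.rfft(kernel), n=t.size)
measured += 0.01 * rng.standard_normal(t.size)

S = np.fft.rfft(measured)
H = np.fft.rfft(kernel)

# "Greedy" deconvolution: divide by the transfer function everywhere.
# Wherever |H| is small, the noise in S gets amplified.
naive = np.fft.irfft(S / H, n=t.size)

# Wiener-style regularization: damp frequencies where the transfer function is weak.
eps = 1e-2    # regularization strength; in practice tuned to the noise level
regularized = np.fft.irfft(S * np.conj(H) / (np.abs(H) ** 2 + eps), n=t.size)

print("naive rms error      :", np.sqrt(np.mean((naive - true_signal) ** 2)))
print("regularized rms error:", np.sqrt(np.mean((regularized - true_signal) ** 2)))
```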
Thursday, November 17, 2011
Google Scholar Citations is up
This morning I received my notice that Google Scholar Citations, Google's new online tool for tracking your own publications, citations, etc., was up and running.
I haven't played around with it much, but I'm impressed that it automatically found everything I've produced that's on the web, with only one mistake: it concluded that I had authored a religious text.
I'm not quite so certain that it will be useful. After all, the number of citations that my papers generate doesn't improve the quality of my work. However, it's still fun to easily track these statistics, even if it only satisfies my own curiosity.
Wednesday, November 16, 2011
It's (not) just all a bunch of hippie crap
I think that we physical scientists possess a bit of hubris when it comes to our perceived understanding of nature. Its roots lie with the view that the laws of physics govern everything and the rest is just details. Unfortunately this hubris is shared by many of my physicist, engineering, and chemist colleagues. My issue is not, however, with their assurance that nature can be reduced to a set of relatively simple laws; rather it is with their perception of the social and some natural sciences as pseudo- or unscientific.
I witnessed this type of hubris many times as an undergrad where fields such as political science, economics, and sociology were labeled not as social science but as humanities (I went to an engineering school). Most of my classmates enjoyed their required classes in the humanities since they were a nice break from their challenging engineering courses. And perhaps this is the point where the hubris begins: as a belief that the humanities are somehow "easier."
It bothers me when I hear some of my grad school friends roll their eyes or make the quotation mark gesture with their fingers when they sarcastically refer to work in the humanities as science. This perception of the humanities as unscientific is outright false when they are weighed against the logical structure of any science. Hypotheses are formed, observations are made, and conclusions are drawn from the observations and prior knowledge, just as in the physical sciences. If there are differences, they lie not with this structure, but with the scales of measure involved; the physical sciences tend to employ more quantitative measures than the social sciences, but this by no means makes one field more scientific than another.
On a positive note, I don't see this arrogance in most of my grad school friends. I just wanted to explore why a few of them dismiss the humanities as unscientific so I could better defend my position in the future.
And if I'm being hypocritical through my continued use of the word "humanities" and not "social science," it's only because I could not think of a better word to use. Ultimately, it's all science to me ;)
Wednesday, November 9, 2011
My relationship with curve fitting
My understanding of curve fitting has changed a lot since I took statistics in high school. Back then it was simply an exercise that produced a line through trivial data that my classmates and I had collected. The function for the line could predict the outcome of future experiments, and therein lay its usefulness.
In college, its importance increased when I learned how to extract physically meaningful quantities from the fitting parameters. The fit was a tool to extract information from the noise of experimental randomness. It became more complex as well—the types of models with which I could fit the data grew far beyond simple lines. Now the models included Gaussians, decaying exponentials, and many other transcendental functions. The importance of curve fitting at this point in my education lay beyond simple prediction; it produced for me the reality that lay behind the noise, and this reality was encoded in the values of the fit parameters. Curve fitting had become absolute and always revealed the true physics behind some process.
Now, after four and a half years of graduate school, I've learned that the human element in curve fitting is paramount. I no longer see it as the purely objective tool that I did before I received my B.S. The moment of change occurred when I realized that the results of a fit can be rendered meaningless simply by adding too many parameters to the model (cf. this post from Dr. Ross McKenzie, where he noted a paper in Nature that fit a model containing 17 parameters to just over 20 data points). If one can fit an elephant using only five parameters, then clearly any other model, including one that a scientist is arguing for in a paper, can be made to "explain" data if it possesses enough free parameters. Furthermore, the initial values fed to the fitting procedure can change the outcome, since the routine may settle on a local minimum in the solution space. Therefore, an educated guess made by an informed human is a critical element of any curve fitting routine.
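Here's a quick sketch of the too-many-parameters problem (my own toy example, with made-up numbers): data that is truly linear is "explained" even better, in the least-squares sense, by a 15th-order polynomial, yet the extra parameters mean nothing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up "experiment": data that is truly linear, plus noise
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

for degree in (1, 15):
    # numpy may warn that the high-order fit is poorly conditioned -- which is the point
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree:2d}: residual sum of squares = {rss:.2f}")

# The 15-parameter polynomial hugs the noise and so has a smaller residual,
# but its coefficients carry no physical meaning and it extrapolates wildly.
```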
My experiences with curve fitting in graduate school have completely transformed my opinion of its value. It certainly no longer appears to me as an absolute tool. I'm also much more careful when assessing conclusions in papers that employ some sort of regression since I've personally experienced many of its pitfalls.
I think it's incredibly important to make undergraduates aware that curve fitting goes beyond a simple exercise of plugging data into a computer and clicking "Go." Both intuition about the physics that generated the data and the ability to make objective judgements about the value of a model are crucial to making sound conclusions. What is the variability in the parameters with the range of data included in the fit? Do the parameters represent physical quantities or are they used to simply facilitate further calculations? What is the degree of confidence in the fit parameters? Are there too many free parameters in the model? Is the original data logarithmic, and, if so, was the fit performed on a logarithmic or linear scale? All of these questions and more should be addressed before presenting results based on a fitting procedure.
Tuesday, November 8, 2011
The best abstract ever?
By way of my advisor, concerning the recently found anomaly in the neutrino speed measurement at CERN:
As far as abstracts go, it does succinctly summarize their findings ;)
Monday, November 7, 2011
Understanding the generalized Stokes-Einstein equation
Mason and Weitz published a paper in 1995 about a technique for extracting bulk material parameters from dynamic light scattering measurements on complex fluids. That is, they established a mathematical relationship between the fluctuations of scattered light intensity from a colloidal suspension and the shear moduli of the complex fluid as a whole.
One primary assumption in this derivation is the equivalence of the frequency-dependent viscosity to a so-called memory function:
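(The equation image from the original post is not reproduced here. If I recall the paper correctly, the assumed relation is the generalized Stokes form, where a is the radius of the colloidal particles; this reconstruction is mine, not the post's.)
\[ \varsigma(s) = 6 \pi a \, \eta(s) \]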
where η(s) is the Laplace frequency-dependent viscosity and ς(s) is the memory function. As a special case, the memory function of a purely viscous fluid is a delta function in time (so its Laplace transform is constant), since such a fluid does not store energy (i.e. it possesses no elasticity). Substituting this into the well-known Stokes-Einstein equation leads to a relation between the colloidal particles' mean-squared displacement (measured by dynamic light scattering) and the complex shear modulus of the fluid, G*(ω) (after conversion to the Fourier frequency domain):
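(Again, the original equation is not reproduced. What follows is my reconstruction of the standard generalized Stokes-Einstein relation, written in the Laplace domain with a the particle radius and the mean-squared displacement Laplace-transformed:
\[ \tilde{G}(s) = s\,\tilde{\eta}(s) = \frac{k_B T}{\pi a \, s \, \langle \Delta \tilde{r}^2(s) \rangle }, \]
with G*(ω) obtained by the substitution s → iω.)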
The authors note in the end of the paper that it's unknown why light scattering techniques should produce the shear modulus of the fluid since they measure elements along the diagonal of the system's linear response tensor, whereas the shear moduli are contained in the off-diagonal elements.
They also note (with explanations I don't quite understand) that "...the light scattering may not provide a quantitatively exact measure of the elastic moduli; nevertheless, as our results show, the overall trends are correctly captured, and the agreement is very good." (emphasis mine)
Thursday, November 3, 2011
Question everything
As you may know, my major field is optics, which concerns the study and application of light. Throughout my studies I've been constantly amazed that Maxwell's electromagnetic theory of light, which has been around since the late 1800s, still contains features that have not been settled or that have simply been overlooked by scientists. One such issue is the disagreement between Minkowski's and Abraham's descriptions of the momentum carried by an electromagnetic wave.
In a 2010 PRA Rapid Communication, Chaumet et al. expand on earlier work by Hinds and Barnett that examines the force on a dipole in a time-varying (i.e. pulsed) plane wave. This force is written completely as
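(The equation image from the original post is not reproduced here. I believe the expression is of the form below; this reconstruction is mine, and the paper's notation may differ.)
\[ F_i = \sum_j P_j \frac{\partial E_j}{\partial x_i} + \sum_{j,k} \varepsilon_{ijk} \frac{\mathrm{d}P_j}{\mathrm{d}t} B_k \]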
where Pj is the dipole moment, E is the electric field, B is the magnetic induction, and ε is the Levi-Civita tensor. The first term in the sum relates to both the radiation pressure and the gradient force. The second term, according to Hinds and Barnett, is usually absent in laser trapping and cooling texts because it is proportional to the time-derivative of the Poynting vector, which is zero in common cooling setups. This term is responsible for repulsion of systems such as a two-level atom from the leading edge of the wave when the first term alone predicts an attraction.
Works like this make me wary of blindly using formulas when performing calculations, since they remind me that a theory may not be complete or its assumptions explicit when presented to a niche audience.
Wednesday, November 2, 2011
No deep insights for today
Well, I just returned from a great wedding in Columbus this past weekend (no, not my own), which means I've been incredibly busy catching up at school... again. I wanted to write a post on curve fitting since I've been involved with the task in my data analysis lately, but I just haven't had the time to flesh out a coherent post.
So instead, I leave you with the URL of a cool new website from Ben Goldacre: http://nerdydaytrips.com/. The site is a user-fed collection of short day trips that might appeal to the more—ahem—academic of us. There seems to be a nice garden near me in Lake Wales, FL called Bok Tower. Perhaps if I can get a free weekend I'll pay it a visit.
And did anyone see the Buckeyes game last Saturday? I think we have a future with Braxton Miller.
Wednesday, October 26, 2011
Back in action... and optomechanical backaction
I returned to Orlando on the red eye from LA Monday morning and am back in the swing of things (paper writing, data analysis, fixing broken equipment, etc.). I unfortunately had the flu for part of the FiO conference, so I did not attend many talks. A few that I did see and found interesting included FMD1: Near Threshold Optomechanical Backaction Amplifier; FTuZ1: Extracting information from optical fields through spatial and temporal modulation; and FTuS7: Optically Induced and Directed Manipulation on Surfaces. The abstracts and submissions should be up at http://www.opticsinfobase.org/ within the next month.
FTuS7 was especially interesting. This group out of Oxford used an optically heated metallic substrate to form colloidal crystals from thermophoretically and convectively trapped silica microspheres. They employed standard video microscopy to observe the grain boundaries between two crystals and recorded the annealing time—the amount of time it took for the grain boundary to disappear due to large-scale reorientation of the two crystals. The positions of the nucleation sites for the crystals were controlled by splitting and directing the laser beam through the microscope objective with a spatial light modulator.
Pretty cool stuff. Fortunately, I got better in time to do some climbing in Yosemite. This time we hit the Five Open Books and I followed on my first Yosemite 5.9, Commitment. The crux on Commitment is ridiculous.
Wednesday, October 12, 2011
Conference preparations
I'm preparing for next week's Frontiers in Optics conference, which means I'm pretty busy. So, instead of my usual philosophical ramblings, I leave you with the following YouTube video:
And come to my poster presentation in San Jose on Tuesday between noon and 1:30 PM (Presentation number JTA23)!
Thursday, October 6, 2011
"It's an exothermic reaction!" "Your face is an exothermic reaction."
I learned yesterday in the lab that aluminum reacts with water (though not always as spectacularly as in the video above). The barrier to witnessing this is that the metal is usually coated in a thin layer of alumina (aluminum oxide). If you want to witness this reaction without resorting to molten aluminum, you can treat the surface with mercury(II) chloride, which forms an amalgam with the aluminum. The reaction may then proceed, producing heat and hydrogen gas.
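For reference (my addition, not from the original post), the overall reaction is:
2 Al + 6 H2O → 2 Al(OH)3 + 3 H2 (plus heat)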
Common sense would dictate that you PLEASE EXERCISE CAUTION WHILE ATTEMPTING THIS OR ANY EXPERIMENT.
Wednesday, October 5, 2011
Philosophy isn't useful for science? Don't be crass.
Richard Feynman once quipped, "Philosophy of science is about as useful to scientists as ornithology is to birds." Dr. Feynman is one of my scientific heroes, but this quote often tempers my admiration for the man. I respect him because he appreciated what was good in many different things, from the simple aesthetics of a flower to the intricate mathematics underlying quantum mechanics. He was not one to dismiss ideas simply because they were "artsy" or not of pure science. This is why I often puzzle over how he could have made such a statement concerning philosophy.
Without philosophy, I would not be the same scientist that I am, and I would venture that I would not be as good of one, either. Philosophy is the art of critical analysis; the philosophy of science examines the methods and logic that form the foundation of our field. The result of this investigation exposes the mental machinery that powers our work. But this knowledge has also produced a number of practical applications throughout history. The following are two examples.
E. T. Jaynes and R. T. Cox, among others, re-examined the long-held rules of inference. A simple redefinition of probability led to an explosion of techniques for drawing conclusions where information was limited, from signal processing to economics. This is the well-known Bayesian revolution. As another example, Henri Poincaré established a daily routine that complemented his work as a scientist. He postulated that the subconscious was at the focal point of discovery, and so he took steps to nurture its well-being. A modern-day analog is the development of programming philosophies intended to increase a software engineer's productivity by tailoring work methods to the structure of the mind.
My justification of philosophy to science was to make a point, but I don't think it was necessary. It is unfortunate that we must always find utility in our work. As I've stated before, science is ultimately a creative process that is powered by our primal nature and instincts. To remove its basis from the romantic mindset and place it solely in the context of practicality is a fallacy. I do science and philosophy because I like to, and no better justification is required than that.
Wednesday, September 28, 2011
A diverse basis for science
The sum of our experiences, environment, and genetic predispositions form the basis for how we view and interpret the world. This is a principle of many philosophies and helps to explain the broad diversity in human behavior.
The practical view of science is of a rigid structure built upon basic assumptions and prior knowledge. After the proper application of "the scientific method," this foundation leads us to new discoveries. Roughly speaking, the method goes as such: we start from our current state of knowledge, make a hypothesis followed by observations, formulate a proper model that accommodates the data, then draw our conclusions while taking into account the prior information. This outline must follow the rules of logic and not contradict what we already know to be true (and if it does, we place this contradiction under extreme scrutiny until the contradiction is resolved).
This generalization of the scientific method is confounded by the inherent variability in the assumptions from which it starts. In reality, everyone possesses a different set of beliefs regarding scientific inquiry. This is exactly analogous to the variety of metaphysical beliefs held by people across the globe. And, just as this variety gives rise to the diversity of people, it leads scientists to different interpretations of their own and others' work.
Rather than enforce a common basis for the pursuit of science, scientists ought to respect the basic diversity within their own field. Science is more than rote application of a formula; it engages the scientist to the point that discovery becomes an act of self-expression. A study is flavored with the thoughts and feelings of the people involved and cannot be separated from them. Once this is understood, we can see that science is a very human endeavor and not the cold, calculated formula known as the scientific method.
Wednesday, September 21, 2011
Grad students != drones
A major difference I notice between graduate students and my friends who work in industry is the significant sense of pride and ownership that the latter group takes in its work. This could be for a variety of reasons, including higher monetary compensation, the real-world potential of their work, and motivation provided by their employers.
On the other hand, a natural curiosity (apart from the promise of a degree) would ideally drive a graduate student to perform quality research, but I find that this is sometimes not the case. Many graduate students feel that they are forced into their projects solely because it furthers the career of their advisor. A student-advisor relationship grounded in this sentiment leads the student to do just enough work to graduate, but does not provide enough motivation for the student to fully realize the potential of their work. Some ownership in the project is needed.
How can a mutually beneficial environment be created within an academic setting? I think advisors should think hard about this issue since they stand to benefit greatly from an increase in the quality of their students' research. I don't think that the prospect of earning an advanced degree is enough to establish such an environment; graduate students need to see their research as something beyond a means to graduate if science is truly going to progress.
Some things that advisors do that remove the sense of ownership from their graduate students' work include
- having their graduate students attempt numerous "impossible" experiments in the hope that one might actually work;
- writing the journal papers on the projects themselves;
- delaying the graduation of senior students for their experience in the lab;
- and frequently deferring communication and guidance to post-docs.
Wednesday, September 14, 2011
Measurement efficiency
While I was helping one of the undergraduates in our group yesterday I started thinking about the efficiency of measurements and how we can avoid wasting time in the laboratory with poorly planned experiments.
By an efficient measurement I mean an experimental procedure that extracts the most information possible from the collected data with the least effort. The second part of my definition, expending the least effort, is usually common sense. After all, I require nothing more than a ruler to measure the length of something that is nearly the size of the ruler. Pulsed radar or laser interferometry is obviously too complicated for this task.
However, I say it is only usually common sense because people erroneously think that more data is always better. If a certain phenomenon is known to be a linear function of some variable, for example, then I only need to take enough data points to assign statistically meaningful values to the best-fit line's slope and intercept. I've frequently seen my colleagues painstakingly collect so many data points as to make their plot appear continuous when in fact the curve describing the data was unimodal or uniformly increasing or decreasing. Much less effort could have been expended by reducing the number of measurements performed in these cases [1].
These examples also help illustrate the first part of my definition of an efficient measurement—extracting the most information possible. In the example about the data modeled by a line, there are only two pieces of relevant information: the slope and intercept. In fact, if it weren't for noise and measurement uncertainty, only two data points would be needed to maximize the amount of information gained. More complicated situations would likely involve performing measurements to increase my belief in a certain conclusion but may not outright prove that conclusion true. In these cases, an efficient measurement would optimize my belief based on the data it provides.
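As a rough illustration (my own toy numbers, not from the post), here's a sketch comparing the standard errors on a fitted slope and intercept as the number of points grows; past a modest number of points the improvement is slow, scaling roughly as 1/√N for fixed noise and measurement range:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_line(n_points, noise=0.5):
    """Simulate noisy linear data and return the best-fit slope/intercept
    along with their standard errors from the fit covariance matrix."""
    x = np.linspace(0, 10, n_points)
    y = 3.0 * x + 2.0 + rng.normal(scale=noise, size=n_points)
    coeffs, cov = np.polyfit(x, y, 1, cov=True)
    return coeffs, np.sqrt(np.diag(cov))

for n in (5, 20, 500):
    (slope, intercept), (dslope, dintercept) = fit_line(n)
    print(f"N = {n:3d}: slope = {slope:.3f} +/- {dslope:.3f}, "
          f"intercept = {intercept:.3f} +/- {dintercept:.3f}")
```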
There is one subtle point to an information-theoretic viewpoint of measurements that I've failed to discuss so far. The information that is extracted depends entirely upon the hypotheses being tested. That is, information is not physical. Measurements of voltage across a piece of material are only relevant if I want to know the material's electrical properties. So identifying exactly what I want to know about my system before I measure something about it is crucial to optimizing my measurement's efficiency.
In summary, an efficient measurement simplifies the means of data collection while maximizing the amount of information provided by the data. The information that a measurement provides is determined by the questions the experimentalist asks; therefore, measurement efficiency is judged against these questions.
[1] Automated data acquisition has to a large extent made the number of data points collected irrelevant, but perhaps it has also caused many of us to neglect the question of efficiency in the first place.
Wednesday, September 7, 2011
Working hours do not correlate with productivity
A common topic in discussions I have with other graduate students concerns the proper amount of time that we should dedicate to our studies. The topic is relevant to helping us find the optimum work schedule, i.e. one that allows us to both find fulfillment with our studies and graduate in a timely manner. To simplify, let's say that the optimum work schedule maximizes our productivity.
Let's first begin by grouping graduate students into broad categories by their work habits. These categories are by no means mutually exclusive or exhaustive. I do however believe that a majority of graduate students can be placed within at least one of them.
- The 9-to-5'er: This graduate student treats her research as a regular job. She often works for three- or four-hour chunks of time, takes a half-hour lunch, and generally leaves her work in the office/lab. Five-day work weeks are the norm. I believe that this is a somewhat rare work schedule for graduate students.
- The 8-to-6'er: Pretty much the same as the previous category, except the longer time spent at school means that a graduate student following this work schedule will take more or longer breaks during the day. Some weekend work may also occur. Most students at CREOL fall into this category, myself included.
- The Night Owl: These students usually don't get to school until 1:00 PM and work until the late hours of the night. They also tend to consume the most coffee.
- The Stay-At-Home Grad Student: These students typically have advisors who frequently travel or are not present in the lab. They may also have projects requiring a lot of programming and simulation—work that's easily done at home (thank you Remote Desktop).
- The Stay-At-School Grad Student: Hygiene and a social life are extraneous for these students. Perhaps a product of www.hulu.com and the digital media revolution, the stay-at-school graduate student finds no need to go home when TV can be piped directly to her computer.
- The Random Worker: If there's work to do, the random worker will spend all her time, day and night, in the lab until it's done. Then she'll spend the next week at the beach. The random worker, like the stay-at-home graduate student, is usually a product of a hands-off advisor. They also tend to share the most in common with undergraduate work habits.
And don't worry. I have more than my own anecdotal evidence to support my claim. Two articles in Nature [here and here] reported on two separate research groups: one with a brutal schedule and one that strongly supported life outside the lab. Both are successful and well-respected amongst their peers. Furthermore, American researchers (and workers from all occupations) notoriously work more hours a year than Europeans. It might be argued that this grants the Americans a technological edge, but I have my suspicions. Besides, Europeans are much happier.
I now wonder if the question of the proper amount of time spent working as a graduate student carries any real meaning. It presupposes that there exists some balance that's suitable for everyone, which is clearly not realistic.
And if this article is a bit incoherent, I apologize. I was too busy working this week to think it through thoroughly.
Wednesday, August 31, 2011
On blog writing (and happy 1.5'ish year anniversary!)
I've been writing on this blog for about a year and a half. As with any milestone (ok, so I missed the anniversary of the first post by six months), this fact has put me in a reflective mood and I'm eager to analyze what I've learned from this endeavor.
My original intent for starting the blog was to have a place to jot down my ideas and sort them according to some logical framework. My mind was a discombobulated mess of tasks I had to do, dates I had to keep, and theories that intermingled to the extent that I had difficulties keeping them straight. Of course, I could always jot down notes in a notebook or journal (and I do), but I found that my handwriting simply couldn't keep up with the rate at which my thoughts crystallized into a pattern in my mind. Thus, I started the blog, but honestly did not care whether others read it.
My first post sums up this intent very well. (Having written it so long ago, it actually seems as if someone else wrote it.) So has the blog succeeded in this regard? I think so. I've certainly managed to find a broad range of topics on which to flesh out my thoughts, and my thinking is much more clear than when I started.
The most amazing thing I've found from writing, especially through this medium, is that my intent for a post almost always changes as I write it, leading me to discover something in the process. For example, I first started this post as a discussion on writing techniques, inspired by this article. I suspect that the reason for this is that the public nature of a blog impresses upon me the need to be logical. I dare not commit a logical fallacy in front of the entire digital world, especially since I purport to be a "scientist." So, as I place my thoughts into writing, I'm better able to catch errors in my thinking, fix them, and as a result find new conclusions and ideas.
For this reason I've recently come to appreciate the value of an audience. Sure, my readership is next to nil, but building one takes work that I simply have not put in. That's changing now. Through the magic of Google, I've discovered ways to popularize my blog (read: attract any sort of attention possible), including http://www.twitterfeed.com. If more people are reading what I write, I'll probably pay closer attention to it.
And what of the future? Do I have any more hopes for my writing? Well, trying to keep a regular post schedule has been challenging, so I hope that I can maintain it. I also hope that, if enough people do start reading this blog, I can gain a lot from comments. I realize my viewpoint is a very particular one in a large field. My biases likely keep me from conclusions that are just as valid but that I otherwise discount or miss entirely.
This was actually a rather easy and fun post to write. If you do enjoy this blog, feel free to share it with your friends who might find it interesting. I'd appreciate it ;)
On a side note: thank you Google for changing the layout of the editing screen on Blogger. The newer, more professional look means I don't have to Alt-Tab to Matlab when my advisor enters my office unexpectedly.
Friday, August 26, 2011
I'm not the only one with these suspicions...
“I have known more people whose lives have been ruined by getting a Ph.D. in physics than by drugs.” - Dr. Jonathan Katz
This quote, by way of this article, is a bit distressing. Still, the article reaffirms several thoughts I've had recently about the prospects of obtaining a job in academia. Academia is simply too saturated with workers. This article is a great summary of some recent data and opinion pieces, and I think it should be read by anyone who is planning to obtain a PhD in a scientific field in the near future.
http://promega.wordpress.com/2011/08/26/the-ph-d-glut-do-we-have-too-many-ph-d-s/
Wednesday, August 24, 2011
When is a conclusion good enough?
I've been doing a lot of reading recently on modern probability theory, Bayesian analysis, and information theory. One of the central tenets of these theories is that any proposition has associated with it a degree of plausibility, i.e. a probability of being true, that reflects the amount of information available. The proposition "the sky is blue" is extremely plausible since it is based on daily observations of my own and confirmed by others. As another example, I judge the proposition "thirty million Americans have blue eyes" to be plausible based on my knowledge of the current population of the United States and my own observations of how frequently I encounter blue-eyed people. However, this is not as likely to be true as the first statement.
A conclusion in a scientific paper is nothing more than a proposition and thus possesses its own degree of plausibility. The information available to the reader for determining the plausibility of the conclusion is the data presented in the paper and all previously published work on the same topic. Of course, other factors weigh in, such as prejudices for particular theories and affinities or dislikes for the authors on the paper. Temporarily placing these other factors aside, I wonder, "when does a conclusion possess a large enough degree of plausibility to be considered true?"
Of course, a definite answer doesn't exist. No scientific proposition can be true with 100% certainty and it is silly to think that we can even assign a threshold probability for evaluating a scientific paper's correctness. Just imagine a paper successfully passing through the peer-review process so long as it is evaluated to be 78.63% or more true by its reviewers. But the question remains relevant. Science is a culture and every culture has criteria by which it evaluates claims.
In a closely-related post I wrote about the fallacies that workers commit while evaluating other work. But I find it much more difficult to identify the criteria for establishing the truthfulness and quality* of research. Unfortunately, I don't think I'll be able to reach my full potential as a scientist until I am capable of doing so.
Note: The problem of defining quality has been approached at great length by American author Robert Pirsig in his popular novel "Zen and the Art of Motorcycle Maintenance." One of his primary arguments is that Quality is actually an undefined construct present at the seminal moment when an observation is made and processed by the brain. People may know if something possesses Quality, but it is inherently undefinable.
Monday, August 22, 2011
Why I like Dr. Ben Goldacre
I've been trying to keep my posts to once a week on Wednesdays, but sometimes I simply just want to share something interesting. In a recent Bad Science post, Dr. Ben Goldacre discussed sampling error in relation to unemployment figures in the UK. The article itself is interesting, but I found the description of systematic sampling error especially amusing:
Firstly, you’ll be familiar with the idea that a sample can be systematically unrepresentative: if you want to know about the health of the population as a whole, but you survey people in a GP waiting room, then you’re an idiot.
Wednesday, August 17, 2011
The great physicist Subrahmanyan Chandrasekhar gave a famous lecture at the International Symposium in Honor of Robert R. Wilson in April, 1979 entitled "Beauty and the Quest for Beauty in Science." In the second paragraph of the lecture, he quotes Poincaré:
The Scientist does not study nature because it is useful to do so. He studies it because he takes pleasure in it; and he takes pleasure in it because it is beautiful. If nature were not beautiful, it would not be worth knowing and life would not be worth living.... I mean the intimate beauty which comes from the harmonious order of its parts and which a pure intelligence can grasp....
It is because simplicity and vastness are both beautiful that we seek by preference simple facts and vast facts; that we take delight, now in following the giant courses of the stars, now in scrutinizing with a microscope that prodigious smallness which is also a vastness, and, now in seeking in geological ages the traces of the past that attracts us because of its remoteness.
Good science is not forced and does not evolve from long hours in the lab and a work-centric lifestyle alone, though I admit that these are necessary to maintain a healthy scientific career. Rather, science is intimately related to one of the most basic of human traits: the impassioned drive to create order out of chaos.
And this is why, despite my affection for Poincaré, I disagree with part of the above quote. Beauty does not exist within Nature because its parts are inherently balanced and "harmonious." Beauty exists as a result of the mind imparting its own structure to random titillations of the senses. Men and women do not render Nature meaningful through physical exertion alone. The reduction of Nature to coherent understanding is the realm of Science.
Wednesday, August 10, 2011
What a timely survey
Nature Jobs posted an article last week that summarizes the results of a survey conducted on graduate students in the sciences about their satisfaction with graduate school. Here are some of the more interesting points:
- 78.8% of first-year PhD students responded that they were "very" or "quite" likely to continue on to a university research position after graduate school. In comparison, 62% of fifth-years answered the same.
- Competition is the biggest factor in steering students away from academic careers.
- 44.6% of students thought about post-school career options before entering graduate school.
- 71.6% of European-based students reported that they were somewhat or very satisfied with their overall graduate school experience, compared to just 57.1% in the US and 62.3% in Japan.
In contrast, scientific jobs often carry a certain stigma with them in the US. Amusingly, I had a travel buddy, an engineer for an aerospace company, once explain to me what lengths he went to in college to hide the fact that he studied engineering when hitting on girls.
I'm not certain that being an intellectual is a turn-off, however. Consider the tech industry (Google, etc.) and the fairly well-received social status that its employees enjoy despite their aptitude for computers and technology. The real substance here is that they also enjoy good incomes and stable jobs. So, the point is that intelligence is not socially undesirable; rather, it's that a good career and income are more highly valued.
The survey's findings seem to corroborate my recent conclusions about life as a graduate student. I must admit that I'm a little jealous of my engineer friends who went to work immediately after school. They're making much more money, have more free time, and are generally moving forward with their lives at a comfortable pace. And since we came from the same educational background, it's sometimes demoralizing to consider where I'm at with my life and find that I'm lagging them in these respects due to my role as a graduate student.
But I'll be damned if a job in industry can elicit the same oh-so-sweet feeling that comes when an experiment works, the results fit neatly within the model, and one little mystery of nature finds itself tamed by my own cognitive exertions. That feeling is the reward of a career in science.
Wednesday, August 3, 2011
What I wish I had known about academia (before I entered graduate school)
I'm entering my fifth year of graduate school this upcoming semester and, accordingly, have been increasingly thinking about my life afterward. Moments of reflection and talks with other students have revealed that there are a good number of things concerning a career in academia that I was unaware of when I began my graduate studies. Most of these things have taken me a long time to learn because the points were subtle or I was too naive to honestly assess the matter. Though these thoughts may not hold up against the actual numbers (e.g., I haven't looked at the availability of teaching positions or the average income of post-docs), they certainly have found other voices, such as a few of the authors in this April Nature issue. And since a thought held by a large number of people carries a good deal of influence regardless of its validity, I will take these thoughts to be true and offer them as advice to those who are considering a career in academia.
Career Point Number One: A career in academia—specifically in science—requires more hard work and dedication than most careers. This is due to a number of reasons, including a saturation of workers in the field, competition over resources, and a career trajectory that is difficult to advance through. The saturation of workers could be considered the cause of the competition over resources, but it is significant in its own right since it dilutes the quality of the work being done. And as for the difficult career trajectory: a colleague told me that the scariest career move one can make is to take an assistant professorship with a family and a mortgage and no guarantee of tenure.
CPN Two: Teaching positions are sparse (this was a surprise to me!). Many tenured professors or industry veterans enjoy retiring into academic teaching positions, and new graduates have little leverage against these more experienced candidates when competing for a desired teaching position.
CPN Three: No matter how amiable your advisor is, it is not in their personal interest to graduate their students. Losing experienced and knowledgeable students hurts their ability to publish, obtain funding, and generally proceed through their own academic career paths.
CPN Four: A post-doc may not be the best option for advancing a scientific/academic career. Many advisors interpret the post-doc role as one similar to a graduate student's but unencumbered by educational burdens such as attending class. A move to industry following graduation, however, may provide better networking opportunities and more chances to evolve one's professional skill set. One can always return to academia.
CPN Five: One need not work in academia to retain one's interest in science. This was in no way obvious to me from the start. After having identified my interest in physics, I proceeded through my education with the idea that physics would be my career. But herein lies the most important distinction of all: science is not a career.
Science is the pursuit of truth and the delight in discovering new things through experimentation. It need not be grand in scale or require a large amount of resources. It is an unfortunate development that a career in academia and science have assumed the same position in most people's minds.
I admit that this assessment may appear a bit bleak at first, but for me it is rather liberating. I'm OK finding a career that is not centered squarely within academia because the requirements of such a job are too demanding given my other interests. This in no way means that my life or work cannot contribute significantly to science. But to conclude that academia alone is the only way to significantly impact science is to commit a fallacy that could launch one onto a difficult and unsatisfying career path.
Wednesday, July 27, 2011
How not to argue in science
I'd like to expand a little on yesterday's post. I'm beginning to better understand what constitutes proper debate of a scientific work. Whether the following logic is actually practiced by most researchers is questionable, but this is nevertheless an interesting and important point.
Data from a study provides information that does one of three fairly obvious things to a conclusion: it increases, decreases, or leaves unaffected the likelihood that the conclusion is correct. I place emphasis on the word likelihood because any given conclusion cannot be demonstrated to be correct with 100% certainty, and I highly doubt that conclusions can be proven false with certainty either.
I think that this—the likelihood that a conclusion is correct given all information—as well as the competency with which an experiment was performed are the two objects open to debate within science. The debate becomes unscientific when researchers and journal reviewers commit the following errors:
- Assigning too much weight to prior information, thus making the likelihood that another work's results are correct lower than it perhaps should be (a rough numerical sketch of this point follows the list).
- As a corollary to the first point, workers would be in error if they didn't properly balance the weighting of all prior information. For example, the media, in their coverage of climate change, have been chastised by some for giving equal attention to climate change skeptics and proponents. This is because the proportion of scientists who doubt climate change is significantly smaller than the proportion who see it as a true occurrence.
- Assuming that a finding is false given prior information or prejudices. If one accepts that a finding cannot be proven false but only judged highly unlikely, then arguing to reject a journal article because it contradicts previous findings is itself fallacious. The wider scientific community, with its more balanced opinions, should be a better interpreter of the likelihood that the claims are real.
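As a rough numerical sketch of the first error in the list above (all numbers are invented for illustration), consider how the same evidence moves reviewers who weight their prior information differently. Posterior odds are just prior odds multiplied by the likelihood ratio of the data, so a prior weighted heavily enough against a finding will barely budge no matter what the data say.

```python
# Hypothetical sketch: posterior odds = prior odds x likelihood ratio.
# Data favoring a finding 10:1 barely move a reviewer whose prior is
# weighted heavily against the result.
def posterior_probability(prior_prob, likelihood_ratio):
    """Update the probability that a finding is correct, given a Bayes factor."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

EVIDENCE = 10.0  # the new data favor the finding 10:1

for prior in (0.5, 0.1, 0.001):  # open-minded, skeptical, heavily prejudiced
    post = posterior_probability(prior, EVIDENCE)
    print(f"prior = {prior:5.3f} -> posterior = {post:.3f}")
```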
Tuesday, July 26, 2011
Uniqueness of probability allows for assertion
But until it had been demonstrated that [the probabilities] are uniquely determined by the data of a problem, we had no grounds for supposing that [the probabilities] were possessed of any precise meaning.--E. T. Jaynes and G. Larry Bretthorst, from "Probability Theory: The Logic of Science" (emphasis mine)
I take this to mean that any experiment whose results are debated is under question because either 1) the logic of the critics is flawed or 2) there is not enough information in the data to reach the argued-for conclusion with a large degree of plausibility. Uniqueness of the probabilities assures this. Furthermore, there can be no argument for the truth or falsehood of the claims themselves.
Tuesday, July 12, 2011
More notes from Jaynes
The introduction to Probability Theory: The Logic of Science has been useful for explaining what various statistical procedures are used for when making an inference about data.
Maximum entropy is a technique used to establish probabilities for outcomes from data given no prior information or assumptions. It is essentially an algorithm that comes to a conclusion without any bias from the experimenter. Bayesian techniques, on the other hand, require some prior information, and this will affect the conclusion.
Typically, when performing acts of inference, one begins with maximum entropy if very little is known except what's given in the data. Once more is known, one may turn to Bayesian analysis.
Bayesian analysis requires five things: a model, sample space, hypothesis space, prior probabilities, and sampling distributions.
There is much work to be done in developing techniques for cases where very little is known about the raw data; this could lead to methods that can assist in studies where even maximum entropy may fail to give adequate results.
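To make the contrast concrete, here is a minimal sketch with a coin-flip data set and priors of my own invention (this is not Jaynes's notation or example). With no constraints beyond the two possible outcomes, maximum entropy assigns the uniform probability of one half to heads; a Bayesian update instead starts from an explicit prior, and the choice of that prior visibly shapes the conclusion.

```python
# Hypothetical coin example: maximum entropy with no prior information gives
# the uniform assignment p(heads) = 0.5, while a Bayesian (conjugate Beta)
# update starts from an explicit prior whose choice affects the posterior.
def beta_posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of p(heads) under a Beta(alpha, beta) prior and binomial data."""
    return (alpha + heads) / (alpha + beta + heads + tails)

HEADS, TAILS = 7, 3  # invented data

print("maximum entropy, no data or prior:", 0.5)
print("flat prior Beta(1, 1):   ", beta_posterior_mean(1, 1, HEADS, TAILS))   # 0.667
print("strong prior Beta(20, 2):", beta_posterior_mean(20, 2, HEADS, TAILS))  # 0.844
```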
Monday, July 11, 2011
Coming to conclusions
In the introduction of E. T. Jaynes's Probability Theory: The Logic of Science, Jaynes states
...the emphasis was therefore on the quantitative formulation of Polya’s viewpoint, so it could be used for general problems of scientific inference, almost all of which arise out of incomplete information rather than ‘randomness’.
As I learn more about the field of sensing, I find that this is the mentality, whether acknowledged by a practitioner or not, that is adopted when coming to a conclusion about the interpretation of data. The uncertainty involved in coming to a conclusion is not because the measurement process is inherently random but rather that one has not collected enough data to say whether this conclusion is true or false.
And in the light of Bayesian analysis, one will never be able to claim with 100% certainty that the conclusion is true (or false, for that matter).
Thursday, July 7, 2011
Fighting through boredom
Graduate school can often be frustrating, disruptive, and downright stressful. I probably did not need to state this since all graduate students are aware of this fact. I have been especially bored with my graduate studies lately and have been struggling to identify both the cause and a solution. Through talking with friends and lots of time thinking (especially during my recent vacation to the UK), I've slowly been able to reason out the cause.
Quite simply, I've forgotten my large-scale fascination with science. As an undergrad, I would marvel at every piece of popular physics literature I read, from discoveries at particle accelerators to the development of new nanotechnologies. In graduate school, I've become so mired in one specific area of research that I forgot that very cool things are happening all over the scientific world, such as the "bump" in the data seen at Fermilab.
Though not the only reason for my recent lull, it is a major one. And it points to a solution: take time out of my day to peruse the myriad popular science articles out there and rekindle my interests. Though I may not be working on these famous projects, I find that I am much happier in the lab after having contemplated these things. They place my work within a greater context, and though I tend to be an individualist, I think that I at least need to do this to find satisfaction with my own work.
Tuesday, June 28, 2011
Notes on Bayesian Analysis
This afternoon I encountered one of those rare moments when I had little to do since I was waiting on input from a number of collaborators, so I read through half of this site on intuitively understanding Bayes' Theorem. Here are some things that I learned or that were made clearer than my previous understanding:
- The outcome of Bayesian analysis is a modification of the prior (the probability of an event that is known before the analysis) which produces the posterior (the probability of an event given certain conditions and dependent upon the prior).
- Bayesian analysis requires three pieces of information: the prior and two conditional probabilities (a true positive and a false positive outcome). A small numerical check of this follows the list.
- If the two conditional probabilities are equal, the prior is unmodified and equals the posterior. This is because the result of the test is uncorrelated with the outcome.
- The degree to which the prior is modified can be described by the concept of differential pressure, i.e. the relative difference between the two conditional probabilities. The process of changing the prior due to differential pressure is known as selective attrition.
- People use spatial intuition to grasp numbers. Teachers can use this and the idea of natural frequencies to their advantage to teach difficult concepts.
- Related to the previous point, the way in which given information is presented in a word problem (e.g. percentages vs. ratios) will affect the percentage of correct scores.
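Below is the small numerical check promised above; the prior and test characteristics are numbers I made up for illustration. Bayes' theorem turns the prior into the posterior using the two conditional probabilities, and when the two conditionals are equal the posterior simply equals the prior.

```python
# Invented numbers: a rare condition (prior 1%) and two tests, one with a
# large "differential pressure" between its conditionals and one with none.
def posterior(prior, p_true_positive, p_false_positive):
    """P(condition | positive result) from the prior and the two conditionals."""
    p_positive = prior * p_true_positive + (1.0 - prior) * p_false_positive
    return prior * p_true_positive / p_positive

PRIOR = 0.01

# Informative test: the conditionals differ, so the prior is modified upward.
print(posterior(PRIOR, p_true_positive=0.95, p_false_positive=0.05))  # ~0.16

# Uninformative test: equal conditionals, so the posterior equals the prior.
print(posterior(PRIOR, p_true_positive=0.60, p_false_positive=0.60))  # 0.01
```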
Friday, June 10, 2011
A day in the life
It's almost 5:00 PM on a Friday, which means I'm tired and don't want to do any more work. While staring blankly at my computer screen for ten minutes waiting for happy hour and the beer that awaits, I reflected upon my day and what I accomplished. During this time of meditation, it struck me that my day, a fairly typical day in my grad student life, might be a bit unusual. Here's my summary:
8:00 AM: Get into office after working out. Check e-mail/Facebook. So far so good.
8:30 AM: Get first cup of coffee.
8:35 AM: Begin 12-page derivation of an expression for the decorrelation times of a time series of images of microparticles undergoing anisotropic Brownian motion.
9:30 AM: Get second cup of coffee. The caffeine's really producing some crazy math now.
10:10 AM: Finish derivation. Try and place all the scratch paper strewn about the office in order. Fail at this.
10:20 AM: Tweak Matlab code for simulating anisotropic Brownian motion. Become upset when a function I want to use is in the Spline Toolbox. We don't have a license for the Spline Toolbox.
10:30 AM: Get bored of this. Get more coffee.
10:35 AM: Read journal article on causality and the Kramers-Kronig relations. Spend too much time wondering why differential equations with higher-order time derivatives of the force on a system than of the system response are not causal. (P. Kinsler, "How to be causal," arXiv:1106.1792v1 (2011).)
11:45 AM: Eat lunch.
12:15 PM: Check Facebook. Join the collective graduate student world in celebrating the announcement of the PhD movie.
12:30 PM: Build a setup using Ni:Chrome wire, a current-regulated supply, and spare lab parts for cutting slots into plastic Petri dishes.
1:00 PM: Cut slots into the sides of the Petri dishes. Try not to breathe the fumes from the molten plastic.
2:00 PM: Glue fibers that were previously tapered with a CO2 laser into the slots with a silicone-based glue that is normally used for gluing colostomy bags to people.
2:30 PM: Become dizzy from the plastic/glue fumes. Take a walk outside.
3:00 PM: Go to biology and observe HeLa cells growing in a petri dish on a quartz surface I prepared last week. Take minor awe in noting that this exact cell line can be traced to a woman who died in 1951.
3:15 PM: Debate with a biology professor why structured networks can induce stress in the actin filaments of the cytoskeleton and why ex vivo observations can facilitate in vivo understandings of cell motility.
3:30 PM: Return to office, which now smells like colostomy bag glue.
3:50 PM: Read the notes that came with the glue. Note that isopropyl alcohol, the very chemical I use to sterilize dishes, will dissolve the glue, rendering this work useless. State an expletive rather loudly at this finding.
3:55 PM: Go to lab and play with the alignment of my imaging fiber setup.
4:05 PM: Try and find the other imaging fiber (needed to be prepared by Monday for the biology professor) but realize my groupmate took it and put it in his own setup.
4:10 PM: Attempt to contact groupmate. He left for the day already and is not answering his cell phone. State another expletive at the realization that I will have to come in on the weekend to prepare the fiber.
4:20 PM: Return to lab. Note that the Ti:Sapphire laser head cooling system reports an error due to lack of cooling water. Check water tank. It's full.
4:30 PM: Return to office. Note giant glob of colostomy bag glue on desk. Luckily it's removed with isopropyl alcohol.
4:35 PM: Summarize life.
5:15 PM: Go drink beer.