Wednesday, December 11, 2013

Beginning a post-doc abroad

As you may know, I recently accepted a position as a post-doc at EPFL in Lausanne, Switzerland. Just about two weeks ago, my wife and I moved to Lausanne and I began my new work.

This is the first actual "job" I've ever had. Sure, it carries many similarities to graduate school, but my responsibilities are greater and, on top of that, I'm now living in a new culture. All of these aspects taken together have made for a very exciting two weeks.

By moving to a new country, I hope to experience a little bit of what so many students and researchers experience when they move to the US from abroad. Working in scientific research is tough, and I can only imagine what it was like for many of my colleagues to adapt to a new lifestyle as well. I was, in a sense, jealous of this great experience they were having and of the worldliness they gained as a consequence.

Of course, there have been difficulties with the move. Language is one such difficulty. I'm learning French, but my skills still remain atrocious. And little things, like the idea that one never gets on at the front of a bus in Lausanne, have led to embarrassing moments with bus drivers and the public.

Overall, though, I'm quite satisfied with my new location and even more so with my work. I have switched my research to biophysics and can feel the excitement of a field that still has quite a bit left to discover. My lab environment is fantastic, and the EPFL campus is energetic and knowledge-driven, much like my undergraduate institution, Rose-Hulman.

To my mind, academia is special because it not only drives discovery, but also encourages a global attitude and a feeling of trust among people from all nations.

Wednesday, November 6, 2013

Tips on communicating science to a class

Yes, I am still alive.

My wife and I just returned from a long hiking and climbing trip out in the western states. It's tough readjusting to normal life after having been on the road for a month, but I've managed to slowly get back into the world of science while I wait out the visa application process for Switzerland.

Today I came across an interesting article in Wired called "A Media Guide for Physics" by Rhett Allain. In the article, Rhett gives a few tips for producers of science TV shows that would help them communicate science better. His tips are
  1. don't be wrong;
  2. it's better to say nothing than to be wrong;
  3. don't be misleading;
  4. [don't focus] on comparisons and numbers;
  5. don't get out of control crazy.
I agree with Allain on all these points, especially the last one. Though this example is not from TV, I've noticed that graduate students (myself included) will tend to describe the minutiae of their research to lay people because they don't want to be wrong and because they've been so immersed in it that they forget what other people know and don't know.

The article also got me thinking about what tips teachers and educators should use when communicating science to their students. The list must necessarily be different because the audience is different. What follows is my own list of tips that I think are valuable to college professors when communicating concepts to a class.
  1. Don't assume that students are well-grounded in the "background" material and concepts.
  2. Don't gesture too much while lecturing. When you gesture, you're referring to an image in your head that only you can see. At least put that image on the board.
  3. Present ideas visually and verbally before going into derivations.
  4. Include a little history behind the concept you're about to teach if there's time. The reasons for why a concept is important are often found in the history of the development of the idea. For example, Newton's laws seem obvious now, but philosophers had some very muddy ideas about motion before Newton formulated them. And all of thermodynamics arose out of a need to understand how newly invented engines and devices worked. If you start a class by talking about molecules in a box, the relevance is lost.
The list is obviously not exhaustive, but I've seen many professors violate one or more of these to the detriment of their students' understanding. What other tips might be included?

Thursday, September 5, 2013

How much time should be spent on coding?

Probably any scientist who went through college starting sometime in the early 2000s knows how to program to some degree. They may have learned how to write computer code to do data analysis, to implement simulations, or simply for fun. In my own case, I had been interested in computer programming since middle school when I first taught myself Visual Basic and HTML (I know, I know, HTML is not a programming language, but it did get me interested in programming).

Now, as a researcher, I often have to temper my enthusiasm for writing programs. It's true that creating scripts to automate data analysis and processing will save me a lot of time in the long run, but I also think that tweaking my code too much or trying to engineer it to do too many things may be counter-productive. After all, code is a means to an end, and my goal is not to write as many programs as possible.

So what's the right amount of time that should be invested in writing code to help with usual laboratory tasks? This is a tough but important question that scientists should ask themselves, since putting in just the right amount of time should maximize their productivity.

And beyond maximizing productivity, scientists should also dedicate time to writing code that makes it easier to document and reproduce what they did. For example, I have recently written scripts to take the output of a curve fit routine and write a report that includes the fitting bounds, initial guesses, fit functions, and other data that is relevant to reproducing a fit. Hopefully, a future grad student can read my automatically-generated reports and figure out exactly what was done.
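As an illustration, a minimal version of such a report writer might look like the following sketch. The function name and report fields here are my own assumptions, not the actual scripts described above:

```python
import json

def fit_report(fit_func_name, bounds, initial_guess, params, outfile=None):
    """Collect everything needed to reproduce a curve fit into one record.
    The field names are hypothetical, not a standard format."""
    record = {
        "fit_function": fit_func_name,
        "bounds": bounds,
        "initial_guess": initial_guess,
        "best_fit_params": params,
    }
    text = json.dumps(record, indent=2)
    if outfile is not None:
        # Optionally write the report next to the data it documents.
        with open(outfile, "w") as f:
            f.write(text)
    return text

report = fit_report("exponential_decay", bounds=[0.0, 10.0],
                    initial_guess=[1.0, 0.5], params=[0.93, 0.48])
```

The point is less the format than the habit: every fit leaves behind a self-describing record that a future grad student can read without me in the room.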

So in the end, enough time should be invested in coding when it significantly cuts down on the amount of time taken to do repetitive tasks and when it streamlines documentation and note taking, which may not happen otherwise.

Saturday, August 24, 2013

Onward to the future...

A week ago yesterday I successfully defended my dissertation, "Mesoscale light-matter interactions," which means that I am effectively finished with my graduate school career (woot woot!). I would offer my dissertation for viewing here on the web, but due to publication restrictions associated with UCF, I have to wait one year before disseminating the work publicly. This is too bad, but the decision was largely out of my control.

I'm excited to be moving on to a slightly new line of work. Later this year, I'll be moving to EPFL in Lausanne, Switzerland to work on STORM microscopes and problems in single molecule biophysics.

I'm particularly excited about this work because now I'll be able to apply my knowledge of optics to biology problems. Most of my PhD work was focused on designing new optics-based techniques and then looking for a problem to solve; I hope to become immersed enough in the biophysics community that I can identify unknowns in biology first and then design measurements to address these unknowns afterward. I think a problem-driven route is more appropriate for the design of optical sensing techniques, and I just can't wait to begin.

During my last year at graduate school, I've also begun several endeavors that I believe make me a much better researcher and contributor to science. Some of these new practices include:
  1. Following a paradigm that makes my research as open as possible. This includes making easily reproducible code and sharing data openly. (I'm exploring the use of Figshare and following @openscience on Twitter; these resources are great starting points.)
  2. Related to the first point, I'm writing a Python package for processing dynamic light scattering data and simulating experiments, which I intend to release as free and open-source software. I think that my expertise in this area could serve many others extremely well, since DLS is often treated like a "black box" technique by a lot of people who use it for macromolecular studies.
  3. Though my writing has waned since I began serious work on my defense, I want to begin writing again as a means of exploring more topics in research and academia.
  4. I'm structuring my own research around simple questions. I think simple questions, such as "How do cells respond to light?", form great, long-term questions in science. More complicated questions often lead to short-term research goals.
I'll expound on this last point in a later post. Overall, though, I'm beginning to see how I can contribute to my field beyond just publishing.

I won't get started in my new position until November or December. In the mean time, my wife and I are going on a long climbing and hiking trip out West, so you may have to be patient if you're waiting for posts between September and November.

And if you're still working on your PhD or Masters degree, keep up the hard work. It will pay off :)

Friday, August 9, 2013

Making presentation slides flow

Today I gave the first practice talk for my defense presentation to my research group. Prior to today I had been dragging my feet on it because making a presentation is fairly boring. It also requires a lot of work, so yesterday I put together quite a few rather dense slides without taking the time to ensure that the content on each slide had a certain "flow," that is, a logical spatial order to the information presented on it. The slides made sense in the order they came in, but the information on each individual slide was not well ordered with respect to the other text and figures on the same slide.

Of course, being the first practice, it was a bit rough. But one thing in particular struck me as insightful. I had attempted to tell a story for each slide based on the information that the slides contained. However, since the information was mixed up on any given slide, I often stumbled with the explanations because my attention would jump randomly from one region of the slide to the next.

In prior presentations I took the time to add animations such that graphs or illustrations would appear on a slide as I talked about them. This was a good thing since it automatically gave some nice order to each slide. More importantly, it kept my speech coherent because I wouldn't get confused about what point to talk about.

The downside to making slides like these is that it takes a lot more time. For example, if I have one plot with three curves on it and I want the curves to appear one at a time as I click the mouse, then in reality I have to make three separate plots!

In the end, though, I think it's worth it. It keeps me talking coherently, it makes the slides aesthetically pleasing, and it enforces a flow with which information is communicated to the audience.

Monday, August 5, 2013

Why we should just say no

Well, it's certainly been a while since I last wrote a post. I have a good reason for it, though. My defense is in less than two weeks and life has been crazy. I'm a bit sorry that I haven't had the time to write lately since it's a good stress outlet, but my mental energies have been absolutely and continuously drained by other tasks.

It's perhaps ironic then that I'm writing this post because it was my lack of time to think about writing that inspired me to, well, write.

Over the last few months I've learned the value of saying "no" to requests that people make of me. It was never really necessary before, and I was usually happy to oblige people who needed help with something.

These days, however, I must say "no" if I want to finish the things I need to graduate. And I've come to appreciate that saying "no" to things should apply to more people than just graduate students nearing the end of their work.

I think academics have a hard time limiting the number of projects and tasks they take on. As a result, they and their lab members may become overworked, and attention to detail slips. This often leads to sloppy science, such as not checking hypotheses and assumptions, drawing conclusions from poorly measured data, etc. At the extreme, it might also be fatal to academic careers.

Unfortunately, I think sloppy science has become very common because, in part, people just take on too many things. I can think of a personal reason for why academics take on too much. I become excited at the start of a new project, but bored near the end, so I tend to start more than I can handle while letting others die off. I shouldn't do this, but I do.

Recognizing that this is an issue is the first step to fixing it. I am glad to see that other academics are slowly fighting back against the status quo and saying "no" to too many tasks. I realize it might be hard at times, but it is very necessary to stay happy and to do good science.

Thursday, July 11, 2013

Challenging dogmatic science

I am back from Europe after a brief visit to a couple institutions at which I am applying for a post-doc position. The trip was brief, taxing, and very informative. I'll try to write about the experience when I have the time. Overall, I'm glad to be back and set on finishing my dissertation. I only hope I've succeeded in obtaining a job from this trip!

On another note, I wanted to highlight a very nice paragraph that sums up a concern many have with some scientific fields. It comes from an article entitled "Thinking outside the simulation box" and appears in this month's issue of Nature Physics. Here it is:

One would have naively expected scientific activity to be open-minded to critical questioning of its architectural design, but the reality is that conservatism prevails within the modern academic setting. Orthodoxy with respect to mainstream scientific dogmas does not lead to extreme atrocities such as burning at the stake for heresy but it propagates other collective punishments, such as an unfair presentation of an innovative idea at conferences, bullying and drying up of resources for creative thinkers.
The author, Abraham Loeb, is arguing that too many cosmologists are concerned with building support for an existing paradigm rather than challenging it or building new ones. One reason, he thinks, is that it is very difficult to build a career as a junior scientist by challenging beliefs that are held true by members of academic job selection committees.

However, for science to remain healthy, scientists must challenge the assumptions and beliefs of their paradigm. It is too bad, in my mind, that the architecture of an academic career is set up to encourage scientists to avoid this line of thinking.

Note: I also very much liked the following excerpt.
Conceptual work is often undervalued in the minds of those who work on the details. A common misconception is that the truth will inevitably be revealed by working out the particulars. But this highlights the biggest blunder in the history of science: that the accumulation of details can be accommodated in any prevailing paradigm by tweaking and complicating the model. A classic example is Ptolemaic cosmology — a theory of epicycles for the motion of the Sun and planets around the Earth — which survived empirical scrutiny for longer than it deserved.

Thursday, June 20, 2013

Signalling in intrinsically disordered proteins

I take a small interest in biology and biophysics because of their complexity and the large number of unsolved but important problems. Lately I've noticed an increase in the number of popular articles on intrinsically disordered proteins (IDPs). These proteins are shaking up conventional wisdom on how proteins work and on the importance of their structures.

This interesting News & Views article in Nature summarizes some recent work on disordered proteins and how they respond to activators and inhibitors. While I don't understand much of the jargon in the article, the overall message is exciting. I found the following excerpts of interest:

The observation of striking differences in the crystal structures of haemoglobin in the presence and absence of oxygen seemed to validate the idea that allostery [the link is my own] can be rationalized, and possibly even quantitatively accounted for, by examining the structural distortions that connect the different oxygen-binding sites... This structural view of allostery has largely guided the field ever since. However, the realization that more than 30% of the proteome — the complete set of proteins found in a cell — consists of intrinsically disordered proteins (IDPs), and that intrinsic disorder is hyper-abundant in allosteric signalling proteins such as transcription factors, raises the possibility that a well-defined structure is neither necessary nor sufficient for signal transmission.
The take-home message of Ferreon and colleagues' work, and the reason that a switch is possible, is that proteins should not be thought of as multiple copies of identical structures that respond uniformly to a signal. Instead, proteins — especially IDPs — exist as ensembles of sometimes radically different structural states. This structural heterogeneity can produce ensembles that are functionally 'pluripotent', a property that endows IDPs with a unique repertoire of regulatory strategies.

I absolutely love that IDPs are currently rewriting the dogmas of much of molecular biology.

Wednesday, June 19, 2013

Fourier transforms are not good for analyzing nonstationary signals

I'm currently thumbing through parts of "Image Processing and Data Analysis: The Multiscale Approach." In Chapter 1, I found this enlightening comment on the Fourier transform:
The Fourier transform is well suited only to the study of stationary signals where all frequencies have an infinite coherence time, or – otherwise expressed – the signal’s statistical properties do not change over time. Fourier analysis is based on global information which is not adequate for the study of compact or local patterns.
You can find a free pdf of this book here:
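A quick numerical sketch of the quote's point, using NumPy (the signal parameters are arbitrary choices of mine): a signal that plays a 5 Hz tone and then a 20 Hz tone in sequence has essentially the same magnitude spectrum as one that plays both tones simultaneously, so the Fourier transform alone can't tell the two apart.

```python
import numpy as np

fs = 100.0                       # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)      # 2-second record

# Nonstationary: 5 Hz during the first second, 20 Hz during the second.
nonstationary = np.where(t < 1, np.sin(2 * np.pi * 5 * t),
                                np.sin(2 * np.pi * 20 * t))
# Stationary: both tones present for the entire record.
stationary = 0.5 * (np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 20 * t))

freqs = np.fft.rfftfreq(len(t), 1 / fs)

def dominant_freqs(signal):
    """Return the two frequencies with the largest spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    return set(freqs[np.argsort(spectrum)[-2:]])

# Both spectra peak at 5 Hz and 20 Hz; the transform says nothing
# about WHEN each frequency was present.
```

Time-frequency tools like the short-time Fourier transform or the wavelets of the book's multiscale approach are what recover that lost temporal localization.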

Monday, June 10, 2013

Understanding the static structure factor

The structure factor \(S(q)\) is an important quantity for characterizing disordered systems of particles, like colloids. Its significance comes from the fact that it can be directly measured in a light scattering experiment and is related to other quantities that characterize a system's microscopic arrangement and inter-particle interactions. However, it's difficult to learn about in the context of disorder because it's primarily used in crystallography, and crystals are far from disordered.

In this post, I'll explore the nature of the static structure factor, which is something of an average structure factor over many microscopic configurations of a material. A dynamic structure factor describes the statistics of a material in time as well as space.

The structure factor of a disordered material can be measured by illuminating the material with a beam of some type of radiation (usually X-rays, neutrons, or light). The choice of radiation depends on the material. It is also important that the material not scatter the incident beam too strongly, because the structure factor is typically found in the singly-scattered radiation (see the first Born approximation for a discussion of a related concept). If the material is multiply scattering, the structural information carried by the singly scattered light is washed out.

In a measurement, the sample is usually placed at the center of rotation of a long rotating arm, with a detector for the radiation at the arm's opposite end. The arm is rotated about this axis, and the intensity of the scattered radiation at the detector is recorded as a function of angle. This data set essentially contains the structure factor, but it must still be transformed and corrected appropriately.

First, the structure factor is usefully represented as a function of the scattering wave number, \(q\), and not as a function of the scattering angle. In optics, \(q\) is usually given by
\[ q = \frac{4 \pi n}{\lambda} \sin( \theta / 2) \]

where \(n\) is the refractive index of the background material (usually a solvent like water) and \(\lambda\) is the wavelength of the light. \(\theta\) is the scattering angle.
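As a quick sketch of this conversion (the numbers below are illustrative assumptions: water as the solvent and a 532 nm laser):

```python
import math

def scattering_wavenumber(theta, n=1.33, wavelength=532e-9):
    """q = (4 * pi * n / lambda) * sin(theta / 2),
    with theta in radians and wavelength in meters."""
    return 4 * math.pi * n / wavelength * math.sin(theta / 2)

# q grows monotonically from 0 (forward scattering) to its maximum
# at theta = pi (backscattering), where q = 4 * pi * n / lambda.
q_forward = scattering_wavenumber(0.0)     # no momentum transfer
q_back = scattering_wavenumber(math.pi)    # roughly 3e7 m^-1 here
```

So scanning the arm over angle sweeps \(q\) over roughly two decades, which sets the range of length scales the experiment can probe.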

Additionally, the structure factor must be corrected for a large number of confounding factors, such as scattering from the sample cell and the frequency-dependent response of the detector. A classic paper that details all these corrections to find \(S(q)\) in a neutron scattering experiment is given here.

Once the structure factor is found in an experiment, it may be Fourier transformed numerically to give the radial distribution function, \(g(r)\) (see Ziman for the proper conditions for which this applies) of particles. This function gives the probability of finding a particle at a radial distance from another particle in the system. Many important thermodynamic properties are related to \(g(r)\). Importantly, the pair-wise interaction potential between any two particles is related to \(g(r)\), and the pair-wise interaction determines many macroscopic system properties.
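For a homogeneous, isotropic system with particle number density \(\rho\), this Fourier relation between \(S(q)\) and \(g(r)\) is usually written (I'm quoting the standard liquid-state form; see Ziman for the precise conventions) as
\[ g(r) = 1 + \frac{1}{2 \pi^2 \rho r} \int_0^\infty q \left[ S(q) - 1 \right] \sin(qr) \, dq. \]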

The structure factor as \(q\) (or, equivalently, the scattering angle) goes to zero is also an important quantity in itself. \(S(0)\) is set by the macroscopic density fluctuations of particles in the medium (see Ziman, Section 4.4, p. 130). But density fluctuations can be calculated from thermodynamics and lead to the isothermal compressibility of a material.
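This thermodynamic connection is the compressibility equation of standard liquid-state theory (with \(\rho\) the particle number density and \(\kappa_T\) the isothermal compressibility):
\[ S(0) = \rho k_B T \kappa_T. \]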

In the language of optics, which I'll stick to for the rest of this post, the density fluctuations would correspond to large regions of refractive index variations across the sample.

This leads to an interesting problem, the resolution of which reminds me of the fallibility of taking some models too literally: for a homogeneous, non-scattering optical material, like a very nice piece of glass, the magnitude of the density fluctuations in the refractive index is essentially zero (this is true because the disorder in a glass is at a length scale much smaller than the wavelength of light). This means that \(S(0) = 0\). At the same time, the scattered intensity in the type of experiment described above is directly proportional to the structure factor:
\[I(q) \sim S(q).\]
So, if I illuminate a nice piece of glass with a laser beam, and I know that \(S(0)\) is equal to zero, the above expression means that there should be no scattered intensity in the forward direction. But this is a silly conclusion, because when I do this experiment in the lab I see the laser beam shining straight through the glass! In other words, \(I(0)\) is not zero.

The problem is that this expression is for the scattered intensity. In random media, we often talk about the scattered light and the ballistic light. The latter of these two is not scattered but directly transmitted through the material as if the material were not there. So, even though no light is scattered into the forward direction, there is still the ballistic, unscattered beam, that is passing straight through the sample.

Most small angle light scattering experiments measure as close as they can to \(q=0\) and extrapolate to the structure factor's limiting value; \(S(0)\) itself can't actually be measured. But its determination is important for materials with significant long-range order, such as those near a phase transition, because small angles correspond to large distances due to the inverse Fourier relationship.

One can also engineer a material to not transmit any light into the forward direction. To do this, \(S(q)\) must be zero AND there must be no ballistic light passing through the material. This can be achieved with a crystal that diffracts all the light into directions other than the forward direction, such as a blazed grating.

On a final note, the structure factor can sometimes be related to important material properties beyond the radial distribution function. Ziman says in section 4.1, pg. 126 that the direct correlation function (which measures interactions between pairs of particles) can be derived directly from the structure factor. This correlation function is related to the Percus-Yevick model for liquids.

Wednesday, June 5, 2013

Logical indexing with Numpy

I just discovered one nuance of Python's Numpy package which, given my Matlab background, is a bit unintuitive.

Suppose I have an array of numeric data and I would like to filter out only elements whose values lie between, say, 10 and 60, inclusive. In Matlab, I would do this:
filterData = data(data >= 10 & data <=60)

However, logical operations with Numpy sometimes require special functions. The equivalent expression in Python is
filterData = data[np.logical_and(data >= 10, data <= 60)]

np.logical_and() computes the element-wise truth values of (data >= 10) AND (data <= 60), allowing me to Boolean index (a form of what Pythonistas call fancy indexing) into data.
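For completeness, the same mask can be spelled with NumPy's overloaded & operator, which is arguably more idiomatic. A short sketch (the sample data is my own):

```python
import numpy as np

data = np.array([5, 10, 42, 60, 99])

# As in the post, with np.logical_and:
a = data[np.logical_and(data >= 10, data <= 60)]

# Equivalent, using the element-wise & operator. The parentheses are
# mandatory: & binds more tightly than the comparison operators.
b = data[(data >= 10) & (data <= 60)]
```

Note that Python's plain "and" raises a ValueError on arrays because their truth value is ambiguous, which is why NumPy overloads the bitwise & operator instead.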

Sunday, June 2, 2013

IguanaTeX puts TeX equations into PowerPoint

As OSA's CLEO conference draws closer, I find myself drawn back into the deep, dark abyss of PowerPoint engineering for the design of my talk.

One constant disappointment I've had with PowerPoint is the lack of a good equation editor since 2007. It was in this year, I believe, that Microsoft chose to dump almost all of the useful features of its native equation editor. Most appalling was that they removed the option to change the color of equations. This is absolutely crucial if you're working with dark backgrounds to enhance the contrast of your text, but for some reason this disappeared.

To replace those lost features, a new piece of software called Math Type came about from Design Science. If you've seen the movie Looper, then I would equate the rise of Math Type with that of the Rainmaker, the mysterious, telepathic character who arose out of nowhere and is intent on killing off people he doesn't like and making everyone live by his rules.

What I don't like about Math Type is that they charge what I think is too much for incremental improvements to their software. For example, I bought a license for Math Type 6.6 before Office 2010 came out, but to get easy integration with 2010, I need to spend about $40 just to get a new version. There is a way to get 6.6 to work, but they make it very difficult to find this information. I also don't like that I've seen Design Science employees trolling PowerPoint forums telling people to buy Math Type to solve some issues and that it's very difficult to find older versions of their software on their website.

Now, to be fair, the software is quite nice and it does make it easy to type equations into PowerPoint and other Office software products. I just don't like how all the functionality that I enjoyed as a part of Office for free was moved into a separate, commercial product. And since I am required to use PowerPoint and Word at work, I'm stuck with this situation.

So you can imagine how happy I was to have found a wonderful alternative to Math Type called IguanaTeX. It's a plugin for PowerPoint 2003, 2007, and 2010 that lets you add LaTeX equations to slides as png files. You need to have MikTeX installed, and it's a bit finicky getting the right packages at first, but once it's set up it works like a charm. AND you don't have to pay just to change the colors of your equations.

Check it out, especially if you know TeX.

Saturday, June 1, 2013

My latest LabVIEW code now up on GitHub

If anyone is curious to see my implementation of my LabVIEW code for the project I've been mentioning, I've now placed it in this GitHub repository: git:// There is a design document called program_design.html that should give you a good idea about what I was trying to do.

Now, it's not complete. I've only just managed to get it working. I need to clean it up A LOT, by creating more SubVI's and being more consistent with when I use global variables and when I pass data from the front panel. But if you have any ideas or comments, send them my way. I'd be happy to receive feedback from any LabVIEW pros out there.

---Update: November 6, 2013---

The code is no longer on GitHub, but you may e-mail me if you need it, as some of you have already done.

Friday, May 31, 2013

String/number conversions in LabVIEW

My LabVIEW program is mostly working after a long week. All in all, the design and implementation of the program went well. However, the implementation is very rough, and I know that my LabVIEW programming skills can be improved a lot. In particular, I need to decide when global variables are necessary and when they're not, make better use of error handling and other forms of feedback for moving through sequences (beyond just using the Wait function for a long time), and make better use of subVI's.

One annoying aspect of LabVIEW that I've gotten to know through all of this is its string/number conversions and data types. One 12-hour measurement was wasted because I thought "Convert to Decimal Number" meant that I could take a string such as "42.5" and convert it to a number with decimal places, like 42.5. Unfortunately, inputting "42.5" into this function produces 42 at the output. A quick workaround is to use "Convert to Exponential Number," but I think I'll want to use "Format Value" and supply a format string to ensure that I'm converting correctly.
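For comparison, Python makes the same distinction explicit, which is part of why the LabVIEW behavior surprised me: a "decimal" conversion can mean an integer parse, not a floating-point one. A small sketch:

```python
# int() parses decimal *integers* only; it refuses "42.5" outright...
try:
    int("42.5")
except ValueError:
    pass  # invalid literal for int() with base 10

# ...while float() is the conversion that keeps the decimal places.
value = float("42.5")    # 42.5
truncated = int(value)   # 42, the same result LabVIEW's function gave me
```

The lesson generalizes: whenever a conversion function's name is ambiguous, test it on a value with a fractional part before trusting it with a 12-hour measurement.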

Here is a list of string/number conversion functions and their descriptions. I will be using this list, as well as the list of numeric data types, more frequently since I find them unintuitive.

Wednesday, May 29, 2013

There's now an academic Stack Exchange

This morning I was pleased to come across Academia Stack Exchange. Stack Exchange is a network of web forums for collecting crowd-sourced answers to questions. I believe that it originally came from Stack Overflow, a Q&A web forum for programming, and has expanded since.

I hope that such a site will provide academics answers to questions that they just can't find in other places. Academia can be frustrating because there's a large disconnect between people at different levels: post-docs don't understand graduate students, graduate students don't understand faculty, and faculty are just plain space cadets... just kidding about that last one :)

In all seriousness though, I'm glad to see such a support site and I hope to make good use of it.

Monday, May 27, 2013

More on LabVIEW, this time with RS232

My LabVIEW adventures continued today (yes, being close to graduation means you work on US national holidays). I began designing and implementing the pieces of my LabVIEW program for controlling the experimental setup that I discussed briefly in my last post.

After getting all the hardware pieces independently working with LabVIEW example VI's, I started working on some custom VI's that did exactly what I needed each piece of hardware to do. My idea is that I will use each custom subVI in a large control script. This control script should contain the commands to pass to each subVI and the order in which they'll be executed. All the data from the SR400 Photon Counter should be saved at each step, along with the current state of all the hardware pieces.

I started working on controlling the laser I have connected to an RS232 port. This should have been simple, but my first bit of code simply didn't do anything. If I told the laser to turn off, it wouldn't. To mock me, the example "Basic Serial Write and" that's included with the LabVIEW base DID work.

After an hour or two of playing around, I discovered that the strings I was passing to the laser in my custom VI did not actually contain the line feed ASCII character "\n" that is required to mark the end of a command, even though I had typed it into the string. The strings were stored in a drop-down menu of common commands I frequently use. The line feed character was, however, included in the string I entered into the example VI and was actually sent to the laser, which is why it worked.

I ended up fixing the situation using the information on this NI community page. It's a bit strange that it requires this much work to place a line feed at the end of a string, but oh well, it works.
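The same pitfall is easy to guard against in code. Here is a minimal Python sketch, with a made-up command string, of a helper that guarantees the terminator is present before a command is written to the port:

```python
def terminated(cmd, terminator="\n"):
    """Append the line feed only if the command doesn't already end with it,
    so every string written to the instrument is a complete command."""
    return cmd if cmd.endswith(terminator) else cmd + terminator

# Commands pulled from a drop-down menu of common strings can then be
# passed through this helper right before the write:
cmd = terminated("LASER:OFF")   # "LASER:OFF\n"
```

For what it's worth, PyVISA solves the same problem by letting you set a resource's write_termination attribute once, after which every write is terminated automatically.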

I also ran into issues with the laser echoing my commands back to me when I read data from its buffer with a VISA Read VI. I didn't want the echoes; I wanted the information that each command should return. I fixed this by doing two VISA reads: the first consumes the echo, the second returns the actual reply. Apparently, LabVIEW's VISA Read only reads up to the carriage return/line feed and leaves the rest of the information in the read buffer.
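The double-read workaround can be sketched in Python against a fake read buffer. Everything here is hypothetical: read_line stands in for a VISA read, and the command string and reply are made up.

```python
from collections import deque

def read_line(buffer):
    """Stand-in for a VISA read: return characters up to and including
    the first line feed, leaving the rest in the buffer."""
    line = ""
    while buffer:
        ch = buffer.popleft()
        line += ch
        if ch == "\n":
            break
    return line

def query(buffer):
    """Read twice: the first read consumes the echoed command,
    the second returns the instrument's actual reply."""
    _echo = read_line(buffer)
    return read_line(buffer)

# The instrument echoes the command, then sends the real answer:
buffer = deque("POWER?\n1.25 mW\n")
reply = query(buffer)   # "1.25 mW\n"
```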

Though I ran into some bumps, I'm very grateful for the help I've been able to find on the NI support and community sites. Without them I probably would still be stuck.

Saturday, May 25, 2013

Learning how to operate GPIB buses with LabVIEW

I'm currently engaged in a project which requires me to coordinate four separate pieces of hardware (two lasers, a stepper motor, and a photon counter). As is the case for all good experimental scientists educated in the 2000's, my first thought was to turn to LabVIEW to make this happen.

However, I'm not terribly excited about this. I haven't actually programmed in LabVIEW since my REU at the University of Colorado, Boulder, during which I made a visible grating-based spectrometer for working with lasers in vacuum systems. Since then, I've somehow managed to perform ALL of my experiments without automation, writing down results in my lab notebook. I know that automated data acquisition can make my life easier, but my experiments have been relatively simple until now, and learning LabVIEW would have taken too much time.

Furthermore, I'm much more interested in using open-source software whenever I can (PyVISA is one such Python package which might help me). I don't ever want to be dependent upon possessing some company's latest toolbox or software version to do my research.

Despite all of this, I'm under a time crunch with my current project and LabVIEW is arguably the de facto standard for automation and measurement, so... here I go!

I first installed LabVIEW 2012 on a Windows 7 system. There really wasn't any problem here; the software took a while to install, but it wasn't a big deal.

Following this, I quickly discovered that I needed to install the DAQmx software to get nearly all of my lab's previous VI's working. This wasn't a big deal either (it's free if you're registered with National Instruments), but I wonder why something apparently so important to LabVIEW is left out of the installation. Some perusing on the NI community pages revealed that, starting with the 2012 version, NI changed which DAQmx components are included with the base LabVIEW install. I found an old repository for these now-"missing" VI's, but it was pretty confusing to navigate.

Now, onto the GPIB control. I found an old NI PCI-GPIB controller card in one of our lab computers, removed it, and inserted it into the computer that I'm currently using. Windows recognized the card but didn't know what to do with it, and LabVIEW didn't know it existed. Fortunately, I found and downloaded the NI-488.2 driver, version 3.1.1, from NI's website and installed it without a hitch. After a reboot, the card was recognized in NI's Measurement and Automation Explorer (MAX).

Now, I admit that getting the first device, a Stanford Research SR400 Photon Counter, up and running was a breeze. I ran a GPIB cable from the PCI card to the SR400, and another one from the SR400 to my motion controller (of the two lasers, one is controlled via USB and the other via RS232). Both GPIB devices were recognized in MAX. This is what my MAX window looked like after clicking "Scan for Instruments" while having the PCI-GPIB card "GPIB0" selected in the left-most itemized list:

In this picture I had already renamed the devices in MAX. I could tell which was which by turning one off and rescanning. Most of this information can be found on an NI community website.

One final note: after reading some discussions on the SR400, I found that this particular device is tricky to work with because it's so old. Fortunately, in one of these discussions I found custom VI's from Paul Henning for working with it, and they have worked like a charm.

So far, I'm quite happy with how easy it was to get to this point. This was facilitated by software that worked seamlessly at finding the hardware and by a strong user base for getting specific hardware to work. I still anticipate some difficulty with the actual programming and with getting the instruments to work in sync, but that can't be helped much.

Kudos to you, NI.

Wednesday, May 15, 2013

Metamaterials for heat

There's a really cool experiment described in a recent PRL and summarized here about creating a metamaterial for cloaking objects from heat flow.

What connections to metamaterials for light can be drawn? The transport of heat is governed by a diffusion equation, which is very different from the wave equation and Maxwell's equations for governing light transport.

However, the diffusion equation can also apply to light transport in disordered materials (see Ishimaru or van Rossum and Nieuwenhuizen). Is there some way, then, to cloak objects from light in randomly scattering media?

The trouble with this thought is that one would have to add structure to a random material, and the only way I can think of doing this would be to create large-scale structures from a material with small-scale disorder.

Wednesday, May 8, 2013

Becoming a scientist

I'm not sure whether you can read this without a subscription to Nature Jobs, but if you can, please do.

This is an essay written by a post-doctoral fellow in neuroscience named Thomas Schofield. He was tragically killed in a bus accident in 2010.

This brief essay on becoming a scientist is very insightful and may help others to shed some misconceptions about a career in science.

Wednesday, May 1, 2013

Cell arrays for collecting a variable number of output arguments in Matlab

I learned some new tricks in Matlab today for dealing with functions that return a variable number of output arguments. I honestly think that there's something less-than-ideal about Matlab if I need to employ tricks to get my code working, but that's an entirely different topic...

If I have a function that returns multiple output arguments, I would normally capture them explicitly in a comma-separated list like this:
[output1, output2, output3] = someFunction(input1, input2);
This means that someFunction returns three arguments and stores them in output1, output2, and output3.

However, I recently wrote some code in which I could not say a priori how many output arguments I was going to have. The number of outputs depended on one of my input arguments, which happened to be a function handle.

To explain what I learned, consider a function of one input called myFunction. The input argument is a function handle @myHandle. The number of output arguments that myFunction returns is the same as the function linked to @myHandle and can change depending on my implementation of @myHandle. The following code first finds the number of outputs of @myHandle, then saves the outputs from myFunction into a cell array:

numArgsOut = nargout(@myHandle);
[myOutputs{1:numArgsOut}] = myFunction(@myHandle);
Tricky, but effective. This post on Stack Overflow helped me.

Today I also learned that I can insert a tilde into a comma-separated list of output variables if I don't want to save a particular variable. For example, if I don't care about output2 from someFunction above, but I do want to keep output1 and output3, I can type (in Matlab 2009 and later)
[output1, ~, output3] = someFunction(input1, input2);
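Python has a similar idiom, for comparison: by convention, an underscore marks a throwaway name when unpacking multiple return values (some_function below is a made-up stand-in, not Matlab's):

```python
def some_function(a, b):
    """Hypothetical function returning three values, like the Matlab example."""
    return a + b, a - b, a * b

# Keep the first and third outputs; discard the second with _ (the
# Python analogue of Matlab's tilde in an output list).
output1, _, output3 = some_function(5, 2)
# output1 == 7, output3 == 10
```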


Monday, April 29, 2013

Dr. McKenzie defines emergence

Dr. Ross McKenzie has provided a definition of emergence on his blog, Condensed Concepts. This definition nicely summarizes and simplifies emergence by laying out the properties that describe emergent phenomena.

I've always struggled with defining emergence because it seems that a definition by example is the best way to do it, but this is often difficult to do simply. Trying to define emergence in one sentence, rather than the list that Dr. McKenzie provided, seems nearly impossible. There's always some form of discussion that's required.

What other kinds of models or theories require expanded definitions involving enumerations of their qualities?

Sunday, April 28, 2013

The rules of the scientific game

While listening to NPR recently I encountered the book "The Structure of Scientific Revolutions" by Thomas S. Kuhn. This book, first published in 1962, is an explanation of how normal science emerges from times of discord between scientists (the so-called revolutionary periods). It also places the practice of science on a firm sociological foundation, addressing how scientists think and what drives them to pursue their art.

Kuhn's influential work is credited with popularizing the word paradigm in science. A paradigm is a set of beliefs that provides a basis for scientists to do their work. For example, in optics, I always assume the validity of Maxwell's equations when exploring more esoteric optical phenomena. A paradigm is not just a scientific theory, though. It is a mode of thought that dictates what theories, experiments, methodologies, and instruments constitute valid science.

A particularly enlightening section of the book is Chapter 4, which explains why scientists do their work. Kuhn likens science to puzzle-solving, an analogy often used to attract students to scientific fields. A scientific puzzle consists in bringing the observations and theories of some class of phenomena into coherence. It can also consist in extending a theory to a broader range of phenomena, like the particle physicist's search for a Theory of Everything.

What's necessary for this puzzle-solving to advance, Kuhn argues, is a set of rules for ensuring that a solution to a scientific problem can be found. A paradigm thus becomes a useful tool for filtering out which classes of observations can even be addressed with the theory. If observations cannot be adequately accounted for, then a theory must either be modified or thrown out altogether.

These ideas are important for any scientist wishing to make his or her own scientific revolution. Moving into uncharted waters is very difficult because one must first upend the existing paradigm, which most likely exists for good reason. But the most difficult part is addressing those problems which lie outside the scope of the paradigm itself. In some sense, a scientist addressing such observations needs to start from scratch. I credit anyone explicitly working in these areas of science, because they are likely to be shunned by their peers while confronting problems never before rationalized.

I highly recommend this book to anyone working in science. Its discussions remove us from the constraints of our own tools for understanding scientific problems and lay those tools bare before us, without the hubris that sometimes accompanies rationalizing our own mode of thought.

Friday, April 26, 2013

CDMS and big science reports

This week's issue of Science contains an article entitled "Dark-Matter Mystery Nears Its Moment of Truth." The article details the latest findings by the CDMS experiment that is searching for weakly interacting massive particles (WIMPs). WIMPs are candidates for dark matter because they do not interact strongly with other matter and may fill large regions of "empty" space, effectively providing the additional gravitational energy that is missing from observations of the cosmos.

I very much liked the press-release on the CDMS website. It is incredibly clear and honest. Possibilities that would negate their findings are mentioned and the statistics are presented as-is with little attempt to make them sound better than they are.

I wonder if big science is immune to the hubris and bias often found in publications with fewer than about ten authors. Is there something that makes science more objective as more people are involved? Of course, there are many difficulties with managing these large scientific endeavors, but we shouldn't discount the possibility that involving so many people makes results more objective. This is an argument in favor of big science.

Thursday, April 25, 2013

DNA and its current state

There's a really good review article in the Guardian today about DNA and how we currently understand its operation and importance.

I was surprised to learn that scientists are able to encode writings and even videos onto DNA and decode them later. The information density is huge compared to electronic memory, but the read/write abilities are very slow.

I'm also struck by the fact that nature has provided such a versatile molecule and that humans are now able to capitalize on it.

Monday, April 15, 2013

Evaluating multiple expressions in Scheme with (begin ...)

I just learned about begin in Scheme. Essentially, this special form evaluates a number of expressions in order and returns the value of the last one.

I found it particularly useful in if conditionals where I wanted to evaluate multiple expressions in the true or false clauses. For example, my implementation of for-each in MIT's Structure and Interpretation of Computer Programs Problem 2.23 goes as follows:
(define (for-each proc items)
  (if (not (null? items))
      (begin (proc (car items))
             (for-each proc (cdr items)))))
Here, if the list items is not empty, I apply the procedure proc to the first element of items and then call for-each again on the remaining elements.

I learned about the begin form from this Stack Overflow discussion.
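For comparison, here is the same structure in a rough Python translation of my own (not from the book), where an indented block of statements plays the role of begin:

```python
def for_each(proc, items):
    """Apply proc to each element of items recursively, mirroring the Scheme."""
    if items:                        # Scheme: (if (not (null? items)) ...)
        proc(items[0])               # first expression in the begin block
        for_each(proc, items[1:])    # recurse on the rest of the list

seen = []
for_each(seen.append, [1, 2, 3])   # seen becomes [1, 2, 3]
```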

Thursday, April 11, 2013

A beautiful experiment on a nonequilibrium thermodynamic system

I just finished reading an impressive article from 2003 entitled "Observing Brownian motion in vibration-fluidized granular matter". In the article's beginning, the authors established a simple question: can linear response theory describe a nonequilibrium thermodynamic system? This question is important because systems that are not in thermodynamic equilibrium are difficult to analyze, yet they serve as appropriate models for most natural phenomena. Very powerful mathematical tools exist for systems in equilibrium, however, so it would be very convenient if that formalism could be extended to nonequilibrium cases.

In particular, the authors explore whether a torsion oscillator driven by a "heat bath" of randomly vibrating glass beads can be described by the fluctuation-dissipation theorem (FDT). The FDT is arguably the hallmark of linear response theory and describes the return to equilibrium of a many-body system subjected to a small perturbation. (A small perturbation means that the response is linearly proportional to the perturbation.)

A canonical example where the FDT finds use is in describing the motion of ions in a fluid between two plates of a capacitor after a voltage difference has been applied to the plates. Prior to the voltage being applied, the motion of the ions is erratic and Brownian. A long time after the constant voltage is applied, they move with an average velocity that is proportional to the electric field between the plates, the proportionality constant being called the mobility. The FDT describes the very short times immediately after the field is applied. It also links the noise (the random movement of the ions in equilibrium) to the mobility of the ions.
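For this ion example, the noise-mobility link provided by the FDT is the Einstein relation, which I'll state for concreteness (a standard result, not taken from the article):

```latex
\frac{D}{\mu} = \frac{k_B T}{q}
```

where D is the ions' diffusion coefficient (set by their equilibrium Brownian noise), \mu is the mobility defined above, q is the ion charge, and k_B T is the thermal energy. Replacing T in relations like this one is exactly the kind of "effective temperature" modification discussed below.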

Returning to the article, the authors find that the motion of the oscillator is described by the FDT so long as an "effective" temperature is adopted. Effective temperatures are very appealing as analytical tools for describing nonequilibrium systems because they are very, very simple modifications to the FDT. Simply replace T with T_eff and you're done.

As Cugliandolo points out, to be a good thermodynamic descriptor, an effective temperature should be measurable by a thermometer. I'm not sure what the thermometer is in this system, but I suspect that it's the torsion oscillator itself. Furthermore, she stresses that not all nonequilibrium phenomena are describable by effective temperatures. It seems that one requires coupling between fast processes and slower observables, among other requirements. The beauty of the Nature article is that the authors not only confirmed this point (which seems to currently be an area of contention), but did so convincingly by measuring the relevant quantities directly and under a number of different conditions.

I'm not sure whether the effective temperature is a universal property of nonequilibrium systems; I'm inclined to say it is not. Hopefully more experiments like this one will be done that may further elucidate the current maze of theoretical papers on the topic.

Wednesday, April 10, 2013

Another look at the Schroedinger equation

I just read this article on PhysOrg about a paper recently published on the origins of the Schroedinger equation. One interesting thing I learned is that, in the classical wave equation for matter waves, the phase of the wave determines the amplitude. However, in Schroedinger's equation, the amplitude and phase of the wave are coupled to one another.

The authors of the PNAS article demonstrate that this coupling leads to the linearity of the Schroedinger equation, which is one of its most important properties. If it were not linear, I'm not sure that the mathematics would have turned out so relatively simple in quantum mechanics; i.e. it may not have been formulated in terms of linear algebra.

Unfortunately, I think the PhysOrg article was a bit misleading. They repeatedly referred to the classical wave equation when speaking of the Hamilton-Jacobi equation. To my knowledge, the classical wave equation and the HJ equation describe different things. More importantly, the classical wave equation is linear.

Is it better to be absolutely truthful in popular science articles or to minimize the amount of jargon and smooth over some minute but important points?

Saturday, April 6, 2013

Order of evaluation for nested lambda expressions in Scheme

I have been working through MIT's Structure and Interpretation of Computer Programs. I'm interested in learning computer science and have found the book to be approachable, though a bit heavy on the theoretical side of the science.

Exercise 2.6 deals with the Church numerals, i.e. natural numbers that are defined in terms of the lambda calculus. I have struggled a bit to understand these concepts, but the struggle ended when I made the following realization.

Consider the definition of 'one' in Scheme's notation for the lambda calculus:
(define one (lambda (f) (lambda (x) (f x))))
Here, f is any function and x is any variable. I was initially confused by the order of operations in this expression. For example, if I had a function called square
(define square (lambda (x) (* x x)))
then I didn't understand how I could use the definition of one to apply the function square to x. The two approaches I tried were
((one 2) square)
((one square) 2)
As it turns out, the second line of code returns the number 4, whereas the first results in an error: (one 2) does return a function, namely (lambda (x) (2 x)), but applying that function to square tries to apply the number 2 as a procedure. The reason I was confused is that, in the definition of one, the expression (lambda (f) ...) occurs in the outermost set of nested parentheses, whereas when calling one, the function substituted for the formal parameter f appears in the innermost set of parentheses.
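The same definitions translate directly into Python lambdas, which can make the evaluation order easier to experiment with (square is the same squaring helper as above):

```python
# Church numeral one: apply f exactly once to x
one = lambda f: lambda x: f(x)
square = lambda x: x * x

result = one(square)(2)   # the function goes in first, the argument second
# result == 4

# one(2) also returns a function (lambda x: 2(x)); only when that function
# is finally applied does the non-callable 2 cause an error.
```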

Saturday, March 30, 2013

Post-docs and individual exploitation

There's a wonderful interview in this week's Science Careers with Ed Lazowska, a computer scientist and policy expert at the University of Washington. The interview is about careers in computer science, especially at the PhD level. I encourage you to read it if you are in, or considering entering, any STEM field. I think it foretells a very important trend in both science and career fields.

One particular response that stood out in the interview addressed a question concerning the number of individuals presently earning PhD's in computer science. The end of his reply was:
I do think we need to be cautious. We need to avoid the overproduction—and, honestly, exploitation—that characterizes other fields. Hopefully we'll be smart enough to learn from their behavior.

What's interesting is that Lazowska has identified the overproduction of PhD's in some STEM fields with exploitation. I believe he's claiming that other fields use the competition for limited faculty and industrial positions to obtain cheap labor, such as in the form of the post-doctoral position.

In other words, the culture of a particular field may dictate that one must perform multiple post-docs as the way to get a faculty position. However, this is just exploitation in disguise: promise someone a faculty position, but only if they work for you and for lousy pay and benefits.

If you are a PhD student, one important thing is to recognize this attitude early. I'm not saying that you shouldn't try for a faculty position if you really want one, but realize that the motives behind the institution known as the post-doc may well be more than just helping you gain experience.

I for one am currently applying to post-doc positions, since they are a good fit for me. Ultimately, it comes down to critically thinking about the best position for yourself and what would make you happiest.

Wednesday, March 27, 2013

1000 scientists determine genes linked to common cancers

A rather neat and important example of the shift towards big science driven by data has been reported in this Guardian article. It explains how a recent large-scale study in the UK linked faults in the DNA of thousands of individuals to an increased likelihood of developing prostate, breast, and ovarian cancers, some of the most prevalent and dangerous forms of cancer.

I suspect that the most important problems that mankind faces will be most effectively solved in this manner, combining the efforts of many individuals of differing expertise to mine large banks of data for meaningful correlations.

I also wonder if the Information Age will lead to new advances in how data is collected or generated. Most of what I read about data-centric science assumes that the data we need to solve a problem is already available somewhere in some database connected to the net. But the fundamental hypothesis of science is that our models must match observations, so making a number of observations, in my mind, should come before anything else.

I expect that how we perform our observations and measurements will change as the Information Age matures, not just how we process our data.

Tuesday, March 26, 2013

A satisfying definition of emergence (at least, for me)

Building a bit off of yesterday's blog topic concerning biology, mathematics, and complexity, I wanted to note a satisfying and simple explanation of emergent phenomena in P. W. Anderson's "Physics: The Opening to Complexity."

To paraphrase, emergent phenomena are not logical consequences of underlying physical laws. In other words, one can't deduce the behavior of, say, monarch butterflies from the laws of quantum electrodynamics. However, emergent behavior cannot contradict the physical laws upon which it is built.

Monday, March 25, 2013

Mathematics has a new friend: biology

In 2004 Joel E. Cohen wrote an article in PLoS Biology entitled "Mathematics Is Biology's Next Microscope, Only Better; Biology Is Mathematics' Next Physics, Only Better." This short article, which has been on my "To Read" list for a long time, is a brief history and assessment of the contributions that each field has and continues to make in the other.

As the title suggests, half of Cohen's claim is that the problems in modern biology will fuel the development of new mathematics. These mathematics should address the fundamental questions posed by biologists, such as "How does it work?" and "What is it for?" There are six such questions, and they are further divided according to the many, many orders of magnitude of space and time that are spanned by biological processes: from molecular biology to global ecosystems and from ultrafast photosynthesis to evolution. Scale-dependent systems and emergent phenomena are the primary themes in modern biological problems.

To illustrate his idea of mathematics, Cohen paints a picture of a tetrahedron with the topics of data structures, theories and models, algorithms, and computers and software located at the four vertices. Each topic affects the others and has certain strengths for addressing the problems of biology. No weaknesses in the current state of mathematics are mentioned per se, but the real weakness is likely that, for some problems, the appropriate mathematics simply doesn't yet exist.

For Cohen, mathematics is decidedly applied mathematics. I doubt he has much to say about topics with no direct relevance for biological applications.

The article is divided into past, present, and future. Cohen first goes into a brief review of the historical interplay between math and biology, starting with what I think is an excellent example: William Harvey's discovery of the circulation of the blood. Just enough background is given to appreciate how novel and unexpected this discovery was. Notably, empiricism, aided by calculations, was in its infancy during Harvey's time. Cohen then pays homage to several other co-developments of math and biology, some of which are nicely summarized in the article's Table 1.

For present matters, Cohen notes that issues of emergence and complexity should lead to great discoveries in mathematics. What is notable here is that emergence in biological systems at one particular level of organization is driven by events at both lower and higher levels. For example, both the genes of an organism and evolution determine many aspects of species. Cohen also provides an example of recent research that marries ideas from statistics, hierarchical clustering, and cancer cell biology. This example is a bit difficult to follow, but I think it is a good analogy of the interplay he is discussing. (To be fair, I was reading the article in an airplane flying through some turbulence, so it was difficult to give this section my full attention.)

The article finishes with a future outlook for his thesis and very briefly presents some ethical problems and opportunities for the continued correspondence between the two fields. I didn't find this section terribly insightful.

This article and those similar to it can't help but make me feel like the Age of Physics is near an end. The problems that occupy most practical people's minds today seem to be concerned with complexity. Physics, which is concerned with constructing models based on the simplest possible assumptions, is by its very nature a difficult tool for understanding phenomena that emerge from the entangled interactions of many heterogeneous parts. Biology just happens to be one field that can push forward our understanding of complex systems. Computation, information science, and neuroscience are other fields that will help further mathematics.

Physics will always be important, but the domain of natural phenomena in which it proves useful is shrinking as the Information Age comes into full swing.

Tuesday, March 19, 2013

Understanding the correlations between model parameters of speckle

Today I read "Structural correlations in Gaussian random wave fields," an old PRE by Freund and Shvartsman. The authors analytically demonstrated correlations between the amplitude and phase gradients in the random electromagnetic fields commonly known as speckle. Notably, while the amplitude and phase are not correlated at a given point, the amplitude is correlated with the gradient of the phase: higher amplitudes are usually found with smaller phase gradients, and vice versa.

What's not clear to me is if this treatment works for vector fields or only scalar fields. Notably, I'm not sure what phase means for a random vector field.

Perhaps the authors make the assumption that the components of the vector are independent and thus a scalar treatment is sufficient, but I'm not sure that this is so.

Wednesday, March 13, 2013

Computation and theory are now better defined

I've written several times in an attempt to find the reasoning that leads many authors in optics to present computations (a.k.a. simulations, or numerics) as arguments for the validity of an analytical model for a set of observations. Usually, the computations merely "print the results" of some theory, which I think necessarily eliminates their ability to independently confirm the theory. I have argued in the past that the computations usually add nothing new to a paper but merely fill it with trivial content. However, I know this isn't always the case, and I haven't found a good reason why some computations just don't add much to a paper.

This time I will try to tackle this problem by starting with a question: what is the difference between computation and theory? If I feel that presenting both as arguments in a paper is redundant, then finding the differences (or lack thereof) between them ought to highlight exactly why this is so.

Let's start by considering that computation is only as good as the assumptions that go into it. For example, many-body solvers may use a set of approximations about the microscopic entities in a system, such as assuming that molecules are hard spheres. For this reason, any phenomenon that requires the molecules to interact in a manner other than as hard spheres cannot be simulated with this assumption [1].

Assumptions like these, however, are also central to deriving many physical theories. If theory and computation are based on the same assumptions, then there's probably little difference between the two, but experience still tells us that there is some difference.

To probe deeper, consider the difference between function and procedure as described in Structure and Interpretation of Computer Programs:
The contrast between function and procedure is a reflection of the general distinction between describing properties of things and describing how to do things, or, as it is sometimes referred to, the distinction between declarative knowledge and imperative knowledge. In mathematics we are usually concerned with declarative (what is) descriptions, whereas in computer science we are usually concerned with imperative (how to) descriptions.
SICP, Section 1.1.7
Based on this quote, I think that a theory is a declaration about how some variables or parameters relate to one another [2]. It is no different from the idea of "function" described above. On the other hand, a computation typically takes some input parameters and produces an output, like reading a URL in a web browser and displaying the content of that webpage on the screen. A computation, in this sense, also relates different parameters to one another. Again, there is little difference between the two.
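SICP's own running example of this distinction is the square root: declaratively, the square root of x is the nonnegative y such that y squared equals x; imperatively, a procedure tells you how to find it. Here is a short Python sketch of the imperative side using Newton's method, the same how-to SICP uses:

```python
def sqrt_newton(x, tolerance=1e-10):
    """Imperative 'how-to': repeatedly improve a guess until it is good
    enough, rather than declaring what the square root *is*."""
    guess = 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2   # Newton's improvement step
    return guess

root = sqrt_newton(2.0)   # approximately 1.41421356...
```

The declarative definition says nothing about how to compute anything; the procedure says nothing about what a square root is. Both "relate the same parameters."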

However--and I think this is the key point--the means by which a computation obtains the outputs is through a series of steps, not through some statement or equation.

Let's recap so far. Computations and theories are very much alike. They relate parameters to one another and are many times based on the same assumptions. As arguments for an explanation for some set of observations they are both tools. The means by which they relate parameters, though, are different.

With this I think I've finally arrived at what bothers me about the use of computation in so many journal articles. The difference between theory and computation lies in the "means" or the "how-to." But as arguments they communicate the exact same "end," and are therefore redundant when presented together.

The choice between presenting an analytical theory or a simulation often comes down to which is simpler and produces results that are easier to understand. Additionally, some problems are simply better-suited to one approach or the other.

A colleague of mine suggested that both may be included in a journal article because some types of people better understand theory while others better understand a computation.

Finally, I admit that I started writing this thinking there was some deeply hidden distinction between theory and computation. The problem was in how I defined theory and computation; footnote 1, which I wrote after the bulk of this entry, defines them better. Computation and analytical theories are two ways of exploring the results of a model. In this case, models, computations, and analytic theories form a hierarchy, with models located above the other two. In other words, computations and analytical theories are both types of models.

[1] Admittedly, one could argue here that a computation is just one way to arrive at the results of a theory, which would therefore make computation a subset of theories. In this case, a better distinction would be drawn by contrasting analytical theories, which are those expressed by equations and derived from mathematical principles, and computational models. Both of these are subsets of just "models." When I use the word "theory" in this essay, I usually mean analytical theory.

[2] The question as to what is a variable or a parameter is also important, since not all the parameters in a theory are necessarily measurable. I think this point is subtle, yet completely obvious given some thought. For example, typically voltages and currents are measured in electromagnetism, not electric or magnetic fields. For the time being, I think that a parameter is some quantity that either a) is measurable, or b) is not measurable but aids in the construction of a theory.

Tuesday, March 12, 2013

Notes on "Biological Physics," Part II

I finished the review of "Biological Physics" today, which included the sections on bioenergetics, forces, and single-molecule experiments. I skipped the section on reaction theory because I am not familiar with the topic and it didn't interest me as much as the others.

There are two primary topics in bioenergetics at the biomolecular level: charge transport and light transduction. Charge transport refers to the process by which isolated charges travel amongst different sites in a complex molecule. This process is inherently quantum mechanical, since electrons and holes may tunnel between different sites in the molecule, depending on the molecule's conformation.

Light transduction refers to the conversion of energy in a photon to chemical or electronic energy. A paragraph is dedicated to human vision and the photo-induced isomerization that is central to its operation, but the rest of this sub-section is devoted to photosynthesis.

During photosynthesis, "antenna systems" in the chlorophyll molecules capture light energy, which is transferred to other parts of the plant cell along excited molecular states, much like in Förster resonance energy transfer. The transfer is so fast that the quantum mechanical coherence of the excited states likely plays a role. It seems that most of the work done up to the time the article was written had been performed by theorists.

The various forces in the cell are typically "effective" forces; models typically neglect the fundamental electromagnetic nature of the primary forces in the cell. At the protein level, enzymes may actually pull apart covalent bonds in "violent" events. It's also been hypothesized that mechanical vibrations in the form of solitons can propagate along the covalently-bonded protein backbone, but this is strongly debated.

The transmission of forces through a heterogeneous medium, like the cell membrane, is also a topic of study.

Finally, single-molecule studies are gaining prominence as experimental techniques become more refined, but "the challenge of studying individual protein molecules is still very much in its infancy... The key is to use extreme dilution so that only a single biomolecule is in the reaction volume."

Much single-molecule work has been done on DNA because it is simple and readily obtained. Spring-like forces in DNA are both enthalpic, which means they depend on the energy change due to deformation of electronic orbitals, and entropic, which means the DNA resists changing its shape due to interaction with its thermal environment.
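The entropic contribution can be made concrete with a standard polymer-physics result (this is textbook material, not from the review itself): modeling the DNA as a freely jointed chain of N segments of Kuhn length b, small extensions x obey a Hooke's law with a spring constant proportional to temperature,

```latex
% Freely jointed chain, small-extension limit (standard result, not from the review):
% N segments of Kuhn length b at temperature T, end-to-end extension x
F \approx \frac{3 k_B T}{N b^2}\, x
```

That the stiffness grows with temperature is the signature of an entropic force: straightening the chain reduces the number of configurations available to it.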

In the conclusion, the authors anticipate that problems relating to the brain lie ahead as major areas of work in biological physics.

It would seem that the experimental study of proteins remains a major challenge to biological physics, but also is perhaps the most worthwhile to pursue. Photosynthesis, the effects of a protein's environment on its folding and charge transport, disordered protein behavior, and the forces between parts of proteins are not very well-understood. If there are new discoveries to be made, then I think they lie in protein dynamics.

Monday, March 11, 2013

Manatees and cabbage palms

My fiancée K and I paid a visit yesterday to Blue Springs Park, a Florida state park which is just outside Orange City, Florida. Of course, the highlight of the park is the manatees, which swim up the spring to warm themselves during the winter months in water that is a near-constant 72 degrees F. The warm water comes up from the Floridan aquifer, a very large pool of underground water situated in porous rock below the state. One unfortunate result of the porosity of this rock is that sinkholes may develop rapidly and unexpectedly below buildings.

On this trip K showed me a type of tree known as the cabbage palm, whose scientific name is Sabal palmetto. This tree is very common in the central Florida area and has been very important for people living in Florida since the time of the Native Americans. Its leaves were interwoven to provide roofs for shelters and its trunks provided lumber to early settlers.

The cabbage palm lies very low to the ground for a long period of time in its youth while saving energy stores. At some particular time (I'm not sure when this occurs in its life cycle), it shoots up and grows rapidly to a very tall height. The reason for this behavior is that much of the central Florida ecosystem relies on fire for its maintenance. The cabbage palm is relatively resistant to fire in both its low-lying state and as a very tall palm tree. For intermediate heights when its upper trunk is exposed, however, it can be killed by the frequent fires in the area. For this reason, it must grow quickly or succumb to the flames.

This quick-growing behavior reminds me of the yagrumo tree that I encountered in Puerto Rico.

Thursday, March 7, 2013

Improve your writing: write to a newspaper

I submitted an editorial piece to the Orlando Sentinel recently in my continuing efforts to improve my writing and expose myself to other forms of publishing. The article, which was published yesterday, concerns the use of bicycles as a form of transportation near the UCF campus.

One thing I learned from this experience was that newspapers prefer much shorter paragraphs than scientific publishers. I originally had three or four paragraphs in the 400-word essay; the editor turned it into nine. I suppose having multiple paragraphs makes the article easier to read and allows the reader more opportunities to "abandon" the article once they've started reading.

I also learned that newspaper editors won't ask writers whether it's OK to perform edits beyond simply breaking a piece of writing into paragraphs. I was a bit disappointed that a few phrases were cut from my original article, since they argued for points that I felt were important. However, the overall message of the article is clearer in its published form because it is not obscured by too many arguments.

Overall, I'm pleased with the experience, and I'm contemplating how to move forward with increasing bike awareness in Orlando.

Friday, March 1, 2013

Considering the value of a PhD... now that I almost have one

I've lately been looking into people's reasons for getting PhDs, job placements and outlooks, income levels, etc. I'm doing this partly as a response to the question of "what have I done with my life these past five and a half years?" I'm also just curious what other people think on the topic.

Here is a nice discussion by a PhD holder and academic on his blog about getting a PhD in physics. His advice: get a PhD in physics because you want to be a graduate student for five or six years.

Well, this is advice I've never heard before.

Notes on "Biological Physics," Part I

There is an article from 1999 in Reviews of Modern Physics entitled "Biological Physics." This review summarizes research during the twentieth century where "physics has influenced biology and where investigations on biological systems have led to new physical insights." The exchange of ideas between the two fields has not been of equal magnitude, the authors note. Many tools from physics have found their way into the biological sciences, though some biological systems have led to new physics, usually in the form of providing experimental testbeds for new physical theories. The article is primarily concerned with molecular biological physics.

The seven primary sections of the review are
  1. The structures of biological systems
  2. Complexity in biology
  3. Dynamics, mostly within proteins
  4. Reaction theory, where biology has provided testbeds for new physical theories
  5. Bioenergetics
  6. Forces
  7. Single-molecule experiments.
Most of the interesting ideas I've found so far in the article are associated with the complexity and dynamics of biomolecules. Particularly, there is an idea known as the principle of minimal frustration. From the Wikipedia article,
"This principle says that nature has chosen amino acid sequences so that the folded state of the protein is very stable. In addition, the undesired interactions between amino acids along the folding pathway are reduced making the acquisition of the folded state a very fast process. Even though nature has reduced the level of frustration in proteins, some degree of it remains up to now as can be observed in the presence of local minima in the energy landscape of proteins."
This idea came from a theory of energy landscapes for proteins that was developed by Bryngelson and Wolynes. In language that I'm more familiar with, the potential energy of the molecules has some fractal-like structure; as the authors state in Section III of the article,
"The kinetic observations suggest that the energy landscape might have a hierarchical structure, arranged in a number of tiers, with different tiers having widely separated average barrier heights."
It seems like structural determination of proteins and other biomolecules has become something akin to bookkeeping. The tools exist and are refined to find static structures, like neutron scattering and NMR. Additionally, the energy landscape theory for protein folding seems to be mature at this point as well. So what open-ended questions still exist in biological physics? After reading up to section V, I've compiled the following grand problems in biological physics as I've interpreted them from this paper only:
  1. "A synthesis that connects structure, energy landscape, dynamics, and function has not yet been achieved." This seems to suggest that there is some degree of incoherence between these individual fields of study, so ideas that link them together are required.
  2. Biochemists can now synthesize their own proteins, but can they do this in a useful manner, for, say, molecular and microscopic engineering purposes?
  3. Sensing and characterizing phase transitions, especially in glassy systems, could lead to better experimental investigations into protein folding.
  4. "Understanding protein folding can improve the capability of predicting protein structure from sequence." Apparently there's a lot of DNA sequence information, but predicting what proteins come from it is nontrivial.

Thursday, February 28, 2013

How do you teach what polarization is?

Today is Optics Day at CREOL, our annual public open house where we present demonstrations of various optical phenomena and technologies, speakers, and pizza. :)

During this year's Optics Day I am charged with explaining the phenomenon of polarization to visitors. Now, I find polarization incredibly difficult to explain to non-scientists, and here's why: the usual treatment of optical polarization in physics involves describing the direction of the electric field vector of an electromagnetic wave. If I were to start with this definition while speaking with somebody not trained in physics, I would then have to explain electromagnetic waves. This would be followed by an explanation of the equivalence of light and electromagnetic waves, wave phenomena in general, linear, circular, and the more general elliptical polarization states, and so on, until the poor person who has come to see a cool demonstration and learn something new has been completely befuddled, because it takes so much background understanding to comprehend what polarization even means.

This year, I am determined to find an explanation of polarization that is more intuitive to a non-scientist. A rough outline that I intend to give for polarization's foundation in observation goes as follows:

1) Our sense of sight is perhaps the most obvious sense we have. We see objects and from these objects we discern shape, size, color and other properties.

2) There are physical quantities that cannot be sensed by our eyes. For example, flowers have fragrance that our noses can detect. Wind is another example. We feel its effects or we see its effects on other things, but we don't directly see "wind." Therefore, there are physical quantities that cannot be seen but nevertheless may be sensed.

3) There are still more phenomena that exist but cannot be sensed by any of our sense organs. For example, a compass points north because the needle experiences a magnetic force. Additionally, small objects all fall towards the earth because of gravity. Magnetism and gravity require tools that sense things that we cannot: magnetic and gravitational fields. Where our senses fail us, we use tools to measure some quantity.

4) Polarization lies in this last classification of phenomena. It cannot be sensed by us (which isn't strictly true), but can be determined by appropriate tools. These tools are things that are found in nature, like quartz crystals, and man-made objects like polarizers and waveplates.

From this foundation, I will explain some of the consequences of the polarization of light, what it can be used for, and may even digress into the physicist's model if the visitors are interested enough. My hope is to build the concept of polarization up from a basis of observation, not to start with our model first, followed later by how we observe polarization.
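One consequence that's easy to demonstrate with a pair of polarizers is Malus's law: the transmitted intensity falls off as the cosine squared of the angle between the light's polarization and the polarizer's axis. A short Python sketch of what visitors would observe (the function and variable names here are my own):

```python
import math

def malus_intensity(i0, angle_degrees):
    """Intensity transmitted through an ideal polarizer whose axis is
    rotated by angle_degrees from the light's polarization direction."""
    theta = math.radians(angle_degrees)
    return i0 * math.cos(theta) ** 2

# Aligned polarizers pass everything; crossed polarizers (90 degrees)
# block the light; at 45 degrees, half the intensity gets through.
print(malus_intensity(1.0, 0))    # 1.0
print(malus_intensity(1.0, 45))   # ~0.5
print(malus_intensity(1.0, 90))   # ~0.0
```

Rotating one polarizer and watching the brightness change is exactly the kind of observation-first demonstration I'm aiming for.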

Sunday, February 24, 2013

Math is not always the best form of communication

I attended a seminar at CREOL this past week concerning similarities between quantum entanglement and the theory of polarization of light. While the research that the speaker presented was interesting, I found the means by which he gave his presentation to be more enlightening. Specifically, I realized something very important about the role of mathematics in communication.

This talk, like every single scientific seminar I can recall attending but one [1], was given in a slideshow program like PowerPoint. The first half of the talk consisted of an overview of Bell's theorem, quantum non-locality, historical interpretations of polarization, etc. The corresponding slides complemented the speaker's words; they were full of illustrations, sentences, and diagrams that helped to convey his message. The second half discussed recent theoretical research by the presenter. The slides contained a lot of mathematics. To explain the math, he would often make statements like,
"From this equation we can see..."
 "It's clear that these two equations reveal..."

After the talk I was struck by how clear and easy to follow the first half was, while the second half was completely lost on me. His statements above were not true! The reason for this, I believe, is not my lack of familiarity with the material but that equations are not always a good means of communicating ideas.

The strengths of mathematics are that it is unambiguous and succinct. Additionally, in terms an engineer might understand, equations compress and encode ideas. The downside is that during a talk the listeners must decompress these ideas to understand them, which takes time and distracts from the speaker's message. Additionally, if the audience doesn't have the background required to make sense of the equations, they can't even decode them to begin with.

Equations are most useful when they're easy to understand and when the speaker absolutely cannot allow for any ambiguity in their message. However, since the purpose of a talk is to transfer information to the audience, the speaker must consider more efficient tools, like illustrations and words. More than likely, if an idea can't be represented in words, then it's not a good idea.

[1] The exception was given by a physics Nobel laureate using transparencies and an overhead projector.

Tuesday, February 19, 2013

A better place for philosophy

A while back I started a new blog called "I Wish to Blog Deliberately" (corny name but accurate in its account). With this new blog I intended to write on philosophical topics and keep more practical discussions focused at MQRL. Aside from being just a collection of philosophical discussions, its creation was important because I was concerned that MQRL might become diluted with esoteric discussions were I to maintain only one blog.

However, since that time I've rarely contributed to IW2BD, but my temperament lately has been philosophical and I need an outlet for it. As a result, I'm beginning to post again to IW2BD. I've also been motivated by the observation that my writing has improved enough that I can write coherently on topics such as teleology and ethics. This improvement has come in no small part from exploring ideas at MQRL.

So, if you're interested in what I have to say, pay IW2BD a visit. I plan on making no changes to MQRL and will continue its theme of the practicalities and execution of science from an academic standpoint.

And if you're really, really interested, e-mail me sometime. I'd love to hear from you.

Friday, February 15, 2013

A short review of best computing practices for scientists

Best Practices for Scientific Computing is a good read if you, like me, are a scientist who frequently programs but never received proper training in software development. It simply enumerates a list of practices that help improve the productivity of coders and the reusability of code written in an academic environment. The techniques on this list are well known to software development professionals and have been refined over many years.

Some of the suggestions and points in the article that are of note include:
  1. 90% of scientists are self-taught programmers
  2. All aspects of software development should be broken into tasks roughly an hour long
  3. Provenance means that data is accompanied by a detailed record of the code and operations needed to recreate the data and code output
  4. Programmers should work in small steps with frequent feedback and course corrections
  5. Use assertions (executable documentation) to avoid mistakes in code
  6. Scientists should reprogram complicated tasks to make them simpler for a human to read instead of including paragraphs of comments explaining how the code works.
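Point 5 is the one I've found most immediately useful. A small Python sketch of what "executable documentation" means (the function here is my own toy example, not one from the paper): the assertions state the assumptions the rest of the code relies on, and they fail loudly the moment those assumptions are violated.

```python
def average_spacing(positions):
    """Mean spacing between successive sorted positions."""
    # Executable documentation: these assertions record the
    # assumptions the code below depends on.
    assert len(positions) >= 2, "need at least two positions"
    assert all(p >= 0 for p in positions), "positions must be non-negative"
    ordered = sorted(positions)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    return sum(gaps) / len(gaps)

print(average_spacing([0.0, 2.0, 5.0]))  # 2.5
```

Unlike a comment, an assertion can't silently drift out of date with the code it describes.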

While largely approachable, the paper still suffers from a slight overuse of jargon from the software development field. As a result, the importance of some of their recommendations escapes me.

Thursday, February 14, 2013

Pre-allocating an array of objects of a structure array in Matlab

I often run into the issue of how to pre-allocate a structure or array before populating it inside a loop in Matlab. This discussion at Stack Overflow, along with some other internet searches, provided the answer:
For objects the pre-allocation works by assigning one of the objects to the very last field in the array. Matlab then fills the other fields before that with objects (handles) that it creates by calling the constructor of that object with no arguments (see Matlab help)
So if I want to create a structure array with one hundred elements and two fields (called xCoord and yCoord), I would enter

% Assigning to the last element pre-allocates the whole array at once
myStruct(100).xCoord = 0;
myStruct(100).yCoord = 0;
and then proceed to populate all the previous elements of myStruct.

Tuesday, February 12, 2013

Two types of breakthroughs

An editorial in this month's Nature Photonics entitled "Transcending limitations" asserts that there are two types of breakthroughs: technological and conceptual. Technological breakthroughs occur when some experiment manages to measure something better or more accurately than in previous works. Conceptual breakthroughs often lead to greater scientific understanding because they force us to look at some phenomenon in a new way.

Often, conceptual breakthroughs require great patience and steady work to explain previously unexplainable experimental results.

I would guess that funding agencies and governments prefer technological breakthroughs because of their immediate economic payoff, whereas academic institutions prefer conceptual breakthroughs.

Monday, February 11, 2013

'Living crystals' reported in Science

Living Crystals of Light-Activated Colloidal Surfers is a recent publication in Science. It presents a study of the dynamics of interacting particles that are propelled by a light-catalyzed reaction between hematite (located on the surface of the colloidal particles) and hydrogen peroxide. These particles experience a nonequilibrium driving force from the reaction, repulsive forces between one another due in part to SDS surfactant present in the solvent, and attractive phoretic forces towards other particles. They observe that when the system is illuminated with blue light and the hydrogen peroxide reaction is catalyzed, the particles form crystalline arrangements that dynamically grow, shrink, merge and split. This is a form of self-organization and is fueled by the energy delivered to the system in the form of light.

Importantly, the attractive pair forces and driving forces are not present when the light is off, which demonstrates that the formation of the crystals occurs under nonequilibrium conditions.

This rather elegant work demonstrates how complex behavior in systems can emerge from interactions between the parts of the system.

A PopSci article summarizes the work, though I think it focuses too much on the properties of life that the crystals satisfy.