In my last article I began exploring the relationship between optics and biology to better determine to what extent optics is capable of solving problems in biology, particularly molecular and microbiology. I posed a set of questions, one of them asking whether "...the current trends in improving microscopies [will] lead to answers of the fundamental questions of molecular and microbiology."
Let me start this brief essay by stating my current opinion, which is based primarily on conversations with biologists and reading around the internet. I believe that the fundamental problems in biology lie at the molecular level and at the systems level. The molecular-level problems include how certain proteins fold and how they are transported through organelles like the Golgi bodies [1]. The systems-level problems deal with the coherent interaction of the many elements within an organism. To illustrate this, consider how the coordinated actions of various cells (such as Schwann cells, astrocytes, and neurons) give rise to a functioning nervous system.
Microscopy, unfortunately, is ill-suited to exploring either of these two levels. It is true that fluorescence microscopy allows us to specifically target structures of interest inside a cell, and that super-resolving microscopies that beat the diffraction limit, like PALM and STORM, now exist. However, fluorescent markers--which PALM and STORM also rely on--are known to adversely affect the behavior of live cells. PALM and STORM are furthermore very complex to implement and limited to some degree by their data acquisition times [2].
One popular line of microscopy research is label-free microscopy, whereby images are acquired without introducing any artificial contrast-generating agent into the sample. One example is based on stimulated Raman scattering (SRS). This is essentially a spectroscopic technique: one infers which known substances contributed to the spectrum measured at each point in the image. Achieving good spatial resolution with SRS, or with any other label-free technique, usually comes at the cost of a severe increase in measurement time. At the time of this writing, I see neither the spatial nor the temporal resolution of label-free microscopies as good enough for addressing the open problems in biology.
At the systems level, microscopy is simply not the tool to use. I think that computer modeling and experiments on live animals are the norm here, though I am not saying that optics cannot play any role.
Overall, I think that we optical scientists are placing too much emphasis on improving light microscopy [3]. It seems to me that the information that biologists require is not found in images but rather in some other form. This is not to say that optics is of no use to biology. Take the technique known as dual polarization interferometry, for example, which uses light to probe protein crystal and lipid bilayer growth on waveguides. As another example, consider that optical tweezers have been influential in measuring the mechanics of biopolymers like DNA.
So what should we focus our attention on? I think label-free sensing mechanisms are a step in the right direction because they risk minimal alteration to the functionality of cells and biomolecules. I also think that techniques for sensing dynamic phenomena will trump anything that looks only at the structure of fixed (dead) cells. Structure at this point seems well known to biologists; how structure evolves in time is not. Finally, controlling biological systems with light (e.g. optogenetics) seems incredibly promising, though I think it is too early to tell whether it will be valuable in deepening our knowledge.
In a future post I hope to address whether optics is the best tool for meeting these needs.
Some references
The Wikipedia articles on Molecular biology, the Central dogma of molecular biology, and Biophysics are worth reading.
Stanford Encyclopedia of Philosophy entry on Molecular Biology
Seven fundamental, unsolved questions in molecular biology: Cooperative storage and bi-directional transfer of biological information by nucleic acids and proteins: an alternative to “central dogma”
Notes
[1] I have personally been exposed to a problem of the mechanics of certain biopolymers in regulating the structure of mitochondria. Biopolymers may arguably lie outside the realm of molecular biology since they are made of many, many molecules and not just a few, but I believe that my experience with this problem gave me some good insight.
[2] I have heard that Nikon microscopes are now offering STORM capabilities.
[3] I should point out that I think the work on improving microscopies IS worth doing; I'm just not so certain that so much attention should be given to it.
Monday, January 28, 2013
Friday, January 25, 2013
An optical scientist considers the question: what do biologists want from a microscope?
Optics and biology have been intertwined for hundreds of years. Robert Hooke and Antonie van Leeuwenhoek both contributed greatly to the fields of microscopy and microbiology in their infancy, with advances in one field driving a greater understanding of the other. As optics evolved and the technologies derived from it became more refined, the number of discoveries in the realm of microbiology witnessed a concomitant increase. This was perhaps recognized in part with the award of the 1953 Nobel Prize in Physics to Frits Zernike for the phase contrast microscope, a tool that rendered otherwise invisible cells visible with relatively modest modifications to an existing microscope. Much work in microbiology followed from this and other developments in optics.
The relationship seemed to change, though, starting in the mid-twentieth century with the advent of molecular biology. During this time, molecular biological techniques evolved and matured to the point where discoveries were facilitated primarily by non-optical means, with microscopes serving more as a tool for routine lab work than as significant drivers of new knowledge. After all, a traditional light microscope is limited by diffraction to resolving structures no smaller than roughly half the wavelength of visible light, or about 200 nanometers (a fifth of a millionth of a meter). DNA, proteins, and all the other biomolecules are simply too small to see, even with the most powerful microscope objectives.
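To put a rough number on that limit (a back-of-the-envelope estimate of my own, not part of the original argument), the Abbe criterion for a green wavelength and a high-numerical-aperture oil-immersion objective gives

d \approx \lambda / (2\,\mathrm{NA}) \approx 550\ \mathrm{nm} / (2 \times 1.4) \approx 200\ \mathrm{nm},

which is still tens of times larger than a typical protein.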
Of course, one could argue that the development of targeted fluorescent proteins, which reveal where a particular molecule resides within a cell, helped to advance the field of optics, but in this case the role of enabler switched sides; molecular biology led to an increase in the number of optical technologies for measuring fluorescent markers, such as fluorescence correlation spectroscopy. From the viewpoint of a scientist, this reversal is a bit distasteful. We would like technology to enable new discoveries about the fundamentals of life, not new discoveries to lead to technology that tells us what we already know.
Now we are well into the twenty-first century and are rooted firmly within the scientific age of molecular biology and biotechnology. (The age of physics has passed; that field now concerns itself primarily with the ultimate limits of scale: the infinitesimal quark and the awesomely large cosmos.) Given the history between optics and biology and the recent change in their relationship, I think it is time to take stock of this relationship.
In the near future I will write posts that explore this topic. I hope to answer questions like
- What do biologists want out of a measurement technique?
- Will the current trends in improving microscopies lead to answers of the fundamental questions of molecular and microbiology, or are we moving in the wrong direction?
- Are optical scientists misguided in the search for improved images? Are there other forms of information carried by light that are more useful than images?
- Will it be possible to better control biological processes using light?
Labels: biophysics, optics, sensing
Thursday, January 24, 2013
Physical processes described by pathological distributions
I am reading a decade-old paper entitled Above, below, and beyond Brownian motion, which explains rather nicely random-walk processes governed by probability distributions with infinite moments, such as the increasingly famous Lévy flight.
What I like about this paper is that it provides many real examples of processes described by probability distributions with infinite moments. This means that the usual descriptors of the random variable, such as its mean or variance, do not exist. One surprising example is the distribution of positions at which light rays from a point source, emitting uniformly in all directions, strike a wall; the hit positions follow the heavy-tailed Cauchy (Lorentzian) distribution.
A random walk described by a "pathological" distribution (as mathematicians sometimes call them) can be understood conceptually as a process with a very large spread in its characteristic times or distances. By spread I mean that the random variable may take any value with a probability that is not exponentially vanishing (a property also known as scale-free). Probability distributions with power-law tails are popular for capturing this behavior since they decay more slowly than an exponential distribution.
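To make the light-ray example above concrete, here is a small sketch of my own (it is not taken from the paper, and it assumes a two-dimensional geometry with the source one unit from the wall):

import numpy as np

np.random.seed(0)

# A point source sits one unit from an infinite wall and emits rays at
# angles drawn uniformly from (-pi/2, pi/2). The spot where a ray hits
# the wall is x = tan(theta), which is Cauchy distributed, so its mean
# and variance do not exist.
theta = np.random.uniform(-np.pi / 2, np.pi / 2, size=1000000)
x = np.tan(theta)

# The running sample mean never settles down; rare, enormous values of x
# keep yanking it around, unlike for a distribution with a finite mean.
for n in (1000, 10000, 100000, 1000000):
    print(n, x[:n].mean())

Rerunning this with different seeds makes the point even more strongly: the "average" hit position depends almost entirely on which huge outliers happened to show up.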
I admit that much of the theory in the paper is best understood by someone who already is familiar with many of the ideas from probability theory, such as moment generating functions.
Saturday, January 19, 2013
Numpy's memory maps for increased file read/write efficiency
I mainly use Matlab and Python when I need to analyze and visualize data and run simulations. As I mentioned in my previous post, I like to document when I learn something new that makes my programs written with these tools cleaner or more efficient.
I've just discovered Numpy's memory map object for efficiently reading and writing to very large arrays in files. A memory map is used to access only part of an array that is saved in a binary file on a hard disk. The memory map may be indexed like any other array that exists in memory, but only the part of the array that you are working on will be loaded into memory. This is similar to the idea of generators that I wrote about in my previous post.
As an example, I create a memory map object called inFile which interacts with a file whose location is stored in the variable fName:
inFile = numpy.memmap(fName, dtype='float32', mode='w+', shape=(3, 4))
The mode argument 'w+' tells numpy to create the file if it doesn't exist or to overwrite it if it does. Printing the contents of inFile at this point will display a 3x4 array of zeros on the screen. When I want to write to this memory map, I can write to a portion of it like so:
inFile[0, 3] = 3.2
which writes 3.2 to the element in the first row, fourth column of the array. When I delete inFile, its contents are flushed to the file on disk:
del inFile
Reading parts of this array from fName is done with expressions like the following:
inFile = numpy.memmap(fName, dtype='float32', mode='r', shape=(3, 4))
Now, to pull some of the file's contents into an array in memory, I only need to type
myNewVariable = inFile[:, 2]
which, for example, gives me the third column of the file as the array myNewVariable.
I've noticed an increase in speed of my programs by using memory maps, though I haven't tested exactly how much time is saved.
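For what it's worth, here is the kind of rough comparison I have in mind (a sketch of my own, not a careful benchmark; the file name, array shape, and row range are made up). It writes a raw float32 array to disk with ndarray.tofile and then pulls a small block of rows back out, once by reading the whole file with numpy.fromfile and once through a memory map:

import time
import numpy as np

fName = 'bigarray.dat'   # hypothetical raw float32 file
shape = (1000000, 10)

# Write a test array to disk once (raw binary, no header).
np.random.rand(*shape).astype('float32').tofile(fName)

# Method 1: read the entire file into memory, then slice out ten rows.
t0 = time.time()
rowsFull = np.fromfile(fName, dtype='float32').reshape(shape)[5000:5010].copy()
print('full read: %.3f s' % (time.time() - t0))

# Method 2: slice the same ten rows through a memory map; only the parts
# of the file containing those rows need to be read from disk.
t0 = time.time()
rowsMmap = np.array(np.memmap(fName, dtype='float32', mode='r', shape=shape)[5000:5010])
print('memmap read: %.3f s' % (time.time() - t0))

I would expect the memory-mapped read to win handily for small slices like this one, with the gap shrinking for access patterns that end up touching most of the file.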
Labels: Python
Thursday, January 17, 2013
Increase a Python program's memory efficiency with generators
I've recently been working with very large data sets (more than a million data points) and have encountered a serious reduction in the efficiency of my Python programs' computations. One reason for this is that I have been reading a large data file into memory all at once, performing my computation on this data file, and then moving onto the next file, all the while keeping my calculation results in memory so that I can plot them at the end.
One tool I've discovered for increasing the efficiency of these types of operations is the idea of a generator. Now, I realize that these are well-known in Python circles. However, I am not primarily a programmer, so I am not fully aware of the tools available to me when I write programs. Hence, I sometimes use this blog as a notebook so I can easily find these tools again in the future.
Simply put, a generator is an object that executes commands and yet may be iterated over like the elements of a list. One example, shown here, is the readline() method. The code looks like this:
inFile = open('sampledata.txt')
w = inFile.readline()
Each time readline() is called, w is assigned the string representing the next line in the file sampledata.txt, so only one line is read into memory at a time. According to the above link, an object that loads just part of a larger data set into memory while still letting you iterate over all of its parts is, in effect, a generator.
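As a small sketch of the same idea (my own example, not from the linked page; it assumes each line of the file holds a single number), a generator function can yield one parsed value at a time so that a running statistic is computed without ever holding the whole file in memory:

def read_values(path):
    # Yield one float per line; only one line is held in memory at a time.
    with open(path) as f:
        for line in f:
            yield float(line)

# Consume the generator lazily to compute a running mean.
total = 0.0
count = 0
for value in read_values('sampledata.txt'):
    total += value
    count += 1
print(total / count)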
Another nice description of the use of generators is given here. And, because I am having some trouble plotting all this data, a discussion of possible solutions to plotting large data sets is presented here at Stack Overflow.
Monday, January 14, 2013
A primary school method for multiplication of numbers
I found this link that was posted by a friend on Facebook this morning. It's a pictorial description of an algorithm for computing the multiplication of two-digit numbers. I haven't had time yet to think about it, but so far as I can see now it's useful only for computing the product of numbers with one or two digits. Still, it's pretty cool.
In 2011 I had a calendar in my office on mental math algorithms called Lightning Calculation, which I believe I also mentioned in an earlier post. Check this out if you wish to train your brain in the powers of mental computation!
Tuesday, January 8, 2013
Improve your writing by removing dangling modifiers
Communicating concepts and results in a cogent manner is a very important skill, and unfortunately is not emphasized enough in Academia. As a result, I take an active interest in improving the quality of my writing, not just in the posts to this blog but also in the large amount of technical writing I do. This means that I am careful to notice any portions of text that I suspect of being incorrect or ambiguous and follow up immediately with a search for a clarification.
As an example, I was working on my dissertation today and encountered a use of a Latin expression (i.e.) but was unsure whether it should be italicized or not. I found no general answer online, but it appears safe to leave it unformatted unless a journal or editor requires it. More importantly, though, in my search I came across this link on dangling modifiers, a term which I was unfamiliar with. So, I decided to peruse the webpage out of curiosity.
I found that a dangling modifier is a word or phrase that describes the wrong subject in a sentence. In other words, it implies a subject other than the one that is actually stated. As I read, I realized that dangling modifiers usually arise from careless use of the passive voice, the voice that many scientists, including myself, employ liberally in their technical writing.
The first example given on the website is the sentence "Using the survey data, the effects of education on job satisfaction were examined." The dangling modifier, "using the survey data," implies that the scientists use the data, yet this phrase is grammatically (and incorrectly) modifying "the effects."
Unfortunately, this is a very common practice in my writing, and one that I anticipate will be difficult to overcome. Principally, I can avoid it by using more of the active voice, which, though it implies some partiality, reads as a stronger form of statement. I don't think this will be easy given that my adviser and other senior researchers often review and modify my writing. Since the passive voice is so entrenched in modern technical writing, it will take some effort to overcome the preferences of my peers.
Labels: writing
Monday, January 7, 2013
To specialize or not to specialize
Well, I am finally back from a long break at home in Ohio. I saw my sister get married, ate lots of cookies, and visited with my entertaining in-laws-to-be. I feel that the break from thinking about science has done me quite a bit of good and given me some new perspectives on things. Now, as I settle into work on completing my dissertation, I still plan on taking time to write here since it is a nice break from the slow, boring work of drilling away in Word on an endless technical document. Besides, I've gained so much from writing this blog that I don't think that I will ever let it go.
I don't have much to say for today, but I do want to put into words something I've thought about previously: namely, how the degree of specialization varies between research groups and what this means for individuals looking to enter graduate school or change jobs. By degree of specialization, I'm referring to the extent to which a researcher's problems encompass a number of different academic fields. A high degree of specialization involves problems that do not draw on concepts from other fields, and vice-versa.
**The following lists are my opinions only. I realize that some statements may be regarded as contentious. However, these opinions are based on my experiences and to me, at least, carry a degree of truth.**
Characteristics of a highly specialized field are:
- There are many unsolved problems; these often involve refinements to simpler models
- One does not need to maintain a wide breadth of knowledge across other fields
- One's skills may be extremely marketable, but in a field of limited size
- One may control the direction of his or her field more easily since there are fewer competing researchers and ideas to contend with
Characteristics of a less-specialized field are:
- One can apply to many different jobs, though one may not usually be the best candidate
- Solutions to problems make a large impact since the problems apply to many settings
- It is more difficult to read a majority of research articles that one finds since one is not as well-versed in the details
- One's research is more likely to be attacked by more specialized researchers if one neglects some newer, more precise models (I often find that this is the case, even if the more precise models do not bear upon the problem I am solving)