Today is Optics Day at CREOL, our annual public open house featuring demonstrations of various optical phenomena and technologies, invited speakers, and pizza. :)
During this year's Optics Day I am charged with explaining the phenomenon of polarization to visitors. I find polarization incredibly difficult to explain to non-scientists, and here's why: the usual treatment of optical polarization in physics describes the direction of the electric field vector of an electromagnetic wave. If I were to start with this definition while speaking with somebody not trained in physics, I would then have to explain electromagnetic waves. This would be followed by an explanation of the equivalence of light and electromagnetic waves, wave phenomena in general, linear, circular, and the more general elliptical polarization states, and so on, until the poor person who came to see a cool demonstration and learn something new has been completely befuddled, because it takes so much background understanding to comprehend what polarization even means.
This year, I am determined to find an explanation of polarization that is more intuitive to a non-scientist. The rough outline I intend to follow, grounding polarization in observation, goes as follows:
1) Our sense of sight is perhaps the most obvious sense we have. We see objects and from these objects we discern shape, size, color and other properties.
2) There are physical quantities that cannot be sensed by our eyes. For example, flowers have fragrance that our noses can detect. Wind is another example. We feel its effects or we see its effects on other things, but we don't directly see "wind." Therefore, there are physical quantities that cannot be seen but nevertheless may be sensed.
3) Still more phenomena exist that cannot be sensed by any of our sense organs. For example, a compass points north because its needle experiences a magnetic force, and small objects fall towards the earth because of gravity. Magnetism and gravity require tools that sense what we cannot: magnetic and gravitational fields. Where our senses fail us, we use tools to measure these quantities.
4) Polarization lies in this last class of phenomena. It cannot be sensed by us (which isn't strictly true), but it can be determined with the appropriate tools. These tools include things found in nature, like quartz crystals, and man-made objects like polarizers and waveplates.
From this foundation, I will explain some of the consequences of the polarization of light, what it can be used for, and may even digress into the physicist's model if the visitors are interested enough. My hope is to build the concept of polarization up from a basis of observation, not to start with our model first, followed later by how we observe polarization.
Thursday, February 28, 2013
Sunday, February 24, 2013
Math is not always the best form of communication
I attended a seminar at CREOL this past week concerning similarities between quantum entanglement and the theory of polarization of light. While the research that the speaker presented was interesting, I found the means by which he gave his presentation to be more enlightening. Specifically, I realized something very important about the role of mathematics in communication.
This talk, like every single scientific seminar I can recall attending but one [1], was given in a slideshow program like PowerPoint. The first half of the talk consisted of an overview of Bell's theorem, quantum non-locality, historical interpretations of polarization, etc. The corresponding slides complemented the speaker's words; they were full of illustrations, sentences, and diagrams that helped to convey his message. The second half discussed recent theoretical research by the presenter. The slides contained a lot of mathematics. To explain the math, he would often make statements like,
"From this equation we can see..."or
"It's clear that these two equations reveal..."
After the talk I was struck by how clear and easy to follow the first half was, while the second half was completely lost on me. His statements above were simply not true! The reason, I believe, is not my lack of familiarity with the material but that equations are not always a good means of communicating ideas.
The strength of mathematics is that it is unambiguous and succinct. In terms an engineer might appreciate, equations compress and encode ideas. The downside is that during a talk the listeners must decompress those ideas to understand them, which takes time and distracts from the speaker's message. Additionally, if the audience doesn't have the background required to make sense of the equations, they can't decode them at all.
Equations are most useful when they're easy to understand and when the speaker absolutely cannot allow for any ambiguity in their message. However, since the purpose of a talk is to transfer information to the audience, the speaker must consider more efficient tools, like illustrations and words. More than likely, if an idea can't be represented in words, then it's not a good idea.
[1] The exception was given by a physics Nobel laureate using transparencies and an overhead projector.
Tuesday, February 19, 2013
A better place for philosophy
A while back I started a new blog called "I Wish to Blog Deliberately" (a corny name, but an accurate one). With this new blog I intended to write on philosophical topics and keep more practical discussions focused at MQRL. Beyond being just a collection of philosophical discussions, its creation was important because I was concerned that MQRL might become diluted with esoteric discussions if I were to maintain only one blog.
However, since then I've rarely contributed to IW2BD; but lately my temperament has been philosophical and I need an outlet for it. As a result, I'm beginning to post to IW2BD again. I've also been motivated by the observation that my writing has improved enough that I can now write coherently on topics such as teleology and ethics. This has arisen in no small part because my writing and thinking have improved as I explored ideas at MQRL.
So, if you're interested in what I have to say, pay IW2BD a visit. I plan on making no changes to MQRL and will continue its theme of the practicalities and execution of science from an academic standpoint.
And if you're really, really interested, e-mail me sometime at kyle.m.douglass@gmail.com. I'd love to hear from you.
Labels:
blogs,
philosophy
Friday, February 15, 2013
A short review of best computing practices for scientists
Best Practices for Scientific Computing is a good read if you, like me, are a scientist who frequently programs but never received formal training in software development. It enumerates a list of practices that help improve the productivity of coders and the reusability of code written in an academic environment. The techniques on this list are well known to software development professionals and have been refined over many years.
Some of the suggestions and points in the article that are of note include:
- 90% of scientists are self-taught programmers
- All aspects of software development should be broken into tasks roughly an hour long
- Data provenance means accompanying a dataset with a detailed record of the code and operations needed to recreate the data and code output
- Programmers should work in small steps with frequent feedback and course corrections
- Use assertions (executable documentation) to avoid mistakes in code
- Scientists should reprogram complicated tasks to make them simpler for a human to read instead of including paragraphs of comments explaining how the code works.
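The assertion recommendation can be sketched with a short, hypothetical Matlab function (the function and its check are my own illustration, not an example from the paper):

```matlab
function d = meanStepSize(t)
% MEANSTEPSIZE Average spacing of a time vector.
% The assertion acts as executable documentation: it both states and
% enforces the assumption that t is strictly increasing.
assert(all(diff(t) > 0), 'meanStepSize:badInput', ...
       't must be strictly increasing');
d = mean(diff(t));
end
```

If the assumption is ever violated, the code fails immediately with a clear message instead of silently returning a meaningless result.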
While largely approachable, the paper still suffers from a slight overuse of jargon from the software development field. As a result, the importance of some of their recommendations escapes me.
Thursday, February 14, 2013
Pre-allocating an array of objects of a structure array in Matlab
I often run into the issue of how to pre-allocate a structure or array before populating it inside a loop in Matlab. This discussion at Stack Overflow, along with some other internet searches, provided the answer:
For objects, pre-allocation works by assigning one of the objects to the very last element in the array. Matlab then fills the preceding elements with objects (handles) that it creates by calling the object's constructor with no arguments (see the Matlab help). So if I want to create a structure array with one hundred elements and two fields (called xCoord and yCoord), I would enter

myStruct(100).xCoord = 0;
myStruct(100).yCoord = 0;

and then proceed to populate all the previous elements of myStruct.
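A minimal sketch of the full pattern, with hypothetical field values of my own choosing:

```matlab
% Pre-allocate by assigning to the last element first; Matlab sizes the
% entire 1x100 array at once instead of growing it on every iteration.
myStruct(100).xCoord = 0;
myStruct(100).yCoord = 0;

% Populate the earlier elements; no resizing occurs inside the loop.
for k = 1:100
    myStruct(k).xCoord = k;
    myStruct(k).yCoord = k^2;
end
```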
Labels:
Matlab
Tuesday, February 12, 2013
Two types of breakthroughs
An editorial in this month's Nature Photonics entitled "Transcending limitations" asserts that there are two types of breakthroughs: technological and conceptual. Technological breakthroughs occur when some experiment manages to measure something better or more accurately than in previous works. Conceptual breakthroughs often lead to greater scientific understanding because they force us to look at some phenomenon in a new way.
Often, conceptual breakthroughs require great patience and steady work to explain previously inexplicable experimental results.
I would guess that funding agencies and governments prefer technological breakthroughs because of their immediate economic payoff, whereas academic institutions prefer conceptual breakthroughs.
Labels:
research
Monday, February 11, 2013
'Living crystals' reported in Science
Living Crystals of Light-Activated Colloidal Surfers is a recent publication in Science. It presents a study of the dynamics of interacting particles that are propelled by a light-catalyzed reaction between hematite (located on the surface of the colloidal particles) and hydrogen peroxide. These particles experience a nonequilibrium driving force from the reaction, repulsive forces between one another due in part to the SDS surfactant present in the solvent, and attractive phoretic forces towards other particles. The authors observe that when the system is illuminated with blue light and the hydrogen peroxide reaction is catalyzed, the particles form crystalline arrangements that dynamically grow, shrink, merge, and split. This is a form of self-organization fueled by the energy delivered to the system in the form of light.
Importantly, the attractive pair forces and driving forces are not present when the light is off, which demonstrates that the crystals form under nonequilibrium conditions.
This rather elegant work demonstrates how complex behavior in systems can emerge from interactions between the parts of the system.
A PopSci article summarizes the work, though I think it focuses too much on the properties of life that the crystals satisfy.
Friday, February 8, 2013
Some notes on colloids, surfactants, and the Debye length
In the lab I frequently handle and create colloids--suspensions of one type of material in another, such as solid particles in water. A good reference site I just found for understanding and working with colloids is at SubsTech's website.
Things I also learned today:
- The Debye length may be conceptualized as the characteristic size of a fluctuation of the electrostatic potential inside a solution of charged particles. A longer Debye length means the potential remains roughly constant over a longer distance.
- Surfactants may be used to stabilize a colloidal dispersion. This is different from electrostatic stabilization, whereby some charge is permanently located on/inside the particles to cause them to repel one another.
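For reference, the textbook expression for the Debye length in an electrolyte (this formula comes from standard colloid-science references, not from the notes above) is

\[
\lambda_D = \sqrt{\frac{\varepsilon_r \varepsilon_0 k_B T}{2 N_A e^2 I}},
\]

where \( \varepsilon_r \varepsilon_0 \) is the solvent permittivity, \( k_B T \) is the thermal energy, \( e \) is the elementary charge, and \( I \) is the ionic strength in mol/m\(^3\). Adding salt increases \( I \) and therefore shortens the Debye length, reducing the range of the screened electrostatic repulsion between particles.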
Thursday, February 7, 2013
The debate between data-centric science and hypothesis testing in soil microbiology
I've recently explored the topic of data-centric science, i.e. the art of answering scientific questions using data mining instead of generating hypotheses and testing them. I was therefore very interested in an article in this week's issue of Nature entitled "Microbiology: The life beneath our feet." The article was written by two scientists who study the relation between the microbial content of soil and its encompassing environment. One of them, Janet K. Jansson, promotes the use of data mining from "omic" studies, while the other, James I. Prosser, argues in favor of hypothesis-driven experiments. "Omic" studies is jargon for the practice of identifying microbial species through detection of DNA (genomics), RNA (transcriptomics), proteins (proteomics), and metabolites (metabolomics).
Dr. Jansson argues that data-mining reveals new species of microbes and provides a sufficient base from which to carry out further experiments and tests of hypotheses. She claims that the primary critique of omic studies (which is that it provides only descriptive data) is weak since we simply don't know enough about the different species of microbes to begin with. Data mining from omics fills those gaps in our knowledge. Finally, she provides several examples where omic data mining has led to new discoveries and understanding around the world.
Dr. Prosser, on the other hand, claims that better value is obtained from hypothesis-driven research since it provides new concepts and logical frameworks for understanding the microbe-environment relationship. One quote from him that I particularly liked was the following:
Hypotheses lack value, however, if they are based solely on observations, or if they are relevant only to the data used to construct them. They are worthwhile if they incorporate novel ideas and flashes of inspiration; they can propose (ideally universal) explanations and mechanisms; and they generate predictions that can be tested by experimentation. It is this process, and not the initial observations, that truly increases understanding. Hypothesis-driven research can thus provide counter-observational, non-intuitive predictions and conceptual frameworks, and can indicate which techniques are, and are not, needed to test them.

He furthermore argues that the information obtained from data mining can generate new hypotheses but cannot be used to test those hypotheses, an argument that I had never before considered but believe to be true.
One final sentiment worth noting comes from his statement, "In practice, purely descriptive studies of microbial communities are rare." I believe he is arguing that, while there is value in both approaches, the payoff from hypothesis testing is much greater and should therefore receive more resources.
I am quite pleased to see someone arguing against the use of data mining, if for no other reason than to balance out the arguments. But since I believe that scientific manpower is growing faster than the number of hypotheses that can be generated, I see no reason to abandon data mining as a tool in this regard.
Wednesday, February 6, 2013
What do biologists want? It's probably best to ask them.
Just a thought: more than likely, if a biologist wants to learn something, they'll need a tool that is specialized for measuring exactly the quantity that they're interested in. A general measurement tool will almost always be less-than-ideal for measuring some specific quantity. Therefore, we optical scientists should spend less effort in optimizing a mature imaging technology and instead work routinely with biologists to help solve their problems.
Labels:
optics
The relevance of multiphoton microscopy to physicists and biologists
There is a comprehensive review article in this month's Nature Photonics concerning many of the technological capabilities and recent advancements of multiphoton microscopy (MPM). The article details many recent advances in MPM engineering for achieving faster image acquisition, increased signal-to-noise ratios, and deeper imaging capabilities.
These advances are impressive and lead me to believe that MPM has become a rather mature technology. I'm curious to know to what extent biologists have used MPM to solve problems in their research since the review article is somewhat lacking in references that come from journals outside of physics and optics.
This is the same problem I encounter again and again in optics. It's very difficult to identify worthwhile work in a research field that primarily develops tools for researchers from other fields to use. I do not blame the optical scientists for this difficulty, though, and here's why. An optical sensing technique is usually not suited for publication in pure biology journals, so its developers must publish in optics and applied science journals. In that arena, they must argue for their technique relative to other related techniques, not for its suitability for solving biological problems. A sentence in the introduction and conclusion of an article is usually sufficient for reviewers to acknowledge the technique's worth for a biological problem of interest.
Labels:
microscopy,
optics
Tuesday, February 5, 2013
Horse Pens 40: My favorite bouldering location
I returned Monday morning from a three day trip up to Chattanooga, Tennessee for a bouldering trip to LRC, a.k.a. Stonefort. Unfortunately, we got only one good day in before a large amount of snow fell on the site and made travel up and down the mountain a bit dangerous. So, we packed up and drove to Horse Pens 40, a natural boulder field atop Chandler Mountain in northeast Alabama.
I love HP40, and not just for the climbing. It has a rich human and natural history. HP40 has been inhabited by humans for about 15,000 years, and its sandstone rock formations have served as natural horse corrals and Native American burial sites, among other things. The rock is full of huecos (climber lingo for holes or pockets) and many large slopers (more climber lingo for big, rounded features). Much of it has been shaped by water grooves, where water running off the top of a boulder has etched shallow channels into it. The rock is tough and harsh on the hands; I found it rougher than LRC's boulders.
The setting of HP40 is also very scenic, located in the woods atop the mountain. I'm not sure what the dominant tree there is, but I did notice many shells similar to the hickory nuts we have back home in Ohio. I also noticed many spherical seed-bearing structures with diameters slightly larger than a quarter. These structures had already released their seeds (it was early February), which I presume had covered the surface of the sphere.
The outstanding science question of this trip is: why is rock "stickier" when it's cold? By sticky, I mean that a climber's hands and shoes are less likely to slip.
Labels:
climbing