Monday, April 29, 2013

Dr. McKenzie defines emergence

Dr. Ross McKenzie has provided a definition of emergence on his blog, Condensed Concepts. This definition nicely summarizes and simplifies emergence by laying out the properties that describe emergent phenomena.

I've always struggled to define emergence because definition by example seems to work best, yet examples are often difficult to present simply. Defining emergence in one sentence, rather than with the list Dr. McKenzie provides, seems nearly impossible; some form of discussion is always required.

What other kinds of models or theories require expanded definitions involving enumerations of their qualities?

Sunday, April 28, 2013

The rules of the scientific game

While listening to NPR recently I encountered the book "The Structure of Scientific Revolutions" by Thomas S. Kuhn. This book, first published in 1962, explains how normal science emerges from periods of discord between scientists (the so-called revolutionary periods). It also places the practice of science on a firm sociological foundation, addressing how scientists think and what drives them to pursue their art.

Kuhn's influential work is credited with popularizing the word paradigm in science. A paradigm is a set of beliefs that provides a basis for scientists to do their work. For example, in optics, I always assume the validity of Maxwell's equations when exploring more esoteric optical phenomena. A paradigm is not just a scientific theory, though. It is a mode of thought that dictates what theories, experiments, methodologies, and instruments constitute valid science.

A particularly enlightening section of the book is Chapter 4, which explains why scientists do their work. Kuhn likens science to puzzle-solving, an analogy often used to attract students to scientific fields. A scientific puzzle may consist in rendering coherent the observations and theories from some class of phenomena, or in extending a theory to a broader range of phenomena, like the particle physicist's search for a Theory of Everything.

What's necessary for this puzzle-solving to advance, Kuhn argues, is a set of rules for ensuring that a solution to a scientific problem can be found. A paradigm thus becomes a useful tool for filtering out which classes of observations can even be addressed with a theory. If observations cannot be adequately accounted for, then the theory must either be modified or thrown out altogether.

These ideas are important for any scientist wishing to make his or her own scientific revolution. Moving into uncharted waters is very difficult because one must first upend the existing paradigm, which most likely exists for good reason. But the most difficult part is addressing those problems which lie outside the scope of the paradigm itself. In some sense, a scientist addressing such observations needs to start from scratch. I credit anyone explicitly working in these areas of science, because they are likely to be shunned by their peers while confronting problems that have never before been rationalized.

I highly recommend this book to anyone working in science. Its discussions effectively remove us from the constraints of our own tools for understanding scientific problems and lay those tools bare before us, without the hubris that sometimes accompanies trying to rationalize our own mode of thought.

Friday, April 26, 2013

CDMS and big science reports

This week's issue of Science contains an article entitled "Dark-Matter Mystery Nears Its Moment of Truth." The article details the latest findings by the CDMS experiment, which is searching for weakly interacting massive particles (WIMPs). WIMPs are candidates for dark matter because they do not interact strongly with other matter and may fill large regions of "empty" space, effectively providing the additional gravitational energy that is missing from observations of the cosmos.

I very much liked the press release on the CDMS website. It is incredibly clear and honest. Possibilities that would negate the findings are mentioned, and the statistics are presented as-is, with little attempt to make them sound better than they are.

I wonder if big science is immune to the hubris and bias often found in publications with fewer than about ten authors. Is there something that makes science more objective as more people are involved? Of course, there are many difficulties in managing these large scientific endeavors, but we shouldn't discount the possibility that involving so many people makes results more objective. This is an argument in favor of big science.

Thursday, April 25, 2013

DNA and its current state

There's a really good review article in the Guardian today about DNA and how we currently understand its operation and importance.

I was surprised to learn that scientists are able to encode writings and even videos onto DNA and decode them later. The information density is huge compared to electronic memory, but the read/write abilities are very slow.

I'm also struck by the fact that nature has provided such a versatile molecule and that humans are now able to capitalize on it.

Monday, April 15, 2013

Evaluating multiple expressions in Scheme with (begin ...)

I just learned about begin in Scheme. Though it looks like a procedure, begin is a special form: it evaluates a sequence of expressions in order and returns the value of the last one.

I found it particularly useful in if conditionals where I wanted to evaluate multiple expressions in the true or false clauses. For example, my implementation of for-each in MIT's Structure and Interpretation of Computer Programs Problem 2.23 goes as follows:
(define (for-each proc items)
  (if (null? items)
      #t
      (begin (proc (car items))
             (for-each proc (cdr items)))))
Here, if the list items is not empty, then I apply the procedure proc to the first element of items and call for-each again, supplying it with the remaining elements of items.
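As a quick check, the definition above can be exercised on a small list (note that this definition shadows Scheme's built-in for-each):

```scheme
(define (for-each proc items)
  (if (null? items)
      #t
      (begin (proc (car items))
             (for-each proc (cdr items)))))

;; Displays each element on its own line, then returns #t:
(for-each (lambda (x) (display x) (newline))
          '(1 2 3))
```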

I learned about begin from this Stack Overflow discussion.

Thursday, April 11, 2013

A beautiful experiment on a nonequilibrium thermodynamic system

I just finished reading an impressive article from 2003 entitled "Observing Brownian motion in vibration-fluidized granular matter". At the article's outset, the authors pose a simple question: can linear response theory describe a nonequilibrium thermodynamic system? This question is important because systems out of thermodynamic equilibrium are difficult to analyze, yet they serve as appropriate models for most natural phenomena. Very powerful mathematical tools exist for systems in equilibrium, so it would be very convenient if their formalism could be extended to nonequilibrium cases.

In particular, the authors explore whether a torsion oscillator driven by a "heat bath" of randomly vibrating glass beads can be described by the fluctuation-dissipation theorem (FDT). The FDT is arguably the hallmark of linear response theory and describes the return to equilibrium of a many-body system subjected to a small perturbation. (A small perturbation means that the response is linearly proportional to the perturbation.)

A canonical example where the FDT finds use is in describing the motion of ions in a fluid between two plates of a capacitor after a voltage difference has been applied to the plates. Prior to the voltage being applied, the motion of the ions is erratic and Brownian. A long time after the constant voltage is applied, they move with an average velocity that is proportional to the electric field between the plates, the proportionality constant being called the mobility. The FDT describes the very short times immediately after the field is applied. It also links the noise (the random movement of the ions in equilibrium) to the mobility of the ions.
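The link between noise and mobility in this example has a compact standard statement, the Einstein relation (a sketch; here D is the diffusion constant of the ions' equilibrium Brownian motion, mu the mobility defined by v = mu E, and q the ionic charge, none of which are named in the article):

```latex
% Einstein relation: equilibrium fluctuations (D) fix the linear response (mu)
\frac{D}{\mu} = \frac{k_B T}{q}
```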

Returning to the article, the authors find that the motion of the oscillator is described by the FDT so long as an "effective" temperature is adopted. Effective temperatures are appealing as analytical tools for describing nonequilibrium systems because they are very simple modifications to the FDT: simply replace T with T_eff and you're done.
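In its classical frequency-domain form (a standard statement, up to Fourier conventions; the notation here is assumed, not taken from the article: S_x is the noise spectrum of the observable and chi'' the imaginary, dissipative part of the response function), the substitution looks like this:

```latex
% Classical FDT: equilibrium noise spectrum tied to dissipation at temperature T
S_x(\omega) = \frac{2 k_B T}{\omega}\, \chi''(\omega)
% Out of equilibrium, the same relation is retained with an effective temperature:
S_x(\omega) = \frac{2 k_B T_{\mathrm{eff}}(\omega)}{\omega}\, \chi''(\omega)
```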

As Cugliandolo points out, to be a good thermodynamic descriptor, an effective temperature should be measurable by a thermometer. I'm not sure what the thermometer is in this system, but I suspect it's the torsion oscillator itself. Furthermore, she stresses that not all nonequilibrium phenomena are describable by effective temperatures. It seems that one requires coupling between fast processes and slower observables, among other conditions. The beauty of the Nature article is that the authors not only confirmed this point (which currently seems to be an area of contention) but did so convincingly, by measuring the relevant quantities directly and under a number of different conditions.

I'm not sure whether the effective temperature is a universal property of nonequilibrium systems; I'm inclined to say it is not. Hopefully more experiments like this one will be done that may further elucidate the current maze of theoretical papers on the topic.


Wednesday, April 10, 2013

Another look at the Schroedinger equation

I just read this article on PhysOrg about a paper recently published on the origins of the Schroedinger equation. One interesting thing I learned is that, in the classical wave equation for matter waves, the phase of the wave determines the amplitude. However, in Schroedinger's equation, the amplitude and phase of the wave are coupled to one another.
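One standard way to see this coupling (a sketch using the Madelung polar decomposition, which is not taken from the article): write the wave function in terms of an amplitude and a phase and substitute into the Schroedinger equation.

```latex
% Polar (Madelung) form of the wave function: amplitude sqrt(rho), phase S
\psi = \sqrt{\rho}\, e^{iS/\hbar}
% Substitution yields a continuity equation, where the phase drives the amplitude,
\partial_t \rho + \nabla \cdot \Bigl( \rho\, \frac{\nabla S}{m} \Bigr) = 0
% and a Hamilton-Jacobi-like equation for the phase with an extra "quantum
% potential" term built from the amplitude -- this is where the coupling enters:
\partial_t S + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0
```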

The authors of the PNAS article demonstrate that this coupling leads to the linearity of the Schroedinger equation, which is one of its most important properties. If it were not linear, I'm not sure the mathematics of quantum mechanics would have turned out so relatively simple; i.e., it might not have been formulated in terms of linear algebra.

Unfortunately, I think the PhysOrg article was a bit misleading. They repeatedly referred to the classical wave equation when speaking of the Hamilton-Jacobi equation. To my knowledge, the classical wave equation and the HJ equation describe different things. More importantly, the classical wave equation is linear.

Is it better to be absolutely truthful in popular science articles or to minimize the amount of jargon and smooth over some minute but important points?

Saturday, April 6, 2013

Order of evaluation for nested lambda expressions in Scheme

I have been working through MIT's Structure and Interpretation of Computer Programs. I'm interested in learning computer science and have found the book to be approachable, though a bit heavy on the theoretical side of the science.

Exercise 2.6 deals with the Church numerals, i.e. natural numbers that are defined in terms of the lambda calculus. I have struggled a bit to understand these concepts, but the struggle ended when I made the following realization.

Consider the definition of 'one' in Scheme's notation for the lambda calculus:
(define one (lambda (f) (lambda (x) (f x))))
Here, f is any function and x is any variable. I was initially confused by the order of operations in this expression. For example, if I had a function called square
(define square (lambda (x) (* x x)))
then I didn't understand how I could use the definition of one to apply the function square to x. The two approaches I tried were
((one 2) square)
((one square) 2)
As it turns out, the second line of code returns the number 4, whereas the first one fails: (one 2) returns a procedure, namely (lambda (x) (2 x)), and applying that procedure to square attempts to call 2 as a procedure, which signals an error. The reason I was confused was that in the definition of one, the expression (lambda (f) ...) occupies the outermost set of nested parentheses, whereas when calling one, the function substituted for the formal parameter f appears in the innermost set of parentheses.
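Tracing the substitutions step by step (a sketch of how a Scheme evaluator reduces these expressions) made the order clear to me:

```scheme
(define one (lambda (f) (lambda (x) (f x))))
(define square (lambda (x) (* x x)))

;; (one square) substitutes square for f and returns (lambda (x) (square x)).
;; Applying that procedure to 2 then reduces to (square 2):
((one square) 2)  ; => 4

;; By contrast, (one 2) substitutes 2 for f and returns (lambda (x) (2 x)).
;; Applying that procedure to square reduces to (2 square), which signals an
;; error because the number 2 is not applicable.
```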