In a post that I wrote nearly two years ago, I wondered about the usefulness of numerics and simulations for science. After all, a simulation necessarily tests the outcomes of a model, and a model is only an approximation of the real world. Because of this, I placed a higher emphasis on understanding experimental data than on numerics.
At the time of that writing I was irritated by a trend I had noticed in papers of presenting numerical results that exactly matched an experimental outcome. This mode of thinking, I believe, consisted of:
- performing an experiment;
- developing a model that explained the experiment;
- using computer software to show that the model could exactly reproduce the experiment and was therefore good.
Unfortunately, the only value I can see in doing this is confirming that the model reproduces the experimental data. What is more interesting to pursue numerically is the effect of varying the model's parameters on the observable quantities. This is especially true for simulations of multi-body interactions, like the Ising model, where either it is not feasible to explore the full parameter space experimentally, or the outcome is too difficult to predict analytically.
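To make this concrete, here is a minimal sketch of the kind of parameter sweep I have in mind: a 2D Ising model simulated with the standard Metropolis Monte Carlo algorithm, with the temperature varied across the known critical point to watch the magnetisation collapse. The lattice size, sweep count, and function name are my own illustrative choices, not from any particular paper.

```python
import math
import random

def simulate_ising(L, T, sweeps, seed=0):
    """Metropolis Monte Carlo for the 2D Ising model (J = 1, no field).

    Returns the average absolute magnetisation per spin, measured over
    the second half of the run (the first half is equilibration).
    """
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb  # energy cost of flipping this spin
            # Metropolis rule: always accept downhill moves, accept
            # uphill moves with Boltzmann probability exp(-dE / T).
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps // 2:
            m = abs(sum(sum(row) for row in spins)) / (L * L)
            mags.append(m)
    return sum(mags) / len(mags)

# Sweep the temperature across the critical point (T_c ≈ 2.27 for J = 1):
for T in (1.5, 2.0, 2.5, 3.0, 3.5):
    print(f"T = {T}: <|m|> ≈ {simulate_ising(16, T, 400):.2f}")
```

Even this toy version shows the point: the interesting output is not one number matched to one experiment, but the trend in an observable as a model parameter is dialled through a regime that would be awkward to scan in the lab.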
Essentially I've restated what I said in that earlier post, but I needed to do it. It's easy, at least for me, to dive headfirst into developing a computer simulation because it's great fun, but if I don't slow down and think about why I'm doing the simulation, I risk wasting my time on a menial task that adds little to the body of scientific knowledge.