Puzzle
I saw this strange insect-eye mirror on the ceiling inside a bank in Vienna. What is its purpose?
Quote
This puts into words something I’ve been feeling strongly over the past few years:
“Excellence is an art won by training and habituation. We do not act rightly because we have virtue or excellence, but we rather have those because we have acted rightly. We are what we repeatedly do. Excellence, then, is not an act but a habit.” – Will Durant, summarizing Aristotle (the wording is Durant’s, from The Story of Philosophy, though it is usually attributed to Aristotle directly)
Link
Lifehacker brings together different pieces of research to look at the Cognitive Cost of Doing Things. For me the most important one is Activation Energy (emphasis mine):
[S]tarting an activity seems to take a larger [amount] of willpower and other resources than keeping going with it. Required activation energy can be adjusted over time – making something into a routine lowers the activation energy to do it. Things like having poorly defined next steps [increase the] activation energy required to get started.
The idea in the quote above is the reason I’ve built routines around all the things I want to do; Activation Energy sounds like the reason those routines have been working.
Picture
Physical cutaway of a Leica lens, one of a few different angles you can see here.
Last Week’s Puzzle
Last week I linked to a New Yorker article that implicitly asked “What is Going Wrong with the Scientific Method?”
The article brings together an interesting collection of anecdotes, observations and studies suggesting that, across many fields, after an effect is first observed (e.g. the effectiveness of a drug against a disease, or an individual’s ability to telepathically identify Zener cards), subsequent measurements of the same thing find progressively weaker versions of it. This seems to undermine the scientific method, which relies on replicability to sort chance results from real ones.
Unfortunately, the article is constructed in a way that tends to disguise how the different pieces of the puzzle relate to one another. I think the apparent effect can be adequately explained by the following:
1) Regression to the Mean
The article mentions this key idea only relatively late, but it is essential background against which many of the anecdotes have to be considered. Cut straight to the ‘conceptual background’ section of the Wikipedia article to understand how it tends to arise. (Note that this also goes a long way towards explaining the Sports Illustrated Cover Jinx.)
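To make the mechanism concrete, here is a minimal Python sketch (every number in it is an illustrative assumption, not a figure from the article): many labs measure modest effects with noisy instruments, attention goes to the most striking initial results, and a second measurement of those same effects comes out far less impressive.

```python
import random

random.seed(0)

def measure(true_effect, noise=1.0):
    """One noisy measurement of an underlying true effect."""
    return true_effect + random.gauss(0, noise)

# Thousands of labs study effects whose true sizes are modest.
true_effects = [random.gauss(0.2, 0.1) for _ in range(10_000)]
first = [measure(t) for t in true_effects]

# Attention goes to the 100 most striking initial results...
striking = sorted(range(10_000), key=lambda i: first[i], reverse=True)[:100]

# ...but those were mostly lucky draws, so a second measurement of the
# very same effects regresses towards the unremarkable true mean.
print(sum(first[i] for i in striking) / 100)                  # inflated, ~2.8
print(sum(measure(true_effects[i]) for i in striking) / 100)  # back near 0.2
```

Nothing about the effects changed between the two measurements; only the selection of which effects we looked at did.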
2) Bad Luck
The main thread of the article follows Jonathan Schooler’s experience of the “decline effect”. The poor fellow watched his most interesting result decay away across subsequent replication attempts; he later tried measuring some more fanciful things specifically to see whether those would also show effects that weakened over time, and sure enough, they did. He could put the first instance down to some kind of Regression to the Mean, but for it to happen repeatedly seemed all too unlikely.
He doesn’t really help his case by testing for paranormal effects, but in any case, with hundreds of thousands of scientists testing different things all over the world, someone, somewhere, is statistically bound to see a whole string of Regressions to the Mean.
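A back-of-the-envelope sketch of how cheap a repeated “decline” is under pure chance (the four-measurement series and the population of 100,000 scientists are assumptions picked for illustration): four independent measurements of a null effect land in strictly decreasing order with probability 1/4! = 1/24, so a large scientific population produces thousands of perfect-looking decline effects by luck alone.

```python
import random

random.seed(1)

scientists = 100_000
decliners = 0
for _ in range(scientists):
    # Four replications of an effect whose true size is zero.
    runs = [random.gauss(0, 1) for _ in range(4)]
    if all(a > b for a, b in zip(runs, runs[1:])):
        decliners += 1

print(decliners)  # ~100_000 / 24 ≈ 4,200 spooky-looking "decline effects"
```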
3) Intentional and Unintentional Cheating or Bias
In the article, a telepathy experiment from the 1930s is cited in which one undergraduate defied chance to make a series of seemingly miraculous correct guesses of Zener cards. Just as the experimenter was about to write papers on the result, the student “lost” this ability. It’s very hard to take such a result seriously: it seems far more likely that the undergraduate had found some way of cheating, which he chose to stop using as soon as he saw how high the stakes were getting.
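For a sense of scale, a Zener deck offers a 1-in-5 chance per card under pure guessing. The article doesn’t give the exact length of the streak, so take a nine-card run purely as an illustration:

```python
# Probability of nine straight correct Zener guesses by pure chance.
print((1 / 5) ** 9)  # ≈ 5.1e-07: staggering as luck, trivial as a card trick
```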
More importantly for conventional research, the paper “Why Most Published Research Findings Are False” highlights the systematic effects that will unfortunately tend to produce a misleading overall impression if one weighs the evidence for an effect purely on published results. The New Yorker article mentions this paper by name and covers some of its observations, but it’s well worth a detailed read.
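The core of Ioannidis’s argument fits in a few lines. In its simplest form (leaving out his bias term), the probability that a statistically significant finding is actually true depends heavily on the prior odds R that a tested relationship is real, so fields that test many long shots are mostly publishing noise:

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value: P(effect is real | significant result).

    R is the ratio of true to false relationships among those tested;
    alpha and power take conventional textbook values here.
    """
    return power * R / (power * R + alpha)

for R in (1.0, 0.25, 0.05, 0.01):
    print(f"prior odds R = {R}: P(true | p < 0.05) = {ppv(R):.2f}")
# 0.94, 0.80, 0.44, 0.14 - long-shot fields are mostly false positives
```

Add selective publication on top (significant results get printed, null results don’t) and the first published estimate of any effect is biased high, ready to “decline” when honest replications arrive.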
4) Placebo Effects in Medicine
Even taken together, the above three ideas don’t seem to account for the large-scale “decline effects” the article reports in the field of medicine. I would suggest something else is at work there: problems with the placebo effect.
Richard recalled an article from New Scientist (which I can’t find online) that pointed to a general problem with double-blind drug studies: active drugs often have side effects, and placebos don’t. Patients in such a study who experience side effects are likely to conclude they have been given the real drug rather than the placebo, and will therefore enjoy a stronger placebo effect, confounding the ability of any medical study to be truly double-blind.
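Here is a toy model of that unblinding problem (every effect size in it is an assumption I’ve picked for illustration, not a number from either article): patients who feel side effects guess they are on the real drug and mount a larger placebo response, so the measured drug-versus-placebo gap overstates the true pharmacological effect.

```python
import random

random.seed(2)

def outcome(on_drug):
    drug_benefit = 0.3 if on_drug else 0.0           # true pharmacological effect
    side_effects = on_drug and random.random() < 0.7
    # Assumed: believing you got the real drug boosts the placebo response.
    placebo_response = 0.5 if side_effects else 0.2
    return drug_benefit + placebo_response + random.gauss(0, 0.5)

n = 50_000
drug_arm = sum(outcome(True) for _ in range(n)) / n
placebo_arm = sum(outcome(False) for _ in range(n)) / n
print(drug_arm - placebo_arm)  # ~0.51, well above the true effect of 0.3
```

If that inflation varies from study to study, the drug’s measured effect will wander for reasons that have nothing to do with the drug.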
Even more disastrously, as this Wired article notes, the placebo effect seems to be getting stronger over time, presumably because it is tied to social perceptions of drug efficacy. That is exactly the kind of shift that would drive an apparent decline in the measured effectiveness of many different drugs.
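One toy mechanism shows how a rising placebo response can masquerade as a declining drug, assuming (my assumption, purely for illustration) that measured improvement saturates at a ceiling: once the placebo response alone gets close to “as good as it gets”, there is little headroom left for the drug to demonstrate anything extra.

```python
CEILING = 1.0           # improvement saturates: you can't beat "fully recovered"
TRUE_DRUG_EFFECT = 0.4  # constant: the drug itself never changes

# Assumed placebo responses creeping upwards over the years.
for year, placebo in [(1990, 0.2), (1995, 0.45), (2000, 0.7), (2005, 0.8), (2010, 0.9)]:
    drug_arm = min(CEILING, placebo + TRUE_DRUG_EFFECT)
    print(year, round(drug_arm - placebo, 2))
# 1990 0.4 ... 2010 0.1: a textbook "decline effect" from an unchanged drug
```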
In Conclusion
The Scientific Method is fine. We just need to remember a few things about statistics. This XKCD should help somewhat.
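And in the spirit of that comic (if it’s the one about multiple comparisons I have in mind): run twenty independent tests of null effects at p < 0.05 and the chance of at least one spurious “discovery” is 1 - 0.95**20 ≈ 64%, which a quick simulation confirms.

```python
import random

random.seed(3)

trials = 10_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))  # 20 tests of pure noise
    for _ in range(trials)
)
print(hits / trials)  # ~0.64: at least one "significant" result, most weeks
```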