Think Like a Scientist: Don’t be a doof about “scientific proof”

“Scientifically proven” is often thrown in as a final punch to convince you to buy something (“Scientifically proven to support liver health!”) or take a certain political stance (“There is no scientific proof that teaching abstinence in public schools reduces teenage pregnancy.”). But regardless of my own susceptibility to marketing ploys or personal political beliefs, this phrase often makes me cringe.


Now, my goal here is *not* to convince you to doubt anyone and everyone who talks about scientific proof. There is, for example, strong scientific proof of gravity. So going too far in one direction – immediately rejecting a claim solely because someone assures you it has been scientifically proven – is just as stupid as immediately accepting a claim because someone assures you it has been scientifically proven.

So, how do you even begin to evaluate “scientifically proven” claims?

Let’s start with a real-life example of mine, with the nutrition company Neolife.

A few years back I had several friends who were big-time crusaders for Neolife supplements. “Cgallo, you will be especially impressed by these supplements because they are backed by Science!” they would insist. In fact, the company tagline is even “Based in Nature | Backed by Science.” Being the curious and trusting person that I am, I went ahead and bought a three-month supply of their little super pack of wellness, which included all sorts of goodies like vitamins, minerals, fish oils, etc. At the end of the three months, my pee was very yellow and I was $100 poorer, but I wasn’t filled with such electrifying energy that I wanted to give up coffee or sprint up a mountain.* So, after I had spent my money and yellowed my pee, I decided to look into the original research that Neolife was so cocky about.

I found two original research articles. I suspect neither of us has the time or mental energy to tackle both, so let’s chat about the first one: “Effects of a carotene-deficient diet on measures of blah blah blah” by Dixon and chums, published in 1994. The author affiliations were from respectable places like the University of California and the Centers for Disease Control, so that seemed legit.

Always check author affiliations! It’s okay to be snobby about Neverheardof University. Of course, that doesn’t mean that any and all studies published by researchers at Harvard are flawless, but it gives you a rough idea of the quality of the research.


It was also published in the journal Free Radical Biology and Medicine, which, although I had never heard of it, has a high impact factor,** so another check.

Then I began to read the methods.



Umm… 9 subjects? That is a very low number, especially for a study involving humans, who are chock-full of randomness and variation. Even lab rats show tons of variability in behavioral and physiological measures despite being inbred and kept in extremely similar environments, so humans, with all of our snowflake complexity, are even worse!
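To make the small-n problem concrete, here is a toy simulation, mine and not from the study, with made-up population numbers. The point is just that an average computed from 9 people bounces around from experiment to experiment far more than one computed from 90:

```python
# Toy simulation (invented numbers, NOT the Dixon study's data):
# how much does a study's headline average wobble when each
# "experiment" only measures 9 people, versus 90?
import random
import statistics

random.seed(42)

def spread_of_sample_means(n, trials=2000, mu=100, sigma=15):
    """Run many simulated experiments of size n and report how much
    the resulting sample means vary from experiment to experiment."""
    means = [
        statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

spread_9 = spread_of_sample_means(9)    # roughly sigma / sqrt(9),  ~5
spread_90 = spread_of_sample_means(90)  # roughly sigma / sqrt(90), ~1.6

print(f"n=9:  means wobble by ~{spread_9:.1f} units across experiments")
print(f"n=90: means wobble by ~{spread_90:.1f} units across experiments")
```

Same population, same measurement; the only thing that changed is sample size. That wobble is exactly why a 9-person result deserves a hefty grain of salt.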


All 9 women were given a beta-carotene (which our bodies convert into Vitamin A) supplement for 4 days. These 4 days were considered baseline.

Then, for over 2 months, the women ate a diet very low in beta-carotene while taking nonsense (aka placebo) pills to trick them into thinking they were still taking beta-carotene.

For 28 days after that, all the women were given a supplement with a butt-load of beta-carotene, AND for the very last 12 days, six capsules of Neolife’s “carotenoid complex.” Probably the most puzzling aspect of this study is that although the authors threw in Neolife products at the end, they made no effort to distinguish the effects of the mega-beta-carotene supplement alone from the effects of the mega-beta-carotene supplement plus the Neolife carotenoid complex. So it is entirely unclear whether any effects shown are due to the whopping serving of beta-carotene, the Neolife supplements, or a combination of both.



Throughout this roller coaster of carotenes, the researchers took samples of the women’s blood to analyze it for various markers of oxidation. Honestly, I’m a neuroscientist and unfamiliar with the exact techniques used in this study to measure different oxidants, so I’m not even going to touch that. But the great news is, neither of us needs to know a thing about the techniques or measures of this study, because the design and analyses tell us enough.

Let’s review what we know so far, just from reading the methods:

  • This study was done on such a small number of women that, even with an otherwise perfect study design, all of the results should be taken with a grain of salt.
  • The researchers looked at the effects of supplementation for roughly 1 month. Neolife people want you to be popping their pills for life. That’s a pretty huge difference. If there’s some sort of long-term benefit or detriment to lifetime use, this study doesn’t even begin to address it.
  • As I harped on before (and it is definitely worth saying again), this study made no effort to distinguish between the effects of a regular ol’ beta-carotene supplement and Neolife’s carotenoid complex.
  • Even if everything else about this study were perfect, including the techniques used to measure oxidation, that’s still just one measure. A very important general principle when evaluating “scientific proof” is what the researchers are using as their metric of effect. Often, an extremely specific result like “beta-carotene supplementation reduced plasma TBARS and erythrocyte superoxide dismutase activity” eventually gets presented, through a twisted game of telephone as non-scientist writers try to interpret and communicate the findings, as “carrots can save your life!” But how many people know what plasma TBARS and erythrocyte superoxide dismutase activity actually are, and what low or high levels really mean? Certainly not I. So always look and see how the researchers evaluated their end point, and try not to accept overly simplistic explanations like “oxidation = bad. Supplement make oxidation low. Supplement = good.”
  • There was no “control group” that was not deprived of beta-carotene before being given supplements, nor any group that was deprived but then allowed to simply start eating normally again without taking a booty load of carrot pills. Disrupting the body by force-depleting levels of a nutritive substance, and then showing your supplements can bring the levels back to normal, doesn’t tell us anything about whether they would work in populations with a normal range of beta-carotene levels, much less whether it would even be healthy to push levels far above baseline! In almost every biological system, it’s about balance, not blasting the system with any one nutritive component. In fact, see this article written for lay audiences (original article here) indicating that too much beta-carotene can be risky biz.


Okay… well, that was cathartic for me, at least. I hope you all feel super smart and can now lay a scientific whoopin’ on anyone who tries to close the argument with “scientific proof.”




*I always want to saunter up a mountain. But sprinting is a different story.


Think Like a Scientist: Overcoming y-axis deceptions

I have been through a lot of schooling. Some of it was a complete waste of time and utter nonsense. Some of it was useful. Thankfully, I am intellectually generous enough to share the highlights of my education so you all can be PhD-level thinkers without all the poverty-level stipends, rat bites, and bi-monthly existential crises.

I decided to write a series of short posts covering some of the most generally useful tools I learned to see through the b.s. when marketing companies, pop-science articles, politicians, and other ruffians present data in order to convince you of something.

In this episode of Think Like a Scientist, let’s talk about y axes.

Okay, let’s set the scene. I’m the writer of Galloblog. I want you to read more Galloblog. In order to convince you, I throw this figure in your face –


Yowza! Pretty convincing, amiright?! Look how far apart those bars are! Reading Galloblog is equivalent to playing with puppies while eating peanut butter and listening to Tim Keller sermons!!

Or.. is it?!
Let me give you a few things to consider when you look at that figure.

Note that the y axis (in red) is zoomed in to show only 0.7 to 0.8. The more zoomed in the axis, the more dramatic any differences between bars will look. Look at the figure below: far less impressive. Glancing at this figure, you might conclude that there are no differences at all, yes? But look! It’s the exact same data.

Now, “zooming in” on the differences between groups isn’t always a shady scientific practice, but when evaluating the quality and importance of the data presented, it’s important to have an understanding of what the possible range of scores actually is.
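You can even put a number on the zoom trick: ask what fraction of the visible axis the gap between the bars takes up. A minimal sketch, assuming hypothetical bar heights of 0.72 and 0.78 (the figure’s exact values aren’t given, and the “data” are made up anyway):

```python
# How big a gap LOOKS depends on how much of the axis you show.
# Bar heights (0.72, 0.78) are hypothetical stand-ins for the figure.
def visual_gap(a, b, y_min, y_max):
    """Fraction of the plotted axis range taken up by the gap between bars."""
    return abs(b - a) / (y_max - y_min)

bar_a, bar_b = 0.72, 0.78

zoomed = visual_gap(bar_a, bar_b, 0.7, 0.8)  # axis cropped to 0.7-0.8
honest = visual_gap(bar_a, bar_b, 0.0, 1.0)  # full 0-to-1 axis

print(f"zoomed axis: gap fills {zoomed:.0%} of the plot height")
print(f"full axis:   gap fills {honest:.0%} of the plot height")
```

Same 0.06-unit difference in the data, but on the cropped axis it fills 60% of the plot, while on the full axis it fills only 6%. That ten-fold difference is pure presentation.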

Ooo! I threw this in at the last minute, for free! Something that drives me *insane*, and that I have seen far too many times in peer-reviewed scientific articles, is showing two figures side by side but with different y axes, as a way of saying: “the difference between these two groups in this condition (in this example, reading Galloblog) is real and we want you to be impressed by it, but we don’t want you to be impressed by the difference between these two groups in this other condition (in this example, reading Matt Walsh’s blog).”


If you just glanced at the figures above, you would think – “Yowza! Galloblog readers are so much happier than Matt Walsh blog readers!”


But if you give the Matt Walsh blog y axis the same “zoom” as the y axis in the Galloblog figure, the two figures look pretty similar. The only real difference here is in the overall happiness level of Galloblog and Matt Walsh blog readers, not in the effect of reading the blog. Surprise, surprise! 😉

Finally, let’s talk very briefly about the label of the y axis. The y label is “Happiness Factor (AU).” What is a “happiness factor” – is it a legit scale that many other researchers have used to evaluate happiness, or did Galloblog researchers pull it out of their bootays?

AU usually means arbitrary units, which means this scale isn’t linked to an observable measurement per se (e.g. “number of times smiled”) but is a relative scale. This doesn’t mean it’s not worth noting, but it does mean you should be asking “what is this happiness relative to?”

Okay — that is all the lecturing either of us could handle for right now. Stay tuned for more opportunities to become a sophisticated critic of all data!


cgallo, PhD


No Galloblog readers were harmed in the writing of this article.