Think Like a Scientist: No evidence vs. evidence against in Facebook use and anxiety

This post will address one of the most important science-related concepts I think I’ve ever grasped: No evidence is not the same as evidence against.

“Whatever do you mean, Dr. Galloswag?!” exclaims you.

Okay – let’s think about Facebook use in relation to anxiety. Facebook has been accessible to the unwashed masses since 2006. I didn’t sign up until I began undergrad in Fall 2007.  #ancient Pretend my mum was nervous about the idea of me joining Facebook.

***

Mum of Cgallo: “I don’t know sweetums, it just seems like having that much interaction with random people without actually seeing them face-to-face could be bad for your mental health.”

Young Cgallo: “There is no evidence that Facebook use is linked to anxiety, Mum! Get out of my face!”

***

Guess what? Young Cgallo was technically right – at the time there wasn’t any scientific evidence. When I entered the search terms “Facebook, anxiety” into PubMed (a database of life-science and biomedical articles), the earliest search result was from 2009, and the earliest might-be-relevant search result was dated 2013.

[Screenshots: PubMed search results for “Facebook, anxiety”]

See?! I’m tellin’ the truf!

Why this delay? Because it took a while for older adults to realize how impactful Facebook was. Because research is slow. And so there was no evidence because, well, no one had looked for it. But now there are articles galore on the relationship between Facebook use – and other forms of social media use – and anxiety.

[Screenshot: a recent article on Facebook use and anxiety]
For example…

So Cgallo’s Mum in this imaginary situation was vindicated over time!

Takeaway #1: Sometimes someone can be technically correct that there is “no evidence” — but that’s because no one has published data on the topic at all!

Now, let’s imagine another scenario — what if there had been multiple studies of Facebook and anxiety, but most or all of them reported no significant correlation between Facebook use and anxiety? That is much more informative than there simply being no data at all… but it’s only moderately in favor of young Cgallo. When a study doesn’t find a relationship, it could be because…

  • There is no correlation between the variables of interest (in this case, Facebook use and anxiety)

OR…

  • Power issues: The study did not include enough participants to detect meaningful differences even if they were there.
  • Operational-definition issues: The study defined anxiety in a funky way. One study might decide to look at Facebook use in relation to being diagnosed with an anxiety disorder by a therapist, another in relation to scoring high on a standardized anxiety scale, yet another in relation to self-reported feelings of stress.
  • Time-scale issues: The study could have looked at the effect of Facebook use over the course of a week and found no correlation to anxiety. That still doesn’t tell us much about the effect of Facebook use over several years.
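The power issue, in particular, is easy to see with a quick simulation. Here’s a minimal sketch in Python (all numbers made up for illustration): suppose heavy Facebook use really does raise anxiety scores by a small amount, but each study only recruits 20 people per group.

```python
import math
import random

random.seed(42)

def small_study_finds_effect(n_per_group=20, true_effect=0.3):
    """One hypothetical study: anxiety scores (in standard-deviation
    units) for non-users vs. heavy Facebook users."""
    non_users = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    users = [random.gauss(true_effect, 1.0) for _ in range(n_per_group)]

    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)

    # Two-sample t statistic; |t| > 2 is roughly "p < 0.05"
    se = math.sqrt(var(non_users) / n_per_group + var(users) / n_per_group)
    return abs((mean(users) - mean(non_users)) / se) > 2.0

n_studies = 2000
detected = sum(small_study_finds_effect() for _ in range(n_studies))
print(f"Tiny studies that detect the (real!) effect: {detected / n_studies:.0%}")
```

With these made-up numbers, only around 15% of studies reach significance, even though the effect is genuinely there. So a stack of small studies that “found nothing” is exactly what a real-but-modest relationship would produce.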

Takeaway #2: Sometimes someone can be technically correct that there is “no evidence” — but that’s because all the studies conducted on the issue of interest had design issues.

I remember the first time I really thought about this in graduate school, and it’s actually pretty frustrating. But there is almost always going to be another, usually different/better way a researcher could approach a research question. So oftentimes, a lack of evidence means absolutely nothing meaningful IRL.

“What are you getting at here, Cgallo? Are you trying to suggest that we can never really say with confidence that two things are not related to each other??” demands you.

“Mais non!” Cgallo sputters.

For one, if there are many good-quality studies (e.g. large sample sizes, good operational definitions, relevant time scales) conducted on an issue and none of them find an association, that’s a pretty good clue that there may not actually be a relationship between Facebook use and anxiety, or whatever you’re interested in (autism and vaccines *cough cough*).

But let’s contrast all of this with the gold standard: evidence against!

What do I mean? Well, many researchers are terrified of publishing mumbo jumbo, so they err on the side of caution and choose statistical thresholds that are more likely to give false negatives than false positives (I may go more into what this means in a future post, if it pleases the queen). As a result of this statistical conservatism (teehee), it’s quite difficult to get results that say “yes! Thing 1 is related to thing 2 in a meaningful way!”

*SO* if you really want to argue that there’s no relationship between Facebook use and anxiety, find evidence against this statement. How? Well, what if there were an entire body of research pointing to Facebook use being linked to feelings of calm, tranquility, peace, stability, happiness, etc.? That is very different – and in my opinion, much more meaningful – than a simple absence of evidence.
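To make that conservatism concrete, here’s a small Python sketch (hypothetical numbers again): with the conventional significance cutoff, false positives stay rare by design, while false negatives can be very common.

```python
import math
import random

random.seed(7)

def study_finds_effect(n_per_group, true_effect):
    """One hypothetical two-group comparison of anxiety scores."""
    a = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n_per_group)]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    se = math.sqrt(var(a) / n_per_group + var(b) / n_per_group)
    return abs((mean(b) - mean(a)) / se) > 2.0  # roughly "p < 0.05"

trials = 4000
false_positives = sum(study_finds_effect(30, 0.0) for _ in range(trials)) / trials
false_negatives = sum(not study_finds_effect(30, 0.3) for _ in range(trials)) / trials
print(f"False positives (no real effect):     {false_positives:.1%}")
print(f"False negatives (real effect missed): {false_negatives:.1%}")
```

The cutoff keeps false alarms near 5%, but with a modest effect and modest samples, most real effects get missed – which is why “we found nothing” is such weak currency compared to positive evidence for the opposite.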

Takeaway #3: The absence of evidence for something (e.g. Facebook use being anxiogenic) is not nearly as powerful as evidence for the opposite (e.g. Facebook use being anxiolytic).

Great! I think we all feel better now! Make sure you share this article on Facebook!

Think Like a Scientist: Don’t be a doof about “scientific proof”

“Scientifically proven” is often thrown in as a final punch to convince you to buy something (“Scientifically proven to support liver health!”) or to take a certain political stance (“there is no scientific proof that teaching abstinence in public schools reduces teenage pregnancy”). But regardless of my own susceptibility to marketing ploys or personal political beliefs, this phrase often makes me cringe.

Now, my goal here is *not* to convince you to doubt anyone and everyone who talks about scientific proof. There is, for example, strong scientific proof of gravity. So going too far in one direction – immediately rejecting a claim solely because someone assures you it has been scientifically proven – is just as stupid as immediately accepting a claim because someone assures you it has been scientifically proven.

So, how do you even begin to evaluate these “scientifically proven” claims?

Let’s start with a real-life example of mine, with the nutrition company Neolife.

A few years back I had several friends who were bigtime crusaders for Neolife supplements. “Cgallo, you will be especially impressed by these supplements because they are backed by Science!” they would insist. In fact, the company tagline is even “Based in nature | Backed in Science.” Being the curious and trusting person that I am, I went ahead and bought a 3-month supply of their little super pack of wellness, which included all sorts of goodies like vitamins, minerals, fish oils, etc. At the end of the 3 months, my pee was very yellow and I was $100 poorer, but I wasn’t filled with such electrifying energy that I wanted to give up coffee or sprint up a mountain.* So of course, after I spent my money and yellowed my pee, I decided to look into the original research that Neolife was so cocky about.

I found two original research articles. I think both of us only have the time and mental energy to tackle the first one. So, let’s chat about the article “Effects of a carotene-deficient diet on measures of blah blah blah” by Dixon and chums, published in 1994. The author affiliations were from respectable places like the University of California and the Centers for Disease Control, so that seemed legit.

[Screenshot: Dixon et al. title and author affiliations]
Always check author affiliations! It’s okay to be snobby about Neverheardof University. Of course, that doesn’t mean that any and all studies published by researchers at Harvard are flawless… but it gives you an idea of the quality of the research.

 

It was also published in the journal Free Radical Biology and Medicine, which, although I had never heard of it, has a high impact factor,** so another check.

Then I began to read the methods.

[Screenshot: the Methods section describing the study’s subjects]

Umm… 9 subjects? That is a very low number — especially for a study involving humans, who are chock-full of randomness and variation. Even lab rats show tons of variability in behavioral and physiological measures despite being inbred and kept in extremely similar environments — so humans, with all of our snowflake complexity, are even worse!

*Anyway*

All 9 women were given a beta-carotene (which our bodies convert into Vitamin A) supplement for 4 days. These 4 days were considered baseline.

Then, for over 2 months, the women were made to eat a diet very low in beta-carotene while taking nonsense (aka placebo) pills to trick them into thinking they were still taking beta-carotene.

For the 28 days after that, all the women were given a supplement with a butt-load of beta-carotene, AND for the very last 12 days, six capsules of Neolife’s “carotenoid complex.” What is probably the most puzzling aspect of this study is that although the researchers threw in Neolife products at the end, they made no effort to distinguish the effects of the mega-beta-carotene supplement alone from the effects of the mega-beta-carotene supplement + Neolife carotenoid complex. So it is entirely unclear if any effects shown are due to the whopping serving of beta-carotene, the Neolife supplements, or a combination of both.

[Table: the study’s supplementation timeline]

Throughout this roller coaster of carotenes, the researchers took samples of the women’s blood and analyzed them for various markers of oxidation. Honestly, I’m a neuroscientist and unfamiliar with the exact techniques used in this study to measure different oxidants, so I’m not even going to touch that. But the great news is, neither you nor I have to know a thing about the techniques or measures of this study, because the design and analyses tell us enough.

Let’s review what we know so far, just by reading the methods —

  • This study was done on such a small number of women that, even with an otherwise perfect study design, all of the results should be taken with a grain of salt.
  • The researchers looked at the effects of supplementation for roughly 1 month. Neolife people want you to be popping their pills for life. That’s a pretty huge difference. If there’s some sort of long-term benefit or detriment to lifetime use, this study doesn’t even begin to address it.
  • As I harped on before — but is definitely worth saying again — this study made no effort to distinguish between the effects of a regular ol’ beta-carotene supplement and Neolife’s carotenoid complex.
  • Even if everything else about this study was perfect — including the techniques used to measure oxidation — that’s still just one measure. A very important general principle when evaluating “scientific proof” is what the researchers are using as their metric of effect. Often, extremely specific results like “beta-carotene supplementation reduced plasma TBARS and erythrocyte superoxide dismutase activity” — through a twisted game of telephone as non-scientist writers try to interpret and communicate the findings — eventually get presented as “carrots can save your life!” But how many people know what plasma TBARS and erythrocyte superoxide dismutase activity actually are, and what low or high levels really mean? Certainly not I. So always look and see how researchers evaluated their end point, and try not to accept overly simplistic explanations like “oxidation = bad. Supplement make oxidation low. Supplement = good.”
  • There was no “control group” that was not deprived of beta-carotene before being given supplements, or any group that was deprived but then allowed to just start eating normally again without taking a booty load of carrot pills. Disrupting the body by force-depleting levels of a nutritive substance… and then showing your supplements can bring the levels back to normal… doesn’t tell us anything about whether they would work in populations with a normal range of beta-carotene levels, much less whether it would even be healthy to increase levels too far above baseline! In almost every biological system, it’s about balance, not blasting the system with any one nutritive component. In fact, see this article written for lay audiences (original article here) indicating that too much beta-carotene can be risky biz.

 

Okay… well, that was cathartic for me, at least. I hope you all feel super smart and can now lay a scientific whoopin’ on anyone who tries to close an argument with “scientific proof.”

YOU’RE WELCOME, AMERICA!

— EDITORIAL NOTES —

*I always want to saunter up a mountain. But sprinting is a different story.

**Impact factor is a rough (and imperfect) measure of how often the average article in a journal gets cited — higher generally means the journal is more widely read and more selective.

Think Like a Scientist: Overcoming y-axis deceptions

I have been through a lot of schooling. Some of it was a complete waste of time and utter nonsense. Some of it was useful. Thankfully, I am intellectually generous enough to share the highlights of my education so you all can be PhD-level thinkers without all the poverty-level stipends, rat bites, and bi-monthly existential crises.

I decided to write a series of short posts covering some of the most generally useful tools I learned to see through the b.s. when marketing companies, pop-science articles, politicians, and other ruffians present data in order to convince you of something.

In this episode of Think Like a Scientist, let’s talk about y axes.

Okay, let’s set the scene. I’m the writer of Galloblog. I want you to read more Galloblog. In order to convince you, I throw this figure in your face –

[Figure: happiness of Galloblog readers vs. non-readers, with the y axis running from 0.7 to 0.8]

Yowza! Pretty convincing, amiright?! Look how far apart those bars are! Reading Galloblog is equivalent to playing with puppies while eating peanut butter and listening to Tim Keller sermons!!

Or… is it?!

[The same figure, with the y axis highlighted in red]
Let me give you a few things to consider when you look at that figure.

Note that the y axis (in red) is zoomed in to show only 0.7 to 0.8. The more zoomed in, the more dramatic any differences between bars will look. Look at the figure below — far less impressive. Glancing at this figure, you may conclude that there are no differences at all, yes? But look! It’s the exact same data.

[The same data, with the y axis running from 0 to 1]

Now, “zooming in” on the differences between groups isn’t always a shady scientific practice, but when evaluating the quality and importance of the data presented, it’s important to understand what the possible range of scores actually is.
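You can actually put a number on how much “zooming” inflates a difference. Here’s a tiny Python sketch (the bar heights are made up to match the hypothetical figures): the visual gap between two bars is just their difference divided by the span of the y axis.

```python
def visual_gap(bar_a, bar_b, y_min, y_max):
    """Fraction of the plotted y-axis range separating two bars."""
    return abs(bar_a - bar_b) / (y_max - y_min)

# Hypothetical happiness scores: non-readers vs. Galloblog readers
non_readers, readers = 0.72, 0.78

zoomed = visual_gap(non_readers, readers, y_min=0.7, y_max=0.8)
full = visual_gap(non_readers, readers, y_min=0.0, y_max=1.0)

print(f"Zoomed axis (0.7 to 0.8): bars sit {zoomed:.0%} of the plot apart")
print(f"Full axis (0 to 1):       bars sit {full:.0%} of the plot apart")
```

Same data, but the zoomed plot makes the gap span 60% of the figure instead of 6% — a tenfold difference in visual impression, with zero difference in the underlying numbers.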

Ooo! I threw this in at the last minute for free! Something that drives me *insane* — and that I have seen far too many times in peer-reviewed scientific articles — is showing two figures side by side with different y axes, as a way to say “the difference between these two groups in this condition (in this example, reading Galloblog) is real and we want you to be impressed by it, but we don’t want you to be impressed by the difference between these two groups in this other condition (in this example, reading Matt Walsh’s blog).”

[Side-by-side figures: Galloblog and Matt Walsh blog reader happiness, plotted with different y-axis scales]

If you just glanced at the figures above, you would think – “Yowza! Galloblog readers are so much happier than Matt Walsh blog readers!”

[The same two figures, plotted with matching y-axis scales]

But if you give the Matt Walsh blog y axis the same “zoom” as the y axis in the Galloblog figure, the two look pretty similar. The only real difference here is in the overall happiness level of Galloblog and Matt Walsh blog readers — not in the effect of reading the blog. Surprise, surprise! 😉

Finally, let’s talk very briefly about the label of the y axis. The y label is “Happiness Factor (AU).” What is a “happiness factor” – is it a legit scale that many other researchers have used to evaluate happiness, or did Galloblog researchers pull it out of their bootays?

AU usually means arbitrary units, which means this scale isn’t linked to an observable measurement per se (e.g. “number of times smiled”) but is a relative scale. This doesn’t mean it’s not worth noting, but it does mean you should be asking “what is this happiness relative to?”

Okay — that is all the lecturing either of us could handle for right now. Stay tuned for more opportunities to become a sophisticated critic of all data!

xoxoxo,

cgallo, PhD

— EDITORIAL NOTES —

No Galloblog readers were harmed in the writing of this article.