Tuesday, May 5, 2015

Blinded by Science blog 6: How can I trust that the information I am exposed to or gathering is accurate?


Welcome once again to another installment of Blinded by Science.  Today’s entry will have me answering two questions at once, not because of some silly notion of trying to work harder, but because the questions are so similar that I’m hitting two birds with one stone (work smarter, not harder).  Alicia E. asks, “Why can’t you believe everything you read on the Internet?  AKA What is peer-reviewed research and why is it important?  Please intersperse links to hilarious quacks peddling their conspiracy theories as science.”  In addition, Matthew L. asks, “There are so many ways to get information nowadays, what is the best source and how do you know you’re getting the “correct” answers?  If I want to find an answer should I use Wikipedia, WebMD, Google Scholar or PubMed?  What makes them different anyways?”

Wow.  There are quite a few parts to each of those questions, but both basically boil down to “How can I trust that the information I am exposed to or gathering is accurate?”  This entry is especially important for me because I bet I will soon be answering questions that deal with “controversial” topics, and it will be nice to be able to refer back to this post when questioned on my responses.  For that same reason, I’m going to try and keep this post more “theoretical” and not go into any specifics of “quacks peddling their conspiracy theories as science.”  Not only will I most likely end up mentioning these quacks in subsequent posts anyway; I don’t want to give them any more traffic by linking to them if I can help it.  Heck, I’ll even try and keep this post serious.

So let’s get down to it.  How can you trust information presented to you?  I will freely admit to being biased in this regard (keep in mind that being biased does not automatically make me wrong, and it is always good to recognize when you are biased), but if the information was generated through science, or cites scientific studies, you can generally feel good about it.

Okay, the seriousness didn’t last long.

Now before I get accosted by everyone, realize that my statement comes with an important caveat that I will be discussing a little further on, but for right now, please grant me a little leeway.

According to Dictionary.com, science is “systematic knowledge of the physical or material world gained through observation and experimentation.”  Science aims to answer questions about the physical universe and thereby increase our knowledge of it, usually via experiments.  I’m sure you all remember elementary/middle/high school science classes where you had to generate a hypothesis and then test it.  Well, that is the most basic form of science and its most important aspect.  Anyone on the street can walk up to you and say that eating chocolate will cause your intestines to explode and kill you.

Would still be worth it.

But would you really take what they said at face value?  No.  And neither does science.  The claim that chocolate consumption causes death by intestinal explosion (the hypothesis) would be tested.  Assuming an ethics board for human experimentation wouldn’t have a problem with it, an easy test would be to give some people chocolate, give others none, and see whether more explosion deaths occur in the chocolate group.

But this brings me to the important caveat to my earlier statement.  Science does not work particularly fast, so it may take a while to get at the answer.  One of the most important concepts in science is reproducibility: the idea that if an experiment is repeated in the same way by someone else, it should produce the same results.  It is part of the self-correcting nature of science; if results cannot be duplicated, they are called into question.  Additionally, the number of subjects in the experiment matters because of basic probability (having more subjects helps ensure that the results you are seeing are due to your experiment and not merely to chance).
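To make that probability point concrete, here is a minimal sketch in Python (the 1-in-100 background rate and everything else in it are entirely made up for illustration) showing how a rare chance event can look like a dramatic finding in a one-subject study, while a larger study settles down near the true rate:

```python
import random

# Entirely hypothetical: suppose some rare event (exploding intestines, say)
# happens to about 1 in 100 people no matter what they eat.
BACKGROUND_RATE = 0.01
TRIALS = 10_000  # how many imaginary studies we simulate

def run_study(n_subjects):
    """Return the fraction of subjects who experienced the event by chance."""
    events = sum(random.random() < BACKGROUND_RATE for _ in range(n_subjects))
    return events / n_subjects

# With one subject, every study reports an event rate of either 0% or 100%.
one_subject = [run_study(1) for _ in range(TRIALS)]
print("Fraction of n=1 studies reporting a 100% event rate:",
      sum(rate == 1.0 for rate in one_subject) / TRIALS)

# With one hundred subjects, the observed rates cluster near the true 1%.
hundred_subjects = [run_study(100) for _ in range(TRIALS)]
print("Average observed rate across n=100 studies:",
      sum(hundred_subjects) / TRIALS)
```

Roughly one in every hundred of those single-subject studies “finds” a 100% event rate even though nothing in the experiment caused anything; with a hundred subjects per study, that kind of fluke essentially disappears.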

Using the chocolate example from above, let us imagine a world in which you ran the experiment.  You enroll one volunteer and feed him/her a piece of chocolate.  Immediately, this person’s intestines explode.  Based on your experimental design (one person, one piece of chocolate), you conclude that yes, chocolate does cause your intestines to explode.


You write up your results as a scientific paper and get it published (we will come back to this point later).  Others read the paper and wish to confirm your results.  They enroll one hundred people (because, let’s be honest, who wouldn’t join a study that was handing out free chocolate?  Also of note, human subjects research laws are apparently ridiculously lax wherever these experiments are taking place).  This study also controls for other variables (like making sure no one ingests nitroglycerin during the study or eats chocolate made at a gunpowder factory).  All one hundred subjects eat the chocolate and are explosion-free.  Now that there is disagreement in the literature, more experiments are done to try and confirm which side is correct.  All of these experiments: no explosions.  The consensus in the field becomes that chocolate is safe to eat.

Thank god!

But what about the original study, the one that had someone explode after eating chocolate?  It had some obvious experimental design problems that might explain why its result differed from every other experiment.  The biggest problem was having only one subject.  When you don’t have enough subjects in a study, the results you observe may just be due to random chance and have nothing to do with what you are studying.  This is why so many scientific papers use statistical tests to determine whether their observations are real or just due to chance.

This is also where the term “significant” as it applies to science comes from, because it means something slightly different than its “real-life” definition (contrast definition 1 with definition 3).  Results are considered significant if, and only if, they pass statistical muster and are therefore judged unlikely to be due to random chance alone.  Note that this does not mean the results are large or important.

But then how do we know whether published studies were designed appropriately?  Science has a system in which other scientists review your work before it can be published, checking for design or interpretation flaws.  This “peer-review” system works rather well and certainly catches a lot of the most egregious problems (sorry, egregious is such a fun word that I had to use it).  But it isn’t perfect, which is why reproducibility is such an important part of science as well.  Reproducibility helps to catch scientific fraud and methodological mistakes because, if results cannot be repeated, the community takes a harder look at the methodology used.
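If you are curious what one of those statistical tests actually looks like, here is a minimal sketch in Python using SciPy’s Fisher’s exact test.  Every count in it is invented by me purely for illustration (including the control group, which the story above never actually had):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 tables: rows are [chocolate group, control group],
# columns are [exploded, did not explode].

# The original one-person "study": even the most extreme possible outcome
# cannot reach significance with a single subject per group.
tiny_study = [[1, 0],    # chocolate group: 1 exploded, 0 fine
              [0, 1]]    # control group:   0 exploded, 1 fine
_, p_tiny = fisher_exact(tiny_study)
print(f"p-value with one subject per group: {p_tiny:.2f}")    # 1.00

# A (still completely fictional) larger study where the effect is obvious.
big_study = [[30, 20],   # chocolate group: 30 exploded, 20 fine
             [1, 49]]    # control group:    1 exploded, 49 fine
_, p_big = fisher_exact(big_study)
print(f"p-value with fifty subjects per group: {p_big:.2g}")  # far below 0.05
```

With one subject per group, even a “perfect” result gives a p-value of 1.0 and can never be called significant; the larger (equally fictional) study easily clears the conventional 0.05 threshold.  That is all “significant” means here, and it still says nothing about how big or important an effect is.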

But what the heck does this all mean for the original questions of this post?  It means science is really good at catching its own mistakes and is self-correcting, though sometimes this process can take a long time (it takes time for further studies to be published or for better technology to be developed that allows for better experimental design).  As such, on the internet and in life, you should just trust me for your informational needs.

I am not a crook.

Seriously though, where the internet is concerned, information you find on sites like PubMed.gov and Google Scholar is usually very trustworthy.  They mostly contain articles that have been peer-reviewed and published in academic journals.  But they are databases of articles, so it can be hard to find the specific piece of information you want.  Sites like Wikipedia make it easier to find the information you want and are usually very trustworthy when it comes to facts (especially because they have to cite their sources).  But because it is possible for pretty much anyone to edit the content, you need to be more skeptical of interpretations based on those facts, or of opinions expressed as facts.

So what should you be most skeptical of when it comes to the information presented to you?

1) Articles that cite only one study and draw outlandish claims from it (especially if it is a recent study):  News articles do this all the time.  They blow a study’s conclusions way out of proportion.  As an example, imagine a paper observing that cancer cells have a defect in glucose metabolism that makes them unable to survive without an adequate supply.  In the discussion, the researchers mention that perhaps decreasing dietary glucose could help treat cancer patients.  News sites, in an effort to generate hits, would create headlines like “Study says sugar causes cancer!” or “Study says low-glucose diet cures cancer!”  That’s why it is always better to actually read the source if you can.  Plus, as I said above, until a study’s results have been reproduced, take them with a grain of salt.

2)  Anecdotal evidence (personal stories):  I see this constantly online: a website, or a poster in a comments section (by the way, reading comments sections is possibly the quickest way to lose all faith in humanity, and yet, despite knowing this, you are never able to stop), shares a personal story about themselves, their immediate family members, or their friends (anti-vaccine stories being the most common I’ve come across, but many other arguments lean on anecdotal evidence as well).


But anecdotal evidence is ABSOLUTELY TERRIBLE for coming to any sort of conclusion about anything*.  There are no statistics involved and no control of possible confounding variables.  Random chance happens.  Just because your significant other broke out in acne all over his/her body two hours after eating peanut butter ice cream does not mean ice cream causes acne.  That was an awful thing to have happen, but it would be irresponsible (at best) to immediately blame it on the ice cream.  It could have just been a freak occurrence or it could have been the new body soap he/she used in the shower that morning.  The point is, until the situation is systematically studied, you just can’t know for sure what the cause was and that is why you have to be extremely skeptical about anecdotal stories.

I’d like to end this post with a little warning to everyone who looks up information online, reads the newspaper, or watches the news on TV:  beware of confirmation bias.  Confirmation bias is the unconscious tendency to believe information that agrees with or reinforces your previously held ideas and to reject information that disagrees with them.  Someone who already thinks ice cream is bad for you would be much more likely to take the story of ice cream causing acne as truth, and less likely to believe someone who argues that ice cream couldn’t cause acne, even if that second person had evidence.  So always be on guard when presented with new information so you can evaluate it fairly, and please keep in mind any biases you might have (because we all have them) so you don’t disregard good information just because it calls into question a belief you already hold.

*Anecdotal evidence can provide rationale for scientific studies on the subject, but taken alone, this type of evidence means nothing.
