Tuesday, June 25, 2013

Not a fan of eczema meta-studies, especially that antibiotics one

You don’t have to look far for an example of how the media can inflate a trivial scientific result into something that looks like important news.

Take last week’s report in the British Journal of Dermatology that giving antibiotics to newborns or infants increases their risk of developing eczema. It was all over the mainstream media, with headlines such as “Report claims antibiotics cause eczema” and “Could Using Antibiotics As A Child Make You Develop Eczema?” I’m still seeing it on Twitter.

I think it’s almost criminally irresponsible to publish news like this when you just know thousands of parents will now hesitate to give their kids antibiotics. The kids will be the ones who suffer needlessly, left to endure potentially life-threatening infections without treatment.

If giving a child antibiotics substantially increased the risk of developing severe eczema, then that news would be worth paying attention to. But that is not what the BJD paper concludes.

For a start, the paper is a meta-study: a review and summary of a large number of original population studies that other scientists have already carried out.

Meta-studies are a great way for scientists to pad their publication records without getting their hands dirty with real research.

In my experience, a meta-study is suspect just because it exists. I don’t see meta-studies coming out in areas in which the science is indisputable (e.g., that UV from the sun causes skin cancer). I see them in areas in which there’s no scientific consensus and most likely the phenomenon under study has a very small real effect. In the field of eczema research, I see meta-studies published about vitamin D, probiotics, traditional Chinese herbal medicine, and so on.

The reason you see meta-studies in these areas is that the trials are all finding different results and someone wants a big picture of what is going on. Lots of noise and a small signal. If it were obvious what was going on, there’d be no point in a meta-study.
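To make that concrete, here’s a toy simulation in Python (the numbers are invented for illustration, not data from any real trial) of twenty small studies measuring a weak effect. Individually they scatter all over the place; pooling them averages the noise down toward the true signal, which is the whole appeal of a meta-study.

    # Toy simulation: 20 noisy trials of one small true effect.
    # All numbers are made up for illustration; no real study data.
    import random

    random.seed(1)
    TRUE_EFFECT = 0.06   # the small real signal
    NOISE = 0.15         # per-trial sampling noise

    trials = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in range(20)]

    print([round(t, 2) for t in trials])        # individual results disagree wildly
    print(round(sum(trials) / len(trials), 3))  # pooling averages the noise down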

But one major question is how to compare studies done with different aims and measures. This question is especially relevant for the field of eczema research, where there isn’t even a consensus about how to diagnose or measure atopic dermatitis. Not that long ago I went to San Diego as a patient representative to the HOME meeting (the third such get-together), at which researchers were trying to settle on a single standard survey form for measuring how bad a patient’s eczema is. In several meta-studies I have seen the authors mention (i.e., complain about) how difficult it is to draw conclusions from multiple eczema population studies.

Then, the conclusions of meta-studies are usually weak. The results are almost always presented as “odds ratios,” which to me seem like mathematical sleight-of-hand to inflate very small results. In the antibiotics–early-life meta-study, the researchers reported an odds ratio of about 1.4. What this means is that you get 1.4 when you divide one number, the odds that a child will develop eczema if given antibiotics, by another number, the odds that a child will develop eczema if not given antibiotics. Assume the second number is about 2:8, or 0.25, corresponding to the roughly 20% chance that a kid in general will get some kind of eczema. An odds ratio of 1.4 then puts the antibiotic group’s odds at 1.4 × 0.25 = 0.35, which converts back to a probability of 0.35/1.35, or about a 26% chance that a kid given antibiotics will develop eczema.
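If you want to check that arithmetic yourself, here’s a short Python sketch of the conversion. It’s just the back-of-the-envelope math above (the function name is mine, nothing from the paper itself):

    # Convert a baseline risk plus an odds ratio into an absolute risk.
    # The numbers match the back-of-the-envelope calculation above.
    def risk_from_odds_ratio(baseline_risk, odds_ratio):
        baseline_odds = baseline_risk / (1 - baseline_risk)  # 0.20 -> 0.25, i.e. 2:8
        new_odds = baseline_odds * odds_ratio                # 0.25 * 1.4 = 0.35
        return new_odds / (1 + new_odds)                     # 0.35 / 1.35 = ~0.26

    print(risk_from_odds_ratio(0.20, 1.4))  # ~0.26: about a 26% chance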

Big deal: a six-percentage-point increase in risk, and only if you believe the meta-study, which is comparing 20 other studies that all used different methods and measures.

Is that worth risking your child’s life for?
