Nov 9, 2012

Can We Dump Randomized Clinical Trials?

Keith Scott-Mumby

If a boxer knocked out 12 of 13 opponents, would you bet on him winning most of his next 10 bouts? Probably you would. Certainly, nobody would think you were crazy or crooked. But that's bad science! Specifically, 13 fights is not a big enough series of cases to justify confidence that he could keep winning.

We have a lot of problems with defining science, even when we are well-meaning and intelligent (not in the pay of some drug cartel). The answers you get depend, largely, on the questions you ask (like in real life!). But one of the difficulties dogging mainstream science is randomized controlled double-blind trials.

OK, I don't need to go into the definition of double-blind, etc. Let's focus on the word "randomized". It means what it says: somewhere in the study a computer spits out a series of random numbers, deciding who gets which treatment, in what order. It's supposed to remove chance and bias. But if you've ever tossed a coin and got 8 heads out of 10, you'll know that you can't totally rule out the surprises of chance.

The only way to eliminate chance from your calculations is to use very large numbers. We all know that if you go on and on tossing a coin, the totals will eventually come out roughly equal: as many tails as heads. The 8 heads out of 10 was just a lucky run; it gave a false impression. But eventually you would have, say, 2,963 heads and 2,922 tails, or roughly equal.

The trouble is, in scientific investigations you can't always assemble enough people to make a statistically significant case. You might need thousands of cases, yet you can only find a few hundred volunteers. What to do? Common sense says go ahead and make the best of what you have; at least you'll get some sense of a result. You might even get lucky and find the evidence points overwhelmingly in one direction (but remember the 8-heads-out-of-10 warning!). Or your results may not mean anything, and that allows critics to trample on your nice published study if they don't like it.
Big dilemma!
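The coin-toss argument above can be sketched numerically. This is a hypothetical Python simulation (not part of the original article): it shows that short runs of tosses can give lopsided results, while longer runs settle toward 50/50, which is exactly why small trials are vulnerable to chance.

```python
# A minimal sketch of the coin-toss point: short runs of a fair coin
# can look lopsided, but the proportion of heads settles toward 0.5
# as the number of tosses grows.
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

for n in (10, 100, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>6} tosses: {heads} heads ({heads / n:.1%})")

# A fair coin still produces 8 heads in a row with probability
# (1/2)^8 = 1/256, so surprising streaks are not that rare.
print(f"P(8 heads in a row) = 1/{2 ** 8} = {0.5 ** 8:.5f}")
```

The same logic underlies statistical significance: the smaller the trial, the more room there is for a "lucky run" to masquerade as a real effect.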
