[Image: An elderly man in a suit adjusts the hand of the Doomsday Clock]

The saying goes “even a stopped clock gives the right time twice a day”. The implication is that no matter how wrong you are, how broken your reasoning, or how unfounded your opinion, once in a while (by chance) you’ll be right about something. There’s also an inverse corollary: no matter how good your reasoning and your facts are, no matter how often you’re right, sometimes you’re going to make a mistake and be wrong.

Life’s tough like that, and we all really know these things, but it is largely these two principles that have led to a long-term distrust of the process of science.

Keep the stopped clock in mind, because we’ll be getting back to it after a little bit of context.

Science is an interesting process. When it’s done right, a scientist develops an idea by gathering facts and making observations, then tries to prove herself wrong by gathering more facts and conducting experiments to see if she can make the idea fall apart. If she cannot, then other scientists try to blow a hole in the idea with data and experiments. If nobody can do it, then the idea stands until something new comes along that causes the process to begin again, with a better idea.

Here’s one of the things we usually don’t understand, though: finding some new data that doesn’t fit, and discarding an older theory for a newer one, doesn’t actually make the old theory wrong; by which I mean it wasn’t a mistake or an error or a bungle. Most theories that are replaced by better ones are still mostly correct. We’ve just found something that’s a little bit more correct. That doesn’t mean the original one was wrong. It just wasn’t as complete as the new one.

Ideally, as we learn more and gather more data, we incrementally revise our theories and our models, replacing old ones with better ones. Probably most of the theories that held true twenty years ago have been replaced (perhaps several times) with better ones. However, that doesn’t mean the theories of two decades ago were wrong. They’ve usually just been expanded to take new information into account. Most of the things they actually taught us are still likely true. We just understand them better now.

So science can be likened to a process of trying to prove yourself wrong with verifiable facts and reproducible experiments (where possible – because not every field has practical experiments you can run without having a spare planet handy). Susan the scientist has checked her data and her work back to front and tried to find some way to shoot down her theory. She’s done this because her fellow scientists in the field, who might have access to other data or different equipment, are going to try to shoot it down too. Susan’s an expert in her field, and she’s spent days and nights applying that expertise to figure out any way her idea might be wrong.

Now let’s add someone else to the mix.

We’ll call this person, say, George Brandis, just for the sake of argument. George is not an expert in Susan’s field. George has no training, education or experience in Susan’s field. Maybe he has a legal background instead, where the only thing that really matters is what you can convince someone else – a judge or a jury – is true; truths that might not actually be true. In George’s career, success is measured by being able to convince other people that his position is right, whether it actually is or not. These seem like fine credentials for a political career.

George doesn’t really have any verifiable facts about Susan’s field or her theories. He probably couldn’t even explain the reasoning behind them in any detail. However, George has something Susan doesn’t. He has an unfounded opinion (that is, an opinion not grounded in verifiable facts or in structured reasoning from those facts – an opinion which just exists in the absence of anything to support it).

What bugs George about Susan, and people like her, is that they don’t give his opinion equal weight. He says they’re wrong, and tries to convince others of the same thing. Susan has data and a tested model, but George thinks his personal opinion is just as important when it comes to absolute truth.

Why would this be?

Well, one factor would have to be science reporting. When you pick up a newspaper or a magazine, or browse through your favourite news websites, you’ll see various articles about ‘sciencey’ discoveries and research and results.

Regrettably, much of what you’ll read is what I like to call ‘hokum’. That is, complex and intricate research is boiled down into a few paragraphs that are simplified so much that they are misleading or often outright false. The writer doesn’t really understand what they’re reporting on, and they often ‘jazz it up a little’. Sometimes, once it has passed through the hands of an editor or two, the reporting ends up saying the opposite of what Susan found.

So, much of what you read about scientific research and discoveries is simply wrong.

Now, back to the stopped clock.

You, like George, have probably read these various articles about something ‘sciencey’ from time to time and occasionally paused and thought “I don’t think that is right!”

We’ve all done that. It doesn’t help that the article might actually be misrepresenting the facts, but every so often we get exposed to some alleged research result that conflicts with one of those unfounded opinions we all have. We feel in our gut that it is wrong.

Later – perhaps weeks, months or years later – we encounter another article that presents the facts differently, leading us to think the previous view was overturned; or we read about a new theory that replaces the old one; or we learn that Susan’s peers found she’d made a mistake, forcing her to go back and start over.

“Aha! I was right, and science was wrong!” we think to ourselves.

You: 1; Science: nil – right?

That’s the stopped clock at work. Occasionally our unfounded opinions are right (and sometimes we only think they are), but those are the incidents you remember. You don’t remember all the times that your opinion was wrong and the science was right. Sometimes you’re just waiting for science to get far enough along to come up with a new theory that you think will support your opinion. Because you can still convince yourself you’re right. Those lazy scientists just haven’t proven it yet.

And each and every time you do turn out to be right, or imagine that you are, it just reinforces your (and George’s) notion that your opinions are way better than any verifiable science. Why, you don’t even need science to know what’s right and what isn’t!

George isn’t alone, of course. There are plenty of other folks like that – say, on the US House Science Committee – who are “a bit iffy on the whole science thing.”

Most of us aren’t Susan. Most of us aren’t necessarily even experts in our fields. Heck, even Susan is likely to make a mistake now and again (as the stopped clock tells us).

But if I had to bet between Susan and her fellow experts, who try to prove themselves and their theories wrong year after year, on the one hand, and George, who thinks that people with unfounded opinions should carry equal weight when important decisions are to be made, on the other … well, my money would be on Susan.

I wonder where George’s money might be.


Categories: Culture, Opinion, Science.


