Pages

Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science. And don't forget about my website, www.rhysy.net

Friday, 13 November 2015

When Worlds Collide : Science In Society

What Is Science, Anyway ?

Science, someone told me recently, is a business. This bold assertion made me too angry to respond because I've seen first-hand what happens if you try and run an observatory as a business, and it just plain doesn't work. Science is not a business. Nor is it art, or politics, or journalism, or a religion, or anything else. It's its own thing, though like most enterprises it does have some aspects of all of those. Today I'm going to explore a few of those and look at how science and other endeavours relate to each other.

Is science a religion ? Actually screw this one, I can't be bothered.

I've written far too much about why science isn't dogmatic to go through all that again. Simply put, if you think science clings fervently to its beliefs, you are deluded. Nor does religion refuse to admit its mistakes either, though both religious fundamentalists and individual, highly intelligent scientists can and do sometimes behave like this. No amount of devout atheism will save you from being a blithering idiot - or, more to the point, from having any irrational beliefs at all. But I really don't want to dwell on this today, so just go and watch the movie Contact. Don't worry, I'll wait. Just don't bother bringing this up in the comments because I'm not in the mood, OK ?

Fair enough. What about the arts ?

You might not necessarily think that science has much in common with art either, but you'd be wrong. As explained previously, the humanities can be an essential tool in developing rational thinking. I'll get to the most obviously relevant form of this - journalism - in a minute, but even the purely creative arts share important aspects with science. And I don't mean the more obvious overlap of scientific illustrations, which have to be able to inform as well as (in a sense) entertain - I mean something much more fundamental.

Problem solving is not so very different to creativity. This is especially important in theoretical physics, where thinking up radically new concepts is the key to making breakthroughs, but it's also essential to be able to come up with new ways of analysing data or doing new tricks with old technology. Finding new interpretations of old data is itself a creative act.

These similarities mean that sometimes the process of doing both science and the arts can be very similar. Both require large amounts of time to do nothing but think (and in the case of science at least, an awful lot of background reading). Inspiration can't be forced - you cannot make people have new ideas. You can, however, encourage them. Science and art are both sometimes highly elaborate forms of play, exploring the question, "what if I did it this way... ?", or better yet, "what does this button do ?" Such thinking intrinsically demands a liberal, reasonably informal atmosphere. Insisting that people are at their desks during some particular set of hours and only talk to each other during scheduled meetings makes absolutely no damn sense whatsoever.

And art and science sometimes both require large amounts of trial and error and sheer patience. Scientific productivity is hard to measure in any case (more on that later), but it simply cannot be evaluated at all on hourly, daily, or even weekly timescales. Above all, both require the freedom to fail - to spend years on something that may very well be utterly useless without worrying that it might end careers if nothing useful turns up*.

* Recent example - this project took a full seven years to complete, and it could easily have been the case that we would have found nothing interesting whatsoever. So you can take your hourly timesheets and file them where the sun does not shine, thanks.

You might say, "Well, in science a negative result is always useful, and in art, paintings that people hate today might be seen as masterpieces eventually." And that's partially true. Sometimes results in science also aren't recognized as significant until centuries after their inception. And, sometimes, like terrible works of art, they are just god-awful and done by people who are simply very, very stupid.

On the other hand there are plenty of pictures of Jesus, but very few that could be described as "a very hairy monkey in an ill-fitting tunic", so maybe this one wasn't such a waste after all.

The point is that sometimes there's just no way to know in advance if your research will be a load of tripe or lead to the invention of electrical power. To invest only in research you believe is more likely to have practical benefits is the sign of a small mind bereft of vision. You cannot innovate without taking risks. With science, the payoffs - electricity, radio, television, communications satellites, hurricane prediction, the internet, fewer diseases, greater food production, cheap global travel - are so stupendously important that I'm continually amazed by the ongoing need to justify science expenditure at all. It just makes no sense to me whatsoever. Perhaps it's because these things are such an integral part of our daily lives that we don't often think of them as scientific advances.

And no, I don't care about the typo.

Of course science isn't quite the same as the arts. Scientific freedom of thought has to be constrained by observational evidence, whereas the imagination of an artist has no such limits. Scientists can question whether the observations were done correctly, but that's as far as you can go - if your theory predicts that a plane will fly and it doesn't, well that's tough on you. Art on the other hand has no such clear objective measurement.

Journalism then.

Good journalism is the search for truth, no matter how unpleasant it may be or how many people want to stop you from exposing it (excepting perhaps cases where that might endanger innocent people). Similarly, good science is about the search for truth no matter how crazy it might seem or how opposed it may be to any and all ideologies.

Bad journalism, on the other hand, is all about making a sale : telling people what they want to hear or what evokes such a strong emotional reaction that they can't help but feel it's correct. It's much more about who than what or why. Similarly, bad science is about only going for easy, non-controversial results, never considering alternative ideas, interpreting the evidence to mean what you already think it should mean, and deliberately trying to agree or disagree with specific people rather than their ideas.

Sorry Augustine, but that's total rubbish. If it was true we wouldn't have people still claiming the Earth is frickin' flat or refusing to use vaccines. Maybe natural selection will kill off those idiots eventually, but it's a slow and unnecessary process. Bad journalism and bad science (and bad science communication) can be immensely damaging practices.

Good science is a lot like good journalism. The major difference is that science isn't about people, it's about specific things and general trends : the ability to say, "if I do this, given these conditions, then that will always happen." Journalism, on the other hand, is usually only about discovering what happened in the past in a very specific circumstance, rather than what might happen next. Journalists' predictions are (often simply because the number of variables is so large) only speculation, even if they are very well-informed speculation, whereas established scientific results are always true. Heating lead to ~350 °C will always melt it - unlike in politics, it doesn't matter what the mood of the experimenter was or whether they'd gained enough support from their peers.

There's also a curious difference between science and journalism when it comes to impartiality. It's usually the job of a journalist to communicate the findings of experts to the public rather than present their own opinions, at least when it comes to making predictions. But it is not always the job of a scientist to be impartial. Scientists are supposed to be objective, which is not the same thing.

Suppose some imbecile of a politician decides that astrology really works. It's the job of a journalist to interview the politician, scientists, and possibly members of the public to inform people what those groups believe, not to decide who's right and who's wrong. But it is exactly the job of a scientist to decide what's right and what's wrong, to tell you what the evidence says. When the case is as decisive as this, being impartial is not being objective at all, because astrology objectively doesn't work. It doesn't matter if you disagree, you're wrong. Science is not a democratic process.

Not quite true : you do need people's opinions on facts, but we'll get back to that soon.

It's the job of a scientist to form an opinion based on the evidence. As usual it all comes back to this : the beliefs of science are evidence-based and provisional. But for established results which have mountains of evidence (or even irrefutable proof) backing them up, objectivity and impartiality could not be further apart : a scientist who believes the Earth is flat is an oxymoron.

When it comes to cutting-edge research, however, it's another story entirely. Scientists should, of course, try and assess the evidence in an unbiased way. But that does not mean they shouldn't form an opinion about it, only that they should be prepared to change that opinion given different evidence. Of course, when there is very good opposing evidence to an idea, scientists should be honest enough to state, "this is my opinion, but for an alternative you should talk to that guy." But you cannot expect scientists not to express their opinions at all, which brings us neatly on to the next area.

Science and business

There's one very important overlap between science and the world of business : the need for both collaboration and competition. You might get the occasional lone genius, but even they have to read what others have done, and most people work more effectively when they have other people to talk to and bounce ideas off - i.e. generate inspiration, as mentioned earlier. Add to that that science is not easy and takes a very long time, and the need for collaboration becomes obvious.

The need for competition may be less obvious. As a socialist I can't say I normally approve of competition, and on a personal level I hate competition in all forms. "Oh, you want to tackle me for the football ? Please go right ahead, I didn't want it anyway"*. Overcoming this deferential instinct requires an almost overwhelming effort on my part**. So believe me when I say that competition is at least as important to science as cooperation.

* Or, worse, "Oh, you've kicked the ball towards me ? Obviously that cannot be the case because no-one in their right mind would want me to have the ball, so I'll just assume it's for someone else and step out of the way. Sorry to be a bother."

** On the rare occasions I meet someone even more deferential than me, the result is a confused stalemate.

The reason is that scientists, being made of flesh and blood, are human and have subjective opinions, sometimes much more strongly held than the evidence warrants. Competition and rivalry act as a check against this : it's in the interests of your opponents to prove you wrong. Just like collaboration, this doesn't work if you go to extremes. You have to try and attack the opposing view as far as the evidence allows, but no further. It's not always easy and of course people make mistakes and sometimes develop bitter feuds over pretty unimportant things. Similarly, they sometimes even try and cheat the system by undermining this competitive process entirely, just as big businesses sometimes conspire in price-fixing schemes. But mistakes don't invalidate the underlying importance of the competitive process itself, they simply indicate that it's not being carried out correctly.

Beyond that, it's tough to see science as a business. In principle, of course it doesn't matter where the money comes from as long as a good scientific working environment is fostered. But the goal of business - to generate income - is generally incompatible with the goal of science - which is to discover the truth, the whole truth, and not just the truths which make money. The two aren't necessarily opposed, just orthogonal : they don't relate to each other in any way. Science does produce world-changing spin-offs, but not always deliberately and not all that frequently. The truth may set you free, but not every truth can make you rich.

Businesses value productivity. Productivity is also important in science, but as we've seen, it can't be measured on a minute-by-minute basis. This modern idea that you must publish as many papers as possible - the business-like approach - is counter-productive to scientific accomplishment. The pressure to publish inherently encourages the publication of mediocre results, sometimes in the guise of long-winded, obfuscated descriptions that mean nothing but look sexy : "If you can't convince then confuse" as the old saying goes. If you're of the ilk that think science isn't producing enough revolutionary ideas these days (and I'm not) then you should be actively fighting against the "publish or perish" academic culture.

Somewhat paradoxically, transparency is also very important in science, which is another thing that goes directly against the grain of most business models. We need to be open about exactly what was done and how, we just don't need to hold every minor experiment in the same regard as breakthrough discoveries. Which is why I think the publication system could probably do with a reform to account for this.

The ultimate end result of this transparency is knowledge produced for the public good which is freely available to everyone. The typical business practice of keeping everything closed and patented is directly opposed to this : profit-based research is anathema to knowledge for knowledge's sake. And of course investors may have their own biases and aims for commissioned research, and they'll want to make the results seem as important as possible. If this model of funding were the norm rather than the exception, the effect would not be good for science or trust in science.

As I've explained previously, sensationalism is enough of a problem in the current situation. When you report fringe research as though a breakthrough discovery was just around the corner, mainstream scientists are quick to point out why it's either wrong or at least highly unlikely. Because this keeps happening almost continuously, it makes scientists look closed-minded - people have oddly selective, short memories : they forget the legions of incidents where scientists quite correctly said, "nope, this isn't going to work" and it didn't*. And yes, funding fringe research will result in some important discoveries, but it won't be anything like as much as its supporters think. Most of the time you do not get to move directly to "Go" and collect £200 - you have to do years of hard graft first.

* Unless you think they all did but politicians covered it all up. See this.

Unfortunately, sensationalism sells. A money-driven system intrinsically encourages sensationalism and this would, I think, be an utter disaster of apocalyptic proportions. Wine would turn into water, a plague of frogs would destroy us all, and kittens would explode. Well, not quite, but it could come close. If you have a system of funding only from private sources, the number of sensationalist promotion stories will undoubtedly increase dramatically*. Correspondingly the number of projects which simply fail miserably and make no useful contribution at all will also rise. That isn't going to improve public confidence in science - quite the reverse.

* This is much less of a problem in the public sector where research funding depends far less on selling a product.

Then there's the question of whether or not blue-skies pure research would attract anything like as much funding from a private funding model as from public money. I suspect not, because no-one knows which research project will be the big breakthrough. It's especially tough to see a private company financing the LHC or the HST. Though there are some private observatories, they are few and far between. There's also the simple issue that scientists want to do research, not marketing. Hiring people to market inherently unprofitable research ? Not likely.

What about a radically different approach, like Kickstarter or Patreon ? In some ways I can see the appeal of crowdfunding. Potentially I could raise the amount of money I deem necessary in advance of doing the project (thus giving me the all-important freedom to fail), and I can be totally honest with investors about my style of working. "Yeah, I went for a walk to feed the ducks, because my brain was fried and sitting at my computer was making things worse, not better. And anyway I like ducks, so screw you." Unlike standard business models, there's nothing about crowdfunded projects that compels them to be profit-motivated or opaque.

"Yes, I know I was gone for three days. There were a lot of ducks, OK ?"

But in practical terms it's very hard to see this working. First, large scientific endeavours require millions of dollars per year (or more), and sustaining this looks to be very difficult. The commendable project to keep the MOPRA telescope alive via Kickstarter managed to raise almost $100,000, but that's only about enough to run it for a month - enough to finish a survey, but nowhere near enough to operate it long-term. For that, traditional funding is required (which, happily, they now have). In any case, science by donations looks like less of a business and more of a charity (not intended as an insult because I like charities, just to point out that it's not a profitable endeavour).

Second, there's the dubious claim that the public are the best people to decide which research gets funded and which doesn't. It's the second part I take issue with - as for the first, it's your money, do what you like with it. But relying on non-scientists to decide which projects don't get any funding is like asking FIFA to decide which chess players should be banned from competition. Private funds that supplement and increase existing public money are a good thing; relying exclusively on private sources is a horrible idea.

So instead of trying to get corporations to do science, which is a bit like asking Herod for babysitting advice, how about we all give some portion of our earnings to a committee of a mixture of experts and generalists and let them decide how it's spent ?

In an ideal world politics would have no say in science. But science will always have a crucial role in politics - at least if you care about making the best choice given the available evidence. In the real world, mitigating the influence that politics has on science is also very difficult - but it is not impossible.

The Yes Minister episode from which the following exchange is taken is about the proposed use of the fictional chemical "metadioxin", which is proven to be safe but has the unfortunate similarity of name to the toxic "dioxins" (which are real chemicals). Despite knowing that metadioxin is completely safe, the Minister decides to ban it anyway, fearing that to go against public opinion would be too "courageous" a decision :

Sir Humphrey : If you want to be really sure that the Minister doesn't accept it, you must say the decision is "courageous".

Bernard : And that's worse than "controversial" ?

Sir Humphrey : Oh, yes ! "Controversial" only means "this will lose you votes". "Courageous" means "this will lose you the election" !

A fictional example, but only a deluded wombat would pretend that politicians always act in the best interests of their constituents rather than themselves. Or that the general public always know what's best for them. I mean, have you met the general public ? Some of them think the Moon is a hologram, a lot of them think that genetically modified foods are unsafe, and according to many reports half of the population of the most technologically advanced nation on the planet don't believe in evolution. Like science, democracy isn't a perfect system. It's just a lot better than the alternatives.

"But... but... but I thought you valued my sage political advice !"

The role of scientists is not to dictate policy, even in their expert areas. In a representative democracy, scientists are not philosopher kings. Rather science produces results which, being available for free public examination, politicians can consider in their decision-making process. Just like any other pressure group, scientists can campaign for specific policies, but they don't get to set them.

Which brings me back to that earlier remark by John Oliver that you don't need people's opinion on facts. In science, no you don't. In politics, yes you do. In a democracy, if you can't persuade people that what you're doing is right, then riding roughshod over their feelings is very, very dangerous - even if those feelings are misplaced. In science emotions (ideally) have no part to play in making decisions, in politics they can be of critical importance.

This raises the tricky question : should scientists publicly advocate specific policies ? Clearly scientists need a public voice, because the public need to know when politicians are acting against scientific advice. And sometimes the scientific conclusions lead to only one possible course of action, and it simply doesn't make sense for scientists to shut up and refuse to commit to advocating a policy, e.g. vaccines.

But this is not without problems - for both scientists and politicians alike. Twenty years ago, no-one was complaining that climate scientists were politically motivated when they said that humans were warming the planet, because politicians were ignoring them. Now that politicians are starting to make even paltry efforts to follow the scientific advice, the issue has become massively polarised between left and right... at least in America. Strangely for those who believe it's a huge conspiracy or hoax, this doesn't seem to be the case in the UK, where both parties at least try and make noises about fighting climate change*.

* However there is a split, but unfortunately it's difficult to compare the two studies since they asked different questions. Still, certain Tory MPs have made scathing attacks on climate change deniers and even Thatcher herself had a brief flirtation with environmentalism.

Regardless of why this happened, the effects have been damaging for everyone. Scientists, the claim goes, are motivated more by appeasing their political puppeteers than their search for the truth... even though no-one accused them of this until politicians started following their advice ! Politicians didn't tell scientists, "find a result to enact this policy", because clamping down on carbon emissions is difficult for all concerned. The energy sector is financially powerful, integral to our way of life, and generates massive amounts of revenue for the government. It is not remotely plausible to suggest that politicians have been duped into giving up billions in taxes for the sake of a bunch of tree-hugging loonies, and it's even sillier to think that there was a global political conspiracy to manipulate scientific findings in order to harm the economy.

These guys brought down the oil industry ? Yeah, right, whatever.

And yet that is precisely the lie the global warming deniers have sold. Consequently, right-wing politicians who won't listen to expert advice now look like a bunch of complete nutters to almost all scientists (at least in America). Scientists look like they're either massively incompetent or in the pocket of left-wing politicians, and left-wing politicians look like... well, I've no idea what their agenda is supposed to be, because the whole thing is frickin' nuts.

If that's what happens when people think scientists are partisan, it serves as a powerful warning that scientists and politicians make strange bedfellows. Which is not surprising given how fundamentally different the two systems are from each other.

In the Yes Minister episode, Hacker convinces the lead scientist Professor Henderson to alter his report to basically say that the chemical isn't suitable for use in industry. He lies to Henderson that the "proof" isn't as certain as it could be, "some of the findings have been questioned" - and then he uses emotional blackmail to suggest that the press would jump all over him if anything went wrong. It's a combination of politics and journalism that's far more toxic than the chemical itself.

Unfortunately, this kind of thing does happen in the real world. But it is positively deranged to think that practically all climate scientists in the entire world are in the thrall of politicians, unless you have a serious medical-grade level of paranoia coupled with a fundamental misunderstanding of how science works.

There are buffers in place to minimise the level of political interference in the day-to-day running of scientific institutes. Money doesn't come directly from politicians into the pockets of scientists (which would be a disaster). Instead it comes via research councils which send money to universities who then make their own decisions about how to spend it. So while politicians do ultimately have control over which major areas get funding, they have little or no direct influence on what research is actually carried out.

No-one should pretend that this system is perfect. But it looks a lot better than the alternative of a money-driven approach, and let's not forget that the public system has produced revolutionary discoveries. It may be nice to think that politicians or even scientists are suppressing major breakthroughs somehow, but this is fiction. They are all just human. Trying to get them to deliberately co-operate at the best of times is like herding cats; trying to get them to deliberately do something which goes against the very spirit of free scientific inquiry is like herding tigers. Which are on fire. With swords.

Conclusions

We've seen the traits science shares with some other common activities, and which ones it has to reject. It's interesting to try and consider what the Platonic ideal of a scientific working environment would be like : that is, one which maximizes the rate of important scientific discoveries, regardless of whether those results are of direct practical benefit or not. I refrain from using "revolutionary breakthroughs" since overturning the established paradigm is NOT the goal of science. It is the search for truth, whatever that truth may be, even the (admittedly ultra-ridiculous) prospect that we've already found all the answers.

Anyway, it would probably look something like this :

Guaranteed funding over multiple years, with the assurance of further extensions if performance was judged to be satisfactory, though of course still with the prospect of firing people if they're just plain lazy. Give people the freedom to fail but not the freedom to do sod all or contradict the observational evidence.*

As little connection between those funding the research and those doing the research as possible, regardless of whether the money is public or private. Money will inevitably dictate which broad areas of research are studied, but there need to be strong safeguards to prevent it influencing anything more specific than that.

Absolutely impossible to make financial profit directly from specific research projects (though not necessarily prohibiting rewards for consistent good work).

Pressure to publish important results in peer-reviewed journals, but no pressure to publish every single finding of any significance (except maybe in some other format).

Compulsory public release of raw data after publication of the results. As much transparency of the process as possible, excepting the normal sort of privacy that everyone expects in any working environment.

Compulsory public access to all publications.

An informal atmosphere where people can propose ridiculous things without being ridiculed, and can work in whatever way they find most stimulating (within reason) - if that means mostly working in a park while feeding the ducks, then better stock up on bread.

Freedom to be objective, not impartial - that is, to form and express an opinion based on the evidence, not ideology.

Encouragement of competition with rivals to prevent subjective opinions from dominating, i.e. a false consensus.

Performance evaluation based on quality of work, not quantity - or at least a composite of both.

* Incidentally, do you know what "guaranteed funding provided you keep doing the work" is normally called ? It's called a normal freakin' job, that's what. What the hell the advantage of doing science mainly via 2-3 year postdoctoral positions is supposed to be has escaped me entirely.

Although the Platonic ideal doesn't exist (by definition), there's one career that comes closer than any other : a tenured professorship. "But," I'm sure many of you are saying, "tenured professors are the most orthodox and conservative of all !" That is a common misconception. It might be true for the very public figures you see in the media. It is NOT true in the hallowed halls of actual academia. About half of the senior academics I know are in various stages of lunacy, or, as Robert Minchin once so eloquently put it, "The natural state of an academic is to go mad." Whether becoming more and more unorthodox results in better science or not is another matter entirely.

I didn't list public funding among the ideal situation criteria, because I want to emphasise that there is a role for private money in science - and especially in technology. Funding "out there" research is a good thing - as long as it's in addition to mainstream ideas, not a replacement for mainstream science. Private money doesn't worry me, but the prospect of science being done exclusively or mostly from private sources is in my view truly frightening.