Friday, 18 October 2013

Wronging The Right

After yesterday's terse and trivial post, a lurch into something a lot more serious, with a very long link from The Economist asking just how good science really is, and coming up with the answer that in many cases it may not be very good at all.

The extract below is the conclusion of the piece, which indicates its thrust and coverage. Essentially, it is about work that has been done to replicate the research and findings in a number of key scientific subjects.

The findings are very worrying: too often the new work does not come up with the same findings or results as the earlier, and worse, in many cases there are serious weaknesses in the data, statistics and analysis which compromise or even contradict the original findings.

As elements of this early research form the bedrock of thinking in a number of areas, and have led to extensive further research that assumed their reliability, this is very bad news for some areas of study.

But this applies only to the areas that have been looked at, and these represent a very small proportion of earlier research. If the same pattern were found across the board in critical areas, then who knows what might need revision and revisiting?

The trouble with much of our science is the limitation on what can be done and who can do it. This is tied up with funding, the existing academic bodies and the infrastructure of "expert" committees, too often ruled by vested interests, and inevitably with government decisions made on grounds well removed from science.

In the UK it is my contention that the Health Protection Agency does not do much for health, is engaged in protecting major commercial interests, and is an agency of restrictive government rather than of the public interest.

The article concludes, quote:

Making the paymasters care

Conscious that it and other journals “fail to exert sufficient scrutiny over the results that they publish” in the life sciences, Nature and its sister publications introduced an 18-point checklist for authors this May. The aim is to ensure that all technical and statistical information that is crucial to an experiment’s reproducibility or that might introduce bias is published.

The methods sections of papers are being expanded online to cope with the extra detail; and whereas previously only some classes of data had to be deposited online, now all must be.

Things appear to be moving fastest in psychology. In March Dr Nosek unveiled the Centre for Open Science, a new independent laboratory, endowed with $5.3m from the Arnold Foundation, which aims to make replication respectable.

Thanks to Alan Kraut, the director of the Association for Psychological Science, Perspectives on Psychological Science, one of the association’s flagship publications, will soon have a section devoted to replications.

It might be a venue for papers from a project, spearheaded by Dr Nosek, to replicate 100 studies across the whole of psychology that were published in the first three months of 2008 in three leading psychology journals.

People who pay for science, though, do not seem seized by a desire for improvement in this area. Helga Nowotny, president of the European Research Council, says proposals for replication studies “in all likelihood would be turned down” because of the agency’s focus on pioneering work.

James Ulvestad, who heads the division of astronomical sciences at America’s National Science Foundation, says the independent “merit panels” that make grant decisions “tend not to put research that seeks to reproduce previous results at or near the top of their priority lists”.

Douglas Kell of Research Councils UK, which oversees Britain's publicly funded research, argues that current procedures do at least tackle the problem of bias towards positive results: “If you do the experiment and find nothing, the grant will nonetheless be judged more highly if you publish.”

In testimony before Congress on March 5th Bruce Alberts, then the editor of Science, outlined what needs to be done to bolster the credibility of the scientific enterprise. Journals must do more to enforce standards. Checklists such as the one introduced by Nature should be adopted widely, to help guard against the most common research errors.

Budding scientists must be taught technical skills, including statistics, and must be imbued with scepticism towards their own results and those of others. Researchers ought to be judged on the basis of the quality, not the quantity, of their work.

Funding agencies should encourage replications and lower the barriers to reporting serious efforts which failed to reproduce a published result. Information about such failures ought to be attached to the original publications.

And scientists themselves, Dr Alberts insisted, “need to develop a value system where simply moving on from one’s mistakes without publicly acknowledging them severely damages, rather than protects, a scientific reputation.”

This will not be easy. But if science is to stay on its tracks, and be worthy of the trust so widely invested in it, it may be necessary.

Unquote.
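A footnote on Dr Alberts' point about statistics: the arithmetic of false positives makes his case plainly. The sketch below (in Python, with illustrative numbers of my own choosing, not figures from the extract above) supposes a field tests 1,000 hypotheses, of which 100 are true, using the usual 5% significance threshold and 80% statistical power.

# Illustrative sketch: how many "significant" findings are actually true?
# All inputs are assumptions for illustration, not from the article extract.
alpha = 0.05         # chance a false hypothesis still tests "significant"
power = 0.8          # chance a real effect is detected
n_hypotheses = 1000  # hypotheses tested in this hypothetical field
n_true = 100         # hypotheses that are actually correct

true_positives = n_true * power                    # 80 real effects found
false_positives = (n_hypotheses - n_true) * alpha  # 45 spurious "findings"
share_false = false_positives / (true_positives + false_positives)

print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"Share of 'positives' that are wrong: {share_false:.0%}")  # about 36%

On those assumptions, roughly a third of the "positive" findings would be false before any bias, sloppy analysis or publication filtering enters at all, which is exactly why statistical training and paid-for replication matter.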

When we consider how few of those in government dealing with policy know much about science at all, yet blithely make crucial decisions about our lives, and about the billions that might be spent, on the basis of summaries of summaries of such research, it is no wonder so much goes badly wrong.

But what worries me is the things that are not being researched because they are inconvenient or would upset an existing order.

2 comments:

  1. "Budding scientists must be taught technical skills, including statistics, and must be imbued with scepticism towards their own results and those of others."

    In my experience, most scientists don't have that kind of scepticism in their genes. Yes, they approve of scepticism as an ideal, but they have no notion that it might apply to their own activities. They quite genuinely don't get it.

    Over and over again it is more advantageous to go with the flow, and over and over again that's exactly what they do, without thinking they are doing anything wrong or out of the ordinary.

  2. Demetrius: Thanks again for your kind work.

    I think that the subject you discuss here is pertinent to all the problems that you discuss throughout your writings.

    In nearly all cases, the problems tend to lead back to a finding/belief/study which came up with the hare-brained idea in the first place.

    The idea is propagated through true zealots who like the idea for personal reasons and because it better fits their preconceived notions. It is compressed into dogma and it is then defended for financial/egotistic reasons far removed from the search for truth.
