16 December 2012

Why Strong Science Assessors Matter

In my latest Bridges column I connect Hurricane Sandy to Mad Cow disease through the repeatedly-learned lesson that effective use of science in decision making depends upon having strong institutions. Here is how it starts:
Last month in Berlin, I participated in the 10th anniversary conference of the German Federal Institute for Risk Assessment – the Bundesinstitut für Risikobewertung (BfR). The BfR is one of a number of European organizations that Catherine Geslain-Lanéelle, executive director of the European Food Safety Authority (EFSA), characterized at the conference as "the children of Mad Cow disease." This group of siblings includes the EFSA, departmental chief scientific advisors in the UK, and others. These organizations, and the conditions under which they were created, remind us that if science is to be well used in policy and politics, then strong institutions are necessary. This is a lesson continuously relearned, most recently in the United States in the aftermath of Hurricane Sandy.
You can read the rest here, and for background on the science and politics of the "hurricane deductible" see this post from last week. No doubt I will have occasion to re-visit this subject later this week.


  1. 1. Remember that the Italian science advisors who told everyone the pre-shocks didn't matter, that the person saying a big earthquake was imminent was a quack, and to ignore the tradition and advice of getting away, were prosecuted. I'm not sure this is a bad thing when scientists demand both immunity and authority simultaneously.

    2. I'm in the middle of reading (via downpour.com audiobook) Gary Taubes's "Good Calories, Bad Calories," about all the politics and intrigue in the - I can't call it "science" - of obesity and health. There are other examples from medicine.

    3. Lamarck vs. Lysenko - who had the stronger institution? It isn't strength but meekness and truth that is the virtue.

  2. Could you elaborate on exactly what you mean by "strong institutions" here? Do you just mean institutions that can translate scientific judgements into objectively determined numbers? Or is there more to it? What would make a "strong" institution?



  3. -2-Jos

    I'll have a lot more to say on this, as it is the subject of a new chapter in the 2nd edition of THB. Here is a short bullet list I shared in my BfR presentation of "best practices" for strong science assessors:

    * Conflict of interest guidelines
    * Rigorous handling of uncertainties
    * Explicit engagement of alternative views
    * Formal elicitation of decision makers
    * Complete data and method transparency
    * Public engagement
    * Explicit consideration of policy options
    * Research on science for policy and policy for science
    * Decision process evaluation and design

    The BfR assessment guidelines are really excellent along these lines.



  4. Hi Roger,

    Last week I gave a short talk on some of these issues as addressed in your book, Mike Hulme's "Why we disagree ..." and the ClimateShift project by Matt Nisbet, and last month I organized a workshop for some high level officials from various ministries, so this list is really helpful, thanks for that!

    Could you explain briefly what you mean with "formal elicitation of decision makers"?

    I understand what is meant with expert elicitation, but how to apply that to policy makers?

    And with regard to public engagement, would that be anyone or are you thinking of a particular part of the public (or does that depend on the extent of the problem and the corresponding part of the public that is involved?)?

    Cheers, Jos.

  5. -4-Jos

    By "formal elicitation of decision makers" I simply mean a process through which it is determined what scientific questions policy makers want answered from scientists.

    Too often expert assessments reflect the answers to questions that experts think policy makers might want answered. (Aside: So this is how we got the "hockey stick"!) Such a process is of course iterative (e.g., Policy maker: "I want a perfect 100-year prediction" ... Scientist: "Well, I can't give you that, but how about ...").

    Dan Sarewitz and I proposed one example of such a formal process in this paper, which goes into a lot of detail:

    Sarewitz, D., and R.A. Pielke, Jr. (2007), The neglected heart of science policy: Reconciling supply of and demand for science. Environmental Science & Policy 10:5-16.

    A final point -- giving policy makers answers to _their_ questions that can be addressed through the tools of science is very different from telling policy makers what they should be doing. There is a clear difference between "assessment" (or what I have called "science arbitration") and advocacy.


  6. Hi Roger,

    Thanks, that's clear.

    Looking forward to reading that new chapter. I'll keep that list in my mind when addressing issues related to science and politics/policy making.

    As a final note: in the workshop I also talked about some of the recommendations from the ClimateShift project (being honest, open, and accountable; the public engagement model). There were also a number of science communicators present, who - at least to my surprise - wholeheartedly agreed. Quite a different response from what I often get from fellow climate scientists ...

  7. Starting this year, the National Hurricane Center will allow hurricane warnings to be retained even if a storm is reclassified as post-tropical before striking land.