29 January 2013

It's Time to Bury the Easterlin Paradox

Over the past several years, as I have discussed The Climate Fix and the "iron law" of climate policy, people have often brought up the Easterlin Paradox as a counterargument. There are several reasons why it fails to contradict the iron law, the most important one being -- it is just wrong.

... Richard Easterlin of the University of South[ern] California, has been studying the concept of national happiness since the 1970s, when he formulated his "Easterlin Paradox".

"Simply stated, the happiness-income paradox is this: at a point in time both among and within countries, happiness and income are positively correlated," he said. "But, over time, happiness does not increase when a country's income increases."
The Easterlin paradox suggests that in terms of human happiness -- a squishy concept to be sure -- there is a limit to economic growth beyond which there really is just no point in attaining more wealth. Further, a decoupling between income and happiness at some threshold would imply that GDP is not a good measure of welfare; we would need some other metric.

A recent paper (PDF) by Daniel Sacks, Betsey Stevenson and Justin Wolfers argues that the Easterlin paradox is simply wrong. They explain why this matters:
This conclusion has important implications for policy and for science. If raising income does not raise well-being, then policy should focus on goals other than economic growth. And given the central role of relative income, researchers have spent a great deal of time and energy understanding why relative concerns are so important.
They ask "But is Easterlin correct?" and they answer as follows, with a resounding "No":
The accumulation of data over recent decades shows that Easterlin’s Paradox was based on empirical claims which are simply false. In fact rich countries enjoy substantially higher subjective well-being than poor countries, and as countries get richer, their citizens experience ever more well-being. What’s more, the quantitative relationship between income and well-being is about the same, whether we look across people, across countries, or at a single country as it grows richer. This fact turns Easterlin’s argument on its head: if the difference in well-being between rich and poor countries is about the same as the difference in well-being between rich and poor people, then it must be that absolute income is the dominant factor determining well-being.
Their paper is non-technical and worth reading in full; Derek Thompson at The Atlantic also has a great summary.

Sacks et al. argue that the Easterlin paradox was accepted mistakenly, based on a misapplication of statistical reasoning of a sort that is quite common in academic studies. They explain:
When scholars began studying comparative well-being in the 1970s, data was only available for a handful of countries. Consequently Easterlin (1974) failed to find a statistically significant relationship between wellbeing and GDP—although in fact the estimated relationship was positive. This failure to obtain statistically significant findings reflected the limited power of a test based on a small sample of countries, rather than a finding of a precisely estimated nil relationship. Indeed, Easterlin’s original data also fail to reject the null that the cross-country relationship equals the cross-person relationship (Stevenson and Wolfers, 2008). In other words he could reject neither the presence of the Easterlin Paradox nor the complete absence of any such paradox.
I'd add to that explanation the fact that the Easterlin paradox fits in very well with a Malthusian, limits to growth world view. No doubt many have wanted it to be true, regardless of the data.
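The power problem that Sacks, Stevenson and Wolfers describe is easy to see in a quick simulation. The sketch below is illustrative only -- the slope, noise level and sample sizes are my assumptions, not Easterlin's data -- but it shows how a genuinely positive income-happiness relationship routinely fails a significance test in a sample of a dozen countries, while the same relationship is nearly always detected in a larger sample:

```python
import math
import random

random.seed(42)

def reject_rate(n, true_slope=0.3, noise=1.0, trials=2000):
    """Fraction of simulated samples of size n in which a true positive
    slope between income (x) and happiness (y) comes out statistically
    significant, using the t-statistic of the Pearson correlation."""
    hits = 0
    for _ in range(trials):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [true_slope * xi + random.gauss(0, noise) for xi in x]
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        r = sxy / math.sqrt(sxx * syy)
        t = r * math.sqrt((n - 2) / (1 - r * r))
        # crude normal approximation to the two-sided p < .05 cutoff
        if abs(t) > 1.96:
            hits += 1
    return hits / trials

# With a handful of countries, a real effect is usually "not significant";
# with many observations, the same effect is detected almost every time.
print(reject_rate(12))
print(reject_rate(150))
```

A non-significant result in the small sample says almost nothing about whether the relationship exists -- which is exactly the point made above: Easterlin's data could reject neither the paradox nor its complete absence.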

The Sacks et al. paper complements an analysis by Delhey and Kroll, which I discussed last summer, on the relationship of GDP to its proposed alternatives. They found that GDP does a surprisingly good job of reflecting outcomes in more complex, non-GDP metrics. I concluded:
Calls to replace GDP are common these days. Any such metric should meet the basic empirical test of doing better than GDP in its relationship with outcomes that people care about. Most proposed metrics fail this basic test.

Ultimately, GDP will all but certainly remain core to efforts to measure well being. That said, there are important dimensions that it misses. Bringing those dimensions into view will not, in my view, be accomplished by inventing a better single metric, but by realizing that no one metric captures all that matters and recognizing that understanding well being is complex, multi-dimensional and involves trade-offs that people do not agree about. Multiple metrics will thus aid in both clarity and focus, and help to avoid the risk of experts seeking to impose their values on the broader community via stealth accounting.
The Easterlin paradox, like many urban myths, won't be going away soon. However, research provides a compelling case that the paradox is empirically not supported. Money certainly isn't everything, but it does help to explain an awful lot.

24 January 2013

Donald Hornig: 1920-2013

Donald Hornig, science advisor to Presidents John F. Kennedy and Lyndon Johnson, passed away on January 21, 2013 at the age of 92. The Washington Post has an obituary here.

In 2005 Dr. Hornig participated in our science advisors series (pictured above during his visit), and was pretty sharp for 85 (or 45, for that matter!). Here is the transcript of his 2005 lecture at the University of Colorado (a video of his talk is here, and a transcript and recording of my interview with him can be found here):
I will confine myself to my own experience with Presidents Eisenhower, Kennedy and Johnson. Before I proceed, I should remind you that I'm a Neanderthal, with a fading memory, but a lively imagination.

My experience relates to a time before many of you were born. On the other hand, although the actors have changed and the times have changed, a surprising number of problems look very much the same.

What are the problems? Some of the discussions and much of the analysis concerns the management of the enormous collection of programs carried on by the federal agencies or funded by the federal government.
Each has its own body of scientists, its particular management organization and traditions, and its particular problems. I don't know if you'll all agree with me, but I don't think there can be a single science policy when it encompasses this whole spectrum. So, I won't attempt to discuss one.

Still, despite all its diversity, the ship of state needs to be led, and steering it is ultimately the job of the President. The ultimate goal for us has to be to assist him in all matters which require scientific and technical judgments. This shows up in the appointment letters of most, if not all, of the Science Advisors -- I haven't checked this out, I have to say -- but where the key phrase is, "to advise and assist the President in all matters affected by or pertaining to science and technology." I know that's true in several, and is also incorporated in the 1976 Act which establishes the office at a much later date.

Now, the military have faced the problems of utilizing new inventions, for example, the machine gun, throughout history. It was World War II that saw the introduction of weapons such as radar and sonar, which utilized new science to deal with strategic problems, such as the submarine war or the air assault on England.

Universities got seriously involved, probably for the first time in that sort of thing, starting in 1940. The White House itself only got involved when the possibility of developing an atomic bomb was brought to the attention of President Franklin Roosevelt in 1941. Now, I'm not entirely exactly sure of that date, plus or minus one year. But, the resulting program, the Manhattan Project, was subsequently pursued, again, within the framework of the Armed Forces and didn't involve the kind of organization we're talking about now -- civilian organizations.

The kind of discussion we are now having started in 1957 when the Soviet Union launched Sputnik. I had previously participated in various government advisory committees, such as one on the application of infrared radiation to warfare, which taught me about the problems in dealing with very high-ranking military officers. That isn't always a good experience -- but, anyway. In any case, Sputnik was a very different matter.

Against the background of an on-going Cold War, the launch of a long-range rocket capable of carrying nuclear weapons produced a major public shock. Not only because of its military implications, but because it stimulated public concerns and public discussion of the whole state of American science. Our relative technological capabilities, the quality and nature of American higher education, and so on. I mean, really a very fundamental national self-examination.
What quickly emerged was that except for the military, the President had no one he trusted to turn to. President Eisenhower understood the importance -- as a general -- of a reliable and loyal staff to back him up. But in the government, there were economists, lawyers, businessmen with a variety of backgrounds, and so on. But no one equipped to think about this new range of critical problems.

President Eisenhower's response was to create the post of Special Assistant to the President for Science and Technology, to which he named James Killian, then president of MIT.

A little later, the President's Science Advisory Committee (PSAC), which has already been mentioned, appointed by the President -- and this is unique, because it doesn't happen now -- and reporting directly to the President was established. Neither one of them, though, had any statutory authority.
PSAC consisted of distinguished scientists, some academic and some with industrial backgrounds, but all with dealings and broad experience in dealing with large public problems, mostly acquired, incidentally, in World War II. We haven't any similar training ground now for public servants.

You'll recognize that although the scale, the application, the sophistication and variety of problems have grown enormously, this problem remains. The President's advisors, mainly, are non-scientists.

Whether the White House needs its own science and technology apparatus and what its role should be or could be, has been debated ever since. And I imagine we will discuss this and related matters in our sessions here in the next day or two.

I've already indicated that I don't think we need a science oversight or management staff. We have an OMB and the various agencies also can do that.

We need a source of wisdom, experience and perspective drawn from the entire world-wide scientific and technical community. We should ask how we can help the President to lead in the midst of an on-going transformation which is, in historical times, proceeding with breathless speed.

Think of it. Petroleum-powered transport began during my grandparents' lives. The first airplane flew when my parents were children. Commercial radio could first be heard when I was a child. I remember my father bringing home an acid battery-powered radio in 1924 to listen, miraculously, to voting reports on the presidential election. Sulfa drugs were introduced in the `30s, followed shortly by penicillin. Technical advance helped turn the tide in World War II, and the atom bomb ended the war.

Since then, the process of scientific discovery and ideological -- huh, ideological -- technological advances has not only continued, but has been speeding up. In many ways, it has become the central initiator of large-scale changes in the economy, in the structure of society, and in many facets of our culture, even. It is even a political force.

Although I have participated in a variety of government advisory committees, my direct experience with events at the level of our considerations here started in 1958, when I was named to the newly-created Space Sciences Board of the National Academy of Sciences, of which Lloyd Berkner was chairman.
Its mission was to help design a program for the NASA, which had just been started and didn't have a staff. Well, we worked at it.

In 1959, I first met President Eisenhower. He asked me to serve on the Science Advisory Committee, and in January of 1960, I was formally appointed. PSAC at that time focused on military intelligence and arms control problems, the nuclear test ban and nuclear energy. It also devoted itself to a major effort to stimulate education in the sciences. And that's important. That was conceived from the beginning. It was a critical role for the presidency.

President Kennedy continued my term on PSAC. He was an interested president, and a good listener, and a great support to Jerry Wiesner, who was then the Science Advisor. But, it is chastening to realize, when one tries to make stories that say how great all the Science Advisors were, that he consulted neither PSAC nor his Science Advisor in arriving at his decision to go to the moon, which may have been the biggest decision of the time.

My subsequent contact with him was as Chairman of the PSAC Booster Panel and Space Science Panel, to be invited to accompany him on a trip to visit nuclear and space installations in Colorado and New Mexico. Maybe -- I say maybe -- our conversations on Air Force One led to his asking me to become his Special Assistant, but I haven't the faintest idea.

People always ask, "Well, why?" I don't know.

At any rate, he was assassinated shortly thereafter, so I was in no-man's land until in January, Lyndon Johnson asked me if I would stay on. On January 24th, he formally appointed me as Special Assistant, and on the same day, sent up my nomination for a completely separate job -- and one could have a discussion about that complication -- Director of the Office of Science and Technology. And he sent it to the Senate and, in what must be record time, I was confirmed on January 26th; two days later.
My agenda quickly broadened beyond national security problems. In 1964, when task forces were being organized to provide the new president with initiatives he might undertake, Rachel Carson had just written her highly-acclaimed book, Silent Spring, which some of you may have read. And, I sent a memo to Bill Moyers that pollution and the problems with the environment were going to be one of the big political issues of our time and that it was important for that administration -- or our administration I would say -- to take the lead. He liked the idea, as did the President.

That summer, we set up a task force chaired by John Tukey of Princeton to develop it. And then the report, "Restoring the Quality of Our Environment", was issued by the President, who wrote a foreword to it.

In the next five years, we tackled many issues, mostly dictated by my sense of what mattered to the President at any given time or trying to anticipate for him what was going to matter. This led to efforts in such areas as developing the potential of the oceans, coping with the world food problem, dealing with urban problems, and very importantly, meeting the need for basic research and advanced research in science and technology.

Another activity of the office then was to provide support to the scientific and technical leadership in the various government departments, many of whom had seen the same problems of communicating with their own colleagues and bosses that we did in the White House. You know, talking to guys who didn't know a thing about science. I hope, though, you'll appreciate the wondrous powers of small nudges from the White House. It helped smooth lots of things.
We were also important channels of communication between the agencies and relevant offices and officers of the BOB, Bureau of the Budget then. It's now OMB, the Office of Management and Budget.

Well, I could go on and chat about numerous examples, but I will leave that to questions and personal discussions.

Lastly, I would mention that science is a wonderful lubricant for foreign policy initiatives. It is relatively apolitical and, by and large, everyone loves science. That may not be as true anymore as it used to be.

I first realized that when, in 1964, Rudnyov, whose first name I can't remember, a Deputy Premier of the USSR, invited me to come to the Soviet Union as the first half of an exchange, of which he would be the return visitor. We sensed the vested interest in that. I thought it would be a good idea to accept, and the President concurred.

Immediately after he won the 1964 election, I made the trip in one of the President's airplanes, a 707, accompanied by a distinguished group of industrial scientists. Piore was a vice president of IBM, Hershey was a vice president of DuPont, Fisk was president of Bell Labs, and Holloman was the Assistant Secretary of Commerce. And that made the Russians take us seriously.

The outcome was to raise real questions as to whether our policy of trying to inhibit the flow of industrial technology to the Soviet Union was productive or might even be self-defeating. I can't give you an answer.

The visit was, I believe, the first time that it really became clear to the President that science and technology had a role to play in the conduct of foreign affairs. Something similar happened in the spring of 1965, when President Park Chung-Hee of Korea met President Johnson in Washington.

The day before his meeting with Park, the President called me on the phone saying that the stuff he had from State (that's capitalized, the State Department) was a lot of crap and he wanted something creative. PSAC fortunately was meeting that day, so we closed down our agenda and discussed various possibilities, homing in on the idea of an industrial research laboratory.

After PSAC adjourned, a rump group, including Frank Long of Cornell and Ken Pitzer, then president of Rice, and I, worked on a plan for what we called the Institute for Applied Research and Industrial Development, which I presented to President Johnson approximately one -- this shows how well the government's organized -- approximately one hour before President Park arrived the next morning.

President Johnson liked it. President Park was completely surprised because he thought you had to be notified in advance of these things, but he was pleased. At the end of the meeting, President Johnson had agreed, without consulting with me, to send me, backed by a distinguished delegation -- whatever that might mean -- to Korea, to see whether the idea could be implemented.

Well, the program was enormously successful. Now, 40 years later, KIST, K-I-S-T, the Korean Institute for Science and Technology, is thriving and has been a model for at least one aspect of development in several countries. It also has a whole string of offshoots in Korea.

Well, these outcomes obviously gave the President ideas. The next summer, when President -- when Prime Minister Sato of Japan was visiting, the President called me at home when I was at dinner at home -- while I was in the kitchen, sort of second-guessing Lilli at the stove -- the night before their meeting to say he wanted a fresh idea. You know, on demand.

This time, I proposed a U.S.-Japan medical cooperation program for Southeast Asia. Medical cooperation is always safe. And worked all night on it, consulting medical experts all over the country by telephone. The Japan-U.S. medical program went through and has been implemented. It is going strong right now. It's a good one.

This led to visits to Pakistan, India, Taiwan, Australia, many other places -- but, what I haven't talked about is Vietnam. We were not directly involved in the conduct of that war, but panels of PSAC continued to work on military questions. In fact, over half of the efforts of the Science Office throughout that period were devoted to defense, intelligence and space. That's as opposed to, perhaps 90% in the period -- the earlier period. My staff had about 35 professionals, and they worked closely with the Office of Management and Budget -- although it was still called BOB then -- on evaluating programs and budgets.

However, from 1967 to 1969 -- I got my text mixed up here, but it doesn't matter -- we went on working, but there was a continuing erosion of confidence in the loyalty of our office and PSAC to the President's goals, particularly the plans for an ABM -- the anti-ballistic missile -- and for supersonic transport. And once that came up, there was a gradual erosion in everything else we did.

The fact is there was never a conflict -- that's been discussed much in various places overtly, but there was a cooling off. You know, who works for you and who are your friends? That contributed to a total breakdown under my successor under President Nixon, leaving the reconsideration of the role of the Science Advisor and the Office of Science and Technology, which is the subject, I believe, of much of the meeting we're holding now. And with that, I thank you.

23 January 2013

Greek Tragedy

The Financial Times reports today that three statisticians who worked for the official Greek statistical agency (the Hellenic Statistical Authority, or Elstat) have been charged with crimes for falsely calculating Greek debt in 2009:
Greece has brought criminal charges against the official responsible for measuring the country’s debt, thereby calling into question the validity of its €172bn second bailout by the EU and International Monetary Fund.

Andreas Georgiou, head of the independent statistical agency Elstat, and two senior officials are accused of undermining the country’s “national interests” by inflating the 2009 budget deficit figure used as the benchmark for successive austerity packages.

The three statistical experts face criminal charges of making false statements and corrupt practices, a judicial official said, adding that if found guilty they could serve prison terms of five to 10 years. They have denied any wrongdoing.
The issue appears to involve methodological disputes over how to properly calculate Greek debt, and the choices made by the statisticians in the application of different methods which led to smaller or larger estimates. The calculation of absolute debt levels mattered a great deal politically but also procedurally, as it was a key factor in EU and IMF assistance provided to Greece as a consequence of its financial crisis.

Last April, the New York Times provided some background:
The latest chapter in the complex saga begins in June 2010, a month after Greece signed its first loan agreement with the so-called troika — the European Union, the European Central Bank and the International Monetary Fund — when the former finance minister, George Papaconstantinou, appointed Mr. Georgiou to run the Hellenic Statistical Authority, known as Elstat. 

A year earlier, Greece had been plunged into crisis when the newly elected Socialists announced that the 2009 budget deficit would be 12.4 percent of gross domestic product, twice the previous estimate. In April 2010, the European Union’s statistics agency, Eurostat, revised Greece’s deficit upward again, to 13.6 percent, which forced Greece to seek a bailout. And in November 2010 Eurostat, working with Elstat and Mr. Georgiou, revised the deficit for 2009 upward a final time to 15.4 percent, leading the troika to demand additional budget cuts of $7.65 billion.

How that final calculation was conducted is now the subject of intense debate. Mr. Georgiou has said that it reflects Greece’s first-ever adherence to accepted European procedures. Yet some critics, including some who were on Elstat’s since-disbanded six-person board, said that Mr. Georgiou had actually applied standards that were stiffer than European norms, then tried to thwart them when they raised questions about the process.
Looking a bit deeper the NYT explained that the methodological process for calculating debt levels in Greece and the EU more broadly was less an actuarial exercise than a politically-negotiated social construction:
Mr. Georgiou said that members of the board had the incorrect assumption that they could “vote” on methodology and statistics, and that some represented vested interests that did not want Greece’s dire finances to come to light. “We were faced with significant pressures through the board not to revise the deficit upwards on account of fully applying European Union rules, but to minimize it,” he said.

In the past, countries in a stronger position than Greece have traditionally negotiated with Eurostat over how to classify items in the government debt. In testimony before the parliamentary committee, other Greek officials said the country had lost that ability once it accepted foreign aid.
Walter Radermacher, the director of Eurostat, the EU statistical agency, provided this bit of wisdom:
The truth is not my business. I am a statistician. I don’t like words like ‘correct’ and ‘truth.’ Statistics is about measuring against convention.
Georgiou offered his explanation of goings on in 2011:
I am being prosecuted for not cooking the books. We would like to be a good, boring institution doing its job. Unfortunately, in Greece statistics is a combat sport.
This case will be resolved under Greek law, on which I can offer no expertise. However, it shares some interesting similarities and differences with the recent judgment against Italian government scientists for their role in providing what was determined to be misleading and inaccurate information in advance of the L'Aquila earthquake.

This week the judge in the L'Aquila case provided his justification for convicting the scientists, explaining that they had misled the public and did so in collusion with politicians. In effect, the judge found, the scientists were guilty of delivering a politically friendly message that was not well supported by the state of the science, which contributed to poor decisions by L'Aquila residents.

In contrast, the Greek statisticians have been charged for what appears to be the opposite crime -- they failed to bend to the will of politicians and instead delivered a politically-unfriendly message. The statisticians did so in the context of statistical conventions that are not well-established or agreed upon.

A third recent case that is relevant is the so-called "hurricane deductible" associated with insurance payouts related to "Hurricane" Sandy. In that case, immediately after the storm politicians quickly moved to define conventions for insurance payouts in a manner that best suited their desired political outcomes, regardless of what the science may say about Sandy's meteorological status at landfall. In the process, the politicians defeated any chance to share costs of insurance according to risk of property location using a scientific metric. The final scientific judgments are not in on Sandy, and politicians have made clear how they think such judgments should be made by the responsible government agency.

What ties the three cases together is a lack of strong institutions able to arbitrate empirical questions independently. The lack of such institutions means that science -- whether hurricane, earthquake or economic -- played little role in these decisions. Yet in each case there was the expectation that input from independent experts might contribute to improved decision making. In each case arguably the opposite occurred.

The failures here have nothing to do with anyone being "anti-science" but rather stem from poor policy design where expertise meets politics. In the case of the Italian and Greek experts the outcomes are personally tragic. More broadly, in each case the public suffers from the loss of expert knowledge in decision making. Such situations are entirely preventable.

17 January 2013

The Authoritarian Science Myth

The image above shows President Dwight Eisenhower swearing in James Killian as the first science advisor to the US president. Eisenhower rushed through the ceremony because he wanted to leave on a golf trip to Augusta, Georgia. Little appreciated is that James Killian, widely celebrated as the best and most powerful science advisor, was not a scientist at all.

Writing in yesterday's New York Times, physicist Lawrence Krauss repeats a common call for scientists to occupy a position more central to political power:
Scientists’ voices are crucial in the debates over the global challenges of climate change, nuclear proliferation and the potential creation of new and deadly pathogens. But unlike in the past, their voices aren’t being heard.
He wistfully invokes a mythological golden age of scientific authoritarianism:
The men who built the bomb had enormous prestige as the greatest physics minds of the time. They included Nobel laureates, past and future, like Hans A. Bethe, Richard P. Feynman, Enrico Fermi, Ernest O. Lawrence and Isidor Isaac Rabi

In June 1946, for instance, J. Robert Oppenheimer, who had helped lead the Manhattan Project in Los Alamos, N.M., argued that atomic energy should be placed under civilian rather than military control. Within two months President Harry S. Truman signed a law doing so, effective January 1947.
There are two problems with Krauss's diagnosis and prescription. First, science and scientists have never been more central to policy making than they are today. Second, the golden age of scientific authority that he invokes is a fable that scientists tell themselves to justify their current demand for more authority in politics.

These themes are discussed in our 2009 paper playfully titled, "The Rise and Fall of the Science Advisor to the President of the United States," published in Minerva and here in PDF. The science advisor is arguably the most prominent scientist in the US government and the focus of decades of discussion about authority and power of science in government.

Here is what we concluded:
Over the second half of the 20th century and into the 21st governance can be characterized by an ever increasing reliance on specialized expertise. There are several reasons for this trend, which include the challenges of dealing with risks to human well being and security—from terrorism to the safety of food supplies, from natural disasters to human influences on the environment, from economic shocks, globalization, and many more. Some of these risks are the result of purposive technological innovation, such as the invention and proliferation of nuclear technologies beginning with the Manhattan Project during World War II. Because innovation can create new risks, a new proactive politics has emerged seeking to limit technological innovation and diffusion. Examples of this dynamic can be seen in efforts to limit the presence of genetically modified crops in Europe, to contain research on stem cells in the United States, and to militate against the consequences of economic globalization around the world.
In this context, the need for expert advice in government has increased exponentially. But one of the effects of the triumph of expertise has been the diminishment of the president’s science advisor as the ‘‘go-to’’ individual on issues with a scientific or technical component. In many respects, the science advisor is just another person with a Ph.D. staffing the Executive Offices of the President. President Obama received high marks from the scientific community for appointing a number of prominent scientists to administrative positions, including a Nobel Prize-winning physicist to Secretary of Energy, illustrating that the science advisor is but one of many highly qualified people in an administration. The science advisor does have a very unique role in helping to oversee and coordinate the budgets of agencies that support science, but even here the science advisor’s role is subject to the idiosyncrasies of each administration.

In the future it seems improbable that the science advisor’s role would return to the exalted position that it held for a brief time during the Eisenhower Administration. In any case, that exalted position may be more mythical than real, which has set the stage today for some unrealistic expectations about the position.
Do read Krauss' piece and then read ours. Feel free to come back and comment.

For further reading, see our book on presidential science advisors.

15 January 2013

Extreme Misrepresentation: USGCRP and the Case of Floods

The US Global Change Research Program has released a draft national assessment on climate change (here in PDF) and its impacts in the United States, as required by The US Global Change Research Act of 1990 (which incidentally was the subject of my 1994 PhD dissertation). There has been much excitement and froth in the media.

Here I explain that in an area where I have expertise, extremes and their impacts, the report is well out of step with the scientific literature, including the very literature it cites and the conclusions of the IPCC. Questions should (but probably won't) be asked about how a major scientific assessment has apparently become captured as a tool of advocacy via misrepresentation of the scientific literature -- a phenomenon that occurs repeatedly in the area of extreme events. Yes, it is a draft and could be corrected, but a four-year effort by the nation’s top scientists should be expected to produce a public draft report of much higher quality than this.

Since these are strong allegations, let me illustrate my concerns with a specific example from the draft report, and here I will focus on the example of floods, but the problems in the report are more systemic than just this one case.

What the USGCRP report says:
Infrastructure across the U.S. is being adversely affected by phenomena associated with climate change, including sea level rise, storm surge, heavy downpours, and extreme heat… Floods along the nation’s rivers, inside cities, and on lakes following heavy downpours, prolonged rains, and rapid melting of snowpack are damaging infrastructure in towns and cities, farmlands, and a variety of other places across the nation.
The report clearly associates damage from floods with climate change driven by human activities. This is how the draft was read and amplified by The New York Times:
[T]he document minces no words.

“Climate change is already affecting the American people,” declares the opening paragraph of the report, issued under the auspices of the Global Change Research Program, which coordinates federally sponsored climate research. “Certain types of weather events have become more frequent and/or intense, including heat waves, heavy downpours, and, in some regions, floods and droughts.”
To underscore its conclusion, the draft report includes the figure at the top of this post (from Hirsch and Ryberg 2011), which shows flood trends in different regions of the US. In a remarkable contrast to the draft USGCRP report, here is what Hirsch and Ryberg (2011) actually says:
The coterminous US is divided into four large regions and stationary bootstrapping is used to evaluate if the patterns of these statistical associations are significantly different from what would be expected under the null hypothesis that flood magnitudes are independent of GM [global mean] CO2. In none of the four regions defined in this study is there strong statistical evidence for flood magnitudes increasing with increasing GMCO2.
Got that? In no US region is there strong statistical evidence for flood magnitudes increasing with increasing CO2. This is precisely the opposite of the conclusion expressed in the draft report, which nonetheless cites Hirsch and Ryberg (2011) in its support.
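The idea behind Hirsch and Ryberg's test is worth sketching, since it explains why "positive association" and "statistically significant association" are not the same thing. This is not their code or data -- the flood series, CO2 series, and block probability below are made-up stand-ins -- but it shows the logic of a stationary bootstrap: resample the flood record in random-length blocks (preserving short-run autocorrelation), and ask how often a resampled series correlates with rising CO2 as strongly as the observed record does.

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_bootstrap(x, rng, p=0.2):
    """Resample x in random-length circular blocks: at each step either
    continue the current block or, with probability p, jump to a new
    random start. Preserves short-run autocorrelation under the null."""
    n = len(x)
    out = np.empty(n)
    i = int(rng.integers(n))
    for t in range(n):
        out[t] = x[i]
        i = int(rng.integers(n)) if rng.random() < p else (i + 1) % n
    return out

# Synthetic stand-ins: steadily rising CO2, trendless annual flood peaks
years = np.arange(1940, 2010)
co2 = 310.0 + 0.4 * (years - 1940)
floods = rng.lognormal(mean=3.0, sigma=0.5, size=len(years))

# Observed correlation vs. the bootstrap null distribution
obs_r = np.corrcoef(co2, floods)[0, 1]
null_r = np.array([np.corrcoef(co2, stationary_bootstrap(floods, rng))[0, 1]
                   for _ in range(2000)])
p_value = float(np.mean(np.abs(null_r) >= abs(obs_r)))
print(f"observed r = {obs_r:.3f}, bootstrap p-value = {p_value:.3f}")
```

A weakly positive observed correlation that sits well inside the bootstrap null distribution is exactly the situation the quoted passage describes: no strong statistical evidence of flood magnitudes increasing with CO2.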

Want more? Here is what IPCC SREX, the recent assessment of extreme events, says (here in PDF):
There is limited to medium evidence available to assess climate-driven observed changes in the magnitude and frequency of floods at regional scales because the available instrumental records of floods at gauge stations are limited in space and time, and because of confounding effects of changes in land use and engineering. Furthermore, there is low agreement in this evidence, and thus overall low confidence at the global scale regarding even the sign of these changes.
The SREX is consistent with the scientific literature -- neither detection (of trends) nor attribution (of trends to human forcing of the climate system) has been achieved at the global -- much less regional or subregional -- levels. Yet, USGCRP concludes otherwise.

The leaked IPCC AR5 SOD reaffirms the SREX report (here in PDF): while documenting a signal of earlier snowmelt in streamflows, it finds no corresponding signal of increasing floods:
There continues to be a lack of evidence regarding the sign of trend in the magnitude and/or frequency of floods on a global scale
The IPCC has accurately characterized the underlying literature:
Observations to date provide no conclusive and general proof as to how climate change affects flood behaviour
Given the strength of the science on this subject, the USGCRP must have gone to some effort to mischaracterize it by 180 degrees. In areas where I have expertise, the flood example presented here is not unique in the report (e.g., Hurricane Sandy is mentioned 31 times).

Do note that just because the report is erroneous in areas where I have expertise does not mean that its other conclusions are incorrect. However, given the problematic and well-documented treatment of extremes in earlier IPCC and US government reports, I'd think that the science community would have its act together by now and stop playing such games.

So while many advocates in science and the media shout "Alarm" and celebrate the report's depiction of extremes, another question we should be asking is: how did it get things so wrong? Either the IPCC and the scientific literature are in error, or the draft USGCRP assessment is. But don't take my word for it; check it out for yourself.

08 January 2013

Review of The Geek Manifesto

My review of The Geek Manifesto is up over at The Breakthrough Institute. In it I discuss "predistortion," the UEA emails, David Nutt's sacking and a conversation between Clint Eastwood and Gene Hackman in Unforgiven. Here is a short excerpt:
The idea that science and scientists deserve special treatment in politics is often what leads to the temptation to exploit that specialness for political gain, which ultimately works against science being afforded special treatment. In this manner, calls for a “geek revolution” can have a hard time avoiding the slippery slope of scientific authoritarianism.
See the entire, longish review here.  Please feel free to come back and tell me what you think!

07 January 2013

Air Capture Update 2013: Still Progressing

One of the most tantalizing possibilities for dealing with the accumulation of carbon dioxide in the atmosphere is to take it out using brute force methods of chemistry, biology or even geology. I discuss such "air capture" of carbon dioxide in Chapter 5 of The Climate Fix. As the climate debate continues to generate more heat than light, technologies of air capture are continuing to improve.

On Saturday, the New York Times had a smart article with an update on the technology:
[A] Canadian company has developed a cleansing technology that may one day capture and remove some of this heat-trapping gas directly from the sky. And it is even possible that the gas could then be sold for industrial use.

Carbon Engineering, formed in 2009 with $3.5 million from Bill Gates and others, created prototypes for parts of its cleanup system in 2011 and 2012 at its plant in Calgary, Alberta. The company, which recently closed a $3 million second round of financing, plans to build a complete pilot plant by the end of 2014 for capturing carbon dioxide from the atmosphere, said David Keith, its president and a Harvard professor who has long been interested in climate issues.

The carbon-capturing tools that Carbon Engineering and other companies are designing have made great strides in the last two years, said Timothy A. Fox, head of energy and environment at the Institution of Mechanical Engineers in London.

“The technology has moved from a position where people talked about the potential and possibilities to a point where people like David Keith are testing prototype components and producing quite detailed designs and engineering plans,” Dr. Fox said. “Carbon Engineering is the leading contender in this field at this moment for putting an industrial-scale machine together and getting it working.”
A crucial question of course is cost. In a 2009 paper on air capture (here in PDF), I compared the idealized costs of using air capture as the main mechanism to achieve stabilization of carbon dioxide concentrations, based on existing cost estimates of the technology, with the costs of stabilization under conventional mitigation policies, as estimated by the IPCC and Stern Review.

In that analysis I found that, under assumptions identical to those used by the IPCC and Stern, air capture costing between $100 and $500 per tonne of carbon would imply overall economic costs of 0.5% to 3.0% of GDP to achieve stabilization at 450 ppm. This was a surprising result, because Stern estimated the costs of conventional mitigation at up to 4% of GDP and the IPCC at up to 5.5%. Critics of my paper complained that air capture was not yet possible. My reply was that neither was conventional mitigation, so why not pursue both?
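The basic arithmetic behind such comparisons is simple enough to sketch. The inputs below -- 3 GtC removed per year against a $60 trillion world GDP -- are round illustrative assumptions of mine, not the 2009 paper's scenario, which runs the costs through full IPCC/Stern stabilization trajectories rather than a static ratio:

```python
def capture_cost_share(cost_per_tonne_c, tonnes_c_per_year, world_gdp):
    """Annual air-capture bill as a fraction of GDP (static, undiscounted)."""
    return cost_per_tonne_c * tonnes_c_per_year / world_gdp

# Illustrative assumptions: 3 GtC/yr removed, $60 trillion world GDP
for cost in (100, 500):  # $/tonne C, the range considered in the 2009 paper
    share = capture_cost_share(cost, 3e9, 60e12)
    print(f"${cost}/tC -> {share:.2%} of GDP per year")
```

With these made-up inputs the shares come out at 0.5% and 2.5% of GDP per year, in the same ballpark as the paper's 0.5% to 3.0% range, though the paper's figures come from stabilization scenarios rather than this single-year ratio.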

A more recent literature review of air capture technology and economics by Goeppert et al (2012, here, $) found estimated costs of $50 to $3,700 per tonne of carbon. They conclude:
Direct CO2 capture from the air is still in its infancy. The cost of a commercial plant will depend on many factors including the process used as well as the cost of labor, materials and energy. While there is no question that the capture of CO2 from the air is possible, more research and development is clearly needed to optimize this technology and determine its economic viability. Only with the construction of demonstration and pilot plants will we have a clearer understanding of the total cost associated with DAC. A few start-up companies including Carbon Engineering, Kilimanjaro Energy, Global Thermostat and Climeworks have started such an effort. Some of the proposed devices and prototypes for the capture of CO2 from the air are shown in Fig. 14. It should also be pointed out that the costs associated with DAC units are not ‘‘stand alone’’. Once captured, the CO2 will be used for applications such as enhanced oil recovery (EOR) or recycling into chemicals and fuels including methanol, DME and hydrocarbons (CCR). This will give an economic value to the captured CO2, lowering the de facto cost of DAC and provide a more favorable overall picture of the process. Water (moisture) could also be separated from the air at the same time as CO2, which could provide clean water as an added value. . .

CO2 from the atmosphere provides a nearly inexhaustible carbon source for humankind. Combining carbon capture and storage with subsequent withdrawal for technological recycling based on an anthropogenic chemical carbon cycle offers a feasible new solution to our carbon conundrum. As fossil fuels are becoming scarcer and increasingly depleted, carbon capture and recycling offers a renewable and safe source for carbon containing fuels and their products. It also liberates humankind from the limitations associated with the biological natural carbon sources including crops and biomass. The chemical carbon cycle constitutes humankind’s practical technological analog of nature’s photosynthetic CO2 recycling. At the same time, the anthropogenic chemical carbon cycle (CCR) also helps to mitigate the environmental harmful effect of excessive CO2 in the atmosphere. Instead of just a greenhouse gas harmful to the Planet’s ecosystem, CO2 should therefore be considered as a valuable industrial C1-feedstock.
Ultimately, the test of air capture will come not from journal articles or policy debates, but from actual engineering in the real world. On the actual costs, Goeppert tells the NYT, "We won’t know for sure until someone builds a pilot plant."

The good news is that far from the glare of the climate debate, scientists and engineers are hard at work on advancing air capture technology. And guess what? They are making progress. Watch this space.

References cited

A. Goeppert et al. 2012. Air as the renewable carbon source of the future: an overview of CO2 capture from the atmosphere, Energy & Environmental Science, 5:7833-7853 DOI: 10.1039/C2EE21586A

R. A. Pielke, Jr. 2009. An idealized assessment of the economics of air capture of carbon dioxide in mitigation policy, Environmental Science & Policy, 12:216-225

02 January 2013

A New Year's Resolution for Scientists

In Nature today, Dan Sarewitz offers up a New Year's resolution for scientists:
To prevent science from continuing its worrying slide towards politicization, here’s a New Year’s resolution for scientists, especially in the United States: gain the confidence of people and politicians across the political spectrum by demonstrating that science is bipartisan.
That it is even necessary to call on science to demonstrate that it is not a partisan endeavor reflects the degree to which leading scientific institutions in the United States (and elsewhere as well) have become deeply partisan bodies. Sarewitz explains:
[S]cience has come, over the past decade or so, to be a part of the identity of one political party, the Democrats, in the United States. The highest-profile voices in the scientific community have avidly pursued this embrace. For the third presidential election in a row, dozens of Nobel prizewinners in physics, chemistry and medicine signed a letter endorsing the Democratic candidate. 

The 2012 letter argued that Obama would ensure progress on the economy, health and the environment by continuing “America’s proud legacy of discovery and invention”, and that his Republican opponent, Mitt Romney, would “devastate a long tradition of support for public research and investment in science”. The signatories wrote “as winners of the Nobel Prizes in Science”, thus cleansing their endorsement of the taint of partisanship by invoking their authority as pre-eminent scientists.

But even Nobel prizewinners are citizens with political preferences. Of the 43 (out of 68) signatories on record as having made past political donations, only five had ever contributed to a Republican candidate, and none did so in the last election cycle. If the laureates are speaking on behalf of science, then science is revealing itself, like the unions, the civil service, environmentalists and tort lawyers, to be a Democratic interest, not a democratic one.
Partisanship within the scientific community shows itself not just in elections but in how that community positions itself with respect to government. For a while now, several scientific associations (especially AAAS and AGU) have come to treat Democrats as allies and Republicans as opponents.

John Besley and Matt Nisbet documented this phenomenon in a recent paper and explain how the nature of social media serves to amplify partisanship:
With an ever-increasing reliance on blogs, Facebook and personalized news, the tendency among scientists to consume, discuss and refer to self-confirming information sources is only likely to intensify, as will in turn the criticism directed at those who dissent from conventional views on policy or public engagement strategy. Moreover, if perceptions of bias and political identity do indeed strongly influence the participation of scientists in communication outreach via blogs, the media or public forums, there is the likelihood that the most visible scientists across these contexts are also likely to be among the most partisan and ideological.
Such dynamics are found in more conventional media as well. In a 2009 paper I documented that Science magazine published 40 editorials critical of the Bush Administration during its two terms, and only one such critique during the Clinton Administration's previous two terms (here in PDF). I have just updated this analysis through the first term of the Obama Administration, and found no editorials critical of the Obama Administration.
An approach that critiques the president when he is a Republican and cheer-leads when he is a Democrat lends itself to more than just cynicism -- it contributes to the politicization of science policy issues which by their nature can be problematic regardless of who is in office.

I have often marveled on this blog at how issues of scientific integrity -- which were so important to scientists and science connoisseurs during the Bush Administration -- largely disappeared in social media science policy discussions, and only occasionally appeared in the conventional media.

The issues, however, have not disappeared. A few weeks ago, the Union of Concerned Scientists observed in the case of genetically modified salmon:
Despite what the President might have said about scientific integrity, we’ve seen White House interference on what should be science regulatory decisions.
A list of troubling issues under the Obama Administration where science and politics meet is, well, almost Bush-like: drilling safety, the muzzling of scientists at USDA and at HHS, the clothing of political decisions in dodgy scientific claims on the morning-after pill and Yucca Mountain, the withholding of scientific information for fear of political fallout ... and the list goes on.

For those who care about scientific integrity, the selective attention of the scientific community is problematic because it reduces the issue to a matter of electoral politics rather than the nitty-gritty details of actual policy implementation.

Sarewitz finds another reason to object:
This is dangerous for science and for the nation. The claim that Republicans are anti-science is a staple of Democratic political rhetoric, but bipartisan support among politicians for national investment in science, especially basic research, is still strong. For more than 40 years, US government science spending has commanded a remarkably stable 10% of the annual expenditure for non-defence discretionary programmes. In good economic times, science budgets have gone up; in bad times, they have gone down. There have been more good times than bad, and science has prospered.

In the current period of dire fiscal stress, one way to undermine this stable funding and bipartisan support would be to convince Republicans, who control the House of Representatives, that science is a Democratic special interest.

This concern rests on clear precedent. Conservatives in the US government have long been hostile to social science, which they believe tilts towards liberal political agendas. Consequently, the social sciences have remained poorly funded and politically vulnerable, and every so often Republicans threaten to eliminate the entire National Science Foundation budget for social science.
For partisans, none of this analysis makes sense because their goal is simply to vanquish their political opponents. That science has become aligned with the Democratic party is, from where they sit, not a problem but a positive. Thus more partisanship is needed, not less. I have no illusions of convincing the extreme partisans of the merit of Sarewitz's view. But I do think that there are many in the scientific community who object to the exploitation of scientific institutions to the detriment of both science and decision making, and no doubt it is to this group that Sarewitz's resolution is offered.

There are promising signs that the partisan wave which has engulfed the scientific community over the past decade is receding somewhat. This is good news. But the scientific community still has a lot of work to do. Sarewitz offers some helpful advice:
The US scientific community must decide if it wants to be a Democratic interest group or if it wants to reassert its value as an independent national asset. If scientists want to claim that their recommendations are independent of their political beliefs, they ought to be able to show that those recommendations have the support of scientists with conflicting beliefs. Expert panels advising the government on politically divisive issues could strengthen their authority by demonstrating political diversity. The National Academies, as well as many government agencies, already try to balance representation from the academic, non-governmental and private sectors on many science advisory panels; it would be only a small step to be equally explicit about ideological or political diversity. Such information could be given voluntarily.

To connect scientific advice to bipartisanship would benefit political debate. Volatile issues, such as the regulation of environmental and public-health risks, often lead to accusations of ‘junk science’ from opposing sides. Politicians would find it more difficult to attack science endorsed by avowedly bipartisan groups of scientists, and more difficult to justify their policy preferences by scientific claims that were contradicted by bipartisan panels.

During the cold war, scientists from America and the Soviet Union developed lines of communication to improve the prospects for peace. Given the bitter ideological divisions in the United States today, scientists could reach across the political divide once again and set an example for all.
There is of course nothing wrong with partisanship or with scientists participating in politics; they are, after all, citizens. However, our scientific institutions are far too important to be allowed to become pawns in the political battles of the day.

Brain Circulation in 16 Countries

Via the World Bank, the graphic above shows "brain circulation" -- the immigration and emigration of researchers -- for 16 countries in 2011. Countries that experienced a "drain" included India, Italy and Belgium. Gainers included the US, Australia and Switzerland.

The data comes from: Chiara Franzoni, Giuseppe Scellato, and Paula Stephan, May 2012. “Foreign Born Scientists: Mobility Patterns for Sixteen Countries,” National Bureau of Economic Research.