Friday, February 19, 2016

Yes MOST Models Run Hot

... or, "Just what the world needs, another CMIP vs. Observation blog post".

Background

Arguments vary; this one touches on some of the common ones:

Peer-reviewed pocket-calculator climate model exposes serious errors in complex computer models and reveals that Man’s influence on the climate is negligible

Anthony Watts / January 16, 2015   

What went wrong?

A major peer-reviewed climate physics paper in the first issue (January 2015: vol. 60 no. 1) of the prestigious Science Bulletin (formerly Chinese Science Bulletin), the journal of the Chinese Academy of Sciences and, as the Orient’s equivalent of Science or Nature, one of the world’s top six learned journals of science, exposes elementary but serious errors in the general-circulation models relied on by the UN’s climate panel, the IPCC. The errors were the reason for concern about Man’s effect on climate. Without them, there is no climate crisis.

Thanks to the generosity of the Heartland Institute, the paper is open-access. It may be downloaded free from http://www.scibull.com:8080/EN/abstract/abstract509579.shtml. Click on “PDF” just above the abstract.

That link is now broken; fear not, co-author Briggs generously hosts the paper for us ...



Monckton, Soon, Legates and Briggs (2015), Why models run hot: results from an irreducibly simple climate model

... so that we may be edified.  Blog rebuttals to this paper are legion; when I have a bit more time I'll footnote my favourites.  For purposes of this post I'll simply assert that this model isn't irreducibly simple ...


... and is also irrevocably, irredeemably broken insofar as Monckton & Co. have selectively and inconsistently executed it.  Other than that glowing condemnation, it is not wholly without merit so far as I can tell ... as long as one doesn't cherry-pick data for feeding its input parameters.  Or as Monckton so candidly states:
So I hope that people will do us the courtesy of reading the paper, thinking about it and then having a go at running the model for themselves, using whatever parameter values they consider appropriate. Each parameter is discussed in the paper, so as to give some guidance on the appropriate interval of values.
The emboldened bit is my emphasis, because it's such a non-Briggs-like thing to say that it made me wonder exactly how much of a contribution he offered to warrant being listed as an author, save for ...
Fig. 5 Climate sensitivity ΔT∞ at CO2 doubling against closed-loop gains g∞ on [-1,+2]

... the risible and deservedly much-derided "Process engineers' design limit g∞ ≤ 0.1", which has Briggs written all over it.  Though to be fair, plural engineers is not the kind of thing one expects a staunch monotheist to espouse; perhaps he had a rare fit of inclusiveness.  Or more likely, it being Monckton's plot, he really is making the secular argument that human engineers aiming to design stable thermodynamic/electronic systems keep closed-loop gain below +0.1, and from there takes the leap of logic that the Earth's climate system must follow suit.
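For context, the textbook feedback relation behind that plot's x-axis amplifies the open-loop response by 1/(1 − g), which is why everything hinges on how close the closed-loop gain g gets to 1.  A minimal sketch (my own illustration of the standard relation, not code from the paper):

```python
# Textbook closed-loop feedback amplification.  My illustration, not the
# paper's code: equilibrium response = open-loop response * 1 / (1 - g).
def amplification(g):
    """Equilibrium amplification factor for closed-loop gain g (finite only for g < 1)."""
    if g >= 1:
        raise ValueError("g >= 1 means runaway feedback: no finite equilibrium")
    return 1.0 / (1.0 - g)

# An engineer's design limit of g <= 0.1 barely amplifies the open-loop
# response; feedback values typically inferred for climate sit well above that.
for g in (0.1, 0.5, 0.65):
    print(f"g = {g:.2f}: amplification = {amplification(g):.2f}x")
```

Note the singularity at g = 1: an engineer keeps g small to stay far from instability, but nothing in that design practice tells you where the climate system's g actually sits.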

With that digression out of the way, on to my central point which is ...

The IPCC Throws Their Own Models Under the Bus

... with regularity.  Rarely do I see a "teh modulz suck" argument in the blogosphere that I cannot find already published in AR5.  One of the most "damning" I know of comes from AR5 WGI Chapter 9, Evaluation of Climate Models:
Box 9.2 | Climate Models and the Hiatus in Global Mean Surface Warming of the Past 15 Years

[..]

Model Response Error

The discrepancy between simulated and observed GMST trends during 1998–2012 could be explained in part by a tendency for some CMIP5 models to simulate stronger warming in response to increases in greenhouse gas (GHG) concentration than is consistent with observations (Section 10.3.1.1.3, Figure 10.4). Averaged over the ensembles of models assessed in Section 10.3.1.1.3, the best-estimate GHG and other anthropogenic (OA) scaling factors are less than one (though not significantly so, Figure 10.4), indicating that the model-mean GHG and OA responses should be scaled down to best match observations. This finding provides evidence that some CMIP5 models show a larger response to GHGs and other anthropogenic factors (dominated by the effects of aerosols) than the real world (medium confidence). As a consequence, it is argued in Chapter 11 that near-term model projections of GMST increase should be scaled down by about 10% (Section 11.3.6.3). This downward scaling is, however, not sufficient to explain the model-mean overestimate of GMST trend over the hiatus period.
Emphasis mine.  The implication being that the CMIP5 historical ensemble runs approximately 10% too hot ...


... and as ((1 / 0.9201) - 1) * 100 = 8.7%, by Jove, they may have a point:


Ayup, using the (naive) assumption that CO2 is the only external forcing used by CMIP5 models in the RCP ensemble runs, the forward-looking results appear to be a tad too hot.
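Spelling out that back-of-envelope arithmetic (the 0.9201 slope is read off the regression above, so treat it as illustrative rather than an official IPCC number):

```python
# If observed warming is 0.9201 times the modelled rate (the regression
# slope above -- an illustrative number, not an IPCC figure), then the
# models overshoot observations by:
slope = 0.9201                          # obs trend / model trend
overshoot_pct = (1 / slope - 1) * 100   # how much hotter the models run
print(f"models run ~{overshoot_pct:.1f}% hot")  # ~8.7%
```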

However, simply scaling CMIP5 output and calling it good is hardly acceptable as a permanent solution, not least because of all the non-linear responses to absolute temperature known (and suspected) to exist in the real system -- some not very well constrained (e.g., ice-sheet albedo feedback and tropical deep convection, from which also partially follows the perennially elusive split between radiative water vapour feedback and cloud feedback).  As such, the modelling community is hardly resting on its collective Nobel laurels, as reader BBD comments:
There is evidence that when the CMIP5 forcing estimates used for AR5 are updated to bring them into line with real-world forcing history, then modelled global average temperature comes into much closer agreement with observations (Schmidt et al. 2014). This would suggest that model physics and so emergent behaviours like model sensitivity are reasonably accurate.
Given all the ink spilt by the IPCC in its own assessments and by the AGW consensus modelling community in peer-reviewed primary literature on the litany of poorly understood weather/climate processes, raw computational horsepower limitations, bugs and other gremlins lurking in zillions of lines of FORTRAN, I have often rhetorically asked climate model-bashers:
  1. How exactly is it bad science that the climate modelling community apparently knows far more about its models' problems and limitations than you do, and isn't shy about committing these "failures" to print in the literature?
  2. If the models are so unreliable as to lack ANY predictive utility, why in holy fuck are you so cavalier about making unmitigated changes to the radiative properties of the atmosphere?
Though I don't usually use such colourful metaphors out of respect for the lopsidedly tender sensibilities exhibited by Mr. Watts and the majority of his followers.  Responses to more mildly stated versions of (2) run from "The failure of The Models proves that climate sensitivity is low," to "Yabbut, isn't it suspicious that ALL models run hot?"

The latter argument is easily foiled ...


... and never in my experience acknowledged.  Certainly the "all models are hot" zombie myth shuffles onward, undead and rank as ever.

As for The Models foiling the radiative theory of AGW, and therefore being ill-suited for policy-making decisions -- which is generally the root argument even when left unspoken -- I like this somewhat infamous bit of straight-talk from Richard Betts over at Bishop Hill:
Once again this brings us back to the thorny question of whether a GCM is a suitable tool to inform public policy.
Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy.

Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas.
Everyone* agrees that CO2 rise is anthropogenic
Everyone** agrees that we can't predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don't know. The old-style energy balance models got us this far. We can't be certain of large changes in future, but can't rule them out either.

So climate mitigation policy is a political judgement based on what policymakers think carries the greater risk in the future - decarbonising or not decarbonising.

A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, year and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then futher refining the models). Clearly, contingency planning and adaptation need to be done in the face of large uncertainty.

*OK so not quite everyone, but everyone who has thought about it to any reasonable extent
**Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence

Aug 22, 2014 at 5:38 PM | Registered Commenter Richard Betts
Emphasis again mine.  Shockingly, contrarian denizens immediately stripped all nuance and context out of the statements of a good scientist doing good science and being honest about the limits of predicting a murky and uncertain future versus what nearer-to-20/20 hindsight -- gleaned from observation and, yes, models -- tells us about past and present.  There was much rejoicing in the form of jig-dancing and declarations of victory along the lines of, "Even Richard Betts admits we don't know shitall about climate!"  The very next comment in the thread sets it up:
I disagree, Richard.

Four or five years ago I would probably have agreed but I'm afraid that now I think that the only people who agree that "the CO2 rise is anthropogenic" are those who have stopped thinking. Some of the rise is anthropogenic but since the correlation with increased temperature is poor why should anybody care?

And why do we need GCMs whose predictive capability is zero to "inform contingency planning and adaptation" when that seems to include such political wheezes as not draining the Somerset Levels or building on flood plains because we aren't going to have as many floods or advising everyone to re-design their gardens for a Mediterranean climate?

I'm never quite sure whether it's hubris or chutzpah you guys suffer from but trying to second-guess nature is not very bright. The largest uncertainty I see these days is how far from reality the next Met Office long-range forecast will turn out to be and how Auntie Julia will manage to convince herself that it was actually right if you include large enough error bars.

(And you will also have to take the blame for idiots like the EU who think that reducing the power of vacuum cleaners will save electricity and at the same time do a better job. Oh yes you will, because it is the output of these GCMs as spun by the climate activists that are "informing" (LOL) the politicians' decisions! You want us to adapt; they make the decisions as to how. Wrongly, usually.)

Aug 22, 2014 at 6:01 PM | Registered Commenter Mike Jackson
My emphasis.  Note how "poor correlation" (it isn't, when one considers more than the past 19 years of lower-tropospheric temperatures) morphs into the absolute "zero predictability".  Then there's the rather ironic chutzpah of claiming Dr. Betts suffers from hubris just after he has said "We don't know" exactly what the future holds.  Never one to stray from the "the science is settled even when we're constantly told it isn't" meme, he triumphantly ends with a confident declaration that Betts' uncertainty is yet more evidence that "climate activists" are usually wrong.

The next page of comments contains back-to-back examples of some themes I mentioned previously:
Richard - even with all your points granted: CGMs all run on a hot side; none has been able to predict the temperature of the 21st century. With a hindsight, the modelers claim they now cam model the "hiatus". For catastrophic scenarios, models are absolutely crucial.

Aug 22, 2014 at 6:52 PM | Unregistered Commenter Curious George

A short note to the "Climate Science Cabal"
When in a hole stop digging.

Aug 22, 2014 at 7:12 PM | Unregistered Commenter Mike Singleton
Apparently George is not curious enough to do any fact-checking on not-hot models, nor does he understand that just because weather is chaotic, and therefore not reliably predictable after about a week, does not mean we can't attribute causality to unexpected multi-decadal trends after they have happened.

Singleton apparently doesn't realize that he's implicitly asking Betts to stick to the talking points of an imagined and improbable cabal of rent-seeking conspirators who are too incompetent to have the faked surface temperature data match the ideologically motivated, overly-warm model outputs.

What About this Hiatus Thingy?

... because even the IPCC admits it happened!

Let's read the beginning of Box 9.2 ... carefully:
Box 9.2 | Climate Models and the Hiatus in Global Mean Surface Warming of the Past 15 Years

The observed global mean surface temperature (GMST) has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years (Section 2.4.3, Figure 2.20, Table 2.7; Figure 9.8; Box 9.2 Figure 1a, c). Depending on the observational data set, the GMST trend over 1998–2012 is estimated to be around one-third to one-half of the trend over 1951–2012 (Section 2.4.3, Table 2.7; Box 9.2 Figure 1a, c). For example, in HadCRUT4 the trend is 0.04°C per decade over 1998–2012, compared to 0.11°C per decade over 1951–2012. The reduction in observed GMST trend is most marked in Northern Hemisphere winter (Section 2.4.3; Cohen et al., 2012). Even with this “hiatus” in GMST trend, the decade of the 2000s has been the warmest in the instrumental record of GMST (Section 2.4.3, Figure 2.19). Nevertheless, the occurrence of the hiatus in GMST trend during the past 15 years raises the two related questions of what has caused it and whether climate models are able to reproduce it.

Figure 9.8 demonstrates that 15-year-long hiatus periods are common in both the observed and CMIP5 historical GMST time series (see also Section 2.4.3, Figure 2.20; Easterling and Wehner, 2009; Liebmann et al., 2010). However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations, Section 9.3.2) reveals that 111 out of 114 realizations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble (Box 9.2 Figure 1a; CMIP5 ensemble mean trend is 0.21°C per decade). This difference between simulated and observed trends could be caused by some combination of (a) internal climate variability, (b) missing or incorrect radiative forcing and (c) model response error. These potential sources of the difference, which are not mutually exclusive, are assessed below, as is the cause of the observed GMST trend hiatus.
I'll do one better ...


... 40-year "pauses" have precedent in the observed GMST record.  And internal variability ...


... is not your friend if you think two decades of flattish GMST trend is the harbinger of the next Ice Age and/or the death rattle of AGW.
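A toy illustration of that point -- synthetic numbers, not CMIP5 output or HadCRUT4 -- shows how a fixed forced trend plus AR(1) "weather" noise routinely serves up 15-year windows whose trends land far from the long-term one:

```python
import numpy as np

# Synthetic GMST: a constant 0.02 C/yr forced trend plus AR(1) "weather"
# noise.  Purely illustrative -- not CMIP5 output, not HadCRUT4.
rng = np.random.default_rng(42)
years = np.arange(1951, 2013)            # 62 years, matching the AR5 window
forced = 0.02 * (years - years[0])       # steady forced warming
noise = np.zeros(years.size)
for i in range(1, years.size):           # AR(1) internal variability
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
gmst = forced + noise

# Full-period trend vs. every overlapping 15-year trend
full_trend = np.polyfit(years, gmst, 1)[0]
window_trends = [np.polyfit(years[i:i + 15], gmst[i:i + 15], 1)[0]
                 for i in range(years.size - 14)]
print(f"full-period trend : {full_trend:+.3f} C/yr")
print(f"15-yr trends span : {min(window_trends):+.3f} to {max(window_trends):+.3f} C/yr")
```

Even though the forced trend never changes, the 15-year windows scatter widely around it -- which is exactly why a flattish decade and a half tells you little about the underlying forcing.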

A topic for another day: why the 109-month filter sampling, how LOESS smoothing increases uncertainty at the endpoints of a series, and thus why declaring The Pause dead could bite one in the arse.
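The endpoint problem in miniature (a homemade tricube local-linear smoother on synthetic data -- not the 109-month filter just mentioned): a smoothed value in the interior of a series barely budges when later data are withheld, but the value at the series endpoint is fit from one-sided data and can shift substantially once new observations arrive.

```python
import numpy as np

def loess_point(x, y, x0, k=11):
    """Tricube-weighted local linear fit at x0 -- one step of a bare-bones LOESS."""
    d = np.abs(x - x0)
    h = np.sort(d)[k - 1]                       # bandwidth = distance to k-th neighbour
    w = np.clip(1 - (d / h) ** 3, 0, 1) ** 3    # tricube weights, zero beyond h
    coeffs = np.polyfit(x, y, 1, w=np.sqrt(w))  # weighted straight-line fit
    return np.polyval(coeffs, x0)

rng = np.random.default_rng(0)
x = np.arange(1980, 2016, dtype=float)          # 36 "years" of synthetic data
y = 0.015 * (x - x[0]) + rng.normal(0, 0.1, x.size)

# How much does the smoothed 2010 value change if we pretend it's still 2010,
# i.e. withhold 2011-2015?  Compare against an interior year, 1998.
keep = x <= 2010
shift_endpoint = abs(loess_point(x, y, 2010.0) - loess_point(x[keep], y[keep], 2010.0))
shift_interior = abs(loess_point(x, y, 1998.0) - loess_point(x[keep], y[keep], 1998.0))
print(f"endpoint shift: {shift_endpoint:.3f}, interior shift: {shift_interior:.3f}")
```

With an 11-point neighbourhood, 1998's fit doesn't use the withheld years at all, so its smoothed value is unchanged; the 2010 fit flips from two-sided to one-sided data and moves -- the same reason a smoothed curve's final wiggle is the least trustworthy part of it.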

On that latter note, here's Ye Olde Reliable ...


 Pause?   What Pause?  I don't see no steenkin' Pause!

2 comments:

  1. Excellent post, BG. Since this is now officially an echo chamber and you are preaching to the choir, let's go the whole hog and turn the mic over to tonight's surprise climate sleb model sceptic Jaaaaames Hansen!

    TH: A lot of these metrics that we develop come from computer models. How should people treat the kind of info that comes from computer climate models?

    Hansen: I think you would have to treat it with a great deal of skepticism. Because if computer models were in fact the principal basis for our concern, then you have to admit that there are still substantial uncertainties as to whether we have all the physics in there, and how accurate we have it. But, in fact, that's not the principal basis for our concern. It's the Earth's history -- how the Earth responded in the past to changes in boundary conditions, such as atmospheric composition. Climate models are helpful in interpreting that data, but they're not the primary source of our understanding.

    TH: Do you think that gets misinterpreted in the media?

    Hansen: Oh, yeah, that's intentional. The contrarians, the deniers who prefer to continue business as usual, easily recognize that the computer models are our weak point. So they jump all over them and they try to make the people, the public, believe that that's the source of our knowledge. But, in fact, it's supplementary. It's not the basic source of knowledge. We know, for example, from looking at the Earth's history, that the last time the planet was two degrees Celsius warmer, sea level was 25 meters higher.

    And we have a lot of different examples in the Earth's history of how climate has changed as the atmospheric composition has changed. So it's misleading to claim that the climate models are the primary basis of understanding.


    [Canned applause / fade]

    1. I like your echoes, BBD. Thanks for the compliment and contribution. Dr. Hansen is never one to turn in lukewarm opinions, is he.
