Issues: Availability of Climate Data Reduces Variance Among GCM Forecasts
It is widely known that we (the non-climate research centers) depend entirely on the IPCC for the climate data needed for any assessment in the context of climate change. Having an organization that provides and centralizes these kinds of datasets is critical for research programmes such as CIAT's DAPA. The IPCC therefore has the mandate of providing future climate forecasts, which on paper sounds just great.
To achieve that, the Coupled Model Intercomparison Project (CMIP) was created. Up to now, three CMIP phases have been carried out, with the participation of all the climate research centers that have developed and run a GCM (Global Climate Model). The IPCC's Fourth Assessment Report (AR4), published in 2007, was based on CMIP3. So what's the problem, then?
CMIP3 documents and datasets are freely available through an online platform (the ESG, Earth System Grid), where you just have to create an account and you will be granted access (after a few minutes, or perhaps an hour) to download any climate dataset that was used in AR4. Despite that, no clear agreement was reached on which variables each climate research center had to provide, which is, sadly, bad news for centers like CIAT.
The vast majority of our assessment models depend on four main variables (at a monthly time step): precipitation, maximum temperature, minimum temperature and mean temperature, and for more than half of the GCMs in the CMIP3 data portal there are no minimum and maximum temperature data. I have been wondering throughout the past year: why was that? I arrived at the conclusion that four facts led to what I feel is a big mistake:
- Climate research centers were able to decide which variables they wanted to release, so, in short, they did what they wanted
- Most of the analyses in AR4 used only mean temperature and precipitation data, leading some people to think that these were the only variables needed for impact assessment
- No impact-assessment offices, research centers or other potential users of these climate forecasts were consulted when defining which variables should be included in the data portal
- No central agreement was made within CMIP3.
Hopefully, all this will be fixed for the Fifth Assessment Report's model intercomparison project (CMIP5), which has a central agreement and has included the variables other research centers need in its official variable list (see page 20). At DAPA, we did our homework: we downloaded all the climate datasets and compared the variance among GCMs when using a reduced set of GCMs (forced on us by these data-availability constraints). Here is what we found for the Andean region, under the SRES A1B emissions scenario for the 2050s:
The four maps in the upper row show: (a) average change in precipitation (in mm) across 24 GCMs, (b) standard deviation of the precipitation change (in mm) for the same 24 GCMs, (c) average change in temperature (in °C) across 24 GCMs, and (d) standard deviation of the temperature change (in °C) for the same 24 GCMs. The bottom row shows the same, but using only the 11 GCMs for which minimum and maximum temperature data were available.
Note that in some cases even the average precipitation-change pattern differs from one set to the other (see northern Bolivia, for example), and for most areas and both variables the variance shrinks as the number of GCMs is reduced. There are some areas where the variance increases instead (see the intersection of Colombia, Peru and Ecuador in the rightmost column), by at least 50%. More surprising is the fact that the responses of impact-assessment models will be influenced by these issues as well. So I will just say it: we need to report these issues whenever we use the IPCC's datasets. There is no way to escape it.
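The comparison behind those maps is simple to reproduce: stack each GCM's projected change as a grid, then take the mean and standard deviation across the model axis for the full ensemble and for the reduced subset. Here is a minimal numpy sketch; the synthetic random grid and the model count of 24 vs 11 stand in for the actual CMIP3 anomaly rasters, and the subset indices are illustrative, not the real model list.

```python
import numpy as np

# Synthetic stand-in: each GCM's projected precipitation change (mm)
# is a 2-D grid (here 3x3 cells); in practice these grids would come
# from the downloaded CMIP3 anomaly rasters for the study region.
rng = np.random.default_rng(42)
n_models = 24
precip_change = rng.normal(loc=-50, scale=30, size=(n_models, 3, 3))

# Ensemble statistics over the full 24-GCM set (axis 0 = model axis).
full_mean = precip_change.mean(axis=0)
full_std = precip_change.std(axis=0, ddof=1)

# Restrict to the 11 GCMs that also provide tmin/tmax data
# (indices here are purely illustrative).
subset = precip_change[:11]
sub_mean = subset.mean(axis=0)
sub_std = subset.std(axis=0, ddof=1)

# Cell-by-cell relative change in inter-model spread caused by
# dropping models: negative = variance shrinks, positive = it grows.
rel_std_change = (sub_std - full_std) / full_std
print(rel_std_change.round(2))
```

The same per-cell maps of `sub_mean`, `sub_std` and `rel_std_change`, computed on real rasters, are what the figure's bottom row and the variance comparison summarize.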
I wonder whether anyone outside the climate research centers has found a solution to this issue?