A posteriori verification of how skillful a climate prediction or a set of predictions has been – here we focus on monthly to decadal timescales – is essential to estimate the quality of a climate forecast system and its potential usefulness in the future.
Summary
A climate model at any level of complexity is an approximation of the climate system that also employs approximate boundary and initial conditions. Hence, the integration of a climate model can lead to significant drift and bias with respect to observations/analyses/reanalyses (OBS), which demand post-processing adjustments to yield predictions with practical skill. In general, weather and climate forecast verification at all time scales is an active area of research that gives us insight into the quality and/or utility of a forecast system. Assessing forecast quality also provides beneficial feedback for the development of forecast systems as well as for the improvement of bias correction and calibration methods.
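As an illustration of such a post-processing adjustment, the sketch below removes a lead-time-dependent mean bias (drift) from a set of hindcasts. It is a minimal sketch only; the array names and shapes are assumptions for illustration and do not describe any particular forecast system.

```python
# Minimal sketch (illustrative assumptions): hindcast with shape
# (start_date, member, lead_time) and matching OBS with shape
# (start_date, lead_time), both hypothetical numpy arrays.
import numpy as np

def drift_correct(hindcast, obs):
    # Mean drift/bias per lead time, averaged over start dates and ensemble members
    drift = hindcast.mean(axis=(0, 1)) - obs.mean(axis=0)   # shape: (lead_time,)
    # Remove the lead-time-dependent bias from every forecast
    return hindcast - drift[np.newaxis, np.newaxis, :]
```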
At the most basic level, climate forecast verification is the investigation of the properties of the joint distribution of forecasts and OBS (observations or estimates of what actually occurred). A single verification measure cannot expose all key aspects or attributes of forecast quality (bias, reliability, resolution, sharpness, etc.). A prudent selection of complementary summary scores should reveal how well climate predictions correspond to the associated OBS. Such metrics/indices of forecast quality must be informative and tailored to address specific questions of interest for the growing spectrum of climate forecasters, forecast developers and users. OBS are critical ingredients of the forecast process at initialization and during the adjustment and verification phases. Thus, the changing availability and quality of OBS, and the requirement to separate training and test OBS (cross-validation to avoid artificial skill), must be taken into account.
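One common way to keep training and test OBS separate is leave-one-out cross-validation, in which the climatology used to form the anomalies for a given year never includes that year. The sketch below illustrates the idea for a single time series; the function and variable names are hypothetical and the inputs are assumed to be simple yearly series.

```python
# Minimal sketch of leave-one-out cross-validation to avoid artificial skill;
# fcst and obs are assumed 1-D yearly series at one location and lead time.
import numpy as np

def loo_anomalies(x):
    # Anomaly of each year with respect to the climatology of all other years
    n = x.shape[0]
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        out[i] = x[i] - x[mask].mean()
    return out

def anomaly_correlation(fcst, obs):
    fa, oa = loo_anomalies(fcst), loo_anomalies(obs)
    return np.corrcoef(fa, oa)[0, 1]
```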
Climate forecasts of categorical and continuous predictands can be deterministic or probabilistic. There is an expanding variety of verification scores available to analyze these different attributes. Forecast skill is commonly reported as a skill score – the relative improvement of a particular measure (e.g. mean absolute error, mean squared error, ranked probability score) over some baseline, i.e. a statistical reference forecast such as climatology, persistence or anomaly persistence. Therefore, the development of empirical prediction methods in parallel with the development of state-of-the-art climate models is important for establishing a hierarchy of quality among the available dynamical and statistical forecast systems.
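A generic skill score can be written as SS = 1 - score_forecast / score_reference, so that SS = 1 for a perfect forecast and SS = 0 for a forecast no better than the reference. The sketch below applies this to the mean squared error with a climatological reference; it is a toy example under assumed 1-D inputs, and other baselines such as persistence would simply replace the reference series.

```python
# Minimal sketch of an MSE-based skill score against a climatological reference;
# fcst and obs are assumed 1-D numpy arrays over the verification period.
import numpy as np

def mse(pred, obs):
    return np.mean((pred - obs) ** 2)

def mse_skill_score(fcst, obs):
    ref = np.full_like(obs, obs.mean())          # climatological reference forecast
    return 1.0 - mse(fcst, obs) / mse(ref, obs)  # 1 = perfect, 0 = no better, < 0 = worse than reference
```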
Users may also require measures of utility that examine the added economic value of comprehensive climate predictions, not only measures of added quality (with respect to some baseline). Forecast quality and forecast value are not necessarily overlapping facets of forecast usefulness and must be assessed separately for each purpose. Overall, it is critical to take into consideration the statistical significance of relative improvements as well as the socio-economic benefits achievable through forecast-assisted decision-making.
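To illustrate how value differs from quality, the sketch below computes the relative economic value of a binary forecast in a simple, standard cost-loss decision model: the same hit and false-alarm rates yield different value for users with different cost/loss ratios. All input names are assumptions for illustration, not a prescription of this research line's user-oriented measures.

```python
# Minimal sketch of relative economic value in a cost-loss model:
# a user pays cost C to protect against an event that would otherwise cause loss L.
def relative_value(hit_rate, false_alarm_rate, base_rate, cost_loss_ratio):
    a, s = cost_loss_ratio, base_rate
    # Expected expense (in units of L) when acting on the forecast
    expense_forecast = s * hit_rate * a + (1 - s) * false_alarm_rate * a + s * (1 - hit_rate)
    expense_climate = min(a, s)   # cheaper of "always protect" and "never protect"
    expense_perfect = s * a       # protect only when the event actually occurs
    # 1 = as valuable as a perfect forecast, 0 = no better than climatological decisions
    return (expense_climate - expense_forecast) / (expense_climate - expense_perfect)
```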
Objectives
The objectives of this research line are the development and application of:
1. skill scores and other metrics for forecast verification
2. statistical prediction methods for reference forecasts
3. post-processing methods for drift and bias correction and for calibration
4. techniques to assess prediction skill based on multiple OBS and forecast sources
5. user-oriented measures that address specific aspects of forecast quality or utility