Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)
Baker, A., Hu, Y., Hammerling, D., Tseng, Y., Xu, H., et al. (2016). Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0). Geoscientific Model Development, doi:10.5194/gmd-9-2391-2016
Title | Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0) |
---|---|
Author(s) | Allison Baker, Yong Hu, Dorit Hammerling, Yuheng Tseng, Haiying Xu, Xiaomeng Huang, Frank O. Bryan, Guangwen Yang |
Abstract | The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a quantitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important.
Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective, and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code. |
Publication Title | Geoscientific Model Development |
Publication Date | Jul 12, 2016 |
Publisher's Version of Record | https://dx.doi.org/10.5194/gmd-9-2391-2016 |
OpenSky Citable URL | https://n2t.net/ark:/85065/d7251ktn |
OpenSky Listing | View on OpenSky |
CISL Affiliations | TDD, ASAP, GSP |
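The abstract describes the core of the POP-ECT idea: compute a standard score (z-score) for a new solution at each grid point from the ensemble mean and standard deviation, then flag the run as distinguishable if too large a fraction of points exceed a threshold. A minimal sketch of that idea follows; the function name, the z-score threshold, and the pass/fail fraction are illustrative assumptions, not the paper's calibrated values or the actual pyCECT implementation.

```python
import numpy as np

def pop_ect_sketch(ensemble, new_run, z_threshold=3.0, fail_fraction=0.05):
    """Sketch of the ensemble consistency idea described in the abstract.

    ensemble : array of shape (n_members, n_points) -- ensemble solutions
    new_run  : array of shape (n_points,)           -- new model solution
    Returns (fraction of points with large scores, consistent? flag).
    Thresholds here are illustrative assumptions only.
    """
    mean = ensemble.mean(axis=0)
    std = ensemble.std(axis=0, ddof=1)
    std = np.where(std == 0, np.finfo(float).eps, std)  # guard constant points
    z = np.abs(new_run - mean) / std         # standard score at each grid point
    frac = np.mean(z > z_threshold)          # fraction of points with large scores
    return frac, bool(frac <= fail_fraction) # consistent if that fraction is small

# Toy usage with synthetic data standing in for a 2-D ocean field
rng = np.random.default_rng(0)
ens = rng.normal(15.0, 0.1, size=(30, 1000))  # 30 ensemble members, 1000 points
new = rng.normal(15.0, 0.1, size=1000)        # a run drawn from the same process
frac, ok = pop_ect_sketch(ens, new)
```

A run drawn from the same distribution as the ensemble should pass, while a run with a systematic offset should produce large scores nearly everywhere and fail, mirroring the pass/fail behavior the abstract attributes to POP-ECT.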