# ClimX: a challenge for extreme-aware climate model emulation

ClimX is a challenge for building fast and accurate machine learning emulators of the NorESM2-MM Earth System Model, with evaluation focused on climate extremes rather than mean climate alone.
## Dataset summary
This dataset contains the full-resolution ClimX data in NetCDF-4 format (targets + forcings, depending on the split) on the model's native 192 × 288 grid (about 1° resolution). It also contains a lite-resolution version on a spatially coarsened grid:
- Lite-resolution: <1 GB, spatially coarsened, meant for rapid prototyping.
- Full-resolution: ~200 GB, native-grid data for large-scale training.
## What you will do (high level)
You train an emulator that predicts daily 2D fields for 7 surface variables: `tas`, `tasmax`, `tasmin`, `pr`, `huss`, `psl`, `sfcWind`.
However, the benchmark targets are 15 extreme indices derived from daily temperature and precipitation (ETCCDI-style indices). The daily fields are an intermediate output your emulator produces (useful for diagnostics and for computing the indices).
Conceptually, the emulator is a mapping

$$Y = f_\theta(X),$$

where $X$ denotes the forcings (greenhouse gases + aerosols) and $Y$ the climate state (the daily fields above).
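For intuition, here is a minimal sketch (not the official ClimX metric code) of one ETCCDI-style index, TXx, the annual maximum of daily maximum temperature, computed per grid cell with xarray. The function name and input layout are illustrative assumptions:

```python
import xarray as xr

def txx(tasmax: xr.DataArray) -> xr.DataArray:
    """TXx: annual maximum of daily maximum temperature, per grid cell.

    Assumes `tasmax` is a daily DataArray with a "time" dimension,
    as in the ClimX target fields.
    """
    return tasmax.groupby("time.year").max("time")
```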
## Dataset structure
### Spatial and temporal shape
Full-resolution daily fields:

- Historical: `lat: 192, lon: 288, time: 60224`
- Projections: `lat: 192, lon: 288, time: 31389`
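As a quick sanity check, the advertised dimensions can be verified on any downloaded file (a sketch; the path is a placeholder, see "How to load the data" below):

```python
import xarray as xr

# Placeholder path: substitute a file downloaded as described below.
ds = xr.open_dataset("/path/to/a/climx_file.nc")
assert ds.sizes["lat"] == 192 and ds.sizes["lon"] == 288
print(ds.sizes)  # e.g. time: 60224 for the historical split
```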
### Splits and scenarios (official challenge setup)
Training uses historical + several SSP scenarios; testing is on the held-out SSP2-4.5 scenario:

- Train: historical (1850–2014) + `ssp126`, `ssp370`, `ssp585` (2015–2100)
- Test (held-out): `ssp245` (2015–2100)
To avoid leakage, targets for `ssp245` are withheld during the official evaluation; only the forcings are provided for that scenario. The full `ssp245` outputs will be released after the competition.
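If you want to encode the split in code, a hypothetical set of constants mirroring the official setup might look like this (scenario names as used in this dataset):

```python
# Official ClimX split, expressed as plain Python constants.
TRAIN_SCENARIOS = ["historical", "ssp126", "ssp370", "ssp585"]
TEST_SCENARIO = "ssp245"  # forcings provided; targets withheld until after the competition
```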
## How to load the data
This dataset is distributed as NetCDF-4 files. There are two common ways to load it.
### Option 1 (recommended): clone the ClimX code and use the helper loader
The ClimX repository includes a helper module (`src/data/climx_hf.py`) that downloads the dataset from Hugging Face and opens it as three lazily loaded "virtual" xarray datasets:
```bash
git clone https://github.com/IPL-UV/ClimX.git
cd ClimX
pip install -U "huggingface-hub" xarray netcdf4 dask
```
```python
from src.data.climx_hf import download_climx_from_hf, open_climx_virtual_datasets

# Download NetCDF artifacts from HF into a local cache directory.
root = download_climx_from_hf("/path/to/hf_cache", variant="full")

# Open as three virtual datasets (lazy / dask-friendly).
ds = open_climx_virtual_datasets(root, variant="full")  # or "lite"

ds.hist           # historical (targets + forcings)
ds.train          # projection training scenarios (targets + forcings; excludes `ssp245`)
ds.test_forcings  # `ssp245` forcings only (no targets)
```
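As a hypothetical usage example (assuming the virtual datasets expose the variable names listed above), you can subset lazily before triggering any actual reads:

```python
# Select one variable and a time slice from the training split;
# with dask-backed datasets, data is only read on .load()/.compute().
tas = ds.train["tas"].sel(time=slice("2015-01-01", "2020-12-31"))
print(tas.sizes)
```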
### Option 2: download NetCDFs and open with xarray directly
You can also download files from Hugging Face and open them with xarray.
Example:
```python
from huggingface_hub import hf_hub_download
import xarray as xr

path = hf_hub_download(
    repo_id="isp-uv-es/ClimX",
    repo_type="dataset",
    filename="PATH/TO/A/FILE.nc",  # replace with an actual file in this dataset repo
)
ds = xr.open_dataset(path)
print(ds)
```
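If you do not know a valid filename, you can list the repository contents first with `huggingface_hub.list_repo_files` (a sketch; the repo's file layout is not documented here):

```python
from huggingface_hub import list_repo_files

# List all NetCDF files in the dataset repo to find a real `filename`
# to pass to hf_hub_download above.
nc_files = [f for f in list_repo_files("isp-uv-es/ClimX", repo_type="dataset")
            if f.endswith(".nc")]
print(nc_files[:10])
```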
## Links

- Code: https://github.com/IPL-UV/ClimX
## License and usage
The dataset is released under the MIT license. In addition, if you are participating in the ClimX competition, please follow the competition rules (notably the restrictions on external climate training data and on redistribution of competition data).