---
license: fair-noncommercial-research-license
language:
- en
---

# A multi-view contrastive learning framework for spatial embeddings in risk modeling

In this repository, we provide the pretrained models described in our paper:

> Holvoet, F., Blier-Wong, C., & Antonio, K. (2025). A multi-view contrastive learning framework for spatial embeddings in risk modeling. *arXiv preprint arXiv:2511.17954*.

The paper is available as a preprint on arXiv: **[arXiv:2511.17954](https://arxiv.org/abs/2511.17954)**.

This model repository accompanies our [GitHub repository](https://github.com/freekholvoet/MultiviewSpatialEmbeddings), which contains the code to train the models and a usage example.

## Using the pretrained models

The pretrained models described in Section 3.4 of the paper are provided in this repository.

There are five pretrained models available; the snippet after this list shows how to query the repository for them programmatically:

- `EU16_GS32_OSM16.ckpt`
- `EU16_OSM16.ckpt`
- `EU32_GS96_OSM32.ckpt`
- `EU64_GS64.ckpt`
- `EU8_GS32_OSM32.ckpt`

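If you prefer to discover the available checkpoints programmatically, the minimal sketch below uses the public `huggingface_hub` API; it only assumes the repository id `FreekH/multiview_spatial_embedding` used in the example further down.

```python
from huggingface_hub import list_repo_files

# List the checkpoint files currently stored in the model repository
ckpt_files = [f for f in list_repo_files("FreekH/multiview_spatial_embedding") if f.endswith(".ckpt")]
print(ckpt_files)
```
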
The example below shows how to download a model and compute embeddings for a list of latitude/longitude coordinates:

```python
from huggingface_hub import hf_hub_download
from load_lightweight import get_mvloc_encoder
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Example coordinates (latitude, longitude) of various European cities
c = torch.tensor([
    (50.8503, 4.3517),   # Brussels
    (48.8566, 2.3522),   # Paris
    (51.5074, -0.1278),  # London
    (52.5200, 13.4050),  # Berlin
    (41.9028, 12.4964),  # Rome
    (40.4168, -3.7038),  # Madrid
    (59.3293, 18.0686),  # Stockholm
    (60.1699, 24.9384),  # Helsinki
    (47.4979, 19.0402),  # Budapest
    (48.2082, 16.3738),  # Vienna
], dtype=torch.float32)

# Download the selected checkpoint from the Hugging Face Hub and build the encoder
model = get_mvloc_encoder(
    hf_hub_download("FreekH/multiview_spatial_embedding", "MODEL_NAME.ckpt"),
    device=device,
)
model.to(device)

# Cast the coordinates to double precision and compute the embeddings
with torch.no_grad():
    emb = model(c.to(device).double()).detach().cpu().numpy()
```

Replace `MODEL_NAME.ckpt` with the desired model filename from the list above. The [GitHub repository](https://github.com/freekholvoet/MultiviewSpatialEmbeddings) contains a Jupyter notebook, `Add_embeddings_to_data.ipynb`, with a function that systematically adds the embeddings to a data set containing latitude and longitude features.

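As an illustration only, the sketch below shows what such a helper could look like; it is a minimal implementation written for this model card rather than the notebook's code, and the function name `add_embeddings` as well as the default column names `latitude`/`longitude` are assumptions. It reuses the `model` and `device` objects from the example above.

```python
import pandas as pd
import torch

def add_embeddings(df, model, lat_col="latitude", lon_col="longitude", device="cpu"):
    """Append the spatial embedding of each (lat, lon) pair to df as columns emb_0, emb_1, ..."""
    coords = torch.tensor(df[[lat_col, lon_col]].to_numpy(), dtype=torch.float64, device=device)
    with torch.no_grad():
        emb = model(coords).detach().cpu().numpy()
    emb_cols = pd.DataFrame(emb, index=df.index, columns=[f"emb_{i}" for i in range(emb.shape[1])])
    return pd.concat([df, emb_cols], axis=1)

# Hypothetical usage on a data set with columns "lat" and "lon":
# policies = add_embeddings(policies, model, lat_col="lat", lon_col="lon", device=device)
```
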
## Citation

Citing the paper:

```bibtex
@article{holvoet2025multiview,
  title={A multi-view contrastive learning framework for spatial embeddings in risk modeling},
  author={Holvoet, Freek and Blier-Wong, Christopher and Antonio, Katrien},
  journal={arXiv preprint arXiv:2511.17954},
  year={2025}
}
```

Citing the models:

```bibtex
@misc{holvoet_pretrainedmodels,
  author = {Freek Holvoet},
  title = {Spatial embeddings via multiview contrastive learning},
  year = {2025},
  note = {Pretrained spatial embedding models},
  url = {https://huggingface.co/FreekH/multiview_spatial_embedding},
  doi = {10.57967/hf/7009},
  publisher = {Hugging Face}
}
```