---
license: fair-noncommercial-research-license
language:
- en
---
# A multi-view contrastive learning framework for spatial embeddings in risk modeling
In this repository, we provide the pretrained models as described in our paper:
> Holvoet, F., Blier-Wong, C., & Antonio, K. (2025). A multi-view contrastive learning framework for spatial embeddings in risk modeling. *arXiv preprint arXiv:2511.17954*.
The paper is available as a preprint on arXiv: **[arXiv:2511.17954](https://arxiv.org/abs/2511.17954)**.
This model repository accompanies our [GitHub repository](https://github.com/freekholvoet/MultiviewSpatialEmbeddings), which contains the code to train the models and a usage example.
## Using the pretrained models
The pretrained models described in Section 3.4 of the paper are provided in this repository.
There are five different models available:
- `EU16_GS32_OSM16.ckpt`
- `EU16_OSM16.ckpt`
- `EU32_GS96_OSM32.ckpt`
- `EU64_GS64.ckpt`
- `EU8_GS32_OSM32.ckpt`
Example of how to download a model and compute embeddings for a list of latitude and longitude coordinates:
```python
from huggingface_hub import hf_hub_download
import torch

# get_mvloc_encoder is provided by load_lightweight.py in the GitHub repository
from load_lightweight import get_mvloc_encoder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Example coordinates (latitude, longitude) of various European cities
c = torch.tensor([
    (50.8503, 4.3517),   # Brussels
    (48.8566, 2.3522),   # Paris
    (51.5074, -0.1278),  # London
    (52.5200, 13.4050),  # Berlin
    (41.9028, 12.4964),  # Rome
    (40.4168, -3.7038),  # Madrid
    (59.3293, 18.0686),  # Stockholm
    (60.1699, 24.9384),  # Helsinki
    (47.4979, 19.0402),  # Budapest
    (48.2082, 16.3738),  # Vienna
], dtype=torch.float32)

# Download the checkpoint from the Hugging Face Hub and build the encoder
model = get_mvloc_encoder(
    hf_hub_download("FreekH/multiview_spatial_embedding", "MODEL_NAME.ckpt"),
    device=device
)
model.to(device)

# Compute the embeddings without tracking gradients
with torch.no_grad():
    emb = model(c.to(device).double()).detach().cpu().numpy()
```
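The resulting `emb` is a NumPy array with one row per input coordinate; the number of columns is the embedding dimension of the chosen model.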
Replace `MODEL_NAME.ckpt` with the desired model filename from the list above. The [GitHub repository](https://github.com/freekholvoet/MultiviewSpatialEmbeddings) contains a Jupyter notebook, `Add_embeddings_to_data.ipynb`, with a function that systematically adds embeddings to a data set containing a latitude and a longitude feature; a minimal sketch of that idea is shown below.
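The notebook's exact helper is not reproduced here; the following is a minimal sketch of the same idea, assuming a pandas DataFrame with `latitude` and `longitude` columns and reusing the `model` and `device` objects from the example above (the function and output column names are illustrative, not the notebook's API):

```python
import pandas as pd
import torch

def add_embeddings(df, model, device, lat_col="latitude", lon_col="longitude"):
    # Stack the latitude/longitude columns into an (n, 2) tensor,
    # using double precision as in the example above
    coords = torch.tensor(df[[lat_col, lon_col]].to_numpy(), dtype=torch.float64)

    # Compute embeddings without tracking gradients
    with torch.no_grad():
        emb = model(coords.to(device)).detach().cpu().numpy()

    # Append one column per embedding dimension to the original DataFrame
    emb_cols = [f"emb_{i}" for i in range(emb.shape[1])]
    return df.join(pd.DataFrame(emb, columns=emb_cols, index=df.index))
```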
## Citation
Citing the paper:
```bibtex
@article{holvoet2025multiview,
title={A multi-view contrastive learning framework for spatial embeddings in risk modeling},
author={Holvoet, Freek and Blier-Wong, Christopher and Antonio, Katrien},
journal={arXiv preprint arXiv:2511.17954},
year={2025}
}
```
Citing the models:
```bibtex
@misc{holvoet_pretrainedmodels,
  author = {Holvoet, Freek},
  title = {Spatial embeddings via multiview contrastive learning},
  year = {2025},
  note = {Pretrained spatial embedding models},
  url = {https://huggingface.co/FreekH/multiview_spatial_embedding},
  doi = {10.57967/hf/7009},
  publisher = {Hugging Face}
}
```