arxiv:2601.21866

MoHETS: Long-term Time Series Forecasting with Mixture-of-Heterogeneous-Experts

Published on Jan 29

Abstract

MoHETS is an encoder-only Transformer whose sparse Mixture-of-Heterogeneous-Experts layers combine convolutional and Fourier-based experts to improve long-horizon multivariate time series forecasting while remaining parameter-efficient.

AI-generated summary

Real-world multivariate time series can exhibit intricate multi-scale structure, including global trends, local periodicities, and non-stationary regimes, which makes long-horizon forecasting challenging. Although sparse Mixture-of-Experts (MoE) approaches improve scalability and specialization, they typically rely on homogeneous MLP experts that poorly capture the diverse temporal dynamics of time series data. We address these limitations with MoHETS, an encoder-only Transformer that integrates sparse Mixture-of-Heterogeneous-Experts (MoHE) layers. Each MoHE layer routes temporal patches to a small subset of expert networks, combining a shared depthwise-convolution expert for sequence-level continuity with routed Fourier-based experts for patch-level periodic structure. MoHETS further improves robustness to non-stationary dynamics by incorporating exogenous information via cross-attention over covariate patch embeddings. Finally, we replace parameter-heavy linear projection heads with a lightweight convolutional patch decoder, which improves parameter efficiency, reduces training instability, and allows a single model to generalize across arbitrary forecast horizons. We validate MoHETS on seven multivariate benchmarks and multiple horizons: it consistently achieves state-of-the-art performance and reduces average MSE by 12% relative to strong recent baselines, demonstrating effective heterogeneous specialization for long-term forecasting.
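
Below is a minimal PyTorch sketch of what such a MoHE layer might look like, assuming top-k token-level routing over the Fourier experts and treating each Fourier expert as a learnable spectral filter over a patch embedding; the class names, expert count, and exact filter form are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a Mixture-of-Heterogeneous-Experts (MoHE) layer:
# a shared depthwise-convolution expert that always runs over the patch
# sequence, plus top-k routed Fourier experts applied per patch token.
# Names, shapes, and the spectral-filter expert are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FourierExpert(nn.Module):
    """Routed expert: a learnable spectral filter over one patch embedding."""

    def __init__(self, d_model: int):
        super().__init__()
        n_freq = d_model // 2 + 1  # rFFT bins over the embedding dimension
        self.filter = nn.Parameter(torch.randn(n_freq, dtype=torch.cfloat) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        spec = torch.fft.rfft(x, dim=-1) * self.filter
        return torch.fft.irfft(spec, n=x.shape[-1], dim=-1)


class MoHELayer(nn.Module):
    """Shared depthwise-conv expert + sparsely routed Fourier experts."""

    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2, kernel: int = 3):
        super().__init__()
        # Shared expert: depthwise convolution along the patch (sequence) axis
        # for sequence-level continuity.
        self.shared = nn.Conv1d(d_model, d_model, kernel,
                                padding=kernel // 2, groups=d_model)
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(FourierExpert(d_model) for _ in range(n_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_patches, d_model); the surrounding Transformer block is
        # assumed to add the residual connection around this layer.
        shared_out = self.shared(x.transpose(1, 2)).transpose(1, 2)

        # Sparse routing: each patch token selects its top-k Fourier experts.
        gates = F.softmax(self.router(x), dim=-1)           # (B, P, E)
        top_w, top_idx = gates.topk(self.top_k, dim=-1)     # (B, P, k)
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)

        flat_x = x.reshape(-1, x.shape[-1])
        flat_idx = top_idx.reshape(-1, self.top_k)
        flat_w = top_w.reshape(-1, self.top_k)
        flat_out = torch.zeros_like(flat_x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = flat_idx[:, slot] == e
                if mask.any():
                    flat_out[mask] += flat_w[mask, slot, None] * expert(flat_x[mask])

        return shared_out + flat_out.reshape_as(x)
```

Under these assumptions the routed computation stays sparse: each patch token touches only top_k of the n_experts Fourier filters, while the depthwise convolution provides a dense, sequence-level path for continuity.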

Community

model_architecture

MoHETS: an encoder-only Transformer for multivariate time-series forecasting. (a) The input embedding module splits the input channels into sequences of channel-independent patch embeddings. (b) The exogenous embedding module projects, fuses, and patches the covariates with the input series to produce aligned exogenous patch embeddings. These patches are processed through B stacked Transformer blocks; each block is composed of self-attention, cross-attention, and a (c) Mixture-of-Heterogeneous-Experts (MoHE) layer, where a shared depthwise-convolution expert maintains sequence continuity and routed Fourier experts resolve local spectral patterns. (d) The patch decoder head projects the final embeddings to the forecast horizon.
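
Reading the panels above as modules, one encoder block and the decoder head might be sketched as follows; the pre-norm ordering, head counts, and the resampling used to serve arbitrary horizons are illustrative assumptions, and the MoHE layer refers to the sketch given earlier on this page.

```python
# Hypothetical sketch of one MoHETS block (panels b/c) and the convolutional
# patch decoder (panel d). Pre-norm ordering, shapes, and the resampling to
# an arbitrary horizon are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoHETSBlock(nn.Module):
    """Self-attention over target patches, cross-attention into exogenous
    patch embeddings, then the MoHE layer (see the sketch above)."""

    def __init__(self, d_model: int, n_heads: int, mohe: nn.Module):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mohe = mohe
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, exo: torch.Tensor) -> torch.Tensor:
        # x:   (batch, n_patches, d_model)      target patch embeddings
        # exo: (batch, n_exo_patches, d_model)  aligned covariate patch embeddings
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        x = x + self.cross_attn(h, exo, exo, need_weights=False)[0]
        return x + self.mohe(self.norm3(x))


class ConvPatchDecoder(nn.Module):
    """Lightweight convolutional head replacing per-horizon linear projections."""

    def __init__(self, d_model: int, patch_len: int):
        super().__init__()
        self.conv = nn.Conv1d(d_model, patch_len, kernel_size=3, padding=1)

    def forward(self, z: torch.Tensor, horizon: int) -> torch.Tensor:
        # z: (batch, n_patches, d_model) -> decoded series of length `horizon`
        y = self.conv(z.transpose(1, 2))                # (B, patch_len, n_patches)
        y = y.transpose(1, 2).reshape(z.shape[0], -1)   # stitch patches back together
        # Assumption: resample the decoded series so a single trained head can
        # serve arbitrary forecast horizons.
        return F.interpolate(y.unsqueeze(1), size=horizon,
                             mode="linear", align_corners=False).squeeze(1)
```

As a usage note, stacking B such blocks over the channel-independent patch embeddings and applying the decoder to the final embeddings would reproduce the forward pass described in the caption, with the hypothetical interpolation step standing in for however the paper actually handles horizon generalization.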
