# MobiusNet
A geometric deep learning architecture using Möbius wave interference lenses for efficient image classification.
## Model Description
MobiusNet learns frequency-selective sparse coding through three drifting wave functions (L, M, R) combined via learnable XOR/AND logic. The architecture progressively sharpens selectivity through depth, culminating in near-binary winner-take-all gating at the final block.
## Primary Concerns

The FLOPs are considerably higher than comparable variants. The system does work, and it does improve the output, but training time is higher because of the twist-in/twist-out projections, and differentiating through them introduces additional uncertainty. The input must be controlled during distillation and the output must be calibrated to match.

The experiment was successful, but the optimization is not yet good enough to make this a practical solution.

I believe, though I am not yet certain, that the ksimplex geometric linear can provide that capability within the architecture itself. So far the tests show it can produce very erratic results, so more testing is needed.
## Wave Interference Mechanism

Each Möbius Lens computes:

```
L = exp(-α · sin²(ω · s · (x + drift_L · t)))   # Left wave   (drift = +1)
M = exp(-α · sin²(ω · s · (x + drift_M · t)))   # Middle wave (drift = 0)
R = exp(-α · sin²(ω · s · (x + drift_R · t)))   # Right wave  (drift = -1)

XOR  = |L + R - 2·L·R|
AND  = L · R
gate = σ(LayerNorm(w·[L, M, R] × (0.5 + 0.5·(xor_w·XOR + (1 - xor_w)·AND))))
```
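The soft XOR term acts as an exclusivity detector: it is large when exactly one of L and R resonates and small when both or neither do. A minimal sketch of just the XOR/AND mix at a few hand-picked wave values (the `combine` helper is illustrative, not part of the model code):

```python
import torch

def combine(L, R, xor_w):
    # Soft XOR: |L + R - 2LR| is 1 when exactly one wave resonates, 0 when both or neither do.
    xor = (L + R - 2 * L * R).abs()
    and_ = L * R
    return xor_w * xor + (1 - xor_w) * and_

L = torch.tensor([1.0, 1.0, 0.0, 0.5])
R = torch.tensor([0.0, 1.0, 0.0, 0.5])
print(combine(L, R, xor_w=0.99))  # ~[0.99, 0.01, 0.00, 0.50] -> near-pure XOR, as in the final block
print(combine(L, R, xor_w=0.40))  # ~[0.40, 0.60, 0.00, 0.35] -> AND-leaning mix, as in early blocks
```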
## Learned Progression
| Block | ω | α | XOR weight | L/M/R means | Behavior |
|---|---|---|---|---|---|
| S0B0 | 1.55 | 0.64 | 0.40 | 0.80/0.92/0.71 | Broad overlapping |
| S0B1 | 3.01 | 0.22 | 0.69 | 0.82/0.80/0.83 | Nearly all passes |
| S1B0 | 0.93 | 2.00 | 0.79 | 0.86/0.87/0.81 | Sharpening |
| S1B1 | 1.63 | 0.50 | 0.41 | 0.86/0.48/0.55 | M/R diverge |
| S2B0 | 1.64 | 2.09 | 0.58 | 0.12/0.08/0.20 | Sparse |
| S2B1 | 2.68 | 5.22 | 0.99 | 0.02/0.02/0.05 | Winner-take-all |
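These per-block values can be read back from a loaded checkpoint. A minimal sketch, assuming the `model` instance constructed in the Inference section below, and reporting the effective α = |α| + 0.1 and σ(xor_weight) used by the forward pass:

```python
for si, stage in enumerate(model.stages):
    for bi, block in enumerate(stage):
        lens = block.lens
        print(f"S{si}B{bi}: omega={lens.omega.item():.2f}, "
              f"alpha={lens.alpha.abs().item() + 0.1:.2f}, "
              f"xor_weight={torch.sigmoid(lens.xor_weight).item():.2f}")
```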
## Usage

### Installation

```bash
pip install torch safetensors huggingface_hub
```
### Inference

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
# ============================================================================
# ARCHITECTURE
# ============================================================================
class MobiusLens(nn.Module):
    def __init__(self, dim, layer_idx, total_layers, scale_range=(0.5, 2.5)):
        super().__init__()
        # Depth position in [0, 1]; controls the twist angle and the scale pair.
        self.t = layer_idx / max(total_layers - 1, 1)
        scale_span = scale_range[1] - scale_range[0]
        step = scale_span / max(total_layers, 1)
        self.register_buffer('scales', torch.tensor([
            scale_range[0] + self.t * scale_span,
            scale_range[0] + self.t * scale_span + step
        ]))
        self.twist_in_angle = nn.Parameter(torch.tensor(self.t * math.pi))
        self.twist_in_proj = nn.Linear(dim, dim, bias=False)
        self.omega = nn.Parameter(torch.tensor(math.pi))
        self.alpha = nn.Parameter(torch.tensor(1.5))
        self.phase_l = nn.Parameter(torch.zeros(2))
        self.drift_l = nn.Parameter(torch.ones(2))
        self.phase_m = nn.Parameter(torch.zeros(2))
        self.drift_m = nn.Parameter(torch.zeros(2))
        self.phase_r = nn.Parameter(torch.zeros(2))
        self.drift_r = nn.Parameter(-torch.ones(2))
        self.accum_weights = nn.Parameter(torch.tensor([0.4, 0.2, 0.4]))
        self.xor_weight = nn.Parameter(torch.tensor(0.7))
        self.gate_norm = nn.LayerNorm(dim)
        self.twist_out_angle = nn.Parameter(torch.tensor(-self.t * math.pi))
        self.twist_out_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        # Twist in
        cos_t, sin_t = torch.cos(self.twist_in_angle), torch.sin(self.twist_in_angle)
        x = x * cos_t + self.twist_in_proj(x) * sin_t
        # Wave interference
        x_norm = torch.tanh(x)
        t = x_norm.abs().mean(dim=-1, keepdim=True).unsqueeze(-2)
        x_exp = x_norm.unsqueeze(-2)
        s = self.scales.view(-1, 1)
        a = self.alpha.abs() + 0.1

        def wave(phase, drift):
            pos = s * self.omega * (x_exp + drift.view(-1, 1) * t) + phase.view(-1, 1)
            return torch.exp(-a * torch.sin(pos).pow(2)).prod(dim=-2)

        L, M, R = wave(self.phase_l, self.drift_l), wave(self.phase_m, self.drift_m), wave(self.phase_r, self.drift_r)
        # XOR/AND combination
        w = torch.softmax(self.accum_weights, dim=0)
        xor_w = torch.sigmoid(self.xor_weight)
        lr = xor_w * (L + R - 2 * L * R).abs() + (1 - xor_w) * L * R
        gate = torch.sigmoid(self.gate_norm((w[0] * L + w[1] * M + w[2] * R) * (0.5 + 0.5 * lr)))
        x = x * gate
        # Twist out
        cos_t, sin_t = torch.cos(self.twist_out_angle), torch.sin(self.twist_out_angle)
        return x * cos_t + self.twist_out_proj(x) * sin_t


class MobiusConvBlock(nn.Module):
    def __init__(self, channels, layer_idx, total_layers, scale_range=(0.5, 2.5), reduction=0.5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.lens = MobiusLens(channels, layer_idx, total_layers, scale_range)
        # Attenuate a rotating third of the channels, cycling with depth.
        third = channels // 3
        which_third = layer_idx % 3
        mask = torch.ones(channels)
        mask[which_third * third : which_third * third + third + (channels % 3 if which_third == 2 else 0)] = reduction
        self.register_buffer('thirds_mask', mask.view(1, -1, 1, 1))
        self.residual_weight = nn.Parameter(torch.tensor(0.9))

    def forward(self, x):
        identity = x
        h = self.conv(x).permute(0, 2, 3, 1)
        h = self.lens(h).permute(0, 3, 1, 2) * self.thirds_mask
        rw = torch.sigmoid(self.residual_weight)
        return rw * identity + (1 - rw) * h


class MobiusNet(nn.Module):
    def __init__(self, in_chans=1, num_classes=1000, channels=(64, 128, 256),
                 depths=(2, 2, 2), scale_range=(0.5, 2.5), use_integrator=True):
        super().__init__()
        total_layers = sum(depths)
        channels = list(channels)
        self.stem = nn.Sequential(
            nn.Conv2d(in_chans, channels[0], 3, padding=1, bias=False),
            nn.BatchNorm2d(channels[0]),
        )
        self.stages = nn.ModuleList()
        self.downsamples = nn.ModuleList()
        layer_idx = 0
        for si, d in enumerate(depths):
            stage = nn.ModuleList([
                MobiusConvBlock(channels[si], layer_idx + i, total_layers, scale_range)
                for i in range(d)
            ])
            layer_idx += d
            self.stages.append(stage)
            if si < len(depths) - 1:
                self.downsamples.append(nn.Sequential(
                    nn.Conv2d(channels[si], channels[si + 1], 3, stride=2, padding=1, bias=False),
                    nn.BatchNorm2d(channels[si + 1]),
                ))
        self.integrator = nn.Sequential(
            nn.Conv2d(channels[-1], channels[-1], 3, padding=1, bias=False),
            nn.BatchNorm2d(channels[-1]),
            nn.GELU(),
        ) if use_integrator else nn.Identity()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(channels[-1], num_classes)

    def forward(self, x):
        x = self.stem(x)
        for i, stage in enumerate(self.stages):
            for block in stage:
                x = block(x)
            if i < len(self.downsamples):
                x = self.downsamples[i](x)
        x = self.integrator(x)
        return self.head(self.pool(x).flatten(1))
# ============================================================================
# LOAD AND RUN
# ============================================================================
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load model
model = MobiusNet(
    in_chans=1,
    num_classes=1000,
    channels=(64, 128, 256),
    depths=(2, 2, 2),
    scale_range=(0.5, 2.5),
    use_integrator=True,
).to(device)

weights_path = hf_hub_download(
    repo_id="AbstractPhil/mobiusnet-distillations",
    filename="checkpoints/mobius_tiny_s_imagenet_clip_vit_l14/20260111_000512/checkpoints/best_model.safetensors",
)
model.load_state_dict(load_file(weights_path))
model.eval()

# Inference on CLIP features
# Input: CLIP-ViT-L14 image features reshaped to [B, 1, 24, 32]
clip_features = torch.randn(1, 768)  # Replace with actual CLIP features
x = clip_features.view(1, 1, 24, 32).to(device)
with torch.no_grad():
    logits = model(x)
    pred = logits.argmax(dim=-1)
    probs = F.softmax(logits, dim=-1)
print(f"Predicted class: {pred.item()}, confidence: {probs[0, pred].item():.2%}")
```
### With Real CLIP Features

```python
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

# Load CLIP
clip_model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").to(device).eval()
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Extract features
image = Image.open("your_image.jpg").convert("RGB")
inputs = clip_processor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
    vision_out = clip_model.vision_model(**inputs)
    clip_features = clip_model.visual_projection(vision_out.pooler_output)

# Note: The model was trained on pre-extracted features with σ≈0.036.
# You may need to match that distribution for optimal results.
x = clip_features.view(1, 1, 24, 32)
with torch.no_grad():
    logits = model(x)
    pred = logits.argmax(dim=-1)
```
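One way to act on the distribution note above is to rescale the extracted features toward the reported σ before reshaping. This is an assumption about the preprocessing, not a confirmed recipe from the training pipeline:

```python
# Assumption: training-time features had std ≈ 0.036; rescale live features to match.
target_std = 0.036
clip_features = clip_features * (target_std / clip_features.std().clamp_min(1e-8))
x = clip_features.view(1, 1, 24, 32)
with torch.no_grad():
    logits = model(x)
print(f"Predicted class: {logits.argmax(dim=-1).item()}")
```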
## Training Details
- Dataset: ImageNet-1K via pre-extracted CLIP-ViT-L14 features
- Input: 768-dim CLIP features reshaped to [1, 24, 32]
- Epochs: 50
- Optimizer: AdamW (lr=1e-3, weight_decay=0.05)
- Scheduler: CosineAnnealingLR
- Batch Size: 256
- Parameters: 1.74M
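The training script itself is not included here. The sketch below only reconstructs the optimizer and scheduler setup from the hyperparameters above and assumes a plain cross-entropy objective on ImageNet labels over the pre-extracted features; `train_loader` is hypothetical, and the actual distillation objective may differ.

```python
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
criterion = nn.CrossEntropyLoss()

for epoch in range(50):
    for features, labels in train_loader:  # hypothetical loader of (CLIP features [B, 768], labels)
        x = features.view(-1, 1, 24, 32).to(device)
        loss = criterion(model(x), labels.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```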
## Architecture Details

```
Input: [1, 24, 32]  (768 = 24 × 32)
├── Stem: Conv2d(1→64) + BN
├── Stage 0: 2× MobiusConvBlock(64)   → [64, 24, 32]
├── Downsample: Conv2d(64→128, stride=2)
├── Stage 1: 2× MobiusConvBlock(128)  → [128, 12, 16]
├── Downsample: Conv2d(128→256, stride=2)
├── Stage 2: 2× MobiusConvBlock(256)  → [256, 6, 8]
├── Integrator: Conv2d + BN + GELU
├── AdaptiveAvgPool2d(1)
└── Linear(256→1000)
```
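A quick sanity check of the reported parameter count against the class definition above:

```python
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")  # expected ≈ 1.74M for this configuration
```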
## Key Insights
- Progressive Sharpening: α increases through depth (0.22 → 5.22), creating increasingly selective filters
- XOR Logic Emergence: Final block learns xor_weight=0.99, implementing near-pure XOR gating
- LayerNorm Amplification: Tiny wave differences (σ≈0.02) get rescaled to meaningful gate distributions
- Sparse Resonance: High α creates winner-take-all dynamics where only resonant channels activate
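The sharpening and winner-take-all behaviour can be observed directly by hooking each lens's `gate_norm` and logging the resulting gate statistics; the hook bookkeeping below is illustrative, not part of the repository code:

```python
gate_means = {}

def make_hook(name):
    def hook(module, inputs, output):
        # The gate is sigmoid(LayerNorm(...)); the hook sees the LayerNorm output.
        gate_means[name] = torch.sigmoid(output).mean().item()
    return hook

handles = [block.lens.gate_norm.register_forward_hook(make_hook(f"S{si}B{bi}"))
           for si, stage in enumerate(model.stages)
           for bi, block in enumerate(stage)]

with torch.no_grad():
    model(torch.randn(8, 1, 24, 32).to(device))
print(gate_means)  # mean gate activation per block; later blocks should be far sparser

for h in handles:
    h.remove()
```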
## Citation

```bibtex
@misc{mobiusnet2026,
  author = {AbstractPhil},
  title = {MobiusNet: Wave Interference Lenses for Geometric Deep Learning},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/AbstractPhil/mobiusnet-distillations}
}
```
## License

Apache 2.0
## Evaluation Results

- Top-1 Accuracy on ImageNet-1K (CLIP-ViT-L14 features): 80.8% (self-reported)

