- BackSlash: Rate Constrained Optimized Training of Large Language Models The rapid advancement of large language models (LLMs) has driven extensive research into parameter compression after training has been completed, yet compression during the training phase remains largely unexplored. In this work, we introduce Rate-Constrained Training (BackSlash), a novel training-time compression approach based on rate-distortion optimization (RDO). BackSlash enables a flexible trade-off between model accuracy and complexity, significantly reducing parameter redundancy while preserving performance. Experiments across various architectures and tasks demonstrate that BackSlash can reduce memory usage by 60%-90% without accuracy loss and provides significant compression gains over post-training compression. Moreover, BackSlash proves highly versatile: it enhances generalization with small Lagrange multipliers, improves model robustness to pruning (maintaining accuracy even at 80% pruning rates), and enables network simplification for accelerated inference on edge devices. 3 authors · Apr 23, 2025
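  As a rough illustration of the rate-distortion idea in this abstract, the sketch below adds a rate penalty, weighted by a Lagrange multiplier, to an ordinary task loss. It assumes the rate is modeled as the negative log-likelihood of the weights under a zero-mean generalized Gaussian prior; the names `gg_rate` and `rate_constrained_loss` and the `alpha`/`beta`/`lam` values are illustrative assumptions, not the paper's implementation.

  ```python
  # Minimal sketch of an RDO-style training objective in the spirit of
  # BackSlash (hypothetical helper names; not the authors' code).
  import torch

  def gg_rate(params, alpha=0.05, beta=0.5):
      # Code length (in nats, up to an additive constant) of all parameters
      # under a generalized Gaussian prior proportional to exp(-(|w|/alpha)**beta).
      return sum((p.abs() / alpha).pow(beta).sum() for p in params)

  def rate_constrained_loss(task_loss, model, lam=1e-6):
      # Lagrangian trade-off: distortion (task loss) + lambda * rate.
      # Larger lam pushes weights toward cheaper-to-code (sparser) values.
      return task_loss + lam * gg_rate(model.parameters())

  # Usage inside a standard training step (model, criterion, x, y assumed):
  #   loss = rate_constrained_loss(criterion(model(x), y), model, lam=1e-6)
  #   loss.backward(); optimizer.step()
  ```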
- It Takes a Good Model to Train a Good Model: Generalized Gaussian Priors for Optimized LLMs Despite rapid advancements in the research and deployment of large language models (LLMs), the statistical distribution of model parameters, as well as their influence on initialization, training dynamics, and downstream efficiency, has received surprisingly little attention. A recent work introduced BackSlash, a training-time compression algorithm, and first demonstrated that pre-trained LLM parameters are better modeled by generalized Gaussian distributions (GGDs) than by the commonly assumed Gaussian. By optimizing GG priors during training, BackSlash can reduce parameters by up to 90% with minimal performance loss. Building on this foundational insight, we propose a unified, end-to-end framework for LLM optimization based on the GG model. Our contributions are threefold: (1) a GG-based initialization scheme that aligns with the statistical structure of trained models, resulting in faster convergence and improved accuracy; (2) DeepShape, a post-training regularization method that reshapes weight distributions to match a GG profile, improving compressibility with minimal performance degradation; and (3) RF8, a compact and hardware-efficient 8-bit floating-point format designed for BackSlash training with GG-distributed initialization, enabling low-cost inference without compromising accuracy. Experiments across diverse model architectures show that our framework consistently yields smaller and faster models that match or outperform standard training baselines. By grounding LLM development in principled statistical modeling, this work forges a new path toward efficient, scalable, and hardware-aware AI systems. The code is available on our project page: https://huggingface.co/spaces/shifeng3711/gg_prior. 4 authors · May 31, 2025
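  As a rough sketch of what GG-based initialization could look like, the snippet below samples weights from a generalized Gaussian GGD(alpha, beta) using the standard Gamma transform: if g ~ Gamma(1/beta, 1) and s is a random sign, then s * alpha * g**(1/beta) has density proportional to exp(-(|x|/alpha)**beta). The function name `gg_init_` and the `alpha`/`beta` values are assumptions for illustration; the paper's actual scheme may differ.

  ```python
  # Minimal sketch of generalized Gaussian weight initialization
  # (hypothetical helper; not the authors' code).
  import torch

  def gg_init_(tensor, alpha=0.02, beta=0.5):
      # In-place GGD(alpha, beta) initialization via the Gamma trick.
      gamma = torch.distributions.Gamma(1.0 / beta, 1.0)
      g = gamma.sample(tensor.shape)
      sign = torch.randint(0, 2, tensor.shape).to(torch.float32) * 2 - 1
      tensor.data.copy_(sign * alpha * g.pow(1.0 / beta))
      return tensor

  # Usage: apply to every linear layer before training, e.g.
  #   for m in model.modules():
  #       if isinstance(m, torch.nn.Linear):
  #           gg_init_(m.weight)
  ```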
- Degree-similar graphs and cospectral graphs Let G be a graph with adjacency matrix A(G) and degree matrix D(G), and let L_μ(G) := A(G) - μD(G). Two graphs G_1 and G_2 are called degree-similar if there exists an invertible matrix M such that M^{-1}A(G_1)M = A(G_2) and M^{-1}D(G_1)M = D(G_2). In this paper, we address three problems concerning degree-similar graphs proposed by Godsil and Sun. First, we present a new characterization of degree-similar graphs using degree partitions, from which we derive methods and examples for constructing cospectral graphs and degree-similar graphs. Second, we construct infinitely many pairs of non-degree-similar trees G_1 and G_2 such that tI - L_μ(G_1) and tI - L_μ(G_2) have the same Smith normal form over Q(μ)[t], which gives a negative answer to a problem posed by Godsil and Sun. Third, we establish several invariants of degree-similar graphs and obtain results on unicyclic graphs that are determined by degree-similarity. Lastly, we prove that for a strongly regular graph G and any two edges e and f of G, the graphs G∖e and G∖f have the same μ-polynomial, i.e., det(tI - L_μ(G∖e)) = det(tI - L_μ(G∖f)), where G∖e denotes the graph obtained from G by deleting the edge e; this enables the construction of pairs of non-isomorphic graphs with the same μ-polynomial. 4 authors · Sep 1, 2025
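  The strongly-regular-graph claim in this abstract is easy to check computationally. The sketch below computes the μ-polynomial det(tI - L_μ(G∖e)) symbolically for two edge deletions from the Petersen graph, which is strongly regular with parameters (10, 3, 0, 1); the helper name `mu_polynomial` is an assumption for illustration.

  ```python
  # Minimal sketch verifying det(tI - L_mu(G\e)) = det(tI - L_mu(G\f))
  # for a strongly regular graph (hypothetical helper; not the authors' code).
  import networkx as nx
  import sympy as sp

  t, mu = sp.symbols('t mu')

  def mu_polynomial(G):
      # Return det(tI - L_mu(G)) with L_mu(G) = A(G) - mu*D(G), expanded.
      nodes = sorted(G.nodes())
      n = len(nodes)
      A = sp.Matrix(nx.to_numpy_array(G, nodelist=nodes).astype(int))
      D = sp.diag(*[G.degree(v) for v in nodes])
      return sp.expand((t * sp.eye(n) - (A - mu * D)).det())

  G = nx.petersen_graph()           # strongly regular: (10, 3, 0, 1)
  e, f = list(G.edges())[:2]        # two distinct edges

  Ge, Gf = G.copy(), G.copy()
  Ge.remove_edge(*e)
  Gf.remove_edge(*f)

  # The Petersen graph is edge-transitive, so equality is immediate here;
  # the theorem guarantees it for every strongly regular graph, including
  # ones where G\e and G\f are non-isomorphic.
  assert sp.simplify(mu_polynomial(Ge) - mu_polynomial(Gf)) == 0
  ```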