Principled Synthetic Data Enables the First Scaling Laws for LLMs in Recommendation
Abstract
A novel layered framework generates high-quality synthetic data for large language models in recommender systems, demonstrating superior performance and predictable scaling laws compared to traditional methods.
Large Language Models (LLMs) represent a promising frontier for recommender systems, yet their development has been impeded by the absence of predictable scaling laws, which are crucial for guiding research and optimizing resource allocation. We hypothesize that this absence may be attributed to the inherent noise, bias, and incompleteness of the raw user interaction data used in prior continual pre-training (CPT) efforts. This paper introduces a novel, layered framework for generating high-quality synthetic data that circumvents these issues by creating a curated, pedagogical curriculum for the LLM. We provide direct evidence for the utility of our curriculum by showing that standard sequential models trained on our principled synthetic data significantly outperform models trained on real data in downstream ranking tasks (+130% Recall@100 for SASRec), demonstrating its superiority for learning generalizable user preference patterns. Building on this, we empirically demonstrate, for the first time, robust power-law scaling for an LLM continually pre-trained on our high-quality, recommendation-specific data. Our experiments reveal consistent and predictable perplexity reduction across multiple synthetic data modalities. These findings establish a foundational methodology for reliably scaling LLM capabilities in the recommendation domain, thereby shifting the research focus from mitigating data deficiencies to leveraging high-quality, structured information.
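To make the scaling-law claim concrete, below is a minimal sketch of fitting the saturating power-law form commonly used in such analyses, L(D) = A / D^α + E, to perplexity measured at increasing CPT data scales. This is an illustration under stated assumptions: the functional form and every numeric value are placeholders, not measurements from the paper.

```python
# Illustrative sketch: fit a saturating power law L(D) = A / D**alpha + E
# to perplexity at increasing continual pre-training (CPT) data scales.
# All numbers below are placeholders, not results from the paper.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(d, a, alpha, e):
    """Loss as a function of data size d (tokens): A / d^alpha plus irreducible loss E."""
    return a / d ** alpha + e

data_sizes = np.array([1e8, 3e8, 1e9, 3e9, 1e10])      # hypothetical token counts
perplexities = np.array([12.1, 10.4, 9.0, 8.1, 7.5])   # hypothetical perplexities
losses = np.log(perplexities)  # cross-entropy loss = ln(perplexity)

# Bounds keep all parameters positive so the power term stays well defined.
params, _ = curve_fit(
    scaling_law, data_sizes, losses,
    p0=[100.0, 0.3, 1.8],
    bounds=([1e-3, 0.01, 0.0], [1e6, 1.0, 3.0]),
)
a, alpha, e = params
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss E = {e:.3f}")

# A predictable law lets one extrapolate the return on additional synthetic data:
print(f"predicted perplexity at 1e11 tokens: {np.exp(scaling_law(1e11, *params)):.2f}")
```

Operationally, "predictable power-law scaling" means a fit like this has small residuals and a stable exponent α across the data modalities being compared.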
Community
We present the first empirically validated scaling laws for LLMs in recommendation, enabled by a principled layered synthetic data framework that transforms noisy, biased user interaction logs into a high-quality pedagogical curriculum. Standard models trained on our synthetic data outperform those trained on real data by over 130% on Recall@100.
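For readers unfamiliar with the metric, here is a minimal sketch of Recall@K, the quantity behind the reported +130% Recall@100. The item IDs are made up for illustration.

```python
def recall_at_k(ranked_items: list[int], relevant_items: set[int], k: int) -> float:
    """Fraction of a user's held-out relevant items that appear in the top-k ranking."""
    if not relevant_items:
        return 0.0
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    return hits / len(relevant_items)

# Example: the model ranks items 7 and 42 into its top 3,
# while the user's held-out relevant set is {7, 42, 99}.
print(recall_at_k([7, 42, 13], {7, 42, 99}, k=3))  # 2/3 ~= 0.667
```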
This is a great use for synthetic data.
I'm wondering which of the bias types from Table 1 is the most difficult to manage?
I would suggest that Data Incompleteness & Sparsity is the most difficult to handle: there are both technical challenges and policy challenges (regulations on what data you can use). For the other biases, there are techniques that can mitigate them, e.g., position calibration (see the sketch below).
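To illustrate the position-calibration idea mentioned above, here is a hedged sketch of one common instantiation, inverse-propensity weighting (IPW) of clicks. The `propensity` values are assumptions for demonstration (in practice they would be estimated, e.g., from a position-swap experiment), not values from the paper.

```python
import numpy as np

# Assumed examination propensities per rank position (illustrative values only).
propensity = np.array([1.00, 0.62, 0.41, 0.30, 0.24])

def ipw_click_weight(position: int) -> float:
    """Weight a click at `position` (0-indexed) by 1 / P(item examined at position)."""
    return 1.0 / propensity[position]

# A click at rank 4 counts ~4x more than a click at rank 0,
# offsetting the position bias in the logged training signal.
print(ipw_click_weight(0), ipw_click_weight(4))  # 1.0  4.166...
```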
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- OpenOneRec Technical Report (2025)
- ReaSeq: Unleashing World Knowledge via Reasoning for Sequential Modeling (2025)
- LLM-I2I: Boost Your Small Item2Item Recommendation Model with Large Language Model (2025)
- Selective LLM-Guided Regularization for Enhancing Recommendation Models (2025)
- Reasoning-guided Collaborative Filtering with Language Models for Explainable Recommendation (2026)
- Unleashing the Native Recommendation Potential: LLM-Based Generative Recommendation via Structured Term Identifiers (2026)
- GRAB: An LLM-Inspired Sequence-First Click-Through Rate Prediction Modeling Paradigm (2026)