Foundation Model for Personalized Recommendation
a year ago
- #machine-learning
- #recommendation-systems
- #foundation-models
- Netflix's personalization stack is built from many specialized models, which drives up maintenance costs and makes it hard to transfer innovations between use cases.
- The foundation model for recommendation centralizes member preference learning, enhancing accessibility and utility across different models.
- The model learns from members' full interaction histories and content information at large scale, and distributes those learnings to downstream models through shared weights or embeddings.
- Inspired by large language models (LLMs), the foundation model shifts from model-centric to data-centric approaches, leveraging semi-supervised learning.
- Tokenizing user interactions helps define meaningful events, balancing granular data and sequence compression to retain critical details.
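As a minimal sketch of that trade-off, consecutive raw events on the same title and action can be collapsed into a single token while preserving aggregate duration. The event schema and field names here are illustrative, not Netflix's actual format:

```python
from dataclasses import dataclass

@dataclass
class Event:
    title_id: int
    action: str       # e.g. "play", "browse"
    duration_s: int

def tokenize(events):
    """Collapse consecutive events with the same title and action into one
    token, summing durations -- a simple stand-in for the sequence
    compression described above."""
    tokens = []
    for e in events:
        if tokens and tokens[-1].title_id == e.title_id and tokens[-1].action == e.action:
            # Merge into the previous token, keeping the aggregate duration.
            tokens[-1] = Event(e.title_id, e.action, tokens[-1].duration_s + e.duration_s)
        else:
            tokens.append(Event(e.title_id, e.action, e.duration_s))
    return tokens
```

Real tokenization would weigh which attributes may be dropped during merging; this sketch only shows the compression direction.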
- Sparse attention mechanisms and sliding window sampling extend the context window and ensure exposure to different segments of user history.
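Sliding-window sampling can be sketched as drawing a random contiguous span from a long history, so different training passes see different segments (a hedged illustration; the actual sampling scheme is not specified in these notes):

```python
import random

def sliding_window_sample(history, window, rng):
    """Return one contiguous window of `window` tokens from a user's history.
    Different calls (epochs) expose different segments of the sequence."""
    if len(history) <= window:
        return history  # short histories fit entirely in the context
    start = rng.randrange(len(history) - window + 1)
    return history[start:start + window]
```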
- Interaction tokens contain heterogeneous details, including action attributes and content information, organized into request-time and post-action features.
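The request-time vs. post-action split can be pictured as a token schema where some fields are known before the action and others only afterward. The concrete fields below are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionToken:
    # Request-time features: available before the action occurs.
    timestamp: int
    device: str
    locale: str
    # Post-action features: only known after the interaction completes.
    title_id: Optional[int] = None
    action_type: Optional[str] = None
    play_duration_s: Optional[int] = None

    def request_time_view(self):
        """Only the features that can be fed to the model at serving time."""
        return {"timestamp": self.timestamp, "device": self.device,
                "locale": self.locale}
```

Separating the two groups matters because only request-time features are legitimate inputs when predicting the action itself.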
- The model uses an autoregressive next-token prediction objective, with modifications like multi-token prediction and auxiliary objectives to improve accuracy.
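A toy version of the multi-token objective: average the cross-entropy of several heads, each forecasting the token k steps ahead. This is a numerical sketch of the loss shape, not the production objective:

```python
import numpy as np

def multi_token_loss(logits, targets):
    """Mean cross-entropy over k prediction heads.
    logits:  (k, vocab) -- one row of scores per lookahead head.
    targets: (k,)       -- the true token id for each lookahead step."""
    logits = logits - logits.max(axis=-1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()
```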
- Unique challenges include entity cold start, addressed through incremental training and through inference on unseen entities using their metadata.
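One common metadata fallback (assumed here, not confirmed by these notes) is to approximate an unseen title's embedding as the mean of its metadata tag embeddings:

```python
import numpy as np

def cold_start_embedding(tags, tag_table, dim):
    """Build an embedding for an unseen entity from its metadata tags
    (genre, language, ...) by averaging known tag embeddings."""
    vecs = [tag_table[t] for t in tags if t in tag_table]
    if not vecs:
        return np.zeros(dim)  # no usable metadata: fall back to a zero vector
    return np.mean(vecs, axis=0)
```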
- Downstream applications include using the model directly as a predictor, consuming its embeddings, and fine-tuning it on application-specific data.
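As one example of consuming embeddings downstream, a member embedding from the foundation model can rank catalog items by cosine similarity (an illustrative retrieval sketch, not Netflix's serving path):

```python
import numpy as np

def top_k_similar(query, item_matrix, k=2):
    """Return indices of the k items most cosine-similar to `query`.
    query: (d,) member embedding; item_matrix: (n, d) item embeddings."""
    q = query / np.linalg.norm(query)
    items = item_matrix / np.linalg.norm(item_matrix, axis=1, keepdims=True)
    scores = items @ q
    return np.argsort(-scores)[:k]
```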
- Scaling the foundation model involves robust evaluation, efficient training algorithms, and substantial computing resources, following principles from LLMs.
- The foundation model represents a unified, data-centric system, improving recommendation quality and addressing challenges like cold start and presentation bias.