Hasty Briefs

Introduction to Multi-Armed Bandits

8 hours ago
  • #Bandit Algorithms
  • #Machine Learning
  • #Decision Making
  • Introduces multi-armed bandits as a framework for decision-making under uncertainty.
  • Structured as a textbook: self-contained chapters with exercises and reviews of further developments.
  • Covers IID rewards, adversarial rewards, contextual bandits, and connections to economics (a minimal sketch of the IID setting appears below).
  • Includes standalone surveys on specific topics like 'bandits with similarity information'.
  • Appendix provides background on concentration inequalities and KL-divergence (see the small KL computation below).
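
As a concrete taste of the IID-rewards setting the book opens with, here is a minimal sketch of UCB1, one of the classic algorithms covered there, run on Bernoulli arms. The arm means, horizon, and function name are made up for illustration; this is a toy simulation, not the book's reference implementation.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms with the given true means.

    Returns total reward collected over `horizon` rounds.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k   # pulls per arm
    sums = [0.0] * k   # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialize
        else:
            # UCB index: empirical mean + confidence radius sqrt(2 ln t / n)
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

if __name__ == "__main__":
    means = [0.3, 0.5, 0.7]  # hypothetical arm means
    reward = ucb1(means, horizon=10_000)
    print(f"average reward: {reward / 10_000:.3f}")  # approaches max(means) = 0.7
```

The confidence radius shrinks as an arm is pulled more often, so the algorithm gradually shifts from exploring all arms to exploiting the best one.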
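
On the appendix material: the KL-divergence between two Bernoulli distributions is the quantity that typically drives bandit lower-bound arguments. A tiny sketch of that computation, with a function name of our choosing:

```python
import math

def kl_bernoulli(p, q):
    """KL divergence KL(Ber(p) || Ber(q)) in nats."""
    eps = 1e-12  # clamp to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

print(kl_bernoulli(0.5, 0.6))  # ≈ 0.0204 nats: close means are hard to tell apart
```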