Hasty Briefsbeta


The Lottery Ticket Hypothesis: finding sparse, trainable neural networks with 90% fewer parameters

4 months ago
  • #neural networks
  • #pruning
  • #machine learning
  • Neural network pruning can reduce parameter counts by over 90% without compromising accuracy.
  • Pruned architectures are difficult to train from scratch, which has limited pruning to reducing inference cost rather than training cost.
  • The 'lottery ticket hypothesis' suggests that dense networks contain subnetworks ('winning tickets') that can train effectively when isolated.
  • Winning tickets have initial weights that make training particularly effective.
  • An iterative pruning algorithm identifies winning tickets that are consistently less than 10-20% of the size of the original networks.
  • Winning tickets learn faster and achieve higher test accuracy than the original networks.
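The procedure the bullets describe, iterative magnitude pruning with a rewind to the original initialization, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `train_fn` is a hypothetical stand-in for real SGD training, and the prune fraction and round count are arbitrary.

```python
import numpy as np

def prune_lowest(weights, mask, prune_frac):
    """Zero out the lowest-magnitude fraction of the still-active weights."""
    active = np.abs(weights[mask])
    k = int(prune_frac * active.size)
    if k == 0:
        return mask
    threshold = np.sort(active)[k - 1]
    # Keep only surviving weights strictly above the cutoff magnitude.
    return mask & (np.abs(weights) > threshold)

def find_winning_ticket(init_weights, train_fn, prune_frac=0.2, rounds=5):
    """Iterative magnitude pruning: train, prune, rewind, repeat.

    train_fn(weights, mask) is a hypothetical training routine that
    returns trained weights of the same shape.
    """
    mask = np.ones_like(init_weights, dtype=bool)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask, mask)   # train masked network
        mask = prune_lowest(trained, mask, prune_frac)  # drop smallest weights
    # The "winning ticket": surviving connections reset to their
    # ORIGINAL initial values, not their trained values.
    return init_weights * mask, mask

# Toy demonstration with a fake training step (adds small masked noise).
rng = np.random.default_rng(0)
init = rng.normal(size=100)
fake_train = lambda w, m: w + 0.01 * rng.normal(size=w.shape) * m
ticket, mask = find_winning_ticket(init, fake_train)
print(f"{mask.sum()} of {mask.size} weights survive")
```

The key design point, per the hypothesis, is the final rewind: the sparse structure alone is not enough; the surviving weights must return to their original initialization for the subnetwork to train effectively.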