Hasty Briefs


Will Scaling Solve Robotics?

10 months ago
  • #foundation-models
  • #machine-learning
  • #robotics
  • CoRL 2023 was the largest yet with over 900 attendees, 11 workshops, and nearly 200 accepted papers.
  • A central debate: Can training large neural networks on vast datasets solve robotics?
  • Foundation models' success in CV/NLP suggests potential for robotics, but challenges remain.
  • Arguments for scaling in robotics:
    - Success in CV/NLP with large models and datasets.
    - Early evidence from the RT-X, RT-2, and Diffusion Policy papers.
    - Leveraging progress in data, compute, and foundation models.
    - Discovering a simpler manifold of practical robotics tasks.
    - Large models may enable 'common sense' reasoning for robotics.
  • Arguments against scaling in robotics:
    - Lack of large-scale robotics data compared to CV/NLP.
    - Diversity in robot embodiments complicates data collection.
    - High variance in environments robots must operate in.
    - High cost and energy consumption of training large models.
    - The '99.X% problem'—real-world applications require near-perfect reliability.
    - Long-horizon tasks compound errors over time.
    - Self-driving car companies' struggles with scaling approaches.
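A quick back-of-the-envelope sketch (my own illustration, not from the original post) of why long-horizon tasks compound errors: if each step of a task succeeds independently with some probability, the whole task succeeds only with that probability raised to the number of steps, so even 99% per-step reliability decays quickly.

```python
def task_success_rate(step_success: float, num_steps: int) -> float:
    """Probability that a task of num_steps independent steps succeeds,
    given each step succeeds with probability step_success."""
    return step_success ** num_steps

# A policy that is 99% reliable per step completes a 100-step task
# only about 37% of the time.
print(round(task_success_rate(0.99, 100), 3))  # → 0.366
```

This is why the '99.X% problem' matters: pushing per-step reliability from 99% to 99.9% raises the 100-step success rate from roughly 37% to roughly 90%, and real deployments need the tail of that curve.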
  • Miscellaneous related arguments:
    - Learning-based approaches can be deployed robustly despite lack of guarantees.
    - Human-in-the-loop systems as a practical deployment strategy.
    - Using simulators and existing vision/language data to mitigate data scarcity.
    - Combining classical and learning-based approaches for better results.
  • Key takeaways:
    - Pursue scaling in robotics but explore other directions too.
    - Focus on real-world mobile manipulation and user-friendly systems.
    - Report negative results to avoid repeated efforts.
    - Encourage innovative, out-of-the-box thinking for new solutions.