Hasty Briefs (beta)


Universal Reasoning Model (53.8% pass@1 on ARC-AGI 1 and 16.0% on ARC-AGI 2)

4 months ago
  • #Universal Transformers
  • #Reasoning Models
  • #Artificial Intelligence
  • Universal transformers (UTs) are widely used for complex reasoning tasks like ARC-AGI and Sudoku.
  • Performance improvements in UTs come from their recurrent inductive bias and strong nonlinear components, not from elaborate architectural designs.
  • The Universal Reasoning Model (URM) enhances the UT with short convolutions and truncated backpropagation through the recurrent depth.
  • URM achieves state-of-the-art results: 53.8% pass@1 on ARC-AGI 1 and 16.0% pass@1 on ARC-AGI 2.
  • Code for the research is available online.
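To make the two ingredients concrete: a universal transformer reuses one shared block across recurrent steps, and truncated backpropagation limits gradient flow to the last few of those steps. The sketch below is a minimal NumPy illustration of that loop with a short causal convolution, not the authors' implementation; all shapes, names, and the 3-tap kernel are hypothetical.

```python
import numpy as np

def short_conv(x, kernel):
    # Short causal convolution over the sequence axis: each position
    # mixes only a few preceding positions (here, kernel.shape[0] taps).
    k = kernel.shape[0]
    seq, dim = x.shape
    padded = np.concatenate([np.zeros((k - 1, dim)), x], axis=0)
    out = np.zeros_like(x)
    for t in range(seq):
        # weight the last k positions by the per-tap kernel and sum
        out[t] = (padded[t:t + k] * kernel[:, None]).sum(axis=0)
    return out

def shared_block(x, W, kernel):
    # One weight-shared block: short conv, linear map, ReLU, residual.
    h = short_conv(x, kernel)
    return x + np.maximum(h @ W, 0.0)

def universal_forward(x, W, kernel, steps=8, bptt_window=2):
    # Universal transformer recurrence: the SAME block (same W, same
    # kernel) is applied `steps` times. With truncated backpropagation,
    # gradients would flow only through the last `bptt_window` steps.
    for step in range(steps):
        if step == steps - bptt_window:
            pass  # an autograd framework would stop-gradient x here
        x = shared_block(x, W, kernel)
    return x

rng = np.random.default_rng(0)
seq, dim = 6, 4
x = rng.normal(size=(seq, dim))
W = rng.normal(size=(dim, dim)) * 0.1
kernel = np.array([0.5, 0.3, 0.2])  # hypothetical 3-tap short conv
y = universal_forward(x, W, kernel)
print(y.shape)  # → (6, 4)
```

The point of the sketch is the weight sharing: depth comes from repeating one block, which is the recurrent inductive bias the summary credits for the performance gains.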