Hasty Briefs (beta)


Questioning Representational Optimism in Deep Learning

a year ago
  • #Representation Learning
  • #AI
  • #Neural Networks
  • The paper challenges the assumption that better performance in AI systems implies better internal representations.
  • It compares neural networks evolved through open-ended search with networks trained via stochastic gradient descent (SGD) on the same task: generating a single image.
  • SGD-trained networks exhibit fractured entangled representation (FER), while evolved networks approach unified factored representation (UFR).
  • FER may degrade core model capacities such as generalization, creativity, and continual learning.
  • The paper provides code to load genomes, train SGD networks, and visualize internal representations.
  • Instructions for setting up the environment and running the project are included.
  • The repository contains assets, data, and source code for the project.
  • Contact information and citation details are provided for further research.
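The setup the bullets describe — SGD-training a network to generate a single image and then inspecting its internal representation — can be sketched in a few lines. This is a hedged illustration, not the paper's code: the tiny coordinate-MLP, the synthetic 8×8 target image, and the choice of visualizing one hidden unit's activation map are all assumptions made here for a self-contained example.

```python
# Minimal sketch (NOT the paper's repository code): train a tiny 2-16-1 MLP
# with plain SGD to map (x, y) coordinates to pixel values of one target
# image, then read out a hidden unit's activation map over the image grid --
# the kind of internal representation the paper visualizes.
import numpy as np

rng = np.random.default_rng(0)

# Target: an 8x8 grayscale "image" (a simple diagonal gradient, an assumption).
n = 8
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (64, 2) pixel coords
target = ((xs + ys) / 2.0).ravel()[:, None]           # (64, 1) pixel values

# Randomly initialized weights for a tanh MLP.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    h = np.tanh(coords @ W1 + b1)     # hidden-layer activations
    out = h @ W2 + b2                 # predicted pixel values
    err = out - target
    # Backpropagation for mean-squared error, full-batch SGD.
    gW2 = h.T @ err / len(coords); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tanh derivative
    gW1 = coords.T @ dh / len(coords); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(coords @ W1 + b1) @ W2 + b2 - target) ** 2).mean())
# "Internal representation": activation map of hidden unit 0 across the image.
unit0_map = np.tanh(coords @ W1 + b1)[:, 0].reshape(n, n)
print(f"final MSE: {mse:.4f}")
```

Plotting `unit0_map` for every hidden unit (e.g. with `matplotlib.pyplot.imshow`) gives one activation image per unit; the paper's FER-vs-UFR contrast is about whether such maps decompose the image into clean, reusable factors or into fragmented, entangled pieces.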