Hasty Briefs (beta)

What are we missing out on when we think Transformer is unreasonable in biology?

10 months ago
  • #AI
  • #Transformers
  • #Neuroscience
  • The article argues against comparing macroscopic computational models (like Transformers) directly with microscopic biological units (like neurons), calling it a 'scale alignment' error.
  • Transformers simulate the brain's 'global workspace' function through self-attention mechanisms, not by mimicking individual neurons.
  • At the microscopic level, neurons correspond to transistors in GPUs, both performing localized computations.
  • Modern GPUs, with their massively parallel architecture, physically echo the brain's organizational principles.
  • The debate over 'biological plausibility' should focus on functional simulation rather than form imitation.
  • The article concludes that the future of AI should focus on simulating the brain's functions, not its biological form.
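The 'global workspace' point above hinges on self-attention letting every position read from every other position in a single step. A minimal NumPy sketch of single-head scaled dot-product attention can make that broadcast structure concrete; the shapes and random weights here are illustrative only, not anything specified in the article:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    Every token computes a relevance score against every other token,
    so information at any position can flow to any other position in
    one step -- the global, all-to-all routing the summary likens to
    a 'global workspace'.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq) pairwise relevance
    weights = softmax(scores, axis=-1)   # each row is a distribution over tokens
    return weights @ V, weights

# Toy example: 4 tokens, embedding dim 8 (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq = rng.normal(size=(8, 8))
Wk = rng.normal(size=(8, 8))
Wv = rng.normal(size=(8, 8))

out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)       # (4, 8): one updated vector per token
print(weights.sum(1))  # each row of attention weights sums to 1
```

Note that nothing in this computation mimics a neuron's spiking or synaptic dynamics; the mechanism is defined purely at the level of information routing, which is exactly the function-over-form distinction the article draws.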