Navigating the Latent Space Dynamics of Neural Models
- #neural networks
- #dynamical systems
- #latent space
- Neural networks transform high-dimensional data into compact, structured representations in a lower-dimensional latent space.
- The paper presents an interpretation of neural models as dynamical systems acting on the latent manifold.
- Autoencoder models implicitly define a latent vector field: decoding a latent point and re-encoding the result produces a displacement, and iterating this encode-decode map traces trajectories through latent space (a minimal sketch follows this list).
- Standard training procedures introduce inductive biases that cause attractor points to emerge in this vector field, latent states toward which the iterated map converges (see the second sketch below).
- The vector field can serve as a representation of the network itself, providing tools to analyze properties of both the model and its training data.
- Applications include distinguishing generalization from memorization regimes, extracting prior knowledge encoded in the attractors, and flagging out-of-distribution samples (see the third sketch below).
- The approach is validated on vision foundation models, demonstrating its effectiveness in real-world scenarios.
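To make the construction concrete, here is a minimal sketch of the latent vector field as the displacement induced by one decode-encode pass. The toy `encoder`/`decoder` modules and their dimensions are illustrative assumptions, not the paper's architectures.

```python
import torch
import torch.nn as nn

# Toy autoencoder standing in for a trained model; shapes are illustrative.
encoder = nn.Linear(784, 32)
decoder = nn.Linear(32, 784)

@torch.no_grad()
def latent_vector_field(z: torch.Tensor) -> torch.Tensor:
    # One pass through decoder then encoder; the displacement
    # E(D(z)) - z is one natural reading of the field at z.
    return encoder(decoder(z)) - z
```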
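Continuing the sketch, attractors can be located by following the field until the iterates stop moving. The step budget and tolerance below are arbitrary choices, not values from the paper.

```python
@torch.no_grad()
def find_attractor(z: torch.Tensor, max_steps: int = 1000, tol: float = 1e-5):
    # Follow the trajectory z <- E(D(z)); near an attractor the step size vanishes.
    for step in range(max_steps):
        z_next = encoder(decoder(z))
        if torch.norm(z_next - z) < tol:
            return z_next, step + 1  # approximately converged
        z = z_next
    return z, max_steps  # budget exhausted without convergence
```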
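One plausible way to turn the field into an out-of-distribution signal, again building on the sketch above, is to score a sample by how far a single pass displaces its latent code. This scoring rule is an assumption about how such a detector might work, not the paper's exact criterion.

```python
@torch.no_grad()
def ood_score(x: torch.Tensor) -> torch.Tensor:
    # A large first step under the field suggests x sits off the learned manifold.
    z = encoder(x)
    return torch.norm(latent_vector_field(z), dim=-1)

# Usage: higher scores hint at out-of-distribution inputs.
scores = ood_score(torch.randn(8, 784))
```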