Adventures in Neural Rendering
3 months ago
- #neural-networks
- #machine-learning
- #rendering
- Neural networks are increasingly used in rendering for tasks like antialiasing, upscaling, texture compression, material representation, and indirect lighting.
- Multilayer perceptrons (MLPs) can encode rendering data; an architecture is described by its per-layer node counts (e.g., 3-3-3-1: three inputs, two hidden layers of three nodes each, and one output).
- MLPs consist of input, hidden, and output layers; each node computes a weighted sum of its inputs plus a bias and passes the result through an activation function such as ReLU or LeakyReLU.
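A minimal sketch of that forward pass for a 3-3-3-1 network, using NumPy; the weight values here are random placeholders, not anything from the article:

```python
import numpy as np

# Hypothetical weights for a 3-3-3-1 MLP: 3 inputs, two hidden
# layers of 3 nodes, and 1 output. Shapes are (fan_in, fan_out).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)
W3, b3 = rng.normal(size=(3, 1)), np.zeros(1)

def leaky_relu(x, alpha=0.01):
    # LeakyReLU keeps a small slope for negative inputs.
    return np.where(x > 0, x, alpha * x)

def forward(x):
    # Each layer: weighted sum of inputs plus bias, then activation.
    h1 = leaky_relu(x @ W1 + b1)
    h2 = leaky_relu(h1 @ W2 + b2)
    return h2 @ W3 + b3  # linear output layer

y = forward(np.array([0.5, -0.2, 0.7]))
```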
- Training an MLP alternates forward propagation, computing a loss against the target signal, and backpropagation to adjust weights and biases via gradient descent.
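The loop above can be sketched end to end with manual backpropagation. This toy fits a 1-3-1 network to y = x² (an illustrative target I chose, not one from the article), showing the forward pass, MSE loss, chain-rule gradients, and the weight update:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy dataset: fit y = x^2 on [0, 1].
X = rng.uniform(0, 1, size=(64, 1))
Y = X ** 2

# 1-3-1 MLP: one hidden layer keeps the backprop short.
W1, b1 = rng.normal(scale=0.5, size=(1, 3)), np.zeros(3)
W2, b2 = rng.normal(scale=0.5, size=(3, 1)), np.zeros(1)
lr = 0.1

for step in range(2000):
    # Forward propagation (LeakyReLU so hidden units cannot die).
    z1 = X @ W1 + b1
    h1 = np.where(z1 > 0, z1, 0.1 * z1)
    pred = h1 @ W2 + b2
    # Loss: mean squared error against the target signal.
    err = pred - Y
    loss = np.mean(err ** 2)
    if step == 0:
        loss0 = loss
    # Backpropagation: chain rule from loss back to each parameter.
    d_pred = 2.0 * err / len(X)
    dW2 = h1.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h1 = d_pred @ W2.T
    d_z1 = d_h1 * np.where(z1 > 0, 1.0, 0.1)  # LeakyReLU gradient
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)
    # Gradient-descent update of weights and biases.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```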
- MLPs can encode signals like radiance and irradiance, sometimes outperforming Spherical Harmonics (SH) in quality with similar storage requirements.
- Smaller MLPs (e.g., 3-3-3-1) can approximate radiance well but struggle with irradiance, requiring larger networks for comparable accuracy.
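The "similar storage" comparison comes down to parameter counts. A rough back-of-the-envelope check (my arithmetic, not figures from the article): a 3-3-3-1 MLP holds 28 weights and biases, while order-2 SH needs 9 coefficients per color channel, i.e. 27 for RGB:

```python
def mlp_param_count(layers):
    # Weights plus biases for a fully connected MLP,
    # e.g. [3, 3, 3, 1] for a 3-3-3-1 network.
    return sum(i * o + o for i, o in zip(layers, layers[1:]))

mlp = mlp_param_count([3, 3, 3, 1])  # parameters in a 3-3-3-1 MLP
sh_l2 = (2 + 1) ** 2                 # order-2 SH coefficients, one channel
sh_rgb = 3 * sh_l2                   # all three color channels
```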
- MLPs were tested for depth encoding and for caching RTAO (ray-traced ambient occlusion), showing potential but at high inference cost.
- Specular BRDF encoding proved challenging, but reparameterizing the input domain (e.g., with Rusinkiewicz half-angle coordinates) improved results for smaller MLPs.
- MLPs are promising for signal encoding but require careful tuning of parameters, layers, and activation functions, with high training and inference costs.
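The Rusinkiewicz idea mentioned above is to feed the network half-angle coordinates instead of raw view/light directions, which aligns specular lobes and makes the signal smoother for a small MLP. A simplified sketch (it omits the fourth coordinate, the difference azimuth φ_d, and assumes unit directions in a local shading frame with the normal at +z):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def rusinkiewicz_angles(wi, wo, n=np.array([0.0, 0.0, 1.0])):
    # Map an (incoming, outgoing) direction pair to half-angle
    # coordinates: elevation/azimuth of the half vector, plus the
    # angle between wi and the half vector.
    h = normalize(wi + wo)                       # half vector
    theta_h = np.arccos(np.clip(h @ n, -1, 1))   # elevation of h
    theta_d = np.arccos(np.clip(wi @ h, -1, 1))  # angle between wi and h
    phi_h = np.arctan2(h[1], h[0])               # azimuth of h
    return theta_h, theta_d, phi_h

# Sanity check: a mirror configuration straight along the normal
# should land at the origin of the half-angle parameterization.
th, td, ph = rusinkiewicz_angles(np.array([0.0, 0.0, 1.0]),
                                 np.array([0.0, 0.0, 1.0]))
```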