Learning Lens Blur Fields
- #computational photography
- #optical blur
- #neural representation
- Introduction of the lens blur field, a high-dimensional neural representation for modeling optical blur in complex camera systems.
- The lens blur field uses a multilayer perceptron (MLP) to capture how the 2D point spread function (PSF) varies with image-plane location, focus setting, and, optionally, scene depth (a minimal query sketch follows this list).
- The model accounts for defocus, diffraction, aberrations, and sensor-level effects such as pixel color filters and micro-lenses.
- A novel dataset of 5D blur fields for smartphone cameras and various lenses is introduced.
- Demonstration that lens blur fields can reveal differences in optical behavior between individual devices of the same smartphone model.
- Simple capture setup involving a monitor and a camera stand, with a pipeline that captures focal stacks, solves a non-blind deconvolution problem to recover per-location PSFs, and trains an MLP on them (see the fitting sketch after this list).
- Applications include distinguishing devices by their blur signatures, image deblurring, and rendering realistic, spatially varying blur (see the rendering sketch after this list).
- Potential for improved device-specific image restoration and more realistic blur rendering.
- Upcoming release of the first dataset of 5D and 6D lens blur fields for smartphones and SLR lenses.
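
A minimal sketch of how such a blur field could be represented and queried, assuming a coordinate MLP that maps a 5D input (image-plane position x, y, PSF offset u, v, and focus setting) to a non-negative PSF sample. The architecture, positional encoding, and all names below are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical coordinate-MLP blur field: maps (x, y, u, v, focus) -> PSF value.
# Width, depth, and encoding are illustrative only.
import torch
import torch.nn as nn


def positional_encoding(coords: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """Encode each coordinate with sin/cos at multiple frequencies (NeRF-style)."""
    freqs = 2.0 ** torch.arange(num_freqs, device=coords.device) * torch.pi
    angles = coords[..., None] * freqs            # (..., dims, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., dims * 2 * num_freqs)


class BlurFieldMLP(nn.Module):
    """Continuous blur field: query a PSF sample at any 5D coordinate."""

    def __init__(self, in_dims: int = 5, hidden: int = 256, num_freqs: int = 6):
        super().__init__()
        self.num_freqs = num_freqs
        enc_dims = in_dims * 2 * num_freqs
        self.net = nn.Sequential(
            nn.Linear(enc_dims, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # PSF values are non-negative
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 5) = (x, y, u, v, focus), each normalized to [-1, 1]
        return self.net(positional_encoding(coords, self.num_freqs)).squeeze(-1)


# Query the 21x21 PSF at one image location and focus setting
# (the model is untrained here; this only demonstrates the query interface).
model = BlurFieldMLP()
u, v = torch.meshgrid(torch.linspace(-1, 1, 21), torch.linspace(-1, 1, 21), indexing="ij")
xyf = torch.tensor([0.3, -0.2, 0.5]).expand(21 * 21, 3)   # fixed (x, y, focus)
coords = torch.cat([xyf[:, :2], u.reshape(-1, 1), v.reshape(-1, 1), xyf[:, 2:]], dim=1)
psf = model(coords).reshape(21, 21)
psf = psf / psf.sum()                                      # normalize to unit mass
```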
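
A hedged sketch of the fitting stage, assuming the PSF samples recovered by non-blind deconvolution are already available as (coordinate, value) training pairs; the optimizer, loss, and hyperparameters are assumptions, not the paper's recipe.

```python
# Hypothetical fitting loop: regress the blur-field MLP onto PSF samples that a
# non-blind deconvolution step recovered from the focal-stack captures.
# Dataset layout, loss, and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F


def fit_blur_field(model, coords, psf_values, steps=2000, lr=1e-3, batch=8192):
    """coords: (M, 5) sample coordinates; psf_values: (M,) measured PSF samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(steps):
        idx = torch.randint(0, coords.shape[0], (batch,))
        loss = F.mse_loss(model(coords[idx]), psf_values[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % 500 == 0:
            print(f"step {step:4d}  mse {loss.item():.3e}")
    return model
```

With the model above, `fit_blur_field(BlurFieldMLP(), coords, psf_values)` would fit the field to the measured samples; sampling coordinates across the sensor and focus settings is what makes the representation 5D (or 6D when depth is included).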
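
A sketch of using a fitted blur field to render spatially varying blur, approximating the field with one PSF per image tile and convolving each tile with it. A per-pixel PSF would be more faithful; the tiling, function name, and sizes are simplifications for illustration.

```python
# Hypothetical renderer: query one PSF per tile from the blur field and apply it
# to that tile. Note that conv2d computes cross-correlation (convolution with a
# flipped kernel); the distinction is ignored in this sketch.
import torch
import torch.nn.functional as F


def render_blur(model, image, focus, tile=64, psf_size=21):
    """image: (3, H, W) in [0, 1]; focus: scalar in [-1, 1]; returns blurred (3, H, W)."""
    _, H, W = image.shape
    out = torch.zeros_like(image)
    pad = psf_size // 2
    padded = F.pad(image[None], (pad, pad, pad, pad), mode="reflect")[0]
    uu, vv = torch.meshgrid(torch.linspace(-1, 1, psf_size),
                            torch.linspace(-1, 1, psf_size), indexing="ij")
    for y0 in range(0, H, tile):
        for x0 in range(0, W, tile):
            # Normalized image-plane coordinate of the tile center.
            cy = 2 * (y0 + tile / 2) / H - 1
            cx = 2 * (x0 + tile / 2) / W - 1
            coords = torch.stack([
                torch.full_like(uu, cx), torch.full_like(uu, cy),
                uu, vv, torch.full_like(uu, focus)], dim=-1).reshape(-1, 5)
            with torch.no_grad():
                psf = model(coords).reshape(1, 1, psf_size, psf_size)
            psf = psf / psf.sum()
            region = padded[:, y0:y0 + tile + 2 * pad, x0:x0 + tile + 2 * pad]
            out[:, y0:y0 + tile, x0:x0 + tile] = F.conv2d(region[:, None], psf)[:, 0]
    return out
```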