How Neural Super Sampling Works: Architecture, Training, and Inference

  • #neural-rendering
  • #mobile-gaming
  • #AI-upscaling
  • Arm introduced Neural Super Sampling (NSS), an AI-powered upscaling solution for mobile gaming, set to ship in Arm GPUs in 2026.
  • NSS overcomes the limitations of traditional Temporal Super Sampling (TSS) by replacing handcrafted heuristics with a trained neural model, improving the handling of edge cases such as ghosting and disocclusion artifacts.
  • The NSS model is trained on sequences of 540p frames paired with 1080p ground-truth images, using inputs such as color, motion vectors, and depth (a minimal dataset sketch follows the list).
  • Training employs a spatiotemporal loss function that balances spatial fidelity against temporal consistency, implemented in PyTorch with the Adam optimizer and a cosine-annealing learning-rate schedule (a sketch of this recipe follows the list).
  • NSS uses a four-level UNet backbone with skip connections, producing per-pixel outputs for color, temporal stability, and disocclusion masks (a minimal UNet sketch follows the list).
  • Key feedback mechanisms feed the temporal-stability and disocclusion signals back into the next frame's reconstruction, maintaining stability without handcrafted rules (a per-pixel blending sketch follows the list).
  • Pre- and post-processing stages run on the GPU, and inference executes through Vulkan ML extensions, keeping the pipeline efficient on mobile hardware.
  • Quality metrics such as PSNR, SSIM, and FLIP are used to evaluate NSS, showing improvements in stability and detail retention over traditional methods (a reference PSNR implementation follows the list).
  • Early simulations suggest NSS will be more efficient than Arm ASR, fitting within mobile hardware constraints with a target of ≤4ms per frame.
  • Developers can explore NSS through the Arm Neural Graphics Development Kit, with sample code and network structure available on the Arm Developer Hub.
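
A minimal sketch of the kind of training pairs described above. The file-free synthetic tensors, sequence length, and channel layout are illustrative assumptions, not Arm's actual data pipeline; only the resolutions (540p inputs, 1080p ground truth) and the input types (color, motion vectors, depth) come from the summary:

```python
import torch
from torch.utils.data import Dataset

class UpscalingPairs(Dataset):
    """Yields (inputs, target): 540p color/motion/depth vs. 1080p ground truth."""

    def __init__(self, num_clips: int = 8, seq_len: int = 4):
        self.num_clips = num_clips
        self.seq_len = seq_len

    def __len__(self) -> int:
        return self.num_clips

    def __getitem__(self, idx):
        # Synthetic stand-ins at the stated resolutions (960x540 in, 1920x1080 out).
        inputs = {
            "color":  torch.rand(self.seq_len, 3, 540, 960),   # low-res RGB
            "motion": torch.rand(self.seq_len, 2, 540, 960),   # per-pixel motion vectors
            "depth":  torch.rand(self.seq_len, 1, 540, 960),   # depth buffer
        }
        target = torch.rand(self.seq_len, 3, 1080, 1920)       # ground-truth RGB
        return inputs, target
```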
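The summary names three training ingredients: a spatiotemporal loss, Adam, and cosine annealing. Below is one plausible shape for that recipe in PyTorch; the L1 terms, the temporal-difference penalty, the 0.25 weight, and the stand-in network are assumptions, not Arm's published loss:

```python
import torch
import torch.nn.functional as F

def spatiotemporal_loss(pred, target, prev_pred, prev_target, temporal_weight=0.25):
    spatial = F.l1_loss(pred, target)                              # spatial fidelity
    temporal = F.l1_loss(pred - prev_pred, target - prev_target)   # flicker penalty
    return spatial + temporal_weight * temporal

model = torch.nn.Conv2d(3, 3, 3, padding=1)                 # stand-in network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)

frames = torch.rand(5, 1, 3, 64, 64)     # toy clip: T x N x C x H x W
targets = frames.flip(-1)                # arbitrary toy targets
for epoch in range(10):
    for t in range(1, frames.shape[0]):  # pair each frame with its predecessor
        pred, prev_pred = model(frames[t]), model(frames[t - 1])
        loss = spatiotemporal_loss(pred, targets[t], prev_pred, targets[t - 1])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                     # cosine-annealed learning rate per epoch
```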
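A four-level UNet with skip connections and three per-pixel output heads could look roughly like the sketch below. Channel widths, activations, and head definitions are illustrative guesses; the actual network structure is published on the Arm Developer Hub:

```python
import torch
import torch.nn as nn

def block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Four encoder levels, skip connections, three per-pixel output heads."""

    def __init__(self, cin: int = 6, widths=(16, 32, 64, 128)):
        super().__init__()
        self.down = nn.ModuleList()
        c = cin
        for w in widths:                              # four levels down
            self.down.append(block(c, w))
            c = w
        self.pool = nn.MaxPool2d(2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        for w in reversed(widths[:-1]):               # three levels back up
            self.up.append(nn.ConvTranspose2d(c, w, 2, stride=2))
            self.dec.append(block(2 * w, w))          # 2*w: upsampled + skip
            c = w
        self.color = nn.Conv2d(c, 3, 1)               # per-pixel RGB
        self.stability = nn.Conv2d(c, 1, 1)           # temporal-stability signal
        self.disocclusion = nn.Conv2d(c, 1, 1)        # disocclusion mask

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.down):
            x = enc(x)
            if i < len(self.down) - 1:                # no pooling at the bottleneck
                skips.append(x)
                x = self.pool(x)
        for up, dec in zip(self.up, self.dec):
            x = dec(torch.cat([up(x), skips.pop()], dim=1))
        return (self.color(x),
                torch.sigmoid(self.stability(x)),
                torch.sigmoid(self.disocclusion(x)))

# e.g., 6 input channels = color (3) + motion (2) + depth (1) concatenated
rgb, stability, disocc = TinyUNet()(torch.rand(1, 6, 64, 64))
```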
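One way the predicted stability and disocclusion signals could replace handcrafted history heuristics is a per-pixel blend between the warped previous output and the current prediction. This particular blend rule is an assumption for illustration, not Arm's documented formulation:

```python
import torch

def temporal_blend(current: torch.Tensor, warped_history: torch.Tensor,
                   stability: torch.Tensor, disocclusion: torch.Tensor) -> torch.Tensor:
    """Per-pixel blend: trust history only where it is stable and still visible."""
    reuse = stability * (1.0 - disocclusion)   # both signals assumed in [0, 1]
    return reuse * warped_history + (1.0 - reuse) * current
```

Where disocclusion is high (newly revealed geometry), the history weight collapses to zero and the network's current prediction takes over, which is exactly the failure mode the summary says TSS handles poorly.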
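Of the three metrics, PSNR is simple enough to state inline; SSIM and FLIP (NVIDIA's perceptual difference metric) require fuller implementations and are best taken from existing libraries. A minimal reference for images scaled to [0, 1]:

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```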