We tested a super-resolution pre-filter for LPR OCR. It did nothing
- #optical-character-recognition
- #license-plate-recognition
- #super-resolution
- Neural super-resolution (SR) for license plate recognition (LPR) failed to improve OCR accuracy in production tests: both a custom 42K-parameter model and a pretrained 1.21M-parameter model improved exact match by 0.0% and character accuracy by only 0.4% on crops under 100px.
- The LPR system achieves 98.6% plate accuracy through multi-crop voting, where clear, high-resolution crops outweigh noisy, low-resolution ones; SR on small crops adds hallucinated, incorrect characters without changing voting outcomes.
- Effective SR requires large models (1.5M+ parameters), adversarial OCR-guided training, and character confusion penalties, but is often unnecessary if the OCR model is trained with multi-scale augmentation to handle low-resolution inputs directly.
- SR is beneficial only in niche cases, such as when a fixed commercial OCR system cannot be retrained; otherwise, investing in quality training data and multi-frame fusion is more effective.
- Research highlights that SR can improve extremely low-resolution crops (e.g., 20–50px) in specific scenarios, but benchmark designs often overstate its value by not comparing against better-trained OCR models.
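The exact-match and character-accuracy figures above can be reproduced with standard string metrics. A minimal sketch (function names are illustrative, not from the original system; character accuracy is computed from edit distance over total ground-truth characters):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (insert/delete/substitute, cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def plate_metrics(preds, truths):
    """Exact-match rate and character accuracy over paired plate reads."""
    exact = sum(p == t for p, t in zip(preds, truths)) / len(truths)
    errors = sum(levenshtein(p, t) for p, t in zip(preds, truths))
    chars = sum(len(t) for t in truths)
    return exact, 1.0 - errors / chars
```

Character accuracy defined this way penalizes hallucinated extra characters as insertions, which is exactly why SR that invents plausible-looking glyphs can score worse than a blurry but honest read.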
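The multi-crop voting behind the 98.6% plate accuracy can be sketched as a per-position vote weighted by read confidence. The weighting scheme below is an assumption for illustration; the production system's exact scheme is not given:

```python
from collections import defaultdict

def vote_plate(reads):
    """Fuse multiple OCR reads of one plate.

    reads: list of (text, confidence) pairs, e.g. one per video frame.
    Clear, high-confidence crops outweigh noisy, low-confidence ones,
    so a hallucinated character from an upscaled small crop rarely wins.
    """
    # Keep only reads of the confidence-weighted modal length so positions align.
    lengths = defaultdict(float)
    for text, conf in reads:
        lengths[len(text)] += conf
    n = max(lengths, key=lengths.get)
    aligned = [(t, c) for t, c in reads if len(t) == n]

    result = []
    for i in range(n):
        tally = defaultdict(float)
        for text, conf in aligned:
            tally[text[i]] += conf
        result.append(max(tally, key=tally.get))
    return "".join(result)
```

For example, `vote_plate([("ABC123", 0.9), ("A8C123", 0.3), ("ABC128", 0.8)])` recovers `"ABC123"`: the low-confidence `8` misreads lose each positional vote.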
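The multi-scale augmentation mentioned above, training the OCR model directly on degraded inputs instead of super-resolving them, can be sketched with plain NumPy. The scale range is an assumption; in practice this would run inside the data-loading pipeline, typically with proper area/bicubic resampling rather than the dependency-free nearest-neighbour used here:

```python
import numpy as np

def random_lowres(img: np.ndarray, rng: np.random.Generator,
                  min_scale: float = 0.25, max_scale: float = 1.0) -> np.ndarray:
    """Simulate a low-resolution plate crop: downsample by a random factor,
    then upsample back to the original size (nearest-neighbour both ways)."""
    h, w = img.shape[:2]
    s = rng.uniform(min_scale, max_scale)
    lh, lw = max(1, int(h * s)), max(1, int(w * s))
    # Nearest-neighbour resampling via integer index maps (no external deps).
    ys = np.arange(lh) * h // lh
    xs = np.arange(lw) * w // lw
    small = img[np.ix_(ys, xs)]
    ys2 = np.arange(h) * lh // h
    xs2 = np.arange(w) * lw // w
    return small[np.ix_(ys2, xs2)]
```

Applying this to every training crop exposes the OCR model to the same sub-100px inputs it will see in production, which is the cheaper alternative to an SR pre-filter that the summary argues for.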