Hasty Briefs (beta)

Weaponizing image scaling against production AI systems

3 days ago
  • #Prompt Injection
  • #Image Scaling Attacks
  • #AI Security
  • Image scaling attacks can exploit production AI systems such as the Google Gemini CLI by hiding malicious prompts in images that appear harmless at full resolution but reveal the injected text once downscaled.
  • Image scaling attacks have been demonstrated on multiple platforms including Vertex AI Studio, Gemini’s web and API interfaces, Google Assistant, and Genspark, exploiting the mismatch between user perception and model inputs.
  • Each downscaling algorithm (nearest neighbor, bilinear, bicubic) requires a distinct attack approach, and implementation differences across libraries (Pillow, PyTorch, OpenCV, TensorFlow) further change how an attack must be crafted.
  • Anamorpher, an open-source tool, helps craft images for scaling attacks by exploiting predictable mathematical relationships in downscaling algorithms like bicubic interpolation.
  • Mitigation strategies include avoiding image downscaling altogether, showing users a preview of the exact input the model receives, and applying secure design patterns so that text extracted from images cannot trigger unauthorized tool calls.
  • Future research should explore image scaling attacks on mobile/edge devices, voice AI, and advanced techniques like semantic prompt injection and polyglots.
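The core trick behind these attacks can be sketched in a few lines. Assuming nearest-neighbour downscaling by an integer factor (the simplest case; real attacks also target bilinear and bicubic), the resampler keeps only one source pixel per block, so a payload written into exactly those pixels is barely visible at full resolution but completely dominates the downscaled image the model sees. All names, sizes, and the centre-aligned sampling convention below are illustrative, not taken from Anamorpher:

```python
# Minimal sketch (pure Python, no imaging library): nearest-neighbour
# downscaling by integer factor K keeps one source pixel per K x K block,
# so an attacker who knows the sampling offset hides a payload there.

K = 4            # downscale factor (assumed)
LOW = 8          # side length of the image the model actually sees
HIGH = LOW * K   # side length of the image the user sees

def downscale_nearest(img, k):
    """Centre-aligned nearest neighbour: output (r, c) <- input (k*r + k//2, k*c + k//2)."""
    off = k // 2
    return [[img[k * r + off][k * c + off] for c in range(len(img[0]) // k)]
            for r in range(len(img) // k)]

# Decoy: an all-white image (255) that looks harmless at full resolution.
attack = [[255] * HIGH for _ in range(HIGH)]

# Payload: a checkerboard standing in for hidden prompt text.
payload = [[255 if (r + c) % 2 == 0 else 0 for c in range(LOW)] for r in range(LOW)]

# Overwrite only the pixels the downscaler will sample -- 1 in K*K of them.
off = K // 2
for r in range(LOW):
    for c in range(LOW):
        attack[K * r + off][K * c + off] = payload[r][c]

changed = sum(attack[r][c] != 255 for r in range(HIGH) for c in range(HIGH))
print(f"{changed}/{HIGH * HIGH} pixels differ from the decoy")  # sparse at full res

assert downscale_nearest(attack, K) == payload  # the model sees only the payload
```

At full resolution only 32 of 1,024 pixels differ from the white decoy, yet the downscaled result is pure payload, which is exactly the perception/input mismatch the bullets above describe.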
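Bicubic resampling is attackable for a related reason: without antialiasing (e.g. OpenCV-style INTER_CUBIC on a downscale), each output pixel is a fixed weighted sum of a handful of source pixels, so the attacker can solve that linear relationship for source values that force any target output. The weights below come from the standard Keys cubic kernel (a = -0.5); the factor-4 setup, sampling convention, and helper names are an illustrative sketch rather than Anamorpher's actual method:

```python
# Why bicubic downscaling has "predictable mathematical relationships":
# with factor K = 4 and centre-aligned sampling x = (j + 0.5)*K - 0.5,
# the 4 taps for each output pixel sit at distances 1.5, 0.5, 0.5, 1.5,
# giving fixed weights the attacker can invert.

def keys(t, a=-0.5):
    """Keys cubic interpolation kernel."""
    t = abs(t)
    if t <= 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * (t**3 - 5 * t**2 + 8 * t - 4)
    return 0.0

w_outer, w_inner = keys(1.5), keys(0.5)           # -0.0625 and 0.5625
assert abs(2 * (w_outer + w_inner) - 1.0) < 1e-9  # weights sum to 1

def downscale_block(p):
    """Bicubic output for one 4-pixel source block (clamped to [0, 255])."""
    v = w_outer * (p[0] + p[3]) + w_inner * (p[1] + p[2])
    return min(255, max(0, round(v)))

# Attack: keep the low-weight outer pixels at the white decoy value (255)
# and solve w_inner*(p1 + p2) = target - w_outer*(255 + 255) for the rest.
target = 0                                        # we want a black output pixel
inner = round((target - w_outer * 510) / (2 * w_inner))
attack = [255, inner, inner, 255]

print(attack, "->", downscale_block(attack))
```

Because the two inner weights dominate, two mid-grey pixels (value 28) surrounded by white are enough to drive the downscaled pixel to black, and each library's choice of kernel, alignment, and antialiasing changes these weights, which is why attacks must be tailored per algorithm and per library.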