Seeing What AI Sees: The Hidden Effects of Data Augmentation in Deepfake Detection
Deepfake detection models perform well on the datasets they are trained on but struggle to generalise to unseen ones, owing to the diversity of deepfake creation methods. Data augmentation offers a way to improve generalisability without requiring additional training data. This study uses Gradient-weighted Class Activation Mapping (Grad-CAM) to visualise how different augmentations change which image regions an EfficientNet-B4 model attends to. Results show that only Fancy PCA improved cross-dataset accuracy; the other augmentations reduced performance by shifting the model's attention away from key facial regions.
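
To make the visualisation pipeline concrete, the sketch below shows one way Grad-CAM heatmaps can be produced for an EfficientNet-B4 detector. It is a minimal illustration, not the paper's exact setup: the choice of `model.features[-1]` as the target layer, the two-class (real/fake) head, and the use of the torchvision EfficientNet-B4 backbone are assumptions made for this example. The Fancy PCA augmentation mentioned above would be applied at training time (for instance via an image-augmentation library); it is not shown here.

```python
# Minimal Grad-CAM sketch for an EfficientNet-B4 deepfake detector (illustrative only).
import torch
import torch.nn.functional as F
from torchvision import models

# Assumed setup: torchvision EfficientNet-B4 with a binary real/fake head.
model = models.efficientnet_b4(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 2)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block so its feature maps and gradients can be read.
target_layer = model.features[-1]  # assumed target layer, not necessarily the paper's choice
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a normalised heatmap (H x W, values in [0, 1]) for one preprocessed image."""
    logits = model(image.unsqueeze(0))
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]                      # (1, C, h, w) feature maps
    grads = gradients["value"]                       # (1, C, h, w) gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1))        # weighted sum over channels, then ReLU
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)
```

Overlaying such heatmaps on face crops from models trained with and without each augmentation is one way to compare how the attended regions shift, which is the kind of comparison the abstract describes.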
