Adversarial DeepFakes
Another episode in the saga of deepfakes: videos that make real people appear to say or do things they never said or did. In the fall of 2019, Facebook, Microsoft, and Google created datasets and challenges for automatically detecting deepfakes (see DT #23); in October 2020, Microsoft followed up by launching its Video Authenticator deepfake detection app (#48). Now, just a few months later, Neekhara et al. (2020) present adversarially perturbed deepfakes that handily beat those detectors: “We perform our evaluations on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that they can be easily bypassed in a practical attack scenario.” And the carousel goes ‘round.
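The attack follows the standard adversarial-example recipe: nudge each video frame with small, imperceptible perturbations until the detector’s real/fake classifier flips its prediction. As a rough sketch only (the detector interface, class indexing, and perturbation budget below are illustrative assumptions, and the paper itself uses stronger iterative and black-box attacks), a single targeted gradient step per frame looks something like this:

```python
import torch
import torch.nn.functional as F

def perturb_frame(detector, frame, eps=2 / 255):
    """One FGSM-style targeted step against a binary deepfake detector.

    Assumptions (not from the paper):
    - detector: returns logits of shape (N, 2) for a batch of frames,
      with index 0 = 'real' and index 1 = 'fake'.
    - frame: tensor of shape (N, 3, H, W) with values in [0, 1].
    - eps: L-infinity perturbation budget per pixel.
    """
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame)

    # Target the 'real' class so the detector stops flagging the frame.
    target = torch.zeros(frame.size(0), dtype=torch.long, device=frame.device)
    loss = F.cross_entropy(logits, target)
    loss.backward()

    # Step *against* the gradient to push the prediction toward 'real',
    # then clamp back to valid pixel range.
    adv_frame = frame - eps * frame.grad.sign()
    return adv_frame.clamp(0, 1).detach()
```

Repeating small steps like this over every frame (or estimating gradients via queries when the detector’s weights aren’t available) is, in essence, how a “detected” deepfake turns into an undetected one.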