FAIR social network integrity
Facebook is increasingly talking publicly about the work it does to keep its platform safe, probably at least partially in response to the constant stream of news about its failures in this area (from Myanmar to Plandemic). This does mean we get to learn a lot about the systems that Facebook AI Research (FAIR) is building to stop viral hoaxes before they spread too widely. Examples include the recent inside look into their AI Red Team (DT #47); their Web-Enabled Simulations (WES, #38) and Temporal Interaction Embeddings (TIES, #34) for detecting bots on Facebook; and their DeepFake detection dataset (#23). Now, Halevy et al. (2020) have published an extensive survey on their work preserving integrity in online social networks, in which they “highlight the techniques that have been proven useful in practice and that deserve additional attention from the academic community.” It covers many of the aforementioned topics, plus a lot more.