The Partnership on AI to Benefit People and Society (PAI) is an international coalition of organizations with the mission “to shape best practices, research, and public dialogue about AI’s benefits for people and society.” Its
100+ member organizations cover a broad range of interests and include leading AI research labs (DeepMind, OpenAI); several universities (MIT, Cornell); most big tech companies (Google, Apple, Facebook, Amazon, Microsoft); news media (NYT, BBC); and humanitarian organizations (Unicef, ACLU).
Systems like this (and Amsterdam’s AI registry, for example) are a clear sign that productized AI is quickly starting to mature as a field, and that lots of good work is being done to manage its impact. Most importantly, I hope these projects will help us have more sensible discussions about regulating AI. Benedict Evans’ essays
Notes on AI Bias and
Face recognition and AI ethics are excellent reads on this; he compares calls to “regulate AI” to wanting to regulate databases — it’s not the right level of abstraction, and we should be thinking about specific policies to address specific problems instead. A dataset of categorized AI incidents, managed by a broad coalition of organizations, sounds like a great step in this direction.