Dynamically Typed

#17: GANPaint Studio, collaborative sketching with AI, and a survey of ML testing

Hey everyone, welcome to Dynamically Typed #17. Today’s issue is a bit shorter than usual because I’ve had a busy two weeks working on my MSc thesis—I’m already halfway through my internship at Adyen! Nevertheless, I found a good few interesting links for the newsletter.

In productized AI news, MIT’s CSAIL showed off an image editing tool called GANPaint Studio, and a US police body cam manufacturer banned the use of facial recognition on its cameras. On the research side, Microsoft introduced MASS, a new state-of-the-art pre-training technique for sequence-to-sequence models, and Zhang et al. published a comprehensive study of machine learning testing techniques. Finally, I found a fun study from Stanford that allows you to take turns with an AI to collaboratively make a drawing.

Productized Artificial Intelligence 🔌

GANPaint Studio (MIT CSAIL).


GANPaint Studio is a tool by MIT CSAIL to modify images of certain categories using a generative adversarial network. Using the software, you can take an input image of something like a kitchen or a church and paint over an area you want to change. You can then tell it to draw extra chairs or windows, or different rooftops or trees, and GANPaint Studio will do its best to realistically fill the areas you marked with your desired objects. More:

Police body camera company Axon has banned the use of facial recognition technology on its cameras. This is the less shiny side of productized artificial intelligence: biased AI systems are being deployed in sensitive areas like policing, where they are likely to reinforce existing societal inequalities and (racial, gender, sexual orientation, …) discrimination. As Ben Evans wrote earlier this year:

[The] scenario for AI bias causing harm that is easiest to imagine is probably not one that comes from leading researchers at a major institution. Rather, it is a third tier technology contractor or software vendor that bolts together something out of open source components, libraries and tools that it doesn’t really understand and then sells it to an unsophisticated buyer that sees ‘AI’ on the sticker and doesn’t ask the right questions, gives it to minimum-wage employees and tells them to do whatever the ‘AI’ says.

Indeed, it’s not hard to imagine Evans’ scenario happening in the police body cam setting: nothing is stopping a budget-constrained police department from trying to use body cams to automatically find criminals, without realizing that current commercially-available facial recognition software is much more likely to, for example, misrecognize a person of color than a white person.

That’s why it’s refreshing to see a body cam manufacturer stepping up to take responsibility for the problem. Charlie Warzel for the New York Times:

According to [Axon’s independent] ethics board report, in early conversations about facial recognition, Axon initially argued that it “could not dictate to customers how products were used, nor its customers’ policies, and that it could not feasibly patrol misuse of its product.” That’s Big Tech’s version of “guns don’t kill people, people kill people.” And it’s a view that’s very widely held across the industry.

Mr. Friedman hopes that Axon’s pledge will force other vendors to think about where the new technology might be headed and how it could impact the most vulnerable. “We want them to remember that just because you can build it, doesn’t mean you should.”

It’d be great to see technology companies that sell facial recognition APIs, like Amazon and Microsoft, build these principles into their user agreements instead of waiting for governments to put regulations in place. Read more about Axon’s decision in Warzel’s piece here: A Major Police Body Cam Company Just Banned Facial Recognition.

Machine Learning Technology 🎛

Figures from Machine Learning Testing: Survey, Landscapes and Horizons. (Zhang et al.)


Zhang et al. published a comprehensive survey of machine learning testing research on arXiv. The paper is a collaboration between CREST (University College London), FAIR (Facebook), Kyushu University, and Nanyang Technological University, and it covers “any activity aimed at detecting differences between existing and required behaviours of machine learning systems.” This includes:

It looks like a very useful reference resource for both researchers and industry practitioners. Read the paper on arXiv: Machine Learning Testing: Survey, Landscapes and Horizons.

Microsoft Asia researchers have introduced Masked Sequence to Sequence Pre-training (MASS), which outperforms previous state-of-the-art methods on tasks like unsupervised machine translation, low-resource machine translation, abstractive summarization, and conversational response generation. Modern systems for these natural language tasks use an encoder-attention-decoder architecture, and with previous techniques like BERT and GPT, the encoder and decoder had to be pre-trained separately. MASS instead masks a contiguous span of the input sentence and trains the decoder to reconstruct exactly that span, which pre-trains the two sides jointly and significantly boosts performance on the aforementioned tasks. More:
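The core masking idea is easy to illustrate. Here’s a minimal toy sketch (plain Python, no real model; the function name and parameters are my own invention, not from the MASS paper): mask a contiguous span of the encoder input, and make that hidden span the decoder’s prediction target.

```python
import random

def mass_mask(tokens, mask_token="[MASK]", span_frac=0.5, seed=0):
    """Toy illustration of MASS-style masking: hide one contiguous
    span of the input. The encoder sees the masked sentence; the
    decoder is trained to predict the hidden span. (Real MASS masks
    roughly half the tokens and trains a Transformer end to end.)"""
    rng = random.Random(seed)
    k = max(1, int(len(tokens) * span_frac))    # length of masked span
    start = rng.randrange(len(tokens) - k + 1)  # random span position
    encoder_input = tokens[:start] + [mask_token] * k + tokens[start + k:]
    decoder_target = tokens[start:start + k]    # what the decoder must produce
    return encoder_input, decoder_target

enc, dec = mass_mask(["the", "cat", "sat", "on", "the", "mat"])
print(enc)  # sentence with a contiguous [MASK] span
print(dec)  # the hidden tokens the decoder has to reconstruct
```

Because the target is a contiguous span rather than isolated tokens (as in BERT), the decoder gets practice generating fluent sequences conditioned on the encoder, which is exactly the skill sequence-to-sequence tasks need.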

Quick ML resource links ⚡️ (see all 25)

Cool Things ✨

Drawing an octopus together; peach lines are me, yellow lines are the AI. (Stanford Department of Psychology)


Let’s draw together! is a fun online study where you collaborate with an AI to draw different things. The site tells you something to draw, like an elephant or an octopus, and then invites you to make the first line. You then take turns drawing the next line until the sketch is complete, after which the AI checks your collaborative work by trying to guess what the drawing was.

The site is a part of “a study being performed by cognitive scientists in the Stanford Department of Psychology,” but I haven’t been able to find out much more about the research online. You can try it out here: Let’s draw together!

Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get a new issue in your inbox every second Sunday.

If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 😁