Dynamically Typed

#8: Should OpenAI open-source their impressive new language model?

Hey everyone, exciting news! I’m now on the Pro plan of Revue, the service that powers this newsletter, which means that Dynamically Typed now has a snazzy new color scheme and a custom domain: dynamicallytyped.com.

In today’s issue, I’m focusing on GPT-2, OpenAI’s impressive language model that has stirred quite some controversy in the research community over the past few weeks. This deep-dive-on-one-topic format is new to the newsletter, so let me know if you like it! In other news, Microsoft released an update to its HoloLens AR headset, Google’s DeepMind used deep learning to optimize a wind farm, and Lyft’s self-driving car team shared their map making principles.

OpenAI’s Controversial Language Model 🦄

A sample from the GPT-2 language model. A human wrote the italicized prompt and a computer wrote the rest. (OpenAI)

Just over two weeks ago, researchers at OpenAI released GPT-2, a language model trained in an unsupervised way on 40GB of text from the Internet. A language model is any algorithm that takes some words as input (“the coffee is …”) and tries to predict the most likely next word as output (“… hot”); it is one of the most fundamental tools in Natural Language Processing (NLP) research. OpenAI’s GPT-2 model can do some pretty cool stuff:

On the latter two tasks, GPT-2 doesn’t achieve state-of-the-art performance, but it’s still cool to see how the researchers hacked their language model into performing them at all.
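
To make the “predict the next word” idea concrete, here’s a minimal sketch of about the simplest language model possible: a bigram counter over a made-up toy corpus. It’s nothing like GPT-2’s large neural network trained on 40GB of text, but the input/output contract is the same: some words in, most likely next word out.

```python
# A toy "language model": count word bigrams in a tiny made-up corpus and
# predict the most likely next word. GPT-2 does something conceptually
# similar, but with a large neural network trained on 40GB of text.
from collections import Counter, defaultdict

corpus = "the coffee is hot . the tea is hot . the coffee is strong ."

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
tokens = corpus.split()
for current, nxt in zip(tokens, tokens[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    candidates = next_word_counts[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("is"))      # -> "hot" (seen twice, vs. "strong" once)
print(predict_next("coffee"))  # -> "is"
```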

Now, for the controversy: OpenAI is not releasing the trained model, or the training data, to the world, in a departure from their previous open-source research approach. From their blog post:

Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.

It’s easy to imagine how GPT-2 could be used to automatically generate large volumes of realistic-sounding fake news or inflammatory text. Indeed, The Verge tested exactly that:

[W]hen given a prompt like “Jews control the media,” GPT-2 wrote: “They control the universities. They control the world economy. How is this done? Through various mechanisms that are well documented in the book The Jews in Power by Joseph Goebbels, the Hitler Youth and other key members of the Nazi Party.”

This is definitely a worrying sample, so I can appreciate OpenAI’s intention behind deciding not to release a model that anyone could use to do this.

However, over the past few weeks, the research community has criticized this decision on two fronts: (1) open-sourcing the model would make its capabilities more transparent to the research community and the world, and (2) the model can probably be replicated fairly easily by a state or organization that has sufficient resources.

Stanford’s Hugh Zhang has the best take in his open letter published on The Gradient. He compares GPT-2 to Photoshop, which also scared people when it first came out because it enables anyone who’s willing to put a few hours into learning the software to make realistic-looking fake images. Zhang argues that precisely because everyone knows that images can so easily be manipulated (photoshopped) by anyone, “society has emerged relatively unscathed” compared to eras in history when only those with enough power could manipulate images and most people believed that those images were real (like Stalin’s political propaganda). An open source release would enable all sorts of art projects and stunts with GPT-2, so that knowledge of its capabilities would spread to much broader groups of the population that did not see the original announcement. Keeping the model secret, however, means that only states and organizations can replicate and use it, possibly for nefarious purposes.

I agree with Zhang’s calls to open source GPT-2, and I hope that OpenAI reverses their decision. What do you all think?

Read more about GPT-2 here:

Productized Artificial Intelligence 🔌

Microsoft’s HoloLens 2. (Vjeran Pavic, The Verge)

Microsoft has unveiled the second version of its augmented reality headset: the HoloLens 2. The biggest upgrade is that the glasses’ field of view (how much of what you see in front of you can be covered in holograms) has doubled in size, addressing the biggest complaint about the first version. Another problem was the awkward “Air Tap”-based interaction model; I got to experience this once in a demo, and it definitely felt clumsy. (Shooting ray guns at aliens bursting out of walls definitely made up for it, though.) Microsoft has addressed this too, using an AI model that tracks up to 25 points in each of your hands and knows when you’re trying to grab and drag an object or press a button. The headset can now also take the spatial map from the built-in Kinect sensor and semantically understand whether those thousands of 3D points are part of a wall or a human.
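
As an aside, keypoint-based hand tracking is fun to reason about even in toy form. The sketch below is purely hypothetical: the keypoint indices, the 2 cm threshold, and the pinch rule are all my own inventions and have nothing to do with Microsoft’s actual model or APIs, but it shows the basic idea of turning tracked hand points into a gesture decision.

```python
# Hypothetical illustration of keypoint-based gesture detection: given 25
# 3D hand keypoints (as the HoloLens 2 reportedly tracks per hand), decide
# whether the user is pinching. Indices and threshold are invented for
# this sketch; Microsoft's actual model and APIs are not shown here.
import numpy as np

THUMB_TIP, INDEX_TIP = 4, 8       # assumed keypoint indices
PINCH_THRESHOLD_M = 0.02          # 2 cm, an arbitrary choice

def is_pinching(keypoints: np.ndarray) -> bool:
    """keypoints: (25, 3) array of x, y, z positions in meters."""
    distance = np.linalg.norm(keypoints[THUMB_TIP] - keypoints[INDEX_TIP])
    return bool(distance < PINCH_THRESHOLD_M)

# Example: a random hand pose with thumb and index tips forced close together.
hand = np.random.rand(25, 3)
hand[INDEX_TIP] = hand[THUMB_TIP] + 0.01
print(is_pinching(hand))  # True
```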

Although the HoloLens 2 is exclusively aimed at businesses, I’m excited to see the technology trickle down to consumer products in the coming few years. (Apple, for one, is a consumer-facing company that’s also investing heavily in AR.)

Google’s DeepMind used deep learning to optimize a wind farm and made its energy 20% more valuable. Power sources that can commit to a schedule are worth more to an energy grid, and that has always been a drawback for renewables like wind because, well, the wind is rather unpredictable. DeepMind trained a model to predict the farm’s power output 36 hours in advance, so the wind farm could make hourly energy commitments a full day ahead. This increased the value of the wind energy by 20% compared to a baseline of no time-based commitments. I’d love to work on a project like this one day.
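
DeepMind hasn’t published the system in implementable detail, so the sketch below is only an illustration of the general loop: train on historical weather forecasts and power output, then turn tomorrow’s hourly weather forecast into hourly power commitments. The model choice, features, and numbers are all made up.

```python
# Illustrative sketch only: forecast a wind farm's hourly power output a day
# ahead from weather forecasts, then "commit" to those hourly numbers.
# The model, features, and data here are invented; DeepMind's actual system
# is not public at this level of detail.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Fake historical data: forecast wind speed (m/s) -> actual power output (MW).
wind_speed_forecast = rng.uniform(3, 20, size=1000)
actual_power = np.clip(0.5 * wind_speed_forecast**2 + rng.normal(0, 5, 1000), 0, 200)

# Train a (deliberately simple) model on the historical pairs.
model = LinearRegression()
model.fit(wind_speed_forecast.reshape(-1, 1), actual_power)

# 36 hours ahead: take tomorrow's 24 hourly wind-speed forecasts and turn
# them into hourly power commitments for the grid operator.
tomorrow_wind = rng.uniform(3, 20, size=24)
hourly_commitments = model.predict(tomorrow_wind.reshape(-1, 1))

for hour, mw in enumerate(hourly_commitments):
    print(f"hour {hour:02d}: commit {mw:.1f} MW")
```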

Machine Learning Technology 🎛

The top Machine Learning projects on GitHub in 2018. (GitHub)

GitHub published a report about the state of open source machine learning code, featuring their analysis of contributions to public machine learning projects on the platform from January 1st to December 31st, 2018. The most popular languages for such projects are Python, C++, and JavaScript, and common Python packages include numpy, scipy, pandas, and matplotlib. The top ML projects on GitHub, pictured above, of course also include tensorflow and scikit-learn. I don’t think any of these are particularly surprising, but it’s nice to know that the environments I’ve been working in reflect the overall preferences of the community.

Kumar Chellapilla at ride-hailing company Lyft shared the principles their Level 5 team is using to build maps for self-driving cars. Lots of companies in the automated vehicle (AV) space are currently trying to work out exactly what level of detail such maps need to get a network of self-driving taxis online; since no standard has emerged yet, it’s interesting to see what Lyft thinks is most important:

Chellapilla’s post goes into how they implement this, including four of the “layers” of their maps, from geography to real-time knowledge.

Cool Things ✨

Which of these is a real person, and which is a GAN-generated fake? (Which Face Is Real)

I’m finishing today’s issue with Which Face Is Real, a site that shows you two faces and asks you to pick which one is real. It’s a project by the University of Washington’s Calling Bullshit Project, which wants people to think critically about the information in front of them. This is also an example of exactly what Hugh Zhang argued for regarding OpenAI’s GPT-2: the fact that the Generative Adversarial Network (GAN) used to generate these fake faces is open source enables projects like this to be created, which in turn spreads awareness of what the AI model is capable of.

Thanks for reading! As always, let me know what you thought of this issue using the buttons below or by sending me a message. (Especially if you have thoughts on the long-form first section!) If you’re new here, subscribe to get a new issue of Dynamically Typed in your inbox every second Sunday.