#33: Billie Eilish answers AI-generated interview questions, visual search for aerial imagery, and the Tech Won't Drill It pledge

Hey everyone, welcome to Dynamically Typed #33! This is a very full issue, with lots of links, so let’s dive straight into it. :)
Editorial note: Inspired by the Tech Won’t Drill It pledge (see below), I’m also going to start doing a bit more background research on the companies I write about on Dynamically Typed, and I’ll add a climate disclosure if, for example, they do work that directly assists fossil fuel exploration or extraction.

Productized Artificial Intelligence 🔌
Right: The football field and running tracks at my high school in Rye, NY, USA. Left: similar-looking patches from across the United States.
Geospatial analysis company Descartes Labs launched a free online visual search tool for aerial imagery. The search tool lets you select a 128 x 128 meter patch and then finds up to 1,000 similar-looking patches across the United States. It’s also extremely fast:
Searching the continental United States at 1-meter pixel resolution, corresponding to approximately 2 billion images, takes approximately 0.1 seconds. This system enables real-time visual search over the surface of the earth.
This is a clever productization of Descartes Labs’ core technologies: even though it doesn’t make any money directly, the tool’s maps are very shareable and serve as great marketing for their paid Workbench product.*
On a technical level, Descartes Labs Search uses a standard ResNet-50 convolutional neural network, pretrained on ImageNet classification and fine-tuned on OpenStreetMap object classification, to encode each overlapping patch into a vector of 512 abstract binary features. They pre-compute these encodings for each map patch and store them in-memory in Redis; searching for similar patches is then reduced to the simple problem of finding binary strings with a small Hamming distance to the selected patch’s encoding.
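That retrieval step can be sketched in a few lines of NumPy: pack each patch’s 512 binary features into bytes, XOR the query against the database, and rank by the number of differing bits. (This is a minimal illustration of the idea, not Descartes Labs’ actual code; the database here is random toy data.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database: 100,000 patches, each encoded as 512 binary features,
# packed into 64 bytes (512 / 8) the way an in-memory index would store them.
db_bits = rng.integers(0, 2, size=(100_000, 512), dtype=np.uint8)
db_packed = np.packbits(db_bits, axis=1)  # shape (100000, 64)

def hamming_search(query_bits: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k patches closest to the query in Hamming distance."""
    query_packed = np.packbits(query_bits)          # shape (64,)
    xor = np.bitwise_xor(db_packed, query_packed)   # differing bits per byte
    dists = np.unpackbits(xor, axis=1).sum(axis=1)  # popcount per patch
    return np.argsort(dists)[:k]

# A query identical to patch 42 comes back first, at distance 0;
# random 512-bit codes sit around distance 256 from each other.
top = hamming_search(db_bits[42])
print(top[0])  # 42
```

Because the distance computation is just XOR and popcount over contiguous memory, it vectorizes extremely well, which is what makes sub-second search over billions of encodings plausible.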
I’ve found that the search tool works best for very distinct-looking patterns that mostly fit within a single patch: the football field with running tracks above, found at many high schools across the US, is a good example because it’s a mostly-green oval surrounded by a thin red line. You can try out the interactive search tool on Descartes Labs’ website (which you should, it’s fun!) or read more of the technical details in the paper by Keisler et al. (2020) on arXiv: Visual search over billions of aerial and satellite images.
* Climate disclosure: From Descartes Labs’ Solutions and Contact pages, it looks like they actively market their Workbench product to, and work together with, oil and gas companies. AI companies should say no to fossil fuel exploration and development.
Quick productized AI links 🔌
Machine Learning Research 🎛
Edward Raff wrote about his findings from independently reproducing 250+ ML papers from scratch. I previously wrote about his NeurIPS 2019 paper on the topic in DT #23; he has now distilled some of his findings into an article for The Gradient:
  1. Having fewer equations per page makes a paper more reproducible. This one is interesting because I’ve read quite a lot of complaints about people trying to make their paper look more math-y and impressive by including unnecessary derivations and proofs; this finding implies that those things actually hurt a paper’s reproducibility.
  2. Empirical papers may be more reproducible than theory-oriented papers.
  3. Sharing code is not a panacea. I’ve recently done quite a few paper reproductions at work, and this rings very true: research code is often so messy and inconsistent with the paper that it doesn’t add much value.
  4. Having detailed pseudocode is just as reproducible as having no pseudocode.
  5. Creating simplified example problems does not appear to help with reproducibility.
  6. Please, check your email. Papers are easier to reproduce if the authors are willing to answer questions about unclear details!
Read the full story, including Raff’s takeaways from these findings, on The Gradient: Quantifying Independently Reproducible Machine Learning.
Quick ML research + resource links 🎛 (see all 53 resources)
Artificial Intelligence for the Climate Crisis 🌍
Tech Won’t Drill It. Nearly 50 machine learning researchers wrote an open letter asking big tech companies to stop selling their AI products to fossil fuel companies:
As new applications of artificial intelligence (AI) to problems in the physical sciences emerge, many such innovations are being used to accelerate fossil fuel exploration and development projects[, …] leading to “automating the climate crisis.”
Momentum around this issue has been building up for a few months, recently through this Vox video highlighting that major tech companies are actively courting oil companies to use their machine learning products for fossil fuel exploration. Roel Dobbe and Meredith Whittaker, writing for the AI Now Institute in October 2019:
Amazon is luring potential customers in the oil and gas industry with programs like “Predicting the Next Oil Field in Seconds with Machine Learning.” Microsoft held an event called “Empowering Oil & Gas with AI,” and Google Cloud has its own Energy vertical dedicated to working with fossil fuel companies.
As the letter notes, this is especially egregious next to the publicity spotlight these same companies put on the climate-focused AI work—the very work I highlight in this newsletter twice a month. The pledge therefore “[urges] tech and oil companies to stop exploiting AI technologies to facilitate and accelerate fossil fuel exploration and extraction.” I’d be surprised if dropping these verticals would represent more than a drop in the bucket for these companies’ bottom lines, and I sincerely hope that groups like Google Workers for Action on Climate and Amazon Employees For Climate Justice can push for them to do so.
Signers of the pledge include Turing award winner Yoshua Bengio, Meredith Whittaker and Kate Crawford of the NYU AI Now Institute, and many university professors and researchers. I’ve signed it as well, and if you work in artificial intelligence—at any level; I’m just an ML Engineer!—I think you should fill in the Google Form to sign too: Tech Won’t Drill It—No to AI for Fossil Fuel Exploration and Development.
Quick climate AI links 🌍
Cool Things ✨
Vogue Magazine interviews Billie Eilish using questions generated by an AI bot.
Nicole He used AI to generate the questions for Vogue Magazine’s Billie Eilish interview. This is another great example of creatively fine-tuning OpenAI’s GPT-2 (see also AI Dungeon 2 in DT #28), and the language model came up with some really novel questions:
  • Was there a point where you decided you’d rather look up to the sky or the internet?
  • Do you ever wear headphones with sounds in them?
  • Have you ever seen the ending?
Eilish answers these questions in Vogue’s video interview, and also reacts to a song the model generated based on all her previous lyrics (spoiler: she rates it a 6 out of 10). The video anthropomorphizes AI a bit too much in my opinion—it makes it sound like an “A.I. Bot” is conducting the whole interview, while it’s really just a person with a robot voice hand-picking the most interesting AI-generated questions to ask, and judging from the YouTube comments most viewers don’t realize that—but it’s a fun watch nonetheless: Billie Eilish Gets Interviewed By a Robot. Also check out Nicole He’s Twitter thread for more technical details.
This AI-generated art appeared in the New York Times on October 19th, 2018
IBM AI researchers wrote about the art they generated for the New York Times special section on AI. They first identified “core visual concepts” by scraping NYT articles for AI-related terms, and then used a discriminative appearance model to find the one that most distinctly represents NYT AI articles: the (slightly clichéd, but that makes sense) photo of a human and robot shaking hands. They then used a generative adversarial network (GAN) to generate new images of humans and robots shaking hands. Finally, they applied style transfer to generate art that feels in line with the abstract art of past NYT covers. I think the resulting art looks really cool, and is definitely worthy of being a magazine cover or spread. Check out more generated art in the paper by Merler et al. (2020) on arXiv: Covering the News with (AI) Style (PDF).
Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get new issues in your inbox every second Sunday.
If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 🌬
Leon Overweel (Dynamically Typed)

My thoughts and links on productized artificial intelligence, machine learning technology, and AI projects for the climate crisis. Delivered to your inbox every second Sunday.