#33: Billie Eilish answers AI-generated interview questions, visual search for aerial imagery, and the Tech Won't Drill It pledge
Hey everyone, welcome to Dynamically Typed #33! This is a very full issue, with lots of links, so let’s dive straight into it. :)
Editorial note: Inspired by the Tech Won’t Drill It pledge (see below), I’m also going to start doing a bit more background research on the companies I write about on Dynamically Typed, and I’ll add a climate disclosure if, for example, they do work that directly assists fossil fuel exploration or extraction.
Productized Artificial Intelligence 🔌
Left: the football field and running tracks at my high school in Rye, NY, USA. Right: similar-looking patches from across the United States.
Geospatial analysis company Descartes Labs launched a free online visual search tool for aerial imagery. The search tool lets you select a 128 x 128 meter patch and then finds up to 1,000 similar-looking patches across the United States. It’s also extremely fast:
Searching the continental United States at 1-meter pixel resolution, corresponding to approximately 2 billion images, takes approximately 0.1 seconds. This system enables real-time visual search over the surface of the earth.
This is a clever productization of Descartes Labs’ core technologies because, even though it doesn’t make any money directly, the tool’s maps are very shareable and serve as great marketing for their paid Workbench product. *
On a technical level, Descartes Labs Search uses a standard ResNet-50 convolutional neural network, pretrained on ImageNet classification and fine-tuned on OpenStreetMap object classification, to encode each overlapping patch into a vector of 512 abstract binary features. They precompute these encodings for each map patch and store them in-memory in Redis; searching for similar patches then reduces to the simple problem of finding binary strings with a small Hamming distance to the selected patch.
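To make that search step concrete, here’s a minimal numpy sketch of Hamming-distance search over binarized feature vectors. This is my illustration of the general technique, not Descartes Labs’ actual code: the sign-based binarization, array shapes, and function names are all assumptions.

```python
# Minimal sketch: Hamming-distance search over binarized CNN features.
# Assumes you already have one 512-dimensional float feature vector per
# patch (e.g. from a ResNet-50); everything below is illustrative.
import numpy as np

def binarize(features: np.ndarray) -> np.ndarray:
    """Turn a (n_patches, 512) float array into packed binary codes."""
    bits = (features > 0).astype(np.uint8)   # 1 bit per feature (assumed threshold)
    return np.packbits(bits, axis=1)          # -> (n_patches, 64) uint8

# Popcount lookup table: number of set bits for every possible byte value.
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def nearest_patches(query: np.ndarray, codes: np.ndarray, k: int = 1000):
    """Return indices of the k codes closest to `query` in Hamming distance."""
    xor = np.bitwise_xor(codes, query)        # differing bits, byte by byte
    distances = POPCOUNT[xor].sum(axis=1)     # Hamming distance per patch
    return np.argsort(distances)[:k]

# Toy example: 2 billion patches won't fit in a script, so use 100k here.
features = np.random.randn(100_000, 512).astype(np.float32)
codes = binarize(features)
query = binarize(np.random.randn(1, 512).astype(np.float32))[0]
top_k = nearest_patches(query, codes, k=10)
```

Because XOR-plus-popcount is a handful of cheap bitwise operations, a scan like this parallelizes and vectorizes extremely well, which is how sub-second search over billions of patches becomes plausible.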
I’ve found that the search tool works best for very distinct-looking patterns that mostly fit within a single patch: the football field with running tracks above, found at many high schools across the US, is a good example because it’s a mostly-green oval surrounded by a thin red line. You can try out the interactive search tool on Descartes Labs’ website (which you should, it’s fun!) or read more of the technical details in the paper by Keisler et al. (2020) on arXiv: Visual search over billions of aerial and satellite images.
* _Climate disclosure:_ From Descartes Labs’ _Solutions_ and _Contact_ pages, it looks like they actively market their Workbench product to, and work together with, oil and gas companies. AI companies _should say no_ to fossil fuel exploration and development.
Quick productized AI links 🔌
- 💄 Pinterest launched another AI-powered feature in its app: Try On powered by Lens shows users how lipstick would look on their lips, through an augmented reality overlay in the selfie camera. Story by Maghan McDowell for Vogue Business: Pinterest steps up e-commerce efforts with AR try-on. (Also see DT #23 for more about Pinterest Lens.)
- 👩‍⚖️ A Dutch court ruled that the System Risk Indication (SyRI), an automated surveillance system used to predict benefits or tax fraud, is illegal. Sanne Blauw for The Correspondent: An algorithm was taken to court – and it lost (which is great news for the welfare state).
Machine Learning Research 🎛
Edward Raff wrote about his findings from independently reproducing 250+ ML papers from scratch. I previously wrote about his NeurIPS 2019 paper on the topic in DT #23; he has now distilled some of his findings into an article for The Gradient:
- Having fewer equations per page makes a paper more reproducible. This one is interesting because I’ve read quite a lot of complaints about people trying to make their papers look more math-y and impressive by including unnecessary derivations and proofs; this finding implies that those things actually hurt a paper’s reproducibility.
- Empirical papers may be more reproducible than theory-oriented papers.
- Sharing code is not a panacea. I’ve recently done quite a few paper reproductions at work, and this rings very true: research code is often so messy and inconsistent with the paper that it doesn’t add much value.
- Having detailed pseudocode is just as reproducible as having no pseudocode.
- Creating simplified example problems does not appear to help with reproducibility.
- Please, check your email. Papers are easier to reproduce if the authors are willing to answer questions about unclear details!
Read the full story, including Raff’s takeaways from these findings, on The Gradient: Quantifying Independently Reproducible Machine Learning.
Quick ML research + resource links 🎛 (see all 53 resources)
- 👾 The Abstraction and Reasoning Challenge (ARC, see DT #26) now has a Kaggle challenge.
- 💬 Microsoft has a gigantic new language model: Turing-NLG, a 17-billion-parameter model that’s more than 10x bigger than OpenAI’s GPT-2. Crazy.
- ⚡️ Google launched Colab Pro, a $10/month subscription that gives you cloud Jupyter notebooks with faster GPUs + TPUs, longer runtimes, and more memory.
- ⚡️ Renato Negrinho wrote a Python-to-TikZ transpiler to generate TikZ code for LaTeX figures more easily. GitHub link: negrinho/sane_tikz
Artificial Intelligence for the Climate Crisis 🌍
Tech Won’t Drill It. Nearly 50 machine learning researchers wrote an open letter asking big tech companies to stop selling their AI products to fossil fuel companies:
As new applications of artificial intelligence (AI) to problems in the physical sciences emerge, many such innovations are being used to accelerate fossil fuel exploration and development projects […] leading to “automating the climate crisis.”
Momentum around this issue has been building up for a few months, recently through this Vox video highlighting that major tech companies are actively courting oil companies to use their machine learning products for fossil fuel exploration. Roel Dobbe and Meredith Whittaker, writing for the AI Now Institute in October 2019:
Amazon is luring potential customers in the oil and gas industry with programs like “Predicting the Next Oil Field in Seconds with Machine Learning.” Microsoft held an event called “Empowering Oil & Gas with AI,” and Google Cloud has its own Energy vertical dedicated to working with fossil fuel companies.
As the letter notes, this is especially egregious next to the publicity spotlight these same companies put on their climate-focused AI work—the very work I highlight in this newsletter twice a month. The pledge therefore “[urges] tech and oil companies to stop exploiting AI technologies to facilitate and accelerate fossil fuel exploration and extraction.” I’d be surprised if dropping these verticals represented more than a drop in the bucket for these companies’ bottom lines, and I sincerely hope that groups like Google Workers for Action on Climate and Amazon Employees For Climate Justice can push them to do so.
Signers of the pledge include Turing award winner Yoshua Bengio, Meredith Whittaker and Kate Crawford of the NYU AI Now Institute, and many university professors and researchers. I’ve signed it as well, and if you work in artificial intelligence—at any level; I’m just an ML Engineer!—I think you should fill in the Google Form to sign too: Tech Won’t Drill It—No to AI for Fossil Fuel Exploration and Development.
Quick climate AI links 🌍
- 🖥 Kasim et al. (2020) on arXiv: Deep Emulator Network SEarch (DENSE): Up to two billion times acceleration of scientific simulations with deep neural architecture search—they specifically mention climate science and fusion energy research as use cases for their new general approach for speeding up simulations.
- 🛫 Yoshua Bengio, on his new blog, argued that AI conferences should allow for more remote/video presentations, to reduce their carbon footprint.
Cool Things ✨
Vogue Magazine interviews Billie Eilish using questions generated by an AI bot.
Nicole He used AI to generate the questions for Vogue Magazine’s Billie Eilish interview. This is another great example of creatively fine-tuning OpenAI’s GPT-2 (see also AI Dungeon 2 in DT #28), and the language model came up with some really novel questions:
- Was there a point where you decided you’d rather look up to the sky or the internet?
- Do you ever wear headphones with sounds in them?
- Have you ever seen the ending?
Eilish answers these questions in Vogue’s video interview, and also reacts to a song the model generated based on all her previous lyrics (spoiler: she rates it a 6 out of 10). The video anthropomorphizes AI a bit too much in my opinion—it makes it sound like an “A.I. Bot” is conducting the whole interview, while it’s really just a person with a robot voice hand-picking the most interesting AI-generated questions to ask, and judging from the YouTube comments most viewers don’t realize that—but it’s a fun watch nonetheless: Billie Eilish Gets Interviewed By a Robot. Also check out Nicole He’s Twitter thread for more technical details.
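For a sense of what sampling questions from a fine-tuned GPT-2 looks like, here’s a rough sketch using the Hugging Face transformers library. To be clear, this is not He’s actual code: the model directory, prompt, and sampling parameters are all assumptions.

```python
# Sketch: sampling interview questions from a fine-tuned GPT-2 with the
# Hugging Face transformers library. "finetuned-gpt2-questions" is a
# hypothetical local model directory; He's real setup may differ.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("finetuned-gpt2-questions")

# Prompt with the start of a question and sample several completions.
input_ids = tokenizer.encode("Q: ", return_tensors="pt")
outputs = model.generate(
    input_ids,
    max_length=40,
    do_sample=True,          # sample instead of greedy decoding
    top_k=40,                # keep only the 40 most likely next tokens
    temperature=0.9,         # slightly flatten the distribution
    num_return_sequences=5,  # generate five candidate questions
    pad_token_id=tokenizer.eos_token_id,
)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```

The human curation step is then exactly what it sounds like: generate lots of candidates and hand-pick the most interesting ones to ask.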
This AI-generated art appeared in the New York Times on October 19th, 2018.
IBM AI researchers wrote about the art they generated for the New York Times special section on AI. They first identified “core visual concepts” by scraping NYT articles for AI-related terms, and then used a discriminative appearance model to find the concept that most distinctly represents NYT AI articles: the (slightly clichéd, but that makes sense) photo of a human and robot shaking hands. They then used a generative adversarial network (GAN) to generate new images of humans and robots shaking hands. Finally, they applied style transfer to generate art that feels in line with the abstract art of past NYT covers. I think the resulting art looks really cool, and is definitely worthy of being a magazine cover or spread. Check out more generated art in the paper by Merler et al. (2020) on arXiv: Covering the News with (AI) Style (PDF).
Quick cool things links ✨
- 🎨 Frank and Frank (2020) on arXiv: Rembrandts and Robots: Using Neural Networks to Explore Authorship in Painting
- 📹 Digg: Someone Used Neural Networks To Upscale An 1895 Film To 4K 60 FPS, And The Result Is Really Quite Astounding
Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get a new issue in your inbox every second Sunday.
If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 🌬