
#56: My new job in climate AI!

Hey there, welcome to Dynamically Typed #56. First, a personal update going into the new year: I’m starting a new job! Monday will be my first day as machine learning engineer at Dexter Energy, an Amsterdam-based startup that uses AI to predict and optimize renewable (mostly wind/solar) power sources.
The world is deploying ever-cheaper renewables at a breakneck pace, and in the next decades we’ll need to do this even more, even faster, to cleanly energize the world. As electricity on the grid is increasingly supplied by highly variable wind and solar sources, forecasting and optimizing them becomes increasingly important. How much sun will there be in two hours? Should I sell my wind park’s power directly to the grid, or charge a battery now and sell it later? This is where AI comes in: see DeepMind’s wind forecasting project, for example, and lots of work at the 2019 and 2020 Climate Change AI workshops. The seminal 97-page CCAI paper also calls out power forecasting as a High Leverage application of AI to climate (see DT #16). I’ve been writing about this stuff in Dynamically Typed for almost two years now, so I’m super excited that I’ll finally get to apply my own ML and software engineering skills to these problems directly!
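To make the battery question concrete, here is a toy sketch of the trade-off. The numbers, the greedy rule, and the `sell_or_store` function are all made up for illustration; real power traders use probabilistic price and weather forecasts and proper optimization, not a one-line comparison.

```python
# Toy illustration of the "sell now or charge a battery?" question above.
# Everything here is a made-up stand-in for a real trading strategy.

def sell_or_store(price_now, forecast_prices, round_trip_efficiency=0.9):
    """Store if the best forecast price, after battery losses, beats
    selling at the current price; otherwise sell directly to the grid."""
    best_future = max(forecast_prices) * round_trip_efficiency
    return "store" if best_future > price_now else "sell"

# Current price 40 EUR/MWh; forecast peaks at 60 EUR/MWh:
print(sell_or_store(40, [35, 50, 60, 45]))  # 60 * 0.9 = 54 > 40 -> "store"
print(sell_or_store(55, [35, 50, 60, 45]))  # 54 < 55 -> "sell"
```

Even this toy version shows why forecasts matter: the whole decision hinges on how much you trust that 60 EUR/MWh peak.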
Anyway, back to today’s DT: there hasn’t been too much action in the AI world over the past two (holiday) weeks, but I still found a few cool links across most categories, including several from NeurIPS 2020 (the biggest AI conference, which ran from December 6th to 12th).

Productized Artificial Intelligence 🔌
  • 🚗 Business Insider’s Mathias Döpfner did a long new interview with Elon Musk. It covers a lot, and most of it isn’t too relevant to DT, but this Musk quote is: “I’m extremely confident that Tesla will have level five [self driving] next year, extremely confident, 100%.” Yes, this definitely isn’t the first time Musk has claimed full self-driving is just around the corner, but my slightly contrarian take (from a few months ago) is that I actually do think Tesla will get to a useful level of self-driving — deployed at scale in consumer cars — first. Their big bet years ago that vision (without LIDAR) is enough for autonomy has enabled them to be years ahead of the competition with their dataset. They’ve harnessed their fleet of Teslas on real roads for very clever sampling, feedback loops (ghost mode), and regression testing; Andrej Karpathy (Tesla’s head of AI) had a really great talk on all this in April last year.
Machine Learning Research 🎛
The evolution of DeepMind's reinforcement learning algorithms. (DeepMind)
  • 👾 New from DeepMind, in Nature: Mastering Atari, Go, chess and shogi by planning with a learned model by Schrittwieser et al. (2020). The paper describes DeepMind’s new games-playing reinforcement learning algorithm MuZero, which is the latest evolution of the lab’s previous AlphaGo (2016), AlphaGo Zero (2017), and AlphaZero (2018) algorithms. The key improvement in MuZero is that it doesn’t need to be explicitly told the rules of the games it plays: instead, it learns its own model of the environment, one that “just models aspects that are important to the agent’s decision-making process.” This helps it achieve state-of-the-art (and superhuman) results on the Atari suite, Go, chess, and shogi.
  • 🧠 François Chollet’s NeurIPS talk on abstraction and reasoning, based on his 2019 On the Measure of Intelligence paper (see DT #26), is a great watch. It covers the shortcut rule (“you get what you optimize for — at the detriment of everything else”); levels of generalization; and what kinds of abstraction he thinks deep learning can and cannot solve (by itself). The tutorial also included a talk on analogical reasoning by Melanie Mitchell and one on math reasoning by Christian Szegedy; this live-tweeted thread by Antonio Vergari summarizes all three talks in bite-size chunks. (RE: the previous link, it’d be super cool to see DeepMind try to tackle the ARC challenge — maybe someday.)
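Back to MuZero for a moment: here is a schematic toy sketch of its three learned functions (representation h, dynamics g, and prediction f, following Schrittwieser et al. 2020). The “networks” below are trivial made-up stand-ins, and the real system runs a full Monte Carlo tree search; this only illustrates how planning can work without ever being told the game’s rules.

```python
# Schematic toy sketch of MuZero's three learned functions. The real system
# uses deep neural networks plus Monte Carlo tree search; these stand-ins
# only show the structure of planning with a learned model.

def representation(observation):
    """h: encode a raw observation into a hidden state (toy stand-in)."""
    return sum(observation) % 7

def dynamics(state, action):
    """g: predict the next hidden state and reward (toy stand-in)."""
    next_state = (state + action) % 7
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

def prediction(state):
    """f: predict a policy and a value for a hidden state (toy stand-in)."""
    policy = {a: 1 / 3 for a in (0, 1, 2)}
    value = -abs(state - 3) / 3
    return policy, value

def plan_one_step(observation):
    """Pick the action with the best one-step lookahead return.
    MuZero proper expands a full search tree over the learned model."""
    state = representation(observation)
    def lookahead(action):
        next_state, reward = dynamics(state, action)
        _, value = prediction(next_state)
        return reward + value
    return max((0, 1, 2), key=lookahead)
```

The point of the structure: the agent never calls a game simulator during planning, only its own learned `dynamics` function.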
I’ve also collected all 75+ ML research tools previously featured in Dynamically Typed on a Notion page for quick reference. ⚡️
Artificial Intelligence for the Climate Crisis 🌍
  • 🏭 Towards Tracking the Emissions of Every Power Plant on the Planet by Couture et al. (2020) won the Best Pathway to Impact award at the NeurIPS 2020 CCAI workshop. Supported by Al Gore’s Climate TRACE (and grants from Google.org and Bloomberg Philanthropies), the project “[uses] machine learning to infer power generation and emissions from visible and thermal power plant signatures in satellite images.” In this initial paper, the authors present models that can predict whether a power plant is currently on or off from a single satellite image; their best model, a convolutional neural network, gets a mean average precision of 81% on this binary classification task. Interestingly, they find that the “vapor plume” (steam) from a power plant’s cooling system is a better indicator for its emissions than the “smoke plume” (greenhouse gasses) coming out of its main chimney.
  • 🔌 The United Kingdom Centre for AI & Climate organized a workshop on AI for Net Zero electricity in November 2020. Outcomes of the workshop include five high-level recommendations: (1) develop a vision and roadmap for the role of AI in reaching a net-zero electricity system; (2) create and distribute energy-AI training resources and build links with universities; (3) invest in explainable AI; (4) coordinate the development of data standards and access to key datasets; and (5) for government agencies to engage more closely with businesses. These all sound very reasonable to me — much better than most high-level “how will we sprinkle AI into this?” business strategies I’ve read! It’d be interesting to see if the European Union, the United States, China, India, etc. also have strategies on this.
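The 81% figure in the power-plant paper above is a mean average precision; as a refresher, here is a minimal average-precision computation for a single binary label, assuming the common ranking-based definition (precision averaged at each true-positive rank). The paper’s exact evaluation protocol may differ.

```python
# Minimal average precision for one binary label, assuming the common
# ranking-based definition; the paper's exact protocol may differ.

def average_precision(scores, labels):
    """scores: model confidences; labels: 1 = plant on, 0 = plant off."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

# Two "on" plants ranked 1st and 3rd out of four predictions:
ap = average_precision([0.9, 0.6, 0.2, 0.1], [1, 0, 1, 0])
print(round(ap, 3))  # (1/1 + 2/3) / 2 = 0.833
```

Unlike plain accuracy, AP rewards ranking the true “on” plants above the “off” ones, which matters when you’re triaging thousands of satellite images.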
Cool Things ✨
Sample images from Taming Transformers, with resolutions 1280x832, 1024x416, and 1280x240, from top to bottom. (Esser et al., 2020)
  • 🖼 In Taming Transformers for High-Resolution Image Synthesis, Esser et al. (2020) present “the first results on semantically-guided synthesis of megapixel images with transformers” — high-resolution AI-generated pictures! The samples on the project’s website are super impressive. Their model is “a convolutional VQGAN, which learns a codebook of context-rich visual parts, whose composition is modeled with an autoregressive transformer.”
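The “codebook of visual parts” idea can be illustrated with plain nearest-neighbor vector quantization. This is a toy version of the VQ step only: the real VQGAN learns its codebook jointly with a GAN and perceptual losses, and the `quantize` function below is made up for illustration.

```python
# Toy nearest-neighbor vector quantization, illustrating the "codebook"
# step from Esser et al. (2020). Each latent vector gets snapped to the
# index of its closest codebook entry.

def quantize(latents, codebook):
    """Return the index of the nearest codebook entry for each latent."""
    def nearest(vec):
        return min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(vec, codebook[k])))
    return [nearest(vec) for vec in latents]

codebook = [[0.0, 0.0], [1.0, 1.0]]   # two learned "visual parts"
latents = [[0.1, -0.2], [0.9, 1.1]]   # encoder outputs to discretize
print(quantize(latents, codebook))    # -> [0, 1]
```

Those discrete indices are what make the transformer stage possible: it models images as sequences of codebook tokens, much like words in a sentence.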
Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get new issues in your inbox every second Sunday.
If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 🎆
Leon Overweel (Dynamically Typed)

My thoughts and links on productized artificial intelligence, machine learning technology, and AI projects for the climate crisis. Delivered to your inbox every second Sunday.

If you don't want these updates anymore, please unsubscribe here.
If you were forwarded this newsletter and you like it, you can subscribe here.