#14: Artificial intelligence for medicine and the climate crisis
Hey everyone, this is the 14th issue of Dynamically Typed. Since there are also 14 days between each issue, that means I’ve now been doing this newsletter for 14^2 = 196 days—over half a year!
Recently, I’ve become more interested in how artificial intelligence can be used to fight the climate crisis, so I’m dedicating a section in today’s DT to a few projects I’ve come across in this space. Is this something you’d like to see in future editions as well? Let me know by replying to this email.
Other news I’m covering today includes a lot of research by Google, from end-to-end systems for detecting lung cancer in CT scans and translating speech to speech, to a framework of best practices for applied AI. Relatedly, there’s new work from Stripe and Wolfram centered on deploying machine learning systems at scale.
Productized Artificial Intelligence 🔌
Google’s Devvret Rishi and Margaret Jennings wrote up a framework of best practices for applied AI based on their work with ten fintech startups through Google’s Launchpad accelerator. The framework is divided into three sections:
- Framing the problem: getting the right team, incentives and expectations in place
- Building the model: starting simple and focusing on data pipelines over fancy models (“don’t be a hero”)
- Deploying, measuring, monitoring: testing for performance and biases early and thoroughly
A lot of their main takeaways mirror Josh Cogan’s earlier post for the Launchpad blog (see DT #7): building and tuning fancy machine learning models is not what takes the most time in productizing AI:
We can’t stress enough the importance of starting simple, with a clear business goal tied to the model’s output
Collecting data, cleaning data, augmenting your relatively small dataset, and aligning on a common business value will take the majority of your time
Read their full post here: Bridging the gap between research and big tech: applied AI/ML best practices for the modern enterprise.
Artificial Intelligence for the Climate Crisis 🌍
GAN-generated images of houses before and after a flood hits. (Schmidt et al.)
Victor Schmidt et al. used Generative Adversarial Networks to visualize the consequences of climate change. Their GAN is trained on street-view images of houses before and after catastrophic extreme weather events like floods and forest fires, which the climate crisis will exacerbate. The trained model takes an image of a house and, if it is located in a region that climate models predict will be hit by the effects of climate change in the next 50 years, transforms it into what the house would look like after being hit by extreme weather. Schmidt et al. hope that this will help create “a more visceral understanding of the effects of climate change.” Read more about it here (a toy sketch of the cycle-consistency idea behind their model follows these links):
- Will Knight for the MIT Technology Review: AI can show us the ravages of climate change
- Schmidt et al. on arXiv: Visualizing the Consequences of Climate Change Using Cycle-Consistent Adversarial Networks
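To make the before/after mapping a bit more concrete: as the paper title suggests, the model builds on CycleGAN-style cycle-consistent training, in which two generators translate between the two image domains and are penalized when a round trip fails to reconstruct the original image. Below is a minimal, illustrative PyTorch sketch of just that cycle-consistency loss; the tiny stand-in generators and all names are mine, not the authors’, and the real model uses much deeper networks plus adversarial losses on top.

```python
import torch
import torch.nn as nn

# Toy stand-in generators (the real ones are deep encoder-decoder networks).
# G_ab maps "normal" street-view images to "flooded" ones; G_ba maps them back.
G_ab = nn.Conv2d(3, 3, kernel_size=3, padding=1)
G_ba = nn.Conv2d(3, 3, kernel_size=3, padding=1)

def cycle_consistency_loss(x_normal, x_flooded):
    # Translating an image to the other domain and back again should
    # reconstruct the original image (measured here as L1 error).
    forward_cycle = torch.mean(torch.abs(G_ba(G_ab(x_normal)) - x_normal))
    backward_cycle = torch.mean(torch.abs(G_ab(G_ba(x_flooded)) - x_flooded))
    return forward_cycle + backward_cycle

x_normal = torch.randn(1, 3, 256, 256)   # toy "before the flood" image
x_flooded = torch.randn(1, 3, 256, 256)  # toy "after the flood" image
print(cycle_consistency_loss(x_normal, x_flooded))
```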
Alina Tugend wrote about several artificial intelligence projects around climate change for The New York Times. They include:
- Dr. Maria Uriarte of Columbia University is using AI to analyze how different types of trees are affected by hurricanes. She trained a network to identify tree species in satellite images, allowing her to use before-and-after images to compare the rates at which different species survive a hurricane.
- Stanford’s Grid Resilience & Intelligence Platform (GRIP) project uses satellite imagery to “anticipate, absorb and recover from events that cause grid outages, such as extreme weather or a cyberattack.”
- Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) is using machine learning to more efficiently genetically engineer wheat to resist worsening growing conditions caused by climate change.
- One Concern uses artificial intelligence to forecast damage due to earthquakes and flooding.
Read Tugend’s full piece for much more detail about all of the projects above: How A.I. Can Help Handle Severe Weather.
(PS: If you know a better title for this section than “Artificial Intelligence for the Climate Crisis,” let me know!)
Machine Learning Technology 🎛
Google’s lung cancer prediction model (Shravya Shetty, The Keyword)
Google AI’s Health division has developed a new end-to-end lung cancer screening system. The model, published in Nature Medicine, takes a 3D CT scan as input and outputs a prediction of whether it contains a potential cancer case; it does this very effectively:
When using a single CT scan for diagnosis, our model performed on par or better than the six radiologists. We detected five percent more cancer cases while reducing false-positive exams by more than 11 percent compared to unassisted radiologists in our study.
It’s very cool to see advanced ML technology being published outside of the usual AI conferences: besides the obvious scientific and medical value of this specific contribution, hopefully publications like this will help scientists in other fields understand more deeply what types of problems AI is good at tackling. A toy sketch of a 3D-convolutional classifier in this style follows the links below. More:
- Paper by Ardila et al. in Nature Medicine: End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography
- Shravya Shetty, M.S. for Google’s The Keyword blog: A promising step forward for predicting lung cancer
- Denise Grady for The New York Times: A.I. Took a Test to Detect Lung Cancer. It Got an A.
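For a rough sense of what “takes a 3D CT scan as input and outputs a prediction” looks like in code, here is a toy 3D-convolutional classifier in PyTorch. This is not Google’s architecture (theirs also detects regions of interest and can use a patient’s prior scans); everything below is a simplified illustration with made-up sizes.

```python
import torch
import torch.nn as nn

# Hypothetical, heavily simplified sketch; not the published model.
class TinyLungModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # global pooling over the whole volume
        )
        self.classifier = nn.Linear(32, 1)      # one logit: potential cancer case or not

    def forward(self, volume):                  # volume: (batch, 1, depth, height, width)
        x = self.features(volume).flatten(1)
        return torch.sigmoid(self.classifier(x))

model = TinyLungModel()
scan = torch.randn(1, 1, 64, 128, 128)          # a fake low-dose CT volume
print(model(scan))                              # predicted probability of a cancer case
```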
Also from Google: Translatotron, an end-to-end speech-to-speech translation model. Previous systems for translating spoken language take three steps: (1) automatic speech recognition to turn the spoken source sentence into text, (2) machine translation to translate that text into the target language, and (3) text-to-speech synthesis to “speak” it. Translatotron does this all in one go, improving translation speed, reducing compounding errors between steps, and making it easier to retain the voice of the original speaker in the translation (which is really, really cool). Read more here (a toy sketch contrasting the cascaded and end-to-end approaches follows these links):
- Ye Jia and Ron Weiss for the Google AI Blog (includes speech samples): Introducing Translatotron: An End-to-End Speech-to-Speech Translation Model
- Jia et al. on arXiv (includes model architecture): Direct speech-to-speech translation with a sequence-to-sequence model
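To make the contrast concrete, here is a small Python sketch of the two setups. The functions are stand-in stubs I made up for illustration, not a real API; the point is just that the cascaded pipeline passes (and compounds) errors through three separate systems, while the end-to-end model maps source speech directly to translated speech.

```python
# Stand-in stubs for the three components of a classic cascaded pipeline.
def speech_recognition(audio):          # step 1: ASR, source speech -> source text
    return "¿cómo estás?"

def machine_translation(text):          # step 2: MT, source text -> target text
    return "how are you?"

def text_to_speech(text):               # step 3: TTS, target text -> target speech
    return f"<synthesized audio: {text}>"

def cascaded_translate(source_audio):
    # Each step's mistakes are baked into the next step's input,
    # and the original speaker's voice is lost after step 1.
    text = speech_recognition(source_audio)
    translated = machine_translation(text)
    return text_to_speech(translated)

def end_to_end_translate(source_audio, model):
    # Translatotron-style: a single sequence-to-sequence model maps source
    # spectrograms directly to target spectrograms, with no intermediate text.
    return model(source_audio)

print(cascaded_translate("<spanish audio>"))
```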
Rob Story wrote about Railyard, an API and job manager that payments processor Stripe uses for scalable machine learning. Stripe uses machine learning for everything from fraud detection to retrying failed credit card charges, and it has teams training hundreds of new models every day. Railyard streamlines this process by exposing an API to which data scientists can submit jobs from any ML framework. Running on top of Stripe’s Kubernetes cluster, Railyard then trains, evaluates and saves the model. This takes away the cognitive load of having to think about infrastructure, operations, model state, etc., so data scientists can focus just on building and testing their models. Story’s full post explains how they implemented this at Stripe: Railyard: how we rapidly train machine learning models with Kubernetes.
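Railyard itself is internal to Stripe, so the snippet below is purely hypothetical: a sketch of what “submit a training job to an API and let the platform handle the infrastructure” might look like, with an invented endpoint and job spec just to illustrate the workflow the post describes.

```python
import requests  # hypothetical client-side sketch; Railyard's real API is not public

# An invented job spec: which training code to run, where the data lives,
# and what resources the job needs on the Kubernetes cluster.
job_spec = {
    "project": "fraud-detection",
    "trainer": "fraud/train.py",        # any framework: scikit-learn, XGBoost, PyTorch, ...
    "data": "s3://example-bucket/charges/2019-05/*.parquet",
    "resources": {"cpu": 8, "memory_gb": 32},
}

# The platform takes it from here: it schedules the job, trains and evaluates
# the model, and stores the resulting artifact and metrics.
response = requests.post("https://railyard.example.internal/v1/jobs", json=job_spec)
print(response.json()["job_id"])
```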
The Wolfram Engine is now free for developers to encourage its use in production systems; this engine is what powers the Wolfram Language and Wolfram|Alpha. I’ve always wanted to get more into the Wolfram Language because it has implementations of lots of machine learning models, plus an enormous set of built-in “computational knowledge” about the real world:
There are now altogether 5000+ functions in the language, covering everything from visualization to machine learning, numerics, image computation, geometry, higher math and natural language understanding—as well as lots of areas of real-world knowledge (geo, medical, cultural, engineering, scientific, etc.).
I never thought of a project that I could build end-to-end in the Wolfram Language ecosystem, but this release now makes it possible to run the Wolfram Engine on a server and call into it from other programming languages, opening up a world of possibilities. More on Stephen Wolfram’s blog: Launching Today: Free Wolfram Engine for Developers.
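As a minimal sketch of what that cross-language setup could look like, here’s how you might call a local Wolfram Engine kernel from Python using the Wolfram Client Library for Python (`wolframclient`). This assumes you have the free engine and the library installed; which calls you make beyond `evaluate` depends entirely on your project.

```python
# Assumes a local Wolfram Engine installation and `pip install wolframclient`.
from wolframclient.evaluation import WolframLanguageSession
from wolframclient.language import wl, wlexpr

session = WolframLanguageSession()  # starts a local Wolfram Engine kernel
try:
    # Evaluate raw Wolfram Language code from Python...
    print(session.evaluate(wlexpr("Total[Range[100]]")))  # -> 5050
    # ...or build expressions with the `wl` factory; the same pattern works for
    # built-in machine learning functions (Classify, Predict, ImageIdentify, ...).
    print(session.evaluate(wl.Prime(1000)))               # -> 7919
finally:
    session.terminate()
```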
Quick ML resource links ⚡️ (see all)
- Philip Guo shared ten “research design patterns” for finding research project ideas in technology-related fields, plus what to watch out for when applying them. Link: Research Design Patterns
I’ve now also made a Notion page with all the quick ML resource links I’ve shared so far, which I’ll keep updating as I send out new issues of Dynamically Typed. Check it out here: Machine Learning Resources.
Cool Things ✨
Infinite Patterns used the bottom-right image of a butterfly as input to “dream” this pattern. (Damien Henry, The Keyword)
Infinite Patterns is a tool to create one-of-a-kind patterns using machine learning. It’s made by artist duo Pinar&Viola and engineer Alexander Mordvintsev as part of the Google Arts & Culture Lab. You can upload an image to Infinite Patterns and it’ll use a DeepDream algorithm to transform it into a pattern. Try it here: Infinite Patterns.
Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get a new issue in your inbox every second Sunday.
If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 💕