#71: The AlphaFold Protein Structure Database, OpenAI Triton, and a CLIP "Tour of the Sacred Library"
Hey everyone, welcome to Dynamically Typed #71! Today for productized AI I covered the AlphaFold Protein Structure Database, a collaboration between DeepMind and the European Bioinformatics Institute. For ML research I’ve got OpenAI’s Python-like open-source Triton GPU programming language; and for climate AI there’s CCAI’s extensive papers repository. Finally, for cool things there’s a beautiful CLIP artwork by Ryan Moulton, plus Runway’s new research blog. Happy Sunday!
Productized Artificial Intelligence 🔌
- 🧬 AlphaFold, DeepMind’s protein folding neural network, represented a breakthrough in structural biology when it was released in December. Given a protein’s sequenced “code” of amino acid chains, the model predicts what shape the molecule “folds” itself into in nature — a key property for understanding how the protein works. After open-sourcing it last month, DeepMind has now partnered with the European Bioinformatics Institute (EMBL-EBI) to build the AlphaFold Protein Structure Database, “which offers the most complete and accurate picture of the human proteome, doubling humanity’s accumulated knowledge of high-accuracy human protein structures.” The database currently holds 350,000 predicted 3D protein structures and will soon be extended to cover all 100+ million sequenced proteins. It’s very impressive how, in less than a year, AlphaFold went from cutting-edge research to something that end users (scientists and drug developers) can use without having to run or understand the AI model themselves. From EMBL’s press release: “This step change will catalyse a huge amount of research in new areas, and the development of applications that were previously impossible, impractical or limited in their scope by the hitherto relatively restricted amounts of 3D structural information available.” Some people even think this’ll get the AlphaFold team a Nobel prize. More coverage: MIT Technology Review, NYT, DeepMind blog.
Machine Learning Research 🎛
- ⚙️ OpenAI has released v1.0 of Triton, its Python-like GPU programming language for neural networks. “Triton makes it possible to reach peak hardware performance with relatively little effort; for example, it can be used to write FP16 matrix multiplication kernels that match the performance of cuBLAS—something that many GPU programmers can’t do—in under 25 lines of code. Our researchers have already used it to produce kernels that are up to 2x more efficient than equivalent Torch implementations.” It’s interesting to see how many different intermediate languages have popped up in recent years — JAX and MLIR are the other big ones — that sit somewhere between the abstraction level of frameworks (TensorFlow/PyTorch) and of bare-metal GPU languages (CUDA), all trying to find an architectural balance in which bits to keep flexible and which bits to abstract away from developers. I always find it very hard to estimate which of these things are going to be mainstream and which will mostly be used as “glue” by framework- and low-level language builders. I guess time will tell. Triton is available on GitHub at openai/triton.
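To get a feel for Triton’s programming model — a grid of “program instances” that each load, compute on, and store one block of data — here’s a minimal NumPy sketch of the same blocked vector-add pattern. All names here (`blocked_add`, `BLOCK`) are illustrative; a real Triton kernel would use `@triton.jit` with `tl.program_id`, `tl.arange`, and masked `tl.load`/`tl.store` calls, and would run on the GPU.

```python
import numpy as np

BLOCK = 64  # number of elements handled by one "program instance"

def blocked_add(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """CPU sketch of Triton's blocked execution model for z = x + y."""
    n = x.shape[0]
    out = np.empty_like(x)
    num_programs = (n + BLOCK - 1) // BLOCK  # grid size, like triton.cdiv(n, BLOCK)
    for pid in range(num_programs):          # pid plays the role of tl.program_id(0)
        offsets = pid * BLOCK + np.arange(BLOCK)  # like tl.arange(0, BLOCK)
        mask = offsets < n                        # boundary mask, as in masked tl.load
        idx = offsets[mask]
        out[idx] = x[idx] + y[idx]                # load both tiles, add, store
    return out
```

In actual Triton, the loop over `pid` disappears: each iteration becomes an independent kernel instance launched in parallel, which is where the performance comes from.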
Artificial Intelligence for the Climate Crisis 🌍
- 🌎 After launching their wiki last month, Climate Change AI has now set up a workshop papers repository featuring the work presented at all eight previous Tackling Climate Change with AI events, held across conferences like ICML, ICLR and NeurIPS. All papers are tagged with searchable subject areas and whether the paper won an award. The power and energy tag, for example, will be a great source for our literature reviews at Dexter!
Cool Things ✨
“Tour of the Sacred Library” by Ryan Moulton
- 🖼 CLIP, one of OpenAI’s recent multimodal neural networks, is becoming one of the main models in AI artists’ tool belts. These artists have discovered a funny side effect of CLIP being trained on internet data: while prompting the model with the text “flowing landscape” causes it to generate a bit of a bland image, prompting it with “flowing landscape | Incredible Realistic 3D Rendered Background” produces amazing results. Similarly, Ryan Moulton prompted CLIP by describing a scene and suffixing it with “in the sacred library by James Gurney,” which resulted in a beautifully stylized set of images: Tour of the Sacred Library — Come, walk with me for a while through latent space. Worth a click.
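The trick above is just string construction before the prompt ever reaches CLIP. As a hedged sketch — the helper name, separator choice, and second scene below are my own illustration, not from Moulton’s work or any CLIP library:

```python
def style_prompt(scene: str, style: str, sep: str = " | ") -> str:
    """Append a stylistic suffix to a plain scene description,
    the way CLIP-guided artists steer image generation."""
    return f"{scene}{sep}{style}"

prompts = [
    # The newsletter's example, with the " | " separator:
    style_prompt("flowing landscape", "Incredible Realistic 3D Rendered Background"),
    # Moulton-style suffixing, joined with a plain space (hypothetical scene):
    style_prompt("a reading room at dusk", "in the sacred library by James Gurney", sep=" "),
]
```

Each resulting string would then be encoded by CLIP’s text encoder and used to guide an image generator toward that style.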
- 🎨 Runway, the “app store” of easy-to-use machine learning models accessible through a Photoshop-like interface, has launched a new Runway Research blog. It features technical walkthroughs, overviews of their open-source work, and a deep dive into their Green Screen video editing tool. Sadly, though, there’s no RSS feed for the blog yet.
Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get new issues in your inbox every second Sunday.
If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 🥧