Dynamically Typed

#36: Google releases TensorFlow Quantum, Software 2.0 at Plumerai, and encoding scenes in neural networks

Hey everyone, welcome to Dynamically Typed #36! Today in productized AI, I’m covering a blog post on how we’re thinking about Software 2.0 at Plumerai, and I have links to some great reads from Andreessen Horowitz and Google’s Soli team. For ML research, I wrote about TensorFlow Quantum, which is exactly what it sounds like: machine learning for quantum computers! As the world (rightly) focuses on our current COVID-19 crisis, the past two weeks have been a bit quiet on climate AI news so I don’t have any stories there for today. But, lastly, I did find a cool paper about using a set of still images to encode 3D scenes into fully-connected neural networks. Let’s dive into today’s newsletter, and remember to wash your hands! 🧼

Productized Artificial Intelligence 🔌

The stacked layers of Plumerai’s Larq ecosystem: Larq Compute Engine, Larq, and Larq Zoo.

A few days ago, we published a new blog post about Software 2.0 at Plumerai, which touches on some points that are interesting for productized artificial intelligence at large. As a reminder, Plumerai—and my day-to-day research there—is centered around Binarized Neural Networks (BNNs): deep learning models in which weights and activations are not floating-point numbers but can only be -1 or +1. Larq is our ecosystem of open-source packages for BNN development: larq/zoo has pretrained, state-of-the-art models; larq/larq integrates with TensorFlow Keras to provide BNN layers and training tools; and larq/compute-engine is an optimized converter and inference engine for deploying models to mobile and edge devices.
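To give a sense of what this looks like in practice, here's a minimal sketch of a small BNN defined with Larq's Keras-style quantized layers, following the patterns in the public Larq documentation (the architecture itself is purely illustrative, not one of our production models):

```python
import tensorflow as tf
import larq as lq

# Binarize weights and activations with the straight-through
# estimator sign function, clipping latent weights to [-1, 1].
kwargs = dict(
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
    kernel_constraint="weight_clip",
)

model = tf.keras.Sequential([
    # By convention, the first layer keeps full-precision inputs.
    lq.layers.QuantConv2D(32, (3, 3),
                          kernel_quantizer="ste_sign",
                          kernel_constraint="weight_clip",
                          use_bias=False,
                          input_shape=(32, 32, 3)),
    tf.keras.layers.BatchNormalization(scale=False),
    lq.layers.QuantConv2D(64, (3, 3), use_bias=False, **kwargs),
    tf.keras.layers.BatchNormalization(scale=False),
    tf.keras.layers.Flatten(),
    lq.layers.QuantDense(10, use_bias=False, **kwargs),
    tf.keras.layers.Activation("softmax"),
])
```

From there, it trains like any other Keras model with model.compile() and model.fit().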

Andrej Karpathy, who was previously at OpenAI and is now developing Tesla’s self-driving software, first wrote about his vision for Software 2.0 back in 2017. I recommend reading the whole essay, but it boils down to the idea that large chunks of currently human-written software will be replaced by learned neural networks—something we already see happening in areas like computer vision, machine translation, and speech recognition/synthesis. Let’s look at two benefits Karpathy notes about this shift that are relevant to our work on Larq.

First, neural networks are agile. Depending on computational requirements—running on a high-power chip vs. an energy-efficient one—deep learning models can be scaled by trading off size for accuracy. Take one of the BNN families in our Zoo package, for example: the XL version of QuickNet achieves 67.0% top-1 ImageNet classification accuracy at a size of 6.2 MB, but it can also be scaled down to 58.6% at just 3.2 MB. Depending on the power and accuracy requirements of your application, you can swap in one for the other without having to otherwise change your code.
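In code, that swap can be a one-line change. Here's a hedged sketch, assuming the QuickNet and QuickNetXL model names from the larq-zoo docs:

```python
import numpy as np
import larq_zoo as lqz

# Pick whichever QuickNet variant fits your power/accuracy budget;
# everything downstream stays the same.
model = lqz.sota.QuickNet(weights="imagenet")       # ~3.2 MB
# model = lqz.sota.QuickNetXL(weights="imagenet")   # ~6.2 MB, higher accuracy

image = np.random.rand(224, 224, 3).astype("float32")  # stand-in for a real image
batch = np.expand_dims(lqz.preprocess_input(image), axis=0)
predictions = model.predict(batch)
```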

Second, deep learning models constantly get better. If we think of a cool new training trick or other optimization for BNNs, we can push it out in an updated version of the QuickNet family. In fact, that’s exactly what we did last week:

A great example of the power of this integrated approach is the recent addition of one-padding across the Larq stack. Padding with ones instead of zeros simplifies binary convolutions, reducing inference time without degrading accuracy. We not only enabled this in [Larq Compute Engine], but also implemented one-padding in Larq and retrained our QuickNet models to incorporate this feature. All you need to do to get these improvements is update your pip packages.

I’m super excited about the idea of Software 2.0, and—as you can probably tell from the paragraphs above—I’m pumped to be working on it every day at Plumerai, both on the research side (making better models) and the software engineering side (improving Larq). You can read more about our Software 2.0 aspirations in this blog post: The Larq Ecosystem: State-of-the-art binarized neural networks and even faster inference.

Quick productized AI links 🔌

Machine Learning Research 🎛

“A high-level abstract overview of the computational steps involved in the end-to-end pipeline for inference and training of a hybrid quantum-classical discriminative model for quantum data in TFQ.”

Google has released TensorFlow Quantum (TFQ), its open-source library for training quantum machine learning models. The package integrates TensorFlow with Cirq, Google’s library for working with Noisy Intermediate-Scale Quantum (NISQ) computers (on the order of ~50–100 qubits). Users can define a quantum dataset and model in Cirq and then use TFQ to evaluate it and extract a tensor representation of the resulting quantum states. For now, Cirq computes these representations (samples or averages of the quantum state) using millions of simulation runs, but in the future it will be able to get them from real NISQ processors. The representations feed into a classical TensorFlow model and can be used to compute its loss. Finally, a gradient descent step updates the parameters of both the quantum and classical models.
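Here's a minimal hedged sketch of what that pipeline looks like in code, adapted from the style of the examples in the announcement (the single-qubit circuit and toy data are mine, purely for illustration):

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)

# Toy "quantum dataset": circuits that prepare two different states,
# converted to a tensor representation TFQ can consume.
quantum_data = tfq.convert_to_tensor([
    cirq.Circuit(cirq.rx(0.3)(qubit)),
    cirq.Circuit(cirq.rx(2.8)(qubit)),
])
labels = tf.constant([[1.0], [-1.0]])

# Parameterized model circuit; theta is a trainable quantum parameter.
theta = sympy.Symbol("theta")
model_circuit = cirq.Circuit(cirq.ry(theta)(qubit))

# The PQC layer simulates the circuit and outputs the expectation
# value of Z, a classical tensor the rest of the Keras model can use.
inputs = tf.keras.Input(shape=(), dtype=tf.string)
expectations = tfq.layers.PQC(model_circuit, cirq.Z(qubit))(inputs)
model = tf.keras.Model(inputs=inputs, outputs=expectations)

# Gradient descent updates the quantum parameter like any other weight.
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(quantum_data, labels, epochs=20, verbose=0)
```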

A key feature of TensorFlow Quantum is the ability to train and execute many quantum circuits simultaneously. This relies on TensorFlow's ability to parallelize computation across a cluster of computers, combined with the ability to simulate relatively large quantum circuits on multi-core machines.

TensorFlow Quantum is a collaboration with the University of Waterloo, Alphabet's X, and Volkswagen; Volkswagen aims to use it for materials (battery) research. Other potential applications of quantum ML models include medicine, sensing, and communications.

These are still very much the early days of the quantum ML field (and of quantum computing in general), but nonetheless it's exciting to see this much software tooling and infrastructure being built up around it. For lots more details, plus links to sample code and notebooks, check out the Google AI blog post by Alan Ho and Masoud Mohseni: Announcing TensorFlow Quantum: An Open Source Library for Quantum Machine Learning.

Quick ML research + resource links 🎛 (see all 58 resources)

Cool Things ✨

Novel views generated from pictures of a scene.

Representing Scenes as Neural Radiance Fields for View Synthesis, or NeRF, is some very cool new research out of UC Berkeley. Given 20-50 input images of a scene taken from slightly different viewpoints, the authors encode the scene into a fully-connected (non-convolutional) neural network that can then render the scene from novel viewpoints. It's hard to convey this in static images like the ones I embedded above, so I highly recommend you check out the excellent webpage for the research and the accompanying video.
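The core representation is surprisingly simple: the scene is the weights of a plain MLP that maps a 5D coordinate (3D position plus 2D viewing direction) to an emitted color and a volume density, which a renderer then integrates along camera rays. Here's a stripped-down sketch of that idea in Keras (this omits the positional encoding and hierarchical sampling the paper shows are crucial for sharp results; the layer sizes loosely follow the paper's 8-layer, 256-unit MLP but are otherwise illustrative):

```python
import tensorflow as tf

def make_scene_mlp(depth=8, width=256):
    # The network itself encodes the scene: 5D coordinate in,
    # RGB color and volume density out.
    coords = tf.keras.Input(shape=(5,))  # (x, y, z, theta, phi)
    x = coords
    for _ in range(depth):
        x = tf.keras.layers.Dense(width, activation="relu")(x)
    rgb = tf.keras.layers.Dense(3, activation="sigmoid", name="color")(x)
    sigma = tf.keras.layers.Dense(1, activation="relu", name="density")(x)
    return tf.keras.Model(coords, [rgb, sigma])
```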

Thanks for reading! As usual, you can let me know what you thought of today’s issue using the buttons below or by replying to this email. If you’re new here, check out the Dynamically Typed archives or subscribe below to get a new issue in your inbox every second Sunday.

If you enjoyed this issue of Dynamically Typed, why not forward it to a friend? It’s by far the best thing you can do to help me grow this newsletter. 🏠