Dynamically Typed

#4: GAN you feel the love tonight?

Hey everyone, welcome to the first public (ahh!!) issue of Dynamically Typed. My goal with this newsletter is to keep you up to date on productized AI, ML technology, and the tech industry in general (plus some cool stuff I find across the web). Hopefully, every second Sunday, I’ll help you find a few interesting stories that you otherwise wouldn’t have come across. So put on some sweet tunes and let’s dive in!

Productized Artificial Intelligence 🔌

Benedict Evans wrote a blog post examining whether the “[Google / China / Facebook] has all the data, so there’s no way to compete with them on AI!” argument holds up. He argues that it doesn’t, really: all this data is very problem-specific, so it doesn’t generalize much beyond the specific tasks those companies deal with – which leaves lots of opportunities for AI startups to tackle other problems, if they can find the data and the product-market fit. In the long run, he thinks machine learning will be much like databases:

This takes me to a metaphor I’ve used elsewhere - we should compare machine learning to SQL. It’s an important building block that allowed new and important things, and will be part of everything. If you don’t use it and your competitors do, you will fall behind. Some people will create entirely new companies with this - part of Wal-Mart’s success came from using databases to manage inventory and logistics more efficiently. But today, if you started a retailer and said “…and we’re going to use databases”, that would not make you different or interesting - SQL became part of everything and then disappeared. The same will happen with machine learning.

Read Evans’ full post here: Does AI make strong tech companies stronger? (Web).

But when a company like Google uses its vast data to its advantage, the results are amazing: a few weeks ago, Aparna Chennapragada (VP, Lens and AR at Google) introduced the new version of Google Lens. The app can now use your phone camera to search for what you’re looking at (like a specific dog breed) and read the text you’re seeing (like copying text off a menu). Chennapragada also explains how they used Google Images search data to train these features, and her post also contains some great gifs and photos of Shiba Inus: The era of the camera: Google Lens, one year in (Web).

Also from Google: a blog post reviewing their AI principles after half a year in production. The post covers their internal AI ethics education as well as their new formal review structure for assessing new ML projects and deals. I think it’s good progress. Read it here: Google AI Principles updates, six months in (Web).

Brad Templeton published a year-in-review blog post on the progress made in self-driving cars in 2018. His post touches on the Uber fatality (a self-driving Uber killed someone in March), Waymo’s soft launch, flying cars (really!), and lots of other stuff. Read it here: Robocar Year in Review for 2018 (Web).

Machine Learning Tech 🎛

Faces generated by copying “coarse” (top three rows), “middle” (next two rows), and “fine” (bottom row) styles from the source (top) faces to the destination (left) faces (Tero Karras et al., NVIDIA)

I meant to share this in the last newsletter, but it slipped through the cracks: a new paper by researchers at NVIDIA has made some incredible progress on Generative Adversarial Network (GAN) face generation. Their style-based generator learns separate per-layer “styles” – coarse ones like pose and face shape, fine ones like hair and skin tone – that can be mixed and matched between faces, as in the figure above.
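To make that style mixing concrete, here’s a toy numpy sketch of the bookkeeping involved. Everything in it – the layer count, the style dimension, the mapping network – is a made-up stand-in for the real trained networks, so treat it purely as an illustration of the idea:

```python
import numpy as np

# Toy illustration of style mixing: the generator consumes one style vector
# per layer, so mixing two faces means taking coarse-layer styles from one
# latent and fine-layer styles from another. All sizes here are stand-ins.
rng = np.random.default_rng(0)
num_layers, style_dim = 14, 512  # hypothetical layer count and style size

def mapping_network(z):
    # Stand-in for the learned MLP that maps a latent z to a style vector w.
    return np.tanh(z)

w_source = np.tile(mapping_network(rng.normal(size=style_dim)), (num_layers, 1))
w_dest = np.tile(mapping_network(rng.normal(size=style_dim)), (num_layers, 1))

w_mixed = w_dest.copy()
w_mixed[:4] = w_source[:4]  # copy the "coarse" styles (pose, face shape)
# w_mixed would then drive the synthesis network, one row per layer.
print(w_mixed.shape)  # (14, 512)
```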

Zaid Alyafeai wrote a great post on the latent space of BigGAN, which includes a high-level introduction to GAN image synthesis and fun examples of stuff the community has done with BigGAN. He ends by implementing image interpolation, background hallucination, and class “breeding” in a free online Google Colab notebook. Read it here: BigGanEx: A Dive into the Latent Space of BigGan (Web).
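The interpolation trick at the core of those experiments is surprisingly simple: walk in a straight line between two latent vectors and decode every step along the way. A minimal sketch, with the generator itself omitted (the 128-dimensional latent is an assumption based on BigGAN’s usual setup):

```python
import numpy as np

# Build the latents for a smooth morph between two GAN outputs; feeding
# each one to the generator would yield one frame of the interpolation.
def interpolate(z1, z2, steps=8):
    """Linearly interpolate between two latent vectors."""
    return [(1 - t) * z1 + t * z2 for t in np.linspace(0.0, 1.0, steps)]

rng = np.random.default_rng(42)
z_a, z_b = rng.normal(size=128), rng.normal(size=128)  # two random latents
latents = interpolate(z_a, z_b)
print(len(latents), latents[0].shape)  # 8 (128,)
```

Class “breeding” works along the same lines, if I understood the post correctly: you mix the class embeddings instead of (or along with) the latent vectors.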

Instagram Engineering published a deep dive into the lessons they learned doing machine learning for the main feed and the stories feed. The post touches on a lot of topics that I learned about last semester, like Gaussian Processes and KL divergence, and it surprised me to actually see these things used in production products that serve “over 1 billion users on a regular basis.” Read it here: Lessons Learned at Instagram Stories and Feed Machine Learning (Medium).
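In case KL divergence is new to you: it measures how much one probability distribution diverges from another, and it fits in a few lines of numpy. (The engagement-mix framing below is my own made-up illustration, not how Instagram describes using it.)

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions, in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

predicted = [0.7, 0.2, 0.1]  # a model's predicted click/like/comment mix
observed = [0.6, 0.3, 0.1]   # what users actually did
print(kl_divergence(observed, predicted))  # ~0.029 - small means a good fit
```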

Google AI published two papers on their exploration of quantum neural networks (QNNs). In the first, they present a theoretical QNN model and show that it can be trained to classify images of handwritten digits from the MNIST dataset. In the second, they show that the random initializations used today to train conventional neural nets would not work for QNNs. Google AI’s blog post: Exploring Quantum Neural Networks (Web).

Google also open-sourced TensorFlow Privacy, their library for training machine learning models in a way that preserves the privacy of the training data. As more and more personal data is collected online and used to train ML models, privacy leaks from model weights become a real risk, so seeing differential privacy theory (Wikipedia) implemented in a mainstream AI framework is very encouraging. Check it out on GitHub: tensorflow/privacy (Web).
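For a feel of what differentially private training means under the hood, here’s a rough numpy sketch of the DP-SGD step at the heart of libraries like TensorFlow Privacy: clip each example’s gradient to a fixed norm, then add Gaussian noise calibrated to that norm. The gradients and hyperparameters below are made up, and the real library does a lot more – see its repo for the actual API.

```python
import numpy as np

def dp_sgd_step(per_example_grads, l2_norm_clip=1.0, noise_multiplier=1.1):
    """One differentially private gradient step: clip, sum, noise, average."""
    clipped = [
        g * min(1.0, l2_norm_clip / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads  # bound each example's influence
    ]
    noise = np.random.normal(
        scale=noise_multiplier * l2_norm_clip, size=clipped[0].shape
    )
    return (np.sum(clipped, axis=0) + noise) / len(clipped)

grads = [np.random.normal(size=10) for _ in range(32)]  # one grad per example
print(dp_sgd_step(grads).shape)  # (10,) - use this as the update direction
```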

Tech and Startups 🚀

An article by Max Read at New York Magazine made the rounds on Twitter this week: How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually (Web). A lot of the issues Read brings up, like deepfakes, fake subscribers, and state-sponsored trolls, are both real and important; but what my feed seemed to focus on is that “The numbers are all fking fake, the metrics are bullshit” (Twitter). I think that’s the wrong thing to take away from Read’s article. On a technical level, the only way to get perfect metrics for video views or to detect advanced fake-click botnets is to collect even more, even creepier personal data than many tech companies already do. Is this really the direction in which we want to push Silicon Valley? Although I have a few other issues with Read’s piece (some of the ways he “debunks” tech companies’ statements don’t really hold up), I’d love to hear what you all think of the article.

Speaking of privacy: The New York Times had a big exposé on Facebook’s data-sharing practices with other big tech companies. For example, the report claimed that Netflix and Spotify had read, write, and delete access to users’ Messenger messages. This sounds outrageous, but in reality the integrations existed to power old, since-discontinued features that let users send messages without leaving those apps. Because of that, I recommend reading Will Oremus’ take on this at Slate:

The real problem turned up by the Times’ reporting is that Facebook failed to pull the plug on this type of access until last year, even though many of the integrations had been abandoned years earlier. But that sloppiness, while inexcusable, isn’t the part that made headlines. What it does mean is that every Facebook privacy misstep from here on out is likely to be viewed as more villainous than it really is, including by people who have the power to do something about it.

Read the full pieces here:

Federico Viticci at MacStories published his must-have iOS apps for 2018. My favorites from his list are Working Copy (a Git client) and Pythonista (a Python interpreter). Check it out: My Must-Have iOS Apps, 2018 Edition (Web).

Sony is ramping up production of its 3D smartphone cameras, which use laser dot projections to measure distances to objects up to five meters away from the sensor. Huawei’s next generation of phones will use them, and Sony is hoping that more manufacturers will follow suit. It’d be cool to see what app developers do with this if the sensor reaches high enough market penetration. Yuji Nakamura’s report at Bloomberg: Sony Boosts 3D Camera Output After Interest From Phone Makers (Web).

People in AI 👩‍💻

Kyle Wiggers at VentureBeat interviewed Geoffrey Hinton (co-inventor of backpropagation) and Demis Hassabis (co-founder of DeepMind) on how close we are to Artificial General Intelligence (AGI). Unsurprisingly, like most experts in the field, they think AGI is still a ways away. The interview also touches on reinforcement learning, knowledge transfer, and Hinton’s view that future AI systems will shift more towards unsupervised learning. Read it here: Geoffrey Hinton and Demis Hassabis: AGI is nowhere close to being a reality (Web).

As part of their Overlooked obituary series (Web), the New York Times published a profile of Karen Sparck Jones, a founding mother of computational linguistics. Her 1972 paper “A Statistical Interpretation of Term Specificity and Its Application in Retrieval” introduced inverse document frequency (Wikipedia), which is still at the basis of many NLP systems today – there’s a quick sketch of it at the end of this item. Two of my favorite excerpts:

When she had to bike to a formal dinner, as one often did at Cambridge, she was known to use clothing pegs to pin her dress to the handlebars.

Sparck Jones mentored a generation of researchers, male and female, and came up with a slogan: “Computing is too important to be left to men.”

Sparck Jones passed away in 2007. You can read the full obituary by Nellie Bowles here: Overlooked No More: Karen Sparck Jones, Who Established the Basis for Search Engines (Web).
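And that quick sketch of inverse document frequency: the idea is that a term appearing in few documents gets a high weight and a term appearing everywhere gets a low one, via idf(t) = log(N / df(t)). A tiny made-up corpus makes it concrete:

```python
import math

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "computational linguistics is fun",
]

def idf(term, docs):
    """idf(t) = log(N / df(t)), where df(t) counts documents containing t."""
    df = sum(term in doc.split() for doc in docs)
    return math.log(len(docs) / df)

print(idf("the", docs))          # 0.405 - common word, low weight
print(idf("linguistics", docs))  # 1.099 - rare word, high weight
```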

Cool Things ✨

One of the dozens of decision points in Netflix’s Bandersnatch choose-your-own-adventure film (Netflix)

Have you seen Bandersnatch yet? It’s an interactive movie by Netflix, and my high school friends and I had a great time exploring all its different endings and easter eggs. Peter Rubin at Wired also wrote a great feature on the tech and people that made it happen: How the Surprise New Interactive Black Mirror Came Together (Web).

Through a collaboration with Google Street View, you can now explore online the collection of the National Museum of Brazil in Rio, which burned down in September last year: The Destroyed Collection of the National Museum of Brazil Resurrected Online (Web).

My Stuff 😁

The landing page for Weekly.Cool

I launched my first real product on Friday! It’s called Weekly.Cool, and it’s a paid subscription service that sends you a weekly email digest of the top posts from a subreddit of your choice. (Curious how that works? There’s a simplified sketch below.)

I’ve only got three paying subscribers so far (four if I count myself), so I don’t think I hit quite the right audience on Twitter and ProductHunt. Over the next few weeks I’m going to experiment with different ways of finding that audience.
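For the curious: this isn’t my actual production code, but the core of any subreddit digest boils down to something like this sketch, built on Reddit’s public JSON API:

```python
import requests

def top_posts(subreddit, limit=5):
    """Fetch a subreddit's top posts of the past week via Reddit's JSON API."""
    resp = requests.get(
        f"https://www.reddit.com/r/{subreddit}/top.json",
        params={"t": "week", "limit": limit},
        headers={"User-Agent": "weekly-digest-sketch/0.1"},  # Reddit wants a UA
        timeout=10,
    )
    resp.raise_for_status()
    return [child["data"] for child in resp.json()["data"]["children"]]

# Each post dict has the fields a digest email needs: title, score, url, ...
for post in top_posts("MachineLearning"):
    print(f"{post['score']:>5}  {post['title']}")
```

From there it’s mostly formatting those posts into HTML and handing them to an email service on a weekly schedule.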

Thanks for reading! As always, please let me know what you thought using the buttons below or by sending me a message. If you enjoyed this issue, why not forward it to a friend or share it somewhere? :)