#2: Would you like some dip with your AI chips?
Hi and welcome to the second edition of my newsletter! Based on feedback from the first issue that the ML section had some unexpectedly technical content (thx Santi), I’ve split it up into a section about productized AI (non-technical) and a section about machine learning tech (more technical). Let me know what you think of the change!
Productized Artificial Intelligence 🔌
Connie Chan at Andreessen Horowitz (a16z) wrote an overview of three “AI-first” startups from China. Although I’m not 100% on board with including TikTok on this list, the other two are more interesting: an anonymous chat / dating app called Soul, and an English tutoring app that uses NLP to give detailed pronunciation feedback and ML to personally “tailor questions to be challenging, but not discouraging.” Full post: When AI is the Product: The Rise of AI-Based Consumer Apps (Web)
The competition to build dedicated hardware for efficiently training and running machine learning models is heating up. Google has TPUs, Intel has its Nervana chips, Apple’s iPhone processors have dedicated “neural engines,” and now Amazon has announced its own AI chip: AWS Inferentia (Web)
The irregular heart rhythm detection feature of the fourth-generation Apple Watch launched this week (a few months after the watch itself was released). It took less than a day for someone to use the feature, find out they may suffer from atrial fibrillation, and post about it online. This kind of productized AI, in the hands of millions of users, will save lives:
- ECG app and irregular heart rhythm notification available today on Apple Watch (Web)
- /r/AppleWatch: Heading to a cardiologist… (Reddit)
The Pixel 3 phone is receiving a lot of praise for its camera, which has incredible low-light performance. Google published two blog posts about the computational photography + machine learning approach that enables this (a toy sketch of the core merging idea follows the links):
- Night Sight: Seeing in the Dark on Pixel Phones (Web)
- Learning to Predict Depth on the Pixel 3 Phones (Web)
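As far as the posts describe it, the heart of Night Sight is merging a burst of short exposures: once the frames are aligned, averaging them keeps the signal while the noise cancels out. The real pipeline adds motion-aware alignment and learned processing on top, so the snippet below is only a NumPy toy of the averaging intuition, with made-up numbers:

```python
import numpy as np

# Toy model: 15 noisy captures of the same dimly lit, perfectly aligned scene.
np.random.seed(0)
scene = np.full((4, 4), 0.05)                     # true (dark) pixel values
frames = [scene + np.random.normal(0, 0.02, scene.shape) for _ in range(15)]

single = frames[0]
merged = np.mean(frames, axis=0)    # noise std drops by roughly 1/sqrt(15)

print("single-frame noise:", np.std(single - scene))
print("merged-frame noise:", np.std(merged - scene))
```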
Machine Learning Tech 🎛
Netflix wrote an extended post about how they use Jupyter notebooks to experiment with data, including a bunch of tools they’ve built for scheduling and parameterizing such notebooks (as well as a completely custom React notebook UI!). Since I use notebooks for many classes in my AI coursework, this is super cool to see! Their Medium post: Beyond Interactive: Notebook Innovation at Netflix (Medium)
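If I’m reading the post right, the parameterization piece centers on the open-source papermill library: you tag one cell in a notebook as “parameters,” and papermill injects new values into it at execution time. A minimal example of what that looks like (the notebook names and parameters here are hypothetical):

```python
import papermill as pm

# Execute a notebook headlessly, overriding the values in its cell
# tagged "parameters"; the executed copy (with outputs) is written out.
pm.execute_notebook(
    "train_model.ipynb",                  # input notebook (hypothetical)
    "runs/train_model_2018-12-09.ipynb",  # output notebook (hypothetical)
    parameters={"region": "eu-west-1", "sample_frac": 0.1},
)
```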
Google is working on extending its Open Images dataset to make it more diverse, through their Crowdsource app that lets people all over the world complete AI training tasks, like uploading images. It doesn’t look like Google is paying people for this work, though, which is a shame: because the effort is aimed at traditionally underrepresented groups, it feels a bit exploitative. Google’s blog post: Adding Diversity to Images with Open Images Extended (Web)
Surya Ganguli at Stanford’s Human-Centered AI lab wrote a post about the collaboration between biologists who try to understand natural intelligence and computer scientists who try to implement artificial intelligence, across many AI subfields. It’s a long post, but I found the parts about the modular structure of the human brain, and about active learning through world models, to be super interesting. Full post: The intertwined quest for understanding biological intelligence and creating artificial intelligence (Web)
Also from Stanford: researchers achieved the same performance with a third of the data on 10 different visual tasks using transfer learning (a minimal sketch of the transfer-learning mechanic follows). The work won the best paper award at CVPR 2018, and their landing page is amazing: Taskonomy: Disentangling Task Transfer Learning (Web)
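Taskonomy’s contribution is mapping which tasks transfer well to which others; the basic mechanic it builds on is the standard one of reusing a pretrained encoder and only training a small task-specific head on the new data. A minimal PyTorch sketch of that mechanic (PyTorch is my assumption here, not the paper’s own codebase):

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained encoder and freeze it, so the new task
# only has to fit the small head below, hence needing far less data.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh head for the target task (the class count is a placeholder).
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the head's parameters get updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```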
One of the biggest ML / AI conferences, NeurIPS, was this week. Google and Apple both wrote up what they presented at the conference:
- Google at NeurIPS 2018 (Web)
- Apple at NeurIPS 2018 (Web)
Tech and Startups 🚀
Andrew Chen at a16z posted a great presentation about the near future of consumer startups, through a historical lens. His examples include how the launch of the US postal service enabled the first chain letter; how Michelin was the original content marketer; and how toothpaste became successful through coupons. Watch it here: What’s Next in Consumer Startups (YouTube)
Last year, Elon Musk claimed he could set up a giant Tesla battery in less than 100 days to fix energy problems in South Australia. The battery was built, and now the results are in: South Australia’s big battery slashes $40m from grid control costs in first year (Web)
It’s always good to hear a not-insane perspective on blockchain / crypto. Here’s one: Ethereum’s Vitalik Buterin says IBM’s corporate blockchain is missing the point (Web)
Google is planning to build a censored version of its search engine for China, which would expose user data to China’s big brother government. Many Google employees are not happy about this, and they’ve written an open letter asking execs to put a stop to it. It’s got 735 signatures so far: We are Google employees. Google must drop Dragonfly (Medium)
The Correspondent, the Dutch “#unbreakingnews” startup that’s trying to launch in the US, has raised $1.6 million of its $2.5 million crowdfunding campaign. There are just five days left, and I hope they’ll make it. (I’ve donated twice.) More over at The Correspondent (Web).
Fun Stuff ✨
If you read one thing from this issue, make it this. It’s the story of how Jeff Dean and Sanjay Ghemawat, Google’s only two level-11 engineers, worked together to build some of the huge systems that power the company today, from BigTable to TensorFlow. It also touches on why working in pairs (e.g. pair programming) is so powerful. Read James Somers’s piece here: The Friendship That Made Google Huge (Web)
Like last year, Spotify found some funny trends in their data and turned them into billboard ads. My favorite: “In a year of royal celebrations, let’s also toast the fact that someone made a playlist ‘its the royal wedding tomorrow!!!’ 22 days after the wedding.” More: Spotify’s 2018 data billboards (Twitter)
My Stuff 😁
Here’s the second project I did for my MSc AI: using data from the Nederlandse Spoorwegen (the Dutch national railway), I modeled which cities are best to commute from if you work in Amsterdam but don’t want to pay Amsterdam rent, based on factors like travel time, train crowdedness, and punctuality. (A toy sketch of the scoring approach follows the links below.)
- My Tweet about the project (Twitter)
- Full report (PDF)
- Code (GitHub)
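For a feel of the approach: the ranking boils down to normalizing each factor and combining them with weights. Everything in the snippet below is invented for illustration; the real numbers and weights come from the NS data and are in the report:

```python
# city: (avg travel minutes to Amsterdam, crowdedness 0-1, punctuality 0-1)
# All values below are made-up placeholders, not from the actual report.
cities = {
    "Haarlem": (17, 0.70, 0.93),
    "Utrecht": (27, 0.80, 0.90),
    "Almere":  (22, 0.60, 0.95),
}
weights = {"time": 0.5, "crowd": 0.3, "punct": 0.2}

def score(travel, crowd, punct, max_travel=60):
    """Higher is better: short, quiet, punctual commutes win."""
    return (weights["time"] * (1 - travel / max_travel)
            + weights["crowd"] * (1 - crowd)
            + weights["punct"] * punct)

# Rank cities from best to worst commute score.
for city, factors in sorted(cities.items(), key=lambda kv: -score(*kv[1])):
    print(f"{city}: {score(*factors):.2f}")
```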
Thanks for reading! Please let me know what you thought of this issue using the buttons below, by replying to this email, or by dropping me a message on your preferred platform!
Also! If you have interesting thoughts about any of the stuff I talked about, let me know: I’d like to include some in the next issue!