Microsoft is investing $1 billion in OpenAI.
To raise enough money to pay for the recruiting and computing needed to compete with the likes of DeepMind, much of the OpenAI nonprofit (including its employees) has pivoted to become the for-profit OpenAI LP. The community's reaction, unsurprisingly, has been mixed; in particular, OpenAI's lofty marketing talk about the pre- and post-artificial general intelligence (AGI) periods of the company's future raised some eyebrows.
This "pre-AGI period" is also where the Microsoft investment and partnership come in: Microsoft will become OpenAI's "exclusive cloud provider" and its main partner for productizing the lab's AI research.
[We'll] be working hard together to further extend Microsoft Azure's capabilities in large-scale AI systems. … [We] intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.
This is great for Microsoft's AI cloud (and clout), but I'm not too sure what it says about the future of OpenAI. Since they won't disclose the terms of the investment, it's hard to know how much of the company's original mission, "the creation of beneficial AGI," will fall by the wayside to make room for productization.
(And if it does, is that necessarily a bad thing? After all, productized AI is what this section of Dynamically Typed is all about. I'd love to hear your thoughts, so use that reply button!)
Read more about OpenAI + Microsoft here.
Ben Evans of Andreessen Horowitz wrote a post about the potential of computer vision to touch almost everything. On the back of imaging sensors that have become ridiculously cheap in recent years (because of the efficiency of the smartphone supply chain), Evans argues that "imaging plus ML" will power a lot more AI computing on the edge:
The common thread across all of this is that vision replaces single-purpose inputs, and human mechanical Turks, with a general-purpose input.
I think the former, especially, makes for an interesting thought experiment. Let's take a look at fire detection: in the past, we found out a house was on fire only once a human saw it; then, smoke alarms were invented that started doing that job (and saving lives) automatically; now, imaging plus ML can "see" a fire on a camera feed, without the need for a single-purpose input. Of course, smoke alarms are already cheap and widely available, so they probably won't get replaced by AI cameras.
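To make that concrete, here's a minimal sketch of what such a general-purpose "smoke alarm" could look like: a cheap webcam feed run frame-by-frame through an image classifier. Everything model-related here is an assumption, not a real artifact: `fire_classifier.pt` stands in for a binary fire/no-fire classifier fine-tuned elsewhere, and the 0.9 alarm threshold is an arbitrary placeholder.

```python
# Sketch: a camera as a general-purpose sensor, replacing a single-purpose one.
# Assumes a binary fire/no-fire classifier was fine-tuned elsewhere and saved
# as TorchScript; "fire_classifier.pt" and the 0.9 threshold are hypothetical.
import cv2
import torch
from torchvision import transforms

model = torch.jit.load("fire_classifier.pt").eval()

# Preprocessing should match whatever the (assumed) model was trained with.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

cap = cv2.VideoCapture(0)  # any cheap webcam: the general-purpose input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR frames
    batch = preprocess(rgb).unsqueeze(0)          # add a batch dimension
    with torch.no_grad():
        p_fire = torch.sigmoid(model(batch)).item()
    if p_fire > 0.9:  # hypothetical alarm threshold
        print(f"Possible fire detected (p={p_fire:.2f})")
cap.release()
```

The point isn't this particular model: the same camera and loop could be pointed at any problem that currently needs a dedicated sensor or a human watching, which is exactly Evans' argument.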
But what other specialized sensors can become more affordable or more ubiquitous if they can be replaced by a computer vision model? And what things that a human can monitor now, for which we have no specialized sensors, can we start to track using AI-powered cameras? That's where cheap imaging plus ML will have a huge impact. (Pervasive facial recognition and image censorship, sadly, are obvious immoral examples that are already being put into production.)