Microsoft is investing $1 billion in OpenAI.
To raise the money needed to pay for the recruiting and computing required to compete with the likes of DeepMind, much of the OpenAI nonprofit (and its employees) has pivoted to become the for-profit OpenAI LP. The community's reaction, unsurprisingly, has been mixed; in particular, OpenAI's lofty marketing talk about the pre- and post-artificial general intelligence (AGI) periods of the company's future raised some eyebrows.
This "pre-AGI period" is also where the Microsoft investment and partnership come in: Microsoft will become OpenAI's "exclusive cloud provider" and its main route for productizing the lab's AI research.
[We’ll] be working hard together to further extend Microsoft Azure’s capabilities in large-scale AI systems. … [We] intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.
This is great for Microsoft's AI cloud (and clout), but I'm not so sure what it says about the future of OpenAI. Since they won't disclose the terms of the investment, it's hard to know how much the company's original mission ("the creation of beneficial AGI") will fall by the wayside to make room for productization.
(And if it does, is that necessarily a bad thing? After all, productized AI is what this section of Dynamically Typed is all about. I’d love to hear your thoughts, so use that reply button!)
Read more about OpenAI + Microsoft here:
Ben Evans of Andreessen Horowitz wrote a post about the potential of computer vision to touch almost everything.
On the back of imaging sensors that have become ridiculously cheap in recent years (thanks to the efficiency of the smartphone supply chain), Evans argues that "imaging plus ML" will power a lot more AI computing on the edge:

The common thread across all of this is that vision replaces single-purpose inputs, and human mechanical Turks, with a general-purpose input.
I think the former especially makes for an interesting thought experiment. Take fire detection: in the past, we only found out a house was on fire once a human saw it; then smoke alarms were invented that started doing that job (and saving lives) automatically; now, imaging plus ML can "see" a fire on a camera feed, without the need for a single-purpose input. Of course, smoke alarms are already cheap and widely available, so they probably won't be replaced by AI cameras.
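To make the "general-purpose input" idea concrete, here's a toy sketch of a camera loop standing in for a smoke alarm. Everything here is hypothetical: `looks_like_fire` is just a crude red-channel heuristic playing the role of a real trained vision model, and the "frames" are plain pixel lists rather than an actual camera feed.

```python
# Toy sketch: a camera + "model" replacing a single-purpose sensor.
# looks_like_fire is a stand-in heuristic (red-channel dominance);
# a real system would run a trained vision model on each frame.

def looks_like_fire(frame):
    """frame: list of (r, g, b) pixels, each channel 0-255."""
    if not frame:
        return False
    mean_red = sum(r for r, _, _ in frame) / len(frame)
    mean_blue = sum(b for _, _, b in frame) / len(frame)
    # Fire-like frames are dominated by bright reds and oranges.
    return mean_red > 180 and mean_red > 2 * mean_blue

def monitor(frames):
    """Return the indices of frames the 'model' flags -- the camera
    now plays the role the smoke alarm used to."""
    return [i for i, frame in enumerate(frames) if looks_like_fire(frame)]

normal = [(90, 90, 100)] * 16    # dull indoor scene
burning = [(230, 120, 40)] * 16  # bright orange glow
print(monitor([normal, burning, normal]))  # → [1]
```

The point of the sketch is that `monitor` knows nothing about smoke or heat; swap the classifier and the same camera could watch for a leak, a fall, or an open gate.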
But what other specialized sensors could become more affordable or more ubiquitous if a computer vision model can replace them? And what things that a human can monitor now, for which we have no specialized sensors, could we start to track using AI-powered cameras? That's where cheap imaging plus ML will have a huge impact. (Pervasive facial recognition and image censorship, sadly, are obvious immoral examples that are already being put into production.)