Abstraction and Reasoning at NeurIPS 2020
François Chollet’s NeurIPS talk on abstraction and reasoning, based on his 2019 Measure of Intelligence paper (see DT #26), is a great watch. It covers the shortcut rule (“you get what you optimize for — at the detriment of everything else”); levels of generalization; and which kinds of abstraction he thinks deep learning can and cannot handle (by itself). Chollet’s talk was part of a NeurIPS tutorial that also included a talk on analogical reasoning by Melanie Mitchell and one on mathematical reasoning by Christian Szegedy; this live-tweeted thread by Antonio Vergari summarizes all three talks in bite-size chunks. (RE: the previous link, it’d be super cool to see DeepMind try to tackle the ARC challenge. Maybe someday.)