Dynamically Typed

Methods is Papers with Code's machine learning knowledge graph

Papers with Code’s Methods page for the residual block (cropped).

A few weeks ago, Papers with Code launched Methods, a knowledge graph of hundreds of machine learning concepts:

We are now tracking 730+ building blocks of machine learning: optimizers, activations, attention layers, convolutions and much more! Compare usage over time and explore papers from a new perspective.
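For a concrete example of such a building block, here's a minimal PyTorch sketch of the residual block from the screenshot above (my own illustration of the idea, not the snippet Papers with Code links to):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection: y = ReLU(F(x) + x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the skip connection

# e.g. ResidualBlock(64)(torch.randn(1, 64, 32, 32)) keeps shape (1, 64, 32, 32)
```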

I’ve started using Methods as my go-to reference for many things at work. Sitting at a higher level of abstraction than the documentation for your ML library of choice, it’s an incredibly useful resource for anyone doing ML research or engineering. Each Methods page contains several sections: a description of what the method does, with a link to the paper that introduced it; code snippets of its implementation; a graph of the method’s usage in papers over time; its components, i.e. the other methods it builds on; and a list of similar methods.

I’ve found the last of those sections to be especially handy for answering those hard-to-Google “what’s the name of that other thing that’s kind of like this thing again?” questions. (Also see this Twitter thread by the project’s co-creator Ross Taylor for a few example uses of the other sections.) Methods launched just a month ago, and given how useful it already is, I’m very excited to see how it grows in the future.

One additional feature I’d find useful is the inverse of the components section: I also want to know which methods build on top of the method I’m currently viewing. Another thing I’d like to see is an expansion of the code links to also include TensorFlow snippets, but since Facebook AI Research bought Papers with Code late last year, I’m guessing that keeping these snippets exclusive to (FAIR-controlled) PyTorch may be a strategic decision rather than a technical one.
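On that first wish: if the underlying data is a method-to-components mapping, the "used by" view I'm asking for is just its inverse. Here's a minimal Python sketch with made-up entries (the real Methods data model and API may look nothing like this):

```python
# Hypothetical sketch: invert a method -> components mapping to answer
# "which methods build on top of this one?". The entries below are made up
# for illustration; this is not Papers with Code's actual data model.
from collections import defaultdict

components = {
    "Residual Block": ["Convolution", "Batch Normalization", "ReLU"],
    "ResNet": ["Residual Block", "Max Pooling"],
    "DenseNet": ["Convolution", "Batch Normalization", "ReLU"],
}

used_by = defaultdict(list)  # component -> methods that build on it
for method, parts in components.items():
    for part in parts:
        used_by[part].append(method)

print(used_by["Residual Block"])  # ['ResNet']
print(used_by["Convolution"])     # ['Residual Block', 'DenseNet']
```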