Notes on: Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., & Vandergheynst, P. (2017): Geometric deep learning: going beyond euclidean data

Motivation

  • One of the key reasons for the success of deep learning is that it exploits stationarity and compositionality through local statistics
  • CNNs on images exploit stationarity due to shift-invariance of image statistics, locality due to local connectivity, and compositionality stemming from the multi-resolution structure of the grid
  • These properties are exploited by CNNs:
    • alternating convolution and downsampling (pooling)
    • convolutions have a two-fold effect:
      1. They extract local features that are shared across the image domain, greatly reducing the number of parameters in the network
      2. The convolutional architecture itself imposes priors about the data, which turn out to be very suitable for images
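The parameter reduction from weight sharing can be made concrete with a back-of-the-envelope comparison (hypothetical layer sizes, chosen only for illustration):

```python
# Parameter count: fully connected vs. convolutional layer on a 32x32 RGB image.
# Illustrative sketch -- layer sizes are arbitrary, not from the paper.

in_pixels = 32 * 32 * 3                 # input image, flattened
out_units = 32 * 32 * 16                # dense layer producing 16 "feature maps"
dense_params = in_pixels * out_units    # one weight per (input, output) pair

k, c_in, c_out = 3, 3, 16               # 3x3 kernel, 3 input channels, 16 filters
conv_params = k * k * c_in * c_out + c_out  # shared kernel weights + biases

print(dense_params)  # 50_331_648
print(conv_params)   # 448
```

The convolutional layer produces the same number of output locations with roughly five orders of magnitude fewer parameters, precisely because the same local filter is reused at every position.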

Geometric learning problems

Two classes of geometric learning problems:

  • Characterizing the structure of the data
  • Analyzing functions defined on a given non-Euclidean domain

Structure of the domain

  • Assume we are given a set of data points with an underlying lower-dimensional structure, embedded in a high-dimensional Euclidean space
  • Recovering the lower-dimensional structure is often referred to as manifold learning

Manifold learning / non-linear dimensionality reduction

Many methods follow the recipe:

  1. Construct a representation of the local affinity / similarity of the data points (typically a sparsely connected graph)
  2. Data points are embedded into a low-dimensional space trying to preserve some criterion of the original affinity
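The two-step recipe can be sketched with Laplacian eigenmaps, one of the spectral methods in this family (a minimal NumPy sketch; the kNN graph construction and the noisy curve dataset are illustrative assumptions, not from the paper):

```python
import numpy as np

def laplacian_eigenmap(X, k=5, dim=2):
    """Step 1: sparse kNN affinity graph.
    Step 2: spectral embedding that preserves local affinities."""
    n = X.shape[0]
    # Step 1: pairwise distances -> symmetric kNN adjacency (binary weights)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]   # skip the point itself at index 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize the graph
    # Step 2: embed via the smallest nontrivial eigenvectors of the Laplacian
    L = np.diag(W.sum(axis=1)) - W         # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]              # drop the constant eigenvector

rng = np.random.default_rng(0)
t = rng.uniform(0, 4 * np.pi, 200)
# a noisy 1-D curve embedded in 3-D Euclidean space
X = np.c_[np.cos(t), np.sin(t), t / 10] + 0.01 * rng.normal(size=(200, 3))
Y = laplacian_eigenmap(X, k=8, dim=2)
print(Y.shape)  # (200, 2)
```

Here the "criterion of the original affinity" being preserved is that points connected in the kNN graph stay close in the embedding, which is exactly what minimizing the Laplacian quadratic form over the eigenvectors achieves.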

Data on a domain

Terminology

  • Manifold learning / non-linear dimensionality reduction: recovering low-dimensional structure embedded in a higher-dimensional Euclidean space