Deep Parametric Continuous Convolutional Neural Network

Paper: "Deep Parametric Continuous Convolutional Neural Networks" by Shenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, and Raquel Urtasun.

Parametric Continuous Convolution is a learnable operator that operates over non-grid structured data and exploits parameterized kernel functions that span the full continuous vector space. It can handle arbitrary data structures as long as their support relationship is computable.

The continuous convolution operator is based on Monte Carlo integration: given functions f and g and a finite number of input points y_i sampled from the domain, the convolution at any arbitrary point x is approximated as:
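A reconstruction of that approximation in the notation used here (the paper's exact symbols may differ slightly):

h(x) = \int f(y)\, g(x - y)\, dy \;\approx\; \frac{1}{N} \sum_{i=1}^{N} f(y_i)\, g(x - y_i)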

But now, how do we construct the continuous convolution kernel function g? In standard convolution, g is parameterized by assigning a value (a kernel weight) to each point in the support domain. Such a parameterization is infeasible for continuous convolution, since the function g would be defined over an infinite number of points.

The solution is to model g as a parametric continuous function, using a multi-layer perceptron (MLP) as the approximator, because MLPs are expressive and capable of approximating continuous functions:
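In other words, the kernel is itself a small network evaluated on the offset z = y - x (a sketch of the form implied above):

g(z; \Theta) = \mathrm{MLP}(z; \Theta)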

The kernel function g(z,Θ) spans the full continuous support domain while remaining parameterized by a finite number of parameters.

Parametric Continuous Convolution Layer:

The input and output points of a parametric continuous convolution layer can be different. The input of each convolution layer has three parts:

the input feature vectors associated with the support points,

the locations of the support points, S = {Yj},

the locations of the output points, O = {Xi}.

For each layer, we evaluate the kernel function g(Yj - Xi; Θ) for all Yj in S and all Xi in O. Finally, the output feature is calculated as:
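Written out for output point i and output channel k, the expression takes the following form (a reconstruction consistent with the definitions below, where f_{d,j} is channel d of the input feature at support point Y_j, d runs over the F input channels, and k over the O output channels):

h_{k,i} = \sum_{d=1}^{F} \sum_{j=1}^{N} g_{d,k}(Y_j - X_i; \Theta)\, f_{d,j}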

where,

N : number of input points

M: number of output points

D: dimensionality of the support domain

F: Input feature dimension

O: Output feature dimension

The network takes the input features and their associated positions in the support domain as input. Following standard CNN architectures, we can add batch normalization, non-linearities, and residual connections between layers, which the authors found critical for convergence. Pooling can be employed over the support domain to aggregate information.

All of the blocks are differentiable, so the network can be trained end to end through backpropagation.
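To make the per-layer computation concrete, here is a minimal NumPy sketch of a single parametric continuous convolution layer. It is an illustration under assumptions, not the authors' implementation: the two-layer tanh kernel MLP, the shapes, and the names (mlp_kernel, parametric_continuous_conv) are made up for this example, and batch normalization, residual connections, and pooling are omitted.

```python
import numpy as np

def mlp_kernel(z, W1, b1, W2, b2):
    # Toy kernel g(z; Theta): maps a D-dim offset to F*O kernel weights.
    # The two-layer tanh form and the hidden size are assumptions.
    h = np.tanh(z @ W1 + b1)        # (..., hidden)
    return h @ W2 + b2              # (..., F * O)

def parametric_continuous_conv(feats, support_pts, output_pts, params, F, O):
    # feats:       (N, F) input features at the support points Y_j
    # support_pts: (N, D) locations Y_j in the support domain
    # output_pts:  (M, D) locations X_i where the output is evaluated
    # returns:     (M, O) output features h
    W1, b1, W2, b2 = params
    M = output_pts.shape[0]
    N = support_pts.shape[0]
    # Offsets Y_j - X_i for every (output, support) pair: (M, N, D)
    offsets = support_pts[None, :, :] - output_pts[:, None, :]
    # Evaluate the kernel MLP on every offset: (M, N, F, O)
    g = mlp_kernel(offsets, W1, b1, W2, b2).reshape(M, N, F, O)
    # Sum over support points j and input channels d
    return np.einsum('mnfo,nf->mo', g, feats)

# Toy usage: 64 support points in 3-D, 16 query points, 8 -> 4 channels.
rng = np.random.default_rng(0)
D, F, O, hidden = 3, 8, 4, 32
params = (rng.normal(size=(D, hidden)), np.zeros(hidden),
          rng.normal(size=(hidden, F * O)), np.zeros(F * O))
h = parametric_continuous_conv(rng.normal(size=(64, F)),
                               rng.normal(size=(64, D)),
                               rng.normal(size=(16, D)),
                               params, F, O)
print(h.shape)  # (16, 4)
```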

Locality Enforcing Continuous Convolution

The standard convolution kernel has a limited size in order to preserve locality. Similarly, locality can be enforced in parametric continuous convolution by constraining the influence of the function g to points close to x:
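One way to write that constrained form, reusing the notation above (a sketch rather than the paper's exact equation):

h(x) = \sum_{j} f(y_j)\, g(x - y_j; \Theta)\, w(x, y_j)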

where w is the modulating window function, which keeps non-zero kernel values only for the K-nearest neighbors (kNN) of x or for points within a fixed radius (ball point).
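As an illustration, a minimal NumPy sketch of a binary kNN window; the function name knn_window and the hard 0/1 weighting are assumptions for this example.

```python
import numpy as np

def knn_window(output_pts, support_pts, k):
    # Binary modulating window w(X_i, Y_j): 1 for the k nearest support
    # points of each output point, 0 elsewhere.
    # output_pts: (M, D), support_pts: (N, D) -> w: (M, N)
    d2 = ((output_pts[:, None, :] - support_pts[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]      # indices of the k closest Y_j
    w = np.zeros_like(d2)
    np.put_along_axis(w, nearest, 1.0, axis=1)
    return w
```

The resulting (M, N) mask could then be multiplied into the kernel values before summing over support points, e.g. np.einsum('mnfo,nf,mn->mo', g, feats, w) in the layer sketch above.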
