Kernel Trick - Applications

Applications

The kernel trick has been applied to several kinds of algorithms in machine learning and statistics, including:

  • Perceptrons
  • Support vector machines
  • Principal component analysis
  • Canonical correlation analysis
  • Fisher's linear discriminant analysis
  • Cluster analysis
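As an illustration of how the first algorithm on this list can be kernelized, the following is a minimal sketch of a kernel perceptron in Python. The toy XOR-style dataset, the kernel parameters, and the function names are illustrative choices, not from the source; the point is that the algorithm only ever touches the data through kernel evaluations.

```python
def poly_kernel(x, y, c=1.0, d=2):
    # degree-d polynomial kernel, evaluated entirely in the original input space
    return (sum(a * b for a, b in zip(x, y)) + c) ** d

def kernel_perceptron(X, y, kernel, epochs=10):
    """Kernelized perceptron: instead of an explicit weight vector in the
    feature space, it keeps a mistake count alpha[i] per training point."""
    alpha = [0] * len(X)
    for _ in range(epochs):
        for j, (xj, yj) in enumerate(zip(X, y)):
            f = sum(a * yi * kernel(xi, xj) for a, yi, xi in zip(alpha, y, X))
            if f * yj <= 0:          # misclassified (or on the boundary)
                alpha[j] += 1
    return alpha

def predict(alpha, X, y, kernel, x):
    f = sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, X))
    return 1 if f > 0 else -1

# XOR-style toy data: not linearly separable in R^2, but separable
# under the degree-2 polynomial kernel
X = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
y = [1, 1, -1, -1]
alpha = kernel_perceptron(X, y, poly_kernel)
preds = [predict(alpha, X, y, poly_kernel, x) for x in X]
assert preds == y  # all four points classified correctly
```

The same substitution (replace every inner product with a kernel evaluation) is what kernelizes the other algorithms in the list.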

Commonly used kernels in such algorithms include the RBF kernel and the polynomial kernel. The polynomial kernel represents a mapping of vectors in \mathbb{R}^n into a much richer feature space over degree-d polynomials of the original variables:

K(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^\top \mathbf{y} + c)^d

where c \geq 0 is a constant trading off the influence of higher-order versus lower-order terms in the polynomial. For d = 2, this is the inner product in a feature space induced by the mapping


\varphi(x) = \langle x_n^2, \ldots, x_1^2, \sqrt{2} x_n x_{n-1}, \ldots, \sqrt{2} x_n x_1, \sqrt{2} x_{n-1} x_{n-2}, \ldots, \sqrt{2} x_{n-1} x_{1}, \ldots, \sqrt{2} x_{2} x_{1}, \sqrt{2c} x_n, \ldots, \sqrt{2c} x_1, c \rangle
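This identity can be checked numerically. The sketch below (with illustrative values for x, y, and c) builds the explicit feature map \varphi from the components listed above and compares \langle \varphi(x), \varphi(y) \rangle against the kernel (x^\top y + c)^2 computed directly in the original space:

```python
import math

def phi(x, c):
    """Explicit degree-2 feature map for the polynomial kernel (x.y + c)^2.
    Component order follows the mapping above: squares, sqrt(2) cross terms,
    sqrt(2c) linear terms, and the constant c."""
    n = len(x)
    feats = [x[i] ** 2 for i in range(n)]
    feats += [math.sqrt(2) * x[i] * x[j] for i in range(n) for j in range(i)]
    feats += [math.sqrt(2 * c) * x[i] for i in range(n)]
    feats.append(c)
    return feats

def poly_kernel(x, y, c):
    # degree-2 polynomial kernel, computed directly in the original space
    return (sum(a * b for a, b in zip(x, y)) + c) ** 2

# illustrative inputs
x, y, c = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0], 1.0
lhs = sum(a * b for a, b in zip(phi(x, c), phi(y, c)))  # inner product in feature space
rhs = poly_kernel(x, y, c)                              # kernel in input space
assert abs(lhs - rhs) < 1e-9
```

Expanding (x^\top y + c)^2 term by term recovers exactly the squared, cross, linear, and constant components of \varphi, which is why the two sides agree.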

The kernel trick here lies in working in a \binom{n+2}{2}-dimensional space, without ever explicitly transforming the original data points into that space, but instead relying on algorithms that only need to compute inner products within that space, which are identical to K(\mathbf{x}, \mathbf{y}) = \langle \varphi(\mathbf{x}), \varphi(\mathbf{y}) \rangle and can thus cheaply be computed in the original space using only n + 1 multiplications.
