A supervised learning algorithm for linear classification that aims to find a hyperplane decision boundary maximizing the margin to the data.
Specifically, SVM aims to choose the hyperplane so that the distance from the hyperplane to the nearest data point on each side is maximized. These nearest points are called support vectors because they “support” the hyperplane.
The distance between the two margin hyperplanes is $\frac{2}{\|w\|}$. So, to maximize the margin, $\|w\|$ needs to be minimized. This is why objective functions for SVMs typically include a regularizer. A typical training objective would be:

$$\min_{w,\, b}\ \lambda \|w\|^2 + \frac{1}{n} \sum_{i=1}^{n} \max\left(0,\ 1 - y_i (w \cdot x_i - b)\right)$$
where $\max\left(0,\ 1 - y_i (w \cdot x_i - b)\right)$ is the hinge loss. Hinge loss is ideal for support vector machines because it penalizes any point whose margin $y_i (w \cdot x_i - b)$ falls below 1, which is the same as the margin maximization objective here.
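As a concrete illustration, the objective above can be minimized with plain subgradient descent (in the spirit of Pegasos, though this is a simplified sketch, not the exact algorithm: it uses a fixed learning rate rather than Pegasos's decreasing step size, and the dataset and hyperparameters below are invented for the example):

```python
import numpy as np

def hinge_objective(w, b, X, y, lam):
    # Regularized hinge loss: lam * ||w||^2 + mean(max(0, 1 - y * (w.x - b)))
    margins = y * (X @ w - b)
    return lam * np.dot(w, w) + np.mean(np.maximum(0.0, 1.0 - margins))

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    # Subgradient descent on the objective above.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w - b)
            grad_w = 2.0 * lam * w          # subgradient of the regularizer
            grad_b = 0.0
            if margin < 1.0:                # hinge is active for this point
                grad_w -= y[i] * X[i]
                grad_b += y[i]
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b

# Toy linearly separable data: two Gaussian clusters labeled +1 / -1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])

w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w - b) == y)
```

On separable data like this, the learned hyperplane should classify every point correctly while the regularizer keeps $\|w\|$ small, i.e. the margin wide.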
See also: Pegasos Algorithm