Abstract (EN):
We introduce two formulations for training support vector machines, based on the L-1 and L-infinity norms instead of the currently used L-2 norm, maximising the margin between the separating hyperplane and each data set using L-1 and L-infinity distances. We exploit the geometrical properties of these different norms and discuss what kind of results should be expected for them. Mathematical programming formulations of the corresponding linear problems for the L-1 and L-infinity norms are also provided, for both the separable and non-separable cases. We report results obtained on some standard benchmark problems, which confirm that the performance of all the formulations is similar. As expected, the CPU time required for the machines solvable with linear programming is much shorter.
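For orientation, the following is a minimal sketch of the kind of linear program such a formulation typically reduces to, here for the soft-margin case in which maximising the L-infinity margin amounts to minimising the L-1 norm of the weight vector w (its dual norm); the variable names u, v, xi and the penalty parameter C are illustrative and not taken from the paper.

\[
\begin{aligned}
\min_{u,\,v,\,b,\,\xi}\;\; & \sum_{j}(u_j + v_j) \;+\; C\sum_{i}\xi_i \\
\text{s.t.}\;\; & y_i\big((u - v)^{\top}x_i + b\big) \;\ge\; 1 - \xi_i, \qquad i = 1,\dots,m, \\
& u \ge 0,\quad v \ge 0,\quad \xi \ge 0,
\end{aligned}
\]

where the weight vector is recovered as \(w = u - v\), so that \(\sum_j(u_j + v_j)\) acts as a linear surrogate for \(\|w\|_1\). The analogous L-infinity-norm formulation (arising from the L-1 distance) instead minimises a single bound \(t\) subject to \(-t \le w_j \le t\); both are solvable with standard linear programming, consistent with the shorter CPU times reported in the abstract.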
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
10