TY - JOUR
T1 - Multi-kernel regularized classifiers
AU - Wu, Qiang
AU - Ying, Yiming
AU - Zhou, Ding-Xuan
PY - 2007/2
Y1 - 2007/2
AB - A family of classification algorithms generated from Tikhonov regularization schemes is considered. They involve multi-kernel spaces and general convex loss functions. Our main purpose is to provide satisfactory estimates for the excess misclassification error of these multi-kernel regularized classifiers when the loss functions attain the value zero. The error analysis consists of two parts: regularization error and sample error. Allowing multiple kernels in the algorithm improves the regularization error and the approximation error, which is one advantage of the multi-kernel setting. For a general loss function, we show how to bound the regularization error by the approximation in some weighted L^q spaces. For the sample error, we use a projection operator. The projection, in connection with the decay of the regularization error, enables us to improve convergence rates in the literature even for one-kernel schemes and special loss functions: the least squares loss and the hinge loss for support vector machine soft margin classifiers. Existence of a solution to the optimization problem for the regularization scheme associated with multi-kernels is verified when the kernel functions are continuous with respect to the index set. Concrete examples, including Gaussian kernels with flexible variances and probability distributions satisfying certain noise conditions, are used to illustrate the general theory. © 2006 Elsevier Inc. All rights reserved.
KW - Classification algorithm
KW - Convex loss function
KW - Misclassification error
KW - Multi-kernel regularization scheme
KW - Regularization error and sample error
UR - http://www.scopus.com/inward/record.url?scp=33846804061&partnerID=8YFLogxK
DO - 10.1016/j.jco.2006.06.007
M3 - RGC 21 - Publication in refereed journal
SN - 0885-064X
VL - 23
SP - 108
EP - 134
JO - Journal of Complexity
JF - Journal of Complexity
IS - 1
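N1 - A minimal LaTeX sketch of the scheme described in the abstract; the notation (sample z, loss V, kernel set \mathcal{K}, parameter \lambda, projection \pi) is assumed here for illustration and is not taken verbatim from the paper. Given a sample z = \{(x_i, y_i)\}_{i=1}^{m}, a multi-kernel Tikhonov regularization scheme of the kind considered selects
     f_z = \arg\min_{K \in \mathcal{K}} \, \min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} V\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_K^2 \right\},
     and the projection operator truncates to the label range, \pi(f)(x) = \max\{-1, \min\{1, f(x)\}\}. The excess misclassification error of \operatorname{sgn}(\pi(f_z)) is then split into a sample-error term and a regularization-error term of the form
     \mathcal{D}(\lambda) = \inf_{K \in \mathcal{K}} \inf_{f \in \mathcal{H}_K} \left\{ \mathcal{E}^V(f) - \mathcal{E}^V(f_\rho^V) + \lambda \|f\|_K^2 \right\},
     where \mathcal{E}^V denotes the generalization error for the loss V and f_\rho^V its minimizer. Taking the infimum over the kernel set \mathcal{K} is what allows the multi-kernel setting to shrink \mathcal{D}(\lambda) relative to a one-kernel scheme.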
ER -