Abstract
A novel classification method using ℓ2,1-norm based regression is proposed in this paper. The ℓ2,1-norm based loss function is robust to outliers and large variations in the given data, and the ℓ2,1-norm regularization term selects correlated samples across the whole training set with grouped sparsity. A probabilistic interpretation under the multiple task learning framework provides a theoretical foundation for the optimal solution. A complexity analysis of the proposed classification algorithm is also presented. Several benchmark data sets, including facial images and gene expression data, are used to evaluate the effectiveness of the proposed algorithm; the results show competitive performance, in particular better than that of methods using a dummy matrix as the response variables. This finding is useful for selecting appropriate response variables in classification-oriented regression models. © 2012 Elsevier Ltd. All rights reserved.
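As a minimal illustration of the norm the abstract refers to (not the authors' implementation): the ℓ2,1-norm of a matrix is the sum of the Euclidean norms of its rows. Applied to a residual matrix it penalizes outlier samples less severely than the squared Frobenius norm, and applied as a regularizer it drives whole rows to zero, giving the grouped sparsity described above. The function name `l21_norm` and the example matrix are illustrative choices.

```python
import numpy as np

def l21_norm(W):
    """ℓ2,1-norm: sum of the ℓ2-norms of the rows of W."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

# Example: rows with norms 5, 0, and 1 — the zero row is "selected out",
# the kind of row-wise sparsity the regularizer encourages.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(W))  # 6.0
```

Because each row contributes its (unsquared) length, a single sample with a large residual adds linearly rather than quadratically to the loss, which is the source of the robustness to outliers claimed above.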
| Original language | English |
|---|---|
| Pages (from-to) | 2708-2718 |
| Journal | Pattern Recognition |
| Volume | 45 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - Jul 2012 |
Research Keywords
- ℓ 2,1-norm
- Dummy variables
- Multiple task learning
- Nearest subspace
- Sparsity regularization