Learning coordinate covariances via gradients

Abstract
We introduce an algorithm that learns gradients from samples in the supervised learning framework. An error analysis is given for the convergence of the gradient estimated by the algorithm to the true gradient. The utility of the algorithm for variable selection and for determining covariance between variables is illustrated on simulated data and on two gene expression data sets. For square loss we provide a very efficient implementation with respect to both memory and time.
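To give a flavor of the idea, here is a much-simplified sketch, not the paper's RKHS algorithm: instead of learning a gradient *function* by Tikhonov regularization in a reproducing kernel Hilbert space, it estimates a single global gradient vector `g` by regularized, locally weighted least squares over pairwise first-order differences, then ranks coordinates by `|g_k|` for variable selection. All names, the bandwidth heuristic, and the regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: only the first 2 of 10 coordinates are informative.
n, p = 100, 10
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

# First-order differences over all pairs; a Gaussian weight keeps the
# Taylor approximation y_j - y_i ~ g . (x_j - x_i) local to nearby pairs.
i, j = np.triu_indices(n, k=1)
dX = X[j] - X[i]                      # (m, p) coordinate differences
dy = y[j] - y[i]                      # (m,)  response differences
dist2 = np.sum(dX**2, axis=1)
w = np.exp(-dist2 / np.median(dist2))  # median bandwidth heuristic (assumed)

# Tikhonov-regularized weighted least squares for one gradient vector g:
#   minimize  sum_ij w_ij (dy_ij - g . dX_ij)^2 + lam * ||g||^2
lam = 1e-3
A = (dX * w[:, None]).T @ dX + lam * np.eye(p)
b = (dX * w[:, None]).T @ dy
g = np.linalg.solve(A, b)

# Variable selection: rank coordinates by estimated gradient magnitude.
ranked = np.argsort(-np.abs(g))
print(ranked[:2])  # the two informative coordinates should rank first
```

The outer product `g g^T` of such estimates plays the role of the gradient outer-product matrix whose entries reflect covariation between coordinates; the paper's method instead estimates a vector-valued function of the input point, which this constant-`g` sketch collapses to a single average.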
| Original language | English |
|---|---|
| Pages (from-to) | 519-549 |
| Journal | Journal of Machine Learning Research |
| Volume | 7 |
| Publication status | Published - Mar 2006 |
Research Keywords
- Generalization bounds
- Reproducing kernel Hilbert space
- Tikhonov regularization
- Variable selection