Error Bound Stability in Learning Theory
Description

The proposed project will consider a large class of error bounds developed in the context of statistical learning theory that are expressed in terms of suitable functionals of the target function (e.g., its norm in a reproducing kernel Hilbert space or another function space). These bounds are unstable in the sense that a small perturbation of the target function (relative to the natural L2 norm) can induce an arbitrarily large increase in the relevant functional and render the error bound vacuous. A recent result shows how to recover stability for two important classes of problems using a suitable version of Fano's inequality: the first class covers problems in which the output samples are binary; the second covers general linear regularization filters.

The main purpose of the project is to develop mathematical techniques that recover stability for much more general learning algorithms using information-theoretic results, and to apply these results to various error bounds available in the literature. A further purpose of the project is to develop computational techniques for estimating the "stabilized" complexity functionals from empirical samples, and to implement these algorithms on synthetic and real datasets.
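The instability described above can be illustrated numerically. The sketch below (an illustration, not part of the project itself) uses the H1 seminorm as a stand-in for an RKHS-type complexity functional: the perturbation eps*sin(k*x) has an L2 norm that is small and independent of the frequency k, while its derivative norm grows linearly in k, so the complexity functional can be inflated arbitrarily by an L2-small perturbation.

```python
import numpy as np

# Grid on [0, 2*pi]; fine enough to resolve the highest frequency used below.
x = np.linspace(0.0, 2.0 * np.pi, 20001)

def l2_norm(f):
    """Trapezoid-rule approximation of the L2 norm of f on the grid x."""
    return np.sqrt(np.sum(0.5 * (f[:-1] ** 2 + f[1:] ** 2) * np.diff(x)))

eps = 1e-3  # amplitude of the perturbation (small in L2)

for k in (10, 100, 1000):
    g = eps * np.sin(k * x)        # perturbation of the target function
    dg = eps * k * np.cos(k * x)   # its derivative (enters the H1 seminorm)
    # L2 norm of g is ~ eps*sqrt(pi), independent of k,
    # while the derivative norm is ~ eps*k*sqrt(pi) and grows with k.
    print(f"k={k:5d}  ||g||_L2 = {l2_norm(g):.4f}  ||g'||_L2 = {l2_norm(dg):.2f}")
```

This mirrors the statement in the description: a bound expressed through a smoothness functional of the target can be made vacuous by a perturbation that is invisible in the L2 metric.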
Effective start/end date: 1/05/10 → 30/06/10