Abstract
A surrogate that predicts the performance of hyperparameter configurations is a key component of sequential model-based hyperparameter optimization. In practical applications, evaluating a configuration can be so costly that the surrogate is expected to return an optimal configuration in as few trials as possible. Observing that human experts draw on their experience with a machine learning model by trying configurations that once performed well on other datasets, we build a trial-efficient surrogate by transferring meta-knowledge learned from historical trials on other datasets. We propose an end-to-end surrogate named Transfer Neural Processes (TNP), which learns a comprehensive set of meta-knowledge, including the parameters of historical surrogates, historical trials, and initial configurations for other datasets. Experiments on a large collection of OpenML datasets and three computer vision datasets demonstrate that the proposed algorithm achieves state-of-the-art performance with at least one order of magnitude fewer trials. © 2021 by the author(s).
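For context, the sketch below illustrates the sequential model-based optimization loop that such a surrogate plugs into, with configurations transferred from other datasets used as warm-start trials. This is a minimal illustration under stated assumptions, not the paper's TNP method: the Gaussian-process surrogate, the `evaluate_config` objective, and the `transferred_initial_configs` list are hypothetical stand-ins.

```python
# Minimal sketch of sequential model-based optimization (SMBO) with a
# Gaussian-process surrogate and an expected-improvement acquisition.
# NOTE: everything here is an illustrative assumption, not the TNP surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def evaluate_config(x):
    # Hypothetical expensive trial: validation error of a model trained
    # with hyperparameter vector x (here a cheap synthetic objective).
    return float(np.sum((x - 0.3) ** 2))

# Meta-knowledge analogue: configurations that once performed well on
# other datasets seed the search instead of purely random points.
transferred_initial_configs = [np.array([0.25, 0.35]), np.array([0.5, 0.1])]

X = list(transferred_initial_configs)
y = [evaluate_config(x) for x in X]

for trial in range(20):
    # Fit the surrogate on all trials observed so far.
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), np.array(y))
    candidates = rng.random((256, 2))  # random candidates in [0, 1]^2
    mu, sigma = gp.predict(candidates, return_std=True)
    best = min(y)
    # Expected improvement for minimization; epsilon guards division by zero.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    X.append(x_next)
    y.append(evaluate_config(x_next))

print(f"best configuration: {X[int(np.argmin(y))]}, error: {min(y):.4f}")
```

In this loop the surrogate replaces most real trials with cheap predictions; TNP's contribution, per the abstract, is to make that surrogate trial-efficient by transferring historical surrogates, trials, and initial configurations rather than starting from scratch.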
Original language | English |
---|---|
Title of host publication | Proceedings of the 38th International Conference on Machine Learning |
Editors | Marina Meila, Tong Zhang |
Publisher | ML Research Press |
Pages | 11058-11067 |
ISBN (Print) | 9781713845065 |
Publication status | Published - Jul 2021 |
Event | 38th International Conference on Machine Learning (ICML 2021) - Virtual |
Duration | 18 Jul 2021 → 24 Jul 2021 |
Internet addresses | https://icml.cc/virtual/2021/index.html, https://proceedings.mlr.press/v139/ |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 139 |
ISSN (Print) | 2640-3498 |
Conference
Conference | 38th International Conference on Machine Learning (ICML 2021) |
---|---|
Period | 18/07/21 → 24/07/21 |
Internet address | https://icml.cc/virtual/2021/index.html |