Nowadays, emerging cloud Machine-Learning-as-a-Service (MLaaS) platforms, driven by leading companies like Google and Amazon, have established the foundation for many real-world Internet applications. With the rise of this ML marketplace, how to properly ensure data usage, transfer, and rigorous protection has attracted wide public attention. Recent studies have shown that ensuring MLaaS security and privacy is particularly difficult, and today's understanding of these problems is still underdeveloped. First, most prior studies rest on simplified threat assumptions. They often overlook how significantly background information (i.e., readily available open-source models or public benchmark datasets) and compound attacks (e.g., model extraction followed by membership inference) can lower the bar for attacks against existing MLaaS in practice, preventing us from approaching the real upper bound of potential threats. Second, existing countermeasures are likely to become ineffective on practical MLaaS platforms, because existing threat models can no longer capture the capabilities of real-world adversaries. Even solutions with strong privacy guarantees, such as differential privacy, have obvious shortcomings, such as significant model accuracy decay, and unexpected side effects, such as higher model bias. Moreover, most countermeasures are incompatible with existing cloud MLaaS platforms. These observations demand new defense mechanisms, especially for real-world cloud MLaaS platforms. In this project, we propose to advance the frontier of MLaaS security and privacy, with a focus on practical exploitation attacks and effective countermeasures.
Our research thrusts include: 1) investigating efficient exploitation attacks on black-box MLaaS in practice, including new model extraction and adversarial evasion attacks in real-world settings; 2) comprehensively analyzing unexpected information leakage from MLaaS, covering both group-level and record-level privacy breaches; 3) investigating practical and effective solutions for hardening MLaaS security and privacy, including both output perturbation and parameter perturbation defenses. Our results will contribute new insights into the practical threats to MLaaS, and benefit all deep learning applications involving sensitive or strongly regulated data.
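To illustrate the output-perturbation family of defenses named in thrust 3, the sketch below adds Laplace noise to a model's confidence scores before they are returned to the client, then renormalizes them into a valid distribution. This is only a minimal illustration of the general idea, not the proposal's actual mechanism; the function name, the Laplace noise choice, and the `epsilon` parameter are illustrative assumptions.

```python
import numpy as np

def perturb_scores(scores, epsilon=1.0, seed=None):
    """Illustrative output-perturbation sketch (not the proposal's mechanism).

    Adds Laplace noise of scale 1/epsilon to each confidence score,
    then clips to positive values and renormalizes so the perturbed
    output is still a probability distribution. Smaller epsilon means
    more noise, i.e., less information leaked per query.
    """
    rng = np.random.default_rng(seed)
    noisy = scores + rng.laplace(scale=1.0 / epsilon, size=scores.shape)
    noisy = np.clip(noisy, 1e-9, None)   # keep scores positive
    return noisy / noisy.sum()           # renormalize to sum to 1

# Example: perturb the softmax output of a 3-class classifier.
scores = np.array([0.7, 0.2, 0.1])
noisy = perturb_scores(scores, epsilon=5.0, seed=0)
```

The design tension this sketch exposes is exactly the one the proposal highlights: larger noise scales better obscure membership signals in the returned scores, but also degrade the utility of the predictions that legitimate clients receive.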