Semi-Supervised Learning (SSL) has proven highly effective at boosting the performance of classification models with the aid of large amounts of unlabeled data. Recently, regularizing the classifier with adversarial examples has emerged as an effective technique for semi-supervised learning. Existing methods assume that adversarial examples are generated by pixel-wise perturbations of the original samples. However, other types of adversarial examples, e.g., those produced by spatial transformations, should also be useful for improving the robustness of the classifier. In this paper, we propose a generalized framework based on adversarial networks that can generate various types of adversarial examples. Our model consists of two modules trained in an adversarial process: a generator that maps original samples to adversarial examples capable of fooling the classifier, and a classifier that tries to classify the original samples and the adversarial examples consistently. We evaluate our model on several datasets, and the experimental results show that it outperforms state-of-the-art methods for semi-supervised learning. The experiments also demonstrate that our model can generate adversarial examples with various types of perturbation, such as local spatial transformation, color transformation, and pixel-wise perturbation. Moreover, our model is also applicable to supervised learning, serving as a regularization term that improves the generalization performance of the classifier.
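The two-module adversarial objective described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the linear classifier, the linear perturbation map standing in for the learned generator, and all variable names are hypothetical. The generator would be trained to increase the consistency loss (fooling the classifier), while the classifier would be trained to decrease it (predicting consistently on original and adversarial inputs).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL divergence between rows of two probability matrices
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)

rng = np.random.default_rng(0)

# Hypothetical linear classifier f(x) = softmax(x W^T): 3 classes, 4 features
W = rng.normal(size=(3, 4))

def classifier(x):
    return softmax(x @ W.T)

# Hypothetical generator G(x) = x + x V: a fixed linear perturbation
# standing in for a learned network that produces adversarial examples
V = 0.1 * rng.normal(size=(4, 4))

def generator(x):
    return x + x @ V

x = rng.normal(size=(8, 4))            # a batch of unlabeled samples
p_clean = classifier(x)                # predictions on original samples
p_adv = classifier(generator(x))       # predictions on adversarial examples

# Consistency loss: the generator ascends it, the classifier descends it
consistency_loss = kl(p_clean, p_adv).mean()
print(consistency_loss)
```

In a full training loop, gradient ascent on the generator's parameters and gradient descent on the classifier's parameters would alternate, in the spirit of GAN training; the loss itself requires no labels, which is what makes it usable on unlabeled data.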