Abstract
The incorporation of artificial intelligence (AI), particularly deep learning (DL) technologies, has greatly improved the adaptability, inclusivity, and intelligence of the next generation of AI-native wireless networks. This advancement involves real-time data processing and analysis using AI and DL algorithms for proactive maintenance, self-optimization, enhanced security, and improved user experiences. This dissertation explores AI-driven optimization strategies in wireless networks, focusing on nonconvex optimization challenges such as minimizing power consumption, meeting data-rate demands, and maximizing sum rates.

The first strategy, OpenRANet, is an optimization-based deep learning model engineered to address the nonconvex problem of joint subcarrier and power allocation in open RAN, aiming to minimize total power consumption while ensuring that users meet their transmission data-rate requirements. We first formulate convex subproblems of the original problem using techniques such as decoupling, a change-of-variable strategy, and relaxation. We then use efficient iterations within the standard interference function framework to derive the primal-dual solutions of the subproblems, which serve as a convex optimization layer in the design of OpenRANet with machine-learning techniques. By integrating machine-learning techniques with the analysis of convex subproblems, OpenRANet adheres closely to problem constraints, achieves higher solution accuracy than pure machine-learning approaches, and offers better computational efficiency than purely optimization-based methods.
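To illustrate the standard interference function framework mentioned above (this is a minimal toy sketch, not the OpenRANet model itself), consider a hypothetical 3-user power-control instance where each user must meet an SINR target implied by its data-rate requirement. The fixed-point iteration p ← I(p) on a standard interference function (positive, monotone, scalable) converges to the componentwise-minimal feasible power vector; all channel gains and targets below are illustrative assumptions.

```python
import numpy as np

# Hypothetical 3-user instance: G[i, j] is the channel gain from
# transmitter j to receiver i; gamma holds the SINR targets implied
# by the users' data-rate requirements.
G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.1, 1.0]])
noise = 1e-2
gamma = np.array([1.0, 1.5, 2.0])

def std_interference(p):
    # Standard interference function (positive, monotone, scalable):
    # the minimum power each user needs to hit its SINR target given p.
    interference = G @ p - np.diag(G) * p + noise
    return gamma * interference / np.diag(G)

# Fixed-point iteration p <- I(p): converges to the componentwise-
# minimal feasible power vector when the targets are jointly feasible.
p = np.zeros(3)
for _ in range(200):
    p = std_interference(p)

# Each user's achieved SINR should now meet its target exactly.
sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
```

In the thesis, iterations of this kind supply primal-dual solutions of the convex subproblems, which are then wrapped as a differentiable convex optimization layer inside the learning model.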
The second technique introduces a deep reinforcement learning-based block coordinate descent (DRL-based BCD) algorithm to address the nonconvex sum-rate maximization problem under a total power constraint. First, we present an efficient block coordinate descent (BCD) method for the sum-rate maximization problem. While this method may not always reach globally optimal solutions, it provides a pathway for integrating machine learning and domain-specific techniques with theoretical analysis of the underlying convexity of the subproblems. We then integrate deep reinforcement learning (DRL) into the BCD method and propose the DRL-based BCD algorithm. This approach combines the data-driven learning capability of DRL with the navigational and decision-making characteristics of the optimization-based BCD method, enabling it to adhere to constraints, to improve performance significantly (potentially attaining the exact optimal solution) by reducing sensitivity to initial points, and to mitigate the risk of becoming trapped in local optima.
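The BCD step can be sketched as follows on an assumed toy 2-user interference channel (the channel gains, noise level, and the grid-search subproblem solver are all illustrative stand-ins, not the thesis's method): cycle over users, re-optimizing one user's transmit power at a time while the others and the remaining power budget are held fixed.

```python
import numpy as np

# Toy 2-user interference channel (illustrative values only).
G = np.array([[1.0, 0.2],
              [0.2, 1.0]])     # G[i, j]: gain from transmitter j to receiver i
noise = 0.1
P_total = 2.0

def sum_rate(p):
    signal = np.diag(G) * p
    interference = G @ p - signal + noise
    return np.sum(np.log2(1.0 + signal / interference))

# Block coordinate descent: cycle over users, solving a 1-D subproblem
# for one user's power (here by a simple grid search) while the other
# users' powers and the remaining total-power budget are fixed.
p = np.full(2, P_total / 2)
for _ in range(30):
    for i in range(2):
        budget = P_total - (p.sum() - p[i])    # power left for user i
        candidates = np.linspace(0.0, budget, 201)
        rates = []
        for x in candidates:
            q = p.copy()
            q[i] = x
            rates.append(sum_rate(q))
        p[i] = candidates[int(np.argmax(rates))]
```

Because the objective is nonconvex, the point this loop settles on can depend on the starting point and need not be globally optimal; in the thesis this is precisely where the DRL agent intervenes, guiding the block updates to reduce sensitivity to initialization.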
The third method, Neural Sum Rate Maximization, addresses the nonconvex problems of maximizing the sum rate under individual power constraints and a total power constraint for both uplink and downlink multiple access. To tackle these challenges, we leverage two key techniques: the optimization-based majorization-minimization method and the neural-network-based algorithm unrolling technique, which maps the iterations of the original algorithms onto trainable neural network layers, facilitating the development and deployment of our algorithms in the AI-native layer of future wireless networks. Our approach also exploits the mathematical structures of the sum-rate maximization problems, particularly the standard interference function framework and the alternating direction method of multipliers, for iterative algorithm design. Furthermore, by incorporating algorithm unrolling, our approach not only learns from data to enhance performance but also delivers significant gains in efficiency.
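The unrolling idea can be sketched structurally as follows. Note the hedge: the base iteration below is plain projected gradient ascent rather than the majorization-minimization or ADMM iterations the thesis unrolls, and the per-layer step sizes are fixed by hand; in a trained unrolled network each layer's step size would be a learnable parameter (e.g. a `torch.nn.Parameter`). All problem data are illustrative.

```python
import numpy as np

# Toy 2-user setting (illustrative values only).
G = np.array([[1.0, 0.15],
              [0.15, 1.0]])
noise = 0.1
P_total = 2.0

def sum_rate(p):
    signal = np.diag(G) * p
    interference = G @ p - signal + noise
    return np.sum(np.log2(1.0 + signal / interference))

def grad(p, eps=1e-6):
    # Finite-difference gradient of the sum rate (analytic in practice).
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (sum_rate(p + d) - sum_rate(p - d)) / (2.0 * eps)
    return g

# Unrolling: a FIXED number of iterations, each treated as one network
# layer with its own step size (the "weights" a trained model would learn).
step_sizes = [0.5, 0.3, 0.2, 0.1, 0.05]

def layer(p, alpha):
    p = p + alpha * grad(p)                    # ascent step
    p = np.maximum(p, 0.0)                     # enforce p >= 0
    if p.sum() > P_total:                      # enforce the power budget
        p *= P_total / p.sum()
    return p

p = np.array([0.2, 0.2])
for alpha in step_sizes:
    p = layer(p, alpha)
```

The point of the construction is that the truncated, parameterized iteration is differentiable end to end, so the per-layer parameters can be trained from data while every layer's projection keeps the iterate feasible.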
The fourth part of this thesis conducts a preliminary analysis of the outage probability under general fading environments. Because a precise closed-form expression is difficult to obtain theoretically, we employ regression analysis to approximate the outage probability effectively. We consider a special class of utility maximization problems subject to general outage probability constraints, which may not admit explicit or deterministic expressions. We then show that these constraints can be reformulated as constraints involving implicit standard interference functions that are affine. Hence, despite their implicit character, we can employ linear regression to convert these standard interference functions into explicit forms. We then exploit the convergence properties of standard interference functions to solve the utility maximization problem efficiently.
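A minimal sketch of this fit-then-iterate idea, under the assumption stated above that the implicit interference function is affine: here a hypothetical ground-truth affine map stands in for the constraint that would in practice be evaluated by simulation or measurement; it is sampled, recovered by least-squares linear regression, and the fitted explicit form is then iterated to its fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: an implicit affine standard interference
# function I(p) = A p + b with A >= 0 and b > 0 (illustrative values).
A_true = np.array([[0.0, 0.2, 0.1],
                   [0.1, 0.0, 0.2],
                   [0.2, 0.1, 0.0]])
b_true = np.array([0.3, 0.2, 0.4])

def implicit_I(p):
    # Stand-in for the outage constraint evaluated by simulation.
    return A_true @ p + b_true

# Sample the implicit function and fit the affine form by least squares.
P = rng.uniform(0.0, 2.0, size=(100, 3))        # sampled power vectors
Y = np.array([implicit_I(q) for q in P])        # observed responses
X = np.hstack([P, np.ones((100, 1))])           # affine design matrix
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat, b_hat = coef[:3].T, coef[3]

# Fixed-point iteration on the fitted (now explicit) standard
# interference function converges to the minimal feasible point.
p = np.zeros(3)
for _ in range(200):
    p = A_hat @ p + b_hat
```

Once the implicit constraint is in explicit affine form, the standard-interference-function convergence machinery applies unchanged, which is what makes the regression step sufficient for solving the utility maximization problem.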
Through numerical experiments, the above strategies demonstrate substantial advantages in solution accuracy, computational efficiency, and compliance with problem constraints in wireless networks. This suggests that fusing deep-learning methodologies with convex optimization analysis offers valuable potential for designing resource-constrained AI-native optimization strategies for next-generation wireless networks.
| Date of Award | 6 May 2025 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Chung CHAN (Supervisor) & Chee Wei TAN (External Co-Supervisor) |