Resource Optimization for Caching, Computing, and Communications in Fog Radio Access Networks


Student thesis: Doctoral Thesis

Award date: 10 Jun 2022


Fog radio access network (F-RAN) is a promising network architecture that brings network functions closer to mobile users. An F-RAN consists of a cloud server connected via a wireless fronthaul link to densely deployed fog access points (F-APs). The F-APs, equipped with caching, computing, and signal processing capabilities, serve mobile users over the access network. The limited capacity of the fronthaul link is the bottleneck of the F-RAN; moreover, the F-APs have limited storage capacity and transmission power. These limited network resources must be used efficiently to satisfy mobile users’ demands and increase the profitability of mobile network operators. We study the problem of content caching and delivery in F-RAN to minimize the fronthaul traffic load and the energy consumption at the F-APs, and thereby provide file download services to a large number of users. Moreover, we consider computation offloading by mobile users. By exploiting the computation capability of the F-APs, mobile users with limited battery power and processing speed can offload their delay-sensitive and computation-intensive tasks to one or more F-APs. We study the joint optimization of computation and communication resources to minimize the energy consumption of mobile users while satisfying the delay constraints of the tasks.
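The local-versus-offload tradeoff described above can be sketched with a standard task model: local execution costs dynamic CPU energy, while offloading costs transmission energy plus remote execution time. All parameter values, function names, and the switched-capacitance constant below are illustrative assumptions, not quantities from the thesis.

```python
KAPPA = 1e-27  # hypothetical effective switched-capacitance coefficient

def local_cost(cycles, f_local):
    """Energy (J) and delay (s) of executing a task locally at CPU frequency f_local (Hz)."""
    energy = KAPPA * cycles * f_local ** 2
    delay = cycles / f_local
    return energy, delay

def offload_cost(bits, cycles, rate, p_tx, f_fap):
    """Energy (J) and delay (s) when the task is offloaded to an F-AP."""
    tx_time = bits / rate              # upload duration over the access link
    energy = p_tx * tx_time            # the user only pays for transmission
    delay = tx_time + cycles / f_fap   # upload plus remote execution
    return energy, delay

def should_offload(bits, cycles, deadline, f_local, rate, p_tx, f_fap):
    """Pick the feasible option with lower user energy; None if the deadline cannot be met."""
    e_loc, d_loc = local_cost(cycles, f_local)
    e_off, d_off = offload_cost(bits, cycles, rate, p_tx, f_fap)
    feasible = [(e, name) for e, d, name in
                [(e_loc, d_loc, "local"), (e_off, d_off, "offload")] if d <= deadline]
    return min(feasible)[1] if feasible else None
```

For a 10^9-cycle task and a fast fronthaul, offloading typically wins on energy whenever the upload fits within the deadline; with a very tight deadline neither option may be feasible.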

For content placement and delivery, we study how to place files on the F-APs and deliver them to mobile users over the fronthaul and access networks. We consider two scenarios. In the first, the delivery scheme is fixed and the content placement is designed to optimize the network resources; in the second, the content placement is fixed and the delivery schemes over the fronthaul and access networks are designed to optimize the network resources. For the first scenario, the content placement problem is studied from an information-theoretic perspective under a cooperative delivery scheme. We show that coding is not needed for isolated networks, in which every user is connected to all F-APs, and that caching the most popular files first is optimal. For large-scale networks, in which each user is connected to a subset of the F-APs, a geographical clustering algorithm is proposed to group the F-APs into clusters. Then, maximum distance separable (MDS) repetition and uncoded repetition schemes are proposed for caching, and a heuristic algorithm based on hypergraph coloring is designed to place the packets on the F-APs for both schemes. Simulation results show that the proposed schemes outperform existing benchmark schemes.
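The isolated-network result above — caching whole files in decreasing popularity order until the cache is full — can be sketched as a simple greedy policy. The file names, sizes, and popularity values below are hypothetical.

```python
def most_popular_first(files, capacity):
    """Greedily cache whole files in decreasing popularity order.

    files: list of (name, size, popularity) tuples.
    capacity: total cache size available at the F-APs.
    Returns the names of the cached files.
    """
    cached, used = [], 0
    for name, size, popularity in sorted(files, key=lambda f: f[2], reverse=True):
        if used + size <= capacity:
            cached.append(name)
            used += size
    return cached
```

With equal-size files this reduces exactly to "most popular first"; for unequal sizes it is only a greedy heuristic.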

For the second scenario, given uncoded and coded caching schemes based on file splitting, we jointly design content delivery over both the fronthaul and access networks to satisfy a quality of service (QoS) constraint. In the access network, a user can be served by a single F-AP or via beamforming from its associated F-APs to meet the required QoS. The fronthaul link is assumed to be a broadcast link, and index coding is applied. An optimal polynomial-time index coding algorithm is proposed for uncoded caching, and a heuristic algorithm for MDS coded caching. We also investigate the tradeoff between fronthaul traffic and transmission energy. Simulation results show that beamforming and index coding considerably reduce both the fronthaul traffic and the transmitted energy, and balance the tradeoff between them.
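The benefit of index coding on a broadcast fronthaul can be illustrated with the classic two-receiver XOR example (a toy sketch, not the thesis's polynomial-time algorithm): when each receiver already caches the file the other one wants, a single coded broadcast replaces two uncoded transmissions, halving the fronthaul load.

```python
def xor_bytes(x, y):
    """XOR two equal-length byte strings (files are padded to equal length in practice)."""
    return bytes(a ^ b for a, b in zip(x, y))

A = b"fileA!"  # wanted by F-AP 1, cached at F-AP 2
B = b"fileB!"  # wanted by F-AP 2, cached at F-AP 1

# The cloud broadcasts one coded packet instead of both files.
coded = xor_bytes(A, B)

# Each F-AP decodes using its cached side information.
decoded_at_fap1 = xor_bytes(coded, B)  # F-AP 1 recovers A
decoded_at_fap2 = xor_bytes(coded, A)  # F-AP 2 recovers B
```

One broadcast packet of file length thus satisfies both demands, whereas uncoded delivery would need two.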

For computation offloading, we investigate the joint computation and communication resource allocation problem, which aims to maximize the number of served users and minimize the total energy consumption subject to delay tolerance constraints. The problem is solved for both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) transmission schemes. Moreover, the joint user pairing and F-AP assignment problem for NOMA is proved to be NP-hard. For both NOMA and OMA, heuristic and optimal algorithms based on graph matching are designed. Simulation results show that NOMA considerably outperforms OMA in terms of outage probability and energy consumption, especially under tight delay tolerance constraints and for large computational tasks.
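As a toy baseline for the NOMA user-pairing step, an optimal pairing on a handful of users can be found by brute-force enumeration of perfect matchings. The pairwise energy values below are hypothetical, and this exhaustive search is only feasible for small instances — the thesis's actual algorithms are based on graph matching.

```python
def optimal_pairing(users, energy):
    """Brute-force minimum-energy perfect matching on a small, even-sized user set.

    users: sequence of user IDs.
    energy[(u, v)]: total energy when users u and v share one NOMA channel.
    Returns (total_energy, list_of_pairs).
    """
    users = tuple(users)
    if not users:
        return 0.0, []
    first = users[0]
    best_cost, best_pairs = float("inf"), []
    for partner in users[1:]:
        rest = tuple(u for u in users[1:] if u != partner)
        sub_cost, sub_pairs = optimal_pairing(rest, energy)
        key = (first, partner) if (first, partner) in energy else (partner, first)
        total = energy[key] + sub_cost
        if total < best_cost:
            best_cost, best_pairs = total, [(first, partner)] + sub_pairs
    return best_cost, best_pairs
```

Enumerating matchings costs (n-1)!! pairings, which is why the NP-hard joint pairing-and-assignment problem calls for the matching-based algorithms the thesis develops.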