Mobility-Aware Intelligent Services in Digital-Twin Empowered Edge Computing

Project: Research


Description

Mobile edge computing (MEC) is envisioned as a promising paradigm for delay-sensitive service provisioning. With billions of IoT devices connected to the Internet, more and more services provided by MEC networks are intelligent services, i.e., machine-learning-based inference services. It thus becomes crucial to provide accurate inference services while meeting stringent user service delay requirements. Orthogonal to MEC, the digital twin (DT) is another enabling technology poised to revolutionize many fields, including autonomous driving, healthcare, education, and smart cities. The global DT industry is forecast to grow to $73.5 billion in 2027 and $125.7 billion in 2030. Empowered by DT technology, intelligent services will become more accurate through continuous training of their inference models on the updated DT source data of those models.

The study of intelligent services in DT-empowered MEC networks is attracting growing interest. However, existing service-provisioning methods and techniques are not applicable to intelligent services because they do not support continuous training of inference models. New theories, algorithms, and techniques are therefore urgently needed. This project will investigate mobility-aware intelligent services in a DT-empowered MEC network under the mobility of both objects and users. The quality of each intelligent service is determined by the accuracy of its inference model, and the accuracy of each inference model in turn depends on the freshness of its source DT data, which is maintained through synchronization between the DTs and their physical objects. Considering the mobility of both objects and users, accurate inference service provisioning poses several challenges:

(1) How to develop a metric that captures the accuracy of inference models?
(2) Given limited bandwidth resources, which objects should upload their updates so as to maximize the accumulated accuracy of inference models?
(3) How to strike a non-trivial trade-off between the accumulated freshness of inference models and the cost of achieving that freshness?
(4) Which resolution of an inference model should be chosen to meet the service delay requirement of its user?
(5) How to develop effective prediction mechanisms that accurately predict the volume of update data generated by each object and the number of containers of different resolutions needed for each inference model?

This project will develop a suite of novel solutions to mobility-aware intelligent services in DT-empowered MEC networks. It will contribute substantially to enhancing our understanding of the core technologies and challenges of inference services, and to the design of performance-guaranteed algorithms and innovative prediction mechanisms.
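As a purely illustrative aside (not drawn from the project itself), the minimal sketch below shows one way challenges (1)-(3) could be modelled under heavy simplifying assumptions: an Age-of-Information-style staleness value is mapped to a crude accuracy proxy, and objects are greedily selected to upload their DT updates within a bandwidth budget so that the accumulated accuracy gain is maximized. The accuracy_proxy function, the exponential decay, and the knapsack-style greedy heuristic are all assumptions introduced here for exposition, not methods proposed by the project.

```python
from math import exp


def accuracy_proxy(staleness_s, decay=0.1):
    """Map DT data staleness (seconds since last synchronization) to a
    crude accuracy score in (0, 1]; fresher data -> higher assumed accuracy.
    The exponential decay is an illustrative assumption."""
    return exp(-decay * staleness_s)


def select_objects_to_sync(objects, bandwidth_budget):
    """Greedily pick which objects upload their DT updates so that the
    accumulated accuracy gain per unit of bandwidth is maximized.

    `objects` is a list of dicts with:
      'staleness':   seconds since the object's DT was last synchronized,
      'update_size': size of the pending update in MB.
    `bandwidth_budget` is the total upload volume (MB) available in this slot.
    """

    def gain(obj):
        # Accuracy improvement if this object's DT were refreshed now.
        return accuracy_proxy(0.0) - accuracy_proxy(obj["staleness"])

    # Rank objects by accuracy gain per MB (knapsack-style greedy heuristic).
    ranked = sorted(objects, key=lambda o: gain(o) / o["update_size"], reverse=True)

    chosen, used = [], 0.0
    for obj in ranked:
        if used + obj["update_size"] <= bandwidth_budget:
            chosen.append(obj)
            used += obj["update_size"]
    return chosen


if __name__ == "__main__":
    # Hypothetical objects; staleness in seconds, update size in MB.
    objs = [
        {"id": "vehicle-1", "staleness": 30.0, "update_size": 5.0},
        {"id": "camera-7", "staleness": 120.0, "update_size": 20.0},
        {"id": "sensor-3", "staleness": 10.0, "update_size": 1.0},
    ]
    picked = select_objects_to_sync(objs, bandwidth_budget=10.0)
    print([o["id"] for o in picked])  # e.g. ['sensor-3', 'vehicle-1']
```

In this toy setting, the cost side of challenge (3) appears only implicitly as the bandwidth budget; the project's actual formulations of accuracy, freshness, and cost are open research questions rather than the simple proxies used above.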

Detail(s)

Project number: 9043668
Grant type: GRF
Status: Not started
Effective start/end date: 1/01/25 → …