TY - JOUR
T1 - Deep-learning-based unobtrusive handedness prediction for one-handed smartphone interaction
AU - Chen, Taizhou
AU - Zhu, Kening
AU - Yang, Ming Chieh
PY - 2023/2
Y1 - 2023/2
N2 - Handedness (i.e., the side of the holding and operating hand) is important contextual information for optimising one-handed smartphone interaction. In this paper, we present a deep-learning-based technique for unobtrusive handedness prediction in one-handed smartphone interaction. Our approach is built upon a multilayer LSTM (Long Short-Term Memory) neural network and processes the phone's built-in motion-sensor data in real time. Compared to existing approaches, our approach eliminates the need for extra user actions (e.g., on-screen tapping and swiping) and predicts handedness from the picking-up action and the holding posture before the user performs any operation on the screen. Our approach predicts handedness while a user is sitting, standing, and walking with accuracies of 97.4%, 94.6%, and 92.4%, respectively. We also show that our approach is robust to turbulent motion noise, with an average accuracy of 94.6% when users are on transportation (e.g., bus, train, and scooter). Furthermore, the presented approach can classify users' real-life single-handed smartphone usage into left- and right-handed use with an average accuracy of 89.2%.
KW - Handedness prediction
KW - LSTM
KW - Motion sensor
KW - Single hand
KW - Smartphone interaction
UR - http://www.scopus.com/inward/record.url?scp=85123188965&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85123188965&origin=recordpage
U2 - 10.1007/s11042-021-11844-6
DO - 10.1007/s11042-021-11844-6
M3 - RGC 21 - Publication in refereed journal
SN - 1380-7501
VL - 82
SP - 4941
EP - 4964
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
IS - 4
ER -