Current techniques for tracking sleep are either obtrusive (polysomnography) or low in accuracy (wearables). In this early work, we model a sleep classification system using an unobtrusive ballistocardiographic (BCG) heart sensor signal collected from a commercially available pressure-sensitive sensor sheet. We present DeepSleep, a hybrid deep neural network architecture comprising CNN and LSTM layers. We further employed a 2-phase training strategy to build a pre-trained model and to tackle the limited dataset size. Our model achieves classification accuracies of 74%, 82%, 77% and 63% on the Dozee BCG, MIT-BIH ECG, Dozee ECG and Fitbit PPG datasets, respectively. Furthermore, our model shows a positive correlation (r = 0.43) with SATED perceived sleep quality scores. We show that BCG signals are effective for long-term sleep monitoring, but currently not suitable for medical diagnostic purposes.
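The abstract names a hybrid CNN-LSTM architecture for per-epoch sleep staging. A minimal sketch of that pattern is shown below; all layer sizes, the 100 Hz sampling rate, and the class count are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn as nn

class DeepSleepSketch(nn.Module):
    """Hypothetical CNN-LSTM hybrid in the spirit of DeepSleep.

    Layer widths, kernel sizes, and the 4-class output are illustrative
    assumptions; the paper's actual hyperparameters are not reproduced here.
    """
    def __init__(self, n_classes: int = 4):
        super().__init__()
        # 1-D convolutions extract local waveform morphology from the BCG signal
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # An LSTM models longer-range temporal structure across the epoch
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):          # x: (batch, 1, samples)
        f = self.cnn(x)            # (batch, 32, T)
        f = f.transpose(1, 2)      # (batch, T, 32), time-major for the LSTM
        _, (h, _) = self.lstm(f)   # final hidden state summarizes the epoch
        return self.head(h[-1])    # per-epoch sleep-stage logits

model = DeepSleepSketch()
# 8 thirty-second epochs at a hypothetical 100 Hz sampling rate
logits = model(torch.randn(8, 1, 3000))
print(logits.shape)  # torch.Size([8, 4])
```

The same structure also suggests how the 2-phase strategy could work in practice: pre-train such a network on a larger ECG corpus, then fine-tune on the smaller BCG dataset.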

https://doi.org/10.1145/3341162.3343758
ACM International Joint Conference on Pervasive and Ubiquitous Computing and International Symposium on Wearable Computers (UbiComp/ISWC 2019)
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands

Rao, S., El Ali, A., & César Garcia, P. S. (2019). DeepSleep: A ballistocardiographic deep learning approach for classifying sleep stages. In UbiComp/ISWC 2019 Adjunct: Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (pp. 187–190). doi:10.1145/3341162.3343758