Sleep Staging in Patients with Suspected Sleep Apnea Using Wearables and Deep Learning

Author
Aguet, Clémentine
Constantin, Loris
Baty, Florent
Boesch, Maximilian
Renevey, Philippe
Ferrario, Damien
Lemay, Mathieu
Brutsche, Martin
Braun, Fabian
DOI
10.1515/bmt-2025-1001
Abstract
Methods: The initial deep learning model combined residual and temporal convolutional layers to capture intricate and long-range dependencies. An iterative multi-stage training approach utilized raw ECG and PPG signals for robust feature learning. A lightweight version using inter-beat intervals was developed for deployment in memory-constrained systems. Both models were evaluated on a cohort of 171 patients with suspected sleep apnea. Participants underwent in-hospital PSG alongside simultaneous recording of reflectance PPG and 3-D accelerometer signals using CSEM's wrist-worn wearable device.

Results: Compared to PSG, the original deep learning model achieved a median accuracy of 80.2% with a Cohen's Kappa of 0.71 in classifying the four sleep stages: wakefulness, light sleep (S1+S2), deep sleep (S3), and rapid eye movement (REM). It showed low median errors of 10.0 min for total sleep time (TST) and 1.89% for sleep efficiency (SE). The lightweight model performed comparably well (median accuracy 80.1%, Cohen's Kappa 0.70), with slightly improved sleep metrics (median TST error 8.5 min, median SE error 1.5%).

Conclusion: These findings support accurate, scalable, and cost-effective sleep monitoring. The simplified model maintained performance, highlighting its potential for efficient wearable implementation without compromising reliability. Future work should enhance wake detection, possibly by incorporating accelerometer-derived motion. Overall, such a PPG-based solution underscores the feasibility of unobtrusive, long-term sleep monitoring in patients' homes.
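The abstract reports epoch-level accuracy, Cohen's Kappa, and errors in total sleep time (TST) and sleep efficiency (SE). The sketch below illustrates, under stated assumptions, how such metrics can be computed from two epoch-by-epoch hypnograms (standard 30-second PSG epochs); the label set and function names are illustrative and not taken from the paper.

```python
# Illustrative sketch (not the authors' code): agreement and sleep metrics
# from a reference (PSG) and a predicted hypnogram of 30-second epochs.
from collections import Counter

EPOCH_MIN = 0.5  # one PSG scoring epoch = 30 s

def cohens_kappa(ref, pred):
    """Chance-corrected agreement between two label sequences."""
    n = len(ref)
    po = sum(r == p for r, p in zip(ref, pred)) / n  # observed agreement
    ref_c, pred_c = Counter(ref), Counter(pred)
    # expected agreement by chance, from the marginal label frequencies
    pe = sum(ref_c[k] * pred_c[k] for k in set(ref) | set(pred)) / (n * n)
    return (po - pe) / (1 - pe)

def sleep_metrics(hypnogram):
    """Return (TST in minutes, SE in %) for one hypnogram."""
    tst = sum(stage != "W" for stage in hypnogram) * EPOCH_MIN
    se = 100.0 * tst / (len(hypnogram) * EPOCH_MIN)  # sleep / time in bed
    return tst, se

# Toy example: W = wake, L = light (S1+S2), D = deep (S3), R = REM
ref  = ["W", "W", "L", "L", "D", "D", "R", "R", "L", "W"]
pred = ["W", "L", "L", "L", "D", "D", "R", "R", "L", "W"]

acc = sum(r == p for r, p in zip(ref, pred)) / len(ref)
kappa = cohens_kappa(ref, pred)
tst_ref, se_ref = sleep_metrics(ref)
tst_pred, se_pred = sleep_metrics(pred)
tst_err = abs(tst_pred - tst_ref)
se_err = abs(se_pred - se_ref)
print(f"accuracy={acc:.2f} kappa={kappa:.3f} "
      f"TST error={tst_err} min SE error={se_err}%")
```

In the study these per-night values were summarized as medians over the 171-patient cohort.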
Publication Reference
BMT 2025, Muttenz (Switzerland)
Year
2025-09-11