TY - GEN
T1 - Can we discriminate between apnea and hypopnea using audio signals?
AU - Halevi, M.
AU - Dafna, E.
AU - Tarasiuk, A.
AU - Zigel, Y.
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/10/13
Y1 - 2016/10/13
AB - Obstructive sleep apnea (OSA) affects up to 14% of the population. OSA is characterized by recurrent apneas and hypopneas during sleep. The apnea-hypopnea index (AHI) is frequently used as a measure of OSA severity. In the current study, we explored the acoustic characteristics of hypopnea in order to distinguish it from apnea. We hypothesize that audio-based features can discriminate among apnea, hypopnea, and normal breathing events. Whole-night audio recordings were performed on 44 subjects using a non-contact microphone, simultaneously with polysomnography (PSG). Recordings were segmented into 2015 apnea, hypopnea, and normal breath events, which were divided into design and validation groups. A classification system was built using a three-class support vector machine (SVM) classifier with a cubic kernel; its input is a 36-dimensional audio-based feature vector extracted from each event. The three-class accuracy rate using the hold-out method was 84.7%. A two-class model separating apneic events (apneas and hypopneas) from normal breaths exhibited an accuracy rate of 94.7%. Here we show that it is possible to detect apneas or hypopneas from whole-night audio signals. This might provide more insight into a patient's level of upper-airway obstruction during sleep. This approach may be used for OSA severity screening and AHI estimation.
UR - http://www.scopus.com/inward/record.url?scp=85009115391&partnerID=8YFLogxK
U2 - 10.1109/EMBC.2016.7591412
DO - 10.1109/EMBC.2016.7591412
M3 - Conference contribution
C2 - 28268991
AN - SCOPUS:85009115391
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 3211
EP - 3214
BT - 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016
PB - Institute of Electrical and Electronics Engineers
T2 - 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2016
Y2 - 16 August 2016 through 20 August 2016
ER -
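
Below is a minimal, hypothetical sketch (not the authors' code) of the classification step described in the abstract: a three-class SVM with a cubic (degree-3 polynomial) kernel over 36-dimensional audio-based feature vectors, evaluated with a simple hold-out split. It assumes scikit-learn and substitutes random placeholder data for the real acoustic features; the label coding and the 70/30 split are also assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder for the 2015 segmented events: 36 audio-based features per event.
# Labels: 0 = normal breath, 1 = hypopnea, 2 = apnea (coding is assumed).
X = rng.normal(size=(2015, 36))
y = rng.integers(0, 3, size=2015)

# Hold-out split into design (training) and validation groups.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Standardize features, then fit the cubic-kernel SVM.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="poly", degree=3, C=1.0)
clf.fit(scaler.transform(X_train), y_train)

# Three-class hold-out accuracy (the paper reports 84.7% on real features).
y_pred = clf.predict(scaler.transform(X_val))
print(f"3-class hold-out accuracy: {accuracy_score(y_val, y_pred):.3f}")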