Atıl İlerialkan, Speaker and Posture Classification Using Instantaneous Acoustic Features of Breath Signals
Features extracted from speech are widely used in problems such as biometric speaker identification, but the use of speech data raises privacy concerns. We propose a method for speaker and posture classification using only breath data. Acoustic features were extracted from breath instances using the Hilbert-Huang transform and fed into our CNN-RNN network for classification. We also created a publicly available dataset, BreathBase, containing more than 5000 breath instances from 20 participants recorded in 5 different postures with 4 different microphones. On this data, 85% speaker classification accuracy and 98% posture classification accuracy were obtained.
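The instantaneous features mentioned above can be illustrated with a minimal sketch. A full Hilbert-Huang transform would first decompose the breath signal into intrinsic mode functions via empirical mode decomposition; the sketch below (not the authors' implementation) shows only the Hilbert step, computing instantaneous amplitude and frequency of a toy signal with SciPy. The sample rate and test tone are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# Hedged sketch: instantaneous amplitude/frequency via the Hilbert transform.
# A complete HHT would apply this to each intrinsic mode function obtained
# from EMD of a breath instance; here a pure tone stands in for one IMF.

fs = 16000                        # assumed sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)     # 100 ms of signal
x = np.sin(2 * np.pi * 440 * t)   # toy stand-in for one mode of a breath signal

analytic = hilbert(x)                         # analytic signal x + j*H{x}
inst_amplitude = np.abs(analytic)             # instantaneous envelope
inst_phase = np.unwrap(np.angle(analytic))    # unwrapped instantaneous phase
inst_frequency = np.diff(inst_phase) * fs / (2 * np.pi)  # Hz, per sample

# For a pure 440 Hz tone the median instantaneous frequency is close to 440 Hz.
print(float(np.median(inst_frequency)))
```

Frames of such instantaneous amplitude and frequency trajectories are the kind of time-varying representation a CNN-RNN can consume, with convolutional layers summarizing local spectral structure and recurrent layers modeling its evolution over the breath.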
Date: 27.11.2019 / 15.00 Place: A-212