Speech is the most prevalent way that people communicate with one another, and voice and visual media are the most direct channels through which humans deliver information. Emotion-aware communication systems identify a speaker's emotional state by classifying features extracted from the speech signal, and mel-frequency cepstral coefficients (MFCCs) are the most widely used such features. The speech emotion detection process comprises three stages: speech data preprocessing, feature extraction, and speech emotion classification. Because the data may occasionally contain anomalies, the key to emotion recognition lies in the classification architecture and the speech emotion features, both of which carry crucial information.
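The abstract's feature-extraction stage can be illustrated with a minimal MFCC computation. The sketch below is not the paper's implementation; it is a standard NumPy-only pipeline (pre-emphasis, framing, windowing, power spectrum, mel filter bank, log, DCT-II), with all parameter values (16 kHz sample rate, 25 ms frames, 10 ms hop, 26 mel bands, 13 coefficients) chosen as common defaults rather than taken from the source.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_mels=26, n_ceps=13):
    """Compute MFCCs; returns an array of shape (n_frames, n_ceps)."""
    # 1) Pre-emphasis: boost high frequencies before analysis.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2) Slice into overlapping frames and apply a Hamming window.
    n_frames = 1 + max(0, (len(sig) - frame_len) // hop)
    frames = np.stack([sig[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames *= np.hamming(frame_len)
    # 3) Per-frame power spectrum (zero-padded FFT).
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 4) Triangular mel filter bank spanning 0 Hz .. sr/2.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fbank[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - c, 1)
    # 5) Log of the mel-band energies (small floor avoids log(0)).
    log_mel = np.log(power @ fbank.T + 1e-10)
    # 6) DCT-II decorrelates the bands; keep the first n_ceps coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_mel @ dct.T

# Example: one second of a synthetic 440 Hz tone stands in for real speech.
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)
feats = mfcc(audio, sr=sr)
print(feats.shape)  # (98, 13): 98 frames, 13 coefficients each
```

The resulting per-frame coefficient vectors would then feed the classification stage the abstract describes (any standard classifier over fixed-length feature statistics).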
Title of host publication: International Conference on Information Technology and Mechatronics Engineering, ICITME 2021
Dexter R. Buted, Elbert M. Galas, Randy Joy M. Ventayen, Potenciano D. Conte
Publisher: American Institute of Physics Inc.
Published: 16 May 2023
Event: 2021 International Conference on Information Technology and Mechatronics Engineering, ICITME 2021 - Pangasinan, Virtual, Philippines
Duration: 10 Dec 2021 → 12 Dec 2021
Series: AIP Conference Proceedings
ASJC Scopus subject areas
- General Physics and Astronomy