Emotion Recognition from Speech Using Multiple Features and Clusters

  • Conference paper
Evolution in Computational Intelligence

Part of the book series: Smart Innovation, Systems and Technologies (SIST, volume 267)

Abstract

Speech Emotion Recognition (SER), which detects emotions from speech signals, is gaining popularity in the field of Human–Computer Interaction. The emotional state of a speaker is identified from speech using Mel Frequency Cepstral Coefficients (MFCC) and Gammatone Cepstral Coefficients (GTCC) as features of low dimensionality, and classification is done with vector quantization (VQ) modelling and a minimum-distance classifier. The source used was the Berlin database, which contains utterances recorded by actors in emotions such as anger, boredom, sadness, and neutral. The speech signals are first digitized and pre-emphasized, then segmented into frames. Each frame is multiplied by a Hamming window, which tapers the frame edges and reduces spectral leakage at higher frequencies. MFCC and GTCC features are then extracted from the windowed speech signal. The extracted features are applied to the VQ models, and the emotion is classified based on the minimum distance. The unsupervised machine learning algorithm K-means is used to build the classifier, and the accuracies of the MFCC and GTCC features in distinguishing the emotions anger, sadness, boredom, and neutral are compared.
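The front end described in the abstract can be sketched in a few lines. The following is a minimal illustration, assuming librosa is available; the sampling rate, frame length, hop size, and the choice of 13 coefficients are illustrative assumptions, not values taken from the paper (GTCC extraction would additionally require a gammatone filterbank, which librosa does not provide).

```python
import numpy as np
import librosa

def extract_mfcc(path, sr=16000, frame_len=400, hop=160, n_mfcc=13):
    """Pre-emphasis -> framing -> Hamming windowing -> MFCC,
    producing one feature vector per frame."""
    y, _ = librosa.load(path, sr=sr)

    # Pre-emphasis: first-order high-pass filter y[n] - 0.97*y[n-1]
    # that boosts the higher frequencies before framing.
    y = librosa.effects.preemphasis(y, coef=0.97)

    # Explicit framing and Hamming windowing, shown only to mirror the
    # pipeline in the abstract (librosa.feature.mfcc repeats these
    # steps internally on the pre-emphasized signal below).
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop)
    windowed = frames * np.hamming(frame_len)[:, None]

    # 13 MFCCs per Hamming-windowed frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=512, win_length=frame_len,
                                hop_length=hop, window="hamming")
    return mfcc.T  # shape: (num_frames, n_mfcc)
```

The VQ back end can be sketched similarly: one K-means codebook is trained per emotion, and a test utterance is assigned the emotion whose codebook reproduces its frames with the smallest average distortion. scikit-learn's KMeans and the codebook size of 16 are assumed choices, not taken from the paper.

```python
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def train_codebooks(features_by_emotion, codebook_size=16):
    """features_by_emotion maps each emotion to a (num_frames, dim)
    array of feature vectors (MFCC or GTCC) pooled over all training
    utterances of that emotion."""
    codebooks = {}
    for emotion, feats in features_by_emotion.items():
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
        km.fit(feats)
        codebooks[emotion] = km.cluster_centers_
    return codebooks

def classify(features, codebooks):
    """Minimum-distance rule: for each emotion, average the distance
    from every frame to its nearest codeword, then pick the emotion
    with the lowest average distortion."""
    distortions = {
        emotion: cdist(features, centers).min(axis=1).mean()
        for emotion, centers in codebooks.items()
    }
    return min(distortions, key=distortions.get)
```

A test utterance would then be classified as, e.g., `classify(extract_mfcc("test.wav"), codebooks)`, where "test.wav" is a hypothetical recording.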

Declaration

We have taken permission from the competent authorities to use the images and data given in this paper. In case of any dispute in the future, we shall be wholly responsible.

Author information

Corresponding author

Correspondence to A. Revathi.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Revathi, A., Neharika, B., G, G. (2022). Emotion Recognition from Speech Using Multiple Features and Clusters. In: Bhateja, V., Tang, J., Satapathy, S.C., Peer, P., Das, R. (eds) Evolution in Computational Intelligence. Smart Innovation, Systems and Technologies, vol 267. Springer, Singapore. https://doi.org/10.1007/978-981-16-6616-2_25
