Can You Tell? SSNet - A Biologically-Inspired Neural Network Framework for Sentiment Classifiers

  • Conference paper
  • In: Machine Learning, Optimization, and Data Science (LOD 2021)

Abstract

When people try to understand nuanced language, they typically process multiple input sensory modalities to complete this cognitive task. It turns out the human brain even has a specialized neuron formation, called the sagittal stratum, to help us understand sarcasm. We use this biological formation as the inspiration for designing a neural network architecture that combines predictions of different models on the same text to construct accurate and computationally efficient classifiers for sentiment analysis, and we study several different realizations. Among them, we propose a systematic new approach to combining multiple predictions based on a dedicated neural network, and we develop a mathematical analysis of it along with state-of-the-art experimental results. We also propose a heuristic-hybrid technique for combining models and back it up with experimental results on a representative benchmark dataset and comparisons to other methods to show the advantages of the new approaches.

(DISCLAIMER: This paper is not subject to copyright in the United States. Commercial products are identified in order to adequately specify certain procedures. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the identified products are necessarily the best available for the purpose.)

We thank the NIST Information Technology Laboratory (ITL) for the research funding and support.
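
To make the combining idea concrete, the following minimal sketch trains a small dedicated neural network that maps the predictions of several base sentiment classifiers to a single combined probability, in the spirit of stacking [4, 24]. It is an illustration only, not the authors' SSNet implementation: the use of TensorFlow/Keras [15], the layer sizes, and all variable names are assumptions.

    # Hypothetical sketch (not the authors' SSNet code): combine the scalar
    # sentiment predictions of K base models with a small dedicated neural
    # network, trained as in stacked ensembles. Layer sizes, names, and the
    # use of TensorFlow/Keras are illustrative assumptions.
    import numpy as np
    import tensorflow as tf

    K = 3     # number of base sentiment classifiers being combined
    N = 1000  # number of held-out examples (synthetic stand-in here)

    # p[i, k] = probability that review i is positive according to base model k.
    # In practice these come from the base classifiers; random for this demo.
    rng = np.random.default_rng(0)
    p = rng.uniform(size=(N, K)).astype("float32")
    y = rng.integers(0, 2, size=(N, 1)).astype("float32")  # true labels

    # Dedicated combiner network: a small MLP from the K base predictions
    # to one combined sentiment probability.
    combiner = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(K,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    combiner.compile(optimizer="adam", loss="binary_crossentropy",
                     metrics=["accuracy"])

    # Fit the combiner on the held-out predictions, then use it to score
    # new texts from their per-model predictions.
    combiner.fit(p, y, epochs=5, batch_size=32, verbose=0)
    combined_prob = combiner.predict(p, verbose=0)

In a real pipeline, the base predictions would come from heterogeneous classifiers (for example, a BERT-based model [14] and a GloVe-based model [32, 39]) evaluated on held-out data, so that the combiner learns how much to trust each model rather than refitting the training set.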


Notes

  1. Reprinted from [6] with permission by Springer Nature, order #4841991468054.

References

  1. Maas, A.: Large movie review dataset (2011). http://ai.stanford.edu/~amaas/data/sentiment/

  2. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)

  3. Brants, T., Popat, A.C., Xu, P., Och, F.J., Dean, J.: Large language models in machine translation. In: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (Prague), pp. 858–867 (June 2007)

  4. Breiman, L.: Stacked regressions. Mach. Learn. 24(1), 49–64 (1996)

  5. Cambria, E.: Affective computing and sentiment analysis. IEEE Intell. Syst. 31(2), 102–107 (2016)

  6. Di Carlo, D.T., et al.: Microsurgical anatomy of the sagittal stratum. Acta Neurochir. 161(11), 2319–2327 (2019). https://doi.org/10.1007/s00701-019-04019-8

  7. Cer, D., et al.: Universal sentence encoder (2018)

  8. Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)

  9. Chomsky, N.: Center for Brains, Minds + Machines, research meeting: language and evolution (May 2017). https://youtu.be/kFR0LW002ig

  10. Collobert, R., Weston, J.: A unified architecture for natural language processing: deep neural networks with multitask learning. In: Proceedings of the 25th International Conference on Machine Learning, pp. 160–167 (2008)

  11. Conneau, A., Kiela, D., Schwenk, H., Barrault, L., Bordes, A.: Supervised learning of universal sentence representations from natural language inference data. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September, pp. 670–680. Association for Computational Linguistics (2017). https://arxiv.org/abs/1705.02364v5

  12. D’Amour, A., et al.: Underspecification presents challenges for credibility in modern machine learning. Preprint arXiv:2011.03395 (2020)

  13. Deng, L., Hinton, G., Kingsbury, B.: New types of deep neural network learning for speech recognition and related applications: an overview. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8599–8603 (2013)

  14. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805v2 (2019)

  15. Google Brain Team: Open source library for ML models (2020). https://www.tensorflow.org/

  16. Graves, A., Wayne, G., Danihelka, I.: Neural Turing Machines. arXiv preprint arXiv:1410.5401 (2014)

  17. Heaven, W.D.: The way we train AI is fundamentally flawed. MIT Technol. Rev., November 2020. https://www.technologyreview.com/2020/11/18/1012234/training-machine-learning-broken-real-world-heath-nlp-computer-vision/

  18. Hoang, M., Bihorac, O.A., Rouces, J.: Aspect-based sentiment analysis using BERT. In: Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa), Turku, Finland, 30 September–2 October, pp. 187–196. Linköping University Electronic Press (2019)

  19. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)

  20. Howard, J., Ruder, S.: Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146 (2018)

  21. Keras Documentation: IMDB movie reviews sentiment classification (2018). https://keras.io/datasets/

  22. Kittler, J., Hatef, M., Duin, R.P., Matas, J.: On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 20(3), 226–239 (1998)

  23. Kumar, A., Kim, J., Lyndon, D., Fulham, M., Feng, D.: An ensemble of fine-tuned convolutional neural networks for medical image classification. IEEE J. Biomed. Health Inform. 21(1), 31–40 (2017)

  24. LeBlanc, M., Tibshirani, R.: Combining estimates in regression and classification. J. Am. Stat. Assoc. 91(436), 1641–1650 (1996)

  25. Luong, M.T., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015)

  26. Magerman, D.M.: Learning grammatical structure using statistical decision-trees. In: Miclet, L., de la Higuera, C. (eds.) ICGI 1996. LNCS, vol. 1147, pp. 1–21. Springer, Heidelberg (1996). https://doi.org/10.1007/BFb0033339

  27. Màrquez, L., Rodríguez, H.: Part-of-speech tagging using decision trees. In: Nédellec, C., Rouveirol, C. (eds.) ECML 1998. LNCS, vol. 1398, pp. 25–36. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0026668

  28. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp. 3111–3119 (2013). https://arxiv.org/abs/1310.4546

  29. Minaee, S., Kalchbrenner, N., Cambria, E., Nikzad, N., Chenaghlu, M., Gao, J.: Deep learning based text classification: a comprehensive review (2020)

  30. Opitz, D., Maclin, R.: Popular ensemble methods: an empirical study. J. Artif. Int. Res. 11(1), 169–198 (1999)

  31. Paul, R., Hall, L., Goldgof, D., Schabath, M., Gillies, R.: Predicting nodule malignancy using a CNN ensemble approach. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2018)

  32. Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162

  33. Perez, F., Avila, S., Valle, E.: Solo or ensemble? Choosing a CNN architecture for melanoma classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (2019)

  34. Pexman, P.M.: How do we understand sarcasm? Front. Young Mind. 6(56), 1–8 (2018). https://doi.org/10.3389/frym.2018.00056

  35. Savelli, B., Bria, A., Molinara, M., Marrocco, C., Tortorella, F.: A multi-context CNN ensemble for small lesion detection. Artif. Intell. Med. 103, 101749 (2020). https://doi.org/10.1016/j.artmed.2019.101749. http://www.sciencedirect.com/science/article/pii/S0933365719303082

  36. Schuster, M., Paliwal, K.K.: Bidirectional recurrent neural networks. IEEE Trans. Sig. Process. 45(11), 2673–2681 (1997)

  37. Sun, C., Huang, L., Qiu, X.: Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 380–385. Association for Computational Linguistics (June 2019). https://doi.org/10.18653/v1/N19-1035. https://www.aclweb.org/anthology/N19-1035

  38. Turing, A.M.: I. - Computing machinery and intelligence. Mind LIX(236), 433–460 (1950). https://doi.org/10.1093/mind/LIX.236.433

  39. Vassilev, A.: BowTie - a deep learning feedforward neural network for sentiment analysis. In: Nicosia, G., Pardalos, P., Umeton, R., Giuffrida, G., Sciacca, V. (eds.) LOD 2019. LNCS, vol. 11943, pp. 360–371. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-37599-7_30

  40. Voelker, A.R., Kajić, I., Eliasmith, C.: Legendre memory units: Continuous-time representation in recurrent neural networks. In: Advances in Neural Information Processing Systems (2019). https://papers.nips.cc/paper/9689-legendre-memory-units-continuous-time-representation-in-recurrent-neural-networks

  41. Wolfram Neural Net Repository: BookCorpus dataset (2019). https://resources.wolframcloud.com/NeuralNetRepository/resources/BERT-Trained-on-BookCorpus-and-English-Wikipedia-Data

  42. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: generalized autoregressive pretraining for language understanding. In: Advances in Neural Information Processing Systems, pp. 5753–5763 (2019)

Author information

Correspondence to Apostol Vassilev.


Copyright information

© 2022 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply

About this paper

Cite this paper

Vassilev, A., Hasan, M., Jin, H. (2022). Can You Tell? SSNet - A Biologically-Inspired Neural Network Framework for Sentiment Classifiers. In: Nicosia, G., et al. (eds.) Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science, vol. 13163. Springer, Cham. https://doi.org/10.1007/978-3-030-95467-3_27

  • DOI: https://doi.org/10.1007/978-3-030-95467-3_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95466-6

  • Online ISBN: 978-3-030-95467-3

  • eBook Packages: Computer Science, Computer Science (R0)
