Abstract
The activation function is the heart of a neural network, and each function affects performance differently. Many activation functions exist nowadays, but the best known is the rectified linear unit (ReLU). Google Brain introduced an activation function called Swish, defined as f(x) = x * Sigmoid(βx), which gives good results and outperforms ReLU. A subsequent attempt to enhance Swish introduced the adjusted Swish (E_Swish), f(x) = βx * Sigmoid(x). This paper presents a new activation function similar to Swish and E_Swish, f(x) = βx * Sigmoid(βx), which we name E_Swish Beta. We evaluate E_Swish Beta, E_Swish, Swish, and ReLU on different datasets and models, and we show that E_Swish Beta improves results more than the others. It achieves improvements of 0.53%, 0.61%, and 0.42% relative to ReLU, Swish, and E_Swish, respectively, on the CIFAR-10 dataset with the WRN 16-4 model. On the CIFAR-100 dataset with WRN 16-4, E_Swish Beta provides improvements of 1.77%, 0.69%, and 0.27% relative to ReLU, Swish, and E_Swish.
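For concreteness, the three activations compared in the abstract can be expressed as a minimal NumPy sketch. The function names and the default β value below are illustrative assumptions, not the paper's code; β is a parameter that the respective papers fix or tune per experiment.

import numpy as np

def sigmoid(x):
    # Standard logistic sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

def swish(x, beta=1.0):
    # Swish (Ramachandran et al.): f(x) = x * sigmoid(beta * x)
    return x * sigmoid(beta * x)

def e_swish(x, beta=1.375):
    # E-Swish / adjusted Swish (Alcaide): f(x) = beta * x * sigmoid(x)
    # beta = 1.375 is an illustrative choice, not the paper's setting.
    return beta * x * sigmoid(x)

def e_swish_beta(x, beta=1.375):
    # E_Swish Beta (this paper): f(x) = beta * x * sigmoid(beta * x)
    return beta * x * sigmoid(beta * x)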
References
Schmidhuber J (2015) Deep learning in neural networks: an overview. Neural Netw 61:85–117
Salam A, El Hibaoui A (2018) Comparison of machine learning algorithms for the power consumption prediction—case study of Tetouan city. In: Proceedings of 2018 6th international renewable and sustainable energy conference, IRSEC 2018
LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444
Maarof M, Selamat A, Shamsuddin SM (2009) Text content analysis for illicit web pages by using neural networks. J Teknol Siri 50(D):73–91
Ding B, Qian H, Zhou J (2018) Activation functions and their characteristics in deep neural networks. In: Proceedings of the 30th Chinese control and decision conference, CCDC 2018. pp 1836–1841
Minai AA, Williams RD (1993) On the derivatives of the sigmoid. Neural Netw 6(6):845–853
Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the thirteenth international conference on artificial intelligence and statistics. pp 249–256
Hahnloser RHR, Sarpeshkar R, Mahowald MA, Douglas RJ, Seung HS (2000) Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405(6789):947–951
Xu B et al (2015) Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853
He K, Zhang X, Ren S, Sun J (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE international conference on computer vision, pp 1026–1034
Nair V, Hinton GE (2010) Rectified linear units improve restricted Boltzmann machines. In: ICML
Clevert DA, Unterthiner T, Hochreiter S (2015) Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289
Klambauer G, Unterthiner T, Mayr A, Hochreiter S (2017) Self-normalizing neural networks. In: Advances in neural information processing systems. pp 971–980
Godfrey LB, Gashler MS (2015) A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks. In: 2015 7th international joint conference on knowledge discovery, knowledge engineering and knowledge management (IC3K), vol 1. IEEE, pp 481–486
Hendrycks D, Gimpel K (2016) Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. arXiv preprint arXiv:1606.08415
Jin X et al (2015) Deep learning with S-shaped rectified linear activation units. arXiv preprint arXiv:1512.07030
Ramachandran P, Zoph B, Le QV (2017) Searching for activation functions. arXiv preprint arXiv:1710.05941
Alcaide E (2018) E-Swish: adjusting activations to different network depths. arXiv preprint arXiv:1801.07145
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems. pp 1097–1105
Zagoruyko S, Komodakis N (2016) Wide residual networks. arXiv preprint arXiv:1605.07146
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 770–778
Hasanpour SH et al (2016) Let's keep it simple: using simple architectures to outperform deeper and more complex architectures. arXiv preprint arXiv:1608.06037
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Salam, A., El Hibaoui, A. (2021). E_Swish Beta: Modifying Swish Activation Function for Deep Learning Improvement. In: Hassanien, A.E., Darwish, A., Abd El-Kader, S.M., Alboaneen, D.A. (eds) Enabling Machine Learning Applications in Data Science. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-33-6129-4_19
DOI: https://doi.org/10.1007/978-981-33-6129-4_19
Publisher Name: Springer, Singapore
Print ISBN: 978-981-33-6128-7
Online ISBN: 978-981-33-6129-4
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)