Abstract
We aimed to improve the state of the art in decoding speech from neural activity, with the ultimate goal of developing a useful brain-machine interface (BMI) for individuals who have lost the ability to speak, whether from ALS, stroke, or traumatic brain injury. In our recent study (Makin et al. in Nat Neurosci 23:575–582, 2020), each of four participants undergoing clinical monitoring for epilepsy read aloud, making repeated passes through a set of some 30–50 sentences, while her electrocorticogram was simultaneously recorded. Our algorithm, inspired by recent ideas in machine translation, brought word error rates down from the previous state of the art of roughly 60% to 3%. In this chapter, we discuss those results, their limitations, and their implications for the general problem of speech decoding.
J. G. Makin is now with the School of Electrical and Computer Engineering at Purdue University. For questions about the algorithm or code, contact him at jgmakin@purdue.edu. For questions about the experiment or data, contact Edward Chang at edward.chang@ucsf.edu.
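To make the machine-translation framing concrete, the sketch below shows, in PyTorch, the general shape of an encoder-decoder network of the kind the chapter builds on (cf. Sutskever et al. 2014; Cho et al. 2014): a temporal convolution downsamples the ECoG features, one recurrent network summarizes the resulting sequence, and a second recurrent network generates words from that summary. All class names, layer sizes, and hyperparameters here are illustrative placeholders, not the published architecture.

```python
import torch
import torch.nn as nn

class ECoGSeq2Seq(nn.Module):
    """Illustrative encoder-decoder: ECoG feature sequences in, word logits out.

    Hyperparameters are placeholders, not those of Makin et al. (2020).
    """

    def __init__(self, n_channels=256, hidden=400, vocab_size=2000):
        super().__init__()
        # Temporal convolution downsamples the high-rate neural signal.
        self.conv = nn.Conv1d(n_channels, hidden, kernel_size=12, stride=12)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, vocab_size)

    def forward(self, ecog, words):
        # ecog: (batch, time, channels); words: (batch, length) of token IDs.
        x = self.conv(ecog.transpose(1, 2)).transpose(1, 2)
        _, state = self.encoder(x)             # summarize the neural sequence
        y, _ = self.decoder(self.embed(words), state)  # condition on summary
        return self.readout(y)                 # per-step logits over the vocab
```

In a setup like this, the decoder would be teacher-forced on the reference transcript during training; at test time, words are generated one at a time, each fed back as the next input.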
Notes
1. Errors are computed as the minimum number of word insertions, deletions, and substitutions required to transform the predicted word sequence into the true one. Dividing by the number of words in the true sequence yields the word error rate (WER). Intuitively, any sensible decoder should achieve error rates between 0 and 1.0, since the WER for a "decoder" that just predicts an empty sequence for every "input" is precisely 1.0. But in practice, poor decoders can make errors at rates greater than 1.
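As a concrete illustration of that note, here is a minimal Python sketch of the WER computation: the Levenshtein (edit) distance over words, normalized by the length of the true sequence. The function name and the toy examples are ours, not from the chapter.

```python
def word_error_rate(predicted: list[str], true: list[str]) -> float:
    """WER = (insertions + deletions + substitutions) / len(true).

    Assumes the true sequence is non-empty.
    """
    m, n = len(predicted), len(true)
    # dp[i][j]: edit distance between predicted[:i] and true[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all predicted words
    for j in range(n + 1):
        dp[0][j] = j                      # insert all true words
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if predicted[i - 1] == true[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[m][n] / n

# The empty prediction yields a WER of exactly 1.0, as the note observes:
assert word_error_rate([], "the quick brown fox".split()) == 1.0
# But a bad decoder can exceed 1.0, e.g., by over-generating words:
print(word_error_rate("a b c d e f".split(), "x y".split()))  # 3.0
```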
References
Angrick M, Herff C, Mugler E, Tate MC, Slutzky MW, Krusienski DJ, Schultz T (2019) Speech synthesis from ECoG using densely connected 3D convolutional neural networks. J Neural Eng 16(3):036019
Anumanchipalli GK, Chartier J, Chang EF (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568(7753):493–498
Brumberg JS, Kennedy PR, Guenther FH (2009) Artificial speech synthesizer control by brain-computer interface. In: Interspeech, pp 636–639
Brumberg JS, Wright EJ, Andreasen DS, Guenther FH, Kennedy PR (2011) Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex. Front Neuroeng 5:1–12
Caruana R (1997) Multitask learning. Mach Learn 28:41–75
Cho K, Van Merrienboer B, Bahdanau D, Bengio Y (2014) On the properties of neural machine translation: encoder–decoder approaches. In: Proceedings of SSST-8, eighth workshop on syntax, semantics and structure in statistical translation, pp 103–111
Cho K, Van Merrienboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: 2014 conference on empirical methods in natural language processing (EMNLP), pp 1724–1734
Gehring J, Auli M, Grangier D, Yarats D, Dauphin YN (2017) Convolutional sequence to sequence learning. In: 34th international conference on machine learning, ICML 2017, vol 3, pp 2029–2042
Herff C, Heger D, De Pesters A, Telaar D, Brunner P, Schalk G, Schultz T (2015) Brain-to-text: decoding spoken phrases from phone representations in the brain. Front Neurosci 9:1–11
Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980
Makin JG, Moses DA, Chang EF (2020) Machine translation of cortical activity to text with an encoder-decoder framework. Nat Neurosci 23:575–582
Martin S, Brunner P, Holdgraf C, Heinze HJ, Crone NE, Rieger J, Schalk G, Knight RT, Pasley BN (2014) Decoding spectrotemporal features of overt and covert speech from the human cortex. Front Neuroeng 7:1–15
Moses DA, Leonard MK, Makin JG, Chang EF (2019) Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat Commun 10(1)
Mugler EM, Tate MC, Livescu K, Templer JW, Goldrick MA, Slutzky MW (2018) Differential representation of articulatory gestures and phonemes in precentral and inferior frontal gyri. J Neurosci 38(46):9803–9813
Munteanu C, Penn G, Baecker R, Toms E, James D (2006) Measuring the acceptable word error rate of machine-generated webcast transcripts. In: Interspeech, pp 157–160
Pei X, Barbour DL, Leuthardt EC (2011) Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. J Neural Eng 8(4):1–11
Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15:1929–1958
Stavisky SD, Rezaii P, Willett FR, Hochberg LR, Shenoy KV, Henderson JM (2018) Decoding speech from intracortical multielectrode arrays in dorsal “arm/hand areas” of human motor cortex. In: Proceedings of the annual international conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pp 93–97
Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. In: Advances in neural information processing systems 27: proceedings of the 2014 conference, pp 1–9
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 1–9
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is all you need. In: Advances in neural information processing systems, pp 5998–6008
Wrench A (2019) MOCHA-TIMIT. Online database
Xiong W, Droppo J, Huang X, Seide F, Seltzer ML, Stolcke A, Yu D, Zweig G (2017) Toward human parity in conversational speech recognition. IEEE/ACM Trans Audio Speech Lang Process 25(12):2410–2423
Acknowledgements
The project was funded by a research contract under Facebook’s Sponsored Academic Research Agreement. Data were collected and pre-processed by members of the Chang lab, some (MOCHA-TIMIT) under NIH grant U01 NS098971. Some neural networks were trained using GPUs generously donated by the Nvidia Corporation.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Makin, J.G., Moses, D.A., Chang, E.F. (2021). Speech Decoding as Machine Translation. In: Guger, C., Allison, B.Z., Gunduz, A. (eds) Brain-Computer Interface Research. SpringerBriefs in Electrical and Computer Engineering. Springer, Cham. https://doi.org/10.1007/978-3-030-79287-9_3
DOI: https://doi.org/10.1007/978-3-030-79287-9_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-79286-2
Online ISBN: 978-3-030-79287-9
eBook Packages: Computer Science, Computer Science (R0)