Analysis of French Phonetic Idiosyncrasies for Accent Recognition

Speech recognition systems have made tremendous progress over the last few decades, and they now identify a speaker's speech reliably. However, there is still room for improvement in recognizing the nuances and accents of a speaker. Any natural language may be spoken with at least one accent, and even when the phonemic composition of a word is identical, pronouncing it with different accents produces different sound waves. Differences in pronunciation, accent, and intonation of speech in general therefore create one of the most common problems in speech recognition: if a language has many accents, an acoustic model must be created for each of them separately. We carry out a systematic analysis of the problem of accurately classifying accents. Using both traditional machine learning techniques and convolutional neural networks, we show that the classical techniques are not sufficiently efficient to solve this problem. Based on spectrograms of speech signals, we propose a multi-class classification framework for accent recognition. In this paper, we focus our attention on the French accent, and we identify the framework's limitations by analyzing the impact of French idiosyncrasies on spectrograms.


Introduction
Accent recognition has become one of the most important topics in speaker-independent automatic speech recognition (SI-ASR) systems in recent years.
Voice-controlled technologies have become part of our daily life; nevertheless, variability in speech makes these spoken language technologies relatively difficult to build. One of the most profound sources of variability in a speech signal is the accent.
Different models could be developed to handle SI-ASR by accurately classifying the various accent types [1]. Such a successful accent recognition module can be integrated into a natural language processor, leading to wide-ranging impact in finance [2], medical science [3], and sustainable environment [4].
Dialect/accent refers to the different ways of pronouncing/speaking a language within a community. Illustrative examples are American English versus British English speakers, or Spanish speakers in Spain versus the Caribbean. During the past few years, there have been significant attempts to automatically recognize the dialect or accent of a speaker given his or her speech utterance. Recognizing the dialect or accent of a speaker prior to automatic speech recognition (ASR) helps improve the performance of ASR systems by adapting the ASR acoustic and/or language models appropriately.
Moreover, in applications such as the smart assistants used in smartphones, recognizing the accent of the caller and then connecting the caller to an agent with a similar dialect or accent produces a more user-friendly environment for the users of the application.
Most of the existing techniques do not achieve good accuracy in identifying the various accents. One reason it is difficult to obtain good accuracy in the accent recognition problem is our limited knowledge of English syllabic structure. In order to approximate English phonology, we have to understand the native language's similarities of articulation, intonation, and rhythm. Past research has focused on phone inventories and sequences, acoustic realizations, and intonation patterns. It is therefore important to study the English syllable structure. The main problem behind word recognition is understanding the syllable, which usually consists of an obligatory vowel with optional initial and final consonants. One familiar way of subdividing a syllable is into onset and rhyme; phonetically, all syllables in all languages consist of at least an onset and a rhyme. However, these categories alone do not indicate where the syllable is placed within the word. In order to capture foreign accents in English, we want to highlight those constituents of the syllable that are most likely to prove difficult for speakers of languages in which they do not occur [5].
In this paper, we focus on the specifics of the French language. We are interested in identifying the idiosyncrasies [6] of French speakers that lead a model into predicting the wrong accent.

Related work
Berkling et al. [5] discussed tonal and non-tonal languages and their treatment in speech recognition systems. Kardava et al. [7] developed an approach to solve the above-mentioned problems and create a more effective, improved speech recognition system for the Georgian language and for languages similar to Georgian. Katarina et al. [8] proposed an automatic method for detecting the degree of foreign accent, comparing the results with accent labeling carried out by an expert phonetician. The authors of [9] give a new approach for modelling allophones in a speech recognition system based on hidden Markov models.
The authors of [10] studied mutual influences between native and non-native vowel production during learning, i.e., before and after short-term visual articulatory feedback training with non-native sounds. To obtain a speaker's pronunciation characteristics, [11] gave a method based on an idea from bionics, which uses spectrogram statistics to derive a characteristic spectrogram giving a stable representation of the speaker's pronunciation from a linear superposition of short-time spectrograms. Hossari et al. [12] used a two-stage cascading model based on Facebook's fastText implementation [13] to learn word embeddings. Davies et al. [14] presented advanced computer vision methods, emphasizing the machine and deep learning techniques that have emerged during the past 5-10 years; the book provides clear explanations of principles and algorithms supported with applications. In [15], Farris presents the Gini index and several measures of integrity.

Contributions of the paper
The main contributions of this paper 1 can be summarized as follows:
• Highlighting the accuracy limit arising in the study of accent recognition. In this paper, we show that there exists a "natural" limit on the accuracy when it comes to accent classification. The main aim of this work is to address that limit and give a solution to that problem.
• Highlighting French idiosyncrasies that restrict the accuracy of deep learning models. In this paper, we focus our work on French speakers. We study the language habits of French speakers that could explain the decrease in precision. Indeed, English is an Indo-European Germanic language while French is a Latin language, which means that their structures are very different. Thus, we find strongly similar words between the two languages, but the way of pronouncing them often varies a lot. The study of these Latin habits is therefore particularly interesting in the context of our work: understanding which aspects of the French language reduce the effectiveness of our models will allow us to better recognize a French accent later on.
• Highlighting the incidence of these idiosyncrasies in the spectrograms, and therefore on the models in question. Once we have isolated the responsible French idiosyncrasies more clearly, we determine their real impact on the models used (a CNN in our case) through a precise study of the spectrograms of the vocal samples used. In this case, we compare different spectrograms for the same sentence and determine the differences between a "French" and an "English" spectrogram for a specific idiosyncrasy.

1 In the spirit of reproducible research, the code to reproduce the results in this paper is shared at https://github.com/pberjon/Article-Accent-Recognition.
The rest of the paper is structured as follows. Section 2 discusses the data and the methods we used in our preliminary study (dataset and neural networks), and Section 3 discusses the results we obtained with these methods. In Sections 4 and 5, we analyse French speakers' idiosyncrasies and their consequences on spectrograms. Finally, Section 6 concludes the work and discusses future work.

A primer on French speakers' idiosyncrasies
In this section, we provide a primer to the readers on the various types of speech idiosyncrasies exhibited by French speakers.

French-infused vowels
Nearly every English vowel is affected by the French accent [10]. French has no diphthongs, so vowels are always shorter than their English counterparts.
The long A, O, and U sounds in English, as in say, so, and Sue, are pronounced by French speakers like their similar but un-diphthonged French equivalents, as in the French words sais, seau, and sou. For example, English speakers pronounce say as [seI], with a diphthong made up of a long "a" sound followed by a sort of "y" sound. But French speakers will say [se] - no diphthong, no "y" sound. English vowel sounds which do not have close French equivalents are systematically replaced by other sounds, as shown in Table 1.

Dropped Vowels, Syllabification, and Word Stress
French people pronounce all schwas (unstressed vowels). Native English speakers tend toward "r'mind'r," but French speakers say "ree-ma-een-dair." They will pronounce amazes as "ah-may-zez," with the final e fully stressed, unlike native speakers who gloss over it: "amaz's." The short O, as in cot, is pronounced either "uh" as in cut or "oh" as in coat, and the U in words like full is pronounced "oo" as in fool. The French also often emphasize the -ed at the end of a verb, even if that means adding a syllable: amazed becomes "ah-may-zed." Short words that native English speakers tend to skim over or swallow will always be carefully pronounced by French speakers: the latter will say "peanoot boo-tair and jelly," whereas native English speakers opt for "pean't butt'r 'n' jelly."
Because French has no word stress (all syllables are pronounced with the same emphasis), French speakers have a hard time with stressed syllables in English and will usually pronounce everything with the same stress, as in actually, which becomes "ahk chew ah lee." Or they might stress the last syllable, particularly in words with more than two: computer is often said "com-pu-TAIR."

French-accented Consonants
H is always silent in French, so the French will pronounce happy as "appy."
Once in a while, they might make a particular effort, usually resulting in an overly forceful H sound, even in words like hour and honest, in which the H is silent in English. J is likely to be pronounced "zh," like the G in massage. R will be pronounced either as in French or as a tricky sound somewhere between W and L. Interestingly, if a word starting with a vowel has an R in the middle, some French speakers will mistakenly add an (overly forceful) English H in front of it; for example, arm might be pronounced "hahrm." The pronunciation of TH varies, depending on how it is supposed to be pronounced in English:
• voiced TH [ð] is pronounced Z or DZ: "this" becomes "zees" or "dzees"
• unvoiced TH is pronounced S or T: "thin" turns into "seen" or "teen"
Letters that should be silent at the beginning and end of words (psychology, lamb) are often pronounced.

Features for detecting accents
Spectrograms are pictorial representations of sound that we can use for speech recognition [11]. The x-axis represents time in seconds, while the y-axis represents frequency in Hertz. Different colors represent the different magnitudes of frequency at a particular time. We can therefore think of the spectrogram as an image.
Once the audio file is converted to an image, the problem reduces to an image classification task. Depending on the number of images, algorithms such as Support Vector Machines (SVM) are used to classify the sound and validate the speaker.
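As a concrete sketch of this audio-to-image conversion, the snippet below builds a spectrogram from a synthetic signal; the sampling rate, window length, and the SciPy-based pipeline are our own illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np
from scipy import signal

fs = 16_000                                    # assumed sampling rate in Hz
t = np.arange(fs) / fs                         # one second of "audio"
# synthetic utterance: a 440 Hz tone that jumps to 880 Hz halfway through
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))

# short-time Fourier analysis: frequency bins (rows) x time frames (columns)
f, frames, Sxx = signal.spectrogram(x, fs=fs, nperseg=512, noverlap=256)
img = 10 * np.log10(Sxx + 1e-12)               # dB scale; treat as a grayscale image

print(img.shape)                               # a 2-D "picture" of the sound
```

The resulting `img` array is exactly the kind of input an image classifier such as an SVM or a CNN would consume.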

Our proposed framework for detecting accents
We used different machine learning and deep learning models; the first one is a neural network with two convolutional layers trained on 5 different accents. We will focus on this 2-layer CNN for the rest of our work.
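Since the paper does not spell out the exact architecture (filter counts, kernel sizes, input resolution), the following is only an illustrative forward pass of a generic 2-layer CNN over a spectrogram-sized input, written in plain NumPy; every dimension below is an assumption.

```python
import numpy as np

def conv2d(x, kernels, stride=1):
    """Valid cross-correlation (the 'convolution' used in CNNs).
    x: (C_in, H, W), kernels: (C_out, C_in, kH, kW) -> (C_out, H', W')."""
    c_out, c_in, kh, kw = kernels.shape
    _, h, w = x.shape
    oh, ow = (h - kh) // stride + 1, (w - kw) // stride + 1
    out = np.zeros((c_out, oh, ow))
    for o in range(c_out):
        for i in range(oh):
            for j in range(ow):
                patch = x[:, i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[o, i, j] = np.sum(patch * kernels[o])
    return out

def max_pool(x, size=2):
    """2x2 max pooling, truncating odd borders."""
    c, h, w = x.shape
    return x[:, :h//size*size, :w//size*size].reshape(
        c, h//size, size, w//size, size).max(axis=(2, 4))

relu = lambda z: np.maximum(z, 0)

rng = np.random.default_rng(0)
spec = rng.standard_normal((1, 64, 64))         # fake 64x64 spectrogram, 1 channel
k1 = rng.standard_normal((8, 1, 3, 3)) * 0.1    # layer 1: 8 filters
k2 = rng.standard_normal((16, 8, 3, 3)) * 0.1   # layer 2: 16 filters

h1 = max_pool(relu(conv2d(spec, k1)))           # first conv + pool
h2 = max_pool(relu(conv2d(h1, k2)))             # second conv + pool
features = h2.reshape(-1)                       # flatten for a dense softmax head
W = rng.standard_normal((5, features.size)) * 0.01   # 5 accent classes
logits = W @ features
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax over the 5 accents
```

In practice one would use a framework such as PyTorch or TensorFlow and train the weights by backpropagation; the sketch only shows how a spectrogram image flows through two convolutional layers into a 5-way accent prediction.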

Dataset
Everyone who speaks a language speaks it with an accent. A particular accent essentially reflects a person's linguistic background. When people listen to someone speak with a different accent from their own, they notice the difference, and they may even make certain biased social judgments about the speaker. In this paper, we used the Speech Accent Archive [16]. It has been

Accent recognition metric
In order to provide an objective evaluation of the accent recognition task, we compute the overall accuracy, F1-macro, F1-micro and hamming loss [17].
These metrics are defined as follows. The overall accuracy is

accuracy = (tp + tn) / (tp + tn + fp + fn),

where tp, tn, fp, fn stand respectively for true positives, true negatives, false positives and false negatives. The Hamming loss is

Hamming loss = (1 / (N L)) Σ_i Σ_l [X_{i,l} ⊕ Y_{i,l}],

where ⊕ denotes exclusive-or, X_{i,l} (Y_{i,l}) is the boolean indicating that the i-th datum (i-th prediction) contains the l-th label, N is the number of data points and L the number of labels. With regular machine learning methods such as SVM, we obtained a low accuracy of 0.35. As expected, the impact of deep learning methods [14] is quite clear here.
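To make these definitions concrete, here is a small self-contained sketch computing them for single-label multi-class predictions; the example labels are invented. Note that for single-label data the micro-averaged F1 reduces to the overall accuracy, and the Hamming loss to the error rate.

```python
import numpy as np

def per_class_f1(y_true, y_pred, labels):
    """F1 score of each class, treated one-vs-rest."""
    f1s = []
    for l in labels:
        tp = np.sum((y_pred == l) & (y_true == l))
        fp = np.sum((y_pred == l) & (y_true != l))
        fn = np.sum((y_pred != l) & (y_true == l))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return np.array(f1s)

# invented ground truth and predictions over 5 accent classes
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 4])
y_pred = np.array([0, 1, 1, 1, 2, 0, 3, 3])
labels = np.unique(y_true)

accuracy = np.mean(y_true == y_pred)
f1_macro = per_class_f1(y_true, y_pred, labels).mean()   # every class weighs equally
f1_micro = np.sum(y_true == y_pred) / len(y_true)        # pooled decisions: = accuracy here
hamming = np.mean(y_true != y_pred)                      # fraction of mislabeled samples
```

The same numbers can be obtained from scikit-learn's `accuracy_score`, `f1_score` (with `average='macro'`/`'micro'`), and `hamming_loss`.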
We observe from Table 2 that the convolutional neural network achieves an accuracy of 0.65. However, we do not obtain an optimal score if we use too many layers in our model. The CNN architecture must be chosen according to how large the dataset is: adding layers unnecessarily to a CNN only increases the number of parameters, which hurts on a smaller dataset. Adding more hidden layers does yield better accuracy on larger datasets, as more layers with a smaller stride extract more features from the input data. In a CNN, the choice of architecture depends entirely on the requirements and on the data. Increasing the number of parameters unnecessarily will only overfit the network, and that is the reason why our CNN with 2 layers obtains better results than the one with 4.
A macro-average will compute the metric independently for each class and then take the average (hence treating all classes equally), whereas a micro-average will aggregate the contributions of all classes before computing the metric (hence favoring the most frequent classes).

Multi-class accent recognition metric
In this multi-class classification setting, we consider the ACC, AGF, AUC, and GI (Gini index) metrics.
We obtained the results shown in the confusion matrices for the 2-layer CNN and the SVM method. Some accents are recognized better than others; this can easily be explained by the size of the datasets corresponding to each accent. This result shows fairly well the limits of classical machine learning algorithms. This limit in the evaluation scores of classical machine learning models is also observed in the broad areas of network security [18] and computer vision [19]. In this specific application of accent recognition, we observe that an increase in the number of vocal samples does not lead to increased accuracy values. This difference is due to the lack of capacity of the SVM, which has difficulty processing information as complex as images.

2-layer CNN
Classes | ACC | AGF | AUC | GI

Table 4: Multi-class classification metric values using our proposed 2-layer CNN model. Table 4 indicates that the results are much more harmonized across the different accents. We still do not have a perfect match between the size of the dataset and the performance of the model, but the disparities between accents disappear.
We can observe from Table 3 and Table 4 that the classical machine learning methods are quite ineffective and that the deep learning methods stand out clearly in accent recognition; that is why we use the 2-layer CNN as a reference for the rest of the paper. In most cases, the SVM method is not powerful enough for us to obtain good accuracy. This can be explained by the results we obtained for the Gini index [15]. The values obtained by the index are quite low (negative values are treated as very low positive values), which means that in the case of the SVM, the spectrograms appear similar in nature: the SVM is not selective enough to clearly determine the accent (which is also shown by the AGF values). However, the SVM method should not be excluded entirely: for the Hindi and German accents, the SVM turns out to be more effective than all the deep learning methods used.
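The paper does not spell out how the Gini index is computed. One common convention in classifier evaluation, which we assume here purely for illustration, derives it from the AUC as Gini = 2·AUC − 1, so that values near zero (or negative) indicate classes that the model barely separates:

```python
import numpy as np

def auc_binary(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive example outranks a randomly chosen negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# invented one-vs-rest scores for a single accent class
scores = np.array([0.9, 0.8, 0.4, 0.35, 0.2])
labels = np.array([1, 1, 0, 1, 0])

auc = auc_binary(scores, labels)
gini = 2 * auc - 1   # near 0 or negative: the spectrograms of the classes look alike
```

Under this convention, the low Gini values reported for the SVM would correspond to AUC values close to 0.5, i.e., near-random separation of the accent classes.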
The total computing time is 1 minute and 23 seconds when our proposed model is executed on Google Colab using GPU.

Impact of Idiosyncrasies on Speech Spectrograms
We now study the idiosyncrasies of the French language and how they impact the corresponding spectrograms of the speech signals.
The spectrogram is a representation that allows us to observe the entire spectral decomposition of voice and speech in a single graphic.
This tool is precise, informative, and reliable for analyzing the characteristics of sound production. In a first-cut analysis, we associate the spectrogram with the temporal pace, the power profile, and segmentation. More extensively, there is a significant number of indicators, metrics, and tools, including the fundamental frequency and its derivatives, the alteration of voice and speech, and more generally the assessment of intelligibility. It is its ability to measure vocal alteration that interests us here. In our study, we focus primarily on two pieces of information given by the spectrogram: amplitude and frequency.

The un-diphthonged "y"
Firstly, we analyze the differences in the spectrograms for the word "Wednesday," where the French speaker is not supposed to use the "y" sound, as explained in French-infused Vowels. We compare the spectrograms of an English speaker and a French speaker for the sentence "and we will go meet her Wednesday at the train station." We can see, as expected, that at the end of the word (1.3-1.4 s for English and 1.05-1.1 s for French), the "y" is almost not pronounced at all by the French speaker, while the English speaker pronounces it clearly. Indeed, the frequencies used are relatively similar across the whole audio sample, but certain syllables are stressed at a much higher frequency by the French speaker; consequently, the corresponding amplitude is low in magnitude. This explains a clear difference in the perception of a word between a French speaker and an English speaker: the non-native speaker tends to pronounce English less loudly, but stresses certain syllables much more than an English speaker does.
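A rough way to quantify such a difference programmatically is to compare the spectrogram energy in the time window and frequency band where the final "y" glide would appear. The sketch below uses synthetic signals, and the window boundaries and band edges are placeholder assumptions, not measurements from the paper's recordings:

```python
import numpy as np
from scipy import signal

def band_energy_db(x, fs, t0, t1, f_lo=2000.0, f_hi=4000.0):
    """Mean spectrogram energy (dB) inside a time window and frequency band --
    a crude proxy for how strongly a segment such as a final "y" is voiced."""
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=512, noverlap=384)
    tm = (t >= t0) & (t <= t1)
    fm = (f >= f_lo) & (f <= f_hi)
    return 10 * np.log10(Sxx[np.ix_(fm, tm)].mean() + 1e-12)

# two synthetic 2-second "utterances": only the first voices a high band at 1.3-1.4 s
fs = 16_000
t = np.arange(2 * fs) / fs
base = np.sin(2 * np.pi * 300 * t)
glide = np.where((t >= 1.3) & (t <= 1.4), np.sin(2 * np.pi * 3000 * t), 0.0)

with_y = band_energy_db(base + glide, fs, 1.3, 1.4)     # "y" pronounced
without_y = band_energy_db(base, fs, 1.3, 1.4)          # "y" dropped
```

Applied to real recordings, a markedly lower band energy in the French speaker's window would mirror the visual difference described above.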

Voiced TH [ð] is pronounced Z or DZ
French people tend to say "zees" instead of "these." That is what we can see in the sentence "Please call Stella, ask her to bring these things from the store."
It is quite complicated to delimit the word "these" in this sentence because it is spoken quite quickly, so we delimit "bring these" instead, as the word "bring" does not pose a major problem for French speakers.
Here, we see that French speakers tend to diminish the importance of the word "bring" but accentuate the word "these," whereas English speakers pronounce the sequence "bring these" at the same frequency. We think that is why, for French speakers, the "th" sounds like "z": the closest sound to "th" in the French language is "z," so it is only natural for them to use it. Nevertheless, we think the reason why they accentuate it (when they could simply use the sound "z" more discreetly) lies in the role of words like "these," "the," and "this": they are articles, and French speakers tend to accentuate the most important parts of the sentence, which led this French speaker to diminish "bring" and accentuate "these."
Thus, French speakers' idiosyncrasies have a direct impact on the spectrograms of audio samples. We can then easily understand why these idiosyncrasies have a direct impact on the results of deep learning models: the first reason we use spectrograms when developing speech recognition systems is to turn an audio classification problem into an image classification problem. If the idiosyncrasies of a specific language have such an effect on spectrograms, this means that different languages produce different spectrograms, which should help the deep learning models achieve a better classification between English and French.

Conclusions and future work
In this paper, we have concluded that classical deep learning models are not powerful enough to accurately predict the accent of a user. Therefore, we decided to study the differences between tonal and non-tonal languages, in order to clearly identify the obstacles that prevent us from achieving better results in accent recognition. To fulfill this purpose, we devoted our analysis to the French accent, French being a non-tonal language. We studied the idiosyncrasies of French speakers: the characteristics of spoken French that have a direct impact on the pronunciation of English words by French speakers. In addition, we determined the consequences these idiosyncrasies have on spectrograms, and consequently on the accuracy of deep learning models. In the future, we would like to work further on the subject of French idiosyncrasies by building a model that determines whether an idiosyncrasy is present in an audio sample. This would allow us to more easily determine the presence of a French accent in an audio sample. Such accurate recognition of accents in a speech signal will lead to better automatic speech recognition systems.