Representation of the brain network by electroencephalograms during facial expressions

BACKGROUND
Facial expressions, such as smiling and angry frowning, produce physical and psychological effects in the body, a phenomenon known as 'embodied emotion' or the 'facial feedback theory.' In clinical applications of this theory to certain conditions, such as autism and depression, treatments such as having patients smile have been used. However, the neural mechanisms underlying the representation of facial expressions remain unclear.


NEW METHOD
We proposed a method to construct brain networks based on the time course of the synchronization likelihood and to determine the effects of various facial expressions made in response to visual stimuli of faces. This method was applied to electroencephalographic (EEG) data recorded during the recognition and representation of various positive and negative facial expressions. The brain networks were constructed from EEG data recorded in 11 healthy participants.


RESULTS
Channel sets from brain networks during unsymmetrical smiling expressions (i.e., only the right or left side) were highly linearly symmetrical. Channel sets from brain networks during negative facial expressions (i.e., anger and sadness) and symmetrical smiling expressions (i.e., smiling with an opened or closed mouth) were similar.


COMPARISON WITH EXISTING METHODS
While we obtained brain networks based on time course EEG correlations throughout the experiment, existing methods can analyze EEG data only at a certain time point.


CONCLUSIONS
The comparisons of different facial expressions could be used to identify the side of the facial muscles used while smiling and to determine how similar brain networks are induced by positive and negative facial expressions.


Introduction
Several attempts are being made to increase our understanding of the function, initiation, and evolution of emotions (Burkitt, 2019; Dolcos et al., 2020; Suslow et al., 2010), since understanding them is key to improving communication skills, understanding illnesses associated with emotional dysfunction (e.g., autism and depression), and distinguishing humans from other animals. However, the study of emotions is challenging because emotions are highly subjective and cannot be easily evaluated. Ekman regarded emotional facial expressions as universal characteristics among people across various countries and defined six basic facial expressions, i.e., happiness, sadness, fear, disgust, anger, and surprise (Ekman, 1999). This work has been expanded into many research fields, including facial perception/recognition research investigating the representations of facial expressions. Some studies have found that the representations of facial expressions influence emotional experiences (e.g., people with an inhibited representation of facial expressions have weaker self-reported emotional experiences (Davis et al., 2010)) and the speed of judging emotional sentences (e.g., people with a happy facial expression judge positive sentences faster than negative sentences (Havas et al., 2007)). In a well-known study, a pen was held between the participants' teeth to enhance smiling or between their lips to inhibit it (Chang et al., 2014); significant differences were identified in the left and right middle cingulum regions during the recognition of happy and sad facial expressions while holding a pen with the teeth vs. holding it with the lips, respectively, even if the participants did not recall those expressions.

Abbreviations: SL, synchronization likelihood; EEG, electroencephalogram; fMRI, functional magnetic resonance imaging; MEG, magnetoencephalography; SSC, Szymkiewicz-Simpson coefficient.

Brain network construction has been reported to help advance the understanding of neural functional activity (Yu et al., 2017).
In the present study, we aimed to record EEG and fMRI data separately, with the EEG analysis preceding the fMRI study. Herein, we present the EEG results obtained as the first step of our study. Our final goal is to reveal the effects of facial expressions on clinical conditions, especially in people with psychiatric disorders, such as autism and depression. First, we focused on EEG because of its affordability and applicability. We propose a method to construct brain networks based on the synchronization between brain regions and use it to distinguish tendencies during the representation of facial expressions prompted by visual stimuli. Most facial expression studies mainly discuss the perception/recognition of faces (Greening et al., 2018), the mimicry of faces (Chartrand and van Baaren, 2009), or forced facial expressions (such as holding a pen with the teeth or lips). In contrast, we focused on natural facial expressions that were easy for the participants to make: we asked the participants to make emotional or neutral faces while watching a movie presenting certain facial expressions.

Experimental scheme
We measured EEG data in 11 healthy participants (age range: 20-26 years; 5 females and 6 males; 10 right-handed and 1 left-handed). They made emotional or neutral faces while watching a movie consisting of eight facial expressions, as shown in Fig. 1.
Fig. 1. The images displayed were as follows: A: normal (control). B: smiling with a closed mouth. C: smiling only on the right side of the face with a closed mouth. D: smiling only on the left side of the face with a closed mouth. E: smiling with an opened mouth. F: surprise. G: anger. H: sadness.
The movie consisted of face images presented one by one, with a black screen between face images. One image, which was randomly chosen, stayed for 12 s in the middle of the screen, and subsequently, a black screen was shown for 9 s (SET in Fig. 2). Each image appeared twice in a movie, and two movies were presented with the faces in different orders.
Participants lay back and gazed at the movie displayed on the ceiling while listening to recorded fMRI measurement sounds played from a speaker situated next to their heads (Fig. 2). The sounds were played because we intended to compare the EEG data with fMRI data in further research to detect the brain areas active during facial expressions; additionally, it was important to consider that the sounds derived from the measurement could alter the participants' brain activity (Tomasi et al., 2005; Scarff et al., 2004). To reduce this effect on brain activity, the sound was maintained at the same decibel level throughout the experiment.
The participants watched two movies under different conditions. During the first movie, they simply gazed at the faces (condition 1). During the second movie, they gazed at the faces but also made the same facial expression as the displayed face images until they disappeared (condition 2). In condition 2, the participants were instructed to change their facial expression except while image A (normal face, control) was being displayed. The participants were asked to change their facial expression as quickly as possible after the image was shown and change it back to normal as quickly as possible after the image disappeared. The participants were able to rest between conditions.
In condition 1, the participants recognized the face with emotional information (e.g., happiness and anger). In condition 2, the participants made emotional faces during the face recognition. In both conditions, facial mimicry mechanisms could have been induced by face recognition using emotional information. Therefore, we compared the differences between conditions to focus solely on the conscious representation of emotional facial expressions.
The experimental procedures were approved by the Ethics Committee for Human Subject Research, Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology. Written informed consent was obtained from all participants before participation.

EEG measurement
EEG data were recorded with 19 electrodes (channels) placed according to the International 10-20 system: Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T7, T8, P7, P8, Fz, Cz, and Pz. In addition, a ground electrode was attached between the eyebrows, and reference electrodes A1 and A2 were attached to the earlobes to measure the average electrical activity of the head. The mean of A1 and A2 was subtracted from the signals of all channels. The sampling frequency was 1000 Hz.
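The re-referencing step described above can be sketched as follows. This is an illustrative Python translation (the study's processing was done in other software); the array shapes and names are our assumptions.

```python
import numpy as np

def rereference_linked_ears(data, a1, a2):
    """Re-reference EEG to the mean of the earlobe electrodes A1 and A2.

    data: array of shape (n_channels, n_samples); a1, a2: arrays of
    shape (n_samples,). Shapes and names are illustrative.
    """
    reference = (a1 + a2) / 2.0   # average earlobe activity
    return data - reference       # subtract the reference from every channel

# toy example: 19 channels, 1 s of data at 1000 Hz
fs = 1000
rng = np.random.default_rng(0)
eeg = rng.normal(size=(19, fs))
a1, a2 = rng.normal(size=fs), rng.normal(size=fs)
referenced = rereference_linked_ears(eeg, a1, a2)
```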

EEG analysis
We extracted the EEG data for the 12 s during which face images were shown in both conditions; thus, we obtained EEG data during the recognition of facial expressions in condition 1 and during the representation of facial expressions with recognition in condition 2. The data were decomposed into five frequency bands via Fourier transform. Subsequently, we calculated the time course of the SL (Stam and van Dijk, 2002; Montez et al., 2006) to construct a network: each electrode (node) pair whose SL peaks overlapped within a short time interval (16 ms) was connected by an edge. Then, the betweenness centrality of each electrode in the network was used to perform statistical analyses between facial expressions and conditions. The channels showing significant differences were gathered into channel sets, and the Szymkiewicz-Simpson coefficient was calculated to visualize the differences between any two channel sets in all combinations.
Details of the EEG analysis methods are described in the following sections.

Specific frequency bands
EEG data were band-pass filtered at 0.5-100 Hz or 0.5-200 Hz, with the Japanese power-line noise (60 Hz) removed, using the AP Viewer Program Version 5.01 (NoruPro Light Systems, Inc., Tokyo, Japan). The upper cutoff (100 or 200 Hz) was chosen to match the maximum frequency of the recording range.
The EEG data, collected for 12 s while face images were shown in both conditions, were reconstructed according to the frequency bands by Fourier transform and inverse Fourier transform using the fft function in R software (John Chambers et al., Bell Laboratories, Murray Hill, NJ, USA). The frequency bands were defined as follows: theta, 4-8 Hz; lower alpha, 8-10 Hz; upper alpha, 10-13 Hz; beta, 13-30 Hz; and gamma, 30-45 Hz.
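The forward/inverse Fourier reconstruction of a frequency band can be sketched as follows. The paper used R's `fft`; this is a hedged Python equivalent in which a band component is recovered by zeroing all Fourier coefficients outside the band (function names and the masking details are our assumptions).

```python
import numpy as np

def band_reconstruct(x, fs, lo, hi):
    """Reconstruct the component of signal x within [lo, hi) Hz by
    zeroing all other Fourier coefficients and inverse-transforming."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)   # keep only the band of interest
    return np.fft.irfft(spectrum * mask, n=len(x))

# the five bands used in the study
bands = {"theta": (4, 8), "lower alpha": (8, 10), "upper alpha": (10, 13),
         "beta": (13, 30), "gamma": (30, 45)}

fs = 1000
t = np.arange(12 * fs) / fs   # 12 s of data, as in the study
x = np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 35 * t)
theta = band_reconstruct(x, fs, *bands["theta"])
gamma = band_reconstruct(x, fs, *bands["gamma"])
```

Applied to the toy two-tone signal, the theta reconstruction recovers the 6 Hz component and the gamma reconstruction the 35 Hz component.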

Synchronization likelihood
We calculated the SL (Stam and van Dijk, 2002; Montez et al., 2006) of the data corresponding to each frequency band. The SL is a useful index of how neural activity in different brain regions is related or synchronized, and it is calculated between pairs of electrodes over a time course. In our study, 19 channels were measured, giving 171 electrode pairs. The calculation consisted of the following four steps. First, the state vector of channel k at time i was defined with time-delay embedding vectors:

X_{k,i} = (x_{k,i}, x_{k,i+L}, ..., x_{k,i+(m-1)L}),

where x_{k,i} is the time series of channel k at time i, L is the lag, and m is the dimension of the embedding vector in state space. The lag was defined as

L = fs / (3 × HF),

where fs is the sampling frequency (Hz), defined as 1000 Hz in this study, and HF is the highest frequency (Hz) of the frequency band of interest. The length of the state vector, L × (m − 1), was defined as

L × (m − 1) = fs / LF,

where LF is the lowest frequency (Hz) of the frequency band of interest. Second, we set the reference vector X_{A,i} and compared it with the vectors X_{A,j}, with j ranging from i − W2/2 to i − W1/2 and from i + W1/2 to i + W2/2 in steps of 1/fs. W1 and W2 were defined as follows, with n_rec = 10 and p_ref = 0.01:

W1 = 2 × L × (m − 1),
W2 = n_rec / p_ref + W1 − 1.

Fig. 2. Experimental setting.
Participant position when watching a movie on the screen. The movie repeated 16 SETs. SET = stimulation unit consisting of a face image appearing for 12 s and a black screen for 9 s.
Third, we calculated SL_{AB,i} between channels A and B at time i as

SL_{AB,i} = n_AB / n_rec,

where n_AB is the number of simultaneous recurrences (repetitions) in channels A and B:

n_AB = Σ_j θ(r_{A,i} − |X_{A,i} − X_{A,j}|) × θ(r_{B,i} − |X_{B,i} − X_{B,j}|),

where θ is the Heaviside step function and r_{k,i} is the critical distance chosen such that channel k has exactly n_rec recurrences around time i. Finally, these three steps were repeated to obtain the SL over a time course with an increment time s = 16 ms. The number of repetitions differed between frequency bands because of the different selections of W1 and W2.
The parameters for this study are listed in Table 1.
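The per-time-point SL computation can be sketched as follows. This is a simplified, illustrative Python reading of the fixed-recurrence SL of Stam and van Dijk (2002) and Montez et al. (2006), not the authors' R code: for each channel, the n_rec nearest embedded state vectors within the search window are treated as recurrences, and the SL is the fraction shared by both channels (n_AB / n_rec). The window handling and the Chebyshev distance are our assumptions; the toy parameters below are not those of Table 1.

```python
import numpy as np

def embed(x, i, L, m):
    """Time-delay embedding vector X_{k,i} = (x_i, x_{i+L}, ..., x_{i+(m-1)L})."""
    return x[i : i + (m - 1) * L + 1 : L]

def sl_at(xa, xb, i, L, m, w1, w2, n_rec=10):
    """Simplified synchronization likelihood between channels a and b at
    sample i, with the comparison window w1/2 <= |i - j| <= w2/2."""
    js = [j for j in range(i - w2 // 2, i + w2 // 2 + 1)
          if abs(j - i) >= w1 // 2 and j != i]
    va, vb = embed(xa, i, L, m), embed(xb, i, L, m)
    da = {j: np.max(np.abs(embed(xa, j, L, m) - va)) for j in js}
    db = {j: np.max(np.abs(embed(xb, j, L, m) - vb)) for j in js}
    rec_a = set(sorted(js, key=da.get)[:n_rec])  # n_rec closest states in A
    rec_b = set(sorted(js, key=db.get)[:n_rec])  # n_rec closest states in B
    return len(rec_a & rec_b) / n_rec            # SL_AB,i = n_AB / n_rec

# toy usage with small, illustrative parameters
t = np.arange(300)
x = np.sin(2 * np.pi * t / 25)
y = np.cos(2 * np.pi * t / 17)
```

For two identical signals the shared recurrences are all n_rec of them, so the SL is 1; for unrelated signals the SL falls toward p_ref.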

Time course of the SL
To study how synchronization changes during the recognition and representation of facial expressions (12 s), the time course of the SL was plotted with respect to the increment time s (Fig. 3). The first and last portions (40 data points each) of the SL values were omitted (indicated in red in Fig. 3). For the theta, lower alpha, upper alpha, beta, and gamma bands, the numbers of repetitions were 725, 742, 744, 746, and 748 for the calculation of the SL, and the omitted ranges of the SL were 10.9, 10.8, 10.8, 10.7, and 10.7 %, respectively. Then, we extracted the SL peaks during the experiment. Within the range of interest, we fitted a spline curve and calculated its peaks (Fig. 4), using the functions smooth.spline (in the stats package) and findpeaks (in the pracma package) in R. We set the degrees of freedom of the spline curve equal to the number of data points. A peak was defined as a value exceeding the mean SL + 0.2, with increasing SL values in the three steps before the peak and decreasing SL values in the three steps after it.
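This peak-extraction step can be sketched in Python as follows. The paper used R's smooth.spline and pracma::findpeaks; this hedged equivalent uses SciPy, and since degrees of freedom equal to the number of data points correspond to an interpolating spline, we fit with s=0 (that correspondence, and the exact rise/fall criterion, are our reading of the text).

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.signal import find_peaks

def sl_peaks(sl, offset=0.2, steps=3):
    """Detect SL peaks on a spline curve through the SL time course.

    A peak must exceed mean(SL) + `offset` and show `steps` increasing
    samples before it and `steps` decreasing samples after it.
    """
    t = np.arange(len(sl))
    curve = UnivariateSpline(t, sl, s=0)(t)   # df = n points -> interpolation
    idx, _ = find_peaks(curve, height=curve.mean() + offset)
    peaks = []
    for i in idx:
        if i < steps or i + steps >= len(curve):
            continue
        rising = np.all(np.diff(curve[i - steps : i + 1]) > 0)
        falling = np.all(np.diff(curve[i : i + steps + 1]) < 0)
        if rising and falling:
            peaks.append(int(i))
    return peaks

# toy SL trace: flat baseline with one clear synchronization bump
sl = np.full(101, 0.1)
sl[46:55] = [0.2, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0.2]
```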

Construction of networks
The networks were constructed from tightly related electrode pairs based on the overlapping of SL peaks. We assumed that electrode pairs whose SL peaks occurred simultaneously were tightly related to each other with regard to neural activity. Therefore, channel pairs were represented as edges in a network if ≥5 peaks of the SL appeared simultaneously (Fig. 5).
We transformed each network into a 19 × 19 adjacency matrix (A) for further analyses. The columns and rows were ordered identically by channel position. If there was an edge between channels i and j, the matrix elements were defined as a_{i,j} = a_{j,i} = 1; if there was no edge, a_{i,j} = a_{j,i} = 0.
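The edge rule and adjacency matrix above can be sketched as follows. This is an illustrative Python construction; representing each pair's simultaneous SL peaks as a list of peak times is our assumption about the bookkeeping.

```python
import numpy as np

def build_adjacency(pair_peaks, n_channels=19, min_peaks=5):
    """Build a symmetric 0/1 adjacency matrix from pairwise SL peaks.

    pair_peaks: dict mapping a channel pair (i, j) to the list of
    simultaneous SL peak times for that pair. An edge is drawn when the
    pair shows at least `min_peaks` simultaneous peaks.
    """
    A = np.zeros((n_channels, n_channels), dtype=int)
    for (i, j), peaks in pair_peaks.items():
        if len(peaks) >= min_peaks:
            A[i, j] = A[j, i] = 1   # a_ij = a_ji = 1 when an edge exists
    return A

# toy usage: pair (0, 5) has six simultaneous peaks, pair (1, 2) only two
A = build_adjacency({(0, 5): [3, 8, 15, 21, 30, 44], (1, 2): [5, 9]})
```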

Betweenness centrality
Betweenness centrality (Suzuki, 2009) is a measure of the importance of a channel in a network. In this study, the betweenness centrality was calculated from the adjacency matrix (A) of every network from all participants for each facial expression in a condition. It was calculated by the following equation:

C_b(i) = Σ_{j<k, j≠i≠k} g_jk(i) / g_jk,

where g_jk is the number of shortest paths between nodes j and k of the network's adjacency matrix (A), and g_jk(i) is the number of those shortest paths between j and k that pass through i. C_b(i) was calculated with the betweenness function (in the sna package) in R.
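The paper computed C_b(i) with R's sna::betweenness; the same formula can be evaluated for an undirected 0/1 network with Brandes' algorithm, sketched here in pure Python (the unnormalized convention matches the equation above).

```python
from collections import deque

def betweenness(A):
    """Unnormalized betweenness centrality C_b(i) = sum over pairs
    j < k (j, k != i) of g_jk(i) / g_jk, via Brandes' algorithm."""
    n = len(A)
    cb = [0.0] * n
    for s in range(n):
        sigma = [0] * n; sigma[s] = 1     # numbers of shortest paths from s
        dist = [-1] * n; dist[s] = 0
        preds = [[] for _ in range(n)]
        order, queue = [], deque([s])
        while queue:                      # BFS from source s
            v = queue.popleft()
            order.append(v)
            for w in range(n):
                if A[v][w]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1
                        queue.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]
                        preds[w].append(v)
        delta = [0.0] * n
        for w in reversed(order):         # accumulate path dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    return [c / 2 for c in cb]            # each undirected pair counted twice
```

For a three-node path 0-1-2 the middle node carries the single 0-2 shortest path, so its centrality is 1; the center of a star with three leaves scores 3 (one per leaf pair).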

Statistical tests
We collected all the betweenness centrality data from every participant for the different facial expressions for statistical testing. We performed the Wilcoxon rank-sum test, using the wilcox.exact function (in the exactRankTests package) in R, to identify significant differences in the betweenness centrality of the same channels between different conditions with the same facial expression (seven facial expression pairs, excluding the normal/control face image) or between different facial expressions under the same condition (8C2 = 28 facial expression pairs). Before applying the Wilcoxon rank-sum test, the Shapiro-Wilk test (shapiro.test function in R) was used to check whether the betweenness centrality data were normally distributed; a data set was considered non-normal if the p-value of the Shapiro-Wilk test was <0.01. Five or more channels with p-values <0.01 in the Wilcoxon rank-sum test were gathered as a channel set (the red points in Fig. 6). Note that 5 channels correspond to >25 % of the total number of channels (19).
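The channel-set selection can be sketched as follows. The paper used R's wilcox.exact; this hedged Python sketch uses SciPy's exact Mann-Whitney U (equivalent to the exact Wilcoxon rank-sum test), and the data layout and helper name are our assumptions.

```python
from scipy.stats import mannwhitneyu

def channel_set(bc_a, bc_b, channels, alpha=0.01, min_size=5):
    """Gather the channels whose betweenness centrality differs between
    two groups. bc_a, bc_b map channel name -> betweenness values across
    participants. Returns the channel set when at least `min_size`
    channels reach p < alpha, otherwise None."""
    significant = set()
    for ch in channels:
        # exact Wilcoxon rank-sum (Mann-Whitney U) test per channel
        p = mannwhitneyu(bc_a[ch], bc_b[ch], method="exact").pvalue
        if p < alpha:
            significant.add(ch)
    return significant if len(significant) >= min_size else None

# toy usage: six channels with clearly separated vs. overlapping values
chans = ["Fp1", "Fp2", "F3", "F4", "C3", "C4"]
low = {ch: [float(v) for v in range(8)] for ch in chans}
high = {ch: [float(v) for v in range(100, 108)] for ch in chans}
```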

Szymkiewicz-Simpson coefficient
Channel sets expressing differences between conditions were analyzed further. To visualize the differences between any two channel sets in all combinations, we calculated the Szymkiewicz-Simpson coefficient (SSC) (Vijaymeena and Kavitha, 2016). The coefficient, which evaluates the similarity between two sets in terms of the degree to which they overlap, is useful for comparing channel sets. It is calculated by dividing the size of the intersection by the size of the smaller of the two sets (X, Y):

SSC(X, Y) = |X ∩ Y| / min(|X|, |Y|).

We calculated the SSC as a similarity measure between two sets (the similarity SSC) and also as a difference measure with respect to left-right symmetry, by inverting one set to the right and left (the difference SSC). For two sets (X, Y), we inverted set X to the right and left to obtain X⁻¹ and calculated the SSC between X⁻¹ and Y. Therefore, the difference SSC was calculated by the following equation:

dSSC(X, Y) = SSC(X⁻¹, Y) = |X⁻¹ ∩ Y| / min(|X⁻¹|, |Y|).

Fig. 7 shows an example of the SSC calculation. In this study, we set the SSC threshold at 0.5.
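Both coefficients can be sketched directly from their definitions. The left-right mirror mapping of 10-20 channels below is implied by the X⁻¹ notation but is our assumption about its exact form (midline channels map to themselves).

```python
def ssc(x, y):
    """Szymkiewicz-Simpson (overlap) coefficient between two channel sets."""
    if not x or not y:
        return 0.0
    return len(x & y) / min(len(x), len(y))

# left-right mirror pairs of the 10-20 montage used in the study
MIRROR = {"Fp1": "Fp2", "Fp2": "Fp1", "F3": "F4", "F4": "F3",
          "C3": "C4", "C4": "C3", "P3": "P4", "P4": "P3",
          "O1": "O2", "O2": "O1", "F7": "F8", "F8": "F7",
          "T7": "T8", "T8": "T7", "P7": "P8", "P8": "P7",
          "Fz": "Fz", "Cz": "Cz", "Pz": "Pz"}

def dssc(x, y):
    """Difference SSC: SSC between the left-right-inverted set X^-1 and Y."""
    x_inv = {MIRROR[ch] for ch in x}
    return ssc(x_inv, y)
```

For example, two purely left-sided and right-sided channel sets have ssc = 0 but dssc = 1, i.e., they are mirror images across the midline.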

Statistical tests
All p-values of the Shapiro-Wilk tests were <0.01; therefore, we performed Wilcoxon rank-sum tests for all data sets because they were not normally distributed. Five or more channels with Wilcoxon rank-sum p-values <0.01 were gathered as a channel set. We used only the channel sets in the gamma frequency band for further analysis because the channel sets in the other frequency bands were not remarkable.
Among all combination pairs (between different facial expressions in the same condition: 28 pairs; between different conditions with the same facial expression: 7 pairs), channel sets were constructed for 23 of the 28 pairs (82 %) between different facial expressions in condition 2 and 5 of the 7 pairs (71 %) between conditions 1 and 2 with the same facial expression. However, no channel set was constructed for pairs between different facial expressions in condition 1. Fig. 8 shows the channel sets with remarkable SSC findings in the gamma frequency band. As shown in Fig. 8, the channel sets between conditions 1 and 2 with the same facial expressions are represented on the upper left, and the channel sets between different facial expressions in condition 2 are represented on the right; the pairs of two sets are encircled in two colors (orange and blue), and the SSC values written next to the pairs are enclosed within squares of the same color as that of the pairs. The numbers within orange rectangles in Fig. 8 are the SSCs between the normal and inverted images (the difference SSC, also listed in Table 2), and the numbers within blue rectangles in Fig. 8 are the SSCs between the normal images (the similarity SSC, also listed in Table 3).

Comparisons by SSC
Between conditions 1 and 2 with the same facial expressions, the two channel sets for images C and D had a difference SSC of 0.75 (left in Fig. 8). Between the different facial expressions in condition 2, the channel set between images C and D exhibited a high left-right symmetry because just one channel on the right side (P4) was missing from the corresponding channel on the left side (P3; the single-channel set in the middle of Fig. 8). Additionally, between the different facial expressions in condition 2, difference SSC values >0.5 were observed between the channel sets of images A-C and A-D, F-C and F-D, B-C and B-D, E-C and E-D, G-C and G-D, and H-C and H-D (the two bottom horizontal lines of the combined images from left to right in Fig. 8; also shown in Table 2). In summary, the SSC values between smiling on the right and on the left (images C and D) are listed in Table 2.
Moreover, similarity SSC values >0.5 were observed between the channel sets of images F-G and F-H, B-G and B-H, E-G and E-H, C-G and C-H, and D-G and D-H (the left two vertical lines of the combined images from top to bottom in Fig. 8; also shown in Table 3) and between the channel sets of images C-B and C-E, D-B and D-E, G-B and G-E, and H-B and H-E (the right two vertical lines of the combined images from top to bottom in Fig. 8; also shown in Table 3). The SSC values between anger and sadness (images G and H) and between smiling with a closed or an opened mouth (images B and E) are listed in Table 3.
Additionally, we obtained channel sets in condition 2 between images A and F, B and E, B and F, and E and F, and between conditions 1 and 2 for images B, E, and F. However, these results are not shown in Fig. 8.

Fig. 7. Example of the SSC calculation.
SSC(X, Y) is a similarity measure, and SSC(X⁻¹, Y) = dSSC(X, Y) is a difference measure with respect to left-right symmetry. The red points enclosed with pink circles represent the sets X ∩ Y or X⁻¹ ∩ Y. In this example, X and Y differ with respect to left-right symmetry because dSSC(X, Y) is greater than 0.5.

Fig. 8. Channel sets showing significant differences (p-value < 0.01) between each facial expression.
Uppercase letters represent the images in Fig. 1. The participants were watching the images in condition 1 and making the same faces as the displayed images in condition 2. Condition 1 vs. 2 (upper left): comparison of the same images displayed in conditions 1 and 2. Condition 2 vs. 2 (right): comparison of different actual facial expressions. The numbers within orange rectangles are the SSCs between the normal and inverted images, representing the difference between the two images within the orange rectangle next to the number (also shown in Table 2). The numbers within blue rectangles are the SSCs between the normal images, representing the similarity between the two images within the blue rectangle next to the number (also shown in Table 3).

Discussion
Our objective was to identify the effects of various facial expressions using a new method for constructing brain networks based on the SL time course of EEG data. Using the analysis of betweenness centrality, it is possible to estimate differences in brain activity driven by the representation of different facial expressions, rather than by the recognition of different facial images, in terms of the synchronization of neural functional activity. Twenty-eight pairs (23 pairs between different facial expressions in condition 2 and five pairs between different conditions with the same facial expression) showed significant differences (p-value < 0.01) in ≥5 channels. These differences were found only between conditions 1 and 2 with the same images (differences between the recognition and representation of emotional faces) and between different images in condition 2 (differences between representations of emotional faces). These results indicate that such differences appear only in the conditions that include the representation of facial images.
The positions of the significantly different channels corresponded to the side of the facial muscles used in our study. In the channel sets between smiling only on the right and only on the left between conditions 1 and 2 (the upper left combined two images in Fig. 8), over 62.5 % (5/8 × 100 = 62.5 % in image C; 8/12 × 100 ≈ 66.7 % in image D) of the significantly different channels were found on the side opposite to the muscles used (e.g., channels on the left side of the head, rather than the right, showed significant differences when the right-sided facial muscles were used). Further, we found that the channels on one side of the head showed significant differences only if the facial muscles for smiling were used on that side alone. Therefore, the channel set between smiling only on the right and only on the left side of the face in condition 2 showed a high left-right symmetry (the middle single image in Fig. 8).
In other words, the symmetry of the channel set could be induced by the symmetry of the facial muscles used in our experiment. However, the differences in condition 2 between either of the symmetrical smile expressions (images B or E) and either of the unsymmetrical smile expressions (images C or D), i.e., between B and C, B and D, E and C, or E and D, were not found in ≥5 channels. These findings suggest that only differences in the muscle movements representing the same emotional information induced significant differences in betweenness centrality. Therefore, symmetrical facial expressions (using both sides of the facial muscles) do not differ when conveying the same emotional information, whereas unsymmetrical facial expressions (using only one side of the facial muscles) do. Additionally, differences in condition 2 between the symmetrical smile expressions themselves (images B and E) were not found in ≥5 channels; such differences were found only for smiling. The symmetry of facial expressions was not the main focus of this study; therefore, in the future, more symmetrical and unsymmetrical face images should be added to the experimental design for a deeper analysis of the symmetry of facial expressions.
Moreover, unsymmetrical smiling may have caused linearly symmetrical channel sets. By comparing the channel sets between smiling only on the right/left side and the other facial expressions (all except image A) individually in condition 2 (the bottom two horizontal lines of the combined images in Fig. 8), we found that the channel sets had a linear symmetry with respect to the midline of the head, based on high difference SSC values (>0.73) between the normal and inverted channel sets (Table 2). High difference SSCs may be induced by significantly different patterns of betweenness centrality in the representations of smiling on the right and on the left (the single image in Fig. 8).
These results support our previous notion that unsymmetrical smiling causes linearly symmetrical networks, which represent the opposite side of the muscle used (Watanabe and Yamazaki, 2019).
On the other hand, negative facial expressions (i.e., anger and sadness) and symmetrical smiling expressions (i.e., smiling with an opened or a closed mouth) might have caused similar channel sets. Comparing the channel sets representing anger/sadness with those of the other facial expressions (all except image A) individually in condition 2, the channel sets showed high similarity SSC values (>0.8) between normal channel sets (the left two vertical lines of the combined images in Fig. 8; also shown in Table 3). Likewise, comparing the channel sets representing smiling with an opened/closed mouth with those of the other facial expressions (all except image A) individually in condition 2, the channel sets showed high similarity SSC values (>0.72) between normal channel sets (the right two vertical lines of the combined images in Fig. 8; also shown in Table 3). These high similarity SSCs might have been induced by the similarity of the channel sets. We hypothesize that the representations of anger and sadness, and those of smiling with an opened or a closed mouth, are not significantly different. In fact, there were no significant differences (in ≥5 channels) in condition 2 between the representations of anger and sadness or between those of smiling with an opened and a closed mouth. This result supports the notion in our previous study that negative facial expressions produce similar networks (Watanabe and Yamazaki, 2019).
We focused on the movement of facial muscles because the differences in brain activity are affected by the representations of different facial expressions. Previous studies showed significantly more cheek activity in happy expressions than in either sad or angry ones (Smith et al., 1986; Likowski et al., 2012). Moreover, a study on the activity of facial muscles during the viewing of avatar faces (Likowski et al., 2012) found that the activity of the zygomaticus major muscle in the cheek was higher for happy faces than for neutral, sad, and angry faces, and the activity of the corrugator supercilii muscle used for frowning, which is a critical component of anger and sadness (Bos et al., 2016), was higher for sad and angry faces than for neutral and happy faces. However, there was significantly more brow activity in angry expressions than in sad ones (Smith et al., 1986). In particular, the strongest facial electromyographic reactions over the frontalis muscle of the forehead and the corrugator supercilii muscle regions were exhibited during an anger-provoking situation (Jäncke, 1996). Therefore, we hypothesize that the characteristics of the representations of happy (i.e., smiling) and angry/sad faces are based on the different facial muscles used.

Table 2. Difference SSC (dSSC). SSCs between the normal and inverted images, corresponding to the numbers within orange rectangles in Fig. 8. The letters, from A to H, indicate the images in Fig. 1.

Table 3. Similarity SSC. SSCs between the normal images, corresponding to the numbers within blue rectangles in Fig. 8. The letters, from A to H, indicate the images in Fig. 1.
Our findings suggest that (1) the zygomaticus major muscle for smiling may show differences between the right and left sides of the face; (2) the zygomaticus major muscle may induce similar channel sets in symmetrical smiling with an open or closed mouth; and (3) the corrugator supercilii muscle for both angry and sad faces may induce similar channel sets in anger and sadness. As our method focused on channel synchronization, muscle activities were assumed to have been caused by the differences in neuron synchronization patterns.
The main limitation of our study is that the representations of facial expressions were personalized and varied, even within participants. Additionally, the perception of faces may not have been stable because of different mental states on the test day, inter-individual differences in personality, or fatigue during the experiments. To minimize these limitations, we kept each EEG measurement short (<6 min) to prevent fatigue and analyzed all the data sets across participants for each facial expression to account for inter-individual differences in the results.

Conclusion
The comparison of channel sets across different facial expressions could be used to identify the side of the facial muscles used in smiling and to determine how similar networks are induced by positive (i.e., smiling with an opened or a closed mouth) and negative facial expressions (i.e., anger and sadness) in terms of the differences in neuronal synchronization patterns. Our findings suggest that the representations of facial expressions affect the betweenness centrality of the networks based on the SL in the gamma frequency band. A previous SL study of epileptic seizures mainly analyzed the mean amplitude of the SL (Montez et al., 2006); in contrast, we could distinguish different facial expressions using the SL in terms of its time changes and peaks. The time course of the SL thus provides more information than its mean. Moreover, the betweenness centrality of channels is a more useful index in network analyses than small-worldness (Bassett and Bullmore, 2006), which was used in our previous study (Watanabe and Yamazaki, 2019): betweenness centrality provides more detailed information because it is an index of the nodes in a network, whereas small-worldness is an index of a whole network. In summary, the set of channel pairs that simultaneously increase their synchronization levels incorporates important channels, i.e., significantly different channels, into its network and can distinguish the representations of different facial expressions. In particular, the representations of positive and negative facial expressions show specific characteristics. Clarifying the mechanism underlying the representation of facial expressions, especially positive ones, could improve the outcomes of clinical treatments (such as laughing therapy (Lee et al., 2020)) based on the facial feedback hypothesis (Krstovska-Guerrero and Jones, 2013; Gehricke and Shapiro, 2000; Finzi and Rosenthal, 2016).
Specific characteristics might be derived from specific brain areas; for example, the perception of facial expressions might be related to the dorsal prefrontal cortex and superior temporal sulcus (Davis et al., 2010).
Further fMRI studies are needed to determine more specific differences between various brain areas. This study combined an EEG experimental scheme with the noise produced during fMRI measurement as a baseline with which to compare the fMRI data obtained in the near future. Additionally, future research should focus on the networks activated by observing facial expressions to gain a deeper understanding of facial expressions.

Declaration of Competing Interest
The authors report no declarations of interest.