Predicting Moral Elevation Conveyed in Danmaku Comments Using EEGs

Moral elevation, the emotion that arises when individuals observe others' moral behaviors, plays an important role in determining moral behaviors in real life. While recent research has demonstrated the potential to decode basic emotions from brain signals, there has been limited exploration of affective computing for moral elevation, an emotion related to social cognition. To address this gap, we recorded electroencephalography (EEG) signals from 23 participants while they viewed videos that were expected to elicit moral elevation. More than 30,000 danmaku comments were extracted as a crowdsourced tagging method to label moral elevation continuously at a 1-s temporal resolution. Then, by employing power spectra features and least absolute shrinkage and selection operator regularized regression analyses, we achieved a promising prediction performance for moral elevation (prediction r = 0.44 ± 0.11). Our findings indicate that it is possible to decode moral elevation using EEG signals and that small-sample neural data can predict the continuous moral elevation experience conveyed in danmaku comments from a large population.


Introduction
Affective computing is a field that employs machine learning methods to decode human emotions based on individuals' behavioral or neurophysiological responses [1,2]. Recent studies have successfully decoded an individual's emotions with neurophysiological signals recorded with electroencephalography (EEG) [3,4], functional magnetic resonance imaging (fMRI) [5,6], or functional near-infrared spectroscopy (fNIRS) [7,8]. While previous studies have proven the feasibility of affective computing based on brain signals, those studies have mainly focused on basic emotions (such as happiness, sadness, fear, and surprise), with less exploration of emotions related to social cognition [9,10]. Considering the differences in theoretical construction, neurocognitive mechanisms, and practical applications between basic emotions and emotions associated with social cognition [11-15], it is important to investigate the feasibility of affective computing for social-related emotions.
Moral elevation is a social-related emotion elicited when witnessing others' moral behaviors (e.g., doctors fighting against pandemics and saving lives) [16]. As a social emotion, moral elevation has been proven to play an important role in determining practical moral choices and behaviors in real life [17-19]. Previous studies have found that moral elevation can motivate prosocial behaviors such as inhibiting prejudice against gay men [20] and increasing donations to charity [21,22]. Moral elevation has also been suggested to improve well-being [23].
For example, when experiencing a higher moral elevation level, clinically depressed and anxious individuals reported higher closeness to others and lower interpersonal conflict [24]. Recording daily moral elevation experiences on the Internet was also found to reduce depressive symptoms and increase happiness [25].
While moral elevation experiences are usually measured by self-reports, progress in understanding the neural mechanisms of moral elevation is expected to support an objective and automatic measurement of moral elevation. For example, Englander et al. [26] found that brain regions including the medial prefrontal cortex (mPFC) and temporoparietal junction (TPJ) were activated specifically by moral elevation videos. Wang et al. [27] discovered coactivation of the left orbitofrontal cortex and left inferior temporal gyrus during picture-stimuli moral elevation experiences. These specific neural patterns during moral elevation suggest the potential to decode moral elevation from brain activity, yet the feasibility remains to be investigated.
The rapid-changing nature of affective experience also calls for attention when exploring the computing of moral elevation. Previous studies have often assumed that the affective state over a relatively long duration is stationary, for example tagging a whole video with the same affective label in video-based paradigms (usually for epochs longer than 10 s) [2]. As the affective state can change rapidly at a scale of seconds [28,29], this stationarity assumption may not always hold. Therefore, continuous, dynamic affective labels are preferred to enable decoding at a higher temporal resolution rather than using the same affective label for a whole video. Some studies have tried to address this issue with a continuous self-report method; for example, participants continuously moved a sliding bar with a mouse to label their emotion during the video [3,30]. However, these methods can be labor-intensive and time-consuming.
Internet-based crowdsourcing methods could offer a viable alternative for the continuous tagging of moral elevation [31-33]. One such method is danmaku comments, a popular type of commentary among Internet video audiences in East Asia [34]. Audiences can share real-time comments at any moment during video watching, and their danmaku comments are immediately displayed on top of the videos (see video screenshots in Fig. 1A), visible to other audiences. While each audience member may only post danmaku comments at some discrete time points, continuous emotion tagging for a whole video can be achieved by accumulating across many viewers (for example, when a video has more than 10,000 views). Moreover, the audience's emotional experience is among the information most frequently conveyed in danmaku comments [34]. Previous studies have proven the feasibility of extracting public-level emotions from danmaku comments. For example, Li et al. [35] proposed a framework that identifies multidimensional emotions from danmaku comments using natural language processing, and He et al. [36] revealed the potential of danmaku comments for promoting public crisis communication during COVID-19. These studies suggest that danmaku comments can be used to tag moral elevation continuously.
The present study aimed to explore the feasibility of affective computing for moral elevation based on brain signals. Continuous moral elevation tagging at a 1-s time resolution was achieved based on danmaku comments. EEG was used as the neuroimaging technique due to its high temporal resolution [37]. Twenty-three participants were invited to watch videos that were expected to elicit moral elevation while their EEG signals were recorded. The least absolute shrinkage and selection operator (LASSO) regularized regression was employed to predict the temporal dynamics of moral elevation using EEG power spectra features. Our results demonstrate the feasibility of decoding the moral elevation conveyed in danmaku comments using EEGs.

Acquisition of danmaku comments and construction of danmaku comment dictionary
Following the practice in the setup of emotion stimuli datasets [38,39], 8 research assistants were invited to empirically select videos that could elicit moral elevation from bilibili.com (one of the most popular online danmaku video platforms in China, with about 80,000,000 daily active users on average) according to the criteria of over 90,000 views and over 60 danmaku comments per second. Ten videos with real-time danmaku comments were selected after the primary screening. This number of videos is comparable with previous studies [38,39].
All danmaku comments of the 10 videos before 29 July 2020 were crawled using a customized Python script with an HTTP request module. These comments were segmented into single words or phrases using Jieba, a Chinese word segmentation tool [40]. Words or phrases without emotional information, such as negators, stop words, and degree words, were excluded according to the dictionary construction principle of previous NLP studies [35,41], leaving 321 words and phrases for the dictionary. Nevertheless, the negators and degree words were also considered during emotion tagging; see more details in the "Moral elevation tagging with danmaku comments" section.
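The dictionary-filtering step above can be sketched as follows. This is a minimal illustration that assumes the comments have already been segmented into tokens (the study used Jieba, e.g. `jieba.lcut`); the stop-word, negator, and degree-word sets below are small placeholders, not the lists used in the study.

```python
# Placeholder token sets for illustration only; the study used curated
# Chinese NLP word lists [35,41].
STOP_WORDS = {"的", "了", "是"}
NEGATORS = {"不", "没"}
DEGREE_WORDS = {"有点", "更", "很", "最"}


def filter_tokens(segmented_comments):
    """Collect candidate dictionary entries from segmented comments.

    Negators and degree words are excluded from the dictionary itself,
    but they are kept aside and used for weighting during tagging (Eq. 1).
    """
    candidates = set()
    for tokens in segmented_comments:
        for token in tokens:
            if token in STOP_WORDS | NEGATORS | DEGREE_WORDS:
                continue  # drop tokens carrying no emotional information
            candidates.add(token)
    return candidates
```

In the study, this filtering reduced the segmented comments to the 321 words and phrases kept for the dictionary.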
Then, 138 participants (76 females, mean age = 22 years, ranging from 17 to 33 years old) were invited to rate all 321 extracted words and phrases on 7-point Likert scales on the dimensions of touched feeling and elevation. The moral elevation score of each word or phrase was calculated as the mean of its touched-feeling and elevation ratings averaged across participants [42]. The moral elevation scores were further empirically recoded into 0, 1, and 2, corresponding to calculated scores of <2.5, 2.5 to 5.5, and >5.5, respectively. A moral elevation dictionary was thus established, with 321 words and phrases scored 0, 1, or 2 according to the moral elevation ratings, as illustrated in Fig. 1B.
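The recoding step can be sketched as below — a minimal illustration assuming `ratings` maps each word to its raters' [touched feeling, elevation] 7-point scores; the threshold values follow the text.

```python
import numpy as np


def build_dictionary(ratings):
    """Map each word/phrase to a recoded moral elevation score in {0, 1, 2}.

    ratings: dict mapping word -> (n_raters, 2) array-like of
    [touched feeling, elevation] 7-point Likert ratings.
    """
    dictionary = {}
    for word, r in ratings.items():
        # Mean of touched feeling and elevation, averaged across raters.
        score = np.asarray(r, dtype=float).mean()
        if score < 2.5:
            dictionary[word] = 0
        elif score > 5.5:
            dictionary[word] = 2
        else:
            dictionary[word] = 1
    return dictionary
```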

Selection of video stimuli in EEG recordings
After constructing the danmaku comment dictionary, the 10 videos obtained from the primary screening were edited to preserve the most morally elevating scenes. The durations of the videos were 79, 136, 133, 108, 60, 149, 107, 84, 80, and 104 s, respectively. The 10 videos were then presented to a group of 49 participants (30 females, mean age = 23 years, ranging from 18 to 32 years, non-overlapping with the groups in the other experiments) to further validate whether these videos could elicit moral elevation. As part of a larger project, participants were asked to report their emotional experiences after watching each video on 7-point Likert scales on the dimensions of joy, sadness, disgust, anger, surprise, fear, touched feeling, elevation, valence, and arousal. Following the method in Ref. [42], the 3 videos with the highest average ratings of touched feeling and elevation (6.60 ± 0.19, 6.55 ± 0.17, and 6.41 ± 0.26, respectively) were selected out of the 10 as stimuli for moral elevation in the following EEG experiment. The 3 videos' contents were about the charity behaviors of a disabled beggar, assistance in a medical emergency from strangers, and fighting against a flood to protect people. The durations of the selected videos were 136, 84, and 80 s, summing to 300 s in total.

Participants for EEG recordings
We recruited 23 college students from Tsinghua University with normal hearing and normal or corrected-to-normal vision (10 females, mean age = 21 years, ranging from 17 to 24 years). All participants signed informed consent forms voluntarily and received financial remuneration. The complete study, including the preliminary and EEG experiments, was conducted following the Declaration of Helsinki and approved by the local Ethics Committee of the Department of Psychology, Tsinghua University (Protocol No. 201906).

Experiment procedure
As part of a larger project, these 23 participants were invited to watch 24 emotion-stimuli videos, including the 3 moral elevation videos and 3 neutral videos (i.e., documentaries about tool manufacturing or everyday scenery in cities). The videos were presented in random order. Participants were required to keep their heads and bodies steady during video watching. After watching each video, participants reported their emotional experiences on 10 dimensions ranging from 0 to 7, including joy, sadness, disgust, anger, surprise, fear, elevation, touched feeling, arousal, and valence. The inter-video interval was 30 s.
Participants' self-reported continuous emotional ratings during video watching were also obtained, following the practice of a previous study [3]. Specifically, after watching all the videos, participants rated their continuous real-time emotional experiences to 5 randomly chosen replayed videos a second time. This time, a vertical sliding bar was presented alongside the video screen, and participants could freely and continuously drag the bar with the computer mouse during video watching to rate their real-time feeling of the to-be-evoked emotion, with a larger y-axis coordinate of the sliding bar marking a stronger emotion. Each moral elevation video was rated by 5 to 6 participants.

Moral elevation tagging with danmaku comments
The moral elevation dictionary was then applied to the danmaku comments from the 3 selected stimulus videos to tag the moral elevation experience. The moral elevation experience was calculated within each 1-s non-overlapping time window as the weighted sum of the moral elevation scores of all the words and phrases within this period. Each word or phrase was graded by its dictionary score (i.e., 0, 1, or 2, as explained above) and multiplied by a weight. The weighted sums were further normalized within each video by dividing by the total number of danmaku comments in the video to reach the final score. The calculation is described in Eq. 1:

\[
\text{moral elevation score}_i = \frac{\sum_{j} w_j \times \text{danmaku phrase score}_j}{N} \tag{1}
\]

where i indicates the ith second of the video and j indexes the danmaku words or phrases in the ith second. w_j indicates the weight of the jth danmaku word or phrase: the weight was empirically coded as −1 if an associated negator such as "not" was identified, and as 0.75, 1.25, 1.5, or 2 for different levels of degree words such as "a little", "more", "very", and "most". N indicates the total number of danmaku comments in the video. In this way, the public's moral elevation experience was obtained for each 1 s of all 3 selected videos.

Equation 1 was designed to integrate important danmaku comment features and reflect them in the moral elevation scores. While the dictionary scores of the words and phrases in the danmaku comments appear explicitly in Eq. 1, the number of danmaku comments also contributes to the moral elevation scores in 2 ways: first, the total number of danmaku comments posted throughout the whole video, N, is used for normalization to make the scores more robust across videos with different amounts of danmaku comments; second, the number of danmaku comments within each 1-s segment implicitly contributes to the moral elevation scores by accumulating the scores from more words and phrases as the danmaku comments become temporally denser.

Fig. 1. The experiment procedure and the pipeline of data processing. (A) The calculation of the temporal dynamics of moral elevation in the videos. Danmaku comments in each stimulus video were first extracted as comment texts with time tags. Then, a natural language processing (NLP) method was used to calculate the moral elevation scores for each second of the videos using a danmaku comment dictionary. (B) The dictionary construction of danmaku comments. Danmaku comments from a cluster of moral elevation videos were extracted as plain texts without time tags. Texts were then segmented into single words or phrases. These words or phrases were rated in the pre-experiment to build a dictionary for moral elevation. (C) The pipeline for EEG processing. After pre-processing, EEG power spectra features for each second were calculated in the theta, alpha, beta, and gamma bands. LASSO regression with 5-fold cross-validation was performed to predict the dynamics of moral elevation conveyed in danmaku comments.
After calculating the moral elevation scores for each 1-s segment within each video using Eq. 1, we further applied data normalization at the inter-video level by computing z-scores over all of the 1-s segments across the 3 videos to compare them on a unified scale. The pipeline of moral elevation tagging is illustrated in Fig. 1A and B.
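The tagging and normalization steps can be sketched as below. This is a minimal illustration assuming each second's comments have already been segmented and matched against the dictionary; the modifier keys ("a_little", "very", …) are hypothetical stand-ins for the actual Chinese negators and degree words.

```python
import numpy as np

# Weights from the text: -1 for negated words, graded weights for degree words,
# and 1.0 when no modifier is attached (the "none" key is an assumption).
WEIGHTS = {"none": 1.0, "a_little": 0.75, "more": 1.25,
           "very": 1.5, "most": 2.0, "negated": -1.0}


def elevation_scores(seconds, dictionary, n_comments):
    """Eq. 1: per-second weighted sum of dictionary scores, divided by the
    video's total danmaku count N.

    seconds: list (one entry per 1-s window) of (word, modifier) pairs,
    where modifier is a key of WEIGHTS.
    """
    scores = []
    for tokens in seconds:
        total = sum(WEIGHTS[mod] * dictionary.get(word, 0)
                    for word, mod in tokens)
        scores.append(total / n_comments)
    return np.array(scores)


def normalize_across_videos(per_video_scores):
    """Inter-video z-scoring over all 1-s segments pooled from all videos."""
    pooled = np.concatenate(per_video_scores)
    mu, sigma = pooled.mean(), pooled.std()
    return [(s - mu) / sigma for s in per_video_scores]
```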
EEG preprocessing and feature extraction

The recorded EEG data were first notch filtered to remove the 50-Hz power-line noise, then band-pass filtered to 0.05 to 47 Hz. Independent component analysis was applied to remove artifacts related to eye movement; about 1 to 2 independent components were excluded from each participant's EEG. Data were then filtered into the frequency bands of theta (4 to 7 Hz), alpha (8 to 13 Hz), beta (14 to 29 Hz), and gamma (30 to 47 Hz). The filtered signals were segmented into non-overlapping 1-s segments corresponding to the moral elevation experience tags. The sum of squares of the 250-point values in each 1-s segment was then calculated to obtain the power spectra per channel per frequency band as the features for the follow-up analysis, leading to 32 (channels) × 4 (frequency bands) = 128 feature dimensions per second.
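The band-power feature extraction can be sketched with SciPy as below — a simplified illustration assuming a 250-Hz sampling rate (implied by the 250-point 1-s segments) and an input that has already been notch filtered, band-pass filtered, and ICA-cleaned; the filter order is an arbitrary choice.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # sampling rate implied by the 250-point 1-s segments
BANDS = {"theta": (4, 7), "alpha": (8, 13), "beta": (14, 29), "gamma": (30, 47)}


def band_power_features(eeg):
    """Compute per-second band power features.

    eeg: (n_channels, n_samples) array, already preprocessed.
    Returns (n_seconds, n_channels * 4) features, each the sum of squares
    of the band-filtered signal within a non-overlapping 1-s segment.
    """
    n_ch, n_samp = eeg.shape
    n_sec = n_samp // FS
    feats = []
    for lo, hi in BANDS.values():
        # 4th-order Butterworth band-pass, applied forward-backward.
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        filtered = sosfiltfilt(sos, eeg, axis=-1)
        segs = filtered[:, : n_sec * FS].reshape(n_ch, n_sec, FS)
        feats.append((segs ** 2).sum(axis=-1))  # power per channel per second
    return np.concatenate(feats, axis=0).T  # (n_sec, n_ch * 4)
```

With 32 channels this yields the 128-dimensional feature vector per second described in the text.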
As the emotion tagging was extracted from public danmaku comments, the EEG features were also averaged across participants to obtain group-level EEG features; this approach has been suggested to effectively reflect group-level emotional experience [3,38]. Finally, 128-dimensional EEG features were obtained for each second of each video.
Due to the relatively high-dimensional features (i.e., 128 dimensions) and the relatively small sample size (i.e., 300 s), LASSO regression was employed for feature selection and regression model building. The LASSO regression model is shown in Eq. 2:

\[
\hat{w}, \hat{b} = \arg\min_{w,\,b} \left\lVert y - \left(w^{\top} x + b\right) \right\rVert_2^2 + \lambda \lVert w \rVert_1 \tag{2}
\]

where y indicates a 1 × 300 (s) vector of the moral elevation scores, w indicates a 128 × 1 vector of LASSO coefficients, x indicates a 128 × 300 feature matrix, b indicates a shared bias, and λ is a hyperparameter for the penalty.
Then, a 5-fold cross-validation was conducted to evaluate the prediction performance of the EEG-based moral elevation decoding. Specifically, the 300 128-dimensional EEG features derived from the EEG responses during the moral elevation videos were split into 5 folds randomly, with 4 folds as the training set and 1 fold as the testing set. Pearson's correlation between the danmaku-based and LASSO-predicted moral elevation scores and the prediction normalized mean squared error (NMSE) were calculated. The lambda that minimized NMSE in the training set was chosen and applied to the testing set for prediction. The procedure was repeated 5 times to obtain 5 cross-validated r values. To avoid potential bias and imbalance introduced by the data split, we further repeated the whole LASSO regression procedure 100 times. The averaged r value was considered the prediction performance.
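The cross-validation procedure can be sketched with scikit-learn as below. The λ grid is a placeholder, and λ is selected on an inner validation split of the training folds — one way to read "minimized NMSE in the training set", since selecting λ directly on the training fit would always favor the smallest penalty.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, train_test_split


def crossval_lasso(X, y, lambdas=(0.001, 0.01, 0.1, 1.0), seed=0):
    """5-fold CV: pick lambda on an inner split of the training folds,
    then report Pearson's r on the held-out fold, averaged over folds.

    X: (300, 128) feature matrix; y: (300,) danmaku-based scores.
    """
    rs = []
    for train, test in KFold(5, shuffle=True, random_state=seed).split(X):
        tr, val = train_test_split(train, test_size=0.2, random_state=seed)

        def nmse(model, idx):
            # MSE normalized by the variance of the targets.
            return np.mean((y[idx] - model.predict(X[idx])) ** 2) / np.var(y[idx])

        best = min(
            (Lasso(alpha=lam, max_iter=10000).fit(X[tr], y[tr]) for lam in lambdas),
            key=lambda m: nmse(m, val),
        )
        rs.append(pearsonr(y[test], best.predict(X[test]))[0])
    return float(np.mean(rs))
```

Repeating this whole function with different random splits (100 times in the study) and averaging the returned r gives the reported mean ± standard deviation.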

Results
The self-report ratings for the moral elevation videos and the neutral videos are shown in Fig. 2. The "sadness" and "joy" dimensions are displayed because these 2 emotions were the most frequently mentioned and compared in previous moral elevation studies [11,16]. Paired t-tests with Bonferroni correction were conducted to compare the scores for each index between the neutral and moral elevation videos. Compared to the neutral videos, the moral elevation videos elicited higher moral elevation (P < 0.001), indicating the effectiveness of the stimuli. In addition, the moral elevation videos were rated as more arousing than the neutral videos (P < 0.001), but the 2 types of videos did not differ on the valence dimension (P = 0.148). Besides, the sadness dimension was scored higher for the moral elevation videos than for the neutral videos (P < 0.001), while the joy dimension was scored lower (P = 0.009).

Figure 3 demonstrates the temporal dynamics of moral elevation scores calculated from danmaku comments and self-reports for a representative video (>300,000 views, >25,000 danmaku comments). Similar trends between the danmaku-based and self-report scores can be observed (Pearson correlation r = 0.67, P < 0.001), suggesting the effectiveness of the danmaku-based moral elevation scores. Three time periods are highlighted with screenshots in the video: the first period introduces the tragic life experience of the protagonist; the second period records his charity behaviors toward homeless senior citizens; the third period records his self-confession of his motivation for charity.
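The rating comparison can be sketched as follows — a minimal SciPy illustration assuming per-participant mean ratings for one dimension in each condition; the Bonferroni factor of 10 matches the number of rated dimensions.

```python
import numpy as np
from scipy.stats import ttest_rel


def compare_conditions(elev_ratings, neut_ratings, n_comparisons=10):
    """Paired t-test for one rating dimension with Bonferroni correction.

    elev_ratings, neut_ratings: per-participant ratings (same order);
    n_comparisons: number of rating dimensions tested.
    """
    t, p = ttest_rel(elev_ratings, neut_ratings)
    return t, min(p * n_comparisons, 1.0)  # Bonferroni-adjusted p
```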
The correlations between the temporal courses of the group-level EEG power spectra features and the danmaku-based moral elevation scores are shown in Fig. 4A. Significant negative correlations were found for the power spectra of the beta and gamma bands at frontal and bilateral temporoparietal electrodes.

The LASSO regression achieved a cross-validated Pearson correlation r value of 0.44 ± 0.11 (mean ± standard deviation, calculated from 100 repetitions of 5-fold cross-validation) between the danmaku-based and LASSO-predicted moral elevation scores, corresponding to a prediction NMSE of 0.81 ± 0.17. The features selected for the moral elevation LASSO regression model re-emphasized the beta and gamma bands over the frontal and bilateral temporoparietal electrodes and highlighted the theta band over the left prefrontal and right parietal electrodes, together with the alpha band over the bilateral parieto-occipital electrodes, as shown in Fig. 4B. The predicted moral elevation scores are shown in Fig. 5A.
As a comparison, LASSO regression was also applied to predict valence from the EEG power spectra features. The cross-validated Pearson correlation r value was 0.45 ± 0.11, with a prediction NMSE of 0.87 ± 0.39. The predicted valence scores are shown in Fig. 5B. The LASSO regression for moral elevation thus achieved predictive performance comparable to that for valence.

Discussion
Recent studies in affective computing have primarily focused on decoding basic emotions using brain signals. However, the decoding of moral elevation, a social emotion elicited when witnessing others' moral behaviors, had yet to be explored. The present study achieved an EEG-based decoding of moral elevation for the first time, combining it with a danmaku-based continuous tagging method to capture the rapid-changing nature of affective experience. To ensure interpretability in the decoding process rather than pursuing high decoding accuracy, we opted for a straightforward LASSO-based method to demonstrate the feasibility of decoding moral elevation from EEG data. Indeed, our results suggest that EEG-based decoding of moral elevation is possible with such a simple method. More importantly, the high interpretability of LASSO offers the opportunity to take a closer look at the EEG features that contribute to the decoding: we found that the spatial patterns of the EEG features selected by LASSO echo previous neuroscience studies that identified moral elevation-related brain areas, thus providing a neural basis supporting the decoding. Our results suggest that it is possible to decode moral elevation at a 1-s temporal resolution from EEG signals. While comparing different decoding models is beyond the scope of the present study, future studies that integrate more advanced models are expected to further boost the decoding of moral elevation.

In the present study, EEG correlates of the temporal dynamics of moral elevation were reported for the first time. Significant correlations with moral elevation were found at frontal and temporoparietal electrodes, which were also highlighted in the features selected in the LASSO regression analysis. Englander et al. [26] discovered that the mPFC and TPJ are activated by moral elevation videos in fMRI recordings. As the after-effects of the moral elevation experience include reduced self-awareness and increased prosocial motivation [11,16,42], the engagement of the mPFC and TPJ could be explained by the functional role of the mPFC in dispositions of others and self, or interpersonal norms and scripts, and of the TPJ in temporary states such as goals and intentions [43]. Our study added supportive evidence from the EEG modality for the involvement of frontal and temporoparietal areas when experiencing moral elevation. Moreover, by decomposing the EEG signal into different frequencies, significant correlations with moral elevation were seen in the beta and gamma bands, which might be explained by their functional roles in general affective processing reported in previous studies [44-46]. Besides, although no significant correlation with moral elevation scores was observed in either the theta or alpha band in the univariable correlation analysis, the multivariable LASSO analysis highlighted the contributions of the alpha band at bilateral parietal electrodes and the theta band at left prefrontal and right parietal electrodes. In previous EEG-based hyperscanning studies, alpha-band activities at right centroparietal regions showed inter-brain synchrony when people interacted during joint actions [47]. The contribution of alpha-band features might thus be explained by its role in social interaction.
At the same time, a decrease of theta oscillations over right parieto-occipital clusters was found to correlate with greater sharing intention in previous studies [48], while theta activation covering the left orbitofrontal cortex was reported during morally bad judgment conditions [49]. These findings might together explain the contribution of theta-band features. While exploration of the physiological mechanisms behind the moral elevation experience is ongoing, our study provides EEG correlates of moral elevation from a computing perspective.
Furthermore, although moral elevation can be considered a positive emotion due to its potential benefits for well-being [23], it is important to note its differences from the classic positive emotion of happiness. Previous studies usually used comedic or funny videos to elicit positive emotions [3,50]; the videos used in the present study, however, elicit moral elevation by contrasting suffering with moral behaviors (such as the charitable actions of a disabled beggar). Thus, moral elevation here should not be interpreted as the classic "joy positive" but as the "inspiration positive" [38], as supported by participants' relatively high ratings on the "sadness" dimension and low ratings on the "joy" dimension. Further research exploring the distinctions between different types of positive emotions is needed to provide insight into the concept of "positivity" [38,51].
The present study also demonstrated the feasibility of danmaku comments as a source for crowdsourced tagging, which offers several advantages over traditional self-report tagging methods. First, danmaku comments are posted by audiences spontaneously during their daily video-watching activities, which provides higher ecological validity than self-reports collected in experimental settings. Second, with the development of NLP techniques, affective states can be calculated automatically from a video's danmaku comments, saving the time and labor of continuous self-reports. Third, danmaku-based tagging usually involves large crowds on the Internet. As our findings demonstrate that EEG data from 23 participants can effectively predict the moral elevation conveyed in danmaku comments, they suggest the possibility of predicting moral behaviors in a large population with small-sample neural data [52-54].

Establishing quantitative methods to measure psychological states has long been a challenging problem in computational psychology, known as the inverse problem [55]. As the first study to investigate the inverse problem of decoding moral elevation based on EEG recordings, there is room for further improvement. First, classical power spectra features were used in the present study; decoding performance is expected to benefit from more complex features and advanced machine learning methods in further studies. Moreover, while the present study aimed to decode moral elevation based on crowd-level tagging, more studies at the intra-individual level are expected in the future.
Electroencephalography-based decoding of moral elevation, defined as the emotion elicited when witnessing others' moral behaviors, was investigated in a video-watching paradigm among 23 participants. Our study sheds light on danmaku comments as a crowdsourced tagging source for continuously labeling moral elevation. Regression analyses revealed a promising decoding performance, suggesting the potential of using small-sample neural data to predict the moral elevation experience conveyed in danmaku comments from a large population.