A Novel Method in Predicting Hypertension Using Facial Images

Abstract: Hypertension is a crucial public health challenge among adults. This study aimed to develop a novel method for the non-contact prediction of hypertension using facial characteristics such as facial features and facial color. The data of 1099 subjects (376 men and 723 women) analyzed in this study were obtained from the Korean Constitutional Multicenter Study of the Korean medicine Data Center (KDC) at the Korea Institute of Oriental Medicine (KIOM). Facial images were collected and facial variables were extracted using image processing techniques. Analysis of covariance (ANCOVA) and the Least Absolute Shrinkage and Selection Operator (LASSO) were performed to compare and identify the facial characteristic variables that differ between the hypertension group and the normal group. We found that the most distinct facial feature differences between hypertension patients and normal individuals were facial shape and nose shape in men, and eye shape and nose shape in women. In terms of facial color, cheek color in men, as well as forehead and nose color in women, distinguished the hypertension groups from normal individuals most clearly. Judging by the AUC values, the prediction power for women is better than that for men. In conclusion, we explored and identified the facial characteristic variables related to hypertension. This study may provide new evidence for the validity of predicting hypertension using facial characteristics.


Introduction
Hypertension, also known as high blood pressure in layman's terms, is a chronic metabolic disease with a prevalence of over a billion people worldwide. In South Korea, hypertension accounted for 11.3% of all deaths per 100,000 population in 2017 [1]. The number of people suffering from chronic diseases who visited a hospital in 2015 was 14.39 million, with hypertension the most common condition (5.71 million) among them [2]. In addition, the total medical cost of hypertension treatment was assessed to be 13.4% (approximately 2.5 billion US dollars) [3]. This means that monitoring and regulating hypertension is fundamental to reducing the overall burden of disease in Korean society as well as to improving quality of life.
Hypertension is usually asymptomatic in the early stages and is mainly identified through targeted or opportunistic screening in primary care. Suboptimal blood pressure (BP) control, in conjunction with a lack of awareness of hypertension, has been a stumbling block in hypertension management, making routine BP screening necessary. Although the gold standard for hypertension diagnosis at present is cuff-based BP measurement, the development of self-screening technologies has the potential to increase the early detection of hypertension.
Apart from classic BP measurement, recent technologies have provided new possibilities for the evaluation of various parameters such as vital signals, speech, heart rate, and step count [4]. Several studies have utilized signal-derived BP measurements, such as photoplethysmogram and electrocardiogram signals, as indicators of BP [5][6][7]. A recent study also proposed that BP can be estimated using acoustic characteristics [8]. Such novel strategies are gradually being made compatible with smartphone health applications and wearable devices.
The face is the most observed region of a person over a lifetime, and judging a person's health from their facial appearance is a day-to-day occurrence. Researchers have made numerous attempts over the decades to find associations between facial appearance and health [9]. Recent studies have focused on facial shape to investigate the elements of faces that affect the perception of health [10]. A study on craniofacial anthropometry found that the face has many manifestations associated with increased susceptibility to specific diseases, and that distinct changes in the face can indicate a health issue [11]. There are also studies that associate facial shape with the diagnosis of diseases, where morphological characteristics of facial shape can be related to the presence of diabetes and hypertension [12][13][14]. Current approaches have also shown that skin color and texture contribute to perceived facial health. Moreover, many studies have been conducted to elucidate the association between skin color and hypertension [15,16].
In this study, we proposed that facial characteristics such as facial features or facial color can be used for hypertension screening in a non-contact, unconscious, and labor-free manner. Facial characteristics extracted from facial images through image processing techniques may be indicative of hypertension and have the potential to play a daily role in the self-screening of hypertension. This study aimed to develop a novel method by determining the possibility of predicting hypertension using facial characteristics.

Data Collection
Data analyzed in this study were obtained from the Korean Constitutional Multicenter Study of the Korean medicine Data Center (KDC) at the Korea Institute of Oriental Medicine (KIOM). For the analysis of facial characteristics, 1099 subjects (376 men and 723 women) were included. All eligible subjects met the following inclusion criteria: (1) age ranging from 18 to 90 years and (2) being healthy and not suffering from chronic diseases. Subjects who had undergone plastic surgery or facial reconstruction surgery for any reason were excluded. All subjects were required to sign an informed consent form, and this study was approved by the KIOM Institutional Review Board (I-0910/02-001).
Data on facial images were acquired in a standard environment. Facial images were taken with a neutral expression using a digital camera (DSLR Nikon D5300 digital camera, Nikon Corporation, Thailand, 2013) with an 85-mm lens under bilateral illumination at a fixed subject-camera distance of 1.6 m. A color chart used for color correction was attached to a photographic ruler that was placed approximately 1 cm below the chin. The image resolution was set to 2992 × 2000 pixels (the camera sensor provides a total of 24.16 million pixels).
The first image was the full-face frontal view, where subjects were asked to look directly into the camera with a neutral expression. Their heads were positioned so that the central points of the two pupils and the two points where the facial contour crosses the upper auricular perimeter were on the same horizontal line. The photographic ruler was placed below the chin in vertical alignment. The second image was the profile view, where the subject's face was turned approximately 90 degrees from the front. Subjects were requested to look straight ahead with a neutral expression, and the photographic ruler was placed below the chin in vertical alignment with the nose. Only one side of the face was shown, with the eye on the far side out of view. The central point of the pupil from the side and the upper auricular point were also on the same horizontal line.

Data Extraction
The data were collected by well-trained technicians under standard operating procedures covering the general characteristics of subjects and the facial characteristic variables. Facial images were collected and facial feature landmarks were extracted using image processing techniques reproduced from our previous studies [17,18]. Because RGB values are sensitive to illumination changes, the red (R), green (G), and blue (B) values of each pixel in the complexion regions were extracted and converted into L*, a*, b* values, which express color numerically: L* represents lightness, while a* and b* represent the green-red and blue-yellow color components.
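As a concrete illustration of this conversion, the sketch below implements the standard sRGB-to-CIELAB pipeline (gamma linearization, the sRGB/D65 matrix, and the L*a*b* transform) for a single pixel. It is a generic reference implementation, not the authors' extraction code:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert one sRGB pixel (0-255 per channel) to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    # Undo the sRGB gamma curve to get linear RGB
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> CIE XYZ (standard sRGB/D65 matrix)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ lin
    # Scale by the D65 reference white, then apply the Lab nonlinearity
    xyz /= np.array([0.95047, 1.00000, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16          # lightness
    a = 500 * (f[0] - f[1])      # green-red axis
    b = 200 * (f[1] - f[2])      # blue-yellow axis
    return L, a, b
```

Averaging the L*, a*, b* values of all pixels inside a complexion region then yields region-level color variables of the kind analyzed here.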
Facial landmarks and facial color areas used for analysis are illustrated in Figure 1. For the facial landmarks of the frontal view, the numberings shown were first marked on the right side of the face; the corresponding landmarks on the left side of the face were numbered by adding 100. For instance, landmark number 10 was the upper eyebrow point on the right and landmark number 110 was the upper eyebrow point on the left. Descriptions of the facial landmarks are presented in Table 1. Variables that involve two or more landmark numbers were defined as a length, ratio, width, or angle, as listed in Table 2. For example, FD_10_110 indicates the distance between the upper eyebrow point on the right and the upper eyebrow point on the left in the frontal picture.
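A distance variable such as FD_10_110 can be computed directly from extracted landmark coordinates. The minimal sketch below uses hypothetical pixel coordinates; only the numbering convention (10 on the right, 110 on the left) follows the paper:

```python
import math

# Hypothetical frontal-view landmark coordinates in pixels
landmarks = {10: (1120.0, 820.0),    # upper eyebrow point, right
             110: (1870.0, 816.0)}   # upper eyebrow point, left

def fd(lm, i, j):
    """FD_i_j: Euclidean distance between frontal landmarks i and j."""
    (x1, y1), (x2, y2) = lm[i], lm[j]
    return math.hypot(x2 - x1, y2 - y1)

fd_10_110 = fd(landmarks, 10, 110)  # inter-eyebrow distance in pixels
```

Ratio, width, and angle variables follow analogously from two or more landmark coordinates; the photographic ruler in each image presumably allows pixel distances to be converted to physical units.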


Group Classification
For the diagnosis of hypertension, we referred to the 2018 Korean Society of Hypertension guideline for the management of hypertension, where hypertension is defined using a threshold of >140/90 mmHg [19]. Physician-diagnosed hypertension referred to subjects who had been diagnosed with hypertension at least once by a doctor, whether or not they were currently taking antihypertensive medicine to control their BP. Hence, the hypertension group in this study included subjects with systolic BP (SBP) readings of ≥140 mmHg and/or diastolic BP (DBP) readings of ≥90 mmHg, as well as subjects with physician-diagnosed hypertension, regardless of medication status. Subjects with SBP readings of <120 mmHg and DBP readings of <80 mmHg, including those who had fully recovered from hypertension through treatment, were classified in the normal group.
Two statistical tests, independent two-sample t-tests and analysis of covariance (ANCOVA), were performed using IBM SPSS Statistics 23.0 for Windows (IBM, Armonk, NY, USA) at a significance level of 0.05. The t-test was used to test the differences between the hypertension group and the normal group in general characteristics such as age, height, weight, BMI, SBP, and DBP for both genders. The statistical results of the t-tests are presented as mean (standard deviation). The ANCOVA, with age and BMI as covariates, was performed separately for men and women to compare the differences in facial characteristic variables between the hypertension group and the normal group. The statistical results of the ANCOVA are presented as adjusted mean (standard error), adjusted by age and BMI.
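The grouping rule above can be sketched as a simple function. The thresholds come from the text; the `physician_dx` flag and the exclusion of the intermediate (pre-hypertensive) BP range are our reading of the rule, so treat this as an illustrative sketch:

```python
def assign_group(sbp, dbp, physician_dx=False):
    """Assign a subject to the hypertension or normal study group.

    Returns None for the intermediate BP range, which (as we read the
    text) belongs to neither group.
    """
    if physician_dx or sbp >= 140 or dbp >= 90:
        return "hypertension"   # includes medicated, controlled cases
    if sbp < 120 and dbp < 80:
        return "normal"         # includes fully recovered subjects
    return None
```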
Additionally, statistical analyses using the Least Absolute Shrinkage and Selection Operator (LASSO) and receiver operating characteristic (ROC) curve analysis were conducted using R version 3.3.3 (Another Canoe, The R Foundation for Statistical Computing, Vienna, Austria, 2017). As the combined facial characteristic variables are correlated, LASSO, a variable subset selection method in logistic regression, was applied to select reliable combined indices while assessing the association between hypertension and the combined facial characteristic variables. The LASSO penalty selects some of the predictors and discards others when they are correlated [20][21][22][23]. In this study, the tuning parameter λ in the objective function of LASSO was obtained using the R function cv.glmnet, which performs cross-validation for tuning parameter selection [23,24].
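The cv.glmnet step can be mirrored in Python with scikit-learn's LogisticRegressionCV, which cross-validates the strength of an L1 penalty in the same spirit. The data below are synthetic and the variable layout is hypothetical; this is a workflow sketch, not the paper's analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 8))     # stand-in for age, BMI, and facial variables
true_beta = np.array([1.2, 0.8, 0.0, 0.0, 0.6, 0.0, 0.0, 0.0])
p = 1 / (1 + np.exp(-(X @ true_beta)))
y = rng.binomial(1, p)          # synthetic hypertension labels

# L1-penalized logistic regression; the penalty strength (glmnet's lambda,
# sklearn's 1/C) is chosen by 5-fold cross-validation on AUC
lasso = LogisticRegressionCV(Cs=10, cv=5, penalty="l1",
                             solver="liblinear", scoring="roc_auc").fit(X, y)

selected = np.flatnonzero(lasso.coef_.ravel() != 0)  # surviving predictors
```

Truly predictive columns tend to survive the penalty while noise columns are driven to exactly zero, which is the variable-selection behavior the study relies on.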
The values of the area under the ROC curve (AUC) and the confidence intervals of the AUC, used to evaluate the performance of the prediction models, were calculated using 5-fold cross-validation. In 5-fold cross-validation, the data set is divided into five subsets and the testing procedure is repeated five times; each time, one of the five subsets is used as the test set and the remaining four as the training set. The average accuracy across all five tests is then computed. In this study, we obtained the estimates and confidence intervals of the AUC using the R function ci.cvAUC, which calculates influence-curve-based confidence intervals for cross-validated AUC estimates [25].
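A simplified Python analogue of this evaluation: per-fold test-set AUCs from 5-fold cross-validation, summarized by their mean and a normal-approximation interval (ci.cvAUC itself computes tighter influence-curve-based intervals). Data and model here are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                     # placeholder predictors
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # placeholder labels

aucs = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True,
                                   random_state=0).split(X, y):
    model = LogisticRegression().fit(X[train], y[train])
    scores = model.predict_proba(X[test])[:, 1]
    aucs.append(roc_auc_score(y[test], scores))   # one AUC per held-out fold

mean_auc = float(np.mean(aucs))
half_width = 1.96 * np.std(aucs, ddof=1) / np.sqrt(len(aucs))
ci = (mean_auc - half_width, mean_auc + half_width)
```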

General Characteristics of the Subjects
The general characteristics of the subjects are summarized to compare the differences between the hypertension group and the normal group for both men and women.
The number of subjects in this research study was 1099 in total. There were 173 subjects in the hypertension group and 203 in the normal group among men, and 221 subjects in the hypertension group and 502 in the normal group among women. Significant differences between the hypertension group and the normal group were observed in most of the general characteristics in both men and women, except for the pulse, as shown in Table 3. The mean age of the men was 43.66 ± 15.84 years in the normal group and 55.30 ± 11.97 years in the hypertension group, whereas the mean age of the women was 44.93 ± 14.15 years in the normal group and 61.64 ± 11.62 years in the hypertension group. In both men and women, age, body weight, BMI, SBP, and DBP were significantly higher in the hypertension group than in the normal group.

Differences in Facial Characteristics among the Subject Groups
After adjustment for age and BMI, 10 facial feature variables showed significant differences in each gender, whereas three facial color variables in men and six facial color variables in women showed significant differences between the hypertension and normal groups. Out of the 10 facial feature variables, six in men were defined by facial shape and five in women were defined by the eyes (Tables 4 and 5). For the facial color variables, all three variables in men were cheek color variables, and three out of five variables in women were forehead color variables (Figure 2).


Association between Hypertension and Facial Characteristics
The LASSO method was performed to select the facial characteristic variables that could be used as hypertension predictors. Several facial characteristic variables were selected by the LASSO analysis; age and BMI were selected for both men and women. All facial characteristic variables selected by LASSO are shown in Tables 6 and 7, and illustrative figures of the facial characteristics are shown in Figure 3.


Accuracy of Prediction Models
Both ROC and AUC metrics were used to evaluate the prediction models' performance. ROC analysis was used to evaluate the accuracy of the prediction model in classifying subjects as hypertensive or non-hypertensive, whereas the AUC summarizes the prediction accuracy. In addition, 5-fold cross-validation was performed to obtain the estimates and confidence intervals for the AUC.
The AUC (95% CI) value in women was 0.827 (0.794-0.860), whereas the AUC (95% CI) value for men was slightly lower at 0.706 (0.652-0.760), as shown in Table 8. The ROC curves for LASSO using 5-fold cross-validation are shown in Figure 4.

Discussion
In our study, we utilized facial landmarks to investigate the differences in facial characteristics between hypertension patients and normal subjects. Facial landmarks extracted by the automatic extraction program provided the facial feature characteristics of our subjects, whereas CIELAB color space values provided their facial color characteristics. At the same time, this enabled us to make use of facial recognition technology in medical research.
In this study, facial characteristics, as well as demographic characteristics, were used as variables to predict hypertension. By using facial landmarks and the CIELAB color system, our study revealed the role of facial characteristics in predicting hypertension. We obtained several interesting results after statistically analyzing the associations between facial characteristic indices and hypertension together with demographic characteristics.

We found that facial shape width (jaw area) and nose length in men, in addition to eye shape and nose length in women, were the most distinct facial feature differences between hypertension patients and normal individuals. For facial colors, cheek color in men, as well as forehead and nose color in women, were the most distinct facial colors between the hypertension groups and normal individuals.
To find predictors of hypertension among the facial characteristics, we further analyzed our results using the LASSO method and evaluated the accuracy of the hypertension prediction model. According to our results, facial angle, eye length and angle, mouth ratio, forehead shape, nose tilting angle, forehead color, and cheek color variables were selected by the prediction model. As this study is at the exploratory stage, we only managed to explore the variables related to hypertension; a larger and more balanced sample is needed to formulate a prediction model thoroughly.
Looking at the AUC values, the model generated for women performed better, with an AUC (95% CI) of 0.827 (0.794-0.860). In comparison, men had a lower AUC (95% CI) of 0.706 (0.652-0.760), which we assume may be partly due to the smaller sample size. In general, both the men's and women's models can be considered to predict hypertension fairly well. This indicates that predicting hypertension using facial characteristics might be possible.
This study examined the possibility of predicting hypertension with a non-contact method. We believe that this approach can be further developed into fully unconscious screening of hypertension in daily life. For example, daily self-screening of hypertension could be performed simply by looking into a mirror that analyzes the facial characteristics of the individual. Such a method could provide an alternative for patients who are unable to monitor hypertension daily using labor-intensive conventional methods. This could also minimize health care costs through the early detection of hypertension and reach a target population with minimum inconvenience to patients, while ensuring that diagnoses of hypertension are not overlooked or made too late.
There are several limitations to this study. First, the sample sizes of the hypertension group and the normal group are unbalanced, which might introduce bias. Second, data from this study were collected by different technicians because the subjects were recruited from multiple sites across the country. Although all technicians were well trained and followed a standard procedure, inconsistencies may nonetheless exist in the data collection due to individual differences. In addition, there might be several confounding factors for hypertension that we were not able to consider, which limits the generalizability of the study. Therefore, further studies should include a balanced sample size and improved consistency in data collection, and should also include other known risk factors for hypertension.
We believe that this research work will contribute to the development of the study of hypertension in four important ways: first, by providing a wide perspective on the issues pertinent to the screening methods of hypertension; second, by critically examining the possibility of bringing a simpler approach to hypertension screening using facial characteristics; third, by analyzing both facial features and facial colors of hypertensive individuals through various methods, allowing a meaningful comparison between the facial characteristics of hypertensive and non-hypertensive individuals; and, fourth, by formulating a prediction model that enables non-contact and unconscious screening of hypertension in daily life.

Conclusions
In conclusion, we accomplished our exploratory aim of assessing the possibility of predicting hypertension using facial characteristics. We found that facial characteristic variables were associated with hypertension. Although more research is still needed to fully develop such technology, this study provides new evidence that the prediction of hypertension using facial characteristics could be valid and could be advanced into an effective screening program. In the future, we aim to recruit more subjects and conduct further research to create a real-time measurement platform based on this technology, with the hope of contributing to the field of big data as well as the new era of artificial intelligence.