
Predicting perceived visual complexity of abstract patterns using computational measures: The influence of mirror symmetry on complexity perception

  • Andreas Gartus ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    andreas.gartus@univie.ac.at

    Affiliation Department of Basic Psychological Research and Research Methods, Faculty of Psychology, University of Vienna, Vienna, Austria

  • Helmut Leder

    Roles Conceptualization, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of Basic Psychological Research and Research Methods, Faculty of Psychology, University of Vienna, Vienna, Austria

Abstract

Visual complexity is relevant for many areas, ranging from improving the usability of technical displays and websites to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective image properties to the perceived complexity of artworks and other images. It has been argued that visual complexity is a multidimensional construct consisting mainly of two dimensions: A quantitative dimension that increases complexity through the number of elements, and a structural dimension representing order that is negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity, consisting of a quantitative factor and a structural factor (in our case mirror symmetry), for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry that gives more influence to small deviations from symmetry greatly increased the explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications.

Introduction

Complexity is an important concept. Since the birth of complexity science in the 1980s, it has influenced almost every scientific field [1]. However, complexity is hard to define. To begin with, the term complexity has two different usages: We may speak of complex systems, regarding complexity as the quality that makes a system complex. On the other hand, we also often think of something as being more complex than something else; in that case, complexity is used as a quantity [2]. The topic of our study is computational methods for modeling the quantitative complexity of images. However, defining complexity as a quantity is also far from easy, as there are at least two complications: First, there are many different potential measures of complexity (the simplest being just the number of parts of an object; however, the relationships between these parts obviously also contribute to complexity). Second, such measures also depend on the viewer (and/or context), since what counts as a part can differ between observers [2,3].

Image complexity is also important for many areas of research and technology. For instance, visual complexity might affect the perception of elapsed time [4–7], while on the other hand, presentation duration also changes complexity perception [8,9]. Complexity influences aesthetic evaluation during the early perceptual analysis of objects of aesthetic interest [10–16]. Furthermore, it is also relevant for the design of technical displays [17–19], icon design [20], website development [21,22], the appearance of architecture [23,24], and many more areas.

In this study, we focus mainly on the prediction of the visual complexity of abstract patterns as a quantity perceived by an average or “naïve” observer. In particular, we compare different measures (and combinations of measures) of visual complexity and analyze their relation to perceived (and judged) average complexity. In addition, although it is not our main focus, we also analyze whether the predictive value of complexity measures varies between observers. In other words, we explore the relationship between objective properties of the images and the subjective process of perception. However, we restricted our study to purely visual features and did not investigate the influence of meaning on complexity perception, as e.g. [25] did. The semantic content of the abstract patterns used as stimuli is therefore not addressed; the material was chosen such that semantic content is negligible or at least very low.

Visual complexity has been studied extensively in the psychology of perception and other disciplines for about 100 years [18]. The idea of sensory input being perceived as simple or complex evolved from Gestalt psychology [26,27], whose law of “Prägnanz” was reformulated as the “principle of simplicity” [28]. Visual complexity was soon utilized to explain aesthetic preferences in a formal way [29–31]—although these authors presented fundamentally different aesthetic formulas: While Birkhoff [29] derived his aesthetic measure as order divided by complexity, Eysenck [30,31] conceptualized the formula as order multiplied by complexity.

The psychobiological approach of Berlyne [10,11] assumed that the arousal potential of a stimulus (which is thought to be associated with complexity, but also with novelty) is related to its hedonic value by an inverted U-shaped function. However, the results of studies testing these assumptions are mixed, and e.g. [32] concluded that inverted U-shaped curves are more the exception than the rule. Some authors have found the proposed inverted U-curves to a certain extent [24,33–36], while others [37–40] have found a linear relation between complexity and aesthetic evaluation. Recently, researchers [41] specifically analyzed individual differences in the relation between liking and complexity. They report that an across-participant analysis led to the proposed inverted U-curve. However, an analysis at the level of individual participants revealed that the inverted U-shape was actually composed of different individual liking functions. Thus, it is possible that the inverted U-curve is just an artifact of group-level analysis that systematically ignores individual differences.

As an additional complication, already briefly addressed in the first paragraph, visual complexity cannot be conceptualized as an exclusive property of stimulus images. For instance, it is well known that several factors can bias the perception of visual complexity. In Berlyne’s psychobiological theory [10,11], complexity is seen as a collative variable. Such variables are viewed as the result of comparisons within or between stimuli, but also with prior experience. Therefore, experience with a stimulus can change the judgment of its complexity. Specifically, [42] analyzed icon complexity and found a negative correlation of familiarity judgments with complexity ratings. Thus, the more familiar an icon appeared, the less complex it was perceived to be. Consistent with these results, it was found that unfamiliar stimuli are rated as more complex than familiar ones [43]. A recent study [44] was able to predict perceived visual complexity with greater-than-chance accuracy from individual EEG (electroencephalography) recordings while participants viewed artworks; thus, brain imaging techniques are also valuable tools for the study of human complexity perception. Furthermore, it has been suggested that asking participants simultaneously about multiple image constructs can also confound complexity ratings [45].

Moreover, it is not appropriate to describe visual complexity as a single dimension; instead, it has to be regarded as a multidimensional construct. As described above, empirical evidence for a preference for stimuli of intermediate complexity, as proposed in [10,11], is mixed. In an attempt to resolve these problems, Nadal et al. [46] argued that most studies so far treated visual complexity as a one-dimensional property, whereas regarding complexity as a multidimensional construct might explain the divergent results of preceding studies. They concluded that two dimensions of visual complexity (amount and variety of elements, and organization and grouping of the elements) can explain most of the variance; asymmetry might be added as a third dimension. Similar suggestions have been made by earlier researchers: E.g., [35] proposed a perceptual and a cognitive factor of complexity. Chipman [47,48] distinguished between a quantitative factor (related to the amount of elements) increasing complexity, and a structural factor (determined by different forms of structural organization, but mostly by symmetry) decreasing complexity. Ichikawa [8] confirmed these two factors and explained them by a fast and a slow cognitive process. In addition, he argued that individual differences in complexity ratings can be explained by different weights of the quantitative and structural aspects of the perceived patterns.

A major aspect of the structural factor of complexity proposed by Chipman [47] and Ichikawa [8] is symmetry. Symmetry is ubiquitous in the environment, and it can be detected fast and efficiently by human vision [49–59]. It is assumed that the human visual system has a preference for mirror symmetry, which is the most salient symmetry. Interestingly, this preference seems not to be shared by orangutans [60]. In particular, a vertical symmetry axis is easier to detect than a horizontal or oblique axis, and elements close to a symmetry axis are more important than distant elements (although contours can also be important). In sum, human symmetry detection can be characterized as both robust and sensitive to perturbation [57,59]: On the one hand, partly symmetric patterns can easily be distinguished from random patterns. On the other hand, small deviations from perfect symmetry can also be detected without much effort [58,61], although violations are easier to detect near the midline of a pattern [51]. Furthermore, McManus [62] speculated that small deviations from symmetry (“a little asymmetry”) might play a role in aesthetics and the arts. In contrast to this, [63] found that slightly asymmetric abstract patterns were (on average) less liked than symmetric patterns by art novices who rated the patterns at their own pace. An important biological form of bilateral (or mirror) symmetry is the human face (and body). Thus, it has been argued that the human preference for symmetry stems from the relevance of symmetry as a biological signal of health and good genes [34,64–68]. On the other hand, it has been argued that symmetry preference is a mere by-product of visual processing [69]. However, it may also be the case that both explanations are in fact true [70].

Clearly, symmetry is often preferred and evaluated as aesthetically pleasing. Using abstract patterns as stimuli, Jacobsen and Höfel [37–39] found symmetry to be the best predictor of the aesthetic judgment of beauty, while complexity was the second-best predictor. Furthermore, the association of symmetry with positive affective evaluation can also be found using implicit methods like the Implicit Association Test (IAT) [71–73]. There is also evidence for an automatic response to symmetry in the brain [74–76]. However, while some studies conclude that symmetry spontaneously elicits positive affect [77], the evidence for an automatic emotional response to symmetry in the strongest sense is not fully conclusive, and it seems that aesthetic evaluation also requires some intention [74,78]. An evolutionary algorithm was used in [79] as a new technique to generate patterns based on eye-movement measurements. The authors demonstrated that symmetric patterns also “evolve” when people are simply asked to look at the patterns they prefer; however, the increase in symmetry was larger when people were explicitly asked to look for symmetric patterns.

It has also long been known that symmetry can influence the perception of complexity. Attneave [80] already found that symmetric shapes are perceived as less complex than asymmetric shapes with the same number of sides. This can probably be attributed to the increase in redundancy caused by symmetry [81,82]. Furthermore, the preference for symmetric patterns over scrambled images has been explained by the reduction in complexity that goes along with a higher number of symmetry axes [83]. Thus, symmetry must be considered an important aspect of visual complexity [40,56,81,84–86].

Attempts to derive visual complexity from image properties often counted elements, like e.g. the “indefinitely extended straight lines which contain all the sides of the polygon” [29] (p. 34), the non-parallel sides of polygons [30], or the “number of elements in a pattern” [38] (p. 763). Extending these rather simple counting methods, algorithmic information theory (AIT) [87,88] and Kolmogorov complexity theory [89] tried to provide a mathematical formalization of visual complexity. Essentially, the visual complexity of an image is here defined as the length of the shortest program that can produce it. This measure can be approximated by the resulting file size when applying compression algorithms like ZIP or JPEG (Joint Photographic Experts Group) to an image [17,19,90,91]. In a unique approach, [4,5] combined measures of image edges and local symmetries with fuzzy entropy functions to compute the visual complexity of representational paintings; however, they differentiated only between three a priori defined complexity classes and concentrated exclusively on perceived time as a subjective measure of complexity. Several objective complexity measures were compared across the domains of music, affective pictures, and representational paintings in [92]. The authors demonstrated that the relations of complexity measures with other variables like pleasantness or arousal, as well as with perceived complexity, might to some extent be domain-specific [90] and could also vary with the gender of the viewers. Recently, a high correlation of perceived and predicted complexity (Spearman’s rank correlation coefficient rs = .832) was achieved by applying machine learning (artificial neural networks) to combine a large number of objective predictors of visual complexity [93]. Furthermore, [94] used particle swarm optimization in a similar way to determine optimal weights of a linear combination of predictors of the visual complexity of real-world images, and [95] utilized genetic programming to derive a measure of the visual complexity of texture images, also considering non-linear combinations of predictors. An obvious advantage of such computational measures of visual complexity is that they are not biased by factors like the familiarity of the observer with the images or individual preferences.
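To make the compression-based approximation of Kolmogorov complexity concrete, the following minimal Python sketch (our illustration, not code from any of the cited studies) uses generic DEFLATE compression via zlib as a stand-in for the GIF/PNG/JPEG formats named above; compressed bytes per pixel serve as a rough complexity proxy:

```python
import zlib
import numpy as np

def compression_complexity(image: np.ndarray) -> float:
    """Approximate Kolmogorov complexity as compressed bytes per pixel."""
    raw = np.ascontiguousarray(image, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / image.size

# A uniform image compresses far better than a random one:
flat = np.zeros((256, 256), dtype=np.uint8)
noise = (np.random.default_rng(0).random((256, 256)) < 0.5).astype(np.uint8) * 255
print(compression_complexity(flat))   # close to zero
print(compression_complexity(noise))  # much larger
```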

In sum, visual complexity has to be considered a multidimensional construct consisting of at least two dimensions: A quantitative factor positively related to complexity and linked to the number of elements or similar measures, and a structural factor negatively related to complexity, linked to order and mostly determined by symmetry. In addition, there might be considerable individual differences in complexity perception, which are partly determined by previous experience and interpretation of content. The goal of this study is to predict the perceived visual complexity of abstract patterns from combinations of objective (computational) properties of the individual images. Our approach is threefold: First, we investigate the structure and dimensionality of perceived visual complexity using easily interpretable linear models. Second, we construct a prediction model using more flexible machine learning algorithms. Third, because large individual differences might exist between participants that are ignored when only averaged ratings are used, we additionally analyze differences in complexity perception and compare prediction models between participants.

General methods

Overview

In a first step, linear combinations of image parameters are used to predict visual complexity by applying ordinary linear models. Using this approach, it is possible to construct parsimonious models that are easy to interpret. Subsequently, a machine learning algorithm is applied to the data, and the prediction results are compared with those of the linear models. Machine learning approaches can take better advantage of the information contained in the predictors, but the resulting models are more complicated and harder to interpret. However, they can at least serve as an approximation of the upper limit of prediction quality that is possible using the available image parameters as predictors.

To improve generalizability of the results, two independent sets of black-and-white abstract patterns were used in this work. The first stimulus set (Stimulus Set 1) contains 912 abstract patterns extended from the stimuli used in Gartus and Leder [63]. The second stimulus set (Stimulus Set 2) consists of the 252 abstract patterns introduced by Jacobsen and Höfel [37–39].

The patterns of these two stimulus sets were independently rated for visual complexity by between 29 and 64 participants (see individual method sections for details). For each pattern, the mean of the ratings was taken as the primary measure of perceived visual complexity. Subsequently, several potential objective predictors of visual complexity were computed from the stimulus images. As a first attempt to predict perceived visual complexity, several linear models containing one, two, or three of the predictors were calculated and compared. Then, machine learning was applied using different sets of predictors and the results were compared with the linear models. Finally, separate linear models were calculated for each participant to address individual differences.

Procedure

Participants.

Participants were mostly undergraduate psychology students of the University of Vienna who participated for course credit, plus a few volunteering graduate psychology students. All gave written informed consent prior to the study and had normal or corrected-to-normal vision. Participants were informed that they could quit the study at any time without further consequences and that the data would be used for scientific publication. All data were fully anonymized. The study lasted about 20–30 min, and we did not observe any discomfort in our participants. Since the study was not in any way harmful or invasive and did not induce stress in participants, no ethical approval was requested, which is in accordance with the ethical guidelines of the University of Vienna.

Rating procedure.

The rating of visual complexity was performed using E-Prime 2.0.8.90 (Psychology Software Tools, Inc., Pittsburgh, PA) and 19-inch monitors. Participants saw the images on a neutral light grey background (“silver”) from a distance of about 70 cm. Prior to the complexity ratings, participants were shown all images in advance (self-paced browsing of pages showing 42 to 48 patterns) to provide an overview of the range of complexity and to ensure mild familiarity with the patterns. Following [96] and [97], visual complexity was defined as “the amount of detail or intricacy of line in the picture” (German: “das Ausmaß der Details bzw. die Komplexität der Linienführung”). Participants rated all patterns on 5-point scales ranging from 1 = “very simple” (German: “sehr einfach”) to 5 = “very complex” (German: “sehr komplex”). All ratings were self-paced.

Calculation of predictors.

As in [92], and similar to e.g. [19,91,93], we calculated a wide range of potential predictors of visual complexity from the stimulus images using MATLAB R2013a (The MathWorks, Inc., Natick, Massachusetts, USA) and some selected image processing functions [98].

First, the ratio of compressed file size to image size in pixels was computed for GIF (Graphics Interchange Format), PNG (Portable Network Graphics), TIFF (Tagged Image File Format; Lempel-Ziv-Welch compression), and JPEG (Joint Photographic Experts Group; 90% quality) file compression. Since file compression can differ between the original image and the image rotated by 90 degrees [7], we used the average compression rate of both orientations. Then, four types of edge detection were applied to the images: Phase congruency (PHC; phasecong3.m) [98], Canny edge detection (CAN; MATLAB function “edge”) [99], perimeter edge detection (PER; MATLAB function “bwperim”), and root mean square (RMS) contrast [92,100,101]. From the resulting enhanced edge images, the mean (MN), the standard deviation (SD), as well as the product of mean and standard deviation (MNSD) [100] were calculated. Furthermore, the four types of edge images were also compressed using GIF file compression, and the ratio of compressed file size to image size was calculated as above. These calculations led to the following 20 (single and combined) predictors of visual complexity: GIF, PNG, TIF, JPG, PHCMN, PHCSD, PHCMNSD, CANMN, CANSD, CANMNSD, PERMN, PERSD, PERMNSD, RMSMN, RMSSD, RMSMNSD, PHCGIF, CANGIF, PERGIF, RMSGIF.
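As an illustration of how such predictors might be computed outside MATLAB, here is a hedged Python sketch of two of them: the GIF compression ratio (averaged over both orientations) and an RMSGIF-style combination of a local RMS contrast map with GIF compression. The window size and normalization are our assumptions, not taken from the original pipeline:

```python
import io
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def gif_ratio(arr: np.ndarray) -> float:
    """GIF-compressed file size divided by the number of pixels,
    averaged over the original and the 90-degree rotated image."""
    sizes = []
    for a in (arr, np.ascontiguousarray(np.rot90(arr))):
        buf = io.BytesIO()
        Image.fromarray(a).save(buf, format="GIF")
        sizes.append(buf.getbuffer().nbytes)
    return float(np.mean(sizes)) / arr.size

def rms_contrast_map(arr: np.ndarray, k: int = 3) -> np.ndarray:
    """Local RMS contrast: standard deviation within a k x k window
    (the window size is an illustrative choice)."""
    x = arr.astype(float) / 255.0
    mean = uniform_filter(x, size=k)
    mean_sq = uniform_filter(x * x, size=k)
    return np.sqrt(np.clip(mean_sq - mean * mean, 0.0, None))

def rms_predictors(arr: np.ndarray) -> dict:
    """RMS-based predictors analogous to RMSMN, RMSSD, RMSMNSD, RMSGIF."""
    rms = rms_contrast_map(arr)
    mn, sd = float(rms.mean()), float(rms.std())
    rms8 = (rms / max(rms.max(), 1e-9) * 255).astype(np.uint8)
    return {"RMSMN": mn, "RMSSD": sd, "RMSMNSD": mn * sd,
            "RMSGIF": gif_ratio(rms8), "GIF": gif_ratio(arr)}
```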

In addition, four more parameters as described in [102] were calculated: Assessment of preference for balance (APB) [103], deviation of the center of “mass” (DCM; Euclidean distance from the geometric center), mirror symmetry (MS) [104], and homogeneity (HG; relative entropy). In [102], these parameters were used as objective measures of balance and aesthetic preference; here, however, we employed them as structural measures to predict perceived visual complexity.

Because mirror symmetry turned out to be an important predictor of visual complexity, its calculation is described here in somewhat more detail than that of the other parameters (see [102] for a comprehensive description). As described in [104], the symmetry of a pattern is considered to be a measure of the similarity of pixels on opposite sides of an axis of reflection. Four symmetry axes were taken into account: Horizontal, vertical, main diagonal, and secondary diagonal. For each axis, corresponding pixels that have the same value on both reflection sides are counted and weighted in such a way that pixels close to the axis have more influence. Subsequently, the pixel counts are normalized to a value between zero and one. Let m and w be the height and width of an image. Then, to analyze symmetry with respect to the vertical symmetry axis, n = w/2 comparisons per row are needed if w is even, and n = (w−1)/2 comparisons if w is odd. Let furthermore X_ij be a binary variable that equals 1 if the pixel pair at row i and distance j from the axis matches and 0 otherwise, and let w_j be weights that decrease with the distance j from the axis. The symmetry s with respect to the vertical axis is then calculated as

$$ s = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} w_j\,X_{ij}}{m\,\sum_{j=1}^{n} w_j} \tag{1} $$

After calculating the symmetry for all four axes, the average of the four values is taken and multiplied by 100 (i.e., MS = 100 · (MS_1 + MS_2 + MS_3 + MS_4)/4), leading to an overall mirror symmetry between zero and 100. A more detailed description of the measure can be found in [104].
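A simplified Python sketch of this measure is given below. It compares the whole image with its reflection for each of the four axes and weights matches by a linearly decreasing function of the distance to the axis; the linear weights and the whole-image (rather than half-plane) comparison are simplifying assumptions, so this only approximates the measure from [104]:

```python
import numpy as np

def weighted_symmetry(img: np.ndarray, reflected: np.ndarray,
                      dist: np.ndarray) -> float:
    """Fraction of matching pixels between an image and its reflection,
    weighted so that pixels close to the axis (small dist) count more."""
    w = dist.max() - dist + 1.0                   # linearly decreasing weights
    matches = (img == reflected).astype(float)    # X_ij from Eq (1)
    return float((w * matches).sum() / w.sum())

def mirror_symmetry(img: np.ndarray) -> float:
    """MS in [0, 100]: mean weighted symmetry over the vertical,
    horizontal, and two diagonal axes of a square pattern."""
    n = img.shape[0]
    assert img.shape[1] == n, "diagonal axes require a square image"
    i, j = np.indices(img.shape)
    axes = [
        (img[:, ::-1],      np.abs(j - (n - 1) / 2)),               # vertical axis
        (img[::-1, :],      np.abs(i - (n - 1) / 2)),               # horizontal axis
        (img.T,             np.abs(i - j) / np.sqrt(2)),            # main diagonal
        (img[::-1, ::-1].T, np.abs(i + j - (n - 1)) / np.sqrt(2)),  # secondary diagonal
    ]
    return 100.0 * float(np.mean([weighted_symmetry(img, r, d) for r, d in axes]))
```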

Data analysis

Data were analyzed in R version 3.2.3 [105]. Linear model optimizations and corresponding plots were performed using the leaps package version 2.9 [106]. Random forests were calculated using the randomForest package version 4.6–12 [107] with standard settings. Linear mixed effects models were analyzed using the lme4 package version 1.1–12 [108] and the lmerTest package version 2.0–32 [109].

All linear models predicting perceived visual complexity from one, two, or three of the 24 predictors were calculated and compared using R2 (explained variance). Before calculating the models, all predictors were z-standardized (M = 0, SD = 1) to achieve comparable estimates. From these models, the five best for each number of predictors were investigated further. Our goal was to find parsimonious, easily interpretable models that can explain large amounts of variance. Residual plots and Q-Q plots of residuals were visually inspected to review homogeneity of variance (homoscedasticity) and normal distribution of residuals.
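The exhaustive search itself was done in R with the leaps package; a rough Python equivalent (our sketch, with hypothetical inputs X, y, and predictor names) could look like this:

```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression

def best_subset_models(X: np.ndarray, y: np.ndarray, names: list,
                       max_size: int = 3, top: int = 5):
    """Fit OLS models on every predictor subset of size 1..max_size
    (after z-standardization) and return the `top` models ranked by R2."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-standardize (M = 0, SD = 1)
    results = []
    for k in range(1, max_size + 1):
        for idx in combinations(range(X.shape[1]), k):
            cols = Xz[:, list(idx)]
            r2 = LinearRegression().fit(cols, y).score(cols, y)
            results.append((r2, [names[i] for i in idx]))
    results.sort(key=lambda t: t[0], reverse=True)
    return results[:top]
```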

Finally, we also applied random forests [110]. Random forests are a machine learning algorithm that uses a large number of decision or regression trees, each constructed from a different subset of predictors and data, to predict the output by majority class (in the case of classification) or by the mean output of all trees (in the case of regression, as used in this work). First, we used the same predictors for the random forests as in the linear models. That way, we could compare the linear models with the more powerful machine learning models. In addition, we also calculated random forest models using all predictors. Due to problems with multicollinearity, it was not feasible to include all predictors in the linear models; however, the machine learning models provide an estimate of the upper limit of prediction quality that is achievable using all available information. While the linear models were simply calculated using the full data sets, we used 10-fold cross-validation for the machine learning models to control for possible overfitting. In 10-fold cross-validation, the data is first split into 10 parts. Nine parts are used for learning, and the 10th part is used to test the prediction quality of the algorithm. This is repeated 10 times, using each of the 10 data parts only once for testing. This procedure yields predictions for the whole data set, while the algorithm is always tested on data it has not been trained on.
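A minimal Python sketch of this evaluation scheme, assuming a predictor matrix X and mean ratings y, could use scikit-learn instead of the R randomForest package:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def rf_cv_r2(X: np.ndarray, y: np.ndarray, seed: int = 0) -> float:
    """Out-of-sample random forest predictions via 10-fold cross-validation;
    returns the squared correlation of predictions with the ratings."""
    rf = RandomForestRegressor(random_state=seed)  # default settings
    pred = cross_val_predict(rf, X, y, cv=10)      # each case predicted once,
    return float(np.corrcoef(pred, y)[0, 1] ** 2)  # never by its own training fold
```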

In this work, model comparisons are primarily based on Pearson’s product-moment correlation coefficient (r) or its square (R2, explained variance). However, for the sake of completeness and comparability with other studies, Spearman’s rank correlation coefficient (rs), root mean square error (RMSE), and mean absolute error (MAE) are also reported.
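For reference, these measures can be computed as follows (a small Python helper, assuming paired arrays of predictions and mean ratings; note that R2 here is the squared Pearson correlation, as used throughout this paper, not 1 − SSres/SStot):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def performance_measures(pred: np.ndarray, rating: np.ndarray) -> dict:
    """The four comparison measures used in this work."""
    r = pearsonr(pred, rating)[0]
    rs = spearmanr(pred, rating)[0]
    return {"r": r, "R2": r ** 2, "rs": rs,
            "RMSE": float(np.sqrt(np.mean((pred - rating) ** 2))),
            "MAE": float(np.mean(np.abs(pred - rating)))}
```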

Stimulus Set 1

The first stimulus set consists of 912 abstract patterns selected from an extended set of patterns that were used by Gartus and Leder [63]. The patterns were generated by a simulated annealing [111] stochastic optimization algorithm programmed in MATLAB. The algorithm placed 36 to 44 black triangular elements in an 8 × 8 rectangular grid on a white background according to several criteria like number of objects and symmetry axes. The 912 selected abstract patterns contain 152 asymmetric and 76 symmetric patterns for each of five types of symmetry (1 or 2 orthogonal, 1 or 2 diagonal, or all 4 symmetry axes). In addition, Stimulus Set 1 also contains a “broken” version of each symmetric pattern. These broken patterns were created from the symmetric patterns by perturbation of a small 3 × 3 area of the pattern, thus producing just-not-symmetric versions of the symmetric patterns. See Fig 1 for example patterns.
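The following toy Python sketch (purely illustrative; the real stimuli were rendered from triangular elements and optimized by simulated annealing) conveys the idea of a vertically symmetric binary grid and its “broken” counterpart obtained by perturbing a small 3 × 3 area:

```python
import numpy as np

rng = np.random.default_rng(42)

def symmetric_pattern(n: int = 8, fill: float = 0.6) -> np.ndarray:
    """Binary n x n grid made vertically symmetric by mirroring a random
    left half (a stand-in for the actual triangle-based stimuli)."""
    half = (rng.random((n, n // 2)) < fill).astype(np.uint8)
    return np.hstack([half, half[:, ::-1]])

def break_symmetry(pattern: np.ndarray, block: int = 3) -> np.ndarray:
    """Re-randomize one block x block area, mimicking the 3 x 3
    perturbation that produced the just-not-symmetric versions."""
    broken = pattern.copy()
    n = pattern.shape[0]
    r, c = rng.integers(0, n - block + 1, size=2)
    broken[r:r + block, c:c + block] = rng.random((block, block)) < 0.5
    return broken
```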

Fig 1. Example patterns from Stimulus Set 1 [63] and Stimulus Set 2 [37–39].

Top row shows the most complex patterns. Middle row shows examples with mean complexity, and bottom row the least complex patterns. Below the patterns, mean complexity ratings are given. For both stimulus sets, three example patterns are depicted for the whole stimulus set (“All”) and exclusively for the symmetric patterns (“Symmetric”). In addition, in case of Stimulus Set 1, example patterns with highest, mean, and lowest complexity are also shown for the group of patterns with broken symmetry (“Broken”). Note that in both stimulus sets, the pattern with the lowest complexity is symmetric, and the pattern with the highest complexity is asymmetric.

https://doi.org/10.1371/journal.pone.0185276.g001

Materials and methods

The 912 abstract patterns were rated for visual complexity in three sessions. The patterns had a size of 600 × 600 pixels, corresponding to a physical size of 175 × 175 mm on the monitor. In the first two sessions, two sets of 288 patterns each were rated by 48 participants (33 women, age range: 20–31 years, M = 24.33, SD = 2.86) and 64 participants (37 women, age range: 19–45 years, M = 25.73, SD = 4.57), respectively. In the third session, 384 patterns were rated by 47 participants (35 women, age range: 18–29 years, M = 21.96, SD = 2.51). The third session included 48 patterns already rated in the first two sessions; these additional ratings were not used for calculating mean visual complexity.

Initially, we calculated and reviewed the correlation matrix between all predictors together with mean visual complexity. Then, we estimated all linear models using only one, two, or three predictors and compared the models using R2 (explained variance). Finally, we applied machine learning to different sets of predictors and estimated individual linear models for each participant to assess individual differences in complexity perception.

Gartus and Leder [63] showed that small deviations from symmetry not only lead to decreased liking, but also considerably increase perceived visual complexity. However, for patterns with broken symmetry, our computational measure of mirror symmetry (MS) is naturally only slightly smaller than the maximum of 100. Thus, a linear combination using this mirror symmetry measure can presumably not account for the rather strong increase in perceived complexity caused by breaking the symmetry. Therefore, we applied different forms of non-linear transformations to our measure of mirror symmetry. This was done separately for the measures of the four symmetry axes, i.e. before taking the average of the four values and multiplying the result by 100 (compare S1 Fig for some examples of such non-linear transformations).
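As a sketch of such a transformation, a power function applied in Python to the per-axis symmetry values (each in [0, 1]) before averaging might look as follows; the exponent a = 20 corresponds to the MSA20 predictor introduced in the Results below:

```python
import numpy as np

def power_transformed_ms(axis_symmetries, a: float = 20.0) -> float:
    """Apply a power function to the per-axis symmetry values before
    averaging, so that only values very close to perfect symmetry
    retain weight."""
    s = np.asarray(axis_symmetries, dtype=float)
    return 100.0 * float(np.mean(s ** a))

# With a = 20, a just-not-symmetric axis value drops sharply:
print(0.97 ** 20)  # roughly 0.54 -- small deviations are strongly amplified
```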

Results and discussion

First, we inspected the correlation matrix of all predictors including mean visual complexity (see S2 Fig for a visualization of this correlation matrix). Table 1 summarizes the correlations and corresponding error measures of all predictors with mean visual complexity. Many predictors are highly correlated with each other. Most predictors are positively correlated with mean visual complexity (RMSGIF exhibits the highest correlation with visual complexity, r = .634, p < .001), and mirror symmetry (MS) is the only predictor that shows a significant negative correlation with visual complexity (r = –.578, p < .001). Moreover, mirror symmetry (MS) and RMSGIF are only slightly negatively correlated with each other (r = –.087, p = .008). A negative relation between mirror symmetry and visual complexity is also suggested by Fig 1, which shows that the pattern with the lowest complexity is always a symmetric pattern. This is consistent with the well-known fact that symmetric patterns are perceived as less complex than asymmetric patterns [47,48,80,84,86].

Table 1. Performance measures of single linear predictors of visual complexity for Stimulus Set 1.

https://doi.org/10.1371/journal.pone.0185276.t001

Fig 2 depicts the five best linear models using one, two, or three predictors. As expected from the high correlation, the best single predictor is RMSGIF, which alone can explain 40.2% of the variance (see the upper part of Table 2 for details). The best model containing two predictors uses mirror symmetry (MS) and RMSGIF and can explain 67.7% of the variance. The inclusion of a third predictor only slightly increases the explained variance, to 71.3%. Thus, the linear model containing mirror symmetry (MS) and RMSGIF can be considered the best, most parsimonious, and most interpretable model (see the middle part of Table 2 for details). Inspection of the residual plot and the Q-Q plot did not show visible inhomogeneity of variance or deviations from normality.

Fig 2. Best five linear models predicting mean visual complexity for Stimulus Set 1 containing one (bottom), two (middle), or three (top) parameters.

Each row corresponds to a linear model and squares symbolize the inclusion of a predictor. The quality of the models is evaluated with respect to explained variance (R2). Note that the best single predictor is RMSGIF, the best two-predictor model contains the predictors MS and RMSGIF, and the inclusion of a third predictor only slightly increases the explained variance.

https://doi.org/10.1371/journal.pone.0185276.g002

Table 2. Linear prediction models of visual complexity for Stimulus Set 1.

https://doi.org/10.1371/journal.pone.0185276.t002

Fig 3 shows a scatterplot of mean visual complexity versus the predictions of the two-predictor model. While, overall, the scatterplot indicates a quite good linear relation between mean visual complexity and the predictions (67.7% of explained variance corresponds to a correlation of r = .823), it also suggests that the visual complexity of patterns with broken symmetry might be somewhat underestimated by the linear model.

Fig 3. Scatterplot of mean visual complexity versus predictions of the linear model containing mirror symmetry (MS) and RMSGIF as predictors for Stimulus Set 1 (r = .823).

A LOWESS (locally weighted scatterplot smoothing) [112] regression line—a smooth curve that can represent linear and non-linear functions—is used to visualize the linearity of the relation. The diagonal is depicted as a dashed line. It can be seen that asymmetric patterns (triangles) are mostly perceived as highly complex. In addition, the perceived visual complexity of patterns with broken symmetry (stars) appears to be somewhat underestimated.

https://doi.org/10.1371/journal.pone.0185276.g003

Therefore, we applied several non-linear transformations separately to all four axes of our measure of mirror symmetry (MS). Exponential functions led to very similar results as power functions and are thus not discussed any further (cf. S1 Fig). Fig 4 compares (in a similar style to Fig 2) several linear models containing RMSGIF and one of the (transformed or untransformed) measures of mirror symmetry. It can clearly be seen that the non-linear transformation of mirror symmetry increased the explained variance of the model. A somewhat “optimal” transformation is a power function with an exponent of a = 20 (cf. S1 Fig). Applying this transformation to MS led to the predictor MSA20 = 100 · (MS_1^20 + MS_2^20 + MS_3^20 + MS_4^20)/4, and the explained variance of the model increased from 67.7% to 81.6% (see also Table 3 and S3 Fig for an overall comparison of all models). This corresponds to a correlation coefficient of r = .903 (see the lower part of Table 2 for details). The updated scatterplot depicted in Fig 5 reveals that patterns with broken symmetry are no longer visibly underestimated by the new model. However, inspection of the residual plot and the Q-Q plot did show small signs of inhomogeneity of variance and deviations from normality.

Fig 4. Linear models containing RMSGIF and (transformed and untransformed) measures of mirror symmetry for Stimulus Set 1.

Each row corresponds to a linear model and squares symbolize the inclusion of a predictor. The names of the predictors correspond to the power function used to transform them (i.e., MSA6 was transformed by a power function with exponent a = 6; cf. S1 Fig). Note that a power function with exponent a = 20 seems to be a somewhat optimal non-linear transformation. It increases the explained variance (R2) of the model from 68% (MS) to 82% (MSA20).

https://doi.org/10.1371/journal.pone.0185276.g004

Fig 5. Scatterplot of mean visual complexity versus predictions of the linear model containing mirror symmetry transformed by a power function with exponent a = 20 (MSA20) and RMSGIF as predictors for Stimulus Set 1 (r = .903).

A LOWESS (locally weighted scatterplot smoothing) [112] regression line—a smooth curve that can represent linear and non-linear functions—is used to visualize the linearity of the relation. The diagonal is depicted as a dashed line. Note that the perceived visual complexity of patterns with broken symmetry (stars) no longer seems to be underestimated (cf. Fig 3).

https://doi.org/10.1371/journal.pone.0185276.g005

Table 3. Comparison of performance measures of prediction models for Stimulus Set 1.

https://doi.org/10.1371/journal.pone.0185276.t003

One has to be careful in calling this power function the optimal non-linear transformation, since there is a huge number of possible non-linear transformations and we obviously could try only a small number of them. Usually, finding the optimal non-linear transformation is much more difficult than in the linear case. It is also worth mentioning that thresholding the symmetry measures (also a non-linear transformation) separately for all four axes at a value of 0.95, and then linearly stretching the values above 0.95 to the range between zero and one, leads to very similar results, explaining 81.2% of the variance. In sum, we can state that by applying non-linear transformations that put more emphasis on small deviations from symmetry (and less emphasis on larger differences in the lower range of symmetry), it was possible to considerably increase the explained variance of our linear model predicting perceived visual complexity, which is in line with theory.
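The threshold-and-stretch variant mentioned above is equally simple to express (again our sketch, applied per axis before averaging):

```python
import numpy as np

def threshold_stretch(axis_symmetries, t: float = 0.95) -> np.ndarray:
    """Clip per-axis symmetry values below t to zero and linearly
    rescale the interval [t, 1] to [0, 1]."""
    s = np.asarray(axis_symmetries, dtype=float)
    return np.clip((s - t) / (1.0 - t), 0.0, 1.0)
```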

Concerning the prediction models using machine learning, one can see in Table 3 (and S3 Fig) that random forest models achieved slightly better, but overall similar, results compared to the linear models using the same predictors. Note that the random forest models profited from the non-linear transformation of mirror symmetry in the same way as the linear models: Using MSA20 instead of MS as a predictor increased the explained variance from 70.1% to 85.2%. Interestingly, if the random forest was provided with the measures of mirror symmetry separately for the four symmetry axes, it was able to achieve a similar prediction quality (85.3% explained variance) as with the already transformed mirror symmetry. Thus, one can assume that in this case, machine learning was able to find the appropriate non-linear transform by itself. (This was expected, since random forests can approximate all kinds of non-linear relationships.) Finally, using all available predictors, random forest models can explain even more of the variance, but the gain is not very pronounced (maximum explained variance: 89.1%).

To investigate individual differences in the perception of visual complexity, we calculated individual linear models for each of the 159 participants. (Note that individual participants rated only a subset of the patterns in Stimulus Set 1; thus, differences in the individual linear models might theoretically also be due to different stimulus subsets.) All individual two-predictor models were compared, and those explaining the most variance were chosen (cf. S1 Table, listing all individually best models, their explained variance, and the explained variance of models containing MS and RMSGIF). The explained variance is always smaller than for the averaged data. This is not surprising, because non-averaged ratings are naturally noisier than mean ratings and can also represent only five discrete steps of complexity (due to the 5-point rating scale used); thus, they are more difficult to predict. By far the most frequent predictors in the optimized models were mirror symmetry (MS) with 133 and RMSGIF with 120 occurrences. The next most frequent predictors were PHCGIF with 34 and APB with 14 occurrences. PHCGIF and RMSGIF are highly correlated (r = .961, p < .001) and can thus be considered very similar predictors. The estimated slopes of MS were negative in all but two cases (participants 89 and 109), and those of RMSGIF positive in all but one case (participant 109). In terms of models, 104 of the 159 individually optimized models used MS and RMSGIF as predictors. Additional models using mirror symmetry (MS) and RMSGIF as predictors were also calculated for each participant and compared with the optimized models. The complexity perception of just a few participants seemed to differ considerably from the average. For instance, a model using APB and PNG as predictors (3.203 + 0.880*APB + 0.337*PNG) could explain 40.0% of the variance in the ratings of participant 77, while the model using MS and RMSGIF could explain only 2.0% of the variance. Participant 109 showed a peculiar optimal model explaining 37.4% of the variance and using MS and RMSGIF, but with differently signed slopes than usually observed (3.462 + 0.527*MS − 0.368*RMSGIF). However, only nine of the optimized models exhibited more than 5% of additionally explained variance compared to the models using MS and RMSGIF. Thus, for the vast majority of our participants, linear models using MS and RMSGIF as predictors were the best or nearly the best models.
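A sketch of this per-participant model search in Python (with a hypothetical data layout: a dict mapping each participant to the indices of the rated stimuli and the corresponding ratings) could look like this:

```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression

def best_individual_models(ratings: dict, X: np.ndarray, names: list) -> dict:
    """For each participant, fit all two-predictor OLS models on the
    stimuli that participant rated and keep the one with the highest R2."""
    best = {}
    for pid, (rows, y) in ratings.items():
        scored = []
        for idx in combinations(range(X.shape[1]), 2):
            cols = X[rows][:, list(idx)]
            r2 = LinearRegression().fit(cols, y).score(cols, y)
            scored.append((r2, [names[i] for i in idx]))
        best[pid] = max(scored, key=lambda t: t[0])
    return best
```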

To test for significant individual differences in the weights of the predictors, we also computed linear mixed effects models (LMMs) using MS and RMSGIF as fixed effects. In addition to random intercepts for stimuli and participants, four models using different random effects structures were calculated and compared (cf. S2 Table, showing the four models and their corresponding AIC and BIC). It turned out that all random slopes were significant (based on likelihood-ratio tests comparing models with and without random slopes). Also, both the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) clearly indicated that the model using random slopes for both predictors was the best. Therefore, we conclude that there are significant individual differences in the weights of MS and RMSGIF when used to predict perceived visual complexity in Stimulus Set 1.
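For illustration, a simplified version of such an LMM can be fit in Python with statsmodels (the analyses here used lme4 in R; statsmodels does not handle the additional crossed random intercepts for stimuli as readily, so this sketch with synthetic data includes participant random intercepts and slopes only):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per (participant, stimulus) rating.
rng = np.random.default_rng(0)
n_subj, n_stim = 30, 100
ms, rmsgif = rng.random(n_stim), rng.random(n_stim)
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_stim),
    "MS": np.tile(ms, n_subj),
    "RMSGIF": np.tile(rmsgif, n_subj),
})
df["complexity"] = 3.0 - df["MS"] + df["RMSGIF"] + rng.normal(0, 0.5, len(df))

# Fixed effects for MS and RMSGIF; random intercepts and random slopes
# for both predictors across participants.
lmm = smf.mixedlm("complexity ~ MS + RMSGIF", df,
                  groups=df["participant"],
                  re_formula="~ MS + RMSGIF").fit()
print(lmm.summary())
```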

Stimulus Set 2

This stimulus set consists of the 252 abstract graphic patterns introduced by Jacobsen and Höfel [37–39]. Some example patterns are presented in Fig 1. It is worth mentioning that Stimulus Set 2 was created independently of Stimulus Set 1 and also differs somewhat in style. However, both stimulus sets consist of abstract black-and-white patterns placed in a square area (although, in the case of Stimulus Set 2, rotated by 45 degrees and placed inside a black circle; cf. Fig 1). Also, both stimulus sets contain a maximum of four predefined symmetry axes per pattern, and the patterns differ in symmetry and visual complexity. Notably, Stimulus Set 2 is more balanced than Stimulus Set 1, as it contains roughly the same number of symmetric and asymmetric patterns. However, it contains only a few patterns that could be classified as broken symmetric.

Materials and methods

The 252 abstract patterns from [37–39] were rated for visual complexity by 29 participants (18 women, age range: 18–31 years, M = 23.79, SD = 3.55). The patterns had a size of 500 × 500 pixels, corresponding to a physical size of 146 × 146 mm on the monitor. As with Stimulus Set 1, first the correlation matrix of all predictors and mean visual complexity was calculated and inspected. Then, all linear models of one, two, or three predictors were calculated and compared using R2 (explained variance). Different non-linear transformations were applied to the measure of mirror symmetry (separately for all four axes), and the resulting linear models were compared to the best model of the second step. Finally, machine learning was applied using different sets of predictors. To further investigate individual differences in complexity perception, all linear models containing two predictors were also calculated and compared separately for each of the 29 participants who rated the patterns of Stimulus Set 2.

Results and discussion

The correlation matrix of all variables is very similar to that of Stimulus Set 1 (see Table 4 for performance measures of the individual predictors and S4 Fig for a visualization of the correlation matrix) and is thus not discussed in detail here. The predictor showing the maximum correlation with mean visual complexity is again RMSGIF (r = .709, p < .001), and mirror symmetry (MS) is again negatively correlated with complexity (r = –.478, p < .001). Mirror symmetry (MS) is also negatively correlated with RMSGIF (r = –.127, p = .044).

Table 4. Performance measures of single linear predictors of visual complexity for Stimulus Set 2.

https://doi.org/10.1371/journal.pone.0185276.t004

The five best linear models using one, two, or three predictors were again compared (see Table 5 and Fig 6). Remarkably, the best single predictor is again RMSGIF, explaining 50.2% of the variance, and the best model using two predictors is also exactly the same as for Stimulus Set 1, comprising mirror symmetry (MS) and RMSGIF as predictors (65.6% explained variance). Fig 7 shows a scatterplot of visual complexity versus the predictions from the linear model. Inspection of the residual plot and the Q-Q plot did not show visible inhomogeneity of variance or deviations from normality.

Fig 6. Best five linear models predicting mean visual complexity for Stimulus Set 2 containing one (bottom), two (middle), or three (top) parameters.

Each row corresponds to a linear model and squares symbolize the inclusion of a predictor. The quality of the models is evaluated with respect to explained variance (R2). Note that just like for Stimulus Set 1, the best single predictor is RMSGIF, the best two-predictor model contains the predictors MS and RMSGIF, and the inclusion of a third predictor only slightly increases the explained variance.

https://doi.org/10.1371/journal.pone.0185276.g006

Fig 7. Scatterplot of mean visual complexity versus predictions of the linear model containing mirror symmetry (MS) and RMSGIF as predictors for Stimulus Set 2 (r = .810).

A LOWESS [112] regression line is used to visualize the linearity of the relation. Note that a few highly complex patterns seem to be overestimated by the linear model.

https://doi.org/10.1371/journal.pone.0185276.g007

Table 5. Linear prediction models of visual complexity for Stimulus Set 2.

https://doi.org/10.1371/journal.pone.0185276.t005

Fig 8 shows the effect of different non-linear transformations of mirror symmetry (MS) on the prediction quality of the linear models consisting of RMSGIF and transformed mirror symmetry as predictors. While at first glance there seems to be a maximum and an “optimal” transformation similar to that for Stimulus Set 1, a second look reveals that the explained variances of the models are practically all the same. Table 6 (and S5 Fig) compares the same kinds of models as used for Stimulus Set 1. It can be seen that all linear models using two predictors (regardless of whether and how the symmetry predictor was transformed) can explain a similar amount of variance (between 65.6% and 67.2%). Using the non-linearly transformed measure of mirror symmetry (MSA20 instead of MS) led to only a marginal improvement (66.6% instead of 65.6% explained variance). Inspection of the residual plot and the Q-Q plot did not show visible inhomogeneity of variance or deviations from normality.

Fig 8. Linear models containing RMSGIF and (transformed and untransformed) measures of mirror symmetry for Stimulus Set 2.

Each row corresponds to a linear model and squares symbolize the inclusion of a predictor. The names of the predictors correspond to the power function used to transform them (i.e., MSA6 was transformed by a power function with exponent a = 6; cf. S1 Fig). While, at first glance, there seems to be an optimal non-linear transformation similar to Stimulus Set 1, in contrast to Stimulus Set 1 the explained variance (R2) practically does not improve.

https://doi.org/10.1371/journal.pone.0185276.g008

Table 6. Comparison of performance measures of prediction models for Stimulus Set 2.

https://doi.org/10.1371/journal.pone.0185276.t006

The marginal improvement achieved by transforming the mirror symmetry predictor can probably be explained simply by the fact that—in contrast to Stimulus Set 1—Stimulus Set 2 contains only very few stimuli that deviate from full symmetry by only a small amount. In contrast, Stimulus Set 1 contains a corresponding stimulus with broken symmetry for each symmetric stimulus. Note, however, that the non-linear transformation of mirror symmetry did not lead to a decline in explained variance. Thus, one can say that for Stimulus Set 2, non-linearly transforming mirror symmetry did not greatly improve the predictions, but it also did not worsen them.

Table 6 (and S5 Fig) shows that there are no big differences in explained variance between the linear models and the random forests. All models using just two predictors can explain between 65.6% and 69.8% of the variance. Only the machine learning models using all available predictors can explain a bit more of the variance (between 75.8% and 77.2%). Also, the random forest models show no advantage of transforming the mirror symmetry predictor in a non-linear way. This is most likely due to the same reason as for the linear models: Stimulus Set 2 contains only very few stimuli with broken symmetry.

As for Stimulus Set 1, we also calculated individually optimized two-predictor linear models for each of the 29 participants (cf. S3 Table, listing all individually best models, their explained variance, and the explained variance of models containing MS and RMSGIF). The explained variance is again always smaller than for the averaged data. The most common individual predictor is RMSGIF, appearing in 20 linear models; the second most common predictor is mirror symmetry (MS), occurring in 19 linear models. The estimated slopes for MS are always negative, while they are positive for RMSGIF. In terms of full models, the most common model, with 16 occurrences, combines mirror symmetry (MS) and RMSGIF, just like the model for the averaged ratings. (Two other models appear only once and combine MS with predictors highly correlated with RMSGIF: CANGIF (r = .924, p < .001) and PERGIF (r = .929, p < .001). Thus, it can be argued that these models are very similar to the ones combining MS and RMSGIF.)

In sum, the most common predictors in the individual linear models are the same as in the model of the average ratings: RMSGIF and mirror symmetry (MS). Also, the most common model combines these two predictors. Thus, while the major results of the individual analysis confirm the “big picture”, again some individual differences can be found. Roughly one third of the individual models differ from the model of the averaged data. However, it is notable that the majority of the individual linear models still consist of the predictors mirror symmetry (MS) and RMSGIF (or similar predictors highly correlated with RMSGIF). Also, the individual linear models explain at most 4.9% more variance than a linear model using MS and RMSGIF as predictors. Therefore, linear models using MS and RMSGIF as predictors were again the best or nearly the best individual models.

As with Stimulus Set 1, we calculated four linear mixed effects models to test for individual differences in weighting the predictors MS and RMSGIF (cf. S4 Table, showing the four models and their corresponding AIC and BIC). Once more, both random slopes were significant, and AIC and BIC indicated that the model using both random slopes was the best. Hence, we conclude that significant individual differences in the weighting of MS and RMSGIF also exist for Stimulus Set 2.

Summary and conclusions

In this study, we sought to predict and explain the perceived average visual complexity of abstract patterns and to compare our results with earlier models of visual complexity. Two sets of stimulus images were used (compare Fig 1): A relatively large first set of 912 abstract patterns (Stimulus Set 1), extended from the set originally used by Gartus and Leder [63] and also containing broken symmetric patterns, and a smaller second set of 252 abstract patterns (Stimulus Set 2), independently created by Jacobsen and Höfel [37–39], essentially not containing broken symmetric patterns but being more balanced regarding symmetric vs. asymmetric patterns. The first set was rated for visual complexity by three groups of participants consisting of 48, 64, and 47 people; the second set was rated by one group of 29 participants. Mean ratings were taken as the average perceived complexity of the patterns.

Our first goal was to create parsimonious and interpretable linear models predicting average perceived visual complexity using a wide range of objective properties calculated from the evaluated images. Remarkably, the best linear models for both stimulus sets of abstract patterns consisted of the same two predictors: Mirror symmetry (MS) [102,104] and a combination of root mean square contrast (RMS) [92,100,101] and GIF (Graphics Interchange Format) image compression (RMSGIF). These models could explain 67.7% of the variance in the first (compare Table 3 and S3 Fig), and 65.6% of the variance in the second stimulus set (compare Table 6 and S5 Fig).

Likewise, Chipman [47,48] proposed that the perceived visual complexity of a pattern can be explained by two factors: A quantitative variable associated with the number of elements and other cumulative aspects, setting an upper limit on complexity, and a structural variable representing organization, symmetry, and other similarity transformations present in the patterns, which reduces perceived complexity. Similar results were found by Ichikawa [8], who in addition proposed that individual differences in visual complexity ratings can be characterized by different weights of the structural and the quantitative factor. Moreover, Nadal et al. [46] obtained ratings for 60 stimuli on seven aspects of visual complexity: Unintelligibility of the elements, disorganization, amount of elements, variety of elements, asymmetry, variety of colors, and three-dimensional appearance. A series of discriminant analyses led to three factors of visual complexity: One associated with the amount and variety of elements, a second quantifying the way the elements are organized, and finally asymmetry as a third factor. Notably, color had little influence on complexity perception, which was also found by [113]. Thus, one could roughly say that they found one quantitative and two structural factors.

Our results fit well into this picture. For both stimulus sets, the linear models predicting mean perceived visual complexity consist of two predictors: The first predictor is RMSGIF, a variable resulting from the combination of root mean square contrast, which enhances edges and corners of the patterns, with the GIF compression rate. This predictor is positively related to complexity and can be considered a quantitative factor. The second predictor is mirror symmetry (MS), which is negatively related to complexity; it thus reduces complexity and can be considered a structural variable. It should be noted that APB (assessment of preference for balance) and DCM (deviation of the center of mass) can also be characterized as structural factors. However, they did not seem to play a big role in predicting visual complexity for most of our participants.

Ichikawa [8] additionally proposed that individual differences in visual complexity ratings can be characterized by different weights of the structural and the quantitative factor. In our results, about two thirds of the best individual linear models explaining perceived visual complexity consisted of two factors (MS and RMSGIF) that can be interpreted as Ichikawa suggested. In the remaining third of the data, the vast majority of models can explain at most 5% more of the variance than the model using MS and RMSGIF as predictors. In addition, linear mixed effects models showed significant random slopes for both predictors with respect to individual participants. Thus, we conclude that explaining individual differences by different weights of a structural and a quantitative factor can be considered a reasonable interpretation. Only for a small minority of our participants must complexity perception be explained differently. (Another possibility is, of course, that these participants simply did not follow the instructions correctly and in essence rated the patterns not for complexity but for some other property.)

Motivated by the results of [63], which indicated that small deviations from symmetry can have strong effects on aesthetic evaluation and perceived visual complexity, we examined different non-linear transformations of the mirror symmetry (MS) predictor. We applied several power (and exponential) functions (compare S1 Fig) that can be used to transform a linear relation between zero (representing asymmetry) and one (representing full symmetry) into a non-linear relation giving more influence to small deviations from symmetry. It turned out that transforming the mirror symmetry measure (separately for all four symmetry axes) by applying a power function with an exponent of a = 20 could increase the explained variance of the linear model from 67.7% to 81.6% for Stimulus Set 1. However, this effect was practically absent for Stimulus Set 2, where the explained variance only changed from 65.6% to 66.6% when the non-linear transformation was applied to the measure of mirror symmetry. A comparison with models based on machine learning techniques for Stimulus Set 1 reveals that the 81.6% of explained variance in the linear model is quite close to the maximum of 89.1% explained variance that could be accomplished by random forest models utilizing all available predictors (compare Table 3 and S3 Fig). For Stimulus Set 2, machine learning models using all available predictors showed a bigger advantage and can explain 77.1% of the variance, but the gap to the linear model explaining 66.6% of the variance is still not particularly large (compare Table 6 and S5 Fig).

These differences between the stimulus sets can probably be explained by the fact that Stimulus Set 1 contains a large number of broken symmetric patterns (380 of 912), whose perceived visual complexity was presumably underestimated by the linear model using the untransformed mirror symmetry measure (compare Figs 3 and 5). Stimulus Set 2, however, contains only very few patterns that could be described as broken symmetric, so its predictions were much less affected by the non-linear transformation. Notably, though, the predictions at least did not get worse due to the transformation.

One limitation of this study is that our methods were derived from a specific kind of images, namely abstract black-and-white patterns with predefined symmetry axes. However, and in contrast to many other studies, we used two independently created sets of stimuli. And while Stimulus Set 1 might be considered somewhat unbalanced, as it contains only a relatively small number of asymmetric patterns, Stimulus Set 2 is more balanced in this respect. Although this, of course, does not completely rule out potential biases in the stimulus sets, it suggests that our results most likely generalize at least to this type of stimuli. In further studies, it would be desirable to extend the approach to a wider class of visual stimuli, such as natural textures, architectural images, or abstract artworks. Predictors based on edges, corners, and compression rates can be directly applied to all kinds of images (see the sketch below). In addition, for real-world images, prediction accuracy may also benefit from preprocessing steps like removing image noise or standardizing image size. Estimating mirror symmetry is likely to be more difficult for unrestricted image types, since the predictor used here requires predefined symmetry axes. Yet, general symmetry detection algorithms not depending on such narrow prerequisites are also available (e.g., [114]). Thus, using modified predictors, it should be feasible to extend the methods of this study and, for example, apply them to abstract artworks or other image types. Using artworks instead of abstract patterns as stimuli, however, we would expect less pronounced effects and less accurate predictions, because complexity perception of abstract artworks is presumably also influenced by a large number of individual, not necessarily mutually exclusive factors such as art knowledge and interest [12,13,115], familiarity with artworks and style [9,92,116], individual preferences [117–119], cultural differences [60,120], or personality traits [121–126].
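
As a simple illustration of how a compression-rate predictor could be computed for arbitrary images, consider the following sketch; it uses the magick R package, which is an assumption on our part, since the original predictors were computed with different tools.

```r
# Hedged sketch: compression rate as a generic complexity predictor for
# arbitrary images, assuming the 'magick' package.
library(magick)

compression_rate <- function(path, format = "gif") {
  img  <- image_read(path)
  info <- image_info(img)
  raw_size <- info$width * info$height          # pixel count as size proxy
  tmp <- tempfile(fileext = paste0(".", format))
  image_write(img, path = tmp, format = format)
  compressed <- file.size(tmp)                  # compressed size in bytes
  unlink(tmp)
  compressed / raw_size  # bytes per pixel: less compressible = more complex
}
```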

In a first attempt to test our models on more realistic data, we used two texture sets of 54 and 58 images, together with the corresponding subjective complexity scores, available from [94]. For both texture sets, TIF compression rate was the best single predictor of subjective visual complexity, with correlations of r = .647 for the first and r = .775 for the second set. The mirror symmetry predictor with fixed symmetry axes, modified for grayscale images (MS_GRAY), showed negative correlations with subjective complexity for both texture sets, comparable in size to those in our data sets of abstract patterns (r = –.543 and r = –.667). In stark contrast to our data sets, however, MS_GRAY and TIF compression rate were also highly negatively correlated in the two texture data sets (r = –.701 and r = –.793). Therefore, adding MS_GRAY to a linear model already containing TIF as the single predictor could only marginally improve explained variance. In comparison, [94] presented models of complexity perception for both texture sets with a correlation performance of r = .81, but did not use symmetry as an explicit predictor. Thus, while mirror symmetry seems to have a similarly negative influence on complexity perception in textures as in abstract patterns, its relation to quantitative complexity predictors appears to be different. At this point, it is not clear whether this is a specific property of the employed data sets or a more general property of natural and human-made textures; this deserves further investigation using larger, more representative data sets and improved computational predictors.
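
The consequence of this collinearity can be made concrete with nested linear models; the sketch below uses a hypothetical data frame `textures` with columns `complexity`, `TIF`, and `MS_GRAY`.

```r
# Hedged sketch: does MS_GRAY add anything beyond TIF compression rate?
# 'textures' is a hypothetical data frame (one row per texture image).
m_tif  <- lm(complexity ~ TIF, data = textures)
m_both <- lm(complexity ~ TIF + MS_GRAY, data = textures)

summary(m_tif)$r.squared   # variance explained by TIF alone
summary(m_both)$r.squared  # barely higher when the predictors are collinear
anova(m_tif, m_both)       # F-test of the (here marginal) increment
```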

To summarize, we found that the perceived visual complexity of abstract black-and-white patterns with predefined symmetry axes can be parsimoniously predicted by linear models using the following two parameters: 1) a quantitative predictor, calculated from a combination of root mean square (RMS) contrast and GIF file compression, which is positively related to perceived complexity (other edge detection methods combined with other file compression algorithms may work similarly); and 2) a structural measure of mirror symmetry, which is negatively related to complexity. Supplementary measures of order and structure, like rotational symmetry, might play an additional role in other stimulus sets. A non-linear transformation of the symmetry measure that gives more weight to small deviations from symmetry may improve prediction accuracy considerably if stimulus patterns with small deviations from symmetry exist, but should not deteriorate prediction accuracy if they do not. Machine learning algorithms taking more parameters and parameter transformations into account can improve predictions compared to simple linear models, but not by a very large amount (see the sketch below). Individual differences in complexity perception do exist, but for the vast majority of participants, linear models using the two above-mentioned predictors are sufficient (i.e., almost as good as any other linear model using only two predictors). Our results are in line with the findings of, e.g., [8,46,47]. Indeed, we could quantitatively confirm the two-factor structure of perceived visual complexity using a large number of black-and-white abstract patterns from two independently created stimulus sets. It remains a future goal to refine and extend quantitative models of visual complexity to more realistic real-world images and/or artworks, as well as to study corresponding brain activation using non-invasive imaging techniques like fMRI. As mentioned above, this will undoubtedly require taking into account improved computational measures, but also additional parameters: the meaning of images as well as the knowledge and experience of perceivers (which obviously play little role for abstract patterns carrying no semantic meaning) are presumably important factors when judging the complexity of artworks [25]. Thus, such models will most probably also have to include predictors explaining differences between individual perceivers (and not only between stimulus images).
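
For completeness, a random forest over all available predictors can be fitted with the randomForest package [107]; this sketch is illustrative, and the data frame `patterns` containing the complete predictor set is hypothetical.

```r
library(randomForest)

# Hedged sketch: random forest regression over all computational predictors.
# 'patterns' is a hypothetical data frame whose columns other than
# complexity are the predictors.
set.seed(1)
rf <- randomForest(complexity ~ ., data = patterns,
                   ntree = 500, importance = TRUE)

tail(rf$rsq, 1)  # out-of-bag estimate of the explained variance
importance(rf)   # permutation importance of the individual predictors
```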

Supporting information

S1 Fig. Some examples of the non-linear transformation functions applied to the four mirror symmetry measures of the four possible symmetry axes.

Considering that x = 0 corresponds to asymmetry, and x = 1.0 to full symmetry, these transforms assign more influence to small deviations from symmetry. Note that power and exponential functions lead to quite similar non-linear transformations.

https://doi.org/10.1371/journal.pone.0185276.s001

(TIF)

S2 Fig. Correlation matrix of all predictors and mean ratings of visual complexity (“Complexity.MN”) for Stimulus Set 1.

Dark and circle-shaped ellipses represent low correlations, while bright and slender ellipses represent high correlations. Right-inclined ellipses depict positive and left-inclined ellipses negative correlations. Note that most predictors correlate positively with mean complexity ratings. One clear exception is mirror symmetry (MS), which exhibits a negative correlation.

https://doi.org/10.1371/journal.pone.0185276.s002

(TIF)

S3 Fig. Comparison of prediction models for Stimulus Set 1 by explained variance (R2) using data from Table 3.

Note that the linear model using non-linearly transformed mirror symmetry (MSA20) and RMSGIF as predictors performs almost as well as the random forest models.

https://doi.org/10.1371/journal.pone.0185276.s003

(TIF)

S4 Fig. Correlation matrix of all predictors and mean ratings of visual complexity (“Complexity.MN”) for Stimulus Set 2.

Dark and circle-shaped ellipses represent low correlations, while bright and slender ellipses represent high correlations. Right-inclined ellipses depict positive and left-inclined ellipses negative correlations. Note that the correlations are very similar to the ones in Stimulus Set 1 (cf. S2 Fig).

https://doi.org/10.1371/journal.pone.0185276.s004

(TIF)

S5 Fig. Comparison of prediction models for Stimulus Set 2 by explained variance (R2) using data from Table 6.

Note that, in contrast to Stimulus Set 1, the linear model could hardly be improved by using the non-linearly transformed mirror symmetry (MSA20), and the random forest models using all predictors perform best regardless of the transformation of the mirror symmetry predictor. This difference might be due to the fact that Stimulus Set 2 does not contain a large number of broken symmetric patterns.

https://doi.org/10.1371/journal.pone.0185276.s005

(TIF)

S1 Table. Best two-predictor linear models of visual complexity separate for each participant for Stimulus Set 1.

https://doi.org/10.1371/journal.pone.0185276.s006

(DOCX)

S2 Table. Fixed effects estimates (top), random effects variance estimates (middle), and information criteria (bottom) of linear mixed effects models predicting visual complexity for Stimulus Set 1.

https://doi.org/10.1371/journal.pone.0185276.s007

(DOCX)

S3 Table. Best two-predictor linear models of visual complexity separate for each participant for Stimulus Set 2.

https://doi.org/10.1371/journal.pone.0185276.s008

(DOCX)

S4 Table. Fixed effects estimates (top), random effects variance estimates (middle), and information criteria (bottom) of linear mixed effects models predicting visual complexity for Stimulus Set 2.

https://doi.org/10.1371/journal.pone.0185276.s009

(DOCX)

S1 Dataset. Stimulus properties of Stimulus Set 1.

Text file with tab-separated values. First row contains variable names. Variables MSA, MSB, MSC, and MSD are the separate mirror symmetry measures for the four axes: MS = 100*(MSA+MSB+MSC+MSD)/4. Some predictors are given in a horizontal (“H…”) and a 90-degree-rotated vertical (“V…”) version. For all calculations, the average of the horizontal and the vertical value was used.

https://doi.org/10.1371/journal.pone.0185276.s010

(DAT)

S2 Dataset. Participant data and complexity ratings for Stimulus Set 1.

Text file with tab-separated values. First row contains variable names.

https://doi.org/10.1371/journal.pone.0185276.s011

(DAT)

S3 Dataset. Stimulus properties of Stimulus Set 2.

Text file with tab-separated values. First row contains variable names. Variables are the same as in S1 Dataset.

https://doi.org/10.1371/journal.pone.0185276.s012

(DAT)

S4 Dataset. Participant data and complexity ratings for Stimulus Set 2.

Text file with tab-separated values. First row contains variable names.

https://doi.org/10.1371/journal.pone.0185276.s013

(DAT)

Acknowledgments

Open access funding provided by University of Vienna. We would like to thank Thomas Jacobsen who generously shared his stimulus materials, which were used as a second stimulus set in this study. We also want to thank Helene Plasser for help with parts of the data collection.

References

1. Mitchell M. Complexity: A guided tour. New York: Oxford University Press; 2009.
2. Standish RK. Concept and definition of complexity. In: Yang A, Shan Y, editors. Intelligent complex adaptive systems. Hershey, PA: IGI Publishing; 2008. pp. 105–124. https://doi.org/10.4018/978-1-59904-717-1.ch004
3. Edmonds B. What is complexity? The philosophy of complexity per se with application to some examples in evolution. In: Heylighen F, Bollen J, Riegler A, editors. The evolution of complexity. Dordrecht: Kluwer; 1999. pp. 1–17.
4. Cardaci M, Di Gesù V, Petrou M, Tabacchi ME. A fuzzy approach to the evaluation of image complexity. Fuzzy Sets Syst. 2009;160: 1474–1484.
5. Cardaci M, Di Gesù V, Petrou M, Tabacchi ME. Attentional vs computational complexity measures in observing paintings. Spat Vis. 2009;22: 195–209. pmid:19460272
6. Folta-Schoofs K, Wolf OT, Treue S, Schoofs D. Perceptual complexity, rather than valence or arousal accounts for distracter-induced overproductions of temporal durations. Acta Psychol (Amst). 2014;147: 51–59. pmid:24161200
7. Palumbo L, Ogden R, Makin ADJ, Bertamini M. Examining visual complexity and its influence on perceived duration. J Vis. 2014;14: 1–18. pmid:25487112
8. Ichikawa S. Quantitative and structural factors in the judgment of pattern complexity. Percept Psychophys. 1985;38: 101–109. pmid:4088801
9. Marin MM, Leder H. Effects of presentation duration on measures of complexity in affective environmental scenes and representational paintings. Acta Psychol (Amst). 2016;163: 38–58. pmid:26595281
10. Berlyne DE. Novelty, complexity, and hedonic value. Percept Psychophys. 1970;8: 279–286.
11. Berlyne DE. Aesthetics and psychobiology. New York: Appleton Century Crofts; 1971.
12. Leder H, Belke B, Oeberst A, Augustin D. A model of aesthetic appreciation and aesthetic judgments. Br J Psychol. 2004;95: 489–508. pmid:15527534
13. Leder H, Nadal M. Ten years of a model of aesthetic appreciation and aesthetic judgments: The aesthetic episode–Developments and challenges in empirical aesthetics. Br J Psychol. 2014;105: 443–464. pmid:25280118
14. Mayer S, Landwehr JR. When complexity is symmetric: The interplay of two core determinants of visual aesthetics. Adv Consum Res. 2014;42: 608–609.
15. Tinio PPL, Gerger G, Leder H. Birds of a feather… Generalization of facial structures following massive familiarization. Acta Psychol (Amst). 2013;144: 463–471. pmid:24076329
16. Tinio PPL, Leder H. Just how stable are stable aesthetic features? Symmetry, complexity, and the jaws of massive familiarization. Acta Psychol (Amst). 2009;130: 241–250. pmid:19217589
17. Donderi DC. A complexity measure for electronic displays: Final report on the experiments (Report No. CR 2003–046). Toronto, Ontario, Canada; 2003.
18. Donderi DC. Visual complexity: A review. Psychol Bull. 2006;132: 73–97. pmid:16435958
19. Donderi DC, McFadden S. Compressed file length predicts search time and errors on visual displays. Displays. 2005;26: 71–78.
20. Forsythe A, Sheehy N, Sawey M. Measuring icon complexity: An automated analysis. Behav Res Methods Instrum Comput. 2003;35: 334–342.
21. Tuch AN, Bargas-Avila JA, Opwis K, Wilhelm FH. Visual complexity of websites: Effects on users’ experience, physiology, performance, and memory. Int J Hum Comput Stud. 2009;67: 703–715.
22. Tuch AN, Presslaber EE, Stöcklin M, Opwis K, Bargas-Avila JA. The role of visual complexity and prototypicality regarding first impression of websites: Working towards understanding aesthetic judgments. Int J Hum Comput Stud. 2012;70: 794–811.
23. Akalin A, Yildirim K, Wilson C, Kilicoglu O. Architecture and engineering students’ evaluations of house façades: Preference, complexity and impressiveness. J Environ Psychol. 2009;29: 124–132.
24. Imamoglu Ç. Complexity, liking and familiarity: Architecture and non-architecture Turkish students’ assessments of traditional and modern house facades. J Environ Psychol. 2000;20: 5–16.
25. Jakesch M, Leder H. The qualitative side of complexity: Testing effects of ambiguity on complexity judgments. Psychol Aesthet Creat Arts. 2015;9: 200–205.
26. Koffka K. Perception: An introduction to the Gestalt-theorie. Psychol Bull. 1922;19: 531–585.
27. Köhler W. Die physischen Gestalten in Ruhe und im stationären Zustand. Eine naturphilosophische Untersuchung [The physical Gestalten at rest and in steady state]. Braunschweig, Germany: Vieweg und Sohn; 1920.
28. Hochberg J. Effects of the Gestalt revolution: The Cornell symposium on perception. Psychol Rev. 1957;64: 73–84. pmid:13420281
29. Birkhoff GD. Aesthetic measure. Cambridge, MA: Harvard University Press; 1932.
30. Eysenck HJ. The empirical determination of an aesthetic formula. Psychol Rev. 1941;48: 83–92.
31. Eysenck HJ. The experimental study of the “good Gestalt”–a new approach. Psychol Rev. 1942;49: 344–364.
32. Martindale C, Moore K, Borkum J. Aesthetic preference: Anomalous findings for Berlyne’s psychobiological theory. Am J Psychol. 1990;103: 53–80.
33. Aitken PP. Judgments of pleasingness and interestingness as functions of visual complexity. J Exp Psychol. 1974;103: 240–244.
34. Chassy P, Lindell TAE, Jones JA, Paramei GV. A relationship between visual complexity and aesthetic appraisal of car front images: An eye-tracker study. Perception. 2015;44: 1085–1097. pmid:26562922
35. Nicki RM, Moss V. Preference for non-representational art as a function of various measures of complexity. Can J Psychol. 1975;29: 237–249.
36. Palumbo L, Bertamini M. The curvature effect: A comparison between preference tasks. Empir Stud Arts. 2016;34: 35–52.
37. Jacobsen T, Höfel L. Aesthetics electrified: An analysis of descriptive symmetry and evaluative aesthetic judgment processes using event-related brain potentials. Empir Stud Arts. 2001;19: 177–190.
38. Jacobsen T, Höfel L. Aesthetic judgments of novel graphic patterns: Analyses of individual judgments. Percept Mot Skills. 2002;95: 755–766. pmid:12509172
39. Jacobsen T, Höfel L. Descriptive and evaluative judgment processes: Behavioral and electrophysiological indices of processing symmetry and aesthetics. Cogn Affect Behav Neurosci. 2003;3: 289–299. pmid:15040549
40. Krupinski E, Locher P. Skin conductance and aesthetic evaluative responses to nonrepresentational works of art varying in symmetry. Bull Psychon Soc. 1988;26: 355–358.
41. Güçlütürk Y, Jacobs RHAH, van Lier R. Liking versus complexity: Decomposing the inverted U-curve. Front Hum Neurosci. 2016;10: 1–11.
42. McDougall SJ, Curry MB, de Bruijn O. Measuring symbol and icon characteristics: Norms for concreteness, complexity, meaningfulness, familiarity, and semantic distance for 239 symbols. Behav Res Methods Instrum Comput. 1999;31: 487–519.
43. Forsythe A, Mulhern G, Sawey M. Confounds in pictorial sets: The role of complexity and familiarity in basic-level picture processing. Behav Res Methods. 2008;40: 116–129. pmid:18411534
44. Kontson KL, Megjhani M, Brantley JA, Cruz-Garza JG, Nakagome S, Robleto D, et al. Your brain on art: Emergent cortical dynamics during aesthetic experiences. Front Hum Neurosci. 2015;9: 1–17.
45. Forsythe A, Street N, Helmy M. Revisiting Rossion and Pourtois with new ratings for automated complexity, familiarity, beauty, and encounter. Behav Res Methods. 2016; 1–10.
46. Nadal M, Munar E, Marty G, Cela-Conde CJ. Visual complexity and beauty appreciation: Explaining the divergence of results. Empir Stud Arts. 2010;28: 173–191.
47. Chipman SF. Complexity and structure in visual patterns. J Exp Psychol Gen. 1977;106: 269–301. pmid:894217
48. Chipman SF, Mendelson MJ. Influence of six types of visual structure on complexity judgments in children and adults. J Exp Psychol Hum Percept Perform. 1979;5: 365–378. pmid:528946
49. Barlow HB, Reeves BC. The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vision Res. 1979;19: 783–793. pmid:483597
50. Bertamini M, Friedenberg JD, Kubovy M. Detection of symmetry and perceptual organization: The way a lock-and-key process works. Acta Psychol (Amst). 1997;95: 119–140.
51. Bruce VG, Morgan MJ. Violations of symmetry and repetition in visual patterns. Perception. 1975;4: 239–249.
52. Carmody DP, Nodine CF, Locher PJ. Global detection of symmetry. Percept Mot Skills. 1977;45: 1267–1273. pmid:604912
53. Jenkins B. Component processes in the perception of bilaterally symmetric dot textures. Percept Psychophys. 1983;34: 433–440. pmid:6657447
54. Julesz B. Figure and ground perception in briefly presented isodipole textures. In: Kubovy M, Pomerantz J, editors. Perceptual organization. Hillsdale, NJ: Erlbaum; 1981. pp. 27–54.
55. Locher P, Nodine C. Symmetry catches the eye. In: O’Regan JK, Levy-Schoen A, editors. Eye movements: From physiology to cognition. Amsterdam: Elsevier/North-Holland; 1987. pp. 353–361.
56. Locher P, Nodine C. The perceptual value of symmetry. Comput Math Appl. 1989;17: 475–484.
57. Treder MS. Behind the looking-glass: A review on human symmetry perception. Symmetry (Basel). 2010;2: 1510–1543.
58. Wagemans J. Detection of visual symmetries. Spat Vis. 1995;9: 9–32. pmid:7626549
59. Wagemans J. Characteristics and models of human symmetry detection. Trends Cogn Sci. 1997;1: 346–352. pmid:21223945
60. Mühlenbeck C, Liebal K, Pritsch C, Jacobsen T. Differences in the visual perception of symmetric patterns in orangutans (Pongo pygmaeus abelii) and two human cultural groups: A comparative eye-tracking study. Front Psychol. 2016;7: 408. pmid:27065184
61. Palmer SE, Hemenway K. Orientation and symmetry: Effects of multiple, rotational, and near symmetries. J Exp Psychol Hum Percept Perform. 1978;4: 691–702. pmid:722256
62. McManus I. Symmetry and asymmetry in aesthetics and the arts. Eur Rev. 2005;13: 157–180.
63. Gartus A, Leder H. The small step toward asymmetry: Aesthetic judgment of broken symmetries. Iperception. 2013;4: 361–364. pmid:24349695
64. Fink B, Neave N, Manning JT, Grammer K. Facial symmetry and judgements of attractiveness, health and personality. Pers Individ Dif. 2006;41: 491–499.
65. Grammer K, Thornhill R. Human (Homo sapiens) facial attractiveness and sexual selection: The role of symmetry and averageness. J Comp Psychol. 1994;108: 233–242. pmid:7924253
66. Little AC, Jones BC. Attraction independent of detection suggests special mechanisms for symmetry preferences in human face perception. Proc R Soc B Biol Sci. 2006;273: 3093–3099. pmid:17015348
67. Jones BC, Little AC, Penton-Voak IS, Tiddeman BP, Burt DM, Perrett DI. Facial symmetry and judgements of apparent health: Support for a “good genes” explanation of the attractiveness-symmetry relationship. Evol Hum Behav. 2001;22: 417–429.
68. Perrett DI, Burt DM, Penton-Voak IS, Lee KJ, Rowland DA, Edwards R. Symmetry and human facial attractiveness. Evol Hum Behav. 1999;20: 295–307.
69. Enquist M, Arak A. Symmetry, beauty and evolution. Nature. 1994;372: 169–172. pmid:7969448
70. Rhodes G. The evolutionary psychology of facial beauty. Annu Rev Psychol. 2006;57: 199–226. pmid:16318594
71. Greenwald AG, McGhee DE, Schwartz JLK. Measuring individual differences in implicit cognition: The implicit association test. J Pers Soc Psychol. 1998;74: 1464–1480. pmid:9654756
72. Bertamini M, Makin A, Rampone G. Implicit association of symmetry with positive valence, high arousal and simplicity. Iperception. 2013;4: 317–327.
73. Makin A, Pecchinenda A, Bertamini M. Grouping by closure influences subjective regularity and implicit preference. Iperception. 2012;3: 519–527. pmid:23145305
74. Bertamini M, Makin A. Brain activity in response to visual symmetry. Symmetry (Basel). 2014;6: 975–996.
75. Jacobsen T, Schubotz RI, Höfel L, v. Cramon DY. Brain correlates of aesthetic judgment of beauty. Neuroimage. 2006;29: 276–285. pmid:16087351
76. Makin ADJ, Wilton MM, Pecchinenda A, Bertamini M. Symmetry perception and affective responses: A combined EEG/EMG study. Neuropsychologia. 2012;50: 3250–3261. pmid:23063934
77. Pecchinenda A, Bertamini M, Makin ADJ, Ruta N. The pleasantness of visual symmetry: Always, never or sometimes. PLoS One. 2014;9: e92685. pmid:24658112
78. Höfel L, Jacobsen T. Electrophysiological indices of processing aesthetics: Spontaneous or intentional processes? Int J Psychophysiol. 2007;65: 20–31. pmid:17400317
79. Makin ADJ, Bertamini M, Jones A, Holmes T, Zanker JM. A gaze-driven evolutionary algorithm to study aesthetic evaluation of visual symmetry. Iperception. 2016;7: 1–18. pmid:27433324
80. Attneave F. Physical determinants of the judged complexity of shapes. J Exp Psychol. 1957;53: 221–227. pmid:13416488
81. Friedenberg J, Bertamini M. Aesthetic preference for polygon shape. Empir Stud Arts. 2015;33: 144–160.
82. Garner WR. Good patterns have few alternatives: Information theory’s concept of redundancy helps in understanding the gestalt concept of goodness. Am Sci. 1970;58: 34–42. pmid:5522663
83. Chen C-C, Wu J-H, Wu C-C. Reduction of image complexity explains aesthetic preference for symmetry. Symmetry (Basel). 2011;3: 443–456.
84. Day H. The importance of symmetry and complexity in the evaluation of complexity, interest and pleasingness. Psychon Sci. 1968;10: 339–340.
85. Eisenman R. Complexity-simplicity: I. Preference for symmetry and rejection of complexity. Psychon Sci. 1967;8: 169–170.
86. Eisenman R, Gellens H. Preferences for complexity-simplicity and symmetry-asymmetry. Percept Mot Skills. 1968;26: 888–890. pmid:5657743
87. Solomonoff RJ. A formal theory of inductive inference. Part 1. Inf Control. 1964;7: 1–22.
88. Solomonoff RJ. A formal theory of inductive inference. Part 2. Inf Control. 1964;7: 224–254.
89. Li M, Vitányi P. An introduction to Kolmogorov complexity and its applications. 2nd ed. New York: Springer; 1997.
90. Chikhman V, Bondarko V, Danilova M, Goluzina A, Shelepin Y. Complexity of images: Experimental and computational estimates compared. Perception. 2012;41: 631–647. pmid:23094454
91. Forsythe A, Nadal M, Sheehy N, Cela-Conde CJ, Sawey M. Predicting beauty: Fractal dimension and visual complexity in art. Br J Psychol. 2011;102: 49–70. pmid:21241285
92. Marin MM, Leder H. Examining complexity across domains: Relating subjective and objective measures of affective environmental scenes, paintings and music. PLoS One. 2013;8: e72412. pmid:23977295
93. Machado P, Romero J, Nadal M, Santos A, Correia J, Carballal A. Computerized measures of visual complexity. Acta Psychol (Amst). 2015;160: 43–57. pmid:26164647
94. Corchs SE, Ciocca G, Bricolo E, Gasparini F. Predicting complexity perception of real world images. PLoS One. 2016;11: e0157986. pmid:27336469
95. Ciocca G, Corchs S, Gasparini F. Genetic programming approach to evaluate complexity of texture images. J Electron Imaging. 2016;25: 61408.
96. Rossion B, Pourtois G. Revisiting Snodgrass and Vanderwart’s object pictorial set: The role of surface detail in basic-level object recognition. Perception. 2004;33: 217–236. pmid:15109163
97. Snodgrass JG, Vanderwart M. A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. J Exp Psychol Hum Learn Mem. 1980;6: 174–215.
98. Kovesi P. MATLAB and Octave functions for computer vision and image processing [Internet]. 2000. Available: http://www.peterkovesi.com/matlabfns/
99. Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell. 1986;PAMI-8: 679–698. pmid:21869365
100. Cavalcante A, Mansouri A, Mansouri LKE, Barros AK, Takeuchi Y, Matsumoto N, et al. On the relation between contrast and the perception of visual complexity. 21st Annual Conference of the Japanese Neural Network Society. 2011. pp. P2-22.
101. Cavalcante A, Mansouri A, Kacha L, Barros AK, Takeuchi Y, Matsumoto N, et al. Measuring streetscape complexity based on the statistics of local contrast and spatial frequency. PLoS One. 2014;9: e87097. pmid:24498292
102. Hübner R, Fillinger MG. Comparison of objective measures for predicting perceptual balance and visual aesthetic preference. Front Psychol. 2016;7: 335. pmid:27014143
103. Wilson A, Chatterjee A. The assessment of preference for balance: Introducing a new test. Empir Stud Arts. 2005;23: 165–180.
104. Bauerly M, Liu Y. Computational modeling and experimental investigation of effects of compositional elements on interface and design aesthetics. Int J Hum Comput Stud. 2006;64: 670–682.
105. R Core Team. R: A language and environment for statistical computing [Internet]. Vienna, Austria: R Foundation for Statistical Computing; 2015. Available: http://www.r-project.org/
106. Lumley T. leaps: Regression subset selection. Package version 2.9 [Internet]. 2009. Available: https://cran.r-project.org/web/packages/leaps/
107. Liaw A, Wiener M. randomForest: Breiman and Cutler’s random forests for classification and regression. Package version 4.6–12 [Internet]. 2015. Available: https://cran.r-project.org/web/packages/randomForest/
108. Bates D, Maechler M, Bolker B, Walker S, Christensen RHB, Singmann H, Dai B, et al. lme4: Linear mixed-effects models using “Eigen” and S4. Package version 1.1–12 [Internet]. 2016. Available: https://cran.r-project.org/web/packages/lme4/
109. Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest: Tests in linear mixed effects models. Package version 2.0–32 [Internet]. 2016. Available: https://cran.r-project.org/web/packages/lmerTest/
110. Breiman L. Random forests. Mach Learn. 2001;45: 5–32.
111. Kirkpatrick S, Gelatt CD Jr., Vecchi MP. Optimization by simulated annealing. Science. 1983;220: 671–680. pmid:17813860
112. Cleveland WS. LOWESS: A program for smoothing scatterplots by robust locally weighted regression. Am Stat. 1981;35: 54.
113. Ciocca G, Corchs S, Gasparini F, Bricolo E, Tebano R. Does color influence image complexity perception? International Workshop on Computational Color Imaging (CCIW 2015). 2015. pp. 139–148. https://doi.org/10.1007/978-3-319-15979-9
114. Loy G, Zelinsky A. Fast radial symmetry for detecting points of interest. IEEE Trans Pattern Anal Mach Intell. 2003;25: 959–973.
115. Lindell AK, Mueller J. Can science account for taste? Psychological insights into art appreciation. J Cogn Psychol. 2011;23: 453–475.
116. Ticini LF, Rachman L, Pelletier J, Dubal S. Enhancing aesthetic appreciation by priming canvases with actions that match the artist’s painting style. Front Hum Neurosci. 2014;8: 391. pmid:24917808
117. Chamorro-Premuzic T, Reimers S, Hsu A, Ahmetoglu G. Who art thou? Personality predictors of artistic preferences in a large UK sample: The importance of openness. Br J Psychol. 2009;100: 501–516. pmid:19026107
118. Leder H, Goller J, Rigotti T, Forster M. Private and shared taste in art and face appreciation. Front Hum Neurosci. 2016;10: 1–7.
119. Palmer SE, Griscom WS. Accounting for taste: Individual differences in preference for harmony. Psychon Bull Rev. 2013;20: 453–461. pmid:23242798
120. Cui T, Wang X, Teo H. Effects of cultural cognitive styles on users’ evaluation of website complexity. 33rd International Conference on Information Systems (ICIS 2012). Orlando; 2012. pp. 1–17.
121. Chamorro-Premuzic T, Burke C, Hsu A, Swami V. Personality predictors of artistic preferences as a function of the emotional valence and perceived complexity of paintings. Psychol Aesthet Creat Arts. 2010;4: 196–204.
122. Chirumbolo A, Brizi A, Mastandrea S, Mannetti L. “Beauty is no quality in things themselves”: Epistemic motivation affects implicit preferences for art. PLoS One. 2014;9: e110323. pmid:25360697
123. Fayn K, MacCann C, Tiliopoulos N, Silvia PJ. Aesthetic emotions and aesthetic people: Openness predicts sensitivity to novelty in the experiences of interest and pleasure. Front Psychol. 2015;6: 1–11.
124. Furnham A, Avison M. Personality and preference for surreal paintings. Pers Individ Dif. 1997;23: 923–935.
125. Lyssenko N, Redies C, Hayn-Leichsenring GU. Evaluating abstract art: Relation between term usage, subjective ratings, image properties and personality traits. Front Psychol. 2016;7: 1–9.
126. Wiersema DV, van der Schalk J, van Kleef GA. Who’s afraid of red, yellow, and blue? Need for cognitive closure predicts aesthetic preferences. Psychol Aesthet Creat Arts. 2012;6: 168–174.