Decoding aesthetic value: Can machines learn beauty?
by Haynie Sze and Berfin Sahbal

Imagine looking at the website of a dog adoption center. Puppy pictures fill your screen. How does the algorithm decide which puppies to show you, or know which one you might like? This everyday question belongs to research on aesthetic judgement, which considers both measurable characteristics, such as form and proportion, and subjective preferences. Or think of a museum visit: a striking artwork catches our eye, we judge it beautiful, and we wonder how the artist created that beauty. Such judgements rely on aesthetic perception. But what is aesthetic perception? 'Aesthetic' is an elusive term for our sensory experiences. These experiences can be pleasurable, and hence valuable; they arise from a sensory system whose job is to perceive and predict the world. For humans, aesthetic values matter partly because they are tied to survival instincts, and they develop through learning and life experience. As technology has advanced, we have built machines that try to decode our aesthetic perceptions. These machines rest on our current aesthetic preferences, together with a set of assumptions. Those assumptions, developed through mathematical and engineering effort, help predict our likely preferences in future environments; in that sense, such a system may be regarded as a generative model of the sensory world. Its signals guide our interaction with the sensory world around us, so that we sense, perceive, and make decisions about the world.

Psychologist Aenne Brielmann explored how aesthetic values can be understood, and whether machines can capture everyday aesthetic experience by considering how it unfolds over time, beyond the familiarity effect. The first question is how aesthetic values are evaluated. Her perspective is that aesthetic value should not be judged merely by the instant pleasure we sense; it should also account for the long-term effect on our judgement, or wellbeing. This long-term effect rests more on learning than on the simple pleasure derived from repeated exposure, known as the mere exposure effect (Figure 1): the psychological phenomenon whereby people come to prefer things merely because they are familiar. This phenomenon forms the basis for the development of the aesthetics machine.
Brielmann's computational model formalizes this integrative theory of exposure and learning by mapping 'the learning machine' model onto an established set of experimental data on aesthetic preference and verifying its predictive power, a reverse-engineering approach. The model was fit to people's liking ratings of selected images. These ratings build on the established neuroaesthetics finding that the ease of recognizing an image, its 'processing fluency', is highly correlated with its sensory pleasantness. Brielmann's experiment extends this theory: by also capturing the effect of learning, expressed as the expected future reward of aesthetic value, the computational model yields predictions that agree more closely with the real data. The long-term outcome of learning can therefore be encoded in a computational model of aesthetic prediction.
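To make the exposure-plus-learning idea concrete, here is a minimal toy sketch, not Brielmann's actual model. It assumes images are feature vectors, that processing fluency grows as an image gets closer to an internal prototype, and that aesthetic value combines immediate fluency with the expected fluency gain from updating the prototype (the 'learning' term). All names, weights, and the prototype-update rule are illustrative assumptions.

```python
import numpy as np

def fluency(image, prototype):
    """Toy processing-fluency proxy: images closer to the internal
    prototype are easier to process, so fluency is higher."""
    return -np.linalg.norm(image - prototype)

def aesthetic_value(image, prototype, lr=0.2, w_learn=0.5):
    """Toy aesthetic value: immediate pleasure (fluency) plus the
    expected future gain in fluency from shifting the prototype
    toward this image (a stand-in for the learning term)."""
    now = fluency(image, prototype)
    new_prototype = prototype + lr * (image - prototype)
    gain = fluency(image, new_prototype) - now
    return now + w_learn * gain, new_prototype

# Repeated exposure shifts the prototype toward the image,
# raising fluency over time (a mere-exposure-like effect).
prototype = np.zeros(3)
image = np.array([1.0, 0.5, -0.3])
values = []
for _ in range(5):
    v, prototype = aesthetic_value(image, prototype)
    values.append(v)
# values rises across exposures as the image becomes familiar
```

In this sketch the immediate-fluency term grows with each exposure while the learning term shrinks, echoing the intuition that familiar things stay pleasant but teach us less.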
Can machines learn beauty? Brielmann's study confirmed that including learning improves prediction accuracy in a computational model of aesthetic preference. The predicted outcome of learning would provide a valuable reference for adjusting the complexity of a future artwork. We now know which puppy's features are most similar to a prototypical favorite's, and how other considerations shape our decision. Brielmann also suggested that a decision rule could be built in: by comparing the predicted aesthetic value to a set threshold, the machine may predict which puppy is most likely to be adopted. Machines, therefore, can be coded to predict beauty under controlled parameters and time-based and action-based assumptions.
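The threshold-based decision rule mentioned above can be sketched in a few lines. This is a hypothetical illustration, not code from the study: the puppy names, scores, and threshold are invented, and the rule simply keeps candidates whose predicted aesthetic value clears the threshold and returns the highest scorer.

```python
def predict_adoption(values, threshold=0.8):
    """Hypothetical decision rule: keep puppies whose predicted
    aesthetic value meets the threshold, then pick the maximum."""
    candidates = {name: v for name, v in values.items() if v >= threshold}
    if not candidates:
        return None  # no puppy crosses the adoption threshold
    return max(candidates, key=candidates.get)

scores = {"Biscuit": 0.92, "Mochi": 0.75, "Pepper": 0.88}
print(predict_adoption(scores))  # → Biscuit
```

Changing the threshold trades off confidence against coverage: a high threshold makes predictions rarer but more decisive.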

This integrative 'exposure plus learning' prediction model takes a big step beyond the established neuroaesthetics theories of the familiarity heuristic, processing fluency, and the mere exposure effect on inherent aesthetic judgement (Figure 2). Nevertheless, we should be aware that the current studies are still limited to data within clearly defined parameters of symmetry, complexity, and mere exposure. When we look at abstract visual artwork in a museum rather than at puppies, the issue becomes more complex. Moment-to-moment pleasures, such as those from a gallery full of pleasurable artworks, generate multiple, varying stimuli and responses that the current model does not yet capture. Objective traits of contemporary paintings also fall outside the scope of this study. And predicting a design solution might require an even more complex, adjustable mega-model trained on trillions of data points to produce generalizable and testable results.
Another question pops up in our unfinished puppy story. Can the machine tell which puppy our family might like the most? The issue then becomes whether a neuroaesthetics model or machine can accommodate idiosyncrasies and individual differences. Our aesthetic preferences rest heavily on our knowledge and subjective ideas when we form a final aesthetic judgement.
You may already have thought of the latest developments in AI, which currently mimic our choices based on our interaction history. Machine learning, however, may not yet capture the humanistic aspects of our choices.
So, can a machine tell which puppy you would adore most? We would suggest going to the adoption center. You will know.

Keywords: #processingfluency #mereexposure #learningoutcome
Reference
Brielmann, A. A., & Dayan, P. (2022). A computational model of aesthetic value. Psychological Review, 129(6), 1319–1337. https://doi.org/10.1037/rev0000337
