Far from being worlds apart, art and artificial intelligence meet and collaborate at an increasing number of points. Besides changing the concept of who can produce art and providing new ways to practice it, artificial intelligence offers a new way to approach and study the aesthetic experience, making it even more engaging and participatory.
At Stanford University, a team of researchers has taught computers to recognize not only which objects are present in an image, but also how those images make people feel, creating algorithms with "emotional intelligence". The group developed an algorithm called ArtEmis, built on 81,000 WikiArt paintings and 440,000 responses collected from over 6,500 participants, who rated each painting according to the emotion it evoked while viewing it and provided a brief explanation of the emotional reaction they chose.
Using these responses, the team trained the algorithm to classify a painting into one of eight emotional categories, from awe to amusement, from fear to sadness. Trained in this way, the algorithm can analyze a new image it has never seen and classify it by the emotion a viewer might feel in front of it. Moreover, it does not just capture the overall emotional experience of an image: it can also decipher different emotions within the same painting.
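The approach described above can be sketched in miniature. The following is an illustrative nearest-centroid classifier, not the actual ArtEmis model: it assumes each painting has already been reduced to a numeric feature vector (in practice these would come from a neural network; here they are tiny made-up vectors), and it averages the labeled examples for each emotion into a prototype that new images are matched against.

```python
import math

# Eight emotion categories in the style of ArtEmis
# (treat the exact labels here as an assumption).
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def train(labeled_vectors):
    """Average the feature vectors seen for each emotion into a centroid."""
    sums, counts = {}, {}
    for vec, emotion in labeled_vectors:
        acc = sums.setdefault(emotion, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[emotion] = counts.get(emotion, 0) + 1
    return {e: [v / counts[e] for v in acc] for e, acc in sums.items()}

def classify(centroids, vec):
    """Return the emotion whose centroid lies closest to the new image's vector."""
    return min(centroids, key=lambda e: math.dist(centroids[e], vec))

# Toy "dataset": 2-D feature vectors with made-up values.
responses = [([0.9, 0.1], "fear"), ([0.8, 0.2], "fear"),
             ([0.1, 0.9], "amusement"), ([0.2, 0.8], "amusement")]
model = train(responses)
print(classify(model, [0.85, 0.15]))  # nearest to the "fear" centroid
```

The real system learns far subtler boundaries from hundreds of thousands of responses, but the shape of the task is the same: labeled human reactions in, an emotion prediction for an unseen image out.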
When AI is the artist
In machine learning, a Generative Adversarial Network (GAN) is a pair of neural networks trained to compete against each other. One, called the generator, has the task of producing new data; the other, the discriminator, learns to distinguish real data from the artificially created data.
Through this dialogue, a GAN can process an impressive amount of data, largely outside direct human control, with completely unexpected results. GANs can be used, for example, to create highly realistic photographs of people who do not exist, starting from a sufficient number of real images.
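The adversarial dialogue can be shown in a toy form. The sketch below, an illustration rather than any production implementation, shrinks a GAN to one dimension: the "real" data are samples from a Gaussian, the generator is a linear map of random noise, and the discriminator is a logistic classifier; every value and hyperparameter here is an arbitrary choice for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real" data distribution to imitate

w_g, b_g = 1.0, 0.0              # generator:     G(z) = w_g * z + b_g
w_d, b_d = 0.1, 0.0              # discriminator: D(x) = sigmoid(w_d * x + b_d)
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(1000):
    real = rng.normal(REAL_MEAN, REAL_STD, size=32)
    z = rng.normal(size=32)
    fake = w_g * z + b_g

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b_d += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w_d * fake + b_d)
    g_grad = (1 - d_fake) * w_d      # gradient of log D(fake) w.r.t. G(z)
    w_g += lr * np.mean(g_grad * z)
    b_g += lr * np.mean(g_grad)

samples = w_g * rng.normal(size=1000) + b_g
print(f"generated mean: {samples.mean():.2f} (target {REAL_MEAN})")
```

Even in this tiny setting, the pattern of the competition is visible: the discriminator's feedback is the only signal the generator ever receives, and as each improves it forces the other to improve in turn, until the generated samples drift toward the real distribution.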
In October 2018, for example, the portrait Edmond de Belamy, created with the help of an AI algorithm, sold at auction for $432,500 at Christie's auction house. According to Christie's, the portrait had been created through the use of artificial intelligence: 15,000 portraits painted between the 14th and 20th centuries were fed into the system, and the two networks did the rest.
Moreover, several experimental platforms, such as Artbreeder, make this process available to anyone who wants to try their hand at it: a sort of "collaborative" artistic tool, freely accessible, for creating new images through the algorithms made available to users.
But artificial intelligence can also be useful for classifying works by artist, genre, and style. As more and more works of art are digitized, teaching computers to classify art can assist museum staff in performing these tasks.
Researchers at Zhejiang University of Technology, in China, recently published a paper on this topic, testing seven different algorithm models on three different groups of artworks and comparing how well people classified the works with and without such a tool. According to the article, the neural network models and computer-vision techniques used delivered state-of-the-art, highly refined results.
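A classification setup of this kind can be sketched as follows. This is an illustration, not the paper's actual models: it assumes the hard work of turning a painting into a feature vector has already been done (in practice by a convolutional network), and trains a small softmax classifier to assign a style label; the style names and clustered toy data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
STYLES = ["impressionism", "cubism", "baroque"]

# Toy dataset: 2-D "features", one well-separated cluster per style
# (hypothetical values standing in for real CNN features).
X = np.vstack([rng.normal(c, 0.3, size=(20, 2))
               for c in ([0, 0], [3, 0], [0, 3])])
y = np.repeat(np.arange(3), 20)

# Train a softmax (multinomial logistic) classifier by gradient descent.
W = np.zeros((2, 3))
b = np.zeros(3)
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0       # softmax cross-entropy gradient
    W -= 0.1 * X.T @ p / len(y)
    b -= 0.1 * p.mean(axis=0)

def predict(features):
    """Label a new painting's feature vector with the most likely style."""
    return STYLES[int(np.argmax(np.asarray(features) @ W + b))]

print(predict([3.1, 0.1]))  # a vector near the "cubism" cluster
```

Real systems juggle far more classes and far messier features, but the pipeline is the same: digitized works become vectors, and a trained classifier turns each vector into an artist, genre, or style label that can support the museum staff's own judgment.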