Identifying Abstract Art

An experiment with representation and abstraction in image classification systems.

The experiment tested the premise that non-representational images pose a particular challenge for classification algorithms. To test this idea, I compiled a dataset of digital images of abstract paintings by collecting the first 100 results for ‘abstract’ in the Metropolitan Museum of Art’s online collection database, relying on the museum’s own tagging of items. Each image was then analysed with the Wolfram image identifier, a widely used online image classification system, chosen because it makes machine learning accessible to the general public.
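The original collection was done through the museum’s web search interface, but a similar dataset could be assembled programmatically. The sketch below is a minimal, hypothetical reconstruction using the Met’s public collection API (assuming its current v1 search and object endpoints); it gathers image URLs only and does not include the classification step, which in the experiment was performed through the Wolfram image identifier’s web interface.

```python
import requests

# Hypothetical sketch: assembling a dataset of 'abstract'-tagged images via the
# Met's public collection API (v1). The original experiment used the museum's
# web search interface; endpoint paths and field names here follow the public API.
API = "https://collectionapi.metmuseum.org/public/collection/v1"

# Search for objects tagged/matching 'abstract' that have images.
search = requests.get(f"{API}/search", params={"q": "abstract", "hasImages": "true"}).json()
object_ids = (search.get("objectIDs") or [])[:100]  # first 100 results, as in the experiment

image_urls = []
for object_id in object_ids:
    record = requests.get(f"{API}/objects/{object_id}").json()
    url = record.get("primaryImageSmall") or record.get("primaryImage")
    if url:
        image_urls.append((record.get("title", ""), url))

print(f"Collected {len(image_urls)} image URLs")
```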

The results of the experiment showed that the Wolfram image identifier was unsuccessful at identifying abstract paintings, but the misclassifications it made were often curiously insightful. A total of 98% of the abstract images were incorrectly categorised as a wide variety of object classes. Only 2% of the images were correctly categorised as paintings, which may be because those particular images included picture frames. The experiment offered several insights into the relationship between abstraction and representation in algorithmic image systems, particularly the levels of meaning and interpretation involved in viewing images, which are not the same for humans and machines. Rather than framing the miscategorisations simply as failures, which they are on a technological level, we can read their ambiguities as opening new ways of understanding the cooperation between human and machine visual interpretation.

Although the abstract paintings analysed in this experiment bore little or no visual resemblance to the classes assigned to them by the computer, each misclassification could be interpreted as adding layers of poetic meaning to the respective image. For example, a hazy black-and-white image labelled “atmospheric phenomenon” and a composition of dabbed brush strokes labelled “imaginary being” take on different connotations when associated with those words. Likewise, the label “memory device” applied to a painting by Piet Mondrian suggests conceptual connotations that viewers might not connect to the image by looking alone, but which nonetheless add to the experience of the work.

Abstract images are not designed to function as adversarial examples, but the results of this experiment suggest that they are nevertheless effective at fooling otherwise reliable classification algorithms. The 98% misclassification rate achieved here is close to that of expressly designed adversarial examples (Nguyen, Yosinski, and Clune 2015). This may be due to a tendency of classification algorithms to assume that every input image belongs to one of their known classes. The results of the experiment therefore suggest that adversarial images may owe their success not to being specially designed to trick algorithms, but rather to being abstractions for which there is no image class in a system that insists that all images belong in a category.
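To make the point about forced categorisation concrete, here is a toy sketch, not the Wolfram identifier itself, showing how a softmax-based classifier always commits to some label even when the input resembles none of its classes; the class names and logit values are invented for illustration.

```python
import numpy as np

# Toy illustration (not the Wolfram identifier): a softmax classifier must
# distribute probability over its known classes, so even an input that fits
# none of them still receives a 'most likely' label.
classes = ["cat", "dog", "teapot", "atmospheric phenomenon"]

def classify(logits):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax: probabilities always sum to 1
    return classes[int(probs.argmax())], float(probs.max())

# Near-uniform logits stand in for an abstract image that resembles no class:
# the classifier still commits to a category, just with low confidence.
label, confidence = classify(np.array([0.11, 0.10, 0.12, 0.13]))
print(label, round(confidence, 3))            # e.g. 'atmospheric phenomenon' at ~0.25
```

The design point is that the output layer offers no “none of the above” option; low confidence is the only trace left by an image that falls outside the system’s categories.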

The ability of an ML system to pick up a semantic cue, like the picture frames in Identifying Abstract Art (Lee 2018), demonstrates an interesting capacity to associate patterns with meanings. This is much the same as any use of language, and if we move beyond the anthropomorphising tendencies that often forestall deeper discussion of this subject, it is also a development with interesting implications for conceptual art. Pattern, not only in the visual sense but also in the sense of frequency or other associations, may be linked to virtually any system of types or categories. The detection of the pattern that picture frames, whatever they surround, mean “painting” is therefore straightforward, yet it offers insight into structures that may be overlooked as externalities but that greatly influence a given interpretive system.

References

Lee, Rosemary. Identifying Abstract Art. 2018. Experiment.

Nguyen, Anh Mai, Jason Yosinski, and Jeff Clune. “Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images”. CoRR abs/1412.1897 (2015). http://arxiv.org/abs/1412.1897.
