Jameson 2.0: The Aesthetics of Cognitive Mapping

Digital cultures seem to give rise to a representational paradox: in a world saturated by data and information, we seem increasingly unable to grasp geopolitical and macro-economic relations. Our project proposes to address this crisis of representation via the concept of “cognitive mapping,” a term coined by Fredric Jameson in 1988. The conditions of a globalized economy, he argued, tend to open a gap between individual experience and the structures that determine it, rendering its coordinates inaccessible. The question of agency is thereby reconceptualized as a question of representation: the vectors of our global infrastructure do not lend themselves to pictorial treatment; to render the world intelligible, we need a different methodology. Drawing upon the urban theorist Kevin Lynch’s The Image of the City (1960), Jameson puts forth the concept of “cognitive mapping” as a tentative answer: a method for producing a situational representation by combining a cartographic approach with a narrative strategy.

Though Jameson wrote in a pre-commercial-internet era, the digital turn has, in our view, intensified the condition he described. Though equipped with a growing variety of optical media, we are increasingly unable to grasp the algorithmic complexity that surrounds us. And as data accumulates faster than our processing capabilities can structure it, the above-described crisis of representation acquires a twofold dimension, manifesting itself not only at the level of phenomenological experience but also at the level of informational proficiency. Because data’s primary mode of existence is not a visual one (Galloway: 2011), the question of representation becomes mainly a problem of conversion: of how to translate number into sign. Conversion, however, is not merely a technical operation; it is also an aesthetic one. Data does not have a ready-made form or structure. Whereas data belongs to the realm of the empirical, information ––that is, data that has taken on a form–– is tied to the aesthetic.

In addition, the rules, conventions and modalities under which conversion takes place constitute a form of mediality, which it is our task to analyse and describe. The approach captured by the term mediality shifts the focus from questions of data visualization or information design to the ways and means of mediation. Our goal is not simply to engage with visualization techniques but to describe conversion as a medial situation: which entities are assigned the function of a medium, and when do the effects of mediation become visible?

Until recently, the critical vocabulary commanded by art history and aesthetics allowed these disciplines to describe and analyse the whole scope of visual culture. With digital visual culture this is no longer the case. “Visual culture” is itself a misnomer when one addresses digital culture, because algorithms, information and data bear only a second-hand relation to the field of the visual. The disciplines that would traditionally deal with questions of representation are thus ill-equipped to describe the new forms of mediality that digital cultures engender. This affects not only their methods of study but also their objects of study. Contemporary art practices can no longer be addressed from a one-sided disciplinary perspective: contemporary art ––and not only net art or web-based art–– relies heavily on information design and data visualization software. Computer science, however, is in general not preoccupied with questions of sense-making, interpretation or narrative. Data visualization, though an important component, does not exhaust the methodology of “cognitive mapping,” since, as T.J. Clark noted, the very notion of mediation already entails some mixture of sensory-perceptual and semiotic elements.

Cartographic methods operate by drawing a distinction between data as impenetrable cacophony (what Galloway termed the “technical sublime”) and data as cognitively tractable. Aesthetics, under this conception, is what sensibly mediates between individual phenomenology and our cognitive maps of global structures. The aesthetic significance of cognitive mapping is that it provides a means to navigate these complex systems: a method that interweaves spatial visualization (cartography), subjective interpretation (hermeneutics) and information processing (IT), mapping the relative positions of deterritorialized entities and the socio-political nexus that structures their interaction. Yet however widespread as a method, “cognitive mapping,” and the specific aesthetics it engenders, has no status in the field of art history or in the domain of contemporary art. Our project’s task is to generate the critical vocabulary that would allow one to describe digital mediality and its mode(s) of representation, as well as the modalities of agency they afford or, conversely, foreclose. In what follows we detail our first two research subsets.

Deep Learning

Samsung’s Smart TVs come with a fine-print warning: if you enable voice recognition, your spoken words will be ‘captured and transmitted to a third party’, so you might not want to discuss personal or sensitive information in front of your TV. Even if voice recognition is disabled, Samsung will still collect your metadata – what you watch and when, including facial recognition data. The SmartSeries Bluetooth toothbrush from Oral-B, a Procter & Gamble company, connects to a brushing app on your smartphone, which keeps a detailed record of your dental hygiene. The company advertises that you can share such data with your dentist, though, in a privatized health market, the more likely purpose of such technology is to share data with your insurance company.

The cultural logic of the information age is predicated on an inversion of the gaze: within this fusion of surveillance and control, the screen, as Jonathan Crary has noted, “is both the object of attention and (the object) capable of monitoring, recording and cross-referencing attentive behaviour” (Crary: 1999). Data processing – whose reach spans the NSA, credit rating agencies and health insurance providers, up to the sorting algorithms used by Google or Instagram – is predictive, modeling future actions on previous behaviour. Data processing implies a model of temporality in which the past is a standing reserve of information, waiting to be mined. Big data, Shoshana Zuboff argues, “is not a technology or an inevitable technology effect. It is not an autonomous process […] It originates in the social, and it is there that we must find it and know it” (Zuboff: 2019).

What is colloquially called AI is, as Matteo Pasquinelli argues, a generalization of visual pattern recognition to the non-visual sphere. Deep learning can thus be divided into two subsets: classification on the one hand and prediction on the other, or pattern recognition and pattern generation (via predictive algorithms). Both instances call for a visual culture analysis, and to that end we will examine Matteo Pasquinelli and Vladan Joler’s “The Nooscope Manifested: Artificial Intelligence as Instrument of Knowledge Extractivism.”

Ghost Work

In the late eighteenth century a chess-playing automaton toured the courts of Europe. Known as the “Mechanical Turk,” the automaton defeated Napoleon and Benjamin Franklin before being exposed as a hoax: hiding in its innards, a human operator was, in fact, moving the chess pieces. In a way, this was a reverse Turing test, predating Turing: a kind of labor in which humans are required to pass for machines—with all that this passing entails, mainly a forfeiting of needs and rights and, more importantly, a forfeiting of time. In 2005 Amazon resurrected the “Turk” but generalized its principle: the Amazon Mechanical Turk is a crowd-sourcing internet marketplace that “enables individuals and businesses to coordinate the use of human intelligence to perform tasks that computers are currently unable to do.”[1] Technically speaking, every mechanism usurps a human function. Whereas technology is usually expected to render work obsolete, to free laborers from the curse of labor, in reality it tends to render workers more pliable and prone to exploitation, and ends up extracting machine-like labor from automated humans. Our project will survey several works, such as Hans Block and Moritz Riesewieck’s documentary The Cleaners, in order to map the ghost work phenomenon.

[1] See the Wikipedia entry on Amazon Mechanical Turk.