The digital culture we now live in was hard to imagine twenty years ago, when the Internet was hardly used outside science departments, interactive multimedia was just becoming possible, CDs were a novelty, mobile phones unwieldy luxuries and the World Wide Web did not exist. The social and cultural transformations made possible by these technologies are immense. During the last twenty years, these technological developments have begun to touch on almost every aspect of our lives. Nowadays most forms of mass media, such as television, recorded music and film, are produced and even distributed digitally; and these media are beginning to converge with digital forms, such as the Internet, the World Wide Web and video games, to produce a seamless digital mediascape.
At work we are surrounded by technology, whether in offices or in supermarkets and factories, where almost every aspect of planning, design, marketing, production and distribution is monitored or controlled digitally. Galleries and museums are far from exempt from the effects of these technological transformations. Indeed, it might be suggested that museums and galleries are profoundly affected and that the increasing ubiquity of systems of information manipulation and communication presents particular challenges to the art gallery as an institution. At one level these challenges are practical: how to take advantage of the new means of dissemination and communication these technologies make possible; how to compete as a medium for cultural practice in an increasingly media-saturated world; how to engage with new artistic practices made possible by such technologies, many of which present their own particular challenges in terms of acquisition, curation and interpretation.
Arguably, at another level the challenges are far more profound: they concern the status of institutions such as art galleries in a world where such technologies radically bring into question the way museums operate. This is particularly true of ‘real-time’ technologies with the capacity to process and present data at such a speed that the user feels the machine’s responses to be more or less immediate. Real-time computing underpins the whole apparatus of communication and data processing by which our contemporary techno-culture operates. Without it we would have no e-mail, word processing, Internet or World Wide Web, no computer-aided industrial production and none of the invisible ‘smart’ systems with which we are surrounded. ‘Real time’ also stands for the more general trend towards instantaneity in contemporary culture, involving increasing demand for instant feedback and response, one result of which is that technologies themselves are beginning to evolve ever faster. The increasing complexity and speed of contemporary technology is the cause of both euphoria and anxiety.
This is reflected in the recent work of a number of influential commentators. Richard Beardsworth states that ‘[O]ne of the major concerns of philosophical and cultural analysis in recent years has been the need to reflect upon the reduction of time and space brought about by contemporary processes of technicisation, particularly digitalisation’.1 In an essay published, ironically perhaps, on-line, the literary theorist J. Hillis Miller describes some of the symptoms of our current technological condition:
As this epochal cultural displacement from the book age to the hypertext age has accelerated we have been ushered ever more rapidly into a threatening living space. This new electronic space, the space of television, cinema, telephone, videos, fax, e-mail, hypertext, and the Internet, has profoundly altered the economies of the self, the home, the workplace, the university, and the nation-state’s politics. These were traditionally ordered around the firm boundaries of an inside-outside dichotomy, whether those boundaries were the walls between the home’s privacy and all the world outside or the borders between the nation-state and its neighbours. The new technologies invade the home and the nation. They confound all these inside/outside divisions. On the one hand, no one is so alone as when watching television, talking on the telephone, or sitting before a computer screen reading e-mail or searching an Internet database. On the other hand, that private space has been invaded and permeated by a vast simultaneous crowd of ghostly verbal, aural, and visual images existing in cyberspace’s simulacrum of presence. Those images cross national and ethnic boundaries. They come from all over the world with a spurious immediacy that makes them all seem equally close and equally distant. The global village is not out there, but in here, or a clear distinction between inside and out no longer operates. The new technologies bring the unheimlich ‘other’ into the privacy of the home. They are a frightening threat to traditional ideas of the self as unified and as properly living rooted in one dear particular culture-bound place, participating in a single national culture, firmly protected from any alien otherness. They are threatening also to our assumption that political action is based in a single topographical location, a given nation-state with its firm boundaries, its ethnic and cultural unity.2
French philosopher Bernard Stiegler points to a ‘technicisation of all domains’ being ‘experienced on a massive scale’. This is leading to ‘countless problems’, including:
The installation of a generalised ‘state of emergency’ caused not only by machines that circulate bodies but by data-transport networks: the growing paucity of ‘messages’, illiteracy, isolation, the distancing of people from one another, the extenuation of identity, the destruction of territorial boundaries; unemployment – robots seeming designed no longer to free humanity from work but to consign it either to poverty or stress; threats surrounding choices and anticipations, owing to the delegation of decision-making procedures to machines that are on the one hand necessary, since humanity is not fast enough to control the processes of informational change (as is the case for the electronic stockmarket network), but on the other hand also frightening, since this decision making is combined with machines for destruction (for example in the case of polemological networks for the guidance of ‘conventional’ or non-‘conventional’ missiles, amounting to an imminent possibility of massive destruction); and, just as preoccupying, the delegation of knowledge, which not only modifies radically the modes of transmission of this knowledge but seems to threaten these forms with nothing less than sheer disappearance.3
To these problems Stiegler adds the ‘extraordinary influence on behaviour by the media, which controls the production of news that is transmitted without delay to enormous population masses of quite diverse cultural origins, by professionals whose activity is “rationalised” following exclusively market-oriented criteria within an ever more concentrated industrial apparatus’. He suggests that:
In this age of contemporary technics, it might be thought that technological power risks sweeping the human away. Work, family and traditional forms of communities would be swept away by the deterritorialisation (that is, by destruction) of ethnic groups, and also of nature and politics (not only by the delegation of decision making but by the ‘marketisation’ of democracy), the economy (by the electronisation of the financial activity that now completely dominates it), the alteration of space and time (not only inter-individual spaces and times, by the globalisation of interactions through the deployment of telecommunication networks, the instantaneity of the processes, the ‘real time’ and the ‘live’, but also the space and time of the ‘body proper’ itself, by tele-aesthesia or ‘tele-presence’).4
Friedrich Kittler suggests that the digitisation and circulation of information made possible by the installation of optical fibre networks is driven by Pentagon plans to construct a communications network that would not be disrupted by the electro-magnetic pulse that accompanies a nuclear explosion. This, in turn, is fundamentally altering our experiences of the media:
Before the end, something is coming to an end. The general digitisation of channels and information erases the differences among individual media. Sound and image, voice and text are reduced to surface effects, known to consumers as interface. Sense and the senses turn into eyewash. Their media-produced glamour will survive for an interim as a by-product of strategic programs. Inside the computers themselves everything becomes a number: quantity without image, sound, or voice. And once optical fiber networks turn formerly distinct data flows into a standardised series of digitised numbers, any medium can be translated into any other. With numbers, everything goes. Modulation, transformation, synchronisation: delay, storage, transposition; scrambling, scanning, mapping – a total media link on a digital base will erase the very concept of medium. Instead of wiring people and technologies, absolute knowledge will run as an absolute loop.5
Kittler does at least concede that ‘there still are media; there is still entertainment’. Literary theorist Bernhard Siegert is somewhat more apocalyptic, in that he sees the development of real-time networks leading to the end of art altogether:
The impossibility of technologically processing data in real time is the possibility of art. As long as processing in real time was not available, data always had to be stored intermediately somewhere – on skin, wax, clay, stone, papyrus, linen, paper, wood, or on the cerebral cortex – in order to be transmitted or otherwise processed. It was precisely in this way that data became something palpable for human beings, that it opened up the field of art. Conversely it is nonsensical to speak of the availability of real-time processing, insofar as the concept of availability implies the human being as subject. After all, real-time processing is the exact opposite of being available. It is not available to the feedback loops of the human senses, but instead to the standards of signal processors, since real-time processing is defined precisely as the evasion of the senses.6
Meanwhile Andreas Huyssen suggests that one response to the ever-greater ubiquity of real-time systems is an increasing interest in memory. Writing about the building of Holocaust memorials, Huyssen observes that:
Both personal and social memory today are affected by an emerging new structure of temporality generated by the quickening pace of material life on the one hand and by acceleration of media images and information on the other. Speed destroys space, and it erases temporal distance. In both cases, the mechanism of physiological perception is altered. The more memory we store on data banks, the more the past is sucked into the orbit of the present, ready to be called up on the screen. A sense of historical continuity or, for that matter, discontinuity, both of which depend on a before and an after, gives way to the simultaneity of all times and spaces readily accessible in the present.7
Elsewhere Huyssen proposes that:
Our obsession with memory functions as a reaction formation against the accelerating technical processes that are transforming our Lebenswelt (lifeworld) in quite distinct ways. [Memory] represents the attempt to slow down information processing, to resist the dissolution of time in the synchronicity of the archive, to recover a mode of contemplation outside the universe of simulation and fast-speed information and cable networks, to claim some anchoring space in a world of puzzling and often threatening heterogeneity, non-synchronicity, and information overload.8
Huyssen thus suggests one idea of what the role of the museum or gallery might be in our current technological condition: a place of ‘resistance to’ and ‘contemplation outside’ of the effects of ‘accelerating technical processes’. Indeed, museums and galleries deal with things, objects, whose very materiality would seem to make them resistant to the transformations wrought on other discourses by electronic and digital media. It would certainly seem, from visiting a gallery such as Tate Modern, that art is still very much a matter of producing such objects: paintings, sculptures and so on.
But the status of the museum or gallery in relation to ‘the accelerating technical processes that are transforming our life-world’ is more complex. As an archive, a form of artificial, external memory, the museum or gallery cannot stand outside, separate from and resistant to, the technical means that structure our memories. In the mid-1980s Jacques Derrida flagged for urgent attention:
[T]he immense questions of artificial memory and of modern modalities of archivation which today affects, according to a rhythm and with dimensions that have no common measure with those of the past, the totality of our relation to the world (on this side of or beyond its anthropological determination): habitat, all languages, writing, ‘culture’, art (beyond picture galleries, film libraries, video libraries, record libraries), literature (beyond libraries), all information or informatisation (beyond ‘memory’ data banks), techno-sciences, philosophy, (beyond university institutions) and everything within the transformation which affects all relations to the future.9
Derrida pursues this theme in his book Archive Fever, where he suggests that:
[T]he archive is not only the place for stocking and for conserving an archivable content of the past which would exist in any case, such as, without the archive, one still believes it was or will have been. No, the technical structure of the archiving archive also determines the structure of the archivable content even in its very coming into existence and in its relationship to the future. This archivisation produces as much as it records the event.10
He continues:
[W]e should not close our eyes to the unlimited upheaval under way in archival technology. It should above all remind us that the said archival technology no longer determines, will never have determined, merely the moment of the conservational recording, but rather the very institution of the archivable event. This archival technique has commanded that which in the past even instituted and constituted whatever there was as anticipation of the future.11
The gallery is as performative as it is constative. It creates the past it supposedly simply shows by what it chooses to accept as a donation, to buy, to curate, conserve and display. Thus it affects not just our understanding of and access to the past, but also our relation to the future, by choosing the legacies that are available to us and to future generations. And this is not just a question of taste, fashion, finances and so on. It is fundamentally bound up with the structure of the gallery as an institution, in terms of its understanding of its role, its intentions and duties, and even its physical embodiment. For example, the most cursory comparison between the history of post-war art and the Tate’s holdings will demonstrate that, for all its intentions to represent, as best it is able, the art of that period, there are many forms of practice with which it has failed to engage, or has engaged only partially or belatedly. These include Cybernetic Art, Robotic Art, Kinetic Art, Telematic Art, Computer Art and net.art.
It is far from coincidental that all these, and others I have not mentioned, are practices that emerged either in reaction against or in response to the increasing importance and ubiquity of information and communications technologies, such as telephony, television, computing and networking. It is not, of course, that Tate is deliberately following a policy of exclusion. It is rather that an institution founded in and for the very different conditions of art production and reception of the late nineteenth century is simply not properly equipped to show such work, at least not as it is presently constituted.
Yet such work has a history that goes back to the Second World War. The War had necessitated a number of important technological developments, including digital computing and radar, and had given rise to related discourses such as Cybernetics, Information Theory and General Systems Theory. In the decades that followed the War, artistic responses to the possibilities that these technologies and ideas offered proliferated, often facilitated or inspired by the emigration of artists and designers connected to Kineticism and the Bauhaus to the United States. In the 1950s and early 1960s John Cage developed work that engaged with notions of interaction and multimedia and with the possibilities of electronics, such as his famous ‘silent piece’, 4’33”. His work was one of the main inspirations not just for other composers working with electronic means but also for artists interested in process, interaction and performance, such as Allan Kaprow and those involved with the Fluxus Group.
In the United States the 1950s also saw some of the first electronic artworks, made by, among others, Ben Laposky and John Whitney Sr, as well as some of the first experiments in computer-generated music, by Max Mathews at Bell Labs. Meanwhile, in Europe, composers such as Pierre Boulez, Edgard Varèse and Karlheinz Stockhausen were also experimenting with electronics, while artists such as Jean Tinguely, Pol Bury, Nicolas Schöffer, Takis, Otto Piene, Julio Le Parc, Tsai Wen-Ying and Len Lye (also known as a filmmaker), and groups such as Le Mouvement, the ‘New Tendency’, ZERO and the Groupe de Recherche d’Art Visuel, started to explore the possibilities of Kineticism and cybernetics for art. This work was accompanied and encouraged by that of theorists such as Abraham Moles in France and Max Bense in Germany, both of whom wrote works in which information theory and cybernetics were applied to art. Bense was able to put his ideas into practice through his founding of the Stuttgart University Art Gallery. During his two-decade-long tenure as its head, the Gallery held some of the very first exhibitions of computer art.
In Britain a generally pastoral and anti-technological attitude had prevailed in the arts since the nineteenth century, though there were exceptions such as the Vorticist movement in the early twentieth century. But the major force for promoting technological and systems ideas in this country was the short-lived but influential ‘Independent Group’ (IG), which was a loose collection of young artists, designers, theorists and architects connected with the Institute of Contemporary Arts. Through shows and discussions at the ICA and elsewhere, advanced ideas about technology, media, information and communications theories and cybernetics were presented and debated. The most famous exhibition with which the IG was connected was This is Tomorrow at the Whitechapel Art Gallery in 1956, which explored many of these ideas with great panache.
The needs of nuclear defence in particular, and military funding more generally, had led to the development of the computer as an interactive visual medium, rather than simply a ‘number cruncher’. Along with other technological developments this produced an increased interest in the possibilities of such technology as a tool for art. In 1965 and 1966 the first exhibitions of computer art were held at the Stuttgart University Art Gallery and the Howard Wise Art Gallery in New York. In the late 1960s the increasing sophistication and availability of technologies such as computers and video and the ideas of theorists such as Buckminster Fuller and Marshall McLuhan gave further impetus to the development of art practices involving both the technologies themselves and related concepts. It is possible to discern the emergence of a utopian ‘systems aesthetic’, in which the combination of new technologies and ideas about systems, interaction and process would produce a better world. Artists, composers, filmmakers, scientists, architects and designers all seized upon the possibilities of new technologies and ideas to produce work that either involved such technology or alluded to the world it was helping to bring about.
Among the more important artists and groups were Roy Ascott, David Medalla and Gordon Pask in Britain, all of whom employed ideas derived from Cybernetics; Lillian Schwartz, Edward Zajac, Charles Csuri, Ken Knowlton and Leon Harmon, and Michael Noll, who pioneered computer graphics in the United States, while Manfred Mohr and others connected with Max Bense did the same in Germany; the filmmakers Stan Vanderbeek and Len Lye; and the Fluxus members Wolf Vostell and Nam June Paik, who were among the first to use televisions in their work. Paik, whose work also involved other technologies such as tape, was also one of the first artists to take advantage of the development of portable video cameras, producing some of the first video art, a practice taken up by other young artists of the time, including Les Levine and Bruce Nauman. At the same time other technologies, such as electronics, lasers and light systems, were exploited by artists such as Vladimir Bonacic, Otto Piene and Dan Flavin. One of the most important developments of the period was that of large-scale multimedia environments. Among those involved in such work were Robert Rauschenberg, Robert Whitman, John Cage, La Monte Young and Marian Zazeela and their Theater of Eternal Music, Mark Boyle, and groups such as USCO and Pulsa. This type of work intersected with developments in psychedelic rock music and underground entertainment. Many of those later considered to be part of Conceptual Art were then allied with these kinds of projects.
Some of the most important work was undertaken under the aegis of ‘Experiments in Art and Technology’ (EAT), a group founded by Billy Klüver and Robert Rauschenberg and dedicated to fostering collaborations between artists and engineers. In 1966 EAT held its famous show 9 Evenings at the Armory in New York. Over the eponymous nine evenings a series of collaborative happenings were staged, involving both artists and engineers. In the years that followed a number of major exhibitions involving new technologies were held, including The Machine as Seen at the End of the Mechanical Age at MOMA, New York in 1968, which was accompanied by a show of work commissioned by EAT, Some More Beginnings, at the Brooklyn Museum. In the same year the legendary exhibition Cybernetic Serendipity, curated by Jasia Reichardt, was held at the ICA in London. A year later there was Art by Telephone in Chicago and Event One in London (the latter organised by the Computer Arts Society, the British equivalent of EAT). In 1970 critic and theorist Jack Burnham organised Software, Information Technology: Its New Meaning for Art at the Jewish Museum in New York. Like Cybernetic Serendipity this show mixed the work of scientists, computer theorists and artists with little regard for any disciplinary demarcations. In 1971 the results of Maurice Tuchman’s five-year Art and Technology programme were shown at the Los Angeles County Museum.
Jack Burnham and Jasia Reichardt were also among those who produced critical works on the subject of art, science and technology. Burnham published his magnum opus Beyond Modern Sculpture in 1968. At around the same time Reichardt published a number of works, including a special issue of Studio International to accompany her exhibition, while Gene Youngblood published Expanded Cinema, an extraordinarily prescient vision of experimental video and multimedia. How important this area was then considered is demonstrated by the fact that Thames and Hudson published two books on art and technology within two years of each other: Science and Technology in Art Today by Jonathan Benthall in 1972 and Art and the Future by Douglas Davis in 1973, the same year in which Stewart Kranz produced his monumental work Science & Technology in the Arts: A Tour Through the Realm of Science/Art.
It is hard to recapture the utopian energy and belief embodied in these exhibitions and publications. As far as Reichardt, Burnham, Davis and others were concerned, the future of art lay in engaging with the concepts, technologies and systems through which society was increasingly organised. Yet the apogee of this thoroughly utopian project also represented the beginning of its demise, and the replacement of its idealism and techno-futurism with the irony and critique of Conceptual Art. To begin with at least, it was hard to distinguish between conceptual art and systems art; indeed, for much of the time they were interchangeable. But by 1970 the difference was becoming clear. That year, the same year as Burnham’s Software show, Kynaston McShine curated an exhibition at MOMA whose title, Information, placed it firmly in systems art territory. It showed some of the same people as Software, but it did not include the technologists and engineers who had featured in the earlier show. Furthermore, the general attitude evinced by the artists towards technology was increasingly distanced and critical. Perhaps one of the last gasps of systems art came in 1971, when Robert Morris, now considered a paradigmatic conceptual artist, had a show at the Tate. Though it did not involve technology per se, the show was almost entirely concerned with interaction, feedback and process, with visitors encouraged to climb on and manipulate the works on display. (A show further from the arid intellectualism that supposedly characterises Conceptual Art is hard to imagine.) Famously, the show was closed after five days, and only reopened in a far less interactive form.
Thus the early 1970s saw the apparent disappearance of systems art and its supersession by other approaches. Its failure, if it can be so described, can be put down to a number of factors: the quality of much of the work itself; the failure of the exhibitions to work as intended; a rejection on the part of artists of the collaborations with industry necessary to realise projects and exhibitions; a suspicion of the technocratic pretensions of systems art and of cybernetics, with its roots in the military-industrial-academic complex; a suspicion of technologies such as computers as means of perpetuating an instrumental and scientistic view of the world, particularly in light of their use in the Vietnam War and elsewhere; and, finally, difficulties in collecting, conserving and commodifying such work. The souring of the counter-culture in the early 1970s and the economic crises of the same period did little to encourage any kind of technologically based utopianism. In the 1970s and 1980s video art was gradually subsumed by the mainstream art world, but new media, electronic, computer and cybernetic art was largely ignored. Such art continued to be made and taught, but it was mostly shown in specialist and trade shows such as Siggraph, the annual conference organised by the Association for Computing Machinery for those with an interest in graphics. Many of those who had considered themselves artists working with technology ended up working in industry.
Yet, at another level, systems art can be said to have succeeded incredibly well, though not as art. The economic crises led to a restructuring of capitalist economies and global finance, which was aided by the increasing ubiquity of networked computing. This in turn heralded the beginnings of what became known as the post-industrial economy, in which information became the dominant mode of production (in the developed countries at least), as predicted by pundits such as Alvin Toffler and Daniel Bell. The techno-utopianism of the 1960s art world re-emerged in the 1970s in relation to developments such as the personal computer and the Internet, through which technologies developed during the Cold War by the ‘Military-Industrial-Academic complex’ were appropriated and repurposed by the neo-liberal end of the counter-culture. The late 1970s saw not just those developments but also the beginnings of computer special effects, video games and user-friendly systems, as well as cultural responses such as Cyberpunk fiction, Techno music and Deconstructive graphic design. At the end of the decade two French academics, Simon Nora and Alain Minc, wrote a report for President Giscard d’Estaing which declared the ‘computerisation of society’ and the advent of ‘telematics’, meaning the coming together of computers and telecommunications. Work made in this period includes that of Douglas Davis, Harold Cohen and his program ‘Aaron’, Stelarc, Jeffrey Shaw (whose works include Legible City), Lillian Schwartz, Paul Brown and Robert Adrian X.
It is around this time that discourses such as Poststructuralism and Postmodernism began to emerge, partly as a critical response to the ubiquity and power of information technologies and communications networks. The writings of Derrida, Baudrillard, Jameson, Deleuze and Guattari, and Lyotard, whatever the differences in their approaches and their ostensible subject matter, always imply a critique of systems and communications theories. It was possibly the space opened up by this critical approach that began to make systems art of interest to the mainstream art world again. In 1979 the first Ars Electronica festival was held in Linz, Austria, with the aim of exploring the artistic applications of computers and electronic technologies. In 1985 the philosopher Jean-François Lyotard curated a massive exhibition at the Beaubourg, Les Immatériaux, which aimed to show the cultural effects of new technologies of communication and information. It was also around this time that the Tate put on its first show of computer-generated art: the 1983 exhibition of work produced by Harold Cohen’s ‘Aaron’, an artificial-intelligence program which drives a drawing machine.
But it was really at the end of the 1980s and the beginning of the 1990s that systems art began to re-emerge. This period also saw the beginnings of the World Wide Web (WWW), though it would take a few years for it to become widely available. In Liverpool in 1988 Moviola, an agency for the commissioning, promotion, presentation and distribution of electronic media art, was founded; under its aegis Videopositive, an annual festival of such art, was held. (Moviola later transmogrified into the Foundation for Art and Creative Technology, or FACT.) In the same year the first International Symposium on the Electronic Arts (ISEA) was held. A year later the Zentrum für Kunst und Medientechnologie (ZKM) was founded in Karlsruhe, Germany, which remains a major centre for media and technology arts.
In 1990 a similar institution was opened in Japan, the NTT InterCommunication Centre, Tokyo, while the San Francisco Museum of Modern Art held its first show of new media art. Throughout the 1990s the Walker Art Center in Minneapolis showed digital and new media works. It was also around this time that the first use of computers for the public display of information was undertaken at the National Gallery in London. In 1993 the Guggenheim in New York held an exhibition, Virtual Reality: An Emerging Medium, followed three years later by Mediascape. In 1994 the first Lovebytes festival of electronic art was held in Sheffield, and in 1997 the Barbican Art Gallery put on the Serious Games: Art, Technology and Interaction exhibition, curated by Beryl Graham. In Hull the Time-Based Arts Centre was started, with a remit to concentrate on new media arts and the intention to build a large Centre for Time-Based Arts. Last year FACT opened a new media arts centre in Liverpool, while the Baltic in Gateshead has committed itself to increasing its involvement in new media arts, as will West Bromwich’s new arts space The Public (formerly c/Plex), when it opens. It is noticeable that the only institution in London putting on gallery displays of such work is the Science Museum.
Perhaps the most important event in terms of digital art practice at this time was the development of the first user-friendly web browser, Mosaic, in 1993. The World Wide Web had been developed as a result of the pioneering ideas of Tim Berners-Lee, a British scientist at the European Organization for Nuclear Research (CERN) in Switzerland. Berners-Lee was interested in using the Internet to allow access to digital documents. To this end he developed an application of the Standard Generalised Markup Language (SGML) used in publishing, which he called Hypertext Markup Language, or HTML. This would allow users to make texts and, later on, pictures available to viewers with appropriate software, and to embed links from one document to another. The emergence of the Web coincided almost exactly with the collapse of the Soviet Union, and it was the new-found sense of freedom and the possibilities of cross-border exchange, as well as funding from the European Union and NGOs such as the Soros Foundation, that helped foster the beginnings of net art in Eastern Europe, where much of the early work was done.
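To illustrate what this involved (a minimal, hypothetical sketch rather than an example drawn from Berners-Lee’s own documents), an HTML page consists of ordinary text interspersed with bracketed tags, with the anchor tag <a> embedding a link to another document:

    <html>
      <head><title>A Simple Page</title></head>
      <body>
        <p>This text is marked up for display by a browser.</p>
        <!-- The href attribute names the destination; the address below is purely illustrative -->
        <p><a href="http://example.org/another-page.html">A link to another document</a></p>
      </body>
    </html>

Clicking on the rendered link instructs the browser to fetch the named document, which is all that embedding ‘links from one document to another’ requires; the sketch uses only elements present in HTML from its earliest versions.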
When ‘user-friendly’ browsers such as Mosaic and Netscape came out in the early to mid-1990s, the possibilities of the Web as a medium were seized upon by a number of artists, who started producing work under the banner of ‘net.art’: work that was at least partly made on and for the Web and could only be viewed on-line. The term ‘net.art’ was supposedly coined by Vuk Cosic in the mid-1990s to refer to artistic practices involving the World Wide Web, after he had received an email composed of ASCII gibberish in which the only readable elements were the words ‘net’ and ‘art’ separated by a full stop. Since then there has been an extraordinary efflorescence of work done under the banners of network art, net.art or net art, from Vuk Cosic, Olia Lialina, Alexei Shulgin, Rachel Baker, Heath Bunting, Paul Sermon, 0100101110101101.org, Natalie Bookchin, Lisa Jevbratt, Radioqualia, ®™ark, Matthew Fuller, Thomson and Craighead, and many others.
At the same time, discussions and commentary about technology and art have proliferated through email lists such as Rhizome, Nettime and CRUMB (Beryl Graham and Sarah Cook’s digital curation list based at Sunderland University), as well as publications such as Mute. As in the late 1960s and early 1970s there have been a number of important publications on this area by, among others, Lev Manovich, Christiane Paul, Oliver Grau, Stephen Wilson, Edward Shanken, and Michael Rush, as well as PhDs in Art History departments on, for example, net.art (Josephine Berry, Manchester) and Computer Art (Nick Lambert, Oxford). Art history departments here and abroad are now starting to look seriously at this area, as shown by the Arts and Humanities Research Board’s recent decision to award Birkbeck College, University of London, a large research grant to study early British computer art, as well as many similar projects elsewhere. As the above suggests, there is a great deal of interesting and important work going on in this area, in terms of actual art practice as well as of institutional and academic engagement.
Such work reflects the fact that our lives are now so bound up with what Donna Haraway calls the ‘integrated circuit’ of hi-tech capital. It would be hard to overstate the extent to which our lives are governed by technologically advanced processes and systems, from ubiquitous and increasingly invisible computer networks to mobile telephony, genetic manipulation, nanotechnology, artificial intelligence and artificial life. These technologies are intimately bound up with broader issues of globalisation, surveillance, terrorism, pornography and so on. The work undertaken in the 1960s and 1970s now looks remarkably prescient in its attention to the meaning and potential of new technologies, networks and paradigms of representation and engagement.
Yet, despite this and the proliferation of current practice in this area, such work is still under-represented at Tate. Obviously there have been welcome developments, including the net.art commissions and the Art and Money Online show, as well as the increasing interest in film, video and photography. But, welcome as these moves to encompass various forms of new media practice are, they mostly fail to engage with the kind of work mentioned previously. In particular, work that is interactive or process-based, or that involves networks, systems and feedback, is generally not catered for. The new media works now being collected and displayed by Tate are almost entirely static, even if they are time-based, in that they do not alter in response to interaction or their environment. This is true even of the net.art commissions.
But this is not an attempt to blame a particular institution for some kind of failure of perception and action. There are all sorts of good reasons why Tate should be wary of the work I have described: there are many difficulties in its collection, curation and display; there are other forms of art practice that have equal claim to Tate’s attention; and its historical and contemporary importance may not be obvious. Tate has also been exemplary in organising different means by which such work and its curation can be discussed, including the Matrix: Intersections of Art and Technology series of talks and the series of talks by well-known curators of new media art held at Tate Modern in autumn 2003. But I believe that there are also compelling reasons why Tate should be thinking about how to engage with such work. It has a long and important history, which intersects at crucial points with other better-known forms of art practice. Indeed, those practices would be very different without this kind of work. Renewed interest in it will enhance and deepen our understanding of artistic developments in the post-war era. I would go so far as to suggest that no attempt to understand the art of that period can succeed without taking such work into account.
Furthermore such practice, both in its historical and current manifestations, is of great importance in its capacity to engage with and reflect upon our current technological condition. This is one of the reasons why there are such a large number of artists working in this area. It is also why any move to collect and display such work is likely to prove very popular, especially among younger people. New technologies affect almost everybody, whatever their age, at work, at home or elsewhere. For most people in Britain under twenty-five or even thirty years of age, a world without video games, computer special effects, the Internet, the World Wide Web, mobile phones and so on, is almost unimaginable. The ubiquity of such technologies is symptomatic of deeper issues such as globalisation, genetic manipulation, and bio-terrorism, that are the concern of many people, young or old.
One of the ironies of net.art is that, despite being supposedly responsive to current developments, it repeats the gestures of previous avant-gardes. As I put it in my book Digital Culture:
Practically every trope or strategy of the post-war avant-garde has found new expression through net.art, including Lettriste-style hypergraphology, involving the representation of codes and signs, Oulipian combinatorial and algorithmic games, Situationist pranks, Fluxian or Johnsonian postal strategies, staged technological breakdowns, such as previously rehearsed in video art, virtual cybernetic and robotic systems, parodies and political interventions.12
But this repetition, far from being a reason to condemn network art, is precisely what gives it its strength, much as the neo-avant-garde of the 1960s drew strength from repeating gestures and strategies first enacted by the so-called historical avant-garde of the 1920s and 1930s. In his book The Return of the Real, Hal Foster describes the latter in terms of Nachträglichkeit, the Freudian term for deferred action, by which an experience only assumes a traumatic dimension upon repetition and the delayed assumption of a sexual meaning. As Foster puts it:
‘[O]ne event is only registered through another that recodes it; we come to be who we are only in deferred action’. Historical and neo-avant-gardes are constituted in a similar way, as a continual process of protension and retension, a complex relay of anticipated futures and reconstructed pasts – in short, in a deferred action that throws over any simple scheme of before and after, cause and effect, origin and repetition.13
He continues: ‘[O]n this analogy the avant-garde work is never historically effective or fully significant in its initial moments. It cannot be because it is traumatic – a hole in the symbolic order of its time that is not prepared for it, that cannot receive it, at least immediately, at least without structural change.’ Thus, despite its continued repressions, failures and supersessions, the avant-garde continues to return; but, as Foster puts it, ‘it returns from the future’. It opens out the future to the contingent and the incalculable, and thus to the promise of the to-come. The avant-garde is the archive of the future.
The same might be said about net.art. Commenting on net.artist Vuk Cosic’s training as an archaeologist and Cosic’s own proclamation of the similarities between net.art and archaeology, Julian Stallabrass suggests that:
Net art, then, is seen as an archaeology of the future, drawing on the past (especially of modernism), and producing a complex interaction of unrealised past potential and Utopian futures in a synthesis that is close to the ideal of Walter Benjamin.14 [my emphasis]
‘Archaeology’ is of course cognate with ‘archive’, and both are concerned with the preservation of the material remains of the past. Net art delineates the conditions of archiving in our current regime of telecommunications. Derrida reminds us that the question of the archive is:
[A] question of the future, the promise of the future itself, the question of a response, of a promise and a responsibility for tomorrow. The archive: if we want to know what that meant, we will only know in times to come. Perhaps.15