This interview takes as a starting point the VIDEO CACHE project. Mél’s research into defunct video art repositories online raises many questions about the ephemeral nature of digital culture, and the social/cultural parameters that frame the preservation of and access to such materials.
VIDEO CACHE is a research creation project emerging from Mél Hogan’s doctoral research (wayward.ca), in collaboration with Penny McCann, director of SAW Video in Ottawa, and Groupe intervention video (GIV) in Montreal.
VIDEO CACHE took place on November 24, 2010 at GIV. It was a public screening of ten works selected by McCann from the SAW Video Mediatheque collection, for which artists’ fees were paid by GIV. The Mediatheque is Canada’s first large-scale attempt to use the web as a ‘living archive’; its server crashed in 2009 and the project has been offline since. VIDEO CACHE was also a month-long online exhibit (http://www.wayward.ca/videocache/) showcasing these ten works, carefully documented and recontextualised for the web. The documentation for VIDEO CACHE remains online, and the event catalogue is available via print-on-demand (http://www.lulu.com/product/paperback/video-cache/13585058).
On the one hand, VIDEO CACHE served to document the Mediatheque project by updating the context and addressing in a practical way what it means to ‘activate’ the online archive. On the other hand, it was and remains an entity unto itself. VIDEO CACHE has become an opportunity for Hogan to bring a creative dimension to documentation and to address loss: while it is the ‘cache’ that makes the Mediatheque’s traces visible and re-visit-able, it is the ‘crash’ that signals its ongoing (archival) value.
You’ve talked about there being a paradox in the way digital culture is created and shared and the way it is preserved. How do you think preservation, creation, and use should be interrelated in the digital realm?
I don’t know that the paradox needs to be resolved so much as it needs to be acknowledged and understood within digital preservation debates. In my work what stands out is that more attention needs to be paid to digital flows, to circulation, and to the interface and database that facilitate and mask distribution online. Preservation, as an idea and as an ideal, is transformed online, though for some reason, stating this is always a bit controversial.
In archives (traditionally) the emphasis has been on long-term preservation, which more often than not has meant rendering ‘originals’ inaccessible in the present as a means to protect or safeguard them for the future. Because archival discourse and practice have come a long way in the last decade to adapt to the continually changing technoscape, I don’t want to make it sound like the tension is between the traditional, as material/offline, and the new, as digital/online. I concentrate on the digital online as a complex realm when I study the archive, but obviously the discourses and ideas are shared with, if not borrowed from, years of traditional archival theory. I think it is almost impossible not to rely on these established ideas and systems, but at the same time, I think it is important to move beyond them and beyond comparisons between material/digital, offline/online, mainly because the foundational archival concepts—the original, the authentic, and the integral—are conceived of differently in the digital realm. So there is a need for a new basis, a point of analysis that is of the web. We need to start talking about iteration, versions, repetition, and flow…
I think preservation, creation, and use are already interrelated in the digital realm—and that the archival conundrum actually lies in the fact that these elements are difficult to distinguish from each other. I think, if anything, the digital realm will keep moving in the direction of embedding the archive into technologies of creation, dissemination, and display. So maybe the question is how do we conceive of preservation, creation, and use as distinct entities in the digital online realm—rather than interrelated—and if a distinction is no longer possible, what the implications are of that interrelatedness.
You said that in your work ‘more attention needs to be paid to digital flows, to circulation itself, and to the interface and database that facilitate and mask distribution online’. Can you talk a bit more about this and how you think the interrelation between the ‘front end’ and ‘back end’ of online systems informs our perception and use of the archive?
When I say digital flows need to be addressed, I’m talking about community as much as I’m talking about trajectory. It’s an idea I’ve been stuck on for a while but also have a hard time articulating. From reading Ann Cvetkovich, Wendy HK Chun, Josephine Bosma, Anjali Arondekar, Tess Takahashi and others, I’m reminded of the underlying communities—online and offline—the people with a need and compulsion to collect, so that later, something can be made sense of, revealed. The archive ultimately makes possible connections that are sometimes dangerous or undesirable within a particular time and place. My hunch is that while the web has the potential to highlight the connections between people and their documented pasts, and with unprecedented reach, it also risks amalgamating everything into a large undifferentiated database that completely overlooks and overwrites the affective and the unarchivable.
We pay a lot of attention to digital content as objects, albeit virtual, when really an important part of what distinguishes the digital from its material counterparts is, I think, its movement, circulation, flow… the way people share the digital as a space, and travel through that space. Digital stuff is easy to copy—much of what we do on the computer is a form of duplication—and as many artists, theorists, and archivists have pointed out, these copies can be identical to ‘originals.’ Copies are also non-rival in consumption, which has forced us to seriously reconsider value and to come up with alternative economies, which so far seem most successful when thought of as network-creation itself. The mapping out of content, including links between digital nodes, constitutes digital trajectories, and this leads me to question the potential for archival theories that could emerge out of focusing on digital flows and online circulation, rather than the content-centric view imposed onto the digital. I’d like to expand my current project into theories of the web as a mobile archive, or a transient archive—something that highlights the passage of content, but also the movement of creators. And in turn, this means thinking about localization in contrast to the shifting place and space of the virtual archive…
As for the relationship between the front end and the back end, I think that we literally interact with an interface without knowing much about what generates our experiences online beyond that top layer. This isn’t new or limited to the web—this is basically our relationship to most technologies—but in the last few years, developers have pushed to separate content from style and function (or form). This has happened mainly because browsers display content differently, and because the separation made accessibility standards possible, making it easy to quickly and efficiently change the look of an interface without affecting its content. Ultimately, the idea was to have form follow function, that is, to have use determine appearance. So if we take that kind of approach into account for the online archive, we begin to see what ideals shape the possibilities of the web for preservation.
What role do you think video artists or other digital content creators should play in the preservation of their own work?
I think this is a really hard question to answer, but I’m going to respond from a personal point of view, as someone who makes video… and I am fully aware that I might make archivists and distributors shudder. I’m really for online access in principle, though I understand that in practice, it takes time, know-how, money, resources, etc. I haven’t even bothered to upload most of my videos online, so this is an ideal, a philosophical position. But it’s an ideal by which Canadian video distributors have not yet been seduced, and probably will not adopt anytime soon. And I get this—I get that making decisions about large valuable collections is something to think about carefully because once work is posted online, it simultaneously belongs to nobody and everybody.
Part of what inspires me to launch works into cyberspace is the politics of community-based activism that were about getting stuff out, sharing, exchanging ideas. There was an urgency and purpose. And as the tools became increasingly accessible, video art was about countering the mainstream in terms of both representation and means of sharing. But now it seems like the web has taken access to another level, and this is again shifting the politics of video art.
A lot of the politics that came out of video are similar to what we hear now about the web—in terms of its democratizing potential—and yet, the more video becomes common, the more precious the distinctions between art and the vernacular seem to become.
The fact that a video can be posted and embedded in numerous online contexts does not generally appeal to video distributors in Canada, who would rather see works maintained and presented in controlled environments where issues of resolution, duration, format, storage, and so on, are all carefully calculated to maintain the scarcity model on which they rest. The idea is to keep video art out of the ‘clutter’ of vernacular video—away from YouTube or on a distinct channel within it—so as to retain a curatorial sensibility.
For the archivists reading this, I have to refer to Josephine Bosma’s idea about rethinking loss as the antithesis to preservation because it gives elegance to these ideas. She writes, “We may have a lot to gain from losing control over digital objects. We should consider the ability of some artists to embrace an inherent loss of control over their work less as a challenge to conservation, and more as an inspiration to a solution. […] Both openness to a vital context and openness in terms of physical, material and technological accessibility may well be the best way forward in the strategy of conserving art in the environment of new, networked media.” 
My personal idea of what role artists and content creators should play in the preservation of their own work or collections is aligned with Bosma, and others who believe that setting work free allows for unpredictable modes of fan-based archiving tactics to happen. If we think of preservation as a process to keep work ‘alive,’ I can’t think of a better system—even if it is highly unpredictable—than the web. Except, as pointed out by Lucas Hilderbrand, the trend towards online distribution may mean that collection habits change, making it more difficult to keep works than with VHS or DVD, for example. 
So for content creators, I think that the idea of preservation has to be disentangled from marketing strategies, which isn’t easy by any means. In fact, the question of how to monetize content on the web may be the question nobody can answer; this demands an unprecedented level of innovation from video distributors whose best move may in fact be to opt out of the online realm altogether or wait for the hype train to pass… if it ever does.
The VIDEO CACHE collaboration with SAW Video activated the archive by screening some of the works from the crashed Mediatheque repository. Re-presentation through emulation or other means is a preservation strategy often undertaken with technological art of many kinds. Did you see VIDEO CACHE in this light at all, as simultaneously documenting and preserving the works?
Yes, I see VIDEO CACHE as a documentation project, but perhaps more importantly as a means of highlighting the ways in which the politics of the archive—any archive—are a reflection of the social movement(s) from which they emerge, including art movement(s). Video art history is imbued with politics and counter-movements, and these shape the discourses surrounding the video art archive on the web.
I see it less as an attempt to preserve the work within a long-term strategy where the material objects (DVDs for example) are central to the project’s history, and more in terms of preservation-as-conversation, keeping the project ‘alive’ by way of continued dialogue. Rooted in a feminist methodology, I frame VIDEO CACHE as a way of bringing to the forefront the people involved in the Mediatheque—as artists or web developers or both—and their understandings of the process and labour involved, along with how their memories shape the ideals of video art and of the archive. It’s important to remember that this all started in the early 2000s, long before YouTube and broadband internet. It’s also important to mention that this project was funded as an online archive—that concept made sense very early on somehow, in that the promise of the web for preservation was something to invest in seriously, backed by hundreds of thousands of dollars of government money.
In some ways, activating the archive through a collaboratively curated event serves to document it better than written documentation would on its own; this is research-creation. The VIDEO CACHE screening and the online exhibit preserve and regenerate the Mediatheque, but very differently.
Curating a programme for a screening makes sense when you are talking about video, but it also raises a slew of questions about this assumption, given that as an online archive the Mediatheque didn’t prioritize high quality copies for screening—it was about showcasing video art online. This is a point in video art’s history that demands a look inward rather than forward. It demands a reflection on the trajectory of video art from its activist roots and from its dissident voices against mainstream representation—by women, queers, people of colour, community activists, etc.—to the current place and value of these scarce collections in an art market.
The Mediatheque is a prime case study for an archive that functioned for and through the web and privileged wide access over long-term material preservation of the files. Whether flawed or visionary as an archival approach, VIDEO CACHE preserves this idea, the Mediatheque’s aura, and the conceptual history of the project. VIDEO CACHE was about extending what I have learned from analyzing grant reports and other administrative documents made available to me by SAW Video into a case study, by highlighting preservation issues from 2003 to the present and showcasing the collection as two different modalities.
VIDEO CACHE featured only 10 works of the 486 pieces in the Mediatheque, and this sample was anything but random. So I think it’s worth noting that selection is a subjective part of this preservation process. As the current SAW Video Director, Penny McCann was the best person to make a selection based on the videos’ connections to SAW Video’s institutional history and in relation to those involved in the development of the Mediatheque from the early 2000s on. (McCann’s curatorial statement: http://www.wayward.ca/videocache/documentation/curatorial/)
Eight artists who had work in the original Mediatheque were present for the VIDEO CACHE screening at GIV, on November 24, 2010. As a result, the act of curating, on and offline, along with the discussion that followed the screening, are directly linked to the process of documentation—this event is possibly the most complete piece of documentation that exists about the Mediatheque by the people involved in the project. (http://www.wayward.ca/wayward/exhibits/video-cache/)
We also discovered quirky and confusing things in the process of organizing VIDEO CACHE, that again speak volumes about the archive’s politics. From November 24, 2010 to December 24, 2010, 9 of the 10 videos screened at GIV were showcased online at http://www.wayward.ca/videocache. Despite being remunerated $200 as part of the Mediatheque in 2003, the distributor, VTape, opted out of letting us show Gunilla Josephson’s Hello Ingmar (2000) for the month-long online exhibit of VIDEO CACHE. VTape continues its research into fees for streaming in order to develop a standard. This apparently applies to works already online and, as is the case for Josephson’s video, works for which the Mediatheque retains online showcasing rights in perpetuity. I don’t think this is VTape’s prerogative alone—the control over video art distribution, its value, and its position within art worlds and markets continues to be debated, with a prevailing Canadian bias towards the ‘web-means-dead’ credo for video art distribution.
Through the process of curating VIDEO CACHE, we unraveled many things about the Mediatheque archival method itself that feed back into the research on documenting the initiative. This is the ideal intervention for me: collaboration that emerges from research and that also uncovers and generates new threads, new concepts, and new problems. It is a highly self-reflexive approach and one that situates the archive as object and source of study.
More recently at the May 2011 Database Narrative Archive conference in Montreal (http://www.dnasymposium.com/), Adrian Miles (http://vogmae.net.au/vlog/) asked me why I thought it was necessary to activate or revive the Mediatheque project. I think that collectively we can decide whether there is value to a particular collection—after all, appraisal has always been a crucial step for archivists. Nevertheless, a digital loss or a server crash shouldn’t determine what we keep or discard. Until the Mediatheque is revived, VIDEO CACHE and the trail of documents that have come out of it (like this interview) constitute its main preservation efforts.
In your study of defunct or crashed video repositories, what issues would you highlight related to the sustainability of these types of projects? Are there any specific pitfalls you have identified?
Sustainability, by definition, is the capacity to endure. Endurance is built into the idea of the archive, and online, as Wendy Chun argues, it’s the ephemeral itself that endures: “Memory, with its constant degeneration, does not equal storage; although artificial memory has historically combined the transitory with the permanent, the passing with the stable, digital media complicates this relationship by making the permanent into an enduring ephemeral, creating unforeseen degenerative links between humans and machines.” 
I think identifying pitfalls is a really important step in research that deals with emergent technology and social media. There is a lot of hype and a lot of excitement about the potential of the web to make things happen, and happen differently. That said, I think it’s important to be able to talk about failure in a generative way, even if highlighting issues related to sustainability is sometimes difficult. In this case, for instance, I am dealing with incredible, invaluable, long-established collections, but am addressing only their host organization’s relationship to the web—how they have resisted it, adapted to it, appropriated it, and so on. So I guess I want to start by saying that I recognize the value of the projects—even if they have ‘failed’—and that identifying pitfalls is in line with, rather than against, this kind of recognition.
Generally, what is most striking is that a lot of the pitfalls are attributed, often mysteriously and suddenly, to technological failures, when in fact much of what happens to archives on and offline can be traced back to human error and social/cultural parameters. This is what I was able to confirm in my doctoral research, and this is what makes it so complicated; it becomes impossible to make a bullet point list of pitfalls that we can all avoid and build from for future projects. I think engaging with and through technology requires a lot of knowledge on different levels (even with the democratization of media tools), including the upkeep of skills and tracking the constant developments. And this is often downplayed if not made invisible by the interface itself, which in a way becomes another pitfall.
Technology facilitates a lot of things, but ultimately it relies on human decisions, energy, and goals within a specific social, cultural, and legal context. This context also largely determines funding possibilities, the handling of copyright issues, the framing of the relationship between art and ownership, and so on, which then get coded into specific projects online. The process is iterative, and technology certainly influences choices in terms of format, access, and layout, but, as almost everyone I spoke with in this research makes clear, without (human) motivation and energy, online projects die. This probably goes without saying, but there seems to be a lot more energy and money going into creating websites than into maintaining them. This is perhaps a pitfall too: the trend is toward constantly creating new projects (often duplicating entire systems) rather than centralizing content from disparate sources into one content management system, which might make upkeep more feasible. I believe this is something that Videographe plans to test out; there has been mention of offering up the viTheque repository as a template and/or platform for other institutions.
In my study of defunct and crashed online video art repositories in a Canadian context, I found that these philosophies of use differ greatly for each project, but most shared a common discourse about the role, place, and importance of the artist. There is a layer of each of the projects—some more superficially than others—that reflects the history and trajectory of the artist as a category in Canada, the first country to pay exhibition fees to artists (in the mid-70s). This is, of course, not the case in most countries, and so it explains some of the particular pitfalls that Canadian repositories fall into in carrying this professionalization of art into the digital realm, under conditions that differ greatly from similar initiatives elsewhere. So copyright—or the way it is loosely interpreted and applied—is a major element, and I would say pitfall, in most cases of Canadian online video art repositories.
Another pitfall, I think, is the way copyright is being interpreted and, in turn, how technologies are being used to implement some of these interpretations, which, from an archival point of view, seem to pose additional problems rather than provide viable solutions. Technological protection measures—files that self-erase/destruct after a period of time (chronodégradable), locks based on password protection, locks that limit the number of copies a user can make, and so on—are all ‘solutions’ justified by the desire to protect works from illegal copying (and which by default block fair and legitimate copying). To impart technology with these roles—rather than engaging with these issues as a social process that accounts for fair dealing—is to misconceive the function of copyright and to throw off its intended balance. Also, with increasingly long copyright terms (across the globe), this kind of copyright rhetoric becomes commonplace, and online access somehow becomes in itself conceived as an assault on artists’ rights.
Copyright is a major issue, if only because it is conflated with other issues, and as a result, those underlying issues aren’t directly addressed. Copyright—and Creative Commons for that matter—are not systems of remuneration for artists, they simply inform the parameters for using other people’s stuff without asking, beyond fair dealing.
The initiative to create an online repository requires a huge amount of time, resources, knowledge, and money. This is a point I will keep repeating because being for or against copyright isn’t at the crux of the matter. And, while I think that for the most part an open and free exchange of materials circulating via the web is positive for creativity, I do think copyright and Creative Commons alternatives demand that we continue to question ownership in the face of large user-generated content sites that have at their disposal untapped media content.
So this brings me to the issue of funding and financial sustainability. In the projects I have looked at, it seems that funders (often government funding bodies) are eager to fund the creation and development of online repositories for about two years, after which it remains a bit unclear what is expected or how the project is meant to maintain itself. For the most part, these projects are not self-sustaining, and bring in very little revenue, at least in comparison to the costs incurred maintaining the site.
I try to always think of these pitfalls and failures as generative, but I also think that we have many (too many) examples of how trying to contain and control digital flows backfires in terms of preservation strategies.
1. Josephine Bosma, “The Gap between Now and Then: On the Conservation of Memory”, in Nettitudes: Let’s Talk Net Art (NAi Publishers, 2011). http://www.naipublishers.nl/art/nettitudes_e.html
3. Wendy Hui Kyong Chun, “The Enduring Ephemeral, or the Future Is a Memory”, Critical Inquiry 35 (Autumn 2008), The University of Chicago Press: 148. http://video.dma.ucla.edu/video/wendy-chun-the-enduring-ephemeral-or-the...
Joshua Noble & Greg J. Smith
[Woods Bagot / Icebergs NYC]
"While some programming is still necessary (there is no working prototype for a toilet or brain surgery app), labels such as 'dining room', 'conference room', 'library' and 'shop' are becoming increasingly unwieldy. The next genus will dispense of programme to an even greater degree, so deprogramme your city now." – Keiichi Matsuda, Cities for Cyborgs1
The traditional model of creating space has been intimately tied to authority: one shapes the land one owns, the monarch shapes the castle, and the municipal government shapes the plaza. Inhabitants and passersby are subject to these master plans, confined to the activities and relations scripted to occur within them. Several apogees of this brand of urban planning have yielded proposals for some of the most iconic urban spaces: the Forbidden City in Beijing, the Haussmann Plan for Paris, the Radiant City of Le Corbusier. These precisely calculated, 'hard' spaces assumed that the lives of those who filled them would slot neatly into prescribed roles that were fixed for extended periods of time and only altered by the most profound social upheavals. Today, discourses of programme are considerably more fluid and acknowledge that space is largely defined through the patterns of its users. While construction methodologies and structural engineering evolve slowly, our perception of space—at all scales—has been revolutionized by the adoption of a host of new tools and protocols. Artist and researcher Mark Shepard describes the gradual emergence of networked urbanism as anticipating near-future cities capable of reflexive self-monitoring and behaviour adjustment – the endgame of computation "leaving the desktop and spilling out onto the sidewalks."2
Within this milieu, the possibility of a 'soft' space emerges from the multiplicity of meanings now afforded to occupants, allowing them to define and refine as they see fit, without irrevocably altering a structure or location. Space can be shared, transformed, saved and re-made. The temptation is to imagine the ways that futuristic structures will allow for transformable buildings, parks, or homes, but the more immediate possibilities are much simpler: networked structures, meshes of inexpensive sensors and devices, accessible tools to mark and tag locations. Just as memories and historical narratives accumulate at a location, computation, data and networks now enable the addition of layers of meaning and possibility to our understanding of place. The reality of our data-driven culture and the networks that define the bonds and bounds of that society is that all spaces are becoming softer. Every square metre is now a granular location that can be tagged, altered, or repurposed to a degree of specificity unimaginable a generation ago.
The meaning of a location or object is never static but rather contextual. As Christopher Alexander noted, "a building or town is given its character…by those events which keep on happening there most often"3 and this can be extended to spaces as well. It is worth considering that "those events" in the contexts Alexander considered (villages, churches, homes) remained largely the same for decades, if not lifetimes. Today we must consider a range of activities that change dramatically. What is the character of a space that encompasses such change? It is either characterless or it has been softened. We propose various frames of reference for considering the 'softening of space', which can take many forms: engendered by architectural design, grafted into a space as a technological intervention, or organically shaped by the need for a space to function beyond its original programme. It is difficult to avoid thinking of the intended uses of a structure or environs, but the duration of even the most thorough planning is remarkably brief compared to the length of time that users and the surrounding environment will engage a location. Why not cultivate design strategies that acknowledge this fact? The soft, reconfigurable, re-programmable space provides stakeholders with agency within the environments they occupy.
In this essay we focus on strategies of softening, of working with pre-existing spaces and conditions to inject a softness, to create possibilities of configuration and collaboration. To focus solely on a soft architecture that is not contingent on pre-existing situations ignores one of the core challenges of making spaces and places: refactoring architecture and urbanism. The future of soft spaces will largely be comprised of strategies to integrate softness with spaces that were previously defined but are in need of updating.
[Michelle Teran / Video documentation of Parasitic Video Network]
Disagreements about environments always come down to the narrative that transforms indeterminate space into specified place, and the power to manifest that narrative. In the United States one often sees small roadside memorials consisting of flowers and small wooden crosses that mark the sites of accidents, defining place from ephemera. Likewise, the accumulation of graffiti on walls reinscribes these surfaces with layers of meaning and new narratives. When people embed a narrative into an otherwise occupied and defined space—highway or factory wall—they generate a signal that can be heard as either harmonious with or in discord with the previous definition. This process of defining and contesting becomes a dialectic enacted by the users of a space, those who define it as a location. As each culture creates its own methods of defining place and meaning, it also creates ways of contesting that meaning, and in a networked age, connectivity and the informatics afforded by screen-space are means of defining and contesting. Just as an architect or developer plans for rain, heat, or traffic, if they wish to engage the society and culture that surrounds them they must engage the ways that society defines and contests space. Not to do so yields not only a dead spot but also a latent opportunity, a blank canvas that will inevitably be repurposed. At the other end of the spectrum, any structure other than a prison that attempts to enforce absolute control over how it might be used is destined to perform poorly. This is how the discussion between the dweller and space becomes an exchange—a possible discourse—shaped by inhabitation and experience. The ideal technologically enabled soft space is one that opens itself to the widest range of modes of communication, allowing for a multiplicity of places to coexist within the same bounded region.
The materiality of soft space
While new experimental electro-active polymers such as ShapeShift can fold and flex in response to the modulation of current, a softening of space does not typically entail a softening of material: softness is most often a metaphor for mutability and openness to change. Inflatables notwithstanding, the materials of contemporary construction (glass, steel, concrete and plastic) are not soft to the touch, but the experience of spaces and of objects is multi-sensory. Space is a function of embodiment: we perceive space as a constellation of aural, visual and tactile sensations, and conceptualize it through physicality.
The enmeshment of spaces and technology allows space to be shaped and constructed in ways that are not necessarily physical. If we can argue that technology can be immaterial then the ways that space can be defined should extend beyond materiality as well. Communication channels can define space. Memory can define space. Virtual tags can define space. The entanglement of these varying options yields softness, as a way of designing a space and a way of experiencing the world.
A vocabulary of softening
In illustrating the shift from fixed conceptions of space towards the malleable and the soft, a number of qualities have been identified: soft spaces invite participation, can be networked, and allow the affordances or orientations of places to be rewritten. The following vocabulary is presented as a toolkit with which designers and citizens might think about, plan, occupy and reconfigure the structures, public space, and urban fabric around them.
(Re)programme - The first type of softening has been standard operating procedure within programme-centric architectural design for thirty years now. Using programme to drive form is a means by which designers can 'tune' a proposed space to yield optimal flexibility for its intended occupants. More adventurously, we can look to now classic models such as Bernard Tschumi's "crossprogramming" or "transprogramming"4 where, respectively, a programme is inserted into an alien spatial configuration (a hospital ward is converted into a nightclub) or seemingly incongruous programmes are combined to capitalize on dissonance (a hospital ward and nightclub co-exist and thrive while sharing the same facilities).
Network - In their 2008 article "The invisible city: Design in the age of intelligent maps", Kazys Varnelis and Leah Meisterlin schematize networked urbanism as a combination of "physical texture" and data-driven representations5. The deployment of distributed sensors in everyday objects and assemblies permits systems to provide feedback and give users—or algorithms—the ability to 'tune' and optimize the performance of space. The same logic that allows a smart grid to improve energy efficiency can be scaled down and applied to a dwelling where natural and artificial lighting, temperature and HVAC are modulated based on usage patterns and real-time data. The softness or hardness of that space is a function of the allowances the network provides for users, the flexibility in the types of services, and the meaningfulness of the connections.
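The dwelling-scale 'tuning' described here can be sketched as a simple feedback rule. Everything in this sketch (the function name, the lux values) is an invented illustration of the idea, not any particular smart-home system:

```python
# Illustrative sketch of a networked dwelling 'tuning' itself:
# artificial lighting is modulated from occupancy and daylight
# readings. Sensor names and thresholds are hypothetical.

def tune_lighting(occupied: bool, daylight_lux: float,
                  target_lux: float = 300.0) -> float:
    """Return a dimmer level (0.0-1.0) that tops up natural light."""
    if not occupied:
        return 0.0                      # empty room: lights off
    deficit = max(0.0, target_lux - daylight_lux)
    return min(1.0, deficit / target_lux)

# A bright afternoon needs little artificial light,
# a dark evening needs full output.
print(tune_lighting(True, 150.0))   # half the target is missing -> 0.5
print(tune_lighting(False, 0.0))    # unoccupied -> 0.0
```

The softness here lies in the feedback loop itself: the space's behaviour is a function of live data rather than a fixed setting.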
[Hoppala & Superimpose / Berlin Wall 3D]
Augment - The most immediate and perhaps most prevalent mode of softening is augmentation: adding layers of data or imagery to a location. This creates multiple meanings for any feature of a location, loading it with functionality and signification. A large interactive screen, for instance, performs the roles of both wall and concierge: delimiting a space, informing, observing, and providing a point for communication. One cannot walk through a screen, but its characteristics as a boundary are variable. By creating indefinite boundaries, the very nature of structures becomes temporal, indefinite, and multiple; a classroom transforms into a lounge and then a projection room. Taken further, a data-rich layer can lie atop an entire structure. Unlike the immersive environments proposed during the heyday of virtual reality in the 1990s, augmentation does not overwrite space but refocuses it. The screen is the ideal locus of an augmentation: easy visualization and familiar data metaphors make implementation and alteration instantaneous and simple. Augmentation is one of the oldest strategies for softening and, due to the ubiquity of the screen and data, one of the most commonplace. As a softening strategy, it requires minimal infrastructure and structural alteration: a service, a device and an access point are all that are needed.
Reshape - As a strategy for softening, reshaping requires perhaps the most infrastructural change. A reshaping action can be as sophisticated as adding a kinetic facade to a building or as simple as using chalk to demarcate alternate usage instructions for the sidewalk (drawing a hopscotch grid). From mobile architectures to ecologically enmeshed architectures to flexible strategies of construction and formation, a reshaping can take a wide range of forms that do not rely directly on the possession or occupation of space, often an important qualification in the feasibility of a softening. Reshaping can soften by making the fixed more flexible or more temporal, opening spaces for participation and possibility, allowing a space to be remade or unmade. Temporary structures imply an emergency of sorts (after natural disasters or wars) but also express an uncertainty and an immediate necessity, both of which are reasonable assessments of the requirements of a living or working place and of urban space. Reshaping generates results similar to augmentation while requiring a distinct approach: to augment means, in many senses, to make a space more generic, to allow more layers to coexist without signal interference, while reshaping creates specificity and constrains possibilities.
Hack - Not all softening is complicit with authority: some of it occurs in defiance of design intent or even the law. The oft-quoted passage from William Gibson's 1982 short story Burning Chrome reminds us that "…the street finds its own uses for things": stakeholders can intervene and appropriate technology, space and structures to meet their needs. From accessing a cafe's wireless signal from an adjacent public space to planting a bed of flowers in a gap at the edge of the sidewalk, the overlapping fields and assemblies of spaces and structures are rife with opportunity. Hacking has an additional component of immediate necessity or illicitness, implying a lack of authority but an abundance of need. It often fulfills a simple requirement unanticipated in the original formulation, sometimes taking advantage of refuse material or unacknowledged potential. It is also the most amorphous of strategies because it does not define a particular tactic, but rather an attitude towards the preexisting circumstances of a space.
Rather than listing off technological advances, we will examine several projects that research the application of technology to "soften" living and working situations. To put it another way, the following work is what Tom Igoe would describe as "the recently possible": enterprises that utilize commercially available technology and techniques in innovative ways. While greenfield development with limitless budgets and controlled circumstances may effectively demonstrate emerging devices or techniques, we would like to focus on strategies readily available to individuals and communities.
[Graffiti Research Lab / Eyewriter]
Graffiti Research Lab (GRL) is a loose collective that works with augmentation, creating tools for other artists, hackers, and prospective taggers, as well as spaces for tagging. The collective's regard for urban space as a screen derives from their affinity for graffiti, using public intervention—projection bombing—as a mode of simultaneously exploring the digital tag in physical and virtual space. In their more poetic projects, Eyewriter for instance, GRL extends the possibility of participation and intervention to the physically disabled, in this case legendary graffiti writer Tempt1. In their softening, the ability to tag is a strategy to remember and to be remembered by, to turn closed or inaccessible space into personal advertisement and gesture.
Canadian artist Michelle Teran's work explores the friction between urban environments and their digital footprints. Many of her projects appropriate the logic of commonplace media systems and leverage these technologies to transform generic urban space into ephemeral zones of voyeurism and performance. Initiated in 2008, Parasitic Video Network prototypes a kit for an immersive environment that can be temporarily installed within stock architectural typologies such as shopping malls or office buildings. A user of the project carries a device called the Parasitic Video Interceptor (aka The Spy) with access to the live feeds of 25 low-range wireless cameras mounted within the environment. The participant is immersed in a reflexive media experience in which their proximity to individual nodes within a surveillance mechanism determines the video output on their receiver, forcing them to become editors of their own cinematic interlude and defamiliarizing the act of moving through space.
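The nearest-node rule that drives the Interceptor's output can be illustrated with a minimal sketch. The actual installation works from low-range wireless reception rather than explicit coordinates, so the coordinate-based version below, with invented camera positions, is only a simplified stand-in:

```python
import math

# Sketch of the proximity logic: the receiver shows the feed of
# whichever camera node is nearest to the participant.
# Camera ids and positions are hypothetical.

NODES = {                      # camera id -> (x, y) position in metres
    "cam01": (0.0, 0.0),
    "cam02": (4.0, 1.0),
    "cam03": (2.0, 5.0),
}

def active_feed(pos):
    """Return the id of the camera closest to the participant."""
    x, y = pos
    return min(NODES, key=lambda c: math.hypot(NODES[c][0] - x,
                                               NODES[c][1] - y))

print(active_feed((3.5, 1.5)))   # nearest node: cam02
```

As the participant moves, the active feed switches node by node, which is what turns walking through the space into an act of editing.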
Since 2004, MIT's SENSEable City Laboratory has been exploring the potential for developing real-time visualizations with data collected from mobile phones and sensors. Founded on the notion that distributed computing can be harnessed as 'smart dust', the group develops sophisticated proof-of-concept urban informatics that reveal and clarify the intangible communicative and migratory flows that animate cities. Trash Track was produced for the 2009 exhibit Toward the Sentient City and set out to visualize the 'removal chain' of waste management in Seattle by attaching custom-designed radio-transmitting tags to refuse. This workflow transformed discarded consumer electronics, disposable coffee cups and bagged garbage into geolocated nodes that yielded vectors delineating the movement of waste through and out of the city. The resulting diagrams delivered a bottom-up representation of waste management with which to scrutinize speed and efficiency, complementing the traditional conception of waste management as an exercise in logistics.
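The step from tag pings to 'removal chain' vectors can be sketched as follows. The coordinates, units and function name here are invented for illustration; they are not details of the project's actual data pipeline:

```python
# Sketch of turning Trash Track-style tag pings into a movement
# vector: each tagged item reports timestamped positions, and the
# trace from first to last ping gives a crude displacement vector
# and average speed. All values are invented.

def removal_vector(pings):
    """pings: time-ordered list of (hours_since_disposal, x_km, y_km).
    Returns (net displacement, average speed in km/h)."""
    t0, x0, y0 = pings[0]
    t1, x1, y1 = pings[-1]
    dx, dy = x1 - x0, y1 - y0
    dist = (dx ** 2 + dy ** 2) ** 0.5
    return (dx, dy), dist / (t1 - t0)

trace = [(0, 0.0, 0.0), (12, 3.0, 4.0), (48, 30.0, 40.0)]
vec, speed = removal_vector(trace)
print(vec, speed)   # net displacement (30, 40) km at ~1.04 km/h
```

Aggregating such vectors across many tagged items is what yields the bottom-up picture of speed and efficiency described above.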
[Usman Haque / Natural Fuse, diagram]
Usman Haque has long worked with his own vocabulary of architectural softness: softspace, the material of perceived and experienced architecture, in contrast with hardspace, the physical form of the structure. This sensitivity to the variety of experienced space is evident in Natural Fuse, a project exhibited first in London in 2009, where Haque linked one of the most fundamental requirements of a modern living or working space, electrical power, to the carbon offset of a plant. The more carbon offset a plant provides, the more power is allowed through a fuse connected to an outlet. If that were the extent of the project it would be a simple allegory, but it extends to link devices together in a network, creating a pool of available offset power and generating a network of capacity, awareness and need. As Natural Fuse runs, it generates a map showing tags for the devices being used "selfishly" or "selflessly". While this network does not affect the actual structure of the building, it links the softspace and the activity of the inhabitants to a larger network that is environmentally and behaviorally defined.
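A toy model of the pooled-offset economy described above, with invented numbers and class/field names: each unit contributes its plant's offset to a shared pool, draws are granted only while the pool is in credit, and units are labelled "selfish" or "selfless" by comparing use against contribution. This is a sketch of the concept, not the project's actual implementation:

```python
# Toy model of Natural Fuse's shared carbon-offset pool.
# All names and quantities are invented for illustration.

class NaturalFuseNetwork:
    def __init__(self):
        self.units = {}          # name -> [offset, consumption]

    def add_unit(self, name, plant_offset_g_per_day):
        self.units[name] = [plant_offset_g_per_day, 0.0]

    def request_power(self, name, carbon_cost_g):
        """Grant the draw only if the pooled offset covers it."""
        pooled = sum(o - c for o, c in self.units.values())
        if carbon_cost_g <= pooled:
            self.units[name][1] += carbon_cost_g
            return True
        return False             # pool exhausted: the fuse 'blows'

    def label(self, name):
        offset, used = self.units[name]
        return "selfless" if used <= offset else "selfish"

net = NaturalFuseNetwork()
net.add_unit("kitchen", 10.0)
net.add_unit("studio", 10.0)
net.request_power("studio", 15.0)        # draws on the shared pool
print(net.label("studio"), net.label("kitchen"))  # selfish selfless
```

The point of the pooling is that one unit's restraint becomes another's capacity, which is exactly the awareness the project's map makes visible.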
The appearance of GPS-enabled smartphones with built-in camera, accelerometer and magnetometer (compass) functionality has positioned mobile handsets as an ideal platform for augmented reality (AR) applications. 2009 saw the launch of Layar, the first AR browser, an interface that allows users to survey space through their handset and display real-time information overlays based on the location and field of view of the device. One of the most explicitly architectural 'layars' released thus far is Berlin Wall 3D, which allows users to explore how the infamous concrete barrier divided east and west Berlin from 1961 to 1989. A joint effort of two German developers (Hoppala and Superimpose), this application overlays a 3D model of the wall and related checkpoints along the former border line, in situ alongside nearby landmarks such as the Brandenburg Gate and Potsdamer Platz. Berlin Wall 3D softens by collapsing history, erasing the distinction between past and present while allowing users access to an imposing shadow of a bygone era.
[Theo Watson / Audio Space]
Augmentation of space is fundamentally the addition of another data layer or dimension: visual information or any other data point overlaid atop the current space. While this often takes the form of mapped projections, as in the mobile handset project discussed above or the work of Pablo Valbuena, space and form are not registered solely as visual phenomena but also through sound. Echoes, whispers and volumes all contribute to the definition and character of a space and to the sensation of being in it that a user takes away. In Theo Watson's 2005 installation Audio Space, a headset and microphone are used to mark a user's position and allow them to leave and receive messages at any given point in the room, combining private utterances with public space. This tagging of location and 'mark making' creates another mode of presence and communication, one of the most fundamental allowances. In a poetic sense, to 'leave' the voice behind creates a corporeal memory of a particular moment in space and time. Augmentation of space need not be a projection or visual overlay; it can be a more subtle addition.
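The leave/receive mechanic can be sketched in a few lines. The positions, message texts and 'earshot' radius below are invented illustrations, not details of Watson's implementation:

```python
import math

# Minimal sketch of position-tagged audio messages: a message is
# anchored to a point in the room and played back when a listener
# comes within earshot. The radius is an invented parameter.

EARSHOT = 1.0                    # metres

messages = []                    # list of ((x, y), text) pairs

def leave(pos, text):
    messages.append((pos, text))

def hear(pos):
    """Return every message left within EARSHOT of this position."""
    x, y = pos
    return [t for (mx, my), t in messages
            if math.hypot(mx - x, my - y) <= EARSHOT]

leave((1.0, 1.0), "a whisper by the window")
leave((5.0, 5.0), "a greeting near the door")
print(hear((1.2, 1.2)))   # only the nearby whisper is audible
```

Anchoring playback to position is what binds the utterance to a place rather than to a screen or a timeline.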
The economic meltdown had repercussions on commercial real estate markets throughout the world. Noting the numerous construction projects across Manhattan that had ground to a halt, in 2010 the New York-based architecture firm Woods Bagot proposed an ingenious solution for making use of the vacant, 'stalled' construction sites scattered across the island. Icebergs NYC is a system for reshaping the (non)use of these sites as venues for temporary, inflatable structures that generate revenue while the financing for the project originally slotted for the location floats in limbo. These structures are 100% recyclable, composed of simple steel frames wrapped in ETFE pillows, and they employ modular HVAC elements that do not require permanent infrastructure. ETFE is flexible enough that it can be inflated to give an Iceberg a dynamic, faceted roof that doubles as a dramatic projection surface. When the construction project planned for the site is re-initiated, the Iceberg can be quickly disassembled, packed into a single shipping container and transported to a new site.
Softening as a strategy enables users to participate, creating a dynamic sense of place and a flexible approach to space that allows varied activity and encourages participation and reshaping. To design such a space requires not only an understanding of an activity but an understanding of the tools related to that activity: spaces must integrate with action. The usage and meaning of any given space are contingent on how it serves the needs, tools and capabilities of its users. Following Doreen Massey, "for the future to be open, space must be open too"6 – we posit that for place to be truly functional, it must be open as well.
Our eleventh VT Audio Edition is live! Contributed by Toronto-based Andrew Zealley (whose work bridges composition and audio art), it serves up an impressive, highly conceptual soundscape. Andrew describes the structure of his submission below:
'Sonnet 56 is a composition in 5 parts, without pause. Each part lasts 5 minutes, bringing the total duration to 25 minutes and establishing a splendid sense of symmetry. Based on and around the text to "Sonnet 56" by William Shakespeare (1564-1616), this audio is a meditation on love. The subject is expressed from four personal memory locations and also the present, in non-chronological fashion just as memories and recollections may shift, temporally, backwards and forwards in our thoughts.'
Jump through to the release page for more info and a download link.
We recently received a note from our friend Adam Young (of Direwires fame – note his Audio Edition release here) about a festival he is involved in organizing that will take place in Sarnia in August.
"CURRENT is an experimental art and music festival, showcasing creative minds who explore the tension between natural and unnatural in their work and performance, reflecting the very essence of the city it takes place in: Sarnia, a city where beautiful parks and beaches are found on one side, a valley of chemical industry on the other and everyone that lives there in between."
Visit currentfestival.com/program for event and workshop details – the festival takes place in approximately two weeks.
[ecoarttech / Indeterminate hikes]
The Eighteenth International Symposium on Electronic Art, ISEA2012 Albuquerque: Machine Wilderness, is a symposium and series of events exploring a discourse of global proportions on art, technology and nature. Each edition of the ISEA symposium is held in a different location around the world, and the series has a long history of significant acclaim. Albuquerque is the first U.S. host city in six years.
The ISEA2012 symposium will consist of a conference September 19 – 24, 2012 based in Albuquerque with outreach days along the state’s “Cultural Corridor” in Santa Fe and Taos, and an expansive, regional collaboration throughout the fall of 2012, including art exhibitions, public events, performances and educational activities. This project will bring together a wealth of leading creative minds from around the globe, and engage the local community through in-depth partnerships.
Machine Wilderness references the New Mexico region as an area of rapid growth and technology alongside wide expanses of open land, and aims to present artists' and technologists' ideas for a more humane interaction between technology and wilderness in which "machines" can take many forms to support life on Earth. Machine Wilderness focuses on creative solutions for how technology and the natural world can sustainably co-exist.
The program will include a bilingual focus, an indigenous thread, and a focus on land and skyscape. Because of the vast resource of land in New Mexico, proposals are being sought from artists that will take ISEA participants out into the landscape. The Albuquerque Balloon Museum offers a unique opportunity for artworks to extend into the sky as well.