Softvideography

Original Citation:

Miles, Adrian. “Softvideography.” Cybertext Yearbook 2002-2003. Eds. Markku Eskelinen and Raine Koskimaa. Vol. 77. Jyväskylä: Research Center for Contemporary Culture, 2003. 218-36.

Softvideography

Interactive video has much that it could learn from hypertext. Not the hypertext that is most of the web, nor the hypertext that is parodied in many of the essays in the recent interactive cinema anthology “New Screen Media: Cinema/Art/Narrative” (2002), but the hypertext theory and practice that has spent over 20 years actively making, theorising, reading, and researching multilinear narrative, multilinear structure, link typologies, narrative closure in interactive narrative, anti-narrative, and so on and so forth. It would be academically trivial, and probably spiteful, to catalogue the way in which recent writing on interactive cinema literally reprises the anticipation, excitement, and assumptions about interactivity and nascent futures that hypertext experienced, or to point out that in three or four years exactly the same retrospective criticisms will be made of interactive video as have been made of hypertext. Multilinear narrative, structure, and architecture are problems common to new narrative in general, so I’d like to extend this research more specifically with an investigation into ‘softvideo’ practice – critical ressentiment can take care of itself.

softcopy

In 1988 Diane Balestri published a paper that by its regular appearance in introductory books on hypertext, for example Bolter (1991) and Snyder (1996), can be considered a canonical work. In this essay she discriminates between using the computer to author ‘hard copy’, that is work which requires a material substrate such as the page, and using the computer to author ‘softcopy’, work that is only intended to be presented via the immaterial substrate of the screen.

The implications of the distinction between hard and softcopy have been extensively explored in hypertext theory, and have been used to affirm numerous key qualities of hypertext as softcopy: screens are variable in their dimensions and may be multiple; work no longer need have a front, back, or middle; content can be changed at will; readers can manipulate the presentation of content through the alteration of font size and properties, window dimensions, text and window colour; content and structure can be variable during the reading or use of a work; a work may have multiple (and simultaneous) narrative architectures; and it may be unable to be represented or distributed in any adequate hard copy format.

However, much of the literature on softcopy appears to have concentrated on its implications for the presentation or reading of content or documents, whereas softcopy also has obvious and significant implications for how we consider the authoring of content, a point well made in a different context by Moulthrop and Kaplan (1991). All that is meant by this is that if we are working in a fully softcopy environment, that is working on a computer to present material that is only to be realised via the computer, then we can use new or different tools, and certainly different methodologies and practices, than those we might use for hard copy authoring and presentation. This should not be confused with the common Information Technology instrumentality that uses digital tools to do old jobs in new ways; softcopy is instead a paradigmatic shift in the sorts of objects that can now be authored, which entails, like most paradigm shifts, new ideologies (or at least the revisiting of old ones) of not just reading but also authoring.

As an example of hard copy (print) ideology in softcopy environments, consider the role of alphabetisation, which we ordinarily use to serially order things like academic bibliographies. Of course, they only need to have this order because they have traditionally appeared on paper, so that if you need to find a particular bibliographic entry you need to know where in the list it ought to appear. However, in a softcopy environment this is no longer necessary (and its persistence is largely due to the hegemonic hold of print literacy in academic culture) because a simple search function can easily retrieve and display any bibliographic entry, anywhere in the document or docuverse. This is not an argument against the alphabetical ordering of lists in bibliographies, it is only to point out that features like ‘sort’ in word processing programs are there to move content into hard copy and to maintain existing paradigms of content authoring and dissemination. Most hypertext tools, on the other hand, tend not to rely on things like alphabetisation to sort or categorise information, including lists.

To write in a softcopy environment is to begin to recognise some of the ideological assumptions drawn from the hard copy world that have naturalised (and socialised) our approach to the authoring, publishing, and reading of content in softcopy domains. These assumptions are derived from our intimate understanding and experience of the material resistance of hard copy environments (which, for instance, is why and how we have things like pagination), as it is a sophisticated material literacy that lets me make a list using a biro on paper rather than glass, and that lets me mistake the decision to do so as my own. In print literacy this material resistance includes such things as the materiality of language (de Saussure’s signifier), our tools of inscription, and what it is we think we want to say with these. That each of these has been naturalised by the hardcopy paradigm of traditional print literacy is foregrounded by softcopy as a writing and reading practice.

softcopy towards digital video

Simply put, softcopy suggests that it is not only the presentation of objects that may change in digital environments but also the sorts of objects that can be created. However, we can only make these new objects when we are able, as most of us are with print literacy, to recognise and write in and with the qualitative material resistances and affordances of the softcopy world. This materiality is often thought to be superfluous in electronic environments (after all, digitisation erases difference at the machine level), but of course, as any new media practitioner knows, there is resistance in code, the screen, bandwidth, users, and so on. There is a difference however between being able to affirm this resistance as that which constitutes work — work as an object and work as praxis — and regarding this resistance as noise that an imagined electronic future will dissolve.

When we move to video and digital technologies it is apparent that digitisation is firmly established in the production workflow of film making at almost all professional levels, with significant creative and industrial consequences, as Elsaesser and Hoffmann’s (1998) anthology and McQuire’s (1997) report show. Certainly, everything from introductory film schools to high budget features has adopted digitisation in various guises, and with the introduction of iMovie on the Macintosh and Windows Movie Maker on Windows XP, digitised video is largely the only way domestic users have ever had to edit their work. The aesthetic influence of the digital as a mode of narration on commercial and industrial production remains relatively minor (landmark narrative fiction works are probably Tykwer’s Lola Rennt (1998), Figgis’ Timecode (2000), Cochran and Surnow’s television series 24 (2001), and Nolan’s Memento (2000); see for instance Hales (2002) and Dovey (2002)), yet most of the digital tools used in cinema production have been resolutely orientated towards hard copy outcomes, and in fact treat digital video largely as if it were only hard copy orientated. This hard copy view also extends to computer based video work that is not intended for traditional screen based (television and cinema) delivery, such as online, CD and DVD content.

desktop digital video into softvideo

Desktop based digital video is probably at a similar point to where desktop publishing was with the introduction of the first Macintosh in 1984. It is now possible to shoot, edit and print to tape broadcast quality video using a small digital video camera and a laptop computer (indeed Apple’s influence here is striking, with Final Cut Pro largely revolutionising and redefining digital video production as it moves from the desktop to the laptop). Like desktop publishing before it – right down to the almost but not quite affordable tools — desktop digital video primarily decentralises the ability to do what previously could only be done with very expensive, essentially centralised, capital and skill intensive resources. And, just like desktop publishing, its major effect is not a revolution in genres or a revisited poetics of cinema; it simply facilitates access to production resources so that more people can now, more or less, do more of the same.

This, it must be stressed, is not a bad thing. However, as hypertext rather elegiacally showed us, it was not WYSIWYG design and printing that led to significant new digital genres (indeed the laser printer is irrelevant in this context), but work that was designed to take advantage of the environment that the computer in its entirety provided. In the case of digital video, the same applies. For example, iMovie, Final Cut Pro, Windows Movie Maker, Adobe Premiere, Avid Xpress DV, and Media 100 are all digital editing systems intended to facilitate content for presentation in existing televisual contexts. This means, much like the printer in desktop publishing, that they’re primarily used to print content back to tape, whether to the camera or a VCR hardly matters.

If you like, these digital tools are primarily orientated towards publication (or transmission), and this form of publication requires a linear time based substrate, the privileged model of which is of course film and its avatar the video cassette — video hard copy. Now, obviously I am suggesting that this is not so very different from using your computer to get words out onto paper, and it isn’t. But of course it also suggests an alternative conception where we can use our computers to work with time based media where the delivery environment is not subject to the temporal temperance of cinema and video. This does not simply mean that we can now make works that are multilinear, which seems to have been the popular understanding (and practice) of much networked interactive media. As we’ve seen, softcopy in relation to writing includes much more than multilinearity, and while all the formal qualities of softcopy may be formed in relation to their interrogation of the stability of the page, they have also provided a poetics of screen based textual production and reception that productively looks outside of the page or the book. This suggests that we ought to be able to articulate a new poetics for desktop video — where historically digitisation in regard to video has been understood to be little more than a combination of a moving image plus sound track that can be played ‘randomly’ — a poetics that is neither specifically cinematic, videographic, nor generically multimediated, and that looks towards the formal possibilities afforded by digital, networked, screen based video. This poetics requires the video equivalent of softcopy, or as I prefer, softvideo.

applied softvideo

A softvideo poetics requires an immanent form of working with digital video that is perhaps modelled as much on writing as it is on film making practice. This is not an argument for the necessity or inevitability of code but only the simple observation that we have a sophistication in the way we use a biro and a piece of paper that is an exemplar of an informed literate poetics. We doodle, take notes, write in the margins, sideways, on recto and verso, apply different pressures for variable densities of ink, and so on. Contrast this to interactive digital video, as a specific, immanent, and emergent digital computer process, and what largely happens is that traditional praxis remains untouched. We shoot, we cut, we compress, we put the moving image plus sound track online or into our interactive work.

What is the difference between hard and softvideo? Largely that in hard video the digitised video remains a singular or, if you like, sufficient media object in or for itself. I mean this not only in terms of what the video segment, fragment or sequence might mean, but more specifically as an object in itself where the integrity of the object is and remains singular. It is a moving picture track with a sound track with a fixed duration. This is digital video as a delivery envelope, and even where such content might be inserted and made a part of multilinear interactive works, whether fiction, nonfiction, experimental, online, CD-ROM or DVD, it is not interactive, for the video remains mute in regards to interaction, which generally happens outside of itself. In other words, it is not that much different from television: click, it plays; click, it stops; click, it gets louder, or perhaps quieter.

However, if we approach video as softcopy, that is as softvideo, then we can think about digital video in dramatically different ways. This thinking, of course, can only be preliminary, as I think it is clear that the genuinely novel forms or genres that will emerge from a properly digital video practice are yet to be recognised, or even found. (In much the same way that I would argue that blogs are one of the first major immanent genres for networked Web based writing, and they took a good five years to appear, and probably another two years to become obviously visible and intelligible as a genre.) But such a thinking does and will ground itself within the materiality of digital video as a practice that hears and responds to that which is immanent to, and enabled by, these technologies, a looking forwards toward the new rather than our current looking backwards to define the forms and uses for digital video.

A first step towards softvideo is to no longer regard digital video as just a publication or delivery format, which is the current digital video as desktop video paradigm (which is of course the same as the desktop publishing model), but to treat it as an authoring and publication environment. This suggests that a major theme for a softvideo poetics to explore is the description or development of a videographic writing practice within video itself, that is, to use digital video as a medium in which we write. To return to my hypertext analogy, it is the difference between writing in a native hypertext architecture (say, for instance, Eastgate’s Storyspace, Apple’s HyperCard, or even simply HTML) and writing in Microsoft Word and choosing to save your document as a web page. The former is writing hypertext while the latter confuses publishing in a medium with writing in the medium. To write in or with digital video should allow us to articulate a vocabulary of the elements that may constitute the formal contexts of softvideo, so that softvideo can become an engaged rather than imaginary practice.

Currently, as far as I can determine, QuickTime is the only readily accessible digital architecture that supports the qualities of softvideo, and what follows is an explanation and exploration of some of the implications of this as I have developed them in my own applied research practice.

A simple example of what I mean by the materiality of softvideo is the difference in the way that a softvideo architecture conceives of frames and frame rates. If, for instance, I wish to show a still image with a continuous soundtrack, then in most video editing programs I simply import or capture (or draw) the requisite image, then stretch its duration to match the soundtrack. When I save and export this work it will then draw this still image for the required number of frames at the specified frame rate. This is digital video as hard copy, for if my delivery environment is the computer screen then there is no need whatsoever to draw the single image at any frame rate, because there is not in fact a need for a frame rate in this sense — frames per second itself being very much a hard copy or hard video concept.

However, if I author my content in something as simple as QuickTime Player and do the same thing, then the final digital movie that is produced operates in a softcopy manner: QuickTime simply displays one image (one frame if you like) and holds it on screen for the specified duration while the soundtrack plays. In real terms this means that adding (keeping our example rather simple) an image to a soundtrack and then saving this as a completed digital video only adds the size of the still image to the final digital file. This is the case whether the movie runs for one, three, or twenty minutes. This is a softcopy conception of digital video.
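To make the contrast concrete, here is a minimal sketch (plain Python, purely illustrative; none of these names are QuickTime’s, and the figures are invented) that models the arithmetic of the two approaches: a hard video export stores the still image once per frame for the whole duration, while a softvideo movie stores the image once together with a display duration.

```python
# Toy model of the hard video / softvideo distinction discussed above.
# All figures are invented, and it deliberately ignores inter-frame
# compression: the point is the conceptual difference, not real codecs.

FRAME_RATE = 25            # frames per second: a hard copy concept
IMAGE_KB = 60              # size of the single still image, in kilobytes
AUDIO_KB_PER_SEC = 16      # rough size of a compressed soundtrack

def hardvideo_size_kb(duration_sec: float) -> float:
    """Hard video: the still image is drawn (stored) for every frame."""
    return duration_sec * FRAME_RATE * IMAGE_KB + duration_sec * AUDIO_KB_PER_SEC

def softvideo_size_kb(duration_sec: float) -> float:
    """Softvideo: one image sample is held on screen for the duration."""
    return IMAGE_KB + duration_sec * AUDIO_KB_PER_SEC

for minutes in (1, 3, 20):
    secs = minutes * 60
    print(f"{minutes:>2} min   hard: {hardvideo_size_kb(secs) / 1024:9.1f} MB"
          f"   soft: {softvideo_size_kb(secs) / 1024:6.2f} MB")
```

In the softvideo case only the soundtrack grows with duration, which is why stretching such a movie from one minute to twenty adds almost nothing to the file.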

A project that illustrates this well is “International Day of Time Dependent Art” (Miles, 2002), where approximately two minutes of digitised video, or if you prefer 2MB, has been stretched to run for twenty minutes and twenty seconds. Of course the effect of this is to make the indexical video content (what the video footage is of) appear in extreme slow motion, but the point is that this is a digital movie that now runs for over twenty minutes, yet is only 2MB in size. Stretching its duration to forty minutes, if I am authoring in QuickTime for softvideo, makes no difference — the file would still be 2MB.

This is, of course, just a beginning, but it does suggest some of the ways in which cinematic duration becomes problematised in softvideo (a point I shall return to). More significant for a softvideo practice is the understanding that an architecture such as QuickTime is a multitrack and multiobject architecture. What this means is that a QuickTime (and in the near future MPEG-4) file does not need to consist of one video track and one sound track (indeed the “Day” work mentioned above consists of nine video tracks, one text and one sprite track), but can more or less include any number of video, audio, text, picture, and indeed several other sorts of tracks. Now, this immediately makes possible various forms of videographic collage and montage within a single work, what Manovich (2001) describes as spatial montage. For instance, by combining one picture track with, say, nine video tracks, a movie like “canberra rain” (Miles, 2002) is possible, with the divided video panes in themselves providing a form of montage, a literal cutting, internally within the movie, while also being simultaneously a form of video collage. In addition, nine text tracks are available within “canberra rain”, which adds another layer or level of collage within the movie as they toggle between visibility and invisibility in response to user activity, and of course as each text pane partially obscures the video panes it also becomes an additional level of montage, though a montage performed via collage.
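As a rough illustration of this multitrack, multiobject idea, the sketch below (a conceptual Python model, not QuickTime’s actual file format or API; the class and field names, and the dimensions, are mine) treats a movie as a flat container of typed tracks, each with its own spatial placement and visibility, so that a work like “canberra rain” is simply one picture track, nine video panes, and nine toggleable text panes inside a single movie.

```python
# Conceptual sketch: a movie as a flat container of heterogeneous tracks.
# Models the multitrack idea discussed above, not QuickTime itself.

from dataclasses import dataclass, field

@dataclass
class Track:
    kind: str                      # "video", "audio", "text", "picture", "sprite", ...
    name: str
    rect: tuple = (0, 0, 0, 0)     # x, y, width, height on the movie canvas
    visible: bool = True

@dataclass
class Movie:
    width: int
    height: int
    tracks: list = field(default_factory=list)

    def toggle(self, name: str) -> None:
        """Flip a track's visibility, e.g. a text pane revealed by mouse entry."""
        for track in self.tracks:
            if track.name == name:
                track.visible = not track.visible

# A "canberra rain"-like layout (dimensions invented): one picture track,
# a 3 x 3 grid of video panes, and a hidden text pane over each video pane.
movie = Movie(320, 240)
movie.tracks.append(Track("picture", "background", (0, 0, 320, 240)))
for i in range(9):
    x, y = (i % 3) * 106, (i // 3) * 80
    movie.tracks.append(Track("video", f"video-{i}", (x, y, 106, 80)))
    movie.tracks.append(Track("text", f"text-{i}", (x, y, 106, 80), visible=False))

movie.toggle("text-4")   # user activity over the central pane reveals its text
```

Collage here is simply the co-presence of tracks on one canvas; montage arrives when scripting changes which tracks are visible, and when.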

When child movies (see for example “Child movies”, Miles, 2002) are introduced, that is, tracks whose content is independent of the parent movie, further formal problems become evident. For example, in “Exquisite Corpse” (Miles and Stewart, 2002) three child movies are arranged within a wide screen parent movie. As the three videos and their accompanying sound tracks are child movies, each can be played independently of the parent movie, and independently of each other. This also means that each child movie may be of variable duration. In the case of “Exquisite Corpse” the user mouses into the upper or lower bar over each video window, which simply runs its associated video pane at normal speed, with full volume. At the same time the next video pane in the series plays at half normal speed, and then the next track at quarter normal speed. Mousing into another video track simply repeats the process in series.
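The interaction described here can be sketched very compactly (again in plain Python as a stand-in for the actual movie scripting; the pane names and durations are invented, and the assumption that the slower panes are silent is mine): mousing into one pane plays it at normal speed, the next pane in the series at half speed, and the next at quarter speed.

```python
# Sketch of the "Exquisite Corpse" mouse-over behaviour: three independent
# child movies whose playback rates depend on which pane was last entered.
# Illustrative only; not the scripting actually used in the work.

RATES = [1.0, 0.5, 0.25]    # normal, half, and quarter speed

class ChildPane:
    def __init__(self, name: str, duration: float):
        self.name = name
        self.duration = duration    # each child movie may differ in length
        self.rate = 1.0
        self.volume = 0.0

panes = [ChildPane("left", 42.0), ChildPane("middle", 57.5), ChildPane("right", 33.0)]

def on_mouse_enter(index: int) -> None:
    """Play the entered pane at full speed and volume; the following panes in
    the series play at half and quarter speed (silently, by assumption)."""
    for offset, rate in enumerate(RATES):
        pane = panes[(index + offset) % len(panes)]
        pane.rate = rate
        pane.volume = 1.0 if offset == 0 else 0.0

on_mouse_enter(1)    # mousing into the middle pane
for pane in panes:
    print(pane.name, pane.rate, pane.volume)
```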

A simpler outcome is effected in “voxvog” (Miles, 2002), where the central video window is divided into four transparent sprites that count and store the number of mouse entries (that is, each time the user moves the mouse into a sprite’s space the entry is counted), and this count is used as a variable to control which of 70 individual images to load in each of the four smaller video panes. These four panes are child movie tracks, and so what gets loaded in this particular work is conditional on when and where the user mouses within the video space. Of course these could also have been loaded on the basis of time, a combination of time and user activity, or indeed just randomised.
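A minimal sketch of this counting logic (a Python pseudomodel rather than the sprite scripting used in “voxvog”; the image names and the modulo mapping from count to image are my assumptions) shows how a running tally of mouse entries can decide which of the 70 images a pane loads next.

```python
# Sketch of the "voxvog" mechanism: four transparent hotspot regions over the
# central video each count mouse entries, and the count selects one of 70
# images for the corresponding small pane. Illustrative pseudomodel only.

NUM_IMAGES = 70
IMAGES = [f"image_{n:02d}.jpg" for n in range(NUM_IMAGES)]   # hypothetical names

entry_counts = [0, 0, 0, 0]          # one counter per transparent sprite region
loaded = [None, None, None, None]    # what each of the four child panes shows

def on_mouse_enter(region: int) -> None:
    """Count the entry and use the tally to choose that pane's next image.
    Wrapping with modulo is an assumption; the work could equally randomise
    or combine the count with elapsed time, as noted above."""
    entry_counts[region] += 1
    loaded[region] = IMAGES[entry_counts[region] % NUM_IMAGES]

# The user wanders the mouse across the video space:
for region in (0, 0, 3, 1, 0, 2, 3):
    on_mouse_enter(region)

print(entry_counts)   # [3, 1, 1, 2]
print(loaded)
```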

one softvideo poetics

It is important to recognise within these works, and in softvideo practice in general, that each of the tracks that constitute a QuickTime work is an independent object able to be scripted by the softvideo writer (softvideographer?). That is, each track can be conceived of as analogous to an individual node in a hypertext work. Furthermore, each of these tracks (as objects) has a range of properties that can be controlled or negotiated via a softvideo writing practice, which in this case is literally scripting, so that their speed, visibility, volume, size, colour, transparency, direction of play, mobility, and even their presence, can be engaged with. While a softvideo movie may contain nine video and nine text tracks, each can easily be made to move, play, overlap, disappear, reappear, and so on, on the basis of readerly actions.

As a point from which to begin, certainly in the context of my own applied research practice, these formal elements constitute the domain of one particular softvideo practice (for softvideo is a methodology rather than a genre, style or formula) involving the use of softvideo for networked interactive desktop video. These works, known as vogs (video blogs), appropriate the generic form of the personal blog as one appropriate model for articulating a softvideo argot. This means the vogs are works that consider themselves to be sketches rather than monuments; after all, in an age of desktop consumer digital video it is probably time that video became as disposable (or cheap) as the word. As the vog manifesto (Miles, 2000) states, networked interactive desktop video is an applied softvideo practice that recognises a set of key terms as enabling and productive constraints: vogs are produced and delivered via a network, on desktops; they require and assume interactivity; and they treat digital video as an authorial, plastic architecture rather than a delivery format.

Vogs are networked in that they are distributed via existing, viable network infrastructures, often including low bandwidth connections. In addition, a vog may utilise the network as an integral part of its softvideo practice, for instance by appropriating objects outside of itself that reside on the network (for instance “Bergen Appropriation”, Miles, 2001), by providing links to objects that are available on the network, or, in a more sophisticated model, by utilising things like QuickTime’s child movie abilities to load external content when requested. However, vogs are also networked in a less technical sense through the softvideo writing model they offer. They are small works that, like blogs, tend towards a public intimacy and offer a model for what I’d characterise as a distributed softvideo writing practice, in the same manner as blogs (Mortensen and Walker, 2002). In other words, they are less about consumption (watching others’ content) than exploring models for authorship and production, for as blogs and most other successful and viable networked communication technologies indicate, it is the ability to participate as communicative peers that is much more significant and viable for distributed networks than our reconstruction into new consumers.

A vog is interactive in that the user has to do something, and this something affects in a literal way the work itself. This is more or less Aarseth’s (1997) ‘ergodics’, where the reader or user needs to perform non–trivial actions to read the text, and these actions are non–trivial because they have consequences for the text. Hence, clicking a play, pause, or stop button is not ergodic, nor is it what I would characterise as interactive — unless we want to call our everyday use of television interactive — as these are essentially trivial actions (much like turning the pages of a book) and do not qualitatively affect the text in itself. Of course, this also presents the possibility that while an individual vog ought to be ergodic, it would be perfectly reasonable, when considered as a genre or a collected body of work, to have a vog or vogs that in fact are not ergodic. This would be analogous to those hypermedia works that might utilise passages where there is little user choice, for instance parts of “Grammatron” (Amerika, n.d.) or “Hegirascope” (Moulthrop, 1997), and recognises that when interactivity is taken as a given then the lack of such interactivity becomes significant and meaningful. A Web example of this would be the “last page on the internet” screens, where the playful irony of the work can only operate because of our now taken for granted assumption that all web pages are in fact linked in and out.

Finally, as an applied videographic networked practice, vogs recognise that the visible context of publication or distribution is the personal computer screen. This does mean that the context of viewing is individual, personal, and probably domestic. It also means that users in such environments are generally time, bandwidth, and screen poor. Simply put, most people, most of the time, may not have two hours to interact with your content each week, and so, much like blogs, vogs tend to be brief and either self contained or episodic. Even where users have significant bandwidth, for example first world universities, this still places considerable constraints on the screen dimensions and resolution of softvideo works. In addition, users not only generally have smaller screens than those who work professionally in new media, but of course, as a personal and domestic space, their computer screens are also being used for other things at the same time, so a vog does not usually attempt to own all of a user’s screen space. This reflects the way that people actually do use their computers, ordinarily having several windows, programs and activities underway at once. An appropriate use of softvideo for such contexts then ought to recognise this and insert itself within or around what is the desktop computer equivalent of Raymond Williams’s (1990) televisual flow.

conclusions (consequences)

The implications of a work as simple in structure as “Exquisite Corpse” (Miles and Stewart, 2002) are quite dramatic. As each of the child tracks has been scripted to loop, “Exquisite Corpse” is a film with no duration, that is, it has no end. This is a much more radical implication than simple video looping suggests, where such looping has tended to be singular and stable and so no fixed end simply means iterative repetition, for here there are three loops, with completely independent and variable durations, where the speed of play is partially controlled and negotiated by the user. Hence, to play this work, and here play becomes a very literal and active verb, the user via their action and the scripting controls the playing rate of each of the three tracks, and as they move from one to the next the duration of each is constantly changing, effectively always changing the duration of the whole.

Furthermore, the relations or combinations established between each of the three video panels are and remain an open set, for mousing through the movie in the manner required to play it produces forms of collage (images on a common plane in simultaneous vision) and montage (when and where you mouse effects visual changes in the relations between consecutive parts), meaning there is no fixed work, canonical order, sequence, or teleological point that the relations among each of the three works aim towards. Hence, not only is the work of no fixed duration, due to the combination of three variable loops, but each time it is played it performs a new and singular iteration of the work.

The recognition that each track within the work is in fact capable of being an independent entity is a major paradigmatic shift in terms of traditional cinematic practice. It does begin to suggest the ways in which softvideography is a qualitative shift from the more usual methods of digital video production, and it also helps to illustrate the way in which softvideo is analogous to hypermedia writing rather than to traditional visual and audio editing practices, where mise–en–scène treats the parts within the scene as ‘objects’ to be ‘written’ with, and montage as a second level principle of organisation and decision. In the final work these choices appear as fixed, and from an authorial and directorial point of view decisions are singular, as one shot is selected in lieu of another and then inserted into a fixed sequence. In softvideo each track, which of course can be of variable duration, location, size, content, and type, becomes an object to be written with, where this writing with is constituted by or in the event of authoring the work. Each track within a softvideo work is now considered as an object, so the activity of making or writing softvideo is constituted not only by the decision about which objects to include and when, but also by which of the variable properties for each of these objects ought to be scripted, and what that scripting might affect. These objects could be video sequences, soundtracks, still images, text, or even entire other QuickTime movies. Softvideo becomes more or less a form of ongoing and always variable, and so open, mise–en–scène. Once we recognise that tracks in a QuickTime softvideo work are discrete objects, writing with these, providing an interface that controls them in some variable manner, and then actually playing these works (where it is clear that to play means much more than stop, pause, start) suggests that softvideo always requires and participates in an engaged and individuated process. Its model is always a contextualised singularity.

All that I wish to mean by this, and I do want to insist on the change it represents, is that rather than composing the work visually and acoustically (before the camera and then in postproduction) and then ‘flattening’ this work into hard copy, there is an ever present malleability to the material that now extends from the moment of content production, past that moment of traditional authorial closure, into its future. This malleability is not the question or problem of the ways in which the work will always be interpreted differently, but affects the very nature of the object that is to be interpreted, as such a work, at least in some respects, will always be a qualitatively different object in each presentation. It is not that the object produces varying interpretations but that users, in the act of reading the work, will inevitably produce varying works.

That the practice of softvideo raises significant and productive questions for traditional cinematic practice and theory ought to be obvious. One major question revolves around the way in which softvideo problematises montage as a fundamental mode of time based discourse, for each of the works discussed shifts the location, role and function of montage away from the preselection and serial ordering of an eventually fixed sequence towards other possibilities. Montage as a principle of selection and organisation can now reside somewhere between the shooting or gathering of material, a dynamic combinatory system of construction (more or less automated), and a user who (more or less) knowingly controls and determines the particular montage event and sequence.

The use of multiple windows further complicates this, as the relation of window to window offers a complex collage practice, whether via a multiwindowed work or, in the case of the vogs, via the fact that the works appear in an always and already multiwindowed environment (the PC screen), so that a simultaneous visual relationship to other windows is always present. When time is added to this, so that we recognise that the collage that is the computer screen also varies in time (this window now opens over that window), then we have a combination of collage and montage that does appear to be one of the major formal properties of such digital environments (Landow 1999, Manovich 2001).

While cinema has always had a sophisticated relationship to temporality it has also had a certain belligerence — ninety minutes of film or video has always occupied, and will always occupy, ninety minutes. Not in softvideo. Like cinema and hypertext, it is the manner in which the parts reflect a qualitative change in the whole that is the principle of meaning and construction in softvideo (Deleuze 1986, Miles 1999). This suggests not only that spatiality is largely not of great significance for such screen based works, but that the cinema’s indexical relation to time may no longer be the bedrock for a screen based interactive softvideo practice. However, as Deleuze more than adequately demonstrates (and for that matter Chris Marker’s La Jetée), cinematic duration is not the same as the record or representation of time (time as quantity) but rather the expression of a qualitative change in an always open set. This suggests, to me at least, that Deleuze will offer softvideo an applied argot that will assist in our theoretical consideration and development of a new desktop cinema practice, a theoretical endeavour that I hope will complement a possible future softvideo practice, a videographic écriture.

References

Aarseth, Espen J. Cybertext: Perspectives on Ergodic Literature. Baltimore: The Johns Hopkins University Press, 1997.

Amerika, Mark. Grammatron. http://www.grammatron.com/ n.d. Accessed October 7, 2002.

Balestri, Diane Pelkus. “Softcopy and Hard: Wordprocessing and the Writing Process.” Academic Computing 2.5 (February 1988): 14-17, 41-45.

Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale (N.J.): Lawrence Erlbaum Associates, 1991.

Deleuze, Gilles. Cinema 1: The Movement–Image. Trans. Hugh Tomlinson and Barbara Habberjam. Minneapolis: University of Minnesota Press, 1986.

Dovey, Jon. “Notes Towards a Hypertextual Theory of Narrative.” New Screen Media: Cinema/Art/Narrative. Eds. Martin Rieser and Andrea Zapp. London: British Film Institute, 2002. 135 – 45.

Druckrey, Timothy. “Preface.” New Screen Media: Cinema/Art/Narrative. Eds. Martin Rieser and Andrea Zapp. London: British Film Institute, 2002. xxi-xxiv.

Elsaesser, Thomas, and Kay Hoffmann, eds. Cinema Futures: Cain, Abel or Cable? The Screen Arts in the Digital Age. Amsterdam: Amsterdam University Press, 1998.

Hales, Chris. “New Paradigms <> New Movies: Interactive Film and New Narrative Interfaces.” New Screen Media: Cinema/Art/Narrative. Eds. Martin Rieser and Andrea Zapp. London: British Film Institute, 2002. 105 – 19.

Landow, George P. “Hypertext as Collage-Writing.” The Digital Dialectic: New Essays on New Media. Ed. Peter Lunenfeld. Cambridge: The MIT Press, 1999. 150-70.

Manovich, Lev. The Language of New Media. Cambridge (MA): MIT Press, 2001.

McQuire, Scott. Crossing the Digital Threshold. Brisbane: Australian Key Centre for Cultural and Media Policy, 1997.

Miles, Adrian and Clare Stewart. “Exquisite Corpse”. Video blog: vog. http://hypertext.rmit.edu.au/vog/9.2002/corpse.html September 28, 2002. Accessed October 1, 2002.

Miles, Adrian. “Childmovies”. Video blog: vog. http://hypertext.rmit.edu.au/vog/vlog/archive/2002/102002.html#3923. (Web Log) October 7, 2002. Accessed: October 7, 2002.

Miles, Adrian. “Cinematic Paradigms for Hypertext.” Continuum: Journal of Media and Cultural Studies 13.2 July (1999): 217-26.

Miles, Adrian. “Bergen Appropriation.” Video blog: vog. http://hypertext.rmit.edu.au/vog/1.2001/bergencams.html. January 27, 2001. Accessed October 7, 2002.

Miles, Adrian. “canberra rain”. Video blog: vog. http://hypertext.rmit.edu.au/vog/8.2002/canberra.html August 8, 2002. Accessed September 19, 2002.

Miles, Adrian. “International Day of Time Dependent Art.” Video blog: vog. http://hypertext.rmit.edu.au/vog/2.2002/index.html February 20, 2002. Accessed September 19, 2002.

Miles, Adrian. “melbourne remembering Bergen (3)”. Video blog: vog. http://hypertext.rmit.edu.au/vog/5.2002/nordicsky.html May 18, 2002. Accessed September 19, 2002.

Miles, Adrian. “patience”. Video blog: vog. http://hypertext.rmit.edu.au/vog/1.2001/bergencams.html January 27, 2001. Accessed September 19, 2002.

Miles, Adrian. “Vogma: A Manifesto.” Video blog: vog. http://hypertext.rmit.edu.au/vog/manifesto/index.html. December 12, 2000. Accessed October 7, 2002.

Miles, Adrian. “Voxvog.” Video blog: vog. http://hypertext.rmit.edu.au/vog/9.2002/voxvog.html September 24, 2002. Accessed October 7, 2002.

Mortensen, Torill, and Jill Walker. “Blogging Thoughts: Personal Publication as an Online Research Tool.” Researching ICT’s in Context. Ed. Andrew Morrison. Oslo: University of Oslo, 2002. 249-79.

Moulthrop, Stuart, and Nancy Kaplan. “Something to Imagine: Literature, Composition, and Interactive Fiction.” Computers and Composition 9.1 (1991): 7-23.

Moulthrop, Stuart. “Hegirascope”. http://raven.ubalt.edu/staff/moulthrop/hypertexts/hgs/hegirascope.html. October 1997. Accessed: July 18 2000.

Snyder, Ilana. Hypertext: The Electronic Labyrinth. Interpretations. Ed. Ken Ruthven. Melbourne: Melbourne University Press, 1996.

Williams, Raymond. Television: Technology and Cultural Form. 2nd ed. London: Routledge, 1990.

Filmography

Figgis, Mike. Timecode. 2000.

Marker, Chris. La Jetée. 1962.

Nolan, Christopher. Memento. 2000.

Tykwer, Tom. Lola Rennt. 1998.

Various. 24. 2002.