Softvideography: Digital Video as Postliterate Practice

Original Citation:
Miles, Adrian. “Softvideography: Digital Video as Postliterate Practice.” Small Tech: The Culture of Digital Tools. Eds. Byron Hawk, David M Rieder and Ollie Oviedo. Minneapolis: University of Minnesota Press, 2008. 10-21.


I open the box to unveil my new home computer. It might be portable, it might not, but if I’m at all interested in making my new purchase the ‘digital hub’ of my new ‘digital lifestyle’ then my computer probably has several USB ports, an IEEE 1394 (also known as FireWire or iLink) port, a DVD burner, and, if I went for all the options, 802.11b or 802.11g wi-fi and Bluetooth. What this means, outside of the lifestyle advertising that accompanies such hardware, is that it is now technically trivial for me to connect my IEEE 1394 enabled domestic video camera to my computer, capture high quality full resolution video, edit this video, and then print it back to tape or export it in a digital format for DVD authoring, email, or online distribution. But, aside from digital home movies, what would I now do with all this audiovisual empowerment? In this chapter I’d like to suggest two answers to this question: one looks backwards to our existing paradigms of video production, distribution, and presentation, while the other looks, if not forwards, then at least sideways, to recognise that desktop networked technologies offer novel alternatives not only for production and distribution, but for what constitutes video within networked digital domains. This possible practice treats video as a writerly space where content structures are malleable, variable, and more analogous to hypertext than to what we ordinarily understand digital video to be. I call this practice softvideography.

Digitization and Production

The influence of digitization on film production is well documented and rampant, and it certainly shows no signs of abating (McQuire). These large scale changes in the film and television industries are affecting all sectors, from big budget Hollywood features to low budget independent documentary, yet they generally maintain cinema and television as specific cultural and aesthetic institutions: what has been affected are the means and processes of production, but not the form itself. However, the rise of domestic audiovisual technologies, for example software suites such as Apple’s ‘iLife’ quartet (iDVD, iPhoto, iTunes, and iMovie) or Microsoft’s Windows Movie Maker, threatens to do for home video what the original Macintosh, with the invention of desktop publishing, achieved for the word: the rise of desktop video.

While digitization has distributed access to a wider range of video tools, and has clearly affected the distribution of labour and expertise within the various cinematic and televisual industries, these desktop tools have largely concentrated on maintaining ‘film’ and ‘video’ as hegemonic aesthetic or material objects. This is what I would like to characterise more specifically as the material hegemony of video and film, a hegemony maintained by the manner in which digital video tools support existing paradigms of what a video ‘object’ is. This means that video, for software designers, users, and consumers, is still conceived of as a linear, time based object that consists principally of an image track and a sound track. Even where multiple tracks may be used in what is professionally recognised as postproduction–image and sound editing, sound design, effects and so on–these are generally ‘burnt’ down, much like layers in Adobe’s Photoshop, for final delivery.

This hegemony has been maintained in the teaching of video and cinema, where it is common for vocational, professional, or industry orientated programs to utilise these technologies in the same manner as the broadcast and film industries.

Before exploring and demonstrating some of the potential consequences of this, and its implications for teaching in professional, vocational, and creative programs, I’d like to contextualise this paradigm shift using the example of print. This may appear odd, given the evident and obvious distance between print and the moving image; however, I believe that the example of print, digitization, and hypertext has a great deal to teach image based new media practices. As I’ve argued elsewhere (Miles, “Cinematic Paradigms for Hypertext”), hypertext can be viewed as a postcinematic writing practice in its combination of minor meaningful units (shots and nodes) and their possible relations (edits and links). Many of the theoretical issues presented by multilinearity and narrative, whether fiction or nonfiction, including structure, causation, readerly pleasure, closure, repetition, and coherence, have a long and sophisticated history of analysis and work in hypertext. For example, many of the essays in Rieser and Zapp’s recent anthology on new narrative and cinema (Rieser and Zapp) mirror the excitement and anticipation that hypertext authors and theorists experienced twenty years ago. Indeed, the isomorphism of the arguments and claims in some of the essays is sadly uncanny: you could substitute ‘interactive video’ for ‘hypertext’ and not in fact notice any difference!

The isomorphic relation which exists between hypertext theory and the new wave of interactive video theory represents a traditional theoretical blindness towards the cognate discipline of hypertext in image based new media practices, and lets us anticipate, on the model of hypertext, the three historical waves of criticism that will happen within interactive video. The first, of which Rieser and Zapp’s anthology is a good example, is the work that is produced by those who establish the field and who primarily see a rich series of possibilities and a yet to be invented future. This work is full of excess, anticipation, and an almost naïve expectation about the implications of the technologies for audiences, genres, and media futures. The second wave will largely react against this first wave and will offer critiques of interactive video on the basis that interactivity isn’t ‘really’ interactive as it is scripted, that linearity is inevitable at the micro level in terms of some sort of minimal narrative structural unit, that linearity is an inevitable consequence of readerly actions, and that there have been numerous historical antecedents for the work anyway. Finally, this will mature into a third wave of theory, probably dominated by a second and younger generation of scholars and practitioners, which will accommodate and accept the idealism of the first wave but adopt a much more theoretically pragmatic attitude towards interactive video in relation to its possible media histories and futures.

This history helps us understand contemporary work in interactive video by providing some larger contexts in which it can be inserted. More significantly, it also provides a short circuit by which we can map, or at least point towards, some possible futures, simply by recognising that the minor disciplinary and definitional wars that will occur–what is interactivity, when is interactive video interactive, is it still cinema?–are important to the development of the field, but only in terms of the productive problems they will generate, rather than the hypostatised positions they will produce.

Softcopy Hardvideo

In a canonical 1988 essay, Diane Balestri (Balestri) characterised the distinction between using a computer for writing in terms of hardcopy and softcopy. Hardcopy is where we use a computer to write but maintain our publication medium as the page, more or less traditionally conceived. This is to use all the benefits afforded by desktop publishing and word processing, for instance spell and grammar checking, nonlinear editing, cut and paste, WYSIWYG design, inclusion of graphics, outlining, typographic design, reproducibility, and the various other formal and informal writing practices that have accrued to these word-based technologies; however, hardcopy retains the material hegemony of the page. Content is still presented in primarily linear forms, dimensions are relatively stable within a document, documents tend to be single objects, and pagination and textual features such as headers, footers, alphabetisation, indices, and tables of contents are employed to manage usability. Readers and writers are largely constructed via the constraints imposed by the medium; for example, closure, temporal coherence, and linear cause and effect are distinguishing features that have been hypostatised as the major formal properties of writing and narrative.

Softcopy, on the other hand, is the use of the computer for writing where the publication format is understood to be the computer screen associated with a modern graphical user interface. This means that content spaces are no longer pages but screens; they can be multiple, variable in size, and altered by the user, and content can now be presented, and not only written, in multilinear and multisequential ways. As has been well described by much of the traditional published literature on hypertext (Bolter; Gaggi; Landow; Lanham), the function of the reader tends to change in such environments. The implications of softcopy for the reader have probably been overstated, because there are many reading practices that are multilinear and semi-random: television viewing armed with a remote control springs to mind, while the use of a dictionary, encyclopaedia, or lifestyle magazine are also traditional examples of nonlinear or multilinear reading. For writers, however, softcopy has much more substantial implications, as writing on the basis that your work lacks a definitive beginning and end, may be read in variable sequences, and may not be read in its entirety deeply affects the authority and task of the writer and the status of the text as a particular kind of object. The example that hypertext, hardcopy, and softcopy provide for desktop video is this: the relationship between word processing and hypertext is the same as that between desktop video and interactive videography.

The necessity for desktop video software to adopt a hardcopy paradigm is apparent when video material is to be ‘published’ on film or video tape, as both basically require an image track and an audio track, though with some variation across formats. Such media are quintessentially linear and time based, as they physically require the continuous playing of their substrate through a projection or reading apparatus, and so ideally support the development of time-based narrative arts. It is, of course, theoretically possible to have only a single still image presented for the duration of a work, for example Derek Jarman’s 1993 feature Blue, which consists of a single image held on screen for 79 minutes, but in this case too the image is recorded and represented 24, 25, or 30 times a second for the duration of the work. The technical necessity of this serialised reading and display requires any digital video editing software to reproduce it, so that once editing is completed the work can be ‘printed down’: the native digital file structure must match the material demands of video.

In other words, the majority of the tools used domestically, professionally, and pedagogically for editing video and sound on the computer adopt a hardcopy, or as I prefer hardvideo, paradigm. This hardvideo paradigm is evidenced by the way in which all editing systems assume and provide for publication back to tape, and so maintain the video equivalent of hardcopy for video on the computer. Hence, these video and audio editing systems, much like word processing, provide numerous advantages and features compared to analogue editing, but do not require us as ‘authors’ or readers to question or rethink any of our assumptions about video as an object. A simple way to illustrate this is to think of frame rate. Film has traditionally been standardised to 24 frames per second during recording and playback (though technically this is varied during recording to achieve things like fast and slow motion and stop motion animation), while video is either 25 or 30 frames per second (PAL or NTSC). However, if the computer were to be the publication environment for a digital video work, what would constitute frame rate? Frame rate exists in digital video largely as a legacy of its hardvideo heritage. In the example of Jarman’s Blue, to edit the film on a digital edit suite and then to ‘publish’ it, even to the computer rather than to video or film, would require the editing program to ‘draw’ the image 24, 25, or 30 times a second for its 79 minutes. A softvideo environment would have no need to do this: because the image is static for 79 minutes on a computer screen, all a softvideo tool would need to do is draw it once and then simply hold it on screen for 79 minutes. This is how QuickTime works when it is used as an authoring and publishing environment, and this drawing of the frame once, holding it for a specified duration, is an example of the difference between hardvideo and softvideo.

This difference may appear to be merely quantitative, as in the softvideo example the image track of this 79 minute movie would literally only be as big as a single blue image at, let’s say, 1152 x 768 pixels at 72 dpi. This image, even at very high quality (little or no compression), would be approximately 100KB in size, whereas the hardvideo digital equivalent of this image track, with the same frame stored 24, 25, or 30 times a second for 79 minutes (well over 100,000 frames), would be approximately 600MB. However, once we introduce the network into the softvideo paradigm, this difference in size shifts from a quantitative to a qualitative change.

Networks

Pedagogically, the distinction between hardcopy and softcopy in relation to text has, in my experience, proved a useful analogy for introducing and illustrating the relation of hardvideo to softvideo. Even where students have regarded themselves as primarily image makers, they are deeply immersed and interpellated in and by print literacy, and so it provides a familiar context from which to problematize our ‘commonsense’ notions of what constitutes a possible softvideo practice. However, Balestri’s original work pays little regard to the role of the network, and it is obvious that while the difference between hard and softcopy, and for that matter hard and softvideo, offers a paradigmatic shift of its own, the introduction of networked technologies and their associated literacies offers a further and dramatic qualitative change.

In this context the writer-reader of the Web has become the prosumer of the digital hub, combining consumer electronics with desktop technologies to make and view, produce and listen, distribute and download. Clearly, the network is the most fluid and dynamic environment for this to take place in, and it is the combination of desktop video with the network that allows for the rise of a genuinely videographic discourse. This needs to be a practice that accepts the constraints offered by the network as enabling conditions, and it will become a form of video writing that, like hypertext before it, produces a hybrid that no longer looks backwards to existing media forms but instead peers forward towards the future genres it will invent. What prevents this possible future is largely the constraint of adopting television or cinema as the primary index defining desktop video as a practice.

Softvideography

Once the computer screen and the network are regarded as the authoring and publication environment for softvideo, video can be treated as hypertext, and, in the case of QuickTime, digital video moves from being a publication environment to a writerly environment. This ability to write in and with the multiple types and iterations of tracks that constitute the QuickTime architecture is the basis for softvideography. There is, perhaps, some irony in this, as in the past I have argued strongly that hypertext systems, particularly sophisticated standalone environments like Eastgate’s Storyspace, are postcinematic writing environments (Miles, “Cinematic Paradigms for Hypertext”). In this postcinematic conception of hypertext, nodes are structurally equivalent to cinematic shots in that they are minimal meaningful structural units, and links operate much as cinematic edits, joining these meaningful units into larger syntagmatic chains. The difference, of course, is that in traditional cinema or television the syntagmatic chains formed are singular, whereas in hypertext they are plural, and their particular formations can vary between the reader, the author, and the system. This is a claim I regularly make with my undergraduate students undertaking hypertext work, and after six years of students, all of whom have had some form of audio or video editing experience, I have yet to find one who does not understand that this is a very simple but powerful way to consider hypertext content and authoring. However, in softvideography this cinematic hypertextuality is returned to cinema, but in a manner that means considerably more than linked movies. To illustrate this I will use the example of hypertext.

As a writer within a hypertext system, it is useful to consider each node as a minimal meaningful unit that may, or may not, appear in any particular reading. This approach to writing hypertext encourages writers to shift their conception of authorship and reading away from a print and televisual model which assumes that the user or reader will comprehensively read or watch the entire work, from beginning to end. It also means the writer needs to recognise that each node exists within an economy of individuated readings, where the link structures authored or generated constitute the possibility for this economy. The content of a node has a meaningful status outside of its specific or possible link structure; however, nodes gain their meaning by virtue of the larger structures that they form via links. Simply, each node is a variable that, in conjunction with its links, may or may not appear in an individual reading. Whether nodes get ‘used’ or read is subject to various authorial, readerly, and scripted decisions: they may appear or they may not, and when they appear can be varied. The same consideration of content spaces is necessary in softvideography, so that the softvideographic equivalent of the node is conceived of not as a fixed or essential unit or block, but as available and possible. This does require a shift in conception, largely because in a medium as temporally subjected as video all shots or other significant units within the work are regarded as essential and inevitable. Shot sixteen is always shot sixteen, and because of this it is thought of as in some manner quintessentially necessary to the work.

QuickTime is the structural architecture that supports softvideography in this way. In general, QuickTime provides two broad ways of treating content so that a movie’s structural units are contingent and variable rather than immutable and fixed. The first is to take advantage of QuickTime’s track based architecture, where a QuickTime file can consist not only of numerous track types, including video, audio, text, colour, sprite, MIDI, and picture, but also of multiple instances of each track type. Therefore, it is possible to make a QuickTime ‘movie’ that contains three video tracks, four soundtracks, a text track, and two picture tracks. Each of these tracks is an object within the ‘movie’, and as an object it is structurally identical to how I have characterised a node: each track object, for instance each of the three video tracks, is able to be made available within the single movie on a contingent or programmatic basis. This means that you can script a QuickTime movie so that each of the three video tracks plays, or some combination of the three plays; similarly, you may play all soundtracks at once or some combination of them, and you can vary the volume of each soundtrack subject to system, user, movie, or external variables. This applies to each track type, and individually to each track, and all of these can be varied in time, as the QuickTime file plays, which obviously suggests that complex permutations and relations are possible between all of the tracks of the QuickTime file.
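
Though such scripting ordinarily lives inside the movie itself as a wired sprite track, a concrete way to see tracks as independently addressable objects is from outside the movie, via the QuickTime browser plug-in’s JavaScript interface. The following is a sketch only: the page, the movie name, and the track names (‘video 1’ to ‘video 3’) are assumptions, and while the track methods used (GetTrackCount, GetTrackName, SetTrackEnabled) are drawn from the plug-in’s scripting interface, the sketch is indicative rather than definitive.

<!-- Sketch: a movie embedded with a name so that JavaScript can address it.
     File and track names are placeholders. -->
<embed src="collage.mov" name="collage" width="320" height="256"
  controller="true" enablejavascript="true"
  pluginspage="http://www.apple.com/quicktime/download/">

<script type="text/javascript">
// Enable one of the movie's three (hypothetically named) video tracks
// and disable the other two: each track is an independently addressable
// object within the one movie, not a layer flattened into a single stream.
function soloVideoTrack(name) {
  var movie = document.collage;
  for (var i = 1; i <= movie.GetTrackCount(); i++) {
    var trackName = movie.GetTrackName(i);
    if (trackName == "video 1" || trackName == "video 2" || trackName == "video 3") {
      movie.SetTrackEnabled(i, trackName == name);
    }
  }
}
</script>

Calling soloVideoTrack('video 2') while the movie plays swaps which of the three simultaneous video tracks is drawn, without stopping the movie or touching its soundtracks.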

An example of this is “Collins Street” (Miles, “Vog: Collins Street”), a small QuickTime work that consists of nine video tracks, three sound tracks, one sprite track, and a colour track. The sprite track, which is a fully scriptable track type in QuickTime, contains nine still images that are temporarily collaged over individual video panes, and the colour track is simply the movie’s black background, which in QuickTime is not an image but more like a vector based track, and so draws a colour at a specified size. As “Collins Street” (a downtown Melbourne street) plays, the user can mouse over each of the video panes, and doing so ‘triggers’ the sprite track, which turns on and displays for a pre-scripted duration a JPEG image containing text. The same sprite track also controls which of the three simultaneous soundtracks is being heard, and its relative volume. While this particular work might be thought of as an experimental documentary, it illustrates some of the things that can be done using QuickTime, and the way in which tracks can be considered ‘independent’ objects within the movie, so that the movie becomes not a linear audio and visual track but a container for a multiplicity of such tracks that are enabled variably.

As a more complex example, imagine a video image of a student cafeteria, with several tables of animated conversation in view. Mousing over each table could, for example, allow the user to hear the conversation at that particular table, while clicking on each table could load a new QuickTime movie that would take you to that table. To make this example more sophisticated, imagine that within this cafeteria scene the moment at which you click on a particular table to ‘zoom’ in to that specific conversation is significant: to click on a specific table in the second thirty seconds loads a different movie than if you had clicked in the first thirty seconds. Once you begin to appreciate that this is possible, the sorts of narratives and content that can be made become distinctly different from our existing conceptions of video narrative. Time dependent, or otherwise variable, links embedded within the field of the video shift authorial activity away from the ‘button’ model common to multimedia and most existing forms of online video. These contextual intra-video links are qualitatively different sorts of link events from navigational, volume, and play buttons, in the same manner that text links within a written hypertext work are qualitatively different from those links constructed and provided by a navigational menu (Ricardo).

The second manner in which QuickTime supports work like this is through its provision of ‘parent movies’ and ‘child movies’. A parent movie is a container movie that may, like any other QuickTime movie, consist of numerous tracks, but it will also include one or more movie tracks. A movie track, which should not be confused with a video track, is a QuickTime track that allows you to load external QuickTime content, in effect other QuickTime movies, into another movie. The movie that contains the movie track is known as the parent movie, and the content that is loaded within the parent movie is known as a child movie. Child movie content can be any data type that QuickTime can read, and it can reside anywhere that the parent movie can access, so if the parent movie is designed to be delivered via the network, then the child movie content can, literally, reside anywhere else on the network. A parent movie can contain multiple movie tracks, and, more impressively, an individual movie track in a parent movie operates as a list, so it may contain numerous individual external files. For example, you can make a QuickTime parent movie that contains a movie track whose list consists of, let’s say, nine sound files. The parent movie can be scripted so that one of these nine child movies is loaded subject to whatever conditions or actions are scripted for, and this can be altered dynamically during the playing of the parent movie. Child movies exist in complex relations to parent movies, as it is possible to tie a child movie’s duration and playback to its parent, or for the child to be independent of the parent. Where a child movie is slaved to the parent movie it may only play when the parent movie is playing, and it will stop playing when the parent movie ends. Where a child movie track is not slaved, it can play independently of the parent movie’s duration, and even separately from the parent movie’s play state, so that even where a parent movie may be paused, the child movie can continue to play.

One example of this is “Exquisite Corpse 1.1” (Miles, “Exquisite Corpse 1.1”), a triptych in which a single parent movie loads one child movie into each of three movie tracks. Within this brief work the movie tracks appear as video panes in the parent movie, but since they are child movies the three videos that appear all reside outside of the parent movie. The child movies have been scripted to loop, and their durations are independent of the parent movie, which in this case is a QuickTime movie only one frame long. In addition, the bar above and below each video pane is a sprite track, so that mousing into any of the bars controls the volume and the playback rate of each of the three child movies: the pane the user has moused into plays at twenty-four frames per second at full volume, the next plays at twelve frames per second at zero volume, and the next at six, also at zero volume. Each of the three movies varies slightly in content, and the effect of this structure is that to view the movie the user literally plays the movie, and when and where they mouse controls the combinations formed between the three simultaneous video panes. This has several rather intriguing consequences. The first is that as each of the three child movies has a duration independent of the parent movie, and of each other, the work as a whole would appear to be endless, or at least of indeterminate duration. This is not simply the looping used in animation and programming, and described in detail by Manovich (Manovich), as there is no sense of necessary or inevitable repetition involved. The work continues, indefinitely and variably, until the user stops, and while they play it loops, but the combinations formed, the rates of playback, what is heard and seen will always be novel. The sense of duration implied here is fundamentally different to that associated with film and video, which have traditionally been subject to and by time.

Another implication of this structure is that if we consider montage as the relations between images in time, then here montage is apparent not only within each video but also in the ongoing relations between the video panes, and this larger set of relations is partially controlled by the user. Hence montage, which is ordinarily conceived of as fixed and immutable, has become unfixed and mutable, which in turn provides a preliminary illustration of how the ‘place’ or ‘site’ of the event of montage will move towards the user in interactive video. This is analogous to the manner in which hypertext theory conceives of the reader’s role in relation to the realised text: the discursive system becomes a field for the provision of possibilities, and individual readings or playings become the realisation of individual variations within this field of possibilities.

Softvideo Pedagogy

There are several software packages available at the time of writing that support using QuickTime as an authorial and writerly environment. Some of these tools are cross platform; however, much of the innovation in interactive video appears to be developing around Apple’s OS X operating system, and QuickTime remains the only widespread audiovisual file structure that conceives of time-based media as something other than a delivery platform. QuickTime Pro, which is what QuickTime becomes when you pay the licence fee and register your existing copy, provides access to a great deal of these authoring possibilities. EZMediaQTI is a recently developed commercial package that provides a very simple interface to much of QuickTime’s underlying programmable architecture, while Totally Hip’s cross platform LiveStage Professional is currently the major tool used in QuickTime authoring, outside of programming QuickTime in Java. However, in the context of interactive networked video it is not the software product that is the tool but QuickTime as an architecture, and, as with considering text on a page as an ‘architecture’, the specific tools are less significant than developing literacies around what the architecture makes possible. It is these literacies that allow us not only to use these software products as tools but to appropriate them for novel uses and possibilities. After all, one of the major issues confronting teaching technologies in networked and integrated media contexts is the balance and confusion students experience between learning the ‘tool’ and learning what can be done with the ‘tool’. To encourage this shift I use three simple exercises that help students move their attention away from the software as the apparatus and towards their ‘content objects’ as the apparatus. Or, to return to my earlier terms, to stop thinking of the software as the architecture and to understand that the architecture is a combination of what and how the work is ‘built’.

The first exercise uses QuickTime Pro only and is intended to show that you can make collaged and montaged time-based video works using very simple technologies. The introductory tutorial for this exercise, including all the necessary content, is published online (Miles, “Desktop Vogging: Part One”), and demonstrates how to import still images into a QuickTime movie, scale an image to a nominated duration (for example to accompany a soundtrack), embed other still images to create a collage, and then embed video tracks over these image tracks to end up with a collaged movie that contains four still images, one soundtrack, and three video tracks. While demonstrating the desktop nature of such work (QuickTime Pro is, after all, a simple thirty-dollar piece of software), it also foregrounds the manner in which tracks in QuickTime are samples and fragments of a larger whole, not completed content that is exported via QuickTime for publication. After this tutorial, students are invited to collect their own material, using domestic camcorders, digital cameras, scanners, and minidisc, and to make a short standalone QuickTime collage. As experience builds, constraints are introduced to the exercise, so that the final work must be two minutes in length and may be limited, for example, to a total of 2MB containing a nominated number of specified track types. This aspect of the task is where one facet of network literacy is introduced, as bandwidth in its various forms becomes concrete under such constraints.

The second exercise is based on the QuickTime collage project that students have already completed and uses QuickTime’s HREF track type, which allows a movie to load Web based content as it plays. The HREF track is a specific type of text track within QuickTime that contains a combination of timecode and URLs. Timecode is how specific moments in the movie are nominated, and the URLs can be either ‘manual HREFs’ or ‘auto HREFs’: a URL on its own is a manual HREF, a URL prefixed with ‘A’ is an automatic HREF, and a ‘T’ suffix nominates a target frame. For example, the following extract from a HREF track (the URLs shown are placeholders) indicates an in and out point for each HREF, and illustrates that some HREFs are manual and some are automatic:

[00:00:00.00]
<one.html>
[00:00:02.00]
<two.html>
[00:00:04.00]
A<three.html>
[00:00:06.00]
A<four.html> T<text>
[00:00:10.00]

The usual way in which a QuickTime movie with a HREF track is used is to embed the movie on a web page within a frame, and to use the HREF track to target another frame, and so the URLs contained in the HREF track are automatically displayed in the target frame as the movie plays. The distinction between an automatic and a manual HREF is simply that the automatic HREF URL will load in the nominated frame as the movie plays, while the manual HREF URL will only load if the user clicks on the movie during the relevant time interval. The URL that the HREF track indicates is simply a Web page and so can contain or display any content that can be displayed via a Web server and within a Web browser, including of course other QuickTime movies.
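
A minimal sketch of this arrangement, with placeholder file and frame names, is an ordinary frameset in which the second frame carries the name that the HREF track’s ‘T’ suffix nominates:

<!-- frameset.html: the movie plays in one frame and the HREF track's
     URLs load into the frame named "text". All names are placeholders. -->
<html>
  <frameset cols="60%,40%">
    <frame src="movie.html" name="movie">
    <frame src="blank.html" name="text">
  </frameset>
</html>

<!-- movie.html: the embedded QuickTime movie carrying the HREF track. -->
<embed src="collage.mov" width="320" height="256" controller="true"
  pluginspage="http://www.apple.com/quicktime/download/">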

The task for the students is to write a series of text only Web pages that the QuickTime file loads, as it plays, into the frame, and for these pages to complement the collaged QuickTime work in a meaningful way. They may, of course, make a new QuickTime collage piece for this task. Text only content is nominated because it loads much more quickly than any other sort of content via HTTP, and so remains viable when the work is viewed over the Internet. It also encourages students to begin to think about the relation of word to image in terms of aesthetic practice and narrative, and models the idea that text may exist within the QuickTime movie as embedded, concealed, and operative code rather than as surface effect and narrative. This assignment also provides a very minimal form of explicit interactivity between the user, the QuickTime movie, and the loaded pages, particularly where manual HREFs or a combination of manual and auto HREF URLs are used, and this requires students to extend their understanding of the possible relations between parts beyond the single QuickTime collage and towards external objects and their possible relations to their movies.

The HREF track is ordinarily written using a simple text editor, imported into QuickTime, converted into a HREF track, exported from QuickTime with timecode added, and then edited for reimporting into the movie. This is clumsy, and there are more efficient ways of doing it, but it demystifies what a text track and a HREF track are in a QuickTime movie, and insistently demonstrates the desktop nature of softvideography as a writerly practice: in this example an interactive, Web-based, mixed-media movie has been made using only QuickTime Pro and whatever free text editor comes with the computer’s operating system.

The third exercise is also network based and helps students think about and understand the possible relations between parts and the implications of this. While the intention of softvideography is to use QuickTime as a writerly environment, this extends beyond the internal relations of tracks and samples to include the relations of external tracks or samples, which is of course important when working with parent and child movies, as well as for understanding multilinear environments in general. The formal task, which can involve video, still images, or a combination of both, is to develop a narrative that consists of seven shots, where each of the seven shots may appear at any point in the sequence. In other words, each shot or image appears once, at any point in the sequence of seven, and regardless of where it appears the sequence must still retain narrative coherence. Intertitles can be used, though an intertitle counts as a shot. The students then embed their video on a web page that contains a script that automatically randomises the sequence of the seven movie files, as sketched below. This is done by taking advantage of QuickTime’s QTNEXT attribute, available when QuickTime is embedded via HTML, which allows you to play up to 256 QuickTime files in a row: as one QuickTime file ends the QuickTime plug-in requests the next file, and so on. This means that when the Web page that contains their movies is viewed, each individual viewing displays one of the five thousand and forty (7! = 5,040) possible sequences.
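
The sketch below shows one way such a page can work; the seven file names are hypothetical, and the script shuffles them before writing out an embed tag whose QTNEXT attributes queue the remaining six shots (the T<myself> target tells the plug-in to play each successive file in the same movie pane):

<script type="text/javascript">
// Seven hypothetical shot files; any seven QuickTime movies will do.
var shots = ["one.mov", "two.mov", "three.mov", "four.mov",
  "five.mov", "six.mov", "seven.mov"];

// Fisher-Yates shuffle: each page load yields one of the 5,040 orderings.
for (var i = shots.length - 1; i > 0; i--) {
  var j = Math.floor(Math.random() * (i + 1));
  var swap = shots[i];
  shots[i] = shots[j];
  shots[j] = swap;
}

// The first shot becomes the embed's src; shots two to seven are queued
// as qtnext1 to qtnext6 and play back to back in the same pane.
var tag = '<embed src="' + shots[0] + '" width="320" height="256" controller="true"';
for (var n = 1; n < shots.length; n++) {
  tag += ' qtnext' + n + '="<' + shots[n] + '> T<myself>"';
}
tag += ' pluginspage="http://www.apple.com/quicktime/download/">';
document.write(tag);
</script>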

This exercise is useful because it allows students to see how complex narrative or multilinear possibilities develop from quite simple and small sets, and that complexity is not synonymous with the large scale nested or branching structures that are common when students first start trying to conceive of multilinearity. The task also demonstrates one of the most difficult aspects of new media narration, for to conceive of seven possible moments, images, shots, or words that can be randomly presented in this manner, and yet retain something that might be identified as narrative, is a particularly demanding task. Many of the works produced are more like tone poems or mood pieces, what film semiotician Christian Metz has catalogued as a bracket syntagma (Metz), which of course suggests the difficulty of constituting narrative via fragments that need to be narratively ‘permeable’ in this manner. Incidentally, this exercise also helps students in their reading of canonical hypertext literature, such as Joyce’s Afternoon or Moulthrop’s Victory Garden (Joyce; Moulthrop), as it provides them with a cognitive and formal template for understanding the structural problems and processes of these works. It might also indicate that narrative is not a reasonable expectation of work intended to be as multivalent as this.

Conclusion

When we learn to treat desktop digital video as a writerly space, with all the baggage that this connotes, we can recognise that an architecture such as QuickTime is a programmatic environment and, like many other networked programmatic environments, involves consideration of the relations between image and word. This is the crux of what constitutes new media and networked literacy, and it is why digital video as a networked, distributed, and writerly practice becomes an exemplar for the issues confronting teaching and learning in contemporary media contexts. Softvideography reconfigures the relation of author to viewer in the domain of time-based media and provides one model for a future pedagogy that emerges from the implications of networked digital practice. Such tools ought not only to allow us to reconsider what to do with video and sound online, but also to offer the possibility of developing novel expressions of learning and knowledge. This is an ambitious agenda, but one that our students deserve and require for the networked ecology they are inheriting.

Works Cited

Balestri, Diane Pelkus. “Softcopy and Hard: Wordprocessing and the Writing Process.” Academic Computing Feb. 1988: 41-45.

Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale: Lawrence Erlbaum Associates, 1991.

Gaggi, Silvio. “Hyperrealities and Hypertexts.” From Text to Hypertext: Decentering the Subject in Fiction, Film, the Visual Arts, and Electronic Media. Philadelphia: University of Pennsylvania Press, 1997. 98-139.

Joyce, Michael. Afternoon: A Story. Watertown: Eastgate Systems, 1987.

Landow, George P. Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins University Press, 1997.

Lanham, Richard A. “The Electronic Word: Literary Study and the Digital Revolution.” The Electronic Word: Democracy, Technology, and the Arts. Chicago: The University of Chicago Press, 1993. 3-28.

Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2001.

McQuire, Scott. Crossing the Digital Threshold. Brisbane: Australian Key Centre for Cultural and Media Policy, 1997.

Metz, Christian. Film Language: A Semiotics of the Cinema. Trans. Michael Taylor. New York: Oxford University Press, 1974.

Miles, Adrian. “Cinematic Paradigms for Hypertext.” Continuum: Journal of Media and Cultural Studies 13.2 (July 1999): 217-26.

Miles, Adrian. “Desktop Vogging: Part One.” Fine Art Forum 17.3 (2003).

Miles, Adrian. “Exquisite Corpse 1.1.” Videoblog: Vog 2002. 19 February 2003.


Miles, Adrian. “Vog: Collins Street.” BeeHive 4.2 (2001).

Moulthrop, Stuart. Victory Garden. Watertown: Eastgate Systems, 1991.

Ricardo, Francisco J. “Stalking the Paratext: Speculations on Hypertext Links as a Second Order Text.” Proceedings of the Ninth ACM Conference on Hypertext and Hypermedia: Links, Objects, Time and Space – Structure in Hypermedia Systems. Eds. Frank Shipman, Elli Mylonas, and Kaj Grønbæk. Pittsburgh: ACM, 1998. 142-51.

Rieser, Martin, and Andrea Zapp, eds. New Screen Media: Cinema/Art/Narrative. London: British Film Institute, 2002.
