drew davidson sent me an email today alerting me to a slashdot story on videoblogs. a lot of comments. many of the ‘blogs are dumb’ or ‘vanity video’ variety, but for such a geared up digital community they’re surprisingly low brow when it comes to knowing the first thing about video. like the post that says you need a qtss server (or similar). nope, plain http is just fine for video blogging.
anyway, apart from feeling smug since my video blog is over two years old i remain dumbfounded that the only model people seem to be able to think of for video blogging is middle brow distributed talking heads. aka tv journalism meets reality tv stirred for the web. well, yes, this will work, and is viable, now. but isn’t it little more than vanity video wannabes?
then i’m reading scott’s blog today where he’s discussing noah’s talk down his end of the world and, oh my goodness, he too makes the same mistake (and he should know better) in an aside about hypermedia and ted nelson:
Noah also cleared up some confusion I had about the term “hypermedia” — in Nelson’s terminology, hypermedia is not just multimedia, but multimedia the user can manipulate interactively (my paraphrase). So an online dissection kit is hypermedia, a Quicktime video is not.
all my vogs are quicktime video. quicktime video is pretty much the only environment that lets you script interactive video. well, there’s DVD of course, which is mpeg2, but really. quicktime is a file architecture that supports different file formats (over 100 graphics formats, i don’t remember how many video formats) and it has a sophisticated event and scripting architecture (wired actions, in quicktime speak) that lets you script extraordinary things: write a movie that responds to xml, user events, other movies, any input you can script (mouse, keyboard, microphone, external files), and so on.
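a rough sketch of that event model, in plain Python rather than the actual QuickTime wired-actions API (which is authored in tools like LiveStage Pro, not written as code like this) – all the names here are hypothetical, it just illustrates the idea of a movie whose behaviour is scripted per input event:

```python
# Toy model (NOT the QuickTime API): a movie object that reacts to
# scripted inputs such as mouse events or external xml data.

class WiredMovie:
    """A movie whose responses to events are scripted by the author."""

    def __init__(self):
        self.handlers = {}  # event name -> list of scripted actions
        self.log = []       # record of actions fired, for illustration

    def on(self, event, action):
        # attach a scripted action to an event (mouse, keyboard, xml...)
        self.handlers.setdefault(event, []).append(action)

    def dispatch(self, event, payload=None):
        # fire every action scripted for this event
        for action in self.handlers.get(event, []):
            self.log.append(action(payload))

movie = WiredMovie()
movie.on("mouse_enter", lambda pane: f"play audio track {pane}")
movie.on("xml_data", lambda data: f"update caption: {data}")

movie.dispatch("mouse_enter", 2)
movie.dispatch("xml_data", "rosella")
```

the point is simply that the movie carries its own behaviours, rather than being a dumb stream the way television-style video is.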
it isn’t that my job is to save quicktime from the flash kiss of death. it is to get people to understand that if you want to work simply with interactive video on your desktop, then quicktime is the environment. everything else still thinks it is television. i’ve an essay about this out later this year; in the meantime check out part one of a tutorial (also being published shortly):
This year’s Digital Resources in the Humanities gig is at the University of Gloucestershire. This is an annual British conference similar to the ACH conference series, though DRH tends to have more media rich works. It’s a useful event that begins the effort to bring together the predominantly archival and literary computing crowd with those crossing more fully into new media, education, and the like.
April Melbourne Weather
Today I remembered why I wanted MelbourneDAC to be held in April, preferably over Easter. All week the weather’s been (and will be) cool mornings and blue-skied, mid-20s afternoons. Never mind. But it is weather that encourages conversation.
Rosella Take Two
Well, I returned to the Rosella vog self portrait that I finished last week, since I’ve been unsatisfied with it. Looking back at some earlier work I realised how much I like the work that has the multiple video panes, and so I’m returning to that for a while. I really have not explored enough of the possibilities that this offers.
(Last week I gave a lecture to a group of students about working creatively and productively in networked media – what I always characterise as affirming the constraints of networked media, and I realised afterwards that all my recent work has not been using the video panes, it prompted me to return for a while.)
This version of “rosellasorselfportrait2-hand” is exactly the same as the previous except I’ve upped the data rate slightly (a whopping 1 Kb/s per pane; each pane of the main video is compressed to 12 fps at 3 Kb/s with a peak of 6 Kb/s using the Sorenson 3 professional codec with 2-pass Variable Bit Rate turned on and with natural, aka automatic, key frames enabled), and have tiled the video into 9 panes.
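taking the per-pane figures above at face value (and assuming all nine panes stream at once), the aggregate data rate multiplies out straightforwardly:

```python
# Aggregate data rate for the 9-pane tiled movie, using the per-pane
# figures from the post (units as given there, Kb/s per pane).
panes = 9
avg_per_pane = 3   # average rate per pane, 2-pass VBR
peak_per_pane = 6  # VBR peak per pane

total_avg = panes * avg_per_pane    # 27 Kb/s across the tiled video
total_peak = panes * peak_per_pane  # 54 Kb/s worst case

print(total_avg, total_peak)
```

still comfortably modem-friendly, which is rather the point of working this way.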
This returns me to the montage with collage material that I’m particularly interested in exploring.
My interactive video aka video blog project has been included in the most recent low-fi net art projects online show. This is rather pleasing (as invitations to present your work always are) since I don’t think I actually applied, so they must have seen my work and included it amongst the others.
Next week I’ll go back and spend some time looking at the other projects, but for now, I just really like the homepage. Call me slow, but it actually took me days to realise that the spinning cursor was the hotspot – I’d just figured that there was some weird Safari and Flash problem I’d never encountered before. Then I realised that the cursor was always in the same place, and so I moused over and yep, that’s the spot.
Increasingly I’m learning and realising that I like interfaces and work that don’t just make me ‘work’ (in the way that earlier hypertext theory talked about the new role of the hypertext reader) but which shift control away from me as user to a stronger dialogue with the work.
This is sort of the standard claim of interactive media, but most works just don’t do this; they’re images that are rollovers, or not much more than menus-as-pictures with the same level of action or interaction as a text list in html, just prettier. But here something as simple as a spinning cursor – what I normally see when, for instance, Safari has frozen on a page – teases me with its irony, discreteness, and hidden invitation. I like that. Control should be distributed, and then, if we’re lucky, it might not be control anymore.
New Media Reader
Nick and Noah’s anthology, “The New Media Reader”, has just arrived on my desk. Took ages; an evaluation copy sent surface mail by MIT Press tends to take a while to arrive in Australia. Immediate impressions: it’s big, it’s thorough, it is catholic. I’d probably use it as an excellent source text for a review subject.
I actually would like to read it cover to cover, since there’s a lot of material in there that I know about but haven’t actually read (as a contemporary academic, who has time to read cover to cover?) and now these nice people have put it all together for me. It is a survey, but even includes a brief section from Deleuze and Guattari’s opening chapter of “A Thousand Plateaus”, so I’m impressed.
The self assessment that students are doing in hypertext theory and practice is, well, an interesting experience to date. Several students still struggle with what it is they are assessing, and want me to either authorise their assessment by agreeing with their marks or their claims, or simply retreat to a very content focussed model to judge what they’ve done and learnt.
This means that a student might have learnt a lot about using the computer and blogging (what I characterise as collateral learning) but they haven’t learnt theory X, and don’t know or recognise that they have actually learnt something, and that something is of value.
From my own point of view it is interesting that every week in our labs it is only at the end that we do the self assessments, which inevitably means we run out of time. This is because I too am content focussed and so try to get to an ‘end’ in the lab and then we can do the self assessments. This is counterproductive, and today I realised the very simple and retrospectively obvious solution. We will do the self assessments at the beginning of the lab instead.
I’ve just finished another vog, this one is the second of a proposed trilogy of self portraits and is me feeding some crimson rosellas at Christmas time.
Technically this is a simple vog, one video track, three animation sprites and a child movie track that loads one of three audio tracks on mouse enter events. The animation sprites are made up of four still images from the video that are run through using the idle event time of each sprite. Each of these sprites also has a mouse enter action which changes the idle event time for the sprite (so the animation speed varies) and it also loads a child movie audio track. These child movie tracks (child movies are movie assets – stills, audio, video or whatever, that are stored externally to the parent movie and only downloaded if the client/user requests them) vary in size from 60 to 180 kb and so there will be some lag as enough downloads via http for the track to begin playing.
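the sprite logic described above can be sketched like so – again in plain Python rather than QuickTime’s actual wired-sprite authoring environment, with all names hypothetical:

```python
# Minimal sketch (NOT the QuickTime API) of one animation sprite from
# the vog: it cycles through four stills on an idle timer, and a mouse
# enter event both changes the idle time (so the animation speed varies)
# and requests the sprite's child movie audio track over http.

class Sprite:
    def __init__(self, name, idle_ms, audio_track):
        self.name = name
        self.idle_ms = idle_ms        # time between frame steps
        self.frames = list(range(4))  # four stills taken from the video
        self.frame = 0
        self.audio_track = audio_track

    def idle(self):
        # fired on each idle event: step to the next still, wrapping
        self.frame = (self.frame + 1) % len(self.frames)

    def mouse_enter(self, new_idle_ms, load_child):
        # vary the animation speed and load the child movie audio
        self.idle_ms = new_idle_ms
        return load_child(self.audio_track)

def load_child(track):
    # child movies live outside the parent and download only on request
    return f"downloading child audio '{track}' over http"

s = Sprite("left_pane", idle_ms=250, audio_track="commentary1.mov")
s.idle()
msg = s.mouse_enter(new_idle_ms=100, load_child=load_child)
```

the lag mentioned above happens in that last step: nothing plays until enough of the 60–180 Kb child track has arrived via http.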
The video track just plays, all the user action happens down below in the animation tracks.
Aesthetically the work is thin. The visual layout with the three panes being wider than the video pane was a continuation of the recent work where the kinesthetic areas of a vog move away from simple or singular video panes, but it is too concrete and present. I think it might work better if some of the panes varied (faded in and out for instance) over time, or on the basis of user actions.
The audio tracks are also, um, not quite there. The commentary from me was originally made as text on screen, as in the walking vog, but I just wanted a more audiovisual experience happening so decided to shift it to soundtracks (which actually makes it much faster and easier to make). But the commentary doesn’t give the texture to the images that is required; it just stays too flat. This is not an artefact of the format – television ads can tell compelling stories in 30 seconds – but is largely a product of what I’m slowly recognising as my high formalism. Probably why I see myself as an academic and not an artist (artists I suspect see my work as too academic, academics of course only see art).
However, I like the visual texture that is starting to happen, though I want it to be more variable, so next time I will think about having the mouse events happen in the video pane and have these control the animation sprites, so that some fluidity enters the interaction. I’m not a big fan of ‘mousing into sprite A controls sprite A’.
Desktop Vogging Tutorial
The current issue of the Fine Art Forum Ezine is out. Has part one of a tutorial I’m writing on desktop vogging. In the first part I show how to make a collaged/montaged movie using QuickTime Player and in the introduction I mention some of the qualities that need to be affirmed to work successfully in internet video. In the tute itself I also point out how doing this in QuickTime Player produces a completely different outcome to working in a nonlinear editing package. All the media files needed for the tutorial are included. YMMV.
My kids and I have a pet yabby (Cherax destructor). His name (well, I haven’t confirmed gender actually) is Kevin because he is such an avid and serious gardener (if you grew up in Victoria then the name ought to make immediately obvious sense).
And I’ve become a bit of a fan of Kevin as a pet. Yabbies tolerate extreme water conditions, can go without food for three months, and are rather active beasties. Though it turns out that they are very territorial, so our two yabbies in one tank are now one . . . Yesterday Kevin moulted. The third time since Christmas, though the first time I’ve seen it happen. As the new exoskeleton matures he pumps water out of his body to shrink, his old exoskeleton splits, and he emerges out of it. He then pumps water back into his body to fill the new exoskeleton, and keeps rather quiet for a day or so as it hardens. So yesterday when he was lying on his back keeping rather still I decided he was moulting rather than dying, and sure enough his shell split along the length of his underside and he somehow wiggled and squeezed his way out of it. Normally he’d eat the old exoskeleton, largely for the calcium, but Sophie’s taken it to school for show and tell. It really is quite remarkable.
In a lecture the other day someone asked “What is the difference between a blog and a web page?” An excellent and