Tag Archives: Vogging Practice

QuickTime Got Broke

Finally put aside the time to begin the rebuild of my vog site. The problem, as I recently discovered, is that the QuickTime plugin no longer seems to support much of the deep programmatic interactivity that has been a part of QuickTime since, well, it was a twinkle in Jobs's eye. (A lot of the work that was going to become a bigger, newer HyperCard actually got put into QuickTime.) This means nearly all of my online video work doesn't work. But it does work in QuickTime Player. So the plan was to rebuild the site using HTML5 and target the QuickTime player, taking advantage of the JavaScript libraries that Apple provides that let you do nifty things. Except, well, fuck me. The most recent QuickTime Player, that dumbed-down thing that, dare I say it, is all about surface and polish but empty underneath? It is just like the plugin. All the programmatic goodness has been stripped out of that too! QuickTime Player 7, which is still available, works just fine. But this means most of my stuff will now only work if I put up some sort of declaration saying "open and play with QuickTime 7 please, you can get it here". But I can't target it.

I don't know what Apple's rationale is here, except to shift all the stuff about video and QuickTime towards iDevices and production (Final Cut). In other words, they've taken a sophisticated technology and stripped it bare, dumbed it down, turned it from a programmatic form into just a media container for making and delivering. A moment worthy of, well, I was going to say Microsoft, but that might be a bit harsh. At this point I have no idea what I will do. Perhaps make all the work available for download so it can be played locally with QuickTime 7. Perhaps make a hundred screencasts of what the works actually do and turn the site into a bloody archive.

Back to the Future

Turns out that some aspects of video for iOS devices and the like take us back to the early days of video on the web. These days, for those of us into video blogging and the like, we pretty much just upload and embed video and deliver it via HTTP. Either off a hosting service that we pay for ourselves, or pumped out via a third party video service like Blip or Vimeo. There's enough processor power, bandwidth and the like to do this without too much fuss. Indeed, some compress to crazy bandwidth-polluting specifications, but it pretty much still gets pumped around the interweb OK.

(Bandwidth excess is pollution. It is the digital equivalent of having cheap oil and so building cars that burn lots of fuel because it is cheap, then spending billions on infrastructure for all those cars, while public transport and the environment all take a back seat.)

However, in the early days, when bandwidth was narrow, video was small, and not many of us actually played with it online, QuickTime had this thing called a reference movie (which was not the same as the reference movie option you get when saving a QuickTime file in QuickTime): a tiny little placeholder video you created. This movie (which you made with the MakeRefMovieX utility) contained links to different sized versions of your QuickTime file. You embedded the ref movie in your page, and the QuickTime plugin had a conversation between itself, your computer and the ref movie to work out your bandwidth, then served up the appropriate version of the file. The embed itself looked something like the sketch below.
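
For anyone who never met one in the wild, a sketch from memory of the era's object/embed markup, so treat the details as approximate; ride_ref.mov stands in for whatever MakeRefMovieX produced:

    <object classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B"
            codebase="http://www.apple.com/qtactivex/qtplugin.cab"
            width="320" height="256">
      <param name="src" value="ride_ref.mov" />
      <param name="controller" value="true" />
      <!-- the embed element did the same job for non-IE browsers -->
      <embed src="ride_ref.mov" width="320" height="256"
             controller="true"
             pluginspage="http://www.apple.com/quicktime/download/" />
    </object>

(The extra 16 pixels of height left room for the controller bar.)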

Now, this was sort of clever and cool, as it meant that bandwidth-appropriate stuff could be sent. But it turned out that in practice it wasn't that great, for the very simple reason that it removed choice from users. Let's say you have a big file of the last 10 km of the 2009 Men's World Road Race championship. Nice quality. But it is big. Real big. You also have middle-sized and small versions, and you've made a ref movie, so this is solved for me, the user. I'm not on great bandwidth, so it sends me the middle-sized one. Yeah, OK, but I really want the big one. I know it will take 4 hours to download, but I'm a fan, and for this content 4 hours is fine. I want the hi-res. A ref movie prevents me from choosing this. So what we did pretty quickly was simply compress our three versions and provide links to all three, with some indication of dimensions and data size, as in the sketch below.
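
That is, nothing fancier than this (file names and sizes invented for the example):

    <ul>
      <li><a href="race_large.mov">Large (640 x 480, ~800 MB)</a></li>
      <li><a href="race_medium.mov">Medium (320 x 240, ~180 MB)</a></li>
      <li><a href="race_small.mov">Small (160 x 120, ~40 MB)</a></li>
    </ul>

Dumb, but it put the choice back where it belonged: with the viewer.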

Well, as I've been catching up on the world of online video recently, it seems we may be heading back to this older ref movie model. There have been some developments that I've missed, the main one of which is HTTP Live Streaming. This is sort of like RTSP but via HTTP, and is an Apple-developed protocol. The advantage is that RTSP uses funny port numbers and all that, so firewalls and proxies can get cranky, which just means the content won't appear. But if you deliver via HTTP, well, that's the protocol and port (number 80, if you're interested) that the Web uses, and there is no firewall that simply stops traffic on port 80. It also turns out that the Adobe Flash Media Server, which is Adobe's version of video streaming (but unlike Apple's Darwin server costs somewhere around USD4500 – heck, the paid version of Apple's OSX server, which includes the video server, is only $52.00!), also provides for this protocol, largely because it is the only way to get video into iOS device apps.
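
The ref movie resemblance is easiest to see in the file that drives HTTP Live Streaming: a plain-text master playlist (.m3u8) that lists the available versions by data rate, and the client picks, and keeps re-picking, among them as it plays. A minimal sketch, with invented paths and rates:

    #EXTM3U
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000
    low/prog_index.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000
    mid/prog_index.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000
    high/prog_index.m3u8

Each of those entries points at a further playlist of the actual video segments for that version. It is the ref movie all over again, just in plain text over port 80.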

This is going back to the future because a series of steps and constraints have been reintroduced:

  1. if you want to send video or audio that is over 10 minutes in length over a mobile network (NOT wifi) to an iOS device then you have to use HTTP Live Streaming (this is the overview page)
  2. If that isn't enough, then you must include a low bit rate version with a maximum data rate of 64 Kbps that the stream can default to when you hit that service shadow
  3. you need to run your compressed video through some other tools to get it ready for Live Streaming – this includes the Media Stream Segmenter, the Media File Segmenter, and the Media Stream Validator (a rough usage sketch follows this list)
  4. But the documentation is ambiguous about whether you have to do this or it is merely recommended – essentially the protocol lets the client jump between different versions as it plays. This matters because the client may walk out of wifi and into mobile coverage while watching the content (as an example) – but after some more reading it appears you have to do this for app approval; if the video isn't going through an app then I guess you can do what you like, including trying to rely only on wifi for delivery
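
For what it's worth, the tool run in step 3 looks roughly like this. A sketch only: the option names are as I remember them from Apple's documentation, so check each tool's usage output, and big_ride.mp4 and the paths are invented:

    # chop a compressed file into ~10 second segments plus an index playlist
    mediafilesegmenter -t 10 -f /Library/WebServer/Documents/ride big_ride.mp4

    # then sanity-check the published stream
    mediastreamvalidator http://www.example.com/ride/prog_index.m3u8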

UPDATE: saw mention that RTSP is just plain not supported on iOS, and that the requirement that HTTP Live Streaming must be used is enforced through the app approval process, so presumably if you just progressively download the material it will still work, though the user experience could suck.

The Vogma Manifesto (2000)

(Original post: December 6, 2000)

a vog respects bandwidth
a vog is not streaming video (this is not the reinvention of television)
a vog uses performative video and/or audio
a vog is personal
a vog uses available technology
a vog experiments with writerly video and audio
a vog lies between writing and the televisual
a vog explores the proximate distance of words and moving media
a vog is dziga vertov with a mac and a modem

(added on February 2, 2002)

a vog is a video blog where video in a blog must be more than video in a blog

HTML 5

Does some nice things for video. For one, for the first time there is a video tag, just like the image tag. Makes putting video on a web page about 300% easier. July found some information about this, while Celine has a brief note to self regarding the same. Henry is stuck trying to embed his video, and so some HTML5 video tag goodness ought to come his way. So two of the best: Video on the Web, courtesy of diveintohtml5.org (the code stuff is at the very end of the screen), and the detailed stuff at w3schools.com on the video tag (you need to click on each of the tag's attributes for more details).
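
For Henry's benefit, the shape of the thing, a minimal sketch (file names are placeholders, and you still want more than one source because no single codec plays everywhere yet):

    <video width="640" height="360" controls>
      <source src="clip.mp4" type="video/mp4" />
      <source src="clip.ogv" type="video/ogg" />
      <!-- fallback for browsers that don't know the video tag -->
      <p>Your browser doesn't do HTML5 video;
         <a href="clip.mp4">download the clip</a> instead.</p>
    </video>

No plugin negotiation, no object/embed dance; the controls attribute alone gets you a player.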

About A Year 0.2

Have realised that my video practice is so doggedly observational and, well, quotidian might do as a word, that I rarely film anything that actually does anything. Nothing happens. I've also realised this is one of the reasons why I often use text in my movies in the way that I do. While I'm drawn to video, I think I am a writer first, and where I use images they are propositions, ideas, thoughts; they are not used to describe or narrate. As a consequence, as I've been experimenting through the first stage of the new K-film project, I've realised that nearly all the material could just be still images: really interesting things happen when you see, say, 12 images as thumbnails, but if you choose one to view the video not much more happens. You don't learn much more. So as a user you then tend to keep 'surfing' to see the visual patterns (which are meaningful relations within the work) rather than spending time with the video. Harsh realisation, that one.

So in response I am now adding some commentary-cum-text to each video. They are not to be as atomistic as in the Reveries project, though neither are they to be a linear narrative. I'll finish writing these for the current 30 video clips that I'm using and see what it's like. At the moment my method is to view each clip and to write, with the previous clip's text visible (as the first clip returned in About a Year is always the next clip in oldest-to-newest temporal order), something that sort of could flow from the previous clip but also survives by itself. Noncommittal riffs. I think a better model would be to simply write one short line for each node, and then copy and paste them in. Anyway, the current draft from today is available – About a Year Version 0.2.

Student K-Films

Have finally finished writing up what is in effect a curatorial work: seven thousand odd words describing and critiquing the Korsakow films that students made as part of Integrated Media One this year. The student works are located at http://vogmae.net.au/classworks/2010.html. Click an image to go to a page for the individual work, with my critique attached. The quality ranges considerably, as does the content and what the projects have explored or attempted to do. However, on the whole, given this was their first use of the system and we had a twelve-week semester, they have achieved a great deal.

I enjoyed teaching it a great deal, largely because I was able to situate our approach to the K-films very strongly (firmly) within hypertextual practices. (The little site was written using Tinderbox – a vaguely related description of why is in a draft project.)