Justin Romack’s demo has to be slowed down quite a bit for us sighted folks watching the video.

As he struggles to hold onto the faltering traces of his eyesight (and now, months later, desperately hopes that a miracle cure will let him see his newborn son), Justin raves about his unexpectedly amazing experiences with the seemingly ‘blindness-hostile’ smooth touch screen of the iPhone.

http://vimeo.com/21734796

He’s had more than enough to cope with lately.

A few weeks ago Ty Romack came into the world, leaving everyone who knows Justin and Angela worrying about whether he’d manage to put on the extra weight he’d somehow failed to gain before being born.

Hardly surprising that we’ve not yet seen the follow-up video from Justin.

Speaking personally, he’s got me waiting on the edge of my seat.

Why? (apart from wanting to hear more good news about Ty, of course).

Well, this next video (from a few years ago) gives you a brief insight into why the ambitions Justin talked about in the video above (showing us some of the amazing possibilities this kind of UI offers in other software applications) look like they might just prove well worth watching out for.

What strikes me each time I watch his videos is that he seems to have somehow arrived from another dimension.

There’s something about the issues here which has a direct parallel with the successful uptake of text messaging.

Text messaging requires you to press the digit keys, potentially multiple times each, in order to enter alphabetical characters instead of numbers.

Just typing those last words in makes me cringe, primarily because explaining how text messaging works makes me feel like I am speaking to some time traveller from the distant past: nobody who reads this could possibly be unaware of those facts.
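For anyone who genuinely has forgotten, here’s a minimal sketch of that multi-tap scheme (the layout is the standard 12-key one; the function and its name are just my illustration):

```python
# A minimal sketch of classic multi-tap text entry on a 12-key phone
# keypad. The mapping is the standard layout; everything else here is
# illustrative, not any particular handset's firmware.
MULTITAP = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def taps_for(word: str) -> str:
    """Spell out the key presses needed to type a word, letter by letter."""
    presses = []
    for ch in word.lower():
        for key, letters in MULTITAP.items():
            if ch in letters:
                # Press the key once per position of the letter on it:
                # 'c' is the third letter on '2', so press '2' three times.
                presses.append(key * (letters.index(ch) + 1))
    return " ".join(presses)

print(taps_for("hello"))  # -> 44 33 555 555 666
```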

And yet I am one of those many millions who owned a mobile phone for well over a decade before sending their first text message.

I neither knew that I could, nor felt any inclination to do so.

It was an interface as unintuitive and unfriendly as any I could imagine.

And yet those who have become the most proficient at it can type a text message without even looking at their phone.

Instead of merely straddling the worlds of sightedness and sightlessness and struggling to build his daily existence around the unimaginably bewildering and frustrating constraints that this awkward state of affairs imposes, Justin brings an imagination which seems bent on reconstructing the world out of completely (and often breathtakingly) overlooked possibilities.

Wherever our assumptions seem to indicate inevitable futility and ultimate pointlessness (e.g., movie directing and video editing for the blind), to him these pursuits not only make obvious sense, but offer the prospect of opening up exciting new horizons that go far beyond the limits of our current interactions with these technologies, whatever state our eyesight happens to be in.

What occurred to me when I watched this is that a sighted user would no longer need a supposedly exclusively visual UI in order to operate what we currently consider to be an extremely sophisticated ‘content navigation process’.

This reminds me of the manual gear shift: a seriously critical operation requiring significant sensitivity and potentially continuous interaction.

It also requires at least some degree of appreciation of ‘state’ as well as multi-directional positional manipulation.

All this without requiring any visual interaction, and all operated at the same time as another vision-intensive, critical activity.

So the stick-shift UI demonstrates that it is possible to take activities which would normally be deemed sophisticated enough to demand visual interaction and make their interfaces essentially non-visual, simply because visual interaction isn’t a practical option in the context of intended use.
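To make the ‘state’ point concrete, here’s a toy model (entirely mine, nothing from the videos): the H-pattern gate as a graph of legal moves, navigated purely by feel.

```python
# A toy model of the H-pattern gear gate as a state graph: the driver
# tracks state (the current gear) and makes multi-directional moves,
# all without looking. The layout is a generic 5-speed, for illustration.
LEGAL_MOVES = {
    "N": {"1", "2", "3", "4", "5", "R"},  # neutral reaches every slot
    "1": {"N"}, "2": {"N"}, "3": {"N"},
    "4": {"N"}, "5": {"N"}, "R": {"N"},   # every slot returns via neutral
}

def shift(current: str, target: str) -> str:
    """Move the stick, refusing moves the gate physically blocks."""
    if target not in LEGAL_MOVES[current]:
        raise ValueError(f"can't move {current} -> {target} directly")
    return target

# Shifting from 2nd to 3rd means passing through neutral: 2 -> N -> 3.
gear = "2"
gear = shift(gear, "N")
gear = shift(gear, "3")
print(gear)  # -> 3
```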

What Justin demonstrates is that an apparently even more sophisticated non-visual interaction is not just possible but, as in the case of the gear stick, can and does exceed the effectiveness and/or efficiency of the equivalent visual interface.

Touch typing is the other relevant example that comes to mind.

Is this just a case of ‘if we did this non-visually we could do it faster or more efficiently’ (assuming that’s something we valued), or of ‘we could be doing something else at the same time’ (hopefully not driving, though)?

I am thinking of the ‘continuous partial attention’ thing in that context.