Software will soon need to start convincing us that it’s more stupid than it is, and that it knows less about us than it does, just so that it can stop ‘creeping us out’ by seeming too human for comfort.

First, the uncanny valley we already know about.

That widely recognised phenomenon characterised by the creepy feeling that you get when something (usually meant to be a person) straddles (and ultimately tumbles headlong into) the chasm between the real and the virtual.

Such almost-human figures obviously seem unreal, but exactly why isn’t quite so obvious: it’s just uncanny.

We might say that, despite their designers’ best endeavours, some otherwise impressive attempts at creating convincing synthetic characters have produced something uncannily inhuman, rather than anything that reads as an intentionally ‘cartoon-like’ animation.

What I believe is coming up fast (though very specifically not in the realm of animated characters) is the precise opposite feeling.

If the uncanny valley is merely a manifestation of computer graphics failing to pass the Turing test, offering an ultimately unconvincing simulation of a person (the test in question being an imitation game in which a computer is programmed to try to fool us into thinking that it is indeed a person), then we probably don’t have a phrase for something that amounts to essentially the same kind of thing, but in reverse.

Imagine computer software that paid keen attention to our every online move and thereby learned enough about us to give us the (false but convincing) impression that it was actually a person doing the watching.

If you happen to be sceptical about whether there could possibly be any such thing, watch this video and see if it changes your mind:

If this technology is as effective as its makers claim, then there’s little doubt that it portends an era when the kinds of responses we get online will seem downright spookily insightful.

There’s a strong possibility that we’re about to get the potentially uncomfortable feeling that we are constantly having our minds read.

This might seem supremely convenient (if it gets things right, you get to see things you would definitely want to see and/or buy) and we may genuinely love it, or it might heighten an effect that psychologists call ‘free-floating anxiety’: a kind of ambient, subliminal, unattributable but ultimately disturbing sense of unease.

It’s not too hard to imagine the best trick for preventing your online content (and/or advertising) from looking as though it could only have come from one of two things: an authentically supernatural, server-based capacity for reading your visitors’ minds, or an ability to watch, listen in on and track their behaviour to an unwelcome extent.

You’d have to systematically sprinkle, amongst whatever uncannily accurate, algorithmically assisted insights into your visitors’ as-yet-unexpressed needs your software can discern and present onscreen, some self-evidently computer-generated, unmistakably inane misinterpretations (perhaps amusingly so; can software do puns?), seemingly and incompetently derived from a ‘typically robotic’ misunderstanding of their most recent actions and communications.
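As a thought experiment, here is a minimal sketch in Python of what that sprinkling might look like. Everything in it is hypothetical and invented purely for illustration: the decoy list, the `inanity_rate` parameter and the feed itself; as far as I know, no real product works quite this way.

```python
import random

# Hypothetical illustration: dilute uncannily accurate, tracking-derived
# suggestions with obviously "robotic" decoys so the feed never feels mind-read.

DECOY_BLUNDERS = [
    "You bought a ladder. You may also like: clouds.",
    "You searched for 'jaguar'. Recommended: documentaries about cars AND cats.",
    "You read one article about Paris. Suggested purchase: France.",
]

def creepiness_dampened_feed(accurate_suggestions, inanity_rate=0.2, seed=None):
    """Mix deliberately inane decoys into otherwise accurate suggestions.

    accurate_suggestions: items produced by the (spookily good) real model.
    inanity_rate: fraction of the feed to replace with transparent blunders.
    """
    rng = random.Random(seed)
    feed = list(accurate_suggestions)
    n_decoys = max(1, round(len(feed) * inanity_rate))
    # Overwrite a few randomly chosen slots with self-evidently dumb guesses.
    for index in rng.sample(range(len(feed)), k=min(n_decoys, len(feed))):
        feed[index] = rng.choice(DECOY_BLUNDERS)
    return feed

if __name__ == "__main__":
    spookily_accurate = [
        "Noise-cancelling headphones (you complained about office noise yesterday)",
        "That exact novel your friend mentioned in chat",
        "Flights to the city you idly Googled at 2 a.m.",
    ]
    for item in creepiness_dampened_feed(spookily_accurate, seed=42):
        print("-", item)
```

The point, of course, isn’t the code; it’s that the ‘stupidity’ becomes a deliberate, tunable parameter rather than an accident, calibrated to provoke exactly the reaction below.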

“Yes, it might be watching what I’m doing, but I clearly have absolutely nothing to worry about. Just look at how pathetically little it actually appreciates about what anything I do really means. It’s just guessing, and obviously that means it sometimes gets things right, because even a stopped clock shows the correct time twice a day. Those bizarre kinds of mistake are the sort that even the most unintelligent human would never make.”