A truly social network where humans are (mostly) excluded

The truly driverless car is a distant prospect whose arrival will be preceded by cars driven not so much by their human drivers as by their need to learn what other cars know about the road ahead.

I’m reacting here to an article which is just as resolute in indefinitely parking the prospects for the driverless car as driverless-car enthusiasts are in their unswerving conviction that the presence of a steering wheel is destined to become as much a matter of personal choice as the knick-knacks which can be found dangling from a motorist’s rear-view mirror.

The article in question is actually quite helpful as it focuses upon the two key concerns of driverlessness.

First, ‘coping with dynamic objects’ which are in potential or actual motion around a moving vehicle, such as cars, trucks, bikes and pedestrians; and second, ‘coping with comparatively static things’ that are part of the ‘built environment’, such as buildings and road features. The latter include ‘changeable but stationary’ things like road signs and traffic lights, as well as ‘stationary but temporary’ things like parked cars, traffic cones, litter and debris.

It turns out that these two categories of problems (dynamic and static) each tend to attract two quite different kinds of solutions, both of which are required for driverlessness, and each of which in most cases requires different hardware, software and even different support infrastructure for driverless vehicles.

It also turns out that we currently have no choice but to create and equip driverless cars with pre-constructed (rather than exclusively ‘created on-the-fly’) 3D maps to solve the static objects problem and thereby be able to safely navigate a vehicle’s physical domain.

The job of reliably interpreting exactly what a given static object is, with enough precision for a driver to interact with it safely, is something that humans acting as analysts still need to be involved with at the mapping stage.

Artificial intelligence algorithms can these days tackle this fiendishly complex problem amazingly effectively, even without maps, but human brains are still currently orders of magnitude more reliable and safe at it than any man-made system yet devised. A truly driverless car can only ever be allowed on the road ‘undriven’ if it consistently matches human driving capability and the established safety levels that humans can and are expected to meet.

Acting as the life-saving spoilsports whose job it is to unearth any weaknesses in any driverless system, testers of driverless cars can still easily devise realistic but (from the driver’s perspective) unanticipated ‘static object environment conditions’ which would never confound a competent and alert human driver, but which could leave even today’s most advanced car navigation software unable to guarantee an appropriately safe and effective driving response. Disregarding this would have potentially catastrophic consequences.

Our only recourse when faced with this problem is, as the article states, to systematically pre-scan, humanly interpret and then pre-map the entire driverless driving environment. A daunting task which, at face value, brings the cost, timescale and feasibility of the driverless dream into question in exactly the ways the article suggests.

This also implies that the static object problem alone, taking into account the monumental scale of the mapping requirement, is enough to support a strongly sceptical attitude towards the prospects for driverless cars.

From this, even bothering to discuss the issues raised by the dynamic object problem would seem redundant, but this turns out to be where the article misses something important.

The dynamic object problem, by definition, doesn’t rely upon pre-built maps.

Everything the dynamic object software is doing relies upon data which is being updated in real-time.

Human brains work much more like the dynamic object handling software: they don’t need pre-built maps in order to avoid collisions. A human driver is not expected to become unreliable just because they find they are driving somewhere they have never driven before.

Fortunately, it turns out that the dynamic object problem of driverless driving is generally deemed to have already been essentially solved, which is why the ‘driver assistance functionality’ (as opposed to fully driverless capability) mentioned below is expected to be rapidly heading for the mainstream.

Humans may need maps for direction and route navigation, but never as an essential tool which, if they were deprived of it, would leave them unable to safely negotiate their interactions with their surroundings.

So, from a research perspective, if we want to tackle the static object problem without relying upon what seems to be an impractically expensive pre-scanning and map-building requirement, we should consider focusing more attention upon either getting the dynamic object system to perform the kind of interpretation that a human does when driving, or finding a way to avoid building the required maps in such a cumbersome and demanding way.

The implications of this kind of change of focus do nothing to shorten the odds that we will ever have truly driverless cars, but what they do is ensure that if the mapping solution described above ultimately turns out to be impractical from a purely financial point of view (which certainly seems to be the case at the moment), then there remains another, much more imminent prospect on the horizon.

First, let’s summarise the formidable driverless challenge as it currently stands, which, when spelled out, is more than worthy of the scepticism expressed in the article:

We currently don’t know how to make software which interprets our static environment in real time consistently quickly and accurately enough, such that it can combine:

(a) the navigation of a fast-moving vehicle with

(b) the avoidance of collisions produced by interacting with

(c) a combination of other fast- and slow-moving vehicles, pedestrians and

(d) the static environment that the car is travelling through, unless we use

(e) 3D maps created by a vehicle which has

(f) very recently driven that exact same route before, with a

(g) human driver, driving a car which has

(h) even more specialised and sophisticated scanning equipment than a driverless car, and has

(i) created 3D maps which have then been

(j) subsequently scrutinised and maybe even edited by humans and then

(k) uploaded to a network which is

(l) accessed and utilised in real time by driverless cars

However, when you envisage a conceivably realistic scenario (which is described below) where you can quite practically take most, if not all, of the impracticalities associated with those maps out of the equation, it turns out that the whole prospect for true driverlessness is nowhere near as bleak or distant as you’d imagine.

At the moment, you can’t go fully driverless without such maps, so you will still need a car to have a steering wheel that a human can take hold of and use to steer.

The driver still needs to be sitting in a position where they look out onto the traffic through the front-facing windshield in a conventional driving seat with access to all the driving controls including all the normal pedals and instrumentation displays.

They still need all the competencies that would currently enable them to get a driving license, they still need to be wide awake, alert and attending to the road ahead, the traffic and their surroundings and be in full control of the vehicle.

Applying these kinds of constraints, it turns out that we can already provide technology which allows the driver to allow the car to drive itself safely, with no hands on the steering wheel (or feet on the pedals) right now.

We can build systems which are predicated upon either the car or the driver ‘deciding’ that road conditions, emergencies or any other eventuality necessitates that the driver should and can re-establish control over the driving, and that this handover can be made in a safe and reliable fashion.
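As a purely illustrative sketch (not any manufacturer’s actual system, and with all states and signals invented for the example), the handover logic described above can be thought of as a small state machine: either party may initiate the transfer back to manual control, but it only completes once the driver has demonstrably taken hold of the wheel.

```python
from enum import Enum, auto

class ControlState(Enum):
    HUMAN_DRIVING = auto()
    CAR_DRIVING = auto()
    HANDOVER_PENDING = auto()  # a takeover of manual control has been requested

class HandoverController:
    """Toy model of the car/driver control handover described above."""

    def __init__(self):
        self.state = ControlState.HUMAN_DRIVING

    def engage_assist(self, conditions_ok: bool):
        # The driver may hand control to the car only when the car judges
        # current road conditions to be within its competence.
        if self.state is ControlState.HUMAN_DRIVING and conditions_ok:
            self.state = ControlState.CAR_DRIVING

    def request_human_takeover(self):
        # Either the car (emergency, degraded conditions) or the driver
        # can 'decide' that the human should re-establish control.
        if self.state is ControlState.CAR_DRIVING:
            self.state = ControlState.HANDOVER_PENDING

    def confirm_hands_on_wheel(self):
        # The transfer only completes once the driver demonstrably has
        # hands on the wheel; until then the car keeps driving.
        if self.state is ControlState.HANDOVER_PENDING:
            self.state = ControlState.HUMAN_DRIVING
```

The point of modelling it this way is that the car never simply abandons control: the handover is an explicit, confirmable transition rather than an instantaneous switch.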

But what is absolutely crucial about this, is that we don’t need to restrict ourselves to exclusively pre-mapped routes for a hands-free ‘less-driver’ solution: we don’t need to solve the static object problem any more effectively than we have already solved it so far, in order to deliver effective and safe ‘onboard driver assistance’, where the car can be allowed to drive itself with the human driver sitting ‘hands (and feet) free’ for at least part of a journey.

Now the fact that (as a direct result of this acknowledged do-ability) more and more automotive manufacturers are committing to providing ‘driver assistance’ functionality options in future models (at least as a relatively short-term goal) would seem to fall in line with the general drift of the article: semi-driverlessness looks like it’s coming up soon and may go mainstream, but full driverlessness looks a long way off.

But what is missed here is the inevitable interaction between semi-driverlessness and driverlessness in a connected-car world.

If lots of semi-driverless cars are going to be out there on the road relatively soon (because manufacturers don’t need to wait for the static object problem to be solved in order to make such vehicles available, and they don’t need to spend astronomical sums scanning roads and roadsides in order to create driveable 3D maps for the system to work) then, ironically, there will soon be a significant number of cars on the road with a good reason to be equipped with the means to scan their driven environment in 3D, because assisted driving can and does use and produce 3D scanning data.

This development bears directly upon another hot automotive tech topic at the moment, something referred to as the ‘connected car’.

This is a different, separate vision (confronted by far fewer technological hurdles than any kind of driverlessness) which introduces the ability for cars to ‘autonomously’ wirelessly pass information between one another.

This connectivity is not carried out in such a way that the driver gets directly involved in the inter-car dialogue. Instead, it is organised so that the car’s onboard navigation systems can be updated ‘in the background’ with real-time details of surrounding and upcoming road and traffic conditions that other similarly equipped connected cars have either ‘experienced’ and recorded, or have passed on in a relay of data received in turn from yet other cars equipped with the same kind of connected car technology.
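The relay idea can be sketched very simply. In this hypothetical model (the record fields and segment names are invented for illustration, not drawn from any real connected-car protocol), each car keeps its own observations, merges in whatever other cars broadcast to it, and passes the combined set on in turn:

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class RoadObservation:
    # A hypothetical record of one stretch of road as 'experienced'
    # by some connected car.
    road_segment: str   # e.g. "A34-junction-9"
    recorded_at: float  # timestamp of the observation
    conditions: str     # e.g. "queueing traffic"

@dataclass
class ConnectedCar:
    """Toy model of the background car-to-car relay described above."""
    known: dict = field(default_factory=dict)  # segment -> observation

    def observe(self, segment: str, conditions: str):
        # Data this car has 'experienced' and recorded itself.
        self.known[segment] = RoadObservation(segment, time.time(), conditions)

    def receive(self, observations):
        # Merge data relayed by another car, keeping whichever
        # observation of each segment is most recent.
        for obs in observations:
            current = self.known.get(obs.road_segment)
            if current is None or obs.recorded_at > current.recorded_at:
                self.known[obs.road_segment] = obs

    def broadcast(self):
        # What this car passes on in turn: its own observations plus
        # anything relayed to it by other cars.
        return list(self.known.values())
```

So a car that has never met the original observer can still learn about conditions ahead, as long as some intermediate car has carried the data between them.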

If such a connected car also happens to be equipped with 3D scanning capability required for ‘driver assistance’, then the information that can be sent and received in a connected car network can also include the kind of 3D maps that ‘static object handling’ requires.

In addition, the scanned data can also be wirelessly uploaded by the connected car to a centrally distributed ‘3D scanned map cloud’, so that a connected car would not need to depend exclusively upon ‘encountering’ another connected, assisted driving-capable car to receive the latest local 3D scanned road map of the upcoming area.

In such a setup, the car’s systems could also be updated with recently uploaded maps which reflect an accumulation of all of the latest data gathered together from all the cars on the connected car network which have driven and are driving through the relevant locations.
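A minimal sketch of that accumulation, under the same caveat that every name here is hypothetical: the cloud keeps the freshest scan of each road section contributed by any car, and a car heading somewhere simply downloads whatever tiles exist for the road ahead, with gaps wherever no connected car has yet driven.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MapTile:
    # Hypothetical unit of uploaded 3D scan data for one road section.
    section: str       # identifier of the stretch of road scanned
    scanned_at: float  # when the scan was made
    points: bytes      # stand-in for the raw 3D point data

class MapCloud:
    """Toy model of the centrally distributed '3D scanned map cloud'."""

    def __init__(self):
        self._tiles = {}  # section -> newest MapTile seen so far

    def upload(self, tile: MapTile):
        # Each connected car contributes its scans; the cloud keeps only
        # the most recent scan of each section, so the shared map always
        # reflects the latest data from all contributing cars.
        best = self._tiles.get(tile.section)
        if best is None or tile.scanned_at > best.scanned_at:
            self._tiles[tile.section] = tile

    def download(self, upcoming_sections):
        # A car fetches whatever tiles exist for the road ahead; sections
        # nobody has scanned yet are simply absent from the result.
        return {s: self._tiles[s] for s in upcoming_sections if s in self._tiles}
```

The design choice worth noticing is that nothing central has to commission the scanning: coverage simply grows wherever the contributing cars happen to drive.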

This kind of inter-car mapping provision obviously still won’t render a semi-driverless car fully driverless, for a whole host of reasons: at least initially, for most stretches of road, no ‘assisted driving’ connected car will have driven, scanned and uploaded any map of the area, and ‘human-supervised 3D map interpretation’ will not have been performed on the latest 3D maps that have been uploaded. What it will do, however, is create a body of 3D roadside data which will grow incredibly rapidly throughout the world, wherever assisted driving connected cars have driven.

This unprecedentedly comprehensive body of data (despite being way too incomplete for anything resembling true driverlessness) will be accumulated at a cost which is unlikely to significantly exceed that of equipping cars with both driver assistance technology and connected-car functionality (the latter also being justified on far less ambitious grounds than offering driverlessness, e.g., ‘just giving the other car’s driver assistance system better anticipation of road conditions coming up ahead’) and would have been paid for by the car purchasers themselves.

What the rapid emergence of that body of data will inevitably encourage is the question: “what can we do with this systematically harvested and constantly updated data whose usefulness now goes way beyond its original purpose?”.

At that stage, questions about the viability of ‘true driverlessness’ will almost certainly no longer be unavoidably hamstrung by such seemingly overwhelming challenges as the amount of human resources required for conducting a detailed 3D mapping exercise covering every single road in the land.

By then, connected cars may have already crowdsourced the solution to driverlessness themselves, without human intervention.