“Oh yeah, and how will your fancy self-driving car even drive itself if it gets mud on the camera and can’t see?”
I’m so glad you asked. And, look, I admit I’d been wondering how the likes of Google and the rest of the self-driving squad would deal with the not-so-small matter of ensuring their clever cars can actually see where they’re going.
I mean, obviously the solution was never going to be that the car would ask you to hop out with a hanky and give its dome a wipe.
It turns out, in at least this case, the answer is simple: the good ol’ wiper and jet.
As part of its new ‘beta’ program that allows residents of Phoenix, Arizona, to commute in its self-driving cars, Google’s Waymo offshoot is taking steps to make sure the vehicle’s smarts aren’t hamstrung by something as sorry as a sloppy streak of pigeon shit.
As you can see in the video above, and I hope you’ll agree, it’s a weirdly mesmerising sight. And so flippin’ simple.
Of course, now that Waymo is actively testing its vehicles with members of the general public, it has to do more than wipe away the odd dollop of bird doodoo.
In a new story this week, the business paper Bloomberg highlights the strategies Waymo is using to help regular, everyday folk feel more at ease when travelling in the real-world incarnation of Total Recall’s Johnny Cab.
For now, the modified Chrysler Pacifica people-movers will be supervised by a Waymo staffer riding right there in the driver’s seat, assuring passengers that even though the vehicle is driving itself, an expert is on hand to take back control if needed.
But, down the road, Waymo – and just about every other car maker – is expecting that regulators will eventually approve autonomous cars that have no steering wheel, no pedals, and thus no means of human intervention.
In planning for that future, Waymo has gone on a recruiting spree, hiring experts in human-machine interaction, people who know just what it will take for humans to voluntarily surrender control to the computer.
One such example is the inclusion of displays in the cabin that serve no purpose other than to let the occupants know the car can see, recognise and differentiate between the multitude of objects and potential hazards surrounding it.
On that display, vehicles and pedestrians and other objects are highlighted as the vehicle draws near, essentially telling the occupants: yes, I see that cyclist, and that pedestrian crossing unexpectedly, and all of those buildings and cars.
The intention is to explain to the occupant – either constantly or upon request – both that the car knows what it’s doing, and why it is executing a given action.
I’ve come to a stop because a ball bounced into the street. I’m turning right because there’s a construction detour ahead that you can’t see from here but other connected vehicles have alerted me to. I’m slowing because it’s a school zone. I’m accelerating to merge. So on, and so on.
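To make that idea concrete, here’s a minimal, purely hypothetical sketch of how detected events might be mapped to passenger-facing messages like those above. None of these names or message strings come from Waymo’s actual software; the event kinds and wording are assumptions for illustration only.

```python
# Hypothetical sketch only: maps perception events to plain-language
# passenger explanations, in the spirit of the in-cabin display described
# above. The event kinds, messages, and class names are all invented.

from dataclasses import dataclass


@dataclass
class Event:
    kind: str    # e.g. "ball_in_road", "school_zone", "merge" (assumed labels)
    action: str  # what the car is doing in response, e.g. "slowing"


# Assumed mapping from detected event kinds to passenger-facing messages.
EXPLANATIONS = {
    "ball_in_road": "I've come to a stop because a ball bounced into the street.",
    "construction_detour": "I'm turning because of a construction detour ahead.",
    "school_zone": "I'm slowing because it's a school zone.",
    "merge": "I'm accelerating to merge.",
}


def explain(event: Event) -> str:
    """Return a plain-language explanation for a detected event,
    falling back to a generic description of the car's action."""
    return EXPLANATIONS.get(event.kind, f"I'm {event.action}.")


print(explain(Event("school_zone", "slowing")))
# Unknown events fall back to describing the action itself:
print(explain(Event("unmapped_event", "waiting")))
```

The point of a lookup like this is the same one the article makes: the value is not in the driving logic itself, but in translating what the car already knows into something an occupant can trust.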
As the Australian Driverless Vehicles Initiative’s chief scientist for human factors, Professor Michael Regan, told CarAdvice at a self-driving demonstration in 2015, a key motivator for people to embrace autonomous technology starts with one simple point: “They need to know that when they activate the automation, they can sit back and relax and do whatever they want.”
“If they can’t do that, if they have to spend most of their time monitoring the environment, then they are actually going to be consistently distracted by the very thing the car is designed to free them up from.”