According to McKinsey, something like $100 billion has been invested in trying to get autonomous vehicles to work, and yet the industry remains stuck on problems like this one:
State-of-the-art robot cars also struggle with construction, animals, traffic cones, crossing guards, and what the industry calls “unprotected left turns,” which most of us would call “left turns.”
The industry says its Derek Zoolander problem applies only to lefts that require navigating oncoming traffic. (Great.) It’s devoted enormous resources to figuring out left turns, but the work continues.
Right now it certainly feels like an autonomous vehicle winter (we have many winters going on at the moment). The industry has spent a lot of time and money getting maybe 90% of the way there, but this last bit has proven to be a lot more challenging than I think most people anticipated.
This has a lot of people thinking that it's going to be many decades before we finally get full autonomy (if ever) and that, in the interim, all we will have are very specific use cases: trucks on highways, mining machines (the article above discusses this), and so on.
This may very well be the case. Frankly, I don't know. But it's perhaps important to remember two things: (1) pessimists aren't usually the ones who change the world and (2) there is something known as the Gartner hype cycle, which is a graphical representation of how new innovations typically get adopted.
The Gartner hype cycle has five phases; the first three are the important ones for this discussion. First there is a technology trigger. Second, expectations inflate until they hit a peak. And third, there is a trough of disillusionment. This is the moment when interest wanes and people begin to think it'll never happen (until it does happen).
That might be what we're living through right now, or it might not be. But my gut tells me that it's the former.
Benedict Evans raises a number of good points and questions about the “steps to autonomy” in his recent blog post.
Right now we’re all talking about autonomous vehicles in terms of their level of autonomy – namely Levels 1 through 5. L1 is some degree of autonomy, but in almost all situations, you still need a human driver. L5 is no human driver needed, ever.
But as Evans points out, the level of autonomy depends on the place, and it is unlikely – at least initially – that L4 or L5 will mean L4 or L5 in all environments. Here is an excerpt from his post:
It naturally follows that we will have vehicles that will reliably reach a given level of autonomous capability in some (‘easy’) places before they can do it everywhere. These will have huge safety and economic benefits, so we’ll deploy them - we won’t wait and do nothing at all until we have a perfect L5 car that can drive itself around anywhere from Kathmandu to South Boston. And so, if we call a car even L4, we have to say, well, where are we talking about? We might mean ‘most of this country’. But more probably, it will be L4 in one neighborhood, L3 in another and only L2 in a third - and a car might encounter all three of those on one journey. Put your route into the map and it will tell you if today is an L5 day or not.
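Evans's point has a simple logical consequence: the autonomy a trip actually delivers is capped by the least-autonomous zone along the route. Here is a minimal sketch of that idea; the zone names and level assignments are invented for illustration, not real data.

```python
# Sketch of a route-level autonomy check. The effective level for a trip
# is the minimum of the levels supported in each zone it passes through.
# Zones and their levels below are hypothetical examples.

def effective_autonomy(route, zone_levels):
    """Return the autonomy level a car can sustain over the whole route:
    the minimum level across every zone on the way."""
    return min(zone_levels[zone] for zone in route)

zone_levels = {
    "highway": 4,    # geofenced highway: hands-off driving
    "suburb": 3,     # conditional autonomy
    "downtown": 2,   # driver must supervise at all times
}

route = ["suburb", "highway", "downtown"]
print(effective_autonomy(route, zone_levels))  # prints 2: the downtown leg caps the trip
```

This is the sense in which "put your route into the map and it will tell you if today is an L5 day or not": one L2 neighborhood on the itinerary is enough to make the whole journey an L2 journey.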