
Machine learning is one of the most important trends in tech right now. But like anything new, it naturally raises a number of important questions and concerns. Benedict Evans's most recent blog post provides a good explanation of what he refers to as artificial intelligence bias. Here are a few excerpts that I found interesting.
What machine learning does:
With machine learning, we don’t use hand-written rules to recognise X or Y. Instead, we take a thousand examples of X and a thousand examples of Y, and we get the computer to build a model based on statistical analysis of those examples. Then we can give that model a new data point and it says, with a given degree of accuracy, whether it fits example set X or example set Y. Machine learning uses data to generate a model, rather than a human being writing the model. This produces startlingly good results, particularly for recognition or pattern-finding problems, and this is the reason why the whole tech industry is being remade around machine learning.
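To make that concrete, here is a quick sketch of my own (not from Evans's post) of what "a thousand examples of X and a thousand examples of Y" looks like in practice, using scikit-learn and two made-up clusters of points:

```python
# Toy illustration: instead of hand-written rules, we hand the
# computer labeled examples and let it build a statistical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A thousand examples of "X" (clustered around 0) and of "Y" (around 3).
X_examples = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
Y_examples = rng.normal(loc=3.0, scale=1.0, size=(1000, 2))

data = np.vstack([X_examples, Y_examples])
labels = np.array([0] * 1000 + [1] * 1000)  # 0 = "X", 1 = "Y"

model = LogisticRegression().fit(data, labels)

# Given a new data point, the model says, with a degree of
# confidence, which example set it fits.
new_point = np.array([[2.8, 3.1]])
probability_Y = model.predict_proba(new_point)[0, 1]
print(f"P(Y) = {probability_Y:.2f}")
```

No human wrote a rule like "if both coordinates are near 3, call it Y" — the boundary between the two sets falls out of the statistics of the examples, which is exactly Evans's point.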
The rub:
However, there’s a catch. In the real world, your thousand (or hundred thousand, or million) examples of X and Y also contain A, B, J, L, O, R, and P. Those may not be evenly distributed, and they may be prominent enough that the system pays more attention to L and R than it does to X.
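Again a sketch of my own, not Evans's: if some incidental background feature in the training data happens to line up with the labels more cleanly than the signal you care about, the model will lean on it.

```python
# Sketch of the catch: a nuisance feature (Evans's "L" or "R") that
# accidentally correlates with the labels can dominate what the
# model learns, at the expense of the real but noisier signal ("X").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
labels = np.array([0] * n + [1] * n)

# The feature we actually care about separates the classes only weakly...
real_feature = labels * 0.2 + rng.normal(size=2 * n) * 0.1
# ...while a background feature is strongly (but accidentally)
# aligned with the labels in this particular training set.
nuisance_feature = labels * 1.0 + rng.normal(size=2 * n) * 0.1

data = np.column_stack([real_feature, nuisance_feature])
model = LogisticRegression().fit(data, labels)

# The learned weight on the nuisance feature dwarfs the weight on
# the real signal: the model "pays more attention to L than to X."
print(model.coef_)
```

Nothing here is malicious and the maths is doing exactly what it should — which is why the bias question is about the data, not the algorithm.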
What AI isn't:
I often think that the term ‘artificial intelligence’ is deeply unhelpful in conversations like this. It creates the largely false impression that we have actually created, well, intelligence - that we are somehow on a path to HAL 9000 or Skynet - towards something that actually understands. We aren’t.
The conclusion:
Hence, it is completely false to say that ‘AI is maths, so it cannot be biased’. But it is equally false to say that ML is ‘inherently biased’. ML finds patterns in data - what patterns depends on the data, and the data is up to us, and what we do with it is up to us. Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning.
Photo by Ales Nesetril on Unsplash
Above is a screenshot from a presentation about the future of tech that Benedict Evans gave last week at venture capital firm a16z’s annual conference. And below is a video of the talk. If you can’t see it, click here.
[youtube https://www.youtube.com/watch?v=RF5VIwDYIJk&w=560&h=315]
The talk is positioned as “the end of the beginning.” In other words, here is where the internet and smartphones have taken us, but that’s just the beginning. Quote: “We used to do apartment listings [online] and now Opendoor will buy your home.”
It’s only 24 minutes and well worth a watch.
Benedict Evans just published a great post on his blog about “Tesla, software and disruption.” I recommend a full read. In it, he tries to answer whether Tesla is really “the new iPhone” and whether it will be as disruptive to the car landscape as many people think.
In his line of thinking, an electric vehicle (as opposed to an ICE vehicle) feels a lot more like a sustaining innovation than a disruptive one. In other words, it is something that incumbents will be able to incorporate, so it will not change the “basis of competition.”
The more critical aspect is instead autonomy. Here are two snippets from the piece:
All of this takes us to autonomy. Electric is compelling but will probably be a commodity, whereas Tesla’s improvements on top of electric may not be commodities but are not necessarily decisive. Autonomy changes the world in profound ways (I wrote about this here), and it’s a fundamentally new technology that doesn’t look at all like a commodity. And Tesla is doing this, too. Sort of.
In this competition, Tesla’s thesis is that the data it can collect from its cars will give it a crucial advantage. The only reason that anyone is interested in autonomy today is that the emergence of machine learning (ML) in the last 5 years probably gives us a way to make it work. Machine learning, in turn, is about extracting patterns from large amounts of data, and then matching things against those patterns. So how much data do you have?
But even if we all agree that autonomy is the “disruptive innovation,” it is not yet clear who will get there first. Maybe it is Tesla. Maybe it is Waymo. Regardless, most people seem to agree that it will arrive sometime in the 2020s.
Image: Tesla
