Machine learning is one of the most important trends in tech right now. But like anything new, it naturally raises a number of questions and concerns. Benedict Evans's most recent blog post provides a good explanation of what he refers to as artificial intelligence bias. Here are a few excerpts that I found interesting.
What machine learning does:
With machine learning, we don’t use hand-written rules to recognise X or Y. Instead, we take a thousand examples of X and a thousand examples of Y, and we get the computer to build a model based on statistical analysis of those examples. Then we can give that model a new data point and it says, with a given degree of accuracy, whether it fits example set X or example set Y. Machine learning uses data to generate a model, rather than a human being writing the model. This produces startlingly good results, particularly for recognition or pattern-finding problems, and this is the reason why the whole tech industry is being remade around machine learning.
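To make that description concrete, here is a minimal sketch of the workflow Evans describes, using scikit-learn (my choice of library, not his): two labelled example sets, a model fitted from them, and a probability score for a new data point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A thousand examples of X and a thousand examples of Y,
# each described by two numeric features.
examples_x = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(1000, 2))
examples_y = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(1000, 2))

data = np.vstack([examples_x, examples_y])
labels = np.array([0] * 1000 + [1] * 1000)  # 0 = "X", 1 = "Y"

# The computer builds the model from the examples -- no hand-written rules.
model = LogisticRegression().fit(data, labels)

# Given a new data point, the model says -- with a degree of confidence --
# which example set it looks like.
new_point = np.array([[2.5, 2.8]])
probability_y = model.predict_proba(new_point)[0, 1]
print(f"P(looks like Y) = {probability_y:.2f}")
```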
The rub:
However, there’s a catch. In the real world, your thousand (or hundred thousand, or million) examples of X and Y also contain A, B, J, L, O, R, and P. Those may not be evenly distributed, and they may be prominent enough that the system pays more attention to L and R than it does to X.
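A toy illustration of that catch, with assumptions of my own: suppose the training examples carry a nuisance feature ("L", say a watermark or background colour) that happens to track the labels in this particular dataset. A model fitted on that data can end up leaning on the nuisance feature far more heavily than on the signal it was meant to learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# The feature we actually care about: weakly separates X (label 0) from Y (label 1).
signal = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(0.5, 1.0, n)])

# A nuisance feature ("L"): irrelevant to the task, but in this training set
# it almost perfectly correlates with the label.
nuisance = np.concatenate([rng.normal(0.0, 0.1, n), rng.normal(1.0, 0.1, n)])

data = np.column_stack([signal, nuisance])
labels = np.array([0] * n + [1] * n)

model = LogisticRegression().fit(data, labels)
print("weight on signal:  ", round(model.coef_[0][0], 2))
print("weight on nuisance:", round(model.coef_[0][1], 2))
# The nuisance weight dwarfs the signal weight: the system is paying
# more attention to "L" than to the difference between X and Y.
```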
What AI isn't:
I often think that the term ‘artificial intelligence’ is deeply unhelpful in conversations like this. It creates the largely false impression that we have actually created, well, intelligence - that we are somehow on a path to HAL 9000 or Skynet - towards something that actually understands. We aren’t.
The conclusion:
Hence, it is completely false to say that ‘AI is maths, so it cannot be biased’. But it is equally false to say that ML is ‘inherently biased’. ML finds patterns in data - what patterns depends on the data, and the data is up to us, and what we do with it is up to us. Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning.