
Machine learning is one of the most important trends in tech right now. But like anything new, it naturally raises a number of important questions and concerns. Benedict Evans’s most recent blog post provides a good explanation of what he refers to as artificial intelligence bias. Here are a few excerpts that I found interesting.
What machine learning does:
With machine learning, we don’t use hand-written rules to recognise X or Y. Instead, we take a thousand examples of X and a thousand examples of Y, and we get the computer to build a model based on statistical analysis of those examples. Then we can give that model a new data point and it says, with a given degree of accuracy, whether it fits example set X or example set Y. Machine learning uses data to generate a model, rather than a human being writing the model. This produces startlingly good results, particularly for recognition or pattern-finding problems, and this is the reason why the whole tech industry is being remade around machine learning.
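To make the mechanics concrete, here is a minimal sketch in Python (my own illustration, not from Evans’s post), using synthetic stand-in numbers and scikit-learn: we fit a model to labelled examples rather than writing rules, then ask it to score a new data point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "a thousand examples of X and a thousand examples of Y" -- synthetic
# feature vectors standing in for whatever we actually measured
x_examples = rng.normal(loc=0.0, scale=1.0, size=(1000, 5))
y_examples = rng.normal(loc=1.5, scale=1.0, size=(1000, 5))

data = np.vstack([x_examples, y_examples])
labels = np.array([0] * 1000 + [1] * 1000)  # 0 = "X", 1 = "Y"

# the model is generated from the data; no human writes the rules
model = LogisticRegression().fit(data, labels)

# give it a new data point and it answers with a degree of confidence
new_point = rng.normal(loc=1.5, scale=1.0, size=(1, 5))
p_x, p_y = model.predict_proba(new_point)[0]
print(f"P(X) = {p_x:.2f}, P(Y) = {p_y:.2f}")
```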
The rub:
However, there’s a catch. In the real world, your thousand (or hundred thousand, or million) examples of X and Y also contain A, B, J, L, O, R, and P. Those may not be evenly distributed, and they may be prominent enough that the system pays more attention to L and R than it does to X.
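Here is a toy version of that catch (again my own sketch, with made-up data): the label is really driven by one feature, but an incidental attribute, call it L, happens to track the label because of how the examples were collected, and the model ends up leaning on it heavily.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

true_score = rng.normal(size=n)
labels = (true_score > 0).astype(int)

# our measurement of the real signal is noisy...
signal = true_score + rng.normal(scale=1.0, size=n)
# ...while the incidental attribute "L" tracks the label almost perfectly,
# purely as an artifact of how the examples were gathered
spurious = labels + rng.normal(scale=0.2, size=n)

data = np.column_stack([signal, spurious])
model = LogisticRegression().fit(data, labels)

# the learned weight on "L" dwarfs the weight on the real signal
print(dict(zip(["signal", "L"], np.round(model.coef_[0], 2))))
```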
What AI isn't:
I often think that the term ‘artificial intelligence’ is deeply unhelpful in conversations like this. It creates the largely false impression that we have actually created, well, intelligence - that we are somehow on a path to HAL 9000 or Skynet - towards something that actually understands. We aren’t.
The conclusion:
Hence, it is completely false to say that ‘AI is maths, so it cannot be biased’. But it is equally false to say that ML is ‘inherently biased’. ML finds patterns in data - what patterns depends on the data, and the data is up to us, and what we do with it is up to us. Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning.
MIT Senseable City Lab and the World Economic Forum's Global Future Council on Cities and Urbanization are hosting a conference next month on the impact that artificial intelligence is having on our cities. Here is a summary of the event:
As AI (Artificial Intelligence) becomes ubiquitous, it transforms many aspects of the environment we live in. In cities, AI is opening up a new era of an endlessly reconfigurable environment. Empowered by robust computers and elegant algorithms that can handle massive data sets, cities can make more informed decisions and create feedback loops between humans and the urban environment. It is what we call the rise of UI (urban intelligence).
The 2019 Forum on Future Cities, organized by MIT Senseable City Lab and the World Economic Forum's Global Future Council on Cities and Urbanization, will focus on four aspects of the UI transformation: autonomous vehicles, ubiquitous data collection, advanced data analytics, and governing innovation. Panelists, including mayors, academics, senior industry leaders, and members of civil society, will explore these topics from different points of view, highlighting the scientific and technological challenges, the critical collective decisions we as a society will have to make, and the exciting possibilities ahead.
The forum takes place on April 12th in Cambridge, Massachusetts. And since it looks set to cover many of the topics that we talk about on this blog, I figured that some of you might be interested in attending. If so, you can register here.

One of the challenges that self-driving vehicles present is not about technology per se; it is about ethics. The typical example scenario goes like this: if a pedestrian were to step out in front of an autonomous vehicle illegally, should the car be programmed to hit the pedestrian, or to veer off the road at the risk of harming its passengers?
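As a purely hypothetical sketch (no real autonomous-vehicle stack is anywhere near this simple), the unsettling part is that the ethical choice ultimately lands in code as a number somewhere, for example a weighting in a cost function:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    pedestrian_harm: float  # expected harm to the pedestrian, 0..1
    passenger_harm: float   # expected harm to the passengers, 0..1

def choose_action(stay: Outcome, swerve: Outcome,
                  passenger_weight: float = 1.0) -> str:
    """Pick the action with the lower weighted expected harm.

    `passenger_weight` is the morally loaded knob: above 1 it favours
    the passengers, below 1 the pedestrian. Someone picks the number.
    """
    def cost(o: Outcome) -> float:
        return o.pedestrian_harm + passenger_weight * o.passenger_harm

    return "stay" if cost(stay) <= cost(swerve) else "swerve"

# Staying likely hits the pedestrian; swerving risks the passengers.
print(choose_action(Outcome(0.9, 0.0), Outcome(0.0, 0.4)))       # "swerve"
print(choose_action(Outcome(0.9, 0.0), Outcome(0.0, 0.4), 3.0))  # "stay"
```

The survey described below is, in effect, an argument about what that weight should be, and who gets to set it.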
I believe that self-driving vehicles will ultimately result in fewer accidents. Statistically, they will be safer. But self-driving vehicles, particularly early on, are going to get a lot of attention when they do get into accidents, even if they are still safer as a whole. And that’s because they will make for good headlines.
Safety and statistics aside, it turns out that the answer to the above moral question could depend on where you’re from. Nature recently published what they are calling the largest ever survey of “machine ethics.” And out of this survey they discovered some pretty distinct regional variations across the 130 countries that responded.
The responses could be grouped into three main buckets: Western, Eastern, and Southern. Here is the moral compass that was published in Nature:

And here are a few examples. In North America and in some European countries where Christianity has historically dominated, there was a preference for sacrificing older lives to save younger ones. That preference would guide how one might program the car for the case in which a pedestrian steps out in front of it.
In countries with strong government institutions, such as Japan and Finland, people were more likely to say that the pedestrian, who, remember, stepped out onto the road illegally, should be hit. Countries with a high level of income inequality, on the other hand, more often chose to sacrifice poorer people in order to save richer ones. Colombia, for example, responded this way.
Also interesting is the ethical paradox that this discussion raises. Throughout the survey, many people responded by saying that, in our example here, the pedestrian should be saved at the expense of the passengers. But they also said that they would never buy a car that would do this. Their own safety comes first in the buying decision. And I can see that.
There’s an argument that these are fairly low-probability scenarios. I mean, the last time you swerved your car, you probably weren’t driving on the edge of a cliff where any deviation from the path meant you would tumble to your death. But I still think that these are infinitely interesting questions that will need to be answered. And perhaps the answer will depend on which city you’re in.