A Commonplace Book


Because machine learning tracks human performance so well in some domains... there is a temptation to anthropomorphize it. We assume that the machine's mistakes will be like human mistakes. But this is a dangerous fallacy.

As Zeynep Tufekci has argued, the algorithm is irreducibly alien, a creature of linear algebra. We can spot some of the ways it will make mistakes, because we're attuned to them. But other kinds of mistakes we won't notice, either because they are subtle, or because they don't resemble human error patterns at all.

For example, ... you can take a picture of a school bus, and by superimposing the right kind of noise, convince an image classifier that it's an ostrich, even though to human eyes it looks the same....

These failure modes become important when we start using machine learning to manipulate human beings....

The issue is not just intentional abuse (by trainers feeding skewed data into algorithms to affect the outcome), or unexamined bias that creeps in with our training data, but the fundamental non-humanity of these algorithms.
-- Maciej Cegłowski. Build a Better Monster: Morality, Machine Learning, and Mass Surveillance, Idle Words (April 19, 2017).
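The school-bus-to-ostrich result Cegłowski cites comes from the adversarial-examples literature (Szegedy et al., "Intriguing properties of neural networks," 2013). The simplest version of such an attack, the fast gradient sign method of Goodfellow et al. (2014), fits in a few lines; the sketch below assumes PyTorch and an off-the-shelf torchvision classifier, not the exact setup from either paper.

import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical classifier; any differentiable image model works the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, label, epsilon=0.01):
    # Return a copy of `image` nudged by near-imperceptible noise chosen to
    # increase the classifier's loss on the true label (FGSM).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel a tiny step in the direction that most hurts the model.
    adversarial = image + epsilon * image.grad.sign()
    # Assumes pixel values lie in [0, 1].
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: `bus` is a preprocessed (1, 3, 224, 224) photo tensor
# and 779 is the ImageNet class index for "school bus".
# adv = fgsm_perturb(bus, torch.tensor([779]))
# model(adv).argmax(dim=1)  # may now be a completely different class

The point of the sketch is Tufekci's: the perturbation is chosen in the model's gradient space, not in any space meaningful to human vision, which is why the altered photo looks identical to us while the classifier's answer changes completely.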