Response to “Machine Bias”

Marie Christine O'Connell
May 21, 2020

After closely reading “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks” by Angwin, Mattu, and Kirchner of ProPublica, I am shocked by the level of injustice in the United States’ criminal justice system that results from the Northpointe algorithm courts use to assess defendants’ risk of reoffending. The algorithm’s risk assessments are flawed: it makes “mistakes with black and white defendants at roughly the same rate but in very different ways.” As the article explains, Northpointe’s system was far more likely to falsely flag black defendants as future criminals, while it often falsely rated white defendants as lower risk. In other words, the system tends to label black defendants as high risk even when they are unlikely to commit another crime, while classifying white defendants as low risk even when they are very likely to reoffend.
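
To make that distinction concrete, here is a minimal sketch in Python, using entirely made-up counts rather than ProPublica's data, of how two groups can share the same overall error rate while the kinds of mistakes differ: one group absorbs most of the false positives (labeled high risk but never reoffending), the other most of the false negatives (labeled low risk but reoffending).

```python
# Hypothetical illustration only: made-up counts, not ProPublica's data.

def error_rates(fp, fn, tp, tn):
    """Return (overall error rate, false positive rate, false negative rate)."""
    total = fp + fn + tp + tn
    overall = (fp + fn) / total
    fpr = fp / (fp + tn)   # flagged high risk among those who did NOT reoffend
    fnr = fn / (fn + tp)   # rated low risk among those who DID reoffend
    return overall, fpr, fnr

# Group A's errors are mostly false positives (flagged high risk, did not reoffend);
# Group B's errors are mostly false negatives (rated low risk, did reoffend).
group_a = dict(fp=45, fn=15, tp=60, tn=80)
group_b = dict(fp=15, fn=45, tp=60, tn=80)

for name, counts in [("Group A", group_a), ("Group B", group_b)]:
    overall, fpr, fnr = error_rates(**counts)
    print(f"{name}: overall error {overall:.0%}, "
          f"false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")

# Both groups show a 30% overall error rate, yet Group A's false positive rate
# (36%) is more than double Group B's (16%), while Group B's false negative rate
# (43%) is more than double Group A's (20%): the same kind of asymmetry
# ProPublica reported between black and white defendants.
```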

Relatedly, there are serious ethical implications to using the Northpointe risk algorithm. For example, Brisha Borden, a black woman whose record consisted of juvenile misdemeanors, was rated high risk, while Vernon Prater, an experienced white criminal, was rated low risk. From an ethical standpoint, this algorithm puts black people at greater risk of reincarceration, and the problem is clearly not being solved: the United States already incarcerates a disproportionate number of black people. This is essentially institutionalized racism, making it harder for black people to escape the vicious cycle of imprisonment and denying them the justice they may deserve. At the same time, it is important to note that white criminals who are dangerous and a threat to society are given a pass by the courts because they are rated as low risk of committing another crime. In short, the mistakes this algorithm makes enable racism in the U.S. criminal justice system.

Finally, machine bias is widespread, largely as a result of the lack of diversity on teams at major tech companies. After listening to a panel of UX researchers share their experiences, I gained some insight into the machine bias that has historically crept into products and caused major usability problems. For example, technologies such as Siri and Cortana, which are essentially speech-recognition AI systems, performed poorly at recognizing human voices in their early stages. In particular, the voices of men in their 30s-40s who spoke English natively were picked up far more accurately than the voices of people outside that group. This happened because the teams of researchers, developers, and programmers behind these technologies were largely drawn from exactly the group the product ended up being tailored to. Of course, the product should not be tailored only to native English-speaking men in their 30s-40s; Siri and Cortana are meant to be used by anyone with an Apple or Microsoft device that supports them. To address this problem, the companies, particularly Microsoft, made a significant effort to build more diverse product teams that included women, people of different ages, people with accents, and people with various speech impediments. As a result of building teams that better represented the product’s audience, these voice-recognition systems, though still not perfect, have improved significantly. This is an example of machine bias, but also a story of how machine bias can be overcome to create a more inclusive user experience.
