Slate: How Automation Bias Encourages the Use of Flawed Algorithms


From 2013 to June 2017, the U.S. Immigration and Customs Enforcement’s New York Field Office determined that about 47 percent of detainees designated as “low risk” should be released while they waited for their immigration cases to be resolved, according to FOIA data obtained by the New York Civil Liberties Union. But something changed in the middle of 2017. From June 2017 to September 2019, that figure fell to 3 percent: Virtually all detainees, the data shows, had to wait weeks or even months in custody before their first hearing, even if they posed little flight risk.

All that time, ICE used the same software to determine a detainee’s fate: the Risk Classification Assessment tool, which is supposed to weigh an individual’s history—including criminal record, family ties, and time in the country—to recommend, within 48 hours of arrest, whether that person should be detained or released. When ICE introduced the algorithm in 2013, the Intercept reported Monday, it offered four outputs: detention without bond, detention with the possibility of release on bond, release, or referral to an ICE supervisor. In 2015, the algorithm was edited to remove the bond option. Then, after the 2016 election, it was changed again to remove the release output. According to the NYCLU and Bronx Defenders, the possibility of bond or release has been “all but eliminated.” (ICE personnel can still technically override the tool’s recommendations, which may explain why 3 percent of low-risk detainees were still released.)
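The reporting does not describe the tool’s internals, but the mechanism it alleges is simple to illustrate. The hypothetical Python sketch below (all names, risk bands, and mappings are invented for illustration, not ICE’s actual code) shows how removing outputs from a decision-support tool changes recommendations without changing the risk assessment itself: the same “low risk” input that once produced “release” now funnels into detention or a supervisor referral.

```python
from enum import Enum


class Recommendation(Enum):
    DETAIN_NO_BOND = "detain without bond"
    DETAIN_BOND_ELIGIBLE = "detain, eligible for bond"
    RELEASE = "release"
    REFER_TO_SUPERVISOR = "refer to ICE supervisor"


# Hypothetical policy tables mapping a risk band to the outputs the tool
# is *allowed* to emit. The 2013-style table gives each band a distinct
# recommendation; the post-2017 table simply has the bond and release
# rows redirected, so every score leads to detention or a referral.
POLICY_2013 = {
    "high": Recommendation.DETAIN_NO_BOND,
    "medium": Recommendation.DETAIN_BOND_ELIGIBLE,
    "low": Recommendation.RELEASE,
}

POLICY_2017 = {
    "high": Recommendation.DETAIN_NO_BOND,
    "medium": Recommendation.DETAIN_NO_BOND,    # bond output removed (2015)
    "low": Recommendation.REFER_TO_SUPERVISOR,  # release output removed (2017)
}


def recommend(risk_band: str, policy: dict) -> Recommendation:
    """Return the tool's recommendation for a risk band under a given policy."""
    return policy[risk_band]


if __name__ == "__main__":
    # Identical risk scoring, different output space.
    for band in ("high", "medium", "low"):
        print(f"{band:>6}: 2013 -> {recommend(band, POLICY_2013).value:30}"
              f" 2017 -> {recommend(band, POLICY_2017).value}")
```

The point of the sketch is the automation-bias dynamic the article describes: officers can technically override the output, but once “release” is no longer among the options the software presents, the default recommendation anchors nearly every decision toward detention.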

Read the full article here