eDiscovery, legal research and legal memo creation - ready to be sent to your counterparty? Get it done in a heartbeat with AI. (Get started for free)

Justice or Machine? Examining the Role of AI in Sentencing

Justice or Machine? Examining the Role of AI in Sentencing - Bias in, Bias out

As AI systems are increasingly deployed in the criminal justice system, a critical issue has emerged around algorithmic bias. The old adage "garbage in, garbage out" takes on new meaning when sentencing algorithms absorb and amplify the biases inherent in the data used to train them. Studies have shown that risk assessment tools can discriminate against minorities by overestimating their risk of reoffending. ProPublica found that COMPAS, a widely used recidivism algorithm, falsely flagged black defendants who did not go on to reoffend as high risk at nearly twice the rate of white defendants.
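
Disparities of this kind are typically surfaced by auditing a tool's error rates separately for each group. The sketch below is a minimal illustration of that kind of audit, not ProPublica's actual methodology or real COMPAS data; the field names and toy records are hypothetical.

```python
# Illustrative sketch only: an audit that compares false positive rates
# across groups. The record fields and the toy data below are hypothetical,
# not real COMPAS records.

from collections import defaultdict

def false_positive_rates(records):
    """For each group, compute P(labeled high risk | did not reoffend)."""
    fp = defaultdict(int)   # non-reoffenders flagged as high risk
    tn = defaultdict(int)   # non-reoffenders correctly flagged as low risk
    for r in records:
        if not r["reoffended"]:
            if r["predicted_high_risk"]:
                fp[r["group"]] += 1
            else:
                tn[r["group"]] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}

# Hypothetical toy records, for illustration only.
toy = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]

print(false_positive_rates(toy))  # e.g. {'A': 0.5, 'B': 0.67}
```

A gap between the groups' false positive rates is the pattern ProPublica reported: people who never went on to reoffend were labeled high risk at very different rates depending on race.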

This happens because the algorithm learns biased patterns from records that reflect structural inequalities. Past arrest data encodes racial disparities in police practices, such as the over-policing of minority neighborhoods. So when the algorithm uses this data to forecast reoffending risk, it reproduces those disparities. As legal scholar Sonja Starr warns, "The data reflect not just a biased system but a biased system in action, imposing its judgments." Once encoded into the algorithm, the biases become hidden self-fulfilling prophecies.

Some argue that removing sensitive attributes like race from the data solves the issue. But as researchers like Cynthia Rudin have shown, neutral factors like zip code and education level can operate as proxies for race. The algorithm finds correlations with groups even without explicit demographic labels. Stripping sensitive attributes is insufficient to remove systemic biases. More fundamental solutions like rethinking data inputs and equalizing false positive/negative rates across groups are required.
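
To see why dropping the race column is not enough, consider how much of that attribute a single "neutral" feature can recover. The following sketch is purely illustrative, with hypothetical toy rows; it measures how accurately the majority group within each zip code predicts an individual's group, a crude gauge of proxy strength.

```python
# Minimal sketch, with hypothetical data: even after the race column is
# dropped, a "neutral" feature like zip code can reconstruct it. Here we
# check how well the majority group within each zip code predicts an
# individual's group.

from collections import Counter, defaultdict

def proxy_accuracy(rows, proxy_key, sensitive_key):
    """Accuracy of predicting the sensitive attribute from the proxy alone."""
    by_proxy = defaultdict(Counter)
    for r in rows:
        by_proxy[r[proxy_key]][r[sensitive_key]] += 1
    # Predict the most common sensitive value seen for each proxy value.
    correct = sum(c.most_common(1)[0][1] for c in by_proxy.values())
    return correct / len(rows)

# Hypothetical toy rows, for illustration only.
rows = [
    {"zip": "60601", "race": "white"},
    {"zip": "60601", "race": "white"},
    {"zip": "60601", "race": "black"},
    {"zip": "60637", "race": "black"},
    {"zip": "60637", "race": "black"},
    {"zip": "60637", "race": "white"},
]

print(proxy_accuracy(rows, "zip", "race"))  # 0.67: zip alone recovers race most of the time
```

When a proxy recovers the sensitive attribute this reliably, a model trained without the race column can still reproduce the same group-level patterns, which is why the deeper remedies mentioned above target the data inputs and error rates rather than simply deleting columns.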

Justice or Machine? Examining the Role of AI in Sentencing - Automating Injustice?

The question of whether AI sentencing systems automate injustice goes beyond issues of algorithmic bias. Even algorithms trained on perfect data would still raise concerns about the automation of morally complex decision-making. Sentencing requires weighing nuanced factors like motive, context, and capacity for reform. Can fixed models truly capture the human judgment involved?

Critics argue sentencing algorithms fail to account for individual circumstances in the pursuit of standardized outcomes. They point to cases like that of Glenn Rodríguez, a New York inmate denied parole after COMPAS rated him high risk despite an exemplary record of rehabilitation. The algorithm treated Rodríguez like other offenders with similar data points, ignoring the mitigating factors in his case. Humans can exercise discretion and empathy based on the specifics of a case. But algorithms deploy fixed rules, incapable of considering unprogrammed exceptions.

This rigidity means algorithms cannot adapt sentencing to an individual's narrative or potential. Christine Clarridge of The Seattle Times notes that "Humans on the bench can discern the difference between a redeemable teenager in a gang and a lifelong criminal." An algorithm looks only at data, not the fuller human story. So it cannot distinguish a youthful mistake from an incorrigible threat, nor recognize rehabilitation and growth.

Judges themselves have raised concerns about automated sentencing advice. Hon. Mark W. Bennett argues algorithms often get sentencing recommendations wrong because they cannot capture the intricacies involved. He cites cases in which he rejected sentencing software suggestions as unduly harsh after weighing individual factors. Other judges likewise override algorithmic advice at times, drawing on human discretion and domain expertise.


