AI to bring radical change to the criminal justice system
What is apparent is that algorithms in their current form lend themselves to the very same biases already perpetuated in the criminal justice system. But AI also presents new and exciting opportunities for the sector.
In 2021, in an article for the International Bar Association, Asma Idder and Stéphane Coulaux pointedly asked, “Artificial Intelligence (AI) in criminal justice: invasion or revolution?”
As we find ourselves further entrenched in the Age of AI, it stands to reason that to eschew technology is to risk irrelevance. The authors concluded that humanity is called on to evolve with technical progress.
AI is advancing at an astonishing pace. In November 2022, for example, OpenAI launched ChatGPT, a “large language model tool” built on its Generative Pre-trained Transformer (GPT) architecture.
The transformer architecture underpinning it was introduced by researchers at Google, and it has markedly improved the efficiency and effectiveness of natural language processing. Trained on vast amounts of text data, the model learns to predict which word should come next, stringing words together meaningfully and mimicking human speech patterns. AI can now write poetry and speeches to order; ask it for an address in the style of Winston Churchill, for example, and it will oblige.
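As a rough illustration of this next-word prediction, the sketch below (an illustrative addition, not drawn from the article) uses the openly available GPT-2 model via Hugging Face’s transformers library; the model choice and prompt are assumptions standing in for ChatGPT’s much larger, closed models.

```python
# A minimal sketch of next-word prediction with an open GPT-style model.
# Assumes the Hugging Face "transformers" and "torch" packages are installed;
# GPT-2 stands in here for ChatGPT's far larger underlying models.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "We shall defend our island, whatever the cost may be, we shall"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the most likely next token, stringing
# words together in the style of the text it was trained on.
output_ids = model.generate(**inputs, max_new_tokens=25, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```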
This technology is so transformative that educational institutions in the United States and Europe are already sounding the alarm, as students are submitting homework and assignments written by ChatGPT. ChatGPT could potentially take in all the information in a criminal trial (written, oral, video and online) and draft an entire summary judgment.
When ChatGPT was asked what should be done about the ethical implications of its capabilities, it suggested that “ultimately, the appropriate level of regulation for ChatGPT will depend on the specific risks and potential harms associated with the technology. As with any new and powerful technology, it’s important to carefully consider the potential impacts and take steps to ensure that it is used in a responsible and ethical manner.”
The case for AI in the criminal justice system is overwhelming. Globally, a plethora of challenges plagues criminal justice systems: the burden on them has grown even as budgets have been cut substantially.
AI technology such as ChatGPT can help address these challenges, enabling the efficient and effective use of resources while modernising the criminal justice system through a proactive approach.
The potential for AI in the sector is significant, and we have already seen its rollout in parts of the Global North. There are apparent uses for AI in law enforcement agencies, criminal proceedings in the courts, prison systems and parole boards. There is also an argument to be made for its use in crime prevention and forecasting.
A collection of studies in the United States in 2021 demonstrated that AI could accurately predict the outcomes of court cases based on historical judgments, the judge’s history and the facts of the particular case. Research also suggests that AI systems could make more rational decisions than judges, with the caveat that they, too, can carry built-in bias.
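To make the mechanics concrete, here is a minimal sketch (not taken from the studies cited) of how such an outcome-prediction model might be trained; the features, data and model choice are entirely synthetic assumptions.

```python
# Illustrative sketch only: predicting a binary case outcome from
# synthetic features of the kind such studies describe. The data and
# feature names are invented for demonstration, not real court records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic features: judge's historical conviction rate, offence
# severity, strength of evidence (each scaled 0-1).
X = rng.random((n, 3))
# Synthetic "ground truth": outcome loosely driven by the features plus noise.
score = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * rng.standard_normal(n)
y = (score > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is simply that such systems learn statistical patterns from historical data; whatever biases sit in that data are learned along with everything else.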
In a 2011 study of Israeli parole decisions, researchers from Ben-Gurion University and Columbia University observed that judges were much tougher shortly before lunch but markedly more lenient after it. This speaks to the inherent subjectivity of humans presiding over cases.
Importantly, however, we must be acutely aware of the limitations and concerns around introducing AI into the criminal justice system. Aristotle once declared “the law is reason unaffected by desire.”
The philosopher perhaps had not anticipated the effect of bias and discrimination on the law and the subsequent proliferation of AI systems that perpetuate this phenomenon.
For instance, examples in the United States already indicate that AI systems are often intrinsically biased. To cite just a few: police forces use facial recognition software from Idemia, whose algorithms scan faces, yet test results suggest these algorithms confuse black women’s faces more often than white women’s faces. This inherent bias could lead to wrongful persecution or prosecution.
Similarly, Amazon’s face recognition algorithm, Rekognition, wrongly matched 28 members of Congress with mugshots. Approximately 40% of Rekognition’s erroneous matches were of people of colour, although they constitute only 20% of Congress. Research indicates that facial recognition software is less accurate on dark-skinned faces and on women’s faces.
As conversations swirl around criminal risk assessment algorithms, there are concerns that tools designed to take in the details of a defendant’s profile and produce a recidivism score — indicating the likelihood they will re-offend — lend themselves to bias.
Machine-learning algorithms pick up patterns in data indicative of statistical correlation rather than causation, furthering the risk of discrimination. Joy Buolamwini and Timnit Gebru suggest that “intersectional phenotypic and demographic error analysis can help inform methods to improve dataset composition, feature selection, and neural network architectures.”
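In heavily simplified form, the kind of intersectional error analysis Buolamwini and Gebru describe amounts to breaking a model’s error rates down by demographic subgroup, as in the hypothetical sketch below; the groups, predictions and labels are invented for illustration.

```python
# Simplified illustration of subgroup error analysis: compare a
# classifier's false-positive rate across demographic groups.
# All data here is synthetic; a real audit would use real predictions
# and outcomes, broken down by intersecting attributes.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   0,   1,   1,   1,   0,   1,   0],  # model flags "high risk"
    "actual":    [1,   0,   0,   0,   1,   0,   0,   0],  # actually re-offended
})

# False-positive rate per group: share flagged high risk among those
# who did not in fact re-offend.
negatives = df[df["actual"] == 0]
fpr = negatives.groupby("group")["predicted"].mean()
print(fpr)  # a large gap between groups signals disparate error rates
```

A large gap in error rates between groups is exactly the kind of disparity that such audits are designed to surface before a tool is deployed.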
What is apparent is that algorithms in their current form lend themselves to the very same biases already perpetuated in the criminal justice system.
It is important to note that these systems are still in their infancy. Understanding the potential for bias, increasing transparency around the use and testing of this software, and expanding the data available to algorithms all contribute to combatting bias and discrimination in these systems. As Marwala suggests, this technology should be used in conjunction with current systems, and safeguards need to be implemented to eliminate bias. AI, after all, is a versatile learning tool that improves with more data and experience.
Documents such as the 2018 European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems, adopted by the Council of Europe’s European Commission for the Efficiency of Justice, are important for regulation and should provide a basis for our approach in other contexts.
As this particular document outlines, it is imperative to ensure that AI tools and services are designed and implemented in a way that is compatible with fundamental rights and that prevents the development or intensification of discrimination. This can be achieved through a multidisciplinary approach, by making processes understandable and accessible, and by precluding a prescriptive approach in favour of ensuring that users remain informed actors in control of their choices.
It is apparent that AI is not a universal remedy for all the challenges facing the criminal justice system, although it does present new and exciting opportunities for the sector. To ensure its effectiveness, its use in other jurisdictions must be interrogated for lessons that can be learned, and the approaches adopted must be suited to the local context.
The weight of cases cripples the criminal justice system right across the value chain, and decisions should be made to adopt and adapt AI incrementally to improve effectiveness and efficiency. AI certainly carries the risk of being an invasion, but with the right safeguards in place, it promises a revolution.