Sometimes I come across the most fascinating stuff. In this case it was an article in the American Bar Association journal.
At the ABA’s 2010 AGM, a session looked at the future of the justice system. Though it might seem perfectly normal for any AGM to discuss the future of its industry, this particular session embodied my favourite topic – incorporating ideas still commonly considered science fiction into everyday practice.
When most people think about the future of justice, it’s changing legislation, laws and rehabilitation techniques that come to mind. But what if the future of the justice system is to dehumanize the field – to bring in a system where machines dole out justice and preventative policing becomes the purview of scientists?
Four theoretical changes to the justice system were proposed:
• Human judges and juries will be rendered obsolete by artificial intelligence.
• Genetic testing and brain scans may shift the judicial focus to preventing crime rather than ad hoc punishment.
• Psychosurgery and mandatory drug treatments could replace prisons.
• A justice system could be outsourced to private corporations.
Our technological growth is a marvel. The challenge, however, is that technology has no moral compass, and we as a society have become almost completely reliant on our gadgets. That means caution is necessary, especially as we start developing smarter machines. There are realms of society that must maintain their humanity.
Although subjectivity does lead to its own problems when juries and judges apply the law, I think there are more inherent dangers in a justice system that is truly blind (as opposed to one that endeavours to be blind). There is a place for gut feelings, compassion, empathy and, at times, wrath in the justice system. Can a machine understand and apply these concepts? Some argue that eventually machines will have that ability, but will those be genuine emotions or simply mimicry? And is there really a difference? Assuming machines do one day develop genuine emotions, will we then have to develop a whole new approach to justice that includes machines? Logically, with emotion must come morality; with morality, an understanding of right and wrong; and subsequently, responsibility and the need for consequences. In that case, AIs will be neither better nor worse than humans. In fact, it may become impossible for one to administer justice over the other, as their cultures may evolve to be completely different.
Brain scans, genetic testing and psychosurgery have such an Orwellian ring that the concepts may never be accepted by society. If we develop the ability to detect and control behaviour to such a degree, where do we draw the line? Which traits do we excise and which do we keep? When do we fall into the trap of weeding out undesirable behaviour as opposed to criminal behaviour? Even more frightening: who defines the parameters?
Finally, the thought of farming out justice to private corporations is such an absurd notion that I question the validity of the entire forum. We’ve all seen what corporate justice looks like: corruption, preferential treatment and justice for those with the deepest pockets would reign (even more than they already do).
As a plug, this post was inspired by a tweet courtesy of my friends over at Singularity News. If you like watching sci-fi become science fact, check these guys out. They have a recent article about re-writing Asimov’s three laws of robotics, which I may blog about at a later date.