A dystopian future awaits humanity if we cannot recognize the dangers of programs such as the one British police are working on. The cops want to create an artificial intelligence program that will somehow stop crimes before they’ve been committed; in other words, the thought police.
Police in the United Kingdom are piloting a project that uses artificial intelligence to determine how likely someone is to commit or be a victim of a serious crime. These include crimes involving a gun or knife, as well as modern slavery, New Scientist reported on Monday. Ironically, the police creating the program don’t see government as modern slavery, yet that’s exactly what it is.
The West Midlands Police department is heading the trial project through the end of March 2019 and is expected to have a prototype by then. Eight other police departments are reportedly involved as well, and the hope is to eventually expand its use to all police departments in the UK.
According to Gizmodo, the program, dubbed the National Data Analytics Solution (NDAS), pulls data from local and national police databases. Ian Donnelly, the police lead on the project, told New Scientist that they have already collected over a terabyte of data from these systems, including logs of committed crimes and about 5 million identifiable people.
The system has 1,400 indicators from this data that can help flag someone who may commit a crime, such as how many times someone has committed a crime with assistance as well as how many people in their network have committed crimes. People in the database who are flagged by the system’s algorithm as being prone to violent acts will get a “risk score,” New Scientist reported, which signals their chances of committing a serious crime in the future. –Gizmodo
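The actual NDAS model has not been made public, so there is no way to know how it combines its 1,400 indicators. Purely as a sketch of the general approach described above (many per-person indicators reduced to a single "risk score"), here is a minimal, invented example; every indicator name and weight is hypothetical:

```python
# Illustrative only: the real NDAS algorithm is not public.
# A naive weighted-indicator risk score, where each indicator
# contributes its value times an assumed weight, clamped to [0, 100].

def risk_score(indicators: dict, weights: dict) -> float:
    """Weighted sum of indicator values, clamped to the range 0-100."""
    raw = sum(weights.get(name, 0.0) * value
              for name, value in indicators.items())
    return max(0.0, min(100.0, raw))

# Hypothetical person, using the two kinds of indicators the article
# mentions: prior crimes committed with assistance, and convictions
# among people in the person's network.
person = {"crimes_with_assistance": 2, "network_convictions": 5}
weights = {"crimes_with_assistance": 12.0, "network_convictions": 6.0}

print(risk_score(person, weights))  # 54.0
```

Even this toy version shows why such scoring is contentious: the "network_convictions" term means a person's score rises because of what *other people* did, exactly the kind of guilt-by-association signal the article goes on to criticize.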
Donnelly told New Scientist that they don’t plan to arrest anyone before they’ve committed a crime, but that they want to provide counseling to those the system indicates might need it. He also noted that there have been cuts to police funding recently, so something like NDAS could help streamline and prioritize the process of determining who in their databases most needs intervention.
Even with that “assurance,” it isn’t difficult to imagine how quickly this kind of technology could become tyrannical. There is a serious invasion of privacy inherent in intervening with individuals before anything traumatic has even happened. This system would be responsible for sending mental health professionals to people’s homes because an algorithm suggested that, at some point in the future, they might commit or fall victim to a crime. Enacting that type of intervention across an entire country paints a picture of an eerily intrusive, dystopian, and tyrannical future in which the government wholly enslaves the public on a whim.