The use of technology tends to fall into two categories. The first is using it to improve the efficiency and speed with which we carry out existing tasks. The second, and potentially far more interesting, approach is to use it to do things that were previously impossible.
In retrospect, one of the most profound examples of the latter in policing was the introduction of speed cameras. For the first time, people in large numbers started to be fined, and in some cases stripped of their driving licences, based on evidence that was recorded automatically.
The key point here, for reasons that I’ll get to, is that the trigger to gather the evidence (and indeed to determine that a crime has happened in the first place) was also automatic. With that in mind, given the rise of both facial recognition and the Internet of Things (IoT), the obvious question is to what extent it is desirable and practical to extend this approach within policing.
Jaywalkers beware!
Last year, news broke that traffic police in Shenzhen, China were working to extend their facial-recognition system to name and shame jaywalkers and, potentially, to fine such offenders automatically.
It is easy to imagine this being extended to other crimes, possibly in combination with additional metrics and data sources such as gait analysis, mobile phone data and so on. For more serious offences, such as public order offences, criminal damage and anti-social behaviour, a drone could also be dispatched to gather footage from different angles.
Clearly, this approach has many potential benefits. For instance, the perpetrator of a crime that might have otherwise gone either unwitnessed or unreported could be identified and punished automatically.
This would require little in the way of police resources: no officers would need to be physically present, and, depending on the severity and complexity of the offence, only a minimum of investigative work would be needed.
There is also the potential to use video analytics to spot patterns of suspicious behaviour and so identify pickpockets, drug dealers and so on. Artificial intelligence is good at gradually learning what constitutes ‘normal’ behaviour, and then reporting when deviations from it occur.
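To make this concrete, the sketch below shows one common way such deviation detection can work, using an isolation forest over behavioural features. The feature choices (dwell time, walking speed), the thresholds and the library are illustrative assumptions on my part; a real video-analytics pipeline would extract far richer features upstream.

```python
# Minimal sketch of deviation-based behaviour flagging.
# Assumes per-person features (dwell time in seconds, average walking
# speed in m/s) have already been extracted from video by an upstream
# tracking pipeline -- that part is not shown here.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 'Normal' behaviour observed over time: short dwell, steady pace.
normal = np.column_stack([
    rng.normal(30, 10, 500),    # dwell time (s)
    rng.normal(1.4, 0.3, 500),  # walking speed (m/s)
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one ordinary pedestrian, one loiterer.
observations = np.array([[35.0, 1.5], [600.0, 0.1]])
flags = model.predict(observations)  # 1 = normal, -1 = deviation

for obs, flag in zip(observations, flags):
    status = "flag for review" if flag == -1 else "normal"
    print(f"dwell={obs[0]:.0f}s speed={obs[1]:.1f}m/s -> {status}")
```

Note that a flag here is only a prompt for human review, not evidence of an offence, which is precisely why officer discretion matters in what follows.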
Of course, all this is far easier said than done from a technological point of view. What’s more, shifting in this direction would have huge implications for society as a whole, particularly in relation to the criminal justice process. It would also – in the UK at least – need to be considered in the shadow of the core Peelian principles of policing by consent.
With that in mind, any such system would need to be implemented so that police officers’ compassion and discretion are not designed out of the equation. Neighbourhood policing is, of course, also core to the British policing model.
At the same time, it is also important to bear in mind the corrosive effect of small offences going unpunished due to a lack of resources. Recent statistics suggest, for instance, that nearly half of all crimes in the UK are closed without a suspect being identified, and that only nine per cent result in a charge or summons. There is clearly a pressing need for new tools to help turn the tables.
Getting it right
Ensuring that the police continue to enjoy the support of the public while adopting the technology in question will probably be best achieved with a gradual approach. At the same time, it will also need to be emphasised that such technology is being used to address recent and pressing problems (for instance, the ongoing population increase coupled with an ever-diminishing number of officers on the beat). Public acceptance is also likely to be aided by making it clear where such technology is in use.
Another potential pain point is the need to ensure that, if we do go down this path, there isn’t any further hollowing out of police forces, with the technology used as an excuse for cuts. Clearly, officers will still need to respond to large and serious incidents, as well as gather information from local communities in the traditional way.
Many of the types of crime that occupy a great deal of officers’ time will also not be well suited to these new technologies. These include domestic abuse (although body-worn video cameras naturally have a role to play here) and sexual offences. Nor can such systems address the increasing amount of time that police officers currently spend acting as unofficial social workers.
Technology change on this scale is always a challenge. However, the successful integration of the likes of automatic number plate recognition (ANPR) and CCTV into modern workflows, and the fact that such systems can be introduced as a bolt-on to traditional practices, give some grounds for cautious optimism.
That said, any system dependent on facial recognition at scale will need to address the problem of false positives. Big Brother Watch’s table of facial-recognition deployments makes for sobering reading in this regard, and highlights just how much work remains to be done in this area. Any such system will also have to contend with criminals’ use of simple but effective countermeasures, such as hoods and other means of obscuring their faces, hence the need to try to combine facial recognition with other types of data where possible.
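It is worth spelling out why false positives bite so hard at scale. The back-of-the-envelope calculation below uses assumed figures for scan volume, watchlist prevalence and match rates, chosen purely for illustration; none of them describes a real deployment.

```python
# Illustrative base-rate calculation -- all figures are assumptions
# chosen for the example, not measured performance of any real system.
faces_scanned_per_day = 100_000    # people passing a camera network
watchlist_prevalence = 1 / 10_000  # fraction of passers-by on a watchlist
true_positive_rate = 0.90          # chance a watchlist face is matched
false_positive_rate = 0.001        # chance an innocent face is matched

on_list = faces_scanned_per_day * watchlist_prevalence
not_on_list = faces_scanned_per_day - on_list

true_matches = on_list * true_positive_rate        # ~9 per day
false_matches = not_on_list * false_positive_rate  # ~100 per day

precision = true_matches / (true_matches + false_matches)
print(f"True matches:  {true_matches:.0f}")
print(f"False matches: {false_matches:.0f}")
print(f"Share of alerts that are genuine: {precision:.0%}")
```

On these assumptions, a matcher that is wrong only once in a thousand comparisons still generates roughly ten false alerts for every genuine one, simply because almost everyone scanned is innocent.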
More colourful evidence of the technology’s fallibility was reported last year in Zhejiang province, China, where a facial-recognition system picked up the face of an air-conditioning company executive from an advert on the side of a passing bus and wrongly flagged her for jaywalking. The error was quickly fixed, but it emphasises that such systems cannot be treated as ‘fire and forget’ options.
Facial-recognition vendors will also need to ensure that their underlying algorithms perform accurately across all ethnic groups, not just those represented by the people developing the algorithms and datasets.
Turning to the world of business, many low-level offences and breaches of regulations currently slip under the radar. For example, the reporting of supermarket freezer temperatures, sewage overflow events and the like is done retrospectively, and is therefore open to falsification. This could change in future, with IoT-style sensors potentially allowing real-time reporting, again opening the way for non-compliance to be detected automatically and fined where necessary.
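As a rough illustration of how such automatic detection might work, the sketch below checks each incoming sensor reading against a compliance threshold as it arrives. The threshold, site identifiers and data layout are all assumptions made for the sake of the example, not values from any real regulatory standard.

```python
# Minimal sketch of automated compliance checking on IoT sensor
# readings: breaches are flagged the moment a reading arrives,
# rather than relying on retrospective, manually reported logs.
from dataclasses import dataclass
from datetime import datetime, timezone

FREEZER_MAX_TEMP_C = -18.0  # assumed compliance threshold

@dataclass
class Reading:
    site_id: str
    sensor_id: str
    temp_c: float
    timestamp: datetime

def check_compliance(reading: Reading) -> None:
    """Flag a breach as soon as a reading exceeds the threshold."""
    if reading.temp_c > FREEZER_MAX_TEMP_C:
        print(f"BREACH at {reading.site_id}/{reading.sensor_id}: "
              f"{reading.temp_c:.1f} C at {reading.timestamp:%H:%M} "
              "-- logged for automatic enforcement")

# Example: a stream of readings, the last of which breaches the limit.
now = datetime.now(timezone.utc)
for temp in (-19.5, -18.2, -12.4):
    check_compliance(Reading("store-042", "freezer-A", temp, now))
```

The point of the design is that the record is created by the sensor itself at the moment of the breach, which removes the opportunity to falsify the paperwork after the fact.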
While this is some way away from the crimes that a typical police officer concerns themselves with, it could add to the narrative around stronger enforcement of laws and regulations through the use of technology.
Ultimately, we have laws and those who enforce them to balance the freedom and rights of the individual against their ability to impact those of another person (and the well-being of society as a whole). Given the rise of facial recognition and social media analysis, there is no doubt that privacy and data protection concerns will play an increasingly important part in this equation. While new technologies may one day allow people to be fined for minor crimes such as littering, they must be fit for purpose. There also needs to be a full public debate about any shift in this direction, alongside plenty of education on how the technology will be used, as well as both its benefits and its limitations.
The backlash has started
A beginning is a very delicate time, and this is especially true in the world of technology. Just look at how the promise of genetically engineered crops, which have the potential to alleviate a vast amount of human suffering, has not been fully realised thanks to activism over health and environmental concerns, as well as fears that they increase the power of corporations such as Monsanto. There are signs that the tide of public opinion may already be turning against facial recognition: at the time of writing, three cities in the US have banned the use of the technology (San Francisco and Oakland in California, and Somerville in Massachusetts). It is also interesting to see that Axon, the body-worn video camera vendor, has said that it “will not be commercialising face-matching products on our body cameras at this time”, although its AI team “will continue to evaluate the state of face-recognition technologies”.
Part of the problem is that public perception of a new technology can be heavily influenced by its least ethical use, as this can galvanise activism and polarise discussion. With this in mind, the use of facial recognition in Xinjiang, China, alongside the placing of QR codes on homes that link to police files on their occupants, as part of the widespread programme of control that the government has instigated against the minority Muslim Uighur population, is a matter of some concern.