Technologies that accelerate public safety tasks and processes can transform outcomes for those affected by the issue being addressed. Yet new technologies often create new questions for everyone with a stake in that issue. Few emerging technologies, at least in recent years, have been met with the uneasiness that has followed facial recognition trials and roll-outs. This has created an environment of distrust between citizens and those tasked with protecting them.
Dame Cressida Dick, commissioner of London’s Metropolitan Police Service, recently said that “inaccurate” critics should “justify to the victims of those crimes why police should not be allowed to use tech… to catch criminals” in response to a Royal United Services Institute report that called for tighter rules on police use of technology. A month prior to those comments, the Metropolitan Police Service announced it will begin the operational use of Live Facial Recognition (LFR) technology developed by NEC, a Japanese technology company. Those comments seem to have been intended to alleviate distrust of this technology, yet the reader will likely either welcome or reject that explanation based on their existing views on facial recognition.
It can be difficult to sort the facts from the fiction – from the current capabilities and near-term developments, through to the regulatory landscape surrounding its use. The public safety community is known for its cautious use of new technologies – is this new territory any different?
Current capabilities
NXP Semiconductors develops components that power facial-recognition technologies: the microcontrollers, processors and sensors it produces handle the data analytics and multi-protocol communications on which facial and object recognition depend. Steve Tateosian, senior director for IoT and security solutions at NXP, explains that the company is involved in facial recognition technologies in three ways: providing its processors to developers; providing tools, software libraries and a software development environment that make it easier to build facial recognition applications; and providing a complete facial recognition solution spanning both software and hardware.
Tateosian adds: “In some of our newer devices, there are hardware accelerators that are specifically designed to accelerate neural networks and our software development environment, called eIQ, enables developers to take a wide variety of models and inference engines and port those in an efficient way to NXP processors.”
NXP has just launched a new ‘vision solution’ that includes a hardware module design and associated software to facilitate offline face and expression recognition. This means, Tateosian explains, “a customer can buy a module with a camera on it and the second they plug it in, they can see their face through the camera and see that it’s not recognising their face. Then we have a set of different ways that they can train faces on the device, instantly.”
This simplification of the development cycles behind facial recognition technologies opens the doors to more widescale use in the coming years. Tateosian says: “What used to require very high-performance processors and cloud-based support for processing can now be done on the edge without cloud intervention. The devices themselves are changing, of course, but really the bigger breakthrough is on the software side. The software is getting more and more sophisticated and streamlined, and that is enabling face recognition to become more prevalent in the future.
“There’s always – or maybe I’m overstepping to say ‘always’ – but there’s always going to be this use-case for really sophisticated, cloud-based facial recognition for public security or through customs or immigration and other areas. There may be these powerful engines that can recognise multiple faces at a single time and process those in the cloud.”
He adds: “Being able to do all the processing and learning on the edge means the devices themselves don’t need to be connected to the cloud and any faces that are registered on the device can simply be erased by the user.” This capability might help lower citizen discomfort with facial recognition technologies being used by law enforcement in public spaces. Products like those developed by NXP mean the device “doesn’t ever need to send the face ID information to the cloud. All the facial data and the camera feed remains local on the device itself.”
NXP says its facial recognition portfolio is powered by an ‘inference engine’. Tateosian says this term describes the processing that takes place on the device itself. He explains that these engines are created from a large training dataset and, for vision, the first thing the camera needs to do is find the head in the frame. The inference engine locates the face in the image, focuses on it and creates a model that can be pushed through the engine to see if there is a match on the other side.
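The stages Tateosian describes – locate the face, build a compact representation, and compare it against faces registered locally on the device – can be sketched in outline. The following is a minimal illustration, not NXP’s implementation: the detection and embedding steps are stubbed out (in a real system they would be trained neural networks running on the engine), and the gallery structure, names and matching threshold are all assumptions made for the example.

```python
import numpy as np

# Stand-ins for the neural-network stages that would run on the device.
def detect_face(frame: np.ndarray) -> np.ndarray:
    """Locate and crop the face region in the frame (stubbed: returns the whole frame)."""
    return frame

def embed_face(face: np.ndarray) -> np.ndarray:
    """Map a face crop to a fixed-length unit vector (stubbed: flatten and normalise)."""
    v = face.astype(float).ravel()
    return v / np.linalg.norm(v)

def match(frame: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Compare the frame's face against locally enrolled faces.

    Returns the best-matching name, or None if no similarity clears the
    threshold. The gallery lives on the device, so entries can be erased
    locally without any cloud round-trip.
    """
    probe = embed_face(detect_face(frame))
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = float(np.dot(probe, template))  # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Enrolment is just storing an embedding locally; erasure is deleting it.
gallery = {}
alice_frame = np.array([[1.0, 0.0], [0.0, 1.0]])
gallery["alice"] = embed_face(detect_face(alice_frame))
print(match(alice_frame, gallery))  # an identical re-capture matches: "alice"
gallery.pop("alice")                # local erasure - no cloud involved
print(match(alice_frame, gallery))  # no enrolled faces left: None
```

Keeping both the gallery and the matching on the device is what allows the erase-on-demand behaviour Tateosian describes: deleting the stored embedding removes the face entirely, because nothing was ever sent to a server.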
Understanding the limits
Facial recognition falls under the broader category of biometric data. Yet other types of biometric data are not dogged by a similar perception of inaccuracy. Merritt Maxim, vice-president and research director at Forrester, explains that fingerprints have a long history of research behind them, and fingerprint analysis is supported by intellectual property that is accepted as highly accurate. He says: “Fingerprints have been used for decades, so there is a good understanding of how it works and how fingerprints can change. Facial recognition is much newer and therefore hasn’t had the same level of usage in the field and that’s why there continues to be questions about how well it adapts to behavioural changes. You get older, your hair gets grey, you grow a beard, or you get wrinkles or other things… does a facial recognition technology have the ability to adapt to those physiological changes and still maintain a high level of accuracy?”
That question, Maxim says, will not be answered in the near term. He says: “We won’t really know until this has been used for an extended period of time. Right now, we’re still in the early stages. What I looked like 10 years ago is a little bit different to what I look like now. Can facial recognition adjust? Until we see that in reality, I think that’s still an open question.”
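One generic approach to the adaptation problem Maxim raises – offered here as an illustration of the technique, not as a claim about any particular product – is template adaptation: after each high-confidence match, the stored embedding is blended slightly towards the new capture, so the template drifts gradually with the face. The blend weight `alpha` below is an assumed parameter for the sketch.

```python
import numpy as np

def adapt_template(stored: np.ndarray, new_capture: np.ndarray,
                   alpha: float = 0.9) -> np.ndarray:
    """Blend a confidently matched new embedding into the stored template.

    alpha close to 1 keeps the template stable; lower values let it track
    gradual changes (ageing, facial hair) more quickly. Both inputs are
    assumed to be unit-length face embeddings.
    """
    blended = alpha * stored + (1.0 - alpha) * new_capture
    return blended / np.linalg.norm(blended)  # re-normalise to unit length
```

Because only confident matches trigger an update, the template can follow slow drift over many sessions, while a sudden large change in appearance – which would fail the confidence check – is never blended in.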
Maxim adds: “The bigger issue is some of the inherent biases that seem to exist in some facial recognition algorithms, with strong racial or gender biases that mean people are misidentified. Some of that can be based on the data that’s used to train the model, and that can potentially lead to the apprehension of the wrong individual because the facial recognition technology didn’t make the correct match.”
This does not sound like an easy issue to iron out. When asked whether technology companies or public safety agencies will need to build new datasets that are representative of the populations being observed, Maxim said how this will play out remains to be determined. More diverse training data would of course help, but other functionalities or capabilities might emerge that also help deal with the issue of bias.
A need to improve the accuracy of these technologies is also an area identified by NXP’s Tateosian. He says: “I think there’s going to be more and more improvement both in the fundamental technology and doing so under different conditions. Lighting conditions, for example, have a large role to play in this. Being able to maintain accuracy across a wide range of lighting conditions is important, and I think that’s something that’s going to happen.”
Tateosian also anticipates that the systems running facial recognition algorithms will develop to require lower power, which “means you’re going to start to see these things in battery-operated devices as well”.
Existing functionalities will also gradually fall in cost, making them accessible to a wider range of applications. These include emotion detection, age prediction and gender prediction, which Tateosian says are “really on the high end but I think you’re going to see them become more mainstream”.
There are also various studies demonstrating that facial-recognition software can be tricked. In February, software company trinamiX shared details about its ‘skin-detection’ technology – an alternative that it says can detect the material it is analysing. This prevents masks, for example, from hiding someone’s identity. It remains to be seen whether that modification will become widespread.
The regulatory puzzle
The regulatory landscape around the use of this technology is complex and varies around the world; it is not possible to delve into every jurisdiction’s intricacies in one article.
In the UK, an interim report of the Biometrics and Forensics Ethics Group’s Facial Recognition Working Group was published in February 2019. It was a response to live facial recognition (LFR) trials undertaken by South Wales Police and the Metropolitan Police Service. The report found that there is a lack of independent oversight and governance of the use of LFR. It recommended that police trials of LFR should comply with the usual standards of experimental trials until a legislative framework is developed. Given the Met Police has moved ahead with operational use of LFR, the report will clearly need to be updated. A secretary for the working group advised that a further report, on collaboration between police forces and private entities, is expected this summer.
Then, in October 2019, Elizabeth Denham, the UK information commissioner, published a blog that made her feelings clear – “police forces need to slow down and justify its use”. The Information Commissioner’s Office (ICO) is the UK’s independent regulator for data protection and information rights law; it has specific responsibilities set out in the Data Protection Act 2018 and under the General Data Protection Regulation (GDPR).
The ICO carried out its own research to understand the thoughts of UK citizens, and found there is strong public support for the use of LFR for law enforcement purposes – some 72 per cent of those surveyed agreed or strongly agreed that LFR should be used on a permanent basis in areas of high crime. Denham links to that research in her blog, presenting a balanced approach to this technology: the ICO is not trying to prevent its use, but says it needs to be used cautiously.
Denham said: “Moving too quickly to deploy technologies that can be overly invasive in people’s lawful daily lives risks damaging trust not only in the technology, but in the fundamental model of policing by consent. We must all work together to protect and enhance that consensus.”
The following day, Tony Porter, the independent surveillance camera commissioner, added to the ICO’s findings. Porter said that the use of automatic facial recognition (AFR) should be within the confines of existing regulatory guidelines. He pointed to a recent Cardiff judgment that clearly set out the Surveillance Camera Code of Practice (SC Code) and section 33 of the Protection of Freedoms Act 2012 as key elements of the legal framework for the use of AFR.
While that ruling found that South Wales Police’s use of facial recognition was proportionate, it made clear that the force should be prepared to demonstrate its use is justified according to the particular facts of individual cases. This means that facial recognition should not be used without clear justification.
Below: Although most of the public supports the use of facial recognition for law enforcement, those employing it must still justify its use in order to assuage the tech’s critics. Adobe Stock/Alexander.
In the US, the regulatory landscape is even more fractured. Forrester’s Maxim points to various examples of state-level regulations, starting in Illinois in 2008 with the Biometric Information Privacy Act (BIPA). He says these regulations “reflect the growing interest and concern around facial recognition”. The political landscape in the US means that “no two laws will necessarily be written exactly the same way, so it does create some real challenges if you are a national organisation trying to deal with this growing patchwork of biometric laws. In the European case, GDPR is providing a more holistic view, but certainly here in the US, this continues to be a real problem and probably is not going to get any better because there’s really no real momentum or interest in a national equivalent of GDPR right now. That means [regulations are] going to be at the state or local level for the time being, unfortunately.”
What is interesting about that BIPA law, explains Maxim, is that it “was written well before touch ID or face ID even existed, yet it provides protections and consent if you are collecting biometric data. There have been, in the last year, several court cases against organisations that have potentially violated the spirit of that law. [Various courts have] ruled against organisations that [have been] collecting [biometric] data. This is a law that gives some consumers some protection and means to challenge what’s happening and potentially get some relief out of the misuse or miscollection of data over a period of time.”
Last summer, Somerville, Massachusetts became the second US city to bar municipal use of facial-recognition technology due to ethical concerns including the potential for government misuse and its unequal performance across racial and gender lines. At the time, Maxim explained that, despite these new limits, he expected the technology to survive scattered bans by state and local governments. He added: “The technology’s already been developed, it’s already being deployed for a range of different use-cases. It’ll continue, definitely.”
One reason why facial recognition might have attracted such controversy is because it is easier for data to be covertly collected compared with other types of biometric data. Maxim agrees with this hypothesis. He explains: “It’s non-invasive. In the case of fingerprints, you need to physically present your hand or finger to a sensor to collect that data, whereas facial ID data just needs a camera and then it can start collecting images of people without any consent at all.”
In December, cyber-security firm Comparitech published a study looking at how extensively and invasively biometric ID and surveillance systems are being deployed. It ranked 50 countries and found China uses facial recognition technologies more extensively than any other country surveyed, including the introduction of a new facial recognition check for anyone getting a new mobile phone number. It also found China does not have “a specific law to protect citizens’ biometrics”.
What facial recognition technology should be used for clearly varies across borders, yet not all perceptions of facial recognition are negative. Maxim explains: “Fingerprints are, for a lot of people, very closely affiliated with criminal activity in a sense that when people are arrested on TV or in movies, they are usually fingerprinted. And that fingerprint goes into the criminal record. So if an organisation is asking for your fingerprint, people often have a negative opinion because they think of it as a technology that’s used more for a criminal scenario. Facial recognition doesn’t have that stigma attached to it. That stigma might be unfounded, but it does persist. That also has influenced consumers’ perception and willingness to have their fingerprints collected.”
Optimistic caution
New technologies are never perfect. That is why initial implementations should tread cautiously, treating the information these systems provide as a guide, not an instruction. There is no evidence that facial recognition is yet being used in any other way.
More work is needed to improve the accuracy of the algorithms, and gender and racial biases must be taken into account. Law enforcement agencies should be as transparent as possible, to help reassure citizens that this technology is being used carefully and proportionately. Fingerprints have been vital in law enforcement activities, and citizens now expect these to be collected. Facial recognition, as just another type of biometric data, could eventually be thought of in the same vein, if the information is taken at face value – something that is informative but imperfect. Research and development will help improve that accuracy, but we aren’t quite there yet.
Above: Adobe Stock/alice_photo.