The past decade has seen a wholesale societal transformation in how we incorporate technology into our lives.
This is obviously true in the ‘personal’ realm, with the availability of high-bandwidth solutions revolutionising how we communicate, shop, entertain ourselves and so on. It is also the case in the mission- and business-critical sector, where the increasing viability of 4G-based solutions is opening hitherto unforeseen avenues for improving safety and creating efficiencies.
Meanwhile, the past two years have seen this process accelerate even further, thanks to the circumstances created by the ongoing COVID-19 pandemic.
To take an obvious example, the increasing willingness on the part of businesses to roll out ‘remote working’ solutions has led to a step-change in how we conceive of the workplace itself. We have also witnessed the development of any number of solutions to help detect and track the virus, as well as technology to help maintain social distancing within a variety of professional environments.
With that in mind – and with the pandemic now potentially moving into its latter stages – it seems an opportune moment to examine prevailing attitudes towards the likely proliferation of ever-more elaborate technology in the years ahead.
As indicated above, we are becoming increasingly reliant on the likes of AI (whether we know it or not), but to what degree do we trust it, particularly when it comes to sifting sensitive information or helping to make life-critical decisions?
Just as interesting is the potential complacency which could conceivably develop from having every problem ‘solved’ almost the moment it arises. Could this in turn lead to increased passivity, or even entitlement, on the part of society at large?
The reality of the pandemic
One convenient way to get a handle on some of these questions – at least within the public safety context – is via research carried out last year by Motorola Solutions in collaboration with Dr Chris Brauer at Goldsmiths, University of London. Entitled ‘Consensus for Change’, the study suggests that not only do the public accept the use of cutting-edge technology by emergency services organisations but they also have an overwhelming expectation that this is exactly what should happen.
Findings illustrating this include 71 per cent of respondents saying that ‘advanced technologies’ are needed to “address challenges of the modern world”, with a similar figure suggesting that emergency services should be able to “predict risk”, again via the use of the technology in question. Three-quarters of respondents, meanwhile, said they are “willing to trust the organisations that hold their information, so long as they use it appropriately”.
Giving an introduction to the findings, lead researcher for the project, Dr Jennifer Barth, says: “During the pandemic we saw accelerated technological innovation, but it’s also key to recognise that this wasn’t always specifically around the development of new products. Rather, people were moving towards more creative thinking, leveraging solutions that answered very new problems quickly. We needed to assess risk, and then we needed to move forward at speed.”
She continues: “At the same time, the research also indicated a doubling down on trust and transparency. The public said, ‘Yes we will listen to you, but we also want to know what you’re doing, and we want you to communicate with us.’
“This was certainly the case when it came to public safety, with the public understanding that technology was being developed to keep them safe, but also wanting a stake in how it was being used. They wanted to be consulted and talked to.”
According to press information issued at the time of its initial publication, the report was written following consultation with a reported 12,000 citizens, as well as “50 public safety agencies, commercial organisations and industry experts across 10 global geographic markets”. That being the case – and taking the figures at face value – this indicates something approaching a global consensus on the use of new technology within the emergency services context.
While this is encouraging for the mission-critical communications sector as a whole, and certainly for Motorola Solutions, it also raises the question of why the global public is content to put so much faith in what could conceivably be referred to as the ‘non-human element’. Why has arguably the most cynical society in human history chosen to place such enormous faith in something so new and ultimately so unfamiliar?
One very simple answer to this could be, as indicated above, the degree to which technology has become a key enabler across pretty much every aspect of everyday life. For Motorola Solutions’ senior vice-president of technology, Paul Steinberg, meanwhile, much of the answer can also be found in the reality of the pandemic itself.
Discussing this, he says: “I would say that there are a few things going on here. As Dr Barth said, the pandemic created a situation in which innovation needed to be accelerated out of necessity. Across a broad range of industries, we saw less focus on brand new inventions – no lightspeed travel leap, more the realisation of what could be accomplished with that which already existed.
“A good example is decentralised public safety call-taking centres, which saw staff taking emergency calls from home. That was made possible through a combination of cloud technology, distributed access to relevant software platforms, as well as the availability of push-to-talk in addition to traditional two-way radio.”
He continues: “At the same time, society as a whole has also started to realise the degree to which technology plays an ever-increasing role, not just in relation to people’s ability to function but also their overall wellbeing. Over the course of the pandemic, we all became very dependent on technology in this regard, and by and large it worked.
“The logical conclusion in relation to this is: why wouldn’t I, as a citizen, want to know that public safety is using the same things to keep me safe that I can access at home? Of course, the technology designed for mission-critical situations must adhere to much higher standards, particularly in terms of reliability.”
Trust issues
As readers will remember, one particularly striking finding of the recent report was that 75 per cent of respondents are willing to trust organisations that hold their information on condition that it is used appropriately. Again, this is good news for both manufacturers and public safety agencies, signalling as it does an implicit level of trust in both.
Where the figures get potentially more interesting, however, is around issues where the definition of ‘appropriate’ use itself comes into question. One specific pain point centres on the use of artificial intelligence, with a much lower 52 per cent of respondents saying they would trust AI to “analyse situations of threat”.
For both Barth and Steinberg, this clearly indicates that there is more to do in terms of building consensus around use of this technology in particular, something which is in itself no surprise given how AI has traditionally been portrayed across the wider culture.
One only has to think of fictional AI network ‘Skynet’ becoming self-aware prior to engineering global nuclear catastrophe in the Terminator films. It is also difficult to forget the open letter on AI signed by the likes of Stephen Hawking and Elon Musk in 2015. This called for research into the wider potential impact of the technology, while also warning that for it to be beneficial, human beings must continue to be in control of it rather than vice versa.
Discussing this apparent lack of trust in AI while also linking it with wider concerns around privacy, Steinberg says: “I don’t think that it’s very surprising that 52 per cent said that they would trust AI to analyse a situation of threat. This highlights the need for more public education around how AI is actually used in this context, and the role it can play in supporting public safety.”
Illustrating this notion further, he continues: “In our day-to-day lives, AI is increasingly infused throughout our experience, generally as a way of making surrounding technology better, more efficient and more personalised. That’s different from [as in dystopian science fiction] humans being somehow displaced by it.
“We see time and again that people are willing to give something – for instance, information about themselves – if they trust the provider and understand what they get in return is to their benefit. Safety and security are clearly very important benefits to individuals.”
Steinberg illustrates his characterisation of AI as a kind of ‘enabler’ for human decision-making by describing Motorola Solutions’ general philosophy around its development and deployment.
His comments are all the more compelling given the company’s ongoing development of the technology, which – in its words – is “designed to support humans with the analysis of increasingly complex data to improve efficiency and accuracy”. This is what the company calls the “human-in-the-loop” principle, whereby the person, not the AI, makes the decision on any critical action.
“Our thinking on this is that AI is human augmentation, not human displacement. It’s not going to take any action of significance on its own. [The job of AI] is information synthesis and data reduction, not drawing a conclusion, with human beings still having the ultimate responsibility. AI has an important role to play, but it should never replace the role of human judgement in critical areas such as public safety.
“For instance, a lot has been spoken about using AI to analyse radiological medical scans, which it does very well. At the same time, while it produces accurate results, it’s not preferable to radiologists in all cases.
“So, would the recipient of the diagnosis want the radiologist’s point of view, or the machine learning? Or both? I’ll take the latter every time.”
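As a purely illustrative aside, the ‘human-in-the-loop’ principle described above can be sketched in a few lines of code. The example below is not Motorola Solutions’ implementation – every name and figure in it is hypothetical – but it shows the basic pattern: an AI layer synthesises raw data into ranked alerts, while any consequential action is gated on an explicit human decision.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # originating feed or sensor (hypothetical)
    summary: str       # AI-generated synthesis of the raw data
    confidence: float  # model confidence score, 0.0 to 1.0

def ai_triage(raw_events: list[str]) -> list[Alert]:
    """Stand-in for the AI layer: reduces raw events to ranked alerts.
    It synthesises and prioritises -- it never takes action itself."""
    alerts = [
        Alert(source=e, summary=f"Possible incident: {e}", confidence=0.5)
        for e in raw_events
    ]
    return sorted(alerts, key=lambda a: a.confidence, reverse=True)

def dispatch(alert: Alert) -> None:
    """The critical action -- only ever reached via a human decision."""
    print(f"Units dispatched: {alert.summary}")

def human_in_the_loop(raw_events: list[str]) -> None:
    for alert in ai_triage(raw_events):
        # The human operator, not the model, makes the call.
        answer = input(
            f"{alert.summary} (confidence {alert.confidence:.2f}). "
            "Dispatch? [y/N] "
        )
        if answer.strip().lower() == "y":
            dispatch(alert)

if __name__ == "__main__":
    human_in_the_loop(["unattended bag, platform 4", "crowd surge, gate B"])
```

The design point the sketch makes is simply that the AI function returns information rather than performing actions; the action itself sits behind a human prompt.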
Two-way conversation
The figures presented in the study indicate that the public has an increasing expectation that the emergency services will use cutting-edge technology to help keep society safe. Thankfully, this is something we are already seeing, hastened – at least in large part – by the COVID-19 pandemic.
One example of this is the ‘decentralised’ control room technology referenced by Steinberg earlier in the interview (as demonstrated, for instance, by West Yorkshire Police in the UK enabling its control room operatives to remotely access its HQ-based server via the use of a VPN).
Another UK-based example is the speed with which the whole of British policing rolled out Microsoft Teams at the beginning of the pandemic, something which has likewise transformed the culture of the organisation.
Going back to potentially more contentious technology such as AI (not to mention the likes of facial recognition), in what ways does the conversation need to evolve? How can greater trust be established in these new technologies, both on the part of the organisations adopting them and the citizens they are designed to protect?
“Regarding the organisations themselves, what we found through the research is that they have to get comfortable with the technology,” says Steinberg. “They have to know that it’s going to behave predictably, and that it’s going to make their job easier, not harder. At the same time, society requires the use of it to be fair and accurate – ultimately that it makes life better. We always advocate transparency.”
Picking up on this idea of ‘transaction’ between public safety agencies and the people they protect, Dr Barth continues: “In terms of the research, we essentially split the respondents between ‘catalysts’, ‘advocates’ and those who were falling behind. The catalysts understood that you have to give something to get something.
“It’s the same thing that people understand about using their mobile phone. Even if they don’t fully understand why, they know that the information they give is being used to spit back out something that they need or want.”
She continues: “Regarding something like artificial intelligence, communication around what it is tends to be like something out of science fiction. It’s become on one hand hope, on one hand fear, and on one hand everything.
“It isn’t reasonable to have a technologist go out and explain these things, particularly on behalf of those providing public safety. At no point do we want that gap, so it’s a challenge.”
It is clear that what we somewhat erroneously regard as ‘future’ technologies are going to play an increasing role as we move forward. If the emergency services are to leverage these to their fullest extent, it is also clear that the public needs to be on board, understanding their uses as well as their limitations.