Encryption Holds Clue to Solving AI-driven Police Surveillance Fears

Police use of Artificial Intelligence (AI) is coming under intense scrutiny as fears grow over its potential for misuse. Surveillance technologies with embedded AI, such as biometrics and facial recognition, are potentially invaluable to police investigations.

Privacy campaigners, meanwhile, have serious concerns that such powerful tools are being rolled out internationally before adequate legal safeguards are in place to protect the privacy of citizens and their data.

Already, there is broad public awareness of the extent to which online activity is monitored. This might explain why consumer adoption of Virtual Private Networks (VPNs) continues to flourish.

Encryption, the technology that creates a secure tunnel for digital communications and renders the content unintelligible to anyone without the key, could provide a vital clue to solving the AI surveillance conundrum.

AI Surveillance Taking Off

Governments everywhere are interested in intelligent surveillance systems.

Western democracies hope to use them to prevent terrorism or solve crimes. Authoritarian states view them as tools to exercise greater control over their citizens. In the USA and Europe, facial recognition remains controversial. 

Until recently, Microsoft had one of the largest collections of faces for training facial recognition algorithms. However, following concerns about privacy and ethical issues, the project was shelved.

Google has also decided to hold off on developing facial recognition systems while ethical issues remain unresolved.

Amazon, whose facial recognition customers include US law enforcement agencies, has likewise called on federal regulators to rule on how the technology may be used.

The Jury Is Out

Police forces around the world believe AI surveillance systems will become indispensable tools for preventing crime and bringing wanted offenders to justice.

In tests, live facial recognition (LFR) cameras scan people’s faces and stream the footage to an image recognition system. The system then analyses large volumes of facial data and compares it against a stored watchlist of offenders wanted by the police and courts for various offences.
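As a rough illustration of that matching step, here is a minimal Python sketch that compares a face embedding against a watchlist using cosine similarity. Everything in it is an assumption for illustration: the 128-dimension embeddings, the 0.6 threshold and the watchlist entries are invented, and a real LFR system would derive embeddings from a trained face recognition model and search far larger databases.

```python
import numpy as np

# Hypothetical watchlist: names mapped to pre-computed face embeddings.
# Random vectors stand in for the output of a real face recognition model.
rng = np.random.default_rng(seed=42)
watchlist = {name: rng.standard_normal(128) for name in ("suspect_a", "suspect_b")}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(embedding: np.ndarray, threshold: float = 0.6):
    """Return the best watchlist match scoring above the threshold, if any."""
    best_name, best_score = None, threshold
    for name, stored in watchlist.items():
        score = cosine_similarity(embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# A camera frame would be reduced to an embedding by the same model;
# here we probe with a noisy copy of one watchlist entry.
probe = watchlist["suspect_a"] + 0.1 * rng.standard_normal(128)
print(match_face(probe))  # -> ('suspect_a', <score close to 1.0>)
```

The threshold is the critical tuning knob: set it too low and the system produces exactly the kind of false matches described below.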

In the UK, London’s Metropolitan Police has tested it on several occasions; in Wales it has been used to scan crowds at a football match; and it was famously deployed at a Taylor Swift concert in 2018.

Results to date have been inconclusive. Keeping the database accurate and up to date, and eliminating racial and gender bias, are said to be particular challenges.

Concerns over accuracy have been enough to halt trials in some places.

In May, San Francisco became the first US city to legislate to stop police and other authorities from using facial recognition. Since then, further concerns have surfaced.

California lawmakers have just banned police statewide from using facial recognition technology on body cameras, after a civil rights group fed pictures of legislators into a facial recognition program and the test falsely identified 26 of them as criminals.

Uncharted Territory

Just how far AI surveillance systems will become a part of everyday life is anyone’s guess. They may one day be as ubiquitous as ATMs.

There is certainly evidence that the public is not that concerned. People already use facial recognition to unlock their phones or log in to their laptops.

Every time someone tags themselves on social media or uses the AI features in Google Photos to curate their images, the technology learns and becomes more reliable.

What will happen once the technology becomes 99.99% reliable is, to a large extent, uncharted territory.

In all probability, data protection regulations will be all that stands between our personal privacy and a dystopian world of all-out state surveillance.

Privacy in the Face of AI Surveillance

Members of the public are fully aware that their browsing habits are being tracked by social media companies and advertisers. Many are turning to VPNs to help protect their privacy online.

The software behind facial recognition systems is readily available and inexpensive. It’s finding its way into all kinds of Internet of Things (IoT) devices, from digital personal assistants to intercom systems.

From a technical point of view, encryption, the technology behind VPNs, is well placed to protect facial recognition and other biometric data from misuse.

Encrypting the data passing between remote cameras and AI surveillance databases scrambles the content, rendering it indecipherable to unauthorised third parties.
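As a minimal sketch of what that could look like, the Python snippet below encrypts a camera payload with AES-256-GCM using the widely available cryptography package. The key, camera ID and frame bytes are placeholders; in a real deployment, key distribution and the transport itself would typically be handled by the VPN tunnel or TLS.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Placeholder shared key; in practice this would be provisioned securely
# to both the camera and the receiving database.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

frame = b"...raw camera frame bytes..."  # placeholder payload
camera_id = b"camera-042"                # authenticated but not encrypted

nonce = os.urandom(12)  # must be unique for every message under a given key
ciphertext = aesgcm.encrypt(nonce, frame, camera_id)

# An eavesdropper on the link sees only the nonce and ciphertext.
# The database decrypts with the same key; any tampering with the
# ciphertext or the camera_id causes decryption to fail.
assert aesgcm.decrypt(nonce, ciphertext, camera_id) == frame
```

AES-GCM also authenticates what it encrypts, so a tampered frame or a spoofed camera ID is rejected at the database rather than silently accepted.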

In summary, police use of AI surveillance systems continues to be a divisive issue.

A variety of everyday applications ensure our facial identities are constantly being collected and stored. AI’s capacity to learn simply means the technology will become ever more reliable.

At some point, legislation must catch up with these advances to ensure appropriate checks and balances for safeguarding privacy rights are in place.

A clue to the solution may lie with existing technology for protecting the privacy of sensitive personal data.

Professional, enterprise-grade VPNs use military-grade encryption and can manage the secure connectivity of many hundreds of remote IoT devices, such as cameras, from a single central point of control.
