Sacrificing Freedom in the Name of Safety: The Biometric Paradox
Stephanie Hare discusses with DIGIT the use of biometric technology and how easily we can lose our freedom in the name of protecting our identity.
Biometrics is becoming ever more prevalent throughout society. In fact, the vast majority of consumers – 93% – now prefer biometrics over passwords for validating payments, according to recent research conducted by Oxford University.
And professional IT network Spiceworks reported in 2018 that almost 90% of businesses will deploy biometric authentication technologies by the year 2020, with 62% already using some form of the technology.
Speaking at Data Summit 2019, technology researcher and broadcaster Dr Stephanie Hare discussed the positives and potential negatives of biometric technologies. While they could help with identity verification and support law enforcement, they can also be easily misused by companies and governments alike.
Hare explained that biometric data pertains to an individual’s body, such as DNA, fingerprint, face and voice. Some biometrics must be collected directly, for example, when an individual sends off a DNA sample for analysis of ancestry or health history, or allows their fingerprints to be taken. But biometrics and other identifiable behaviours can also be gathered via the “extended self”, from a smartphone to a social media account or wearable device.
So-called ‘first generation’ biometric data, such as DNA or fingerprints, have largely been restricted for use among police services. However, a lack of regulation for their use by other branches of government, or private companies, raises serious concerns. For ‘second-generation’ biometric data, which includes the face, voice, vein print, or behavioural traits such as typing patterns, there is also a lack of regulation or oversight.
The technology is evolving and being deployed faster than society or lawmakers can keep up with, Hare argues. This creates a gap that companies and governments are exploiting. “I would argue that biometric data is its own special case. It is so powerful and so intimate and it’s potentially such a violation to have it taken and used, not just without your consent, but without your knowledge,” she says.
“That’s why it is important that you know when people are trialling that technology on you, and what they are doing with it, where they store it and what rights you have over that data.”
Biometric technologies, Hare said, are so tempting to people because they offer greater convenience and security. They can allow you to pay for goods, replace passwords, reduce instances of fraud, and save time.
Importantly, they can also be used by law enforcement to solve crimes and keep tabs on suspects. However, without proper regulation and data transparency, Hare explains, all this gathered information can easily be used against us. While these technologies bring significant benefits, they require a trade-off of our freedom and privacy.
Addressing those who say they don’t care about their data since they have nothing to hide, she said: “There are two groups of people who will care about this on your behalf: companies and countries. You need to pay attention when technology companies are begging to be regulated; they rarely do, as they don’t like it.”
People, Hare says, should look at the track record of data breaches and data protection violations for the products and services they use. “Do they trust companies like Google and Facebook to self-regulate and do right when it comes to their data?”
Already, companies are introducing ‘well-being’ tech to support their staff, such as ensuring they aren’t overworked or that they don’t sit still for too long. However, the data gathered by these wearable devices, computer monitoring software and facial emotion recognition is fed back to management and can be used to determine an employee’s productivity, promotion prospects and whether to continue their contract of employment.
This is why, she says, we should care about our data because it could be used to put our job on the line. In healthcare, biometric data can be used to decide what premiums to charge customers or even whether to offer coverage at all.
More worryingly, biometric data can be abused by governments. For example, Microsoft President Brad Smith said of facial recognition technology: “A government…could follow anyone anywhere, or for that matter, everyone everywhere. It could do this at any time or even all the time. It could unleash mass surveillance on an unprecedented scale.”
Hare warns that this has already started to happen in many countries around the world and says it’s a false belief that mass surveillance could never happen in the UK.
At present, UK police forces’ databases hold 10 million facial images without any legal basis to do so. Big Brother Watch is currently bringing legal action against London’s Metropolitan Police over their use of facial recognition technology, and Liberty is taking similar action against South Wales Police.
The Information Commissioner’s Office (ICO) is investigating the UK tax authority, HMRC, which holds voice data for 7 million taxpayers, as well as police use of facial recognition – all at an undetermined cost to the taxpayer.
In an interview for the BBC World Service, Biometrics Commissioner Professor Paul Wiles told Hare: “We have now got a whole new generation of biometric technologies that are being experimented with or deployed by the police, but they’re also being used by the commercial sector. This really needs a legislative framework, and the government’s biometric strategy does not propose to do that.”