In today’s digital world, digital problems tend to need digital solutions.
Digital problems have been brought to public discussion most recently with growing concerns about the use of biometrics on the general public. While the use of biometrics – a physical and/or behavioural measurement used to identify an individual – is not new, the use of biometric technology for facial recognition has only come to the fore in the last two decades, and the implications are still unfolding.
It’s not a piece of technology you simply roll out into the marketplace and hope for the best. This kind of technology still very much needs human intervention and supervision, especially as it’s based on deep learning and neural networks that can be prone to algorithmic bias.
When delivering products with built-in facial recognition to industry, implementation must be handled with kid gloves: sophisticated ethical discussion, due diligence and careful planning.
From terrorists to toddlers
In the early 2000s, law enforcement agencies began widespread use of proactive facial recognition surveillance to protect the masses from terrorist activity. The theory was that if they could identify, track and detain suspicious people before they were able to endanger the public, this type of surveillance was justified. Unfortunately, the drawbacks were substantial: a serious privacy concern for the public and a series of false positives for the authorities, which translated to more work instead of an improvement in performance.
On a more positive note, facial recognition is being used for some amazing humanitarian purposes around the world, like finding lost children in India. It’s also starting to be used wisely and safely by child-focused organisations to help protect the children in their care.
It’s technology that has also been used and abused for commercial gain. As far back as 2010, Facebook implemented facial recognition technology into its platform. Privacy advocates expressed concern, but the wider community didn’t seem to understand or care about the implications. Facebook and other global brands, like Amazon and Google, use facial recognition tools with a global face template pool, meaning that facial recognition is available across all galleries and/or accounts. While this makes automated identification fast and easy, it also enables greater global surveillance by organisations that are building global face databases, legal or otherwise, for commercial purposes.
Not the tool, but its use
As with any technology, it’s not the tool but its use that carries risk. Initially it’s fear of the unknown or ‘unspecified’ that presents threats, balanced against the concepts of knowledge and choice – primarily over one’s privacy and one’s ability to control one’s own identity and information. This is where the idea of identifying the true customer enters the picture: when law enforcement agencies use facial recognition technology, they are not the customer – the public is. If the public is not being kept safe because the use of the tool is unjustified or erroneous – for example, false arrests harm those mistakenly identified while letting real criminals escape notice – the application of that technology needs to be revisited.
Facial recognition in schools sits at this turning point now. In Victoria, a small group of schools started to trial a system that used facial recognition technology to check student attendance. The benefits of automated roll-checking included more teaching time – up to 2.5 hours extra per week – and reduced administration.
In response, Victoria’s Education Minister James Merlino said he was “uncomfortable” with its use and told the media that facial recognition in schools would not proceed in his state. Merlino undoubtedly saw the potential for scope creep and the risk of data abuse. While he acknowledged the technology was designed to be used for good, he also recognised it could be manipulated for other more insidious purposes.
This was not to say that facial recognition would never go ahead in Victoria; approval would hinge on whether sufficient due diligence was undertaken first, ideally in the form of a Privacy Impact Assessment on the service provider, and seeking consent from the students, the true customers.
Privacy by design
The key to alleviating any concern around facial recognition technology is to create clear contractual agreements and limitations on how the service will be deployed. These agreements should cover not just the facial recognition technology itself, but also the cameras, their locations, who can access the data, and how that content is stored and used.
Before I outline some of the main issues, users should understand the steps in how the facial recognition process works. It helps to take a mathematical (rather than interpersonal) perspective.
Effectively, a facial recognition tool:
- Ingests one or more images
- Segments faces
- Submits image segments
- Conducts algorithmic measurements of key facial features
- Translates these into an encrypted numerical face template (not an identifiable image)
- Stores the encrypted numerical face template for future comparison (i.e. enrolment)
- Matches incoming face templates against enrolled templates
- Delivers result
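To make the maths behind these steps concrete, here is a minimal, purely illustrative sketch of the template-and-match stages. All names and numbers are hypothetical: a real system would derive measurements from a neural network and encrypt the stored templates, whereas this sketch stands in hand-made feature vectors and plain Euclidean distance.

```python
import math

def make_template(measurements):
    """Translate facial measurements into a normalised numeric template.
    (A real system would also encrypt this before storage.)"""
    norm = math.sqrt(sum(m * m for m in measurements))
    return [m / norm for m in measurements]

def match(template, enrolled, threshold=0.1):
    """Compare an incoming template against each enrolled template and
    return the closest match within the distance threshold, or None."""
    best_name, best_dist = None, threshold
    for name, enrolled_t in enrolled.items():
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(template, enrolled_t)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Enrolment: templates stored against consented identities (hypothetical data)
enrolled = {
    "student_a": make_template([0.42, 0.31, 0.55, 0.12]),
    "student_b": make_template([0.10, 0.62, 0.33, 0.48]),
}

# An incoming image produces slightly noisy measurements of the same face
incoming = make_template([0.41, 0.32, 0.54, 0.13])
print(match(incoming, enrolled))  # → student_a
```

The key point the sketch makes is that what gets stored and compared is a list of numbers, not an identifiable image – which is why the quality of the surrounding controls (encryption, access and consent) matters so much.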
A system like this, where privacy is also built in by design – that is, where its core purpose is to protect the digital identity of its customers, the children – also helps to protect the school and staff from inadvertent duty-of-care and privacy risks. Put simply, if you don’t know who a student is, how can you link their consent wishes to all the images they are featured in and ensure those images are used appropriately? This is particularly important for special cases, such as children in foster care or domestic violence situations, who need greater protection.
In practice, this looks like a closed-environment facial recognition system within the school: the name of an individual is securely associated with their image, and each gallery is fenced off by an electronic barrier that controls gallery-specific content and access, media (including ownership), and the integration of parental consent in real time, at all times. The technology thus enables privacy rather than exploiting it, and builds greater efficiency into daily school photo management processes.
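The fenced-gallery idea can be sketched as a simple access-control check. This is a hypothetical illustration, not pixevety’s implementation: the class names, fields and the `can_use_image` check are all invented to show the principle that access is scoped to one gallery and every use of an image is gated by recorded consent.

```python
from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    consent: dict  # purpose -> bool, e.g. {"newsletter": True}

@dataclass
class Gallery:
    school: str
    members: dict = field(default_factory=dict)   # name -> Student
    authorised: set = field(default_factory=set)  # staff allowed into this gallery

    def enrol(self, student: Student):
        self.members[student.name] = student

    def can_use_image(self, staff_id: str, student_name: str, purpose: str) -> bool:
        # Fenced access: only authorised staff, only this gallery's members,
        # and only for purposes the caregiver has specifically consented to.
        if staff_id not in self.authorised:
            return False
        student = self.members.get(student_name)
        return bool(student and student.consent.get(purpose, False))

gallery = Gallery(school="Example Primary", authorised={"teacher_01"})
gallery.enrol(Student("student_a", {"newsletter": True, "social_media": False}))

print(gallery.can_use_image("teacher_01", "student_a", "newsletter"))    # True
print(gallery.can_use_image("teacher_01", "student_a", "social_media"))  # False
```

Note that consent defaults to `False` for any purpose not explicitly recorded – the conservative choice that matches the "specific consent" principle discussed below.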
The next step is consent, and the manner in which a school communicates the use of the technology. For facial recognition, a school must specifically state why it is using the technology, what it will do with the data and how this will be done.
Poor consent processes have the potential to do more harm than good. Here are two common types of consent processes that are ill-advised:
- Coercive consent: where a school may advise, for example, that, “If consent isn’t provided, a student cannot enrol”, or, “If your child cannot have their photograph shared, then they cannot participate in the school play”. Coercive consent isn’t consent at all. If a student truly cannot enrol without the school being able to do a particular thing with their images, it is a condition of entry, not a request for permission.
- Bundled consent: where a school’s approach is, “If we have consent for everything all on one vague form, we are covered.” Bundled consent occurs when multiple uses or disclosures of personal information are set out together without the ability for a person to consent to each item separately. For example, image consent is sometimes bundled with other types of permissions, such as medical or financial consent, and caregivers may feel pressured to agree to the full list simply to ensure their agreement to certain items is properly recorded. This also raises the question of whether the consent is truly voluntary.
The fact that you have access to personal information doesn’t mean you have consent to do as you please with it. Consent must be voluntary, informed, current and specific – anything short of this is not consent.
Clearly, facial recognition is improving and maturing quickly, so how the above processes are delivered and communicated deserves serious consideration. Facial recognition can be used in many ways, and it will be the unique combination of purpose and morality that makes or breaks this industry. The industry must be vigilant and proactive to reduce fear and welcome appropriate legislation – only then can we as the public benefit from technology that purports to serve and protect us.
About Colin Anson, CEO and co-founder of pixevety:
Colin Anson is a digital entrepreneur, and the CEO and co-founder of child image protection and photo storage solution, pixevety.