An earlier version of this article entitled: “Navigating Responsible Facial Recognition in the United States” was published by Security Business Magazine on Oct. 17, 2022.
For more details on this topic, follow this link to obtain a copy of our white paper.
The market for facial recognition technologies is expected to more than double by 2028, and each month the list of Fortune 500 brands using the technology grows. According to a recent YouGov survey, 75% of the adult population in the US uses facial recognition at least once a day to access an app or service. Thanks to AI, immense progress has been made over the past five years, and facial recognition technology offered by leaders such as CyberLink is safe, accurate, unbiased, flexible, and affordable.
CyberLink’s FaceMe technology offers a 99.81% accuracy rate (NIST FRVT), an error rate (False Match Rate) as low as 1 in 1 million, and secure built-in liveness and anti-spoofing capabilities. It is optimized to deliver the market’s top performance across platforms and hardware configurations. Clients around the world use FaceMe for a wide range of use cases, including security, access control, ID validation, face pay, time and attendance, and more. It can gather precise statistics, delivering invaluable insights into visitor demographics and sentiment. When powering AIoT devices, it can enable novel, personalized user experiences.
Despite all this, an ever-changing mosaic of state regulations, combined with public controversy and data privacy concerns over specific use cases, makes it challenging for security professionals to understand how and where they can use facial recognition responsibly and legally.
Existing laws governing the use of facial recognition are still a patchwork that varies from state to state and might seem confusing. However, they are not overly restrictive, and in most cases, simply require obtaining an individual’s consent before capturing their facial biometrics and applying existing data privacy legal requirements. Additionally, integrators and end users considering adoption and data privacy concerns with facial recognition can take extra steps to ensure the software is being used in the best interest of businesses and consumers alike.
The following overview of facial recognition regulations is meant for informative purposes and should not be used as legal guidance.
As you would expect, the US legal framework around facial recognition is primarily focused on its use in public spaces: more specifically, on the purposes for which a facial recognition system is used and on how it operates.
There are two primary categories of use as defined in legislation around biometrics which concern facial recognition: active and passive.
Active facial recognition requires a consumer’s action to initiate the system scanning and matching their face. Consider mobile payment through a facial recognition-enabled smartphone: the phone is not always reading its user’s face, but when the user taps to pay, they enable the system to scan their face and match it against the data on file. With active facial recognition, once the process is initiated, the system should scan only the foremost person in view, even if multiple individuals are present.
By contrast, passive facial recognition runs continuously in the background and does not require any action or permission before scanning faces. A good example is a system at an airport that scans all individuals as they pass through security: multiple faces are scanned simultaneously, no action is required to initiate it, and it runs constantly.
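To make the distinction concrete, here is a simplified sketch of the two flows in Python. The detection and matching calls are stand-in stubs rather than names from FaceMe or any other SDK; the point is simply that the active flow runs once, on a user’s action, and considers only the foremost face, while the passive flow scans every face in every frame.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Face:
    area: int        # bounding-box size, used to pick the foremost face
    template: bytes  # numeric template extracted from this face

def active_recognition(capture_once: Callable[[], List[Face]],
                       matches: Callable[[bytes], bool]) -> bool:
    """Single scan, run only after an explicit user action (e.g., tapping 'Pay')."""
    faces = capture_once()                       # one capture, user-initiated
    if not faces:
        return False
    foremost = max(faces, key=lambda f: f.area)  # ignore bystanders in frame
    return matches(foremost.template)

def passive_recognition(capture_stream: Iterable[List[Face]],
                        matches: Callable[[bytes], bool]) -> None:
    """Scans every face in every frame, continuously, with no opt-out point."""
    for faces in capture_stream:                 # runs for as long as frames arrive
        for face in faces:
            if matches(face.template):
                print("match found")             # e.g., trigger an alert downstream
```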
All existing state legislation on biometrics permits active facial recognition with varying requirements on notice of use and opt-in/opt-out.
An important component of compliance with existing laws relates to data privacy concerns with facial recognition. Many states with Biometric Information Privacy Acts (BIPAs), such as New York and Maryland, require that the data retained by the system be deleted within a certain period. Similarly, many state regulations prohibit the selling of any data gathered by a facial recognition system.
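As a rough illustration of what a retention rule can look like in practice, the sketch below shows a minimal purge routine in Python. The data layout and the 365-day window are assumptions made for the example, not legal guidance; the actual retention period must follow the applicable law and your own written policy.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # example value only, not legal guidance

def purge_expired(templates: dict) -> dict:
    """Drop any stored biometric template older than the retention window.

    Assumes each record carries a timezone-aware 'enrolled_at' timestamp.
    """
    cutoff = datetime.now(timezone.utc) - RETENTION
    return {
        person_id: record
        for person_id, record in templates.items()
        if record["enrolled_at"] >= cutoff
    }
```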
These standards are incredibly important in ensuring individuals’ data is protected when biometric systems are in use. Additionally, it’s important to note that a variety of facial recognition systems on the market today do not gather or store any photos as they run, making them even safer options. FaceMe creates a highly encrypted template at enrollment, based on vectors unique to each individual, and matches it against a similar template captured when performing facial recognition. Running CyberLink’s FaceMe doesn’t require the storage of any individual’s picture. The template on file is essentially a series of zeros and ones that is undecipherable, and it is virtually impossible to re-create a face from the data, even if someone were to extract the facial vectors from the binary code used in a FaceMe template.
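For readers curious what matching templates instead of photos can look like in code, below is a generic sketch using normalized embedding vectors and cosine similarity. FaceMe’s actual template format, encryption, and thresholds are proprietary, so this only illustrates the general principle: a numeric vector, not an image, is stored and compared.

```python
import numpy as np

def to_template(embedding: np.ndarray) -> np.ndarray:
    """Normalize the embedding a face-recognition model produces for one face."""
    return embedding / np.linalg.norm(embedding)

def is_match(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.6) -> bool:
    """Compare two templates by cosine similarity; no image is ever retained."""
    return float(np.dot(to_template(enrolled), to_template(probe))) >= threshold
```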
Eight states currently have laws in place addressing the use of facial recognition and related data privacy concerns, with key measures, such as opt-in consent requirements, varying by state:
California
Colorado
Illinois
Maryland
New York
Texas
Virginia
Washington
While numerous and complicated, the biometric laws in place in the US today are not overly stringent. Even so, responsible organizations implementing facial recognition should take additional precautions to ensure the technology respects their patrons’ privacy and security to the furthest extent possible. These considerations often come down to software structure and the way the facial recognition system itself operates, which can vary widely.
Consider: does it leverage a database of images gathered through non-consensual means, such as public social media photos? Does it store photos of faces as it runs? These key questions differentiate solutions that empower convenience and instill trust with customers from those that feel invasive.
Here are four key data privacy concerns with facial recognition to consider when evaluating solutions on privacy and security:
Consent is arguably the most critical concept in the facial recognition ethics debate today and is at the heart of most legislation being developed around it. Many consumers want the ability to control when this technology is used on them and what information about them it can divulge to private businesses.
Any solution that operates using photos it does not have consent to leverage should be approached with skepticism. Opt for solutions that come without a pre-installed database and instead ask you, as the business owner or security integrator, to build your own database by entering photos of consenting staff or customers who will interact with the technology.
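As a simple illustration, the snippet below sketches a consent-gated enrollment step in Python. The function and field names are hypothetical and do not come from any particular SDK; the point is that no template is created or stored until consent has been explicitly recorded.

```python
from datetime import datetime, timezone

def extract_template(photo) -> bytes:
    """Placeholder for the SDK call that turns a photo into a numeric template."""
    raise NotImplementedError("replace with the vendor SDK's extraction call")

def enroll(person_id: str, photo, consent_given: bool, database: dict) -> None:
    """Store a template only when explicit consent has been recorded."""
    if not consent_given:
        raise PermissionError("Enrollment requires explicit, documented consent.")
    database[person_id] = {
        "template": extract_template(photo),  # template only; the photo itself is not kept
        "consented_at": datetime.now(timezone.utc).isoformat(),
    }
```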
Facial recognition is not a ticket to know the name of every stranger who enters your store off the street. At its best, it’s a vehicle for convenience, security and personalization for consenting consumers who want it.
Solutions that offer only passive facial recognition, which runs constantly and scans all faces in a space without any initiating action, remove the ability for anyone to opt out. Structurally, this software cannot exclude any one individual from scanning; it operates by constantly scanning every person within the frame.
Active solutions have opt-out ability built in because they require a prompt to be completed before they scan. Consider facial recognition-based mobile payment systems: if you don’t want the system to run, or want to opt out, you simply don’t push the button to pay, and it never begins scanning your face.
Once again, consent is a key element when addressing data privacy concerns with facial recognition.
The security space evolves constantly as new vulnerabilities are discovered and remedied, and your software should evolve with it. Security procedures and data storage should be reviewed twice a year to ensure the latest privacy safeguards are integrated into the technology and that it is not leaving your users exposed to new threats.
Facial recognition technology is inarguably powerful. It is incredibly effective across a vast array of use cases, which makes it all the more important that those use cases are ethical and actually suited to the capabilities of the software.
Any vendor worth its salt should engage in a serious discussion about your planned use of the technology, both to ensure the solution will operate correctly for your use case and to ensure it will not be used to unfairly target any individuals or groups.
Addressing data privacy concerns with facial recognition and using the technology responsibly entails not only abiding by local, state, and federal guidelines but also taking additional steps to ensure user privacy and security are protected as fully as possible. Look for partners and vendors who mirror your values, and ensure communication and consent are baked into how you implement facial recognition.
If you have any questions about how FaceMe technology can work for your business, please contact us. There are a wide range of AI-enhanced facial recognition uses available, and we want to help you find the right solution for your particular business needs.
We have a team of experts you can reach out to, who will be happy to answer your questions about FaceMe solutions and how they can be applied to streamline and secure various parts of your organization.
(The information provided in this article reflects our best knowledge of data privacy concerns with facial recognition based on our own study and research, and is for informational purposes only and not for the purpose of providing legal advice. Anything mentioned herein may vary from time to time due to the change of laws. You should contact your legal counsel to obtain advice with respect to any particular issue or question.)