A Legal Profile of Facial Recognition
To win acceptance, facial-recognition technology needs to fit within a picture-perfect consumer and legal framework that balances benefits with privacy protection.
Whether it’s using your face to confirm a payment on your smartphone, monitoring your child at day camp, or being confirmed as a ticketholder at a concert, the possibilities for facial-recognition technology are virtually endless, and the technology already has become ubiquitous.
It’s a natural expansion to see automotive applications, including vehicle entry, engine starting and driver-monitoring features. But as we seek new uses, test existing technology and launch practical solutions, there are risks and considerations that could affect certain deployments, as well as the underlying design and testing.
Strike a pose on the front page
Debate about facial-recognition technology now regularly makes its home on the evening news and in consumers’ news feeds. In 2018, the American Civil Liberties Union (ACLU) found that the algorithm behind Amazon’s facial-recognition software performed so poorly that it falsely matched 28 members of Congress with police mugshots. More recently, in April 2019, an 18-year-old college student sued Apple for $1 billion in damages, claiming the company relied on facial-recognition software that led to his false arrest for thefts at Apple stores in multiple states.
But it isn’t just algorithmic failures that draw the public eye. On July 24, 2019, the U.S. Federal Trade Commission announced a $5 billion settlement with Facebook after a major investigation into how the company mishandled personal data and communications. The settlement, the largest fine in FTC history, also imposed new restrictions on facial recognition that should make any company applying the technology – including automakers and their suppliers – take notice. Among the core findings: Facebook’s data policy implied that facial-recognition technology had not been enabled (and therefore was not always functioning) when it was in fact the default mode.
The legal framework
At the federal level, facial recognition isn’t tackled as a standalone issue; its privacy and security implications often are addressed industry by industry. Past examples include:
- Driver’s Privacy Protection Act: The 1994 act prohibits the use and disclosure of certain information, including a driver’s-license photograph.
- Health Insurance Portability and Accountability Act (HIPAA): Under the 1996 law and its rules, national standards were created for the protection of personal health information that may be used to identify an individual. The rules state that full-face images and biometric identifiers must be given heightened protection as health information.
- Children’s Online Privacy Protection Act (COPPA): This legislation, enacted in 1998 to protect children younger than age 13, treats photographs or video containing a child’s image as protected information. By 2013, the FTC recognized the future risks that facial-recognition technology poses to children.
But intensifying privacy concerns have pushed legislators further. Recently, Senator Roy Blunt (R-MO) proposed the Commercial Facial Recognition Privacy Act of 2019. The bill would prohibit companies from using facial recognition to identify or track individuals without first providing notice and obtaining affirmative consent. The recorded individual would have access to plain-language documentation describing the capabilities and limitations of facial recognition.
Companies could not use the technology to discriminate against an individual, share the information without consent, or repurpose it for a use other than the one initially described. Finally, a company would have to inform all individuals – in plain language – about the collection, storage and use of their facial images. At this time, however, it appears unlikely the Commercial Facial Recognition Privacy Act will be signed into law.
Just as progress is slow on the federal front, individual states are rushing to broaden the patchwork of state-based privacy laws. One way states address facial-recognition data is by protecting consumers when their data has been exposed in a breach. Currently, thirteen states have data-breach reporting requirements that cover facial-recognition data or related biometric data.
Another approach at the state level is to create requirements for the collection, use and storage of facial-recognition information. One of the best known of these laws is BIPA, the Illinois Biometric Information Privacy Act, which forbids the unauthorized collection and storage of biometric data, including scans of face geometry. At the same time, the law states that photographs are not considered biometric data and thus are not subject to enforcement under BIPA.
The distinction regarding photographs was addressed in the Rivera v. Google case, which involved the measurement of facial features and the scanning of facial geometry. Because Google’s system created a “face template” based on a photograph, the court held that the resulting measurement data could be considered biometric data – and therefore all of Illinois’ notice and consent rules applied.
Other states with similar laws include Texas, Washington and California, with its well-known California Consumer Privacy Act. The CCPA will likely apply to the auto industry, as it pertains to businesses with gross revenues over $25 million or that annually receive the personal information of 50,000 or more consumers.
The CCPA gives individuals the right to opt out of the sale of their facial-recognition data to third parties, along with a limited “right to be forgotten” – the ability to have their image purged from a company’s stored information. New York recently rejected a bill that would have imposed restrictions potentially more onerous than California’s, while Texas, North Dakota and Washington are renewing efforts to further regulate facial-recognition technology.
Implications for we, the recognized
The application of facial recognition in the automobile could invoke several existing state laws and possibly, if passed, the Commercial Facial Recognition Privacy Act. Even if that bill fails to become law, interest in protecting facial images will continue.
“Our faces are our identities. They’re personal. So the responsibility is on companies to ask people for their permission before they track and analyze their faces,” said Senator Brian Schatz (D-HI), Ranking Member of the Senate Sub-committee on Communications, Technology, Innovation and the Internet. The pressure on companies, Schatz said, will be to ensure that “people are given the information – and more importantly – the control over how their data is shared with companies using facial recognition technology.”
A good way to be prepared for any future legislation would be to adopt the following recommendations.
For those developing and applying the technology:
- Determine a strategy based upon whether collection is “voluntary” or “involuntary” that clearly establishes how the data is stored and used
- Assess how safeguards are being implemented to maintain data integrity and accuracy
- Establish how individuals can contact the company about the use of their facial data
- Perform a Privacy Impact Assessment (PIA) to avoid generating other unnecessary privacy concerns
- Determine if information will travel across borders, as cross-border personal-data transfer restrictions may apply
For those in testing and training data analysis:
- Consider how training sets are used to improve final recognition results and decrease overfitting
- Think about algorithmic bias as you develop your training data (issues in this area have been discovered at Amazon, police departments and Immigration and Customs Enforcement)
For drafters of data-use or privacy policies:
- Be transparent and disclose any transfer or monetization of consumer facial data
- Use concise, plain language
- Provide clear information on data retention
A self-described “recovering engineer” with 15 years of experience in automotive design and quality, Jennifer Dukarski is a Shareholder at Butzel Long, where she focuses her legal practice at the intersection of technology and communications, with an emphasis on emerging and disruptive issues that include cybersecurity and privacy, infotainment, vehicle safety and connected and autonomous vehicles.