The Flawed Claims About Bias In Facial Recognition

For one-to-one matching, most systems had a higher rate of false positive matches for Asian and African-American faces than for Caucasian faces, sometimes by a factor of 10 or even 100. Somehow, the algorithmic bias studies, and the journalists who cover them, have skipped two crucial steps: they do not devote much time to asking whether the differentials they’ve found can actually cause harm, nor do they ask whether the risk of harm can be neutralized when the algorithm’s output is actually used. If they did, face recognition wouldn’t have the toxic reputation it has today. Because it turns out that the harms attributed to face recognition bias are by and large both modest and easy to control.

So it’s hard to see why being “difficult to recognize” would lead to more false arrests. Facial recognition is a biometric process that identifies, verifies, or authenticates a person using facial features extracted from a photo or video. A facial recognition system works by comparing the facial biometric patterns of the face of interest against a database of known faces to find a match. Supporting these uses of face recognition are scores of databases at the local, state, and federal level. Estimates indicate that 25% or more of all state and local law enforcement agencies in the U.S. can run face recognition searches on their own databases or those of another agency. Face recognition systems use computer algorithms to pick out specific, distinctive details about a person’s face.
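
At its simplest, the one-to-one case works by comparing two numeric representations of faces and accepting the match if their similarity clears a threshold. The sketch below illustrates that verification step in Python, assuming faceprints are already available as vectors from some encoder; the `verify` function, the 128-dimensional toy vectors, and the 0.6 threshold are illustrative assumptions, not any vendor’s implementation.

```python
import numpy as np

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one check: does the probe faceprint match the enrolled template?"""
    # Cosine similarity between the two feature vectors.
    score = float(np.dot(probe, enrolled) /
                  (np.linalg.norm(probe) * np.linalg.norm(enrolled)))
    return score >= threshold

# Toy vectors stand in for faceprints produced by a trained encoder.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
probe = enrolled + rng.normal(scale=0.1, size=128)   # same person, slight variation
impostor = rng.normal(size=128)                      # unrelated face

print(verify(probe, enrolled))     # expected: True
print(verify(impostor, enrolled))  # expected: False
```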

MorphoTrust has designed systems for state DMVs, federal and state law enforcement agencies, border control and airports, and the State Department. Other common vendors include 3M, Cognitec, DataWorks Plus, Dynamic Imaging Systems, FaceFirst, and NEC Global. Once the facial features are extracted and the landmarks, face position, orientation, and other key elements are fed into the software, it generates a unique numeric feature vector for each face. These numeric codes are also called faceprints, analogous to fingerprints in contact biometric systems.
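
As a rough illustration of how a numeric code per face might be assembled, the sketch below turns detected landmark coordinates into a small feature vector; the chosen landmarks, ratios, and function name are assumptions for illustration, and production systems instead derive much longer learned embeddings from the aligned pixels.

```python
import numpy as np

def faceprint_from_landmarks(landmarks: dict) -> np.ndarray:
    """Turn detected landmark positions into a numeric feature vector (a "faceprint")."""
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    interocular = np.linalg.norm(p["right_eye"] - p["left_eye"])
    features = np.array([
        np.linalg.norm(p["nose_tip"] - p["left_eye"]),
        np.linalg.norm(p["nose_tip"] - p["right_eye"]),
        np.linalg.norm(p["mouth_left"] - p["mouth_right"]),
        np.linalg.norm(p["nose_tip"] - (p["mouth_left"] + p["mouth_right"]) / 2),
    ]) / interocular  # divide by eye distance so the code is scale-invariant
    return features

landmarks = {
    "left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0),
    "mouth_left": (38.0, 80.0), "mouth_right": (62.0, 80.0),
}
print(faceprint_from_landmarks(landmarks))
```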

If the gender differential is modest, doctors may simply ignore the difference, or they may recommend a different dose for women. And even when the differential impact is devastating—such as a drug that helps men but causes birth defects when taken by pregnant women—no one wastes time condemning those drugs for their bias. Instead, they’re treated like any other flawed tool, minimizing their risks by using a variety of protocols from prescription requirements to black box warnings. Advances in security and surveillance have changed the way data is captured and how it can be used to drive decisions in the future.

Simply expanding the training set should improve accuracy and reduce differential error rates. After this, the registered face in the database is adjusted in position, size, and scale to match the user’s face, so that the software can still recognize it accurately when the face moves or the expression changes. Face recognition systems vary in their ability to identify people under challenging conditions such as poor lighting, low image resolution, and a suboptimal angle of view.
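
One common way to perform that adjustment is to compute a similarity transform (rotation, scale, and translation) that maps the detected eye positions onto canonical template positions, then warp the image with it (for example via OpenCV’s warpAffine). The template coordinates and function name below are illustrative assumptions.

```python
import numpy as np

# Canonical eye positions the aligned crop should have (illustrative values).
TEMPLATE_EYES = np.array([[38.0, 52.0], [74.0, 52.0]])  # left eye, right eye

def eye_alignment_transform(detected_eyes: np.ndarray) -> np.ndarray:
    """2x3 similarity transform mapping detected eye positions onto the template."""
    src_vec = detected_eyes[1] - detected_eyes[0]
    dst_vec = TEMPLATE_EYES[1] - TEMPLATE_EYES[0]
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = TEMPLATE_EYES[0] - R @ detected_eyes[0]
    return np.hstack([R, t[:, None]])  # pass this matrix to an image-warping routine

# Eyes detected in a tilted, off-centre photo.
M = eye_alignment_transform(np.array([[120.0, 90.0], [180.0, 110.0]]))
print(M)
```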

What Is Facial Recognition Technology

These landmarks are the key to distinguishing each face in the database. We support meaningful restrictions on face recognition use both by government and private companies. We also participated in the NTIA face recognition multistakeholder process but walked out, along with other NGOs, when companies couldn’t commit to meaningful restrictions on face recognition use. There are few measures in place to protect everyday Americans from the misuse of face recognition technology. In general, agencies do not require warrants, and many do not even require law enforcement to suspect someone of committing a crime before using face recognition to identify them.

In turn, states allow FBI access to their own criminal face recognition databases. Recognizing faces may seem natural and effortless for humans, but building facial recognition technology from scratch is challenging. It is difficult to develop an algorithm that works well under varying conditions such as large datasets, low illumination, pose variation, and occlusion. Despite these implementation challenges, adoption of facial recognition technology continues to grow because it is non-invasive and contactless. MorphoTrust, a subsidiary of Idemia (formerly known as OT-Morpho or Safran), is one of the largest vendors of face recognition and other biometric identification technology in the United States.

Our Approach To Facial Recognition

For example, the Pinellas County Sheriff’s Office in Florida may have one of the largest local face analysis databases. According to research from Georgetown University, the database is searched about 8,000 times a month by more than 240 agencies. A “false negative” is when the face recognition system fails to match a person’s face to an image that is, in fact, contained in a database.

So simply improving the lighting and exposures used to capture images should improve accuracy and reduce race and gender differences. But, like any tool, and especially like any new technology, improvements are likely. Treating face recognition differentials as an opportunity to explore society’s inherent racism, in contrast, doesn’t lead us to expect technical improvements.

Each code uniquely identifies a person among all the others in the dataset. The feature vector is then used to search the entire database of enrolled users during the face recognition process. Some argue that human backup identification (a person who verifies the computer’s identification) can counteract false positives.
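
A minimal sketch of that one-to-many search, assuming every enrolled user is represented by a numeric faceprint; the `identify` function, cosine-similarity scoring, and 0.6 threshold are illustrative choices rather than any particular system’s design. It also hints at why accuracy tends to degrade as the database grows: the more enrolled faceprints there are, the more chances some unrelated one has to score above the threshold.

```python
import numpy as np

def identify(probe: np.ndarray, enrolled: dict, threshold: float = 0.6):
    """Search every enrolled faceprint; return the closest identity, or None
    if nothing clears the match threshold."""
    names = list(enrolled)
    gallery = np.stack([enrolled[n] for n in names])               # shape (N, d)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = probe / np.linalg.norm(probe)
    scores = gallery @ q                                           # cosine similarities
    best = int(np.argmax(scores))
    return names[best] if scores[best] >= threshold else None

rng = np.random.default_rng(1)
enrolled = {f"user_{i}": rng.normal(size=128) for i in range(1000)}
probe = enrolled["user_42"] + rng.normal(scale=0.1, size=128)
print(identify(probe, enrolled))   # expected: user_42
```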

The process starts with the human eyes, which are among the easiest features to detect, and then proceeds to the eyebrows, nose, mouth, and so on, calculating measurements such as the width of the nose, the distance between the eyes, and the shape and size of the mouth. Once it finds the facial region, the algorithm is trained on large datasets to improve its accuracy in detecting faces and their positions. When researching a face recognition system, it is important to look closely at both the “false positive” rate and the “false negative” rate, since there is almost always a trade-off between the two. For example, if you are using face recognition to unlock your phone, it is better for the system to fail to identify you a few times than for it to misidentify other people as you and let them unlock your phone. If the result of a misidentification is that an innocent person goes to jail, then the system should be designed to have as few false positives as possible. That’s why we’ve been so cautious about deploying face recognition in our products, or as services for others to use.
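
That trade-off can be made concrete by sweeping the match threshold over simulated score distributions. The Gaussian scores below are made-up stand-ins for genuine (same person) and impostor (different person) comparisons, chosen only to show that raising the threshold lowers the false positive rate while raising the false negative rate.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated similarity scores: genuine pairs score high, impostor pairs score low.
genuine = rng.normal(loc=0.8, scale=0.1, size=10_000)
impostor = rng.normal(loc=0.3, scale=0.1, size=10_000)

for threshold in (0.4, 0.5, 0.6, 0.7):
    fpr = float(np.mean(impostor >= threshold))  # wrong person accepted
    fnr = float(np.mean(genuine < threshold))    # right person rejected
    print(f"threshold={threshold:.1f}  false positives={fpr:.4f}  false negatives={fnr:.4f}")
```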

Recent Developments In Ai And National Security: What You Need To Know

We’ve done the work to provide technical recommendations on privacy, fairness, and more that others in the community can use and build on. In the process we’ve learned to watch out for sweeping generalizations or simplistic solutions. Face detection is not the same as face recognition; detection just means detecting whether any face is in an image, not whose face it is. Likewise, face clustering can determine which groups of faces look similar, without determining whose face is whose.

In other words, with a false negative the system will erroneously return zero results in response to a query. Additionally, face recognition has been used to target people engaging in protected speech. In the near future, face recognition technology will likely become more ubiquitous. It may be used to track individuals’ movements out in the world, much as automated license plate readers track vehicles by plate numbers.

Threats Posed By Face Recognition

Similarly, facial recognition technology also needs to learn what a face is and what it looks like. This is done by training deep neural networks and other machine learning algorithms on a large database of images of human faces captured at different angles and positions. The use of artificial intelligence and machine learning technologies has made it possible to carry out the facial recognition process in real time. The algorithm captures incoming 2D and 3D images, depending on the device’s characteristics, and analyzes them by matching them against the images in the database.
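
As a sketch of what that learning step can look like, the snippet below defines a tiny embedding network and a triplet loss in PyTorch, assuming PyTorch is available; the architecture, input size, and random tensors are placeholders, not a description of any deployed system. The triplet loss pulls embeddings of the same identity together and pushes different identities apart, which is what makes the resulting codes comparable.

```python
import torch
import torch.nn as nn

class FaceEmbeddingNet(nn.Module):
    """Tiny CNN mapping a 3x112x112 face crop to a 128-dimensional embedding."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.head(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)  # unit-length faceprint

model = FaceEmbeddingNet()
loss_fn = nn.TripletMarginLoss(margin=0.2)
# Random tensors stand in for batches of anchor / same-person / different-person crops.
anchor, positive, negative = (torch.randn(8, 3, 112, 112) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()
print(float(loss))
```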

  • The federal government has several face recognition systems, but the database most relevant for law enforcement is the FBI’s Next Generation Identification database, which contains more than 30 million face recognition records.
  • And it needs to protect people’s privacy, providing the right level of transparency and control.
  • Recent improvements in face recognition show that disparities previously chalked up to bias are largely the result of a couple of technical issues.

Face recognition software also misidentifies other ethnic minorities, young people, and women at higher rates. Criminal databases include a disproportionate number of African Americans, Latinos, and immigrants, due in part to racially biased police practices. Therefore the use of face recognition technology has a disparate impact on people of color. In spite of face recognition’s ubiquity and the improvement in technology, face recognition data is prone to error. If the candidate is not in the gallery, it is quite possible the system will still produce one or more potential matches, creating false positive results.

How Law Enforcement Uses Face Recognition

Face recognition gets worse as the number of people in the database increases. Face recognition data is often derived from mugshot images, which are taken upon arrest, before a judge ever has a chance to determine guilt or innocence. Mugshot photos are often never removed from the database, even if the arrestee has never had charges brought against them. The FBI also has a team of employees dedicated just to face recognition searches, called Facial Analysis, Comparison and Evaluation (“FACE”) Services. The FBI can access over 400 million non-criminal photos from state DMVs and the State Department, and 16 U.S. states allow FACE access to driver’s license and ID photos. Databases are also found at the local level, and these databases can be very large.

A US Government Study Confirms Most Face Recognition Systems Are Racist

Of 52 agencies surveyed by Georgetown that acknowledged using face recognition, fewer than 10% had a publicly available use policy. Only two agencies (the San Francisco Police Department and the Seattle region’s South Sound 911) restrict the purchase of technology to those that meet certain accuracy thresholds. For example, during protests surrounding the death of Freddie Gray, the Baltimore Police Department ran social media photos through face recognition to identify protesters and arrest them. According to Governing magazine, as of 2015, at least 39 states used face recognition software with their Department of Motor Vehicles databases to detect fraud. The Washington Post reported in 2013 that 26 of these states allow law enforcement to search or request searches of driver license databases; however, it is likely this number has increased over time.

Given the network effects in this business, the United States may have permanently ceded the face recognition market to companies it can’t really trust. That’s a heavy price to pay for indulging journalists and academics eager to prematurely impose a moral framework on a developing technology. We have consistently filed public records requests to obtain previously secret information on face recognition systems. Face recognition has been used in airports, at border crossings, and during events such as the Olympic Games.

These details, such as the distance between the eyes or the shape of the chin, are then converted into a mathematical representation and compared to data on other faces collected in a face recognition database. The data about a particular face is often called a face template and is distinct from a photograph because it’s designed to include only certain details that can be used to distinguish one face from another. Later government testing reported “massive gains in accuracy” since 2012, with error rates that fell below 0.2 percent under good lighting, exposure, focus, and other conditions. In other words, used properly, the best algorithms got the right answer 99.8 percent of the time, and most of the remaining error was due not to race or gender but to aging and injuries that occurred between the first photo and the second. Law enforcement agencies are using face recognition more and more frequently in routine policing. Police collect mugshots from arrestees and compare them against local, state, and federal face recognition databases.
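
To make the template-versus-photograph distinction concrete, the sketch below stores only a short vector of derived features plus minimal metadata; the field names and sizes are assumptions for illustration, and the point is simply that the template keeps far less information than the image it came from.

```python
from dataclasses import dataclass
import numpy as np

@dataclass(frozen=True)
class FaceTemplate:
    """Derived measurements only; the original photograph is not stored here."""
    subject_id: str
    features: np.ndarray   # e.g. a 128-value faceprint derived from the image
    captured_at: str       # provenance metadata kept alongside the code

photo = np.zeros((3000, 4000, 3), dtype=np.uint8)             # a 12-megapixel image, ~36 MB
template = FaceTemplate("record-001", np.random.rand(128), "2023-01-01")
print(photo.nbytes, template.features.nbytes)                 # 36000000 vs 1024 bytes
```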

Once an arrestee’s photo has been taken, the mugshot will live on in one or more databases to be scanned every time the police do another criminal search. But face recognition data can be prone to error, which can implicate people for crimes they haven’t committed. Facial recognition software is particularly bad at recognizing African Americans and other ethnic minorities, women, and young people, often misidentifying or failing to identify them, disparately impacting certain groups.

Partly that’s because the government at least can control things like lighting and exposure, making technical errors less likely. Presumably that’s why the CBP report shows negligible error differentials for different races. And even where error differentials remain for some groups, such as the aged, there are straightforward protocols for reducing the error’s impact. As a practical matter, agencies that check IDs do not deny access just because the algorithm says it has found a mismatch. Instead, that finding generally triggers a set of alternative authentication methods—having a human double check your photo against your face and ask you questions to verify your identity.

Faces may also be compared in real-time against “hot lists” of people suspected of illegal activity. Some face recognition systems, instead of positively identifying an unknown person, are designed to calculate a probability match score between the unknown person and specific face templates stored in the database. These systems will offer up several potential matches, ranked in order of likelihood of correct identification, instead of just returning a single result. Face recognition is a method of identifying or verifying the identity of an individual using their face. Face recognition systems can be used to identify people in photos, video, or in real-time.
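
A minimal sketch of that ranked-candidate behavior, again assuming numeric faceprints for the gallery; normalizing similarities with a softmax is just one illustrative way to present a probability-like match score, and the function name and parameters are hypothetical.

```python
import numpy as np

def rank_candidates(probe: np.ndarray, gallery: dict, top_k: int = 3):
    """Return the top-k enrolled identities with a normalized match score,
    rather than a single yes/no answer."""
    names = list(gallery)
    mat = np.stack([gallery[n] for n in names])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    sims = mat @ (probe / np.linalg.norm(probe))
    weights = np.exp(10 * sims)                 # sharpen, then normalize to sum to 1
    probs = weights / weights.sum()
    order = np.argsort(sims)[::-1][:top_k]
    return [(names[i], float(probs[i])) for i in order]

rng = np.random.default_rng(3)
gallery = {f"id_{i}": rng.normal(size=128) for i in range(500)}
probe = gallery["id_7"] + rng.normal(scale=0.2, size=128)
for name, score in rank_candidates(probe, gallery):
    print(name, round(score, 3))
```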

As we’ve developed advanced technologies, we’ve built a rigorous decision-making process to ensure that existing and future deployments align with our principles. You can read more about how we structure these discussions and how we evaluate new products and services against our principles before launch. We’ve seen how useful the spectrum of face-related technologies can be for people and for society overall. It can make products safer and more secure—for example, face authentication can ensure that only the right person gets access to sensitive information meant just for them. It can also be used for tremendous social good; there are nonprofits using face recognition to fight against the trafficking of minors. False matches in that context don’t discriminate against anyone; if anything, they work in favor of individuals who are trying to commit identity theft.

From the individual’s point of view, a risk of discrimination arises only from a false report that the subject and the photo don’t match, an error that could deny the subject access to his phone or her flight. These are the numbers that drove the still widely repeated claim that face recognition is irretrievably racist. In fact, that claim relies on data from an early stage in the technology’s development. And it has frozen the narrative by invoking a political and moral context that makes it hard to acknowledge the dramatic improvements in face recognition that followed the 2012 study. The Illinois Biometric Information Privacy Act requires notice and consent before the private use of face recognition tech.

The way these technologies are deployed also matters—for example, using them for authentication is not the same as using them for mass identification. So technical improvements may narrow but not entirely eliminate disparities in face recognition. Even if that’s true, however, treating those disparities as a moral issue still leads us astray. The world is full of drugs that work a bit better or worse in men than in women.