By 2022, passwords and PINs may well be a thing of the past. Poised to replace these prevailing security measures is behavioral biometrics, a new and promising generation of digital security. By monitoring and recording patterns of human activity such as finger pressure, the angle at which you hold your device, hand-eye coordination, and other hand movements, this technology builds a digital profile of you to prevent imposters from accessing your secure information. Behavioral biometrics focuses not on the outcome of your digital activity but on the manner in which you enter data or perform a specific action, which is then compared against your profile on record to verify your identity. Largely used by banks at present, the technology is spreading fast: market researchers predict that by 2023 there will be 2.6 billion biometric payment users.
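The matching step described above can be sketched in a few lines. This is a deliberately simplified illustration, not a production scheme: the feature names, enrolled values, and the deviation threshold are all invented for the example, and real systems use far richer statistical models than a per-feature tolerance check.

```python
# Hypothetical enrolled profile: typical behavioral measurements captured
# during past sessions. All feature names and values are illustrative.
ENROLLED_PROFILE = {
    "key_hold_ms": 112.0,      # average time a key stays pressed
    "touch_pressure": 0.62,    # normalized finger pressure
    "swipe_angle_deg": 48.0,   # typical angle the device is held at
}

THRESHOLD = 0.15  # maximum tolerated relative deviation per feature (assumed)

def verify(session_sample: dict) -> bool:
    """Compare a live session's behavioral features to the enrolled profile.

    Accept only if every feature deviates from the enrolled value by less
    than THRESHOLD (relative deviation), i.e. the behavior "matches".
    """
    for feature, enrolled_value in ENROLLED_PROFILE.items():
        observed = session_sample[feature]
        deviation = abs(observed - enrolled_value) / enrolled_value
        if deviation > THRESHOLD:
            return False
    return True

# A session close to the enrolled pattern is accepted...
print(verify({"key_hold_ms": 115.0, "touch_pressure": 0.60, "swipe_angle_deg": 50.0}))  # True
# ...while a markedly different rhythm (a possible imposter) is rejected.
print(verify({"key_hold_ms": 180.0, "touch_pressure": 0.40, "swipe_angle_deg": 20.0}))  # False
```

Note that the comparison is against *how* the data is entered, not *what* is entered: the same correct password typed with an unfamiliar rhythm would still fail the behavioral check.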
Biometric systems depend on an unusually direct and detailed relationship between a user and technology. Consequently, privacy is one of the main concerns raised by critics of biometric systems. Because these systems function as digitized reserves of detailed personal information, many legitimately fear that unauthorized parties could use them to access stored data. Depending on how extensive the use of biometric technology becomes, an individual's biometric profile could be stolen and used against them to gain access to all aspects of their life. Adding to this worry is the potential misuse of an individual's personal information by biometric facilities. Any inessential use of private information without the individual's knowledge is intuitively unethical and amounts to an invasion of privacy, yet the US currently has no federal law requiring apps that record and use biometric data to disclose this form of data collection. If behavioral biometrics is already being used to covertly record and compile user activity, who's to say how extensive and intrusive unregulated biometric technology will become over time?
Another issue with biometric applications is the possibility of bias against minorities: a substantial body of research suggests that face recognition software recognizes some races more reliably than others. A series of extensive independent assessments of face recognition systems conducted by the National Institute of Standards and Technology in 2000, 2002 and 2006 found that males and older people are identified more accurately than females and younger people. If algorithms are designed without accounting for the possibility of such unintended biases, the resulting systems will treat some groups unfairly, and are to that extent unethical.
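The kind of differential accuracy the NIST assessments reported can be made concrete with a simple measurement: compare, per demographic group, how often genuine users are wrongly rejected (the false non-match rate). The data below is synthetic and purely illustrative, not taken from any real evaluation.

```python
# Synthetic match outcomes, NOT real face-recognition data.
# Each record: (demographic_group, genuine_attempt_was_accepted)
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def false_non_match_rates(records):
    """Fraction of genuine attempts wrongly rejected, per demographic group."""
    totals, rejections = {}, {}
    for group, accepted in records:
        totals[group] = totals.get(group, 0) + 1
        if not accepted:
            rejections[group] = rejections.get(group, 0) + 1
    return {g: rejections.get(g, 0) / totals[g] for g in totals}

rates = false_non_match_rates(results)
print(rates)  # {'group_a': 0.25, 'group_b': 0.75}
```

A gap like the one in this toy output (one group rejected three times as often as another) is exactly what auditors look for; an ethical deployment would measure these rates per group before and after release, not assume uniform accuracy.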
By the same token, people with disabilities may face obstacles when enrolling in biometric databases if they lack the physical characteristics the system uses for registration. An ethical biometric system must cater to the needs of all people and give differently abled and marginalized people a fair opportunity to enroll. Similarly, a lack of standardization across biometric systems to accommodate geographic differences could compromise the efficiency of biometric applications, leaving some users facing discrimination and unnecessary obstacles in the authentication process.
Behavioral biometrics is gaining traction as the optimal form of cybersecurity for preventing fraud via identity theft and automated threats, yet the social cost of adopting technology this invasive and meticulous has not been fully explored. The social and ethical consequences that behavioral biometrics may have for individuals and society at large deserve serious consideration. It is therefore imperative that developers and deployers of biometric systems keep in mind the socio-cultural and legal contexts of this technology and weigh the benefits of relying on behavioral biometrics to secure personal information against its costs. Failure to do so may not only hinder the success of behavioral biometrics but also leave us unequipped to tackle its repercussions.