That document recommends encouraging people to use long, memorable passwords rather than forcing frequent password changes or requiring special characters. It also lays down tougher ground rules for providing remote access to systems like those of the IRS and many other agencies with sensitive data.
In person, government departments generally ask for a photo ID like a driver’s license. Online or over the phone, many agencies have previously verified identity by asking for information that could be checked against a person’s government file or credit report. But harvesting the personal data needed to spoof that kind of check has become easier in the era of social networks and mass data breaches.
NIST’s 2017 standard says that access to systems that can leak sensitive data or harm public programs should require verifying a person’s identity by comparing their face to a photo ID—either remotely or in person—or using biometrics such as a fingerprint scan. It says that a remote check can be done either by video with a trained agent, or using software that checks an ID’s authenticity and the “liveness” of a person’s photo or video.
“A tool that creates more problems can’t be hailed as a solution.”
Caitlin Seeley George, director of campaigns and operations, Fight for the Future
ID.me was well positioned to take advantage of the new standards, which federal agencies must comply with. The company was founded in 2010 as a deals website for veterans and active military and developed a system for checking military IDs used by the Department of Veterans Affairs. It won millions of dollars in federal grants to explore new approaches to digital identity that helped inform the 2017 standards and became the first company accredited as compliant with them. In 2019, ID.me signed a contract with the VA that has so far paid out more than $30 million.
During the pandemic, ID.me has won a surge of new business—and scrutiny. States hired ID.me to screen claims for Covid-19 aid that overwhelmed many employment departments. But nonprofits and lawmakers have complained about its use of face recognition and said some vulnerable citizens can’t get through the company’s checks. California’s Employment Development Department said that ID.me blocked more than 350,000 fraudulent claims in the last three months of 2020. But the state auditor estimated that 20 percent of legitimate claimants were unable to verify their identities with ID.me.
Caitlin Seeley George, director of campaigns and operations with nonprofit Fight for the Future, says ID.me uses the specter of fraud to sell technology that locks out vulnerable people and creates a stockpile of highly sensitive data that itself will be targeted by criminals. “A tool that creates more problems can’t be hailed as a solution,” she says. “Facial recognition is notorious for misidentifying Black and brown faces, gender-nonconforming people, women, and children.”
In an interview this week, ID.me CEO Blake Hall claimed that his company in fact widens access because its remote ID checking works for people without credit histories who often fail conventional checks. He claimed many problems with access to pandemic aid were caused by state agencies failing to provide adequate in-person services and that ID.me’s in-person locations provide a backstop.
Some of Hall’s claims have proven slippery. Bloomberg News questioned his estimate that $400 billion of federal pandemic relief was stolen; Hall says a detailed report on ID.me’s experience fighting unemployment fraud is coming soon. On Wednesday, he reversed his earlier statements that ID.me used face recognition only to compare a person’s face to the ID they provided.
Hall told WIRED that ID.me retains images and videos uploaded during its verification process only to protect accounts from being taken over by fraudsters. He said the company uses face recognition technology from Paravision, which is among the most accurate ever tested by NIST—although algorithms can perform very differently depending on how they are deployed. A 2019 NIST report on demographic bias in face recognition concluded that while many algorithms show different performance across demographics, the most accurate can be equitable.