The State of Third-Party Insurance Verification – Research Report
January 29, 2020
In 2016, synthetic identity fraudsters stole more than $6 billion. Four years later, not much has changed. Patience is the name of the game with these criminals: the average payout of roughly $15,000 per scheme is often reinvested to finance future identity-related attacks.
Risk-detection systems still fail to flag an estimated 85-95% of synthetic identities. A synthetic identity is created when a cybercriminal combines genuine personally identifiable information (PII) with fabricated biographical data to form a new identity. Once created, a synthetic ID can be used to open bank accounts, make purchases, apply for loans or credit cards, and more.
Synthetic identity fraud is often more difficult to repair or to recover from than straightforward identity theft, because there’s no true victim (except the business that’s being defrauded). The perpetrator falsifies or steals one or more core identity attributes – such as a name, address, date of birth, or Social Security number (SSN) – and assembles an entirely new identity that combines real and fake data.
Fraudsters are always looking for ways to gain access to a system, and because SSNs have been issued randomly since 2011 and there is no publicly accessible central database to validate them, synthetic identity fraud has increased dramatically, with children the most vulnerable targets. Children’s SSNs often lack associated activity and go largely unchecked until they’re old enough to apply for college or a job. SSNs of deceased individuals are also easy targets due to inactivity.
Without a centralized database of SSNs and corresponding names readily available, anyone who attempts to access this type of information must submit a written consent form, a process that can’t be used for remote identity verification. Credit data based on public records can seemingly help weed out synthetic ID fraudsters, but it’s not a source of truth, and it requires big-data and advanced-analytics capabilities that aren’t always accurate.
The once-effective Know Your Customer (KYC) regulations – which require banks and lenders to verify customer identities to ensure they’re not doing business with criminals or engaging in illegal activity – ultimately prove useless against synthetic identity attacks. Fraudsters can meet KYC requirements with real stolen PII, while masking their true identity with falsified data that won’t trigger any red flags.
Cybercriminals have shifted to synthetic identity fraud because of the controls that have been put in place around stolen identities, not to mention that this type of fraud can be far more lucrative than stealing real identities. Synthetic identity fraud numbers will only continue to rise, mainly due to the sharp increase in data breaches that expose real PII to fraudsters on the dark web.
“Synthetic identity fraud is a growing problem in the U.S. payments ecosystem that affects consumers, small and large businesses, financial institutions, government agencies and the healthcare industry. Fraudsters are more sophisticated and organized, crime rings are run as lucrative businesses, data breaches are frequent and the availability of PII on the dark web is staggering. We expect fraudsters will continue to commit this type of crime due to the lack of victims reporting fraud, difficulty in detection and high payoffs for fraudsters – compounded by increased digitization of the financial system.”
Synthetic identity attacks are a long con in which fraudsters drop breadcrumbs with a few small transactions or a handful of new user accounts to establish a track record and to generate some credibility that the synthetic identity is real and belongs to an actual person. Eventually, the transactions and accounts get larger and larger, until finally, they reach their big payoff.
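The escalation pattern described above suggests a simple monitoring heuristic: flag accounts whose recent transaction amounts dwarf the small "breadcrumb" transactions of their early history. A minimal sketch in Python; the function name, window size, and growth threshold are all illustrative assumptions, not parameters from any real fraud platform:

```python
from statistics import mean

def escalation_flag(amounts, seed_window=5, growth_factor=3.0):
    """Return True if recent transaction amounts dwarf the early 'seeding' ones.

    amounts: chronological list of transaction amounts for one account.
    seed_window: how many early transactions define the baseline.
    growth_factor: how much larger recent activity must be to raise a flag.
    """
    if len(amounts) < 2 * seed_window:
        return False  # not enough history to judge
    baseline = mean(amounts[:seed_window])   # small "breadcrumb" phase
    recent = mean(amounts[-seed_window:])    # potential bust-out phase
    return baseline > 0 and recent / baseline >= growth_factor

# A hypothetical account that seeds small purchases, then escalates sharply:
history = [20, 35, 25, 30, 40, 150, 400, 900, 2500, 6000]
print(escalation_flag(history))  # True
```

A real system would combine a signal like this with account age, velocity checks, and shared fraud data, but even a crude ratio of recent to early activity captures the long-con shape.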
Since no single consumer is affected by synthetic identity fraud, there is no reporting of identity theft, and, because synthetic IDs look like real IDs, companies often don’t even realize they’re dealing with a fraudster. This enables con artists to open numerous accounts under multiple fake names and attributes – in one extreme case, an organized cybercrime ring generated more than 7,000 synthetic IDs.
There are three main personas that leverage synthetic identities:
It’s easy for any of these personas to create a synthetic ID and bypass platforms that are only verifying names, addresses, and SSNs. It’s much harder to get away with this type of fraud when companies are employing a higher level of identity assurance, or leveraging step-up verification methods that weed out bad actors.
The only way to stop synthetic identity attacks is to share information when fraud occurs, or when a synthetic identity has been detected. Methods for sharing this data must be anonymous and private, and should be easy to access across multiple industries, as synthetic identity attackers will attempt to defraud many different types of companies. Sharing data is the only way to combat this issue, because no single organization holds a large enough data set of synthetic identities to perform meaningful analysis or build predictive indicators.
As data sets continue to expand, artificial intelligence and machine learning become more effective at flagging false documents, and can identify even the slightest inaccuracies that might warrant a closer look. But because most of the data in a synthetic identity is in fact valid, some remote identity verification techniques are less effective than others at weeding out the frauds.
Potential risk signals and indicators of synthetic identity fraud:
Continuous monitoring of an applicant’s identity is one proactive way to help combat synthetic ID fraud. Too many companies today verify an applicant once upon enrollment, and then never again. Ongoing identity management and verification can impede a fraudster’s efforts, thus deterring them from taking advantage of your business.
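Operationally, ongoing verification can be as simple as tracking when each account was last verified and queuing re-checks on a fixed cadence. A minimal sketch, assuming an in-memory store and a quarterly re-verification policy; all names and the 90-day interval are illustrative assumptions:

```python
from datetime import datetime, timedelta

REVERIFY_INTERVAL = timedelta(days=90)  # assumed policy: re-verify quarterly

class VerificationLedger:
    """Tracks when each account's identity was last verified."""

    def __init__(self):
        self._last_verified = {}

    def record_verification(self, account_id, when):
        self._last_verified[account_id] = when

    def due_for_reverification(self, now):
        """Return account IDs whose last verification is older than the interval."""
        return [acct for acct, ts in self._last_verified.items()
                if now - ts > REVERIFY_INTERVAL]

ledger = VerificationLedger()
ledger.record_verification("acct-1", datetime(2019, 9, 1))
ledger.record_verification("acct-2", datetime(2020, 1, 15))
print(ledger.due_for_reverification(datetime(2020, 1, 29)))  # ['acct-1']
```

The point is not the data structure but the policy: verification becomes a recurring lifecycle event rather than a one-time gate at enrollment.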
Liveness Detection and other types of biometric verification are also very effective at weeding out synthetic IDs. In this approach, an applicant is asked to prove their identity by taking a “live” selfie that uses facial recognition and motion-detection technology, or by providing a fingerprint or a retina scan, any of which would indicate that they are the true owner of the ID document.
Step-Up Verification incorporates a combination of techniques designed to present obstacles that only true, valid individuals can successfully navigate. Some companies start with a low level of verification assurance, like an SSN search. Anyone who is flagged is escalated to a higher level of identity verification assurance, like an ID document capture with liveness detection, where fraudsters can be quickly disqualified from the onboarding process.
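The step-up flow described above can be sketched as a tiered decision function: a cheap first-pass check, with escalation only for flagged applicants. The checks here are stand-in callables, and the tier names and return values are illustrative assumptions:

```python
def step_up_verify(applicant, ssn_check, document_liveness_check):
    """Tiered verification: cheap check first, escalate only when flagged.

    ssn_check / document_liveness_check: callables returning True on pass.
    Returns 'approved', 'approved_after_step_up', or 'rejected'.
    """
    if ssn_check(applicant):
        return "approved"                 # low-assurance tier passed
    # Flagged: escalate to a higher-assurance tier instead of rejecting outright.
    if document_liveness_check(applicant):
        return "approved_after_step_up"
    return "rejected"                     # failed both tiers

# Hypothetical applicants and stub checks for illustration:
clean = {"ssn_on_file": True, "live_selfie_match": True}
synthetic = {"ssn_on_file": False, "live_selfie_match": False}
check_ssn = lambda a: a["ssn_on_file"]
check_doc = lambda a: a["live_selfie_match"]

print(step_up_verify(clean, check_ssn, check_doc))      # approved
print(step_up_verify(synthetic, check_ssn, check_doc))  # rejected
```

The design keeps friction low for legitimate applicants (most never see the second tier) while a synthetic ID, built on valid-looking but unowned attributes, tends to fail at the liveness step.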
Remote In-Person Verification is one of the most effective ways to combat synthetic identity fraud, but because it offers one of the highest levels of assurance, it can be very expensive for a business to implement, and it can introduce significant friction and frustration for the applicant.