This article focuses on a threat I’ve been talking about a lot: the rise of deepfakes.
So many companies base their user verification process on document and face scans.
If there’s a match, the user is verified. ✅
But creating a fake document was already easy a couple of years ago, and it’s only gotten easier with GenAI.
And the facial recognition component? That’s less of a barrier for fraudsters now, too.
Most people have photos of their faces in a number of places online, especially on social media.
It’s easy for a fraudster to look someone up, get their picture, and use it to bypass facial recognition.
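To make the weakness concrete, here’s a minimal sketch of the match step behind this kind of verification. It assumes a typical embedding-based pipeline: a face-recognition model maps each image to a vector, and two faces “match” if their cosine similarity clears a threshold. The embeddings, noise level, and threshold below are all illustrative stand-ins, not any vendor’s real values; the point is that a scraped social-media photo of the same person lands close to the ID photo in embedding space and clears the same threshold.

```python
import numpy as np

# Hypothetical embeddings. In a real system these would come from a
# face-recognition model that maps a face image to a vector; the values
# here are synthetic and purely illustrative.
rng = np.random.default_rng(0)
id_photo_embedding = rng.normal(size=128)  # face extracted from the ID document

# A scraped social-media photo of the same person yields a nearby vector:
scraped_embedding = id_photo_embedding + rng.normal(scale=0.05, size=128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.8  # illustrative; real systems tune this per model

def is_verified(probe: np.ndarray, reference: np.ndarray) -> bool:
    """The core decision: above the threshold, the user is 'verified'."""
    return cosine_similarity(probe, reference) >= MATCH_THRESHOLD

print(is_verified(scraped_embedding, id_photo_embedding))  # True
```

The check has no way of knowing whether the probe image came from a live camera or from a photo the fraudster found online; anything that produces a sufficiently similar embedding passes.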
At this point, people in the facial recognition space would say, “Ah yes, but if you use liveness detection then you’re safe from that.”
No, you’re not.
The reality is that the GenAI tools available to fraudsters have grown in sophistication very quickly, and it’s important not to underestimate them. They can bypass facial recognition even with liveness detection, for example by injecting a real-time deepfake video stream through a virtual camera.
So where does that leave you?
Any identity verification method that’s based on using public information is extremely vulnerable right now.
The same applies to PII verification. Given the endless stream of new data breaches, let’s face it: our PII is essentially public data now.
Document verification, facial recognition, PII verification… these methods are all doomed due to the use of GenAI.
Sure, you can check the compliance box with one of these methods. But you won’t be preventing any fraud.
Any company that’s leveraging these techniques for fraud prevention needs to rethink its KYC and IDV approach right now.