Fighting fraudsters is a difficult job in the digital era because fraud stubbornly refuses to be a static target.
Instead, digital fraud is an ever-evolving field that is always readying its next move. It looks for its next entry point and adopts the latest technology or methodology to make the job of making off with ill-gotten data or funds easier from the inside and harder to detect from the outside.
And while that news is far from comforting, particularly given all of the successful fraud escapades of the digital era, Onfido Vice President of Product Management, Fraud, Albert Roux told PYMNTS that it is not a hopeless situation. As a whole, the financial services industry is evolving alongside the fraudster, layering in additional defenses beyond that first line of knowing your customer created by the compliance requirements that govern financial services.
He said those requirements aren’t an endpoint when it comes to building protections for consumers but a springboard from which financial institutions (FIs) can layer in digital signals like networks, device information and biometrics to understand what is happening at the transaction level. FIs need “to continuously authenticate the user to make sure that you’re not dealing with a data thief or an account takeover, or someone attempting money laundering,” Roux said.
There may not be a silver bullet that stops all fraud at once, he said, but that isn’t what financial organizations need to keep fraudsters out. What they do need is layered defenses that continually filter data through various streams to verify that the consumer is who they think they are dealing with. That is something fraudsters are making more difficult every day.
A few years ago, the idea that fraudsters could use deepfake technology to beat biometric scans and gain access to unwitting consumers’ accounts sounded paranoid, he said. The static images the programs could create weren’t sophisticated enough to fool a good biometric scanner programmed to look for signs of lifelike movement in an image.
But today, that technology has advanced radically. It is now easily possible to create convincing, realistic content that fakes a real person’s face, or even depicts a “person” who never existed at all: a combination of images and data stitched together into a synthetic identity designed to fool a fraud detection system, he said.
Staying technologically ahead of these kinds of attacks means spending a lot of time working with them directly in the lab, he said, ensuring that security technology keeps pace with the technological advances in defrauding consumers.
But even more complicated than the great leaps forward in technology-driven fraud is the tactical methodology behind social engineering scams, which are also advancing. These frauds, in which cybercriminals enlist consumers unawares into their schemes, are much more difficult to spot.
They look like good transactions because, in a sense, they are. There is no “wrong user” pretending to be someone they aren’t, only a right user being coerced and tricked into voluntarily doing something they don’t want to do.
“For us, the type of user coercion social engineering is the one that is most difficult to detect,” Roux said.
But difficult though it may be to see, fighting back is a matter of putting simple rules in place that secure the account and keep watch over the genuine user’s safety, and that can still happen passively in the background of a transaction. Is the user making changes to their account at a time of day that makes sense given their prior habits? Does the service or request they are making align with their historical use? Are they taking far longer than they should to answer questions and navigate the page?
None of those pieces of data alone is proof positive that something is amiss. But when they start to appear together, Roux said, they begin to paint a picture of fraud in progress, if the system is set up to recognize it.
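The kind of layered rule check described above can be sketched as a handful of weak signals combined into a single risk score. The signal names, weights and thresholds below are illustrative assumptions for the sake of the sketch, not Onfido’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Illustrative behavioral signals for one session (hypothetical names)."""
    hour_of_day: int          # when the change is being made
    usual_hours: range        # the user's historical activity window
    request_type: str         # e.g. "new_payee", "password_change"
    historical_requests: set  # request types seen before for this user
    seconds_on_page: float    # time spent answering questions / navigating
    typical_seconds: float    # the user's historical median

def risk_score(s: SessionSignals) -> float:
    """Each rule alone proves nothing; firing together, they raise a flag."""
    score = 0.0
    if s.hour_of_day not in s.usual_hours:
        score += 1.0  # activity at an unusual time of day
    if s.request_type not in s.historical_requests:
        score += 1.0  # request out of character for this user
    if s.seconds_on_page > 3 * s.typical_seconds:
        score += 1.0  # long hesitation can suggest coaching by a scammer
    return score

signals = SessionSignals(
    hour_of_day=3,
    usual_hours=range(8, 22),
    request_type="new_payee",
    historical_requests={"balance_check", "card_freeze"},
    seconds_on_page=240.0,
    typical_seconds=45.0,
)
print(risk_score(signals))  # all three rules fire: 3.0
```

A real system would run checks like these passively in the background and escalate only when the combined score crosses a threshold, rather than acting on any single signal.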
“If you look at data over thousands and thousands of user behavioral data points, across our clients’ accounts, there’ll be specific patterns that you can see,” Roux said. “That is where maybe you can leverage some statistical approach. And that is also a little bit more complex. You have to use multiple tools from machine learning [ML] to simple rules to counter those attacks.”
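The statistical approach Roux alludes to can be sketched minimally: learn what “normal” looks like from many historical behavioral data points, then flag sessions that deviate sharply. The feature names and the three-standard-deviation cutoff below are assumptions for illustration, not a production model.

```python
import statistics

def fit_baseline(sessions: list[dict]) -> dict:
    """Compute per-feature mean and standard deviation from history."""
    baseline = {}
    for feature in sessions[0]:
        values = [s[feature] for s in sessions]
        baseline[feature] = (statistics.mean(values), statistics.stdev(values))
    return baseline

def is_anomalous(session: dict, baseline: dict, z_cutoff: float = 3.0) -> bool:
    """Flag the session if any feature sits more than z_cutoff
    standard deviations away from its historical mean."""
    for feature, value in session.items():
        mean, stdev = baseline[feature]
        if stdev > 0 and abs(value - mean) / stdev > z_cutoff:
            return True
    return False

# Hypothetical history: 100 past sessions with stable behavior.
history = [{"typing_speed": 60 + i % 5, "session_secs": 40 + i % 7}
           for i in range(100)]
baseline = fit_baseline(history)

print(is_anomalous({"typing_speed": 62, "session_secs": 43}, baseline))   # False
print(is_anomalous({"typing_speed": 15, "session_secs": 300}, baseline))  # True
```

This is the “simple rules” end of the spectrum he describes; the same idea scales up to trained ML models when the patterns are too subtle for fixed thresholds.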
The cost of fraud in financial services is high, both in dollars lost to thieves and in consumers’ trust that their funds are safe. And fraud isn’t going to fade out, back off or disappear on its own. As long as there are consumer accounts to tap into illicitly, or digital channels that can be used to launder illicit gains, fraudsters will seek them out and exploit them, because doing so is lucrative.
“In terms of how to stop those crime rings, you basically need to develop a new ML algorithm approach to counter them or put more specific rules in place,” he said. “With as much information as possible, it becomes possible to stop the fraudster because they have fewer and fewer places they can hide and exploit the system.”