Analyzing user behavior to stop fake accounts

Online account fraud is a big, automated business. NuData Security looks at hundreds of data points to identify malicious login attempts.

Last week, I talked about the future of user authentication, in particular continuous and seamless user authentication as the way to fight logon and transaction fraud. I mentioned that many companies, including credit card processors, had been doing something similar for decades. One of those processors, Mastercard, offers some of its fraud-detection expertise through NuData Security, which it acquired in 2017.

NuData focuses on stopping malicious automation, account takeover (ATO), new account fraud, and interaction fraud, along with known-user detection. It achieves higher accuracy rates (less user friction and higher fraud detection) by using hundreds of individual user attributes and actions. NuData analyzes over 200 billion events annually and claims that 40 percent of account events rank as high-risk. Not all of those are malicious, but a big percentage of them are.

Where to look for online fraud

Besides fighting to confirm whether a user trying to log on is legitimate, every online company has a bunch of other places to check for fraud. These include:

  • Account creation
  • Account authentication changes
  • Account contact information changes
  • High-risk transactions
  • Fraudulent account use
  • Account deletion

I’m sure any online site owner can come up with even more areas of fraud concern. When I talk about these types of fraudulent actions, I’m mostly thinking about individual hackers trying to commit crimes. Online fraud is big business and most of this sort of crime has been automated. The big crime families aren’t creating and abusing a few accounts; they are creating and abusing tens of thousands a day. To that end, they create a bevy of custom tools, bots, and scripts to do their bidding.

For example, Facebook recently recognized and deleted 583 million fake accounts, and Twitter removed 70 million accounts. Google and Microsoft both get tens of millions of fake email account attempts a year. Every website of any decent size fights fake accounts, postings and transactions.

CAPTCHA tests not cutting it

To fight fake online account creation, many services have turned to CAPTCHA tests. CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” You and your friends know it as that pain-in-the-butt, “obscurified” series of twisted letters you have to figure out and retype to finish up your account creation. CAPTCHA tests have been around since 1997.

CAPTCHA’s “secret” to defeating malicious automation is to make the challenges difficult enough that automated optical character recognition (OCR) programs can’t solve them, but humans can. For the most part, CAPTCHAs have worked, although I have read about hacking operations that pay human accomplices to solve just the CAPTCHA component while the rest of the registration is automated. Teams of these accomplices sit at their computers, each solving thousands of CAPTCHA challenges per day.

I and many others hate traditional CAPTCHAs. To defeat the malicious OCR programs, they’ve become increasingly harder for me, the legitimate human, to solve. I find myself getting the answer wrong, asking for a new CAPTCHA to solve, and sometimes even resorting to turning up my speakers and asking the CAPTCHA system to read the letters to me. Even when it works perfectly the first time, a CAPTCHA is annoying, and causing user friction is never good.

reCAPTCHA aims to remove user friction

An improved CAPTCHA called reCAPTCHA, now owned by Google, is becoming more common. Instead of requiring users to figure out distorted letters, reCAPTCHA analyzes user mouse movements and behavior to help distinguish automation from legitimate users. Google, recognizing the threat of malicious automation to the entire online ecosystem, lets other services and sites use reCAPTCHA for free, leading to an explosion of its use.

You now see a lot of version 2 reCAPTCHA on websites today, where you are asked to click the “I’m not a robot” checkbox near the end of your user account registration. It’s a huge improvement over asking users to decipher letters, although in practice a slight delay before or after clicking the reCAPTCHA v2 button often leads to registration errors. Then the user has to back up and try again: more user friction.

Google is now beta testing version 3, reCAPTCHA v3, which tries to eliminate any obvious user friction. It looks at various user behaviors and then “invisibly” ranks the user with a score between 1.0 (legitimate user) and 0.0 (malicious automation). The site can then allow or deny the new user account registration, or adaptively submit the requesting user to further validity tests. This is along the lines of the seamless user authentication I talked about last week.
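That adaptive step happens on the site’s backend, which receives the score and decides what to do with it. Here is a minimal sketch of that decision logic; the threshold values are my own illustrative assumptions (each site tunes its own cutoffs), and the JSON payload is a made-up sample, not a live API response.

```python
import json

# Thresholds are illustrative assumptions; each site picks its own cutoffs.
ALLOW_THRESHOLD = 0.7
CHALLENGE_THRESHOLD = 0.3

def decide(score: float) -> str:
    """Map a reCAPTCHA v3-style score (1.0 = likely human, 0.0 = likely bot)
    to an adaptive action: allow, extra verification, or deny."""
    if score >= ALLOW_THRESHOLD:
        return "allow"
    if score >= CHALLENGE_THRESHOLD:
        return "challenge"  # e.g., step up to email or SMS verification
    return "deny"

# A sample verification-style JSON payload, hardcoded for illustration only.
sample_response = json.loads('{"success": true, "score": 0.4, "action": "signup"}')
print(decide(sample_response["score"]))  # -> challenge
```

The middle "challenge" band is what makes the flow adaptive: most users sail through invisibly, and only borderline scores trigger extra friction.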

NuData Security aims to lessen user friction, improve fraud detection

Although Google’s solution is free, it is far from alone in trying to solve online fraud problems. Many companies, like NuData Security, have been working on the issue for many years and feel their solutions go beyond CAPTCHA’s fraud-detection capabilities.

Again, NuData claims to achieve higher accuracy rates while reducing user friction by analyzing hundreds of user attributes and actions, including device attributes, user behavior, passive biometric verification, and what it calls a “Behavioral Trust Consortium.”

NuData looks at hundreds of user and transaction attributes and compares them against its huge database of transactions to help determine whether the current attempt is likely to be malicious. For example, are you coming in on the same device, from the same location, or is the location new? Is something trying to obscure the device or its location? Is the user holding the device at the same angle as they usually do? Did the user change browsers, how fast are they typing, how fast are they surfing between pages, and how long do they dwell on individual pages?

NuData then compares all the collected attributes of the current session with the user’s past history, and also against the history of all other users. That historical data is held in the Behavioral Trust Consortium.
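The per-user half of that comparison can be sketched as weighted mismatch checks between the current session and a stored profile. The attribute names, weights, and deviation rule below are hypothetical illustrations, not NuData’s actual model.

```python
# Hypothetical sketch of per-user behavioral comparison. Attribute names,
# weights, and the deviation rule are illustrative, not NuData's model.

def risk_score(session: dict, profile: dict) -> float:
    """Sum weighted mismatches between the current session and the user's
    historical profile. A higher score means a more suspicious session."""
    weights = {"device_id": 0.4, "country": 0.3,
               "browser": 0.1, "typing_speed_cps": 0.2}
    score = 0.0
    for attr, weight in weights.items():
        if attr == "typing_speed_cps":
            # Numeric attribute: flag a large relative deviation from baseline.
            baseline = profile[attr]
            if abs(session[attr] - baseline) / baseline > 0.5:
                score += weight
        elif session[attr] != profile[attr]:
            score += weight
    return score

profile = {"device_id": "dev-123", "country": "US",
           "browser": "Firefox", "typing_speed_cps": 6.0}
session = {"device_id": "dev-999", "country": "RO",
           "browser": "Firefox", "typing_speed_cps": 14.0}
print(round(risk_score(session, profile), 2))  # -> 0.9
```

Here a new device, new country, and doubled typing speed push the score up, while the unchanged browser contributes nothing: the same shape of reasoning as the questions in the paragraph above, just reduced to toy arithmetic.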

The value of that last comparison can’t be overstated. NuData has data on many millions of interactions, and it knows what a set of attributes indicating likely fraud looks like. It can see and help respond to massive automation attacks because it is seeing and following them in real time. Once it detects a common set of attributes that has been defined as malicious, NuData can quickly identify those same attributes in a bunch of other “users” undergoing validation.
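The cross-user side of this can be sketched as counting how many sessions share an identical attribute fingerprint; real humans vary, while a bot fleet tends to repeat itself. The field names and threshold are assumptions for the example, not a description of NuData’s internals.

```python
from collections import Counter

# Illustrative sketch of cross-user automation detection: many sessions
# sharing one identical attribute fingerprint suggest a bot fleet.
# Field names and the threshold are assumptions for this example.

def flag_automation(sessions: list, threshold: int = 3) -> set:
    """Return the set of attribute fingerprints seen at least
    `threshold` times across the given sessions."""
    fingerprints = Counter(
        (s["user_agent"], s["screen_size"], s["typing_cadence_ms"])
        for s in sessions
    )
    return {fp for fp, count in fingerprints.items() if count >= threshold}

sessions = [
    {"user_agent": "UA-1", "screen_size": "1920x1080", "typing_cadence_ms": 80},
    {"user_agent": "UA-1", "screen_size": "1920x1080", "typing_cadence_ms": 80},
    {"user_agent": "UA-1", "screen_size": "1920x1080", "typing_cadence_ms": 80},
    {"user_agent": "UA-2", "screen_size": "1366x768", "typing_cadence_ms": 240},
]
print(flag_automation(sessions))  # the three identical sessions are flagged
```

Once a fingerprint is flagged, every later session matching it can be challenged or blocked on arrival, which is the real-time advantage of a shared consortium dataset.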

NuData thinks that its huge data repository and sophisticated data analytics give it a leg up on the competition. Ryan Wilk, vice president of customer success for NuData Security, puts it this way: “Without sophisticated intelligence behind the CAPTCHA puzzle, get ready for over 67 percent of all automation to walk past security controls with ease.”

All I can say is that the competition to improve fraud detection using continuous and seamless user authentication is good for all of us.

Stories by Roger A. Grimes
