April 2020
Like many people forced to work from home by the Coronavirus, I’ve been educating myself on the subject of infectious viruses.
And reading the on-line literature, I notice many parallels between the behavior of viruses and that of the criminal organizations that commit fraud.
If you’ve been reading Black Swan in recent months, you’ve seen a number of interviews with firms who focus on protecting identity and guarding against subscription fraud, bank account takeover, etc.
We’ve spoken to firms who: 1) protect mobile operator signaling; 2) track identities anonymously across a network of on-line businesses; 3) deliver timely telecom number porting data; and 4) build an analytic decision framework for managing the inputs of many underlying fraud solution providers.
Well, in this story, we turn to another vital area of identity verification: behavioral analysis. And joining us to discuss this subject is Robert Capps, VP of Market Innovation for NuData Security, a Mastercard company.
Robert provides great detail on some key issues: 1) the importance of behavioral analysis — even when the device identity is known; 2) the challenge of detecting human-like automation by the fraudsters; and 3) the reasons why an analysis of browser level activity delivers an extra edge in identifying fraud attacks.
Dan Baker, Editor, Black Swan Telecom Journal: Robert, telecoms are the super-highway for on-line fraud. For that reason alone, I suppose telecoms are a key target customer for NuData.
Robert Capps: It’s true, Dan. And when we talk telecom, two of the largest cable providers are our customers today.
Now as you’d expect, banks are leading the way in consumer protection, but telecom providers and ISPs are following because they recognize that consumer account protection, email account protection, order modifications, and SIM swaps all start with the telecom sector.
The challenge is to get their buy-in. We’re starting to see telecoms adopt these technologies to understand who the consumer is, but also to understand where the fraudsters are using automation.
Is the on-line transaction being done by a human or a machine? And if it’s human, is it the right human?
If you can answer those questions, you’ve come a long way. And when it comes to telecoms, we can also ask the question, “Is this the same device I issued?” So here you’re looking at the IMEI, the SIM ID, and other hardware identifiers to determine if this is the right person — or if it’s a human at all.
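To make that device-level question concrete, here is a minimal sketch, assuming a hypothetical record of the identifiers the operator issued for each account; it is an illustration of the idea, not NuData’s implementation.

```python
# Minimal sketch of a device-identity check (hypothetical data model,
# not NuData's implementation): compare the identifiers presented in a
# session against the ones the operator issued for that account.

from dataclasses import dataclass

@dataclass
class IssuedDevice:
    imei: str
    sim_id: str

# Hypothetical store of devices the operator issued, keyed by account ID.
ISSUED_DEVICES = {
    "acct-1001": IssuedDevice(imei="356938035643809", sim_id="8912230000000000001"),
}

def device_matches_issue(account_id: str, presented_imei: str, presented_sim: str) -> bool:
    """Return True only if the presented IMEI and SIM ID match the issued record."""
    issued = ISSUED_DEVICES.get(account_id)
    if issued is None:
        return False  # unknown account: treat as a mismatch and escalate
    return issued.imei == presented_imei and issued.sim_id == presented_sim

if __name__ == "__main__":
    print(device_matches_issue("acct-1001", "356938035643809", "8912230000000000001"))  # True
    print(device_matches_issue("acct-1001", "356938035643809", "8988303000000000002"))  # False: possible SIM swap
```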
Identity verification can mean many things and a variety of solution providers are out there. Where does NuData fit in?
We are really about determining if the correct human is asserting their identity.
Other products verify “Robert” when he opens his account. But once open, the question we answer is: is this really Robert who is using the account?
And incidentally, that’s a question that needs to be answered every time an account is used. It’s no longer enough that the name, address, and phone number all match up.
You must ensure the identity you let in the door is actually the correct human.
Which areas of on-line behavioral protection are front and center at NuData these days?
Well, one of the biggest issues today is blocking fraudster automated attacks: it’s a critical area for all industries doing business online.
In some industries, automated attacks are two-thirds of the traffic hitting their web servers. In other industries it gets as low as 20 to 30%. Really, the percentage is not that relevant: it is having mass-scale attacks on your platform that should worry you. And this is where we provide value.
Think about the technology used to support these websites. You’re talking about many millions of dollars in wasted infrastructure every year just managing the flow of these automated fraud attacks.
So automated protection technologies are in great demand today. And as fraudsters evolve their methods, we are starting to see them blending or using techniques that combine human interaction and technology.
Automated attacks still account for the vast majority of attack transactions. But for those transactions where we detect and block the automation, the fraudster redirects the work to humans, for example to solve a CAPTCHA. The trouble is, fraudsters have improved their automation to the point where it’s become far more human-like.
As a consequence, fraudsters are replacing their early automation systems with far more advanced automation.
In what way are fraudsters getting better at replicating humans? And what detection techniques do you use to stop them?
In the industry, the original method for discovering automated fraud attacks was to look for high-volume, high-speed events with a low probability of human interaction.
But what we’re seeing now are very precise attacks that are emulating human input at a keyboard (virtually, of course).
They disguise themselves in other ways, too, such as randomizing the device IDs and not sending more than a couple of transactions per source IP. And as the fraudsters get better at hacking home routers and computers, they are using the consumer’s own PC to launch attacks on their own accounts.
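For illustration, the older rate-based heuristics Robert mentions can be sketched roughly as below; the field names and thresholds are assumptions, not real detection rules. Note how a distributed attack that randomizes device IDs and sends only a couple of human-paced transactions per IP slips straight past it, which is exactly why behavioral signals are layered on top.

```python
# Illustrative only: a naive rate heuristic of the kind described above.
# Thresholds and field names are assumptions for the sketch, not real rules.

from collections import defaultdict

def flag_noisy_sources(events, max_per_ip=20, min_gap_seconds=1.0):
    """Flag source IPs that send many requests, or requests faster than a human could."""
    by_ip = defaultdict(list)
    for e in events:                      # each event: {"ip": str, "ts": float}
        by_ip[e["ip"]].append(e["ts"])

    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        too_many = len(times) > max_per_ip
        gaps = [b - a for a, b in zip(times, times[1:])]
        too_fast = any(g < min_gap_seconds for g in gaps)
        if too_many or too_fast:
            flagged.add(ip)
    return flagged

# A distributed attack that sends only one or two requests per IP, at a
# human-like pace, produces no flags here, which is why behavioral analysis
# is needed on top of simple rate checks.
```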
After a fraudster collects behavioral data on a person, isn’t it relatively easy for them to mimic that person’s on-line habits?
You’d be surprised, Dan. Humans don’t do the same thing the same way two times in a row. When we collect our telemetry there are very subtle differences between different interactions.
There are still broad categories of information that are within one or two standard deviations from the norm. But no one’s ever perfect. People won’t type things exactly the same way two times in a row. Nor will they use the mouse the same way two times in a row, but they will be close.
So if you collect the telemetry and reuse it, you would have a perfect match for each session. Our system would certainly pick up on that and label it as risky. Even subtle tinkering with that data is something we can easily recognize.
For instance, if we measured the keystroke speed and the time it takes me to type my name — from “R” to “o” to “b” to “e” to “r” to “t” — we’ll see that it is rarely exactly the same each time I type it. And an input attempt looking exactly the same as how I’ve done it in the past would be labeled as risky.
So when you layer the right technologies and you have the right measurement, it is very hard to spoof those transactions and still be within the guard rails of what we consider safe.
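A rough sketch of that keystroke-cadence idea, with illustrative tolerances that are assumptions rather than NuData’s actual thresholds: inter-key timings that land implausibly close to a stored sample look like replayed telemetry, while timings far outside the user’s normal variation look like a different typist or a machine.

```python
# Illustrative sketch of keystroke-cadence checking (tolerances are assumptions).
# A new sample of inter-key intervals is compared against a stored history:
# an essentially perfect match suggests replayed telemetry; a large deviation
# suggests a different typist or automation.

import statistics

def assess_cadence(history, sample, replay_tol=0.005, z_limit=2.0):
    """history: list of past samples; each sample is a list of inter-key intervals (seconds)."""
    # Perfect (or near-perfect) repeat of a past session: humans never do this.
    if any(all(abs(s - h) < replay_tol for s, h in zip(sample, past)) for past in history):
        return "risky: looks like replayed telemetry"

    # Far outside the user's normal variation: likely a different human or a bot.
    means = [statistics.mean(col) for col in zip(*history)]
    stdevs = [statistics.stdev(col) for col in zip(*history)]
    for s, m, sd in zip(sample, means, stdevs):
        if sd > 0 and abs(s - m) / sd > z_limit:
            return "risky: outside the user's normal cadence"

    return "consistent with this user"

history = [[0.14, 0.11, 0.16, 0.12], [0.15, 0.10, 0.17, 0.13], [0.13, 0.12, 0.15, 0.12]]
print(assess_cadence(history, [0.14, 0.11, 0.16, 0.12]))   # exact replay of a past session
print(assess_cadence(history, [0.40, 0.05, 0.30, 0.02]))   # far outside the profile
print(assess_cadence(history, [0.14, 0.11, 0.16, 0.13]))   # close but not identical: consistent
```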
Wow, the example you just gave really proves there’s a lot of valuable intellectual property embedded in the NuData product.
Ten years of intellectual property improvement. Yes, it sounds very simple from the outside. When I’m at a conference booth, people come along and try to challenge us all the time.
It’s strong against tampering because there’s a natural interaction between the different behavioral elements, so when the fraudster starts changing individual pieces, the parameters start falling way out of standard deviation.
A solution vendor like LexisNexis says they have fully verified the person because they see what networks the device is on, the kind of websites visited, and common interactions of the anonymous customer. So if you have that, do you still need the kind of deep behavioral analysis that NuData provides?
Well, device identity is still a bedrock technology. And it exists in our product as well. And yet some of the customers who use LexisNexis (ThreatMetrix) have actually started moving in our direction because they want behavior on top of device identity.
Right now, behavior and passive biometrics, which go hand-in-hand with device identity, tell you more than: “Did this device interact with various websites as we’d expect from this identity?”
The behavioral layer allows you to look at the context. Is that device hacked? Are we seeing transactions being proxied to that device? Do we actually see transactions when those transactions are occurring?
Yes, probabilistic verification will only get you so far; when you need to know that the asserted identity is coming from the right human, it’s not enough.
So that’s where products like ours fit. Customers come in for device ID and when they want additional capabilities they just turn them on, versus having to source a new provider and add a new technology to the stack.
That’s key — being ready for the next threats you are not yet facing. It’s having the capability to utilize those technologies without doing a complete technology deployment while you are being attacked.
What kinds of on-line interactions are customers asking you to watch over and protect?
Actually, it’s any web interaction our customers want to evaluate. It could be a web registration form, or perhaps a high value transaction like ordering a new handset or new SIM card.
It really comes down to what pain point the organization wants to solve. Generally our customers don’t deploy our technology on every single web page. They instrument the pages they care about protecting.
By the way, we don’t collect data independently from the endpoint. When a customer visits the XYZ Telecom website and logs in, there’s no data flowing from the customer to us. The data only flows to the telecom, who then automatically forwards that data to us for analysis.
And that method is fundamentally different from how others capture device information. This approach allows our customers to anonymize the data, sanitize it, enrich it, and add additional context that could be useful to our analysis.
Now a lot of behavioral solution firms have popped up in recent years. What differentiates NuData’s offering?
Well, a lot of products merely collect log-in data only at the transaction end point, then return a score back to the service provider.
NuData does that too, but we ask our customers to also send us server-side information: data the browser sent to the server that wasn’t part of the telemetry we picked up from the endpoint.
As a result we capture highly useful intelligence from browser headers and anomalies in network connections. Did a log-in start, but never complete successfully? That level of detail allows us to make a much better risk assessment of the interaction. Knowing the log-in exists but with a bad password gives us intelligence about the IP that’s trying to log-in.
Another example: it’s very rare to see a cadence of transactions coming in with many good-user/bad-passwords interspersed with some good-user/good-passwords. Well, that could indicate a fraudster has launched a brute-force attack to figure out which accounts actually exist.
Similarly, if an attacker never loads the page at all, a tool that sits only at the endpoint would miss it, but a tool that also sees the server side would receive the login request and realize it came from a script that didn’t even load the JavaScript.
So analyzing the valid versus invalid log-in attempts allows us to alert the customer and say, “This account is under attack right now. Maybe you should take additional countermeasures to reduce the risk even when they are logging in with a valid password.”
Now if your solution doesn’t get that information, all the application sees is a series of log-ins, and you don’t know at the time of verification whether there are other data points like invalid password attempts and invalid usernames.
And that really starts to make your analysis imprecise. So this is why getting that extra info gives us the ability to add context to the interactions, which creates a lot more value for our customers.
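To make that server-side picture concrete, here is a toy sketch; the field names and thresholds are assumptions for illustration, not NuData’s actual rules. It scans a stream of login events for the two patterns just described: valid-username/bad-password bursts mixed with occasional successes, and login requests that arrive with no client-side telemetry at all.

```python
# Toy illustration of the server-side signals described above (field names and
# thresholds are assumptions). Each event: username, source IP, outcome, and
# whether client-side JavaScript telemetry accompanied the request.

from collections import Counter

def analyze_login_stream(events, bad_password_threshold=5):
    bad_by_ip = Counter()
    good_by_ip = Counter()
    no_telemetry = []

    for e in events:
        if not e.get("has_client_telemetry", False):
            no_telemetry.append(e)          # script posted credentials without ever loading the page
        if e["outcome"] == "valid_user_bad_password":
            bad_by_ip[e["ip"]] += 1
        elif e["outcome"] == "success":
            good_by_ip[e["ip"]] += 1

    alerts = []
    for ip, bad in bad_by_ip.items():
        # Many known-good usernames with wrong passwords, plus a few successes,
        # is the cadence of credential testing / brute force described above.
        if bad >= bad_password_threshold and good_by_ip[ip] > 0:
            alerts.append(f"{ip}: possible credential-testing attack ({bad} bad, {good_by_ip[ip]} good)")
    for e in no_telemetry:
        alerts.append(f"{e['ip']}: login for {e['username']} arrived with no client-side telemetry")
    return alerts
```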
How readily are telecoms accepting your technology? Isn’t selling into the telecom market a bit tough considering their legacy systems and processes?
Well, actually, some of the telcos we’ve worked with are among the fastest organizations in terms of using our technology.
When a telecom has a defined pain point they recognize and we are the solution for it, the shackles are off. It’s like: “how fast can you get me up and running?”
And we can deploy fast. At one large provider, we went from contract signing to production in less than a week. They recognized we could solve the problem, and we did.
One major barrier to selling these solutions is when the vendor wants to resell the telco’s data. Some solutions do that. We do not. It’s a key reason we’re seeing customers defect from the service bureaus to us. We don’t sell data. We go by a privacy-by-design principle and the data is hashed before we see it.
We don’t know it’s John Smith; we just know that the interactions are related to other data points, which are also hashed. So we are a privacy-blind solution; we largely have no issues passing regulatory reviews in different regions.
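As a rough illustration of that privacy-by-design flow, and with a hashing scheme that is an assumption for the sketch rather than NuData’s actual method, identifiers can be salted and hashed on the operator’s side before anything is forwarded, so the analysis links hashed data points without ever seeing “John Smith”.

```python
# Rough illustration of hashing identifiers before forwarding (the scheme is an
# assumption for the sketch). The operator hashes with its own secret salt, so
# the analytics provider can correlate repeat values without learning who they are.

import hashlib
import hmac

OPERATOR_SALT = b"operator-held-secret"   # never shared with the analytics provider

def pseudonymize(value: str) -> str:
    """Return a keyed hash of an identifier (name, email, account number, ...)."""
    return hmac.new(OPERATOR_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "John Smith", "email": "john@example.com", "account": "acct-1001"}
forwarded = {field: pseudonymize(v) for field, v in record.items()}
# The same input always yields the same hash, so sessions can still be linked,
# but the plaintext identity never leaves the operator.
print(forwarded)
```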
Are the traditional credit bureaus building their own advanced biometric or network identity screening technology?
I don’t think they have to. They definitely have partnerships with providers who have products like ours — or they work directly with us.
And that brings up a big point. Just because you’re good at big data analytics doesn’t mean you’re good at collecting actionable data that can go into risk models.
Today there’s a vibrant ecosystem of different data collection and analysis tools that can feed into these larger enterprise or global surveillance platforms. And when I say “surveillance”, I’m talking about risk surveillance, not privacy surveillance.
I think it’s a very good thing that the service bureaus are doing what they do best and relying on partners to deliver the new technologies to supplement their own product.
Robert, thank you for this important briefing. I imagine you’re going to remain very busy helping protect operators from human-like automation attacks. Another business driver is the trend towards operators doing more customer interactions and selling on-line.
Thanks, Dan. Certainly, if the trend at telecoms is to sell fewer mobile phones in stores and more on-line, they will surely be looking at ways to beef up their fraud attack surveillance and blocking.
Across all our customers last year, we did over 650 billion measurements. That’s a lot of data, and a lot of consumer transactions on the internet.
So chances are we’ve seen a device, and we’ve seen a human and their behavior or interactions somewhere on the internet. So when someone comes to apply for the first time, on top of determining whether the transaction is probable or likely, we ask: have we seen interactions like this before? Have we seen the device? Is the behavior correct for that device?
We’re ready to serve. Though we are headquartered in the U.S., our customers tend to be large multinational organizations, and we have a presence with the major companies of the world who do business on-line.
Copyright 2020 Black Swan Telecom Journal