5 things in the online fraud prevention world that need fixing
Mickey Boodaei | Feb 9, 2016
Cyber security is what I’ve been doing for the last 22 years, and online fraud prevention for the last decade. I founded Trusteer in 2006; the company provided a range of online fraud prevention solutions to hundreds of financial organizations around the world and was acquired by IBM in 2013. Fraud prevention is truly a fascinating world. I got to fight some of the sharpest criminal minds and to work with some of the largest and most challenging financial organizations around the world. Along the way I learned some valuable lessons and came to some interesting conclusions. This ‘5 things’ series tries to summarize those insights, starting with 5 things I believe are broken in the fraud prevention solution space.
Risk scoring
The only good thing about the concept of risk scoring is that it’s easy to explain: you get a number, and the higher it is, the greater the risk. Whoever came up with the concept of risk scoring is a marketing genius, that’s for sure. There is only one problem with it - it doesn’t work. In reality there are various technical fraud-related risks. The risk from a malware-infected PC is completely different from the risk from a newly seen device, which in turn is completely different from the risk of social engineering. You can’t put them all on one scale.

The other problem with risk scoring is where to put the threshold. Should you act on all risks above 200? Perhaps 600? What does that mean from a business perspective? How do you mitigate a 200, and how do you mitigate a 600?

Instead of this broken model, solutions should first understand which risk mitigation options are available in each environment they operate in. They should then monitor risks over time, and at the point where they find a relevant and available mitigation option, call for it. This requires a much smarter risk processing model than what we’re used to today, with a very clear interface between the risk processing unit and the application. Think about a risk processing unit that tells the application: “Don’t allow this user to perform this operation during this session”, or “Limit this transfer to $10,000”, or “Send a confirmation request over SMS to the user”, or “Force the user to change password” - all based on a deep understanding of what the application is and isn’t capable of doing, and how that correlates with the specific risks at stake.
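To make the "one scale" problem concrete, here is a minimal sketch of the classic scoring model. The risk types and weights are illustrative assumptions, not from any real product; the point is that two sessions carrying entirely different risks can collapse to the same number, which tells the application nothing about how to respond:

```python
# Sketch: the classic single-scale risk model. Risk types and weights
# are illustrative, not taken from any real scoring engine.
WEIGHTS = {
    "malware_infected_device": 600,
    "new_device": 300,
    "social_engineering": 600,
}

def risk_score(signals):
    """The classic model: collapse all signals into one weighted number."""
    return sum(WEIGHTS[s] for s in signals)

session_a = ["malware_infected_device"]  # calls for blocking / out-of-band checks
session_b = ["social_engineering"]       # calls for a completely different response

# Both sessions score 600, yet the right mitigation for each is different.
assert risk_score(session_a) == risk_score(session_b) == 600
```

The score alone cannot distinguish the two sessions, which is why the threshold question ("act above 200? above 600?") has no good answer.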
One size fits all authentication
In most financial organizations you will find the following process: the user logs in with a username and password, then a risk engine is called, and if the score returned is greater than a certain threshold, a specific and fixed step-up authentication process is activated (for example, knowledge-based questions). In some financial organizations you will also find that every transaction, or some transactions, require a fixed secondary authentication process such as an out-of-band token.

The problem with this concept is the assumption that one type of authenticator or control fits all risks. This is simply not true, and it’s the reason why knowledge-based questions, tokens, and every other authentication technique are so easily defeated. These authenticators are effective against specific risks and pretty much useless against others. Matching a risk to its relevant mitigation controls is the key to a successful fraud prevention strategy, yet it’s a concept you will very rarely see. Organizations tend to choose a secondary authentication solution and use it against all risks.

Most financial applications have many more mitigation controls that are not necessarily used in correlation with their risk factoring process. Examples are the ability to send an SMS to the customer, biometric options integrated into their mobile apps, and the ability to block an operation, block an IP address, read the user’s location, or force a password change. A smart risk processing unit should take all of that into consideration, understand which control best mitigates each specific risk, and activate it accordingly.
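One way to picture this matching step is a table from risk type to an ordered list of preferred controls, from which the risk processing unit picks the first control the application has actually declared it supports. A minimal sketch, with all risk types, control names, and preferences as illustrative assumptions:

```python
# Sketch: matching each risk to its best available mitigation control,
# rather than applying one fixed step-up to everything.
# All names and preference orders here are illustrative assumptions.

# Controls this particular application has declared it can carry out.
APP_CAPABILITIES = {"sms_confirmation", "transfer_limit",
                    "force_password_change", "block_operation"}

# Ordered preferences: which controls actually address each risk type.
PREFERRED_MITIGATIONS = {
    "malware_infected_device": ["block_operation", "transfer_limit"],
    "new_device": ["sms_confirmation", "transfer_limit"],
    "social_engineering": ["sms_confirmation", "block_operation"],
}

def decide(risks):
    """Return one concrete action per detected risk, limited to what
    the application supports."""
    actions = []
    for risk in risks:
        for mitigation in PREFERRED_MITIGATIONS.get(risk, []):
            if mitigation in APP_CAPABILITIES:
                actions.append((risk, mitigation))
                break
    return actions
```

For example, `decide(["new_device"])` yields an SMS confirmation rather than a generic challenge, while a malware-infected device gets the operation blocked outright.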
Slow adoption of new controls
One of the first lessons you learn in fraud prevention is that fraudsters are extremely creative and fast. They keep analyzing the changes and controls you introduce to your applications. They’re very skilled and can reverse-engineer any software or process. Once they understand your changes, they adapt by changing their tactics. Financial institutions, on the other hand, are extremely slow to introduce changes, and for good reason: each change requires long engagement, development, and testing cycles to guarantee that critical services are not impacted. After all, there is no bigger loss than a service outage. As a result, even when the latest and most effective fraud-fighting technology is available in the market, it takes a long time before financial institutions can start using it, leaving them vulnerable for long periods of time. The root cause of this problem is that all fraud prevention controls are deeply integrated into the applications they protect. If the infrastructure for these controls sat outside the application, with clear and fixed interfaces to it, it would be much easier for financial organizations to plug new controls into their existing systems.
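The "fixed interface" idea can be sketched as a small plug-in registry: controls implement one stable interface, and new ones can be plugged in (or out) without touching the application itself. Class and method names here are illustrative assumptions, not a real product's API:

```python
# Sketch: fraud controls living behind a fixed interface, outside the
# application, so a new control can be added without an application release.
# Interface, verdicts, and control names are illustrative assumptions.
from abc import ABC, abstractmethod

class FraudControl(ABC):
    name: str

    @abstractmethod
    def evaluate(self, session: dict) -> str:
        """Return 'allow', 'challenge', or 'block' for this session."""

class ControlRegistry:
    def __init__(self):
        self._controls = {}

    def plug_in(self, control: FraudControl):
        self._controls[control.name] = control

    def plug_out(self, name: str):
        self._controls.pop(name, None)

    def evaluate(self, session: dict) -> str:
        # Most restrictive verdict wins: block > challenge > allow.
        verdicts = [c.evaluate(session) for c in self._controls.values()]
        for verdict in ("block", "challenge", "allow"):
            if verdict in verdicts:
                return verdict
        return "allow"

class NewDeviceControl(FraudControl):
    name = "new_device"
    def evaluate(self, session):
        return "challenge" if session.get("device_first_seen") else "allow"
```

Because the application only ever calls `registry.evaluate(session)`, swapping in a newer, more effective control is a configuration change rather than a development-and-testing cycle.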
Piling up controls
Consider the following common scenario: following a rise in a specific fraud pattern, the financial organization introduces a new fraud prevention control (such as tokens or transaction limits). This usually leads to a drop in fraud losses, and as a result, after a while, fraudsters either stop targeting the organization and move on to other organizations, or change tactics and launch different attacks.

The control, on the other hand, is never removed, even though risk levels have changed and it’s no longer critical. The control itself usually has a bad impact on user experience and the business, and leaving it running doesn’t make any business sense. However, tearing it out is a complicated project, and if the bank ever needs to re-introduce that control, that’s another complicated project. As a result, organizations just pile up controls, making the lives of their customers more and more complicated.

Instead, controls should be automatically activated based on current risk levels and fraud activity. If activity against the financial organization is low, or the control no longer addresses an active risk, the control should be deactivated by the risk processing unit.
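Activation and deactivation driven by current fraud activity can be sketched as a simple lifecycle with two thresholds (hysteresis), so a control switches on during an attack wave and off again once activity subsides, instead of living forever. The thresholds and the activity metric are illustrative assumptions:

```python
# Sketch: a control that activates when observed fraud activity for the
# pattern it addresses rises, and is retired once activity subsides.
# Thresholds and the weekly-cases metric are illustrative assumptions.

class ManagedControl:
    def __init__(self, name, on_threshold=10, off_threshold=2):
        self.name = name
        self.active = False
        self.on_threshold = on_threshold    # activity level that activates the control
        self.off_threshold = off_threshold  # lower level at which it is switched off

    def update(self, fraud_cases_this_week: int) -> bool:
        # Hysteresis: activate above one threshold, deactivate below a lower
        # one, so the control doesn't flap around a single boundary.
        if not self.active and fraud_cases_this_week >= self.on_threshold:
            self.active = True
        elif self.active and fraud_cases_this_week <= self.off_threshold:
            self.active = False
        return self.active
```

Here the "tear it out" project disappears: the control stays plugged in but dormant, ready to be re-activated if the attack pattern returns.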
Paying for risk engines that no longer work
“My risk detection engine is not as effective anymore, yet I keep renewing its license” is a complaint I keep hearing from financial organizations. Risk engines become less effective over time: they’re built on assumptions that are only valid for a few years, and as technology changes those assumptions break down. Internal forces prevent fraud prevention vendors from completely overhauling their technology every few years, so they keep selling and supporting less effective solutions. Financial organizations, on the other hand, are afraid to take responsibility for tearing something out, risking both a rise in fraud levels and breakage in dependent applications and processes. As a result, most financial institutions will continue using and paying for fraud prevention solutions with no real ROI.

The model should be different. Financial institutions should be able to easily plug fraud detection capabilities and controls in and out. The risk processing unit should be able to divert risk decisions between different detection components based on their effectiveness. Components that are no longer effective should be naturally and automatically retired by the risk processing unit.
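Automatic retirement requires the risk processing unit to measure each detection component's effectiveness. A minimal sketch, using alert precision against confirmed-fraud feedback as the metric; the metric, floor, and minimum-sample rule are illustrative assumptions:

```python
# Sketch: tracking each detection component's hit rate against
# confirmed-fraud feedback, and retiring components whose effectiveness
# drops below a floor. Metric, floor, and names are illustrative.

class DetectionComponent:
    def __init__(self, name):
        self.name = name
        self.alerts = 0       # alerts this component raised
        self.confirmed = 0    # alerts later confirmed as real fraud

    def record(self, was_fraud: bool):
        self.alerts += 1
        self.confirmed += int(was_fraud)

    @property
    def precision(self) -> float:
        return self.confirmed / self.alerts if self.alerts else 0.0

def retire_ineffective(components, floor=0.2, min_alerts=50):
    """Keep components whose alert precision stays above the floor.
    Components with too few alerts are kept until there is enough data."""
    return [c for c in components
            if c.alerts < min_alerts or c.precision >= floor]
```

With feedback loops like this, the decision to stop paying for a stale engine stops being a leap of faith and becomes a measurement.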