Every security system, in the end, depends on people. Ideally, systems would grant each operator privileges so restricted that no individual could pose a threat, but I doubt that would work in practice.
I believe our staff is honest, but I'm not sure we can rely on employee inertia to protect us where money is involved. To determine the scope of any potential problems, we recently reviewed our 30 funds-transfer systems.
Fortunately, many of the systems limit the transfer amount, or they're set up in such a way that committing fraud would require a complicated string of collusions with other companies.
In general, I found our controls adequate. Based on the idea of separation of privilege, these systems need at least two staffers working together to move money. However, I did find some disturbing trends.
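The two-person control described above can be sketched in a few lines. This is a hypothetical illustration, not our actual transfer software; the function name and the threshold of two approvers are assumptions made for the example.

```python
# Hypothetical sketch of separation of privilege on a funds transfer:
# the transfer may proceed only when approvals come from a required
# number of *distinct* staffers, so no single employee can move money.

def can_execute(approvals: set[str], required: int = 2) -> bool:
    """Allow the transfer only with `required` distinct approvers.

    Using a set means a staffer who approves twice still counts once.
    """
    return len(approvals) >= required

print(can_execute({"alice"}))         # one approver: blocked
print(can_execute({"alice", "bob"}))  # two distinct approvers: allowed
```

Representing approvals as a set of staff identities is the point of the design: duplicate approvals by the same person collapse to one, so collusion between at least two people is required.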
Our company is always trying to drive down costs and reduce staff. This has led to concentrations of control: A few individuals — who may feel that their jobs are at risk — now handle all funds transfers. It has also left us with audit and control groups that have little experience with fraud. The longer these trends continue, the greater the risk of a disgruntled employee trying to run away with some cash.
I found bigger problems too: for two accounts, the passwords needed to complete a transfer were taped to a monitor. We hope to prevent recurrences through staff training.
In addition to checking out each system myself, I had each business and IT team that uses the applications complete a detailed questionnaire about possible frauds. We then ran scenario-based role plays of those schemes.
Only one person in the 30 groups bothered to ask who I was and why I wanted to know how to defraud the company. I'll have to follow up by advising everyone to be less trusting.
An Outside Job
The exposed passwords aside, I believe our systems are reasonably well built. But I wish I could say I had the same confidence in our new business partners. They don't have dedicated security teams; security responsibilities are shared across IT.
I've always been a little worried by companies that do that. It can mean that security is truly integrated into the business process and doesn't need a champion. But it usually means security takes second place to ease of use. Most IT developers have little knowledge of or interest in security.
My concerns rose to a near panic when one business partner explained a "clever" solution it had invented to address the problem of transferring financial information to and from other companies.
Many companies send encrypted financial data over the Internet using file transfer protocol (FTP), which has little built-in security. FTP uses an unencrypted, plain-text user name and password for authentication that can be detected using a packet-sniffing tool. The password also tends to be hard-coded into applications that perform transfers, making it difficult to change.
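To see why a packet sniffer defeats FTP authentication, it helps to look at what actually crosses the wire. The sketch below is illustrative, not taken from any of our systems: it builds the login commands an FTP client sends on the control channel per RFC 959, showing that the user name and password travel as readable ASCII.

```python
# Sketch: what a packet sniffer sees on an FTP control channel (port 21).
# Per RFC 959, a client logs in by sending USER and PASS commands as
# plain text, so anyone capturing the TCP stream reads the credentials.

def ftp_login_commands(user: str, password: str) -> bytes:
    """Return the raw bytes an FTP client writes to log in."""
    return f"USER {user}\r\nPASS {password}\r\n".encode("ascii")

# "transfer_app" and "s3cret" are made-up example credentials.
wire_bytes = ftp_login_commands("transfer_app", "s3cret")
print(wire_bytes.decode("ascii"))
```

No decryption step is needed: the password appears verbatim in the captured bytes, which is why encrypted alternatives such as SFTP or FTPS exist.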
I was initially impressed when our partner claimed that it had worked around this issue and could allow the use of FTP without enabling hackers to intercept passwords. Not only that, but the business leader there said he was sure that his staff wouldn't write down passwords and that automated scripts didn't need to be updated. As he described the scheme, I tried to figure out how his company had managed this. Was it using some kind of token, like RSA SecurID? Perhaps biometrics from the staff initiating the transfer?
No. It simply left the password blank. Operationally, this gives the company very low overhead, but its security is proportional to its password length: none.
We've tried getting the business leader to see our point of view, but he doesn't understand. "But why would I want to steal from the company? I am a loyal employee," he said.
I tried to raise the idea that other employees might not be so loyal, but he responded, "I know my team; they are good guys. Why would they consider such a thing?"
Clearly, I'm not going to convince him before our companies start exchanging data. We'll have to get his company to contractually take responsibility for any security risks.
Fox in the Henhouse
Being at risk due to an agreed-upon business decision to forgo security controls is one thing, but there are worse problems out there. Recently, I heard a story from someone whose very attempt to prevent fraud enabled it instead.
To ensure that no internal developer could defraud the company, he had each new system checked by expensive external consultants and the source code reviewed in detail. They reported any issues so he could remove the security holes in the live systems.
Did it work? Of course not. Eventually, an auditor uncovered fraud within a funds transfer system — and the security hole that allowed this fraud had been revealed in the original code review report.
How could this happen? The review was sent only to the development system analyst — the perpetrator. He altered the report to remove his pet security bug, fixed everything else and continued to defraud the company.
It couldn't happen here, of course. Then again, perhaps all my questions have given our development teams some ideas. Pardon me while I go check some logs...
This week's column is written by a real security manager, "Vince Tuesday," whose name and employer have been disguised for obvious reasons.