The Target Breach

V. Kumar Murty

The data breach at Target, which was reported late last year, returned to the headlines this week when it was revealed that a malware alert had been issued prior to the breach. This raises obvious questions: why was the alert not acted on, and if it was acted on, why did the breach occur anyway?

Undoubtedly, these and many other questions are being asked within Target itself. The answers will be based on the details of a forensic investigation, and they will be important for the sake of the company’s business and for the security and privacy of its customers.

While the general community is not privy to the details that will form the basis of Target’s probe, it is still worthwhile to reflect on what we do know and what lessons can already be learnt from this information.

There are three important components at work here: technology, policy, and people. Our information systems are becoming more and more complex while, at the same time, becoming part of our ‘critical infrastructure’, whether in manufacturing, transportation, commerce, banking, or a myriad of other sectors. These systems have to process large amounts of data and act on them at high speed. Even a small failure can have catastrophic consequences. While many ‘failsafe’ tools can be built into the technology, we still need human oversight and, when necessary, timely intervention.

In the case of the Target breach, the malware alert tool did its job, but we do not know how many alerts it issues during normal operations, or what proportion of them are ‘false positives’. If that volume is large, as some reports suggest, sifting through the alerts to determine which ones need to be acted on is a nontrivial task.
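To see the scale of the problem, consider a purely illustrative back-of-the-envelope calculation, sketched below in Python. None of the figures are Target’s actual numbers; the alert volume, false-positive rate, and triage time are all assumptions chosen only to show how quickly genuine alerts can drown in noise.

    # Illustrative only: the figures below are assumptions, not Target's numbers.
    alerts_per_day = 1000        # assumed daily volume of malware alerts
    false_positive_rate = 0.99   # assumed: 99% of alerts are benign
    minutes_per_alert = 15       # assumed analyst time to vet one alert

    genuine_per_day = alerts_per_day * (1 - false_positive_rate)
    triage_hours = alerts_per_day * minutes_per_alert / 60

    print(f"Genuine incidents hidden in the stream: {genuine_per_day:.0f} per day")
    print(f"Analyst time to vet every alert: {triage_hours:.0f} hours per day")
    # Roughly 10 real incidents buried in 1,000 alerts, and about 250
    # analyst-hours of triage per day: without some form of prioritization,
    # the real alerts are easy to miss.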

This means that as technology becomes more sophisticated, it also has to work harder at HCI: human-computer interaction. Red alerts have to be disseminated intuitively and strategically. Much as regulatory compliance has forced key information to percolate up to C-level executives for sign-off, one might consider a system architecture that propagates red alerts to the appropriate level in the corporate hierarchy to mitigate ‘technology risk’. In the case of the Target breach, one wonders what might have happened had the CIO received the malware warning.
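As a thought experiment, severity-based escalation could be as simple as the routing table sketched below. This is a minimal sketch, not a description of Target’s systems; the severity levels, roles, and addresses are all hypothetical placeholders.

    # A minimal sketch of severity-based alert escalation. The severity
    # levels, roles, and addresses are hypothetical placeholders.
    ESCALATION = {
        "low":      ["soc-queue@example.com"],
        "medium":   ["soc-queue@example.com", "security-manager@example.com"],
        "high":     ["security-manager@example.com", "ciso@example.com"],
        "critical": ["ciso@example.com", "cio@example.com"],
    }

    def route_alert(severity, message):
        """Return (and log) the recipients an alert of this severity reaches."""
        recipients = ESCALATION.get(severity, ESCALATION["low"])
        for recipient in recipients:
            print(f"ALERT [{severity}] -> {recipient}: {message}")
        return recipients

    route_alert("critical", "Malware detected on point-of-sale network")

The point of such a design is that the most serious alerts do not stop at an operations queue; they are pushed, by architecture rather than by individual judgment, to executives who can authorize an immediate response.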

This leads to the second aspect, namely policy. How one reacts to a particular scenario, both in terms of the time to react and the required intervention, should be governed by policy. Moreover, policy must be something the corporate culture buys into, and it must be expressed in an uncluttered, unambiguous way that people can understand.

Regulatory compliance dictates some policy, but it should be seen as a minimal requirement. In other words, policy does not end with compliance; it begins there. We should not be lulled into a false sense of security simply because we are compliant. That would be like assuming that holding a driver’s licence means we will never get into an accident. If we do get into an accident, driving without a licence will have disastrous consequences for our liability, but holding one does not mean we will be exonerated.

The third, and most important, aspect is the people. They have to understand how to interact with technology and policy, both of which have the role of servants. This requires education and culture, and it has to begin with the C-level executives.

Technology and policy are tools. When used correctly by people, they empower; when not, they can cripple. Enabling people, creating opportunity, unleashing human potential, and improving overall quality of life: that is what all of this is about.

V. Kumar Murty is Professor of Mathematics at the University of Toronto and CTO of PerfectCloud Inc, which provides security and privacy in a cloud environment.