The sources of uncertainty in making decisions about cyber security range from the
shifting uses of information technology to the evolving nature of the threats. Moreover,
the consequences of not making good decisions about appropriate investment in cyber
security resources become more severe as organizations store more and more types of
information of increasing sensitivity and value. Methods of accessing the information are
expanding to include a greater number of mobile and remote devices. And the nature and
extent of the costs of a cyber attack are shifting. More methods of access to information
translate into both more modes of attack and an increased probability that an attack will
be successful. Understanding the motives and goals of attackers requires cultural and
political expertise that often does not reside within organizations.
Allocating resources effectively to enhance cyber security requires a clear
understanding of both the nature and cost of attacks to the organization and the benefits
of protective measures. Decision makers within organizations have
heterogeneous perceptions of threats and risks. Departments specializing in information
technology think in terms of preventing, detecting, and responding to specific types of
attacks. They often overlook the challenges of resilience in the face of attacks and of
information recovery after successful attacks; determining the best strategies for
maintaining operations when critical information is stolen, corrupted, inaccessible, or
destroyed is a difficult management, legal, and customer service problem.
Given the challenge of ensuring cyber security under conditions of uncertainty,
how can organizations determine appropriate measures to enhance cyber security and
allocate resources most effectively? Many models have been proposed to help decision
makers allocate resources to cyber security, each taking a different approach to the same
fundamental question. Macro-economic input/output models have been proposed to
evaluate the sensitivity of the U.S. economy to cyber attacks in particular sectors (Santos
and Haimes 2004); a stylized sketch of this approach appears after this paragraph. More
traditional econometric techniques have been used to analyze the loss of market
capitalization after a cyber security incident (Campbell et al. 2003). Methods derived from
financial markets have been adapted to determine the “return on security investment”
(Geer 2001; Gordon and Loeb 2005); a worked example also follows this paragraph. Case
studies of firms have
been performed to characterize real-world decision making with respect to cyber security
(Dynes, Brechbuhl, and Johnson 2005). Heuristic models rank costs, benefits, and risks
of strategies for allocating resources to improve cyber security (Gal-Or and Ghose 2005;
Gordon, Loeb, and Sohail 2003). Because investing in cyber security is an exercise in
risk management, many researchers have attempted to characterize behavior through a
risk management and insurance framework (Baer 2003; Conrad 2005; Farahmand et al.
2005; Geer 2004; Gordon, Loeb, and Sohail 2003; Haimes and Chittester 2005; Soo Hoo
2000). Recognizing that potential attackers and firms are natural adversaries, researchers
have also applied methods from game theory, and developed real games, to analyze
resource allocation in cyber security (Gal-Or and Ghose 2005; Horowitz and Garcia
2005; Irvine and Thompson; Irvine, Thompson, and Allen 2005); a minimal attacker-
defender game is sketched below.
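
To make the flavor of these approaches concrete, three stylized illustrations follow. All figures in them are hypothetical and chosen for exposition; they are not drawn from the cited studies. First, the input/output approach: in an inoperability input/output model of the kind proposed by Santos and Haimes (2004), a vector q of sector inoperabilities satisfies q = A*q + c*, where A* encodes interdependence among sectors and c* the direct degradation caused by an attack, giving q = (I − A*)^(-1) c*. A minimal two-sector sketch in Python:

```python
import numpy as np

# Stylized two-sector inoperability input/output sketch (hypothetical
# numbers). q = A* q + c*  =>  q = (I - A*)^(-1) c*, where q holds each
# sector's inoperability and c* the direct impact of a cyber attack.
A_star = np.array([[0.0, 0.2],    # sector 1 draws on sector 2
                   [0.3, 0.0]])   # sector 2 draws on sector 1
c_star = np.array([0.10, 0.0])    # the attack directly degrades sector 1 by 10%

q = np.linalg.solve(np.eye(2) - A_star, c_star)
print(q)  # some inoperability propagates to sector 2 through interdependence
```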
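
Second, the return-on-security-investment calculation. One common formulation (precise definitions vary across the cited sources) is ROSI = (ALE × mitigation ratio − cost of controls) / cost of controls, where ALE is the annualized loss expectancy. With hypothetical figures, a control costing $100,000 per year that is expected to avert 40% of a $500,000 ALE yields ROSI = ($500,000 × 0.40 − $100,000) / $100,000 = 1.0, a 100% return; a negative ROSI would argue against the investment.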
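
Third, a minimal attacker-defender game, again with hypothetical payoffs rather than the models of the cited papers. The defender protects one of two assets, the attacker targets one, and the defender's loss is largest when the attack hits the undefended asset. Solving for the defender's minimax mixed strategy illustrates why randomizing the allocation can outperform always defending the more valuable asset:

```python
# Minimal zero-sum attacker-defender game (hypothetical payoffs).
# The defender protects asset A with probability p; the attacker then
# targets whichever asset yields the defender the larger expected loss.
# A grid search finds the p that minimizes that worst-case loss (minimax).

loss = {  # defender's loss for (defended asset, attacked asset)
    ("A", "A"): 10, ("A", "B"): 80,
    ("B", "A"): 60, ("B", "B"): 5,
}

best_p, best_loss = 0.0, float("inf")
for i in range(101):
    p = i / 100
    loss_if_attack_a = p * loss[("A", "A")] + (1 - p) * loss[("B", "A")]
    loss_if_attack_b = p * loss[("A", "B")] + (1 - p) * loss[("B", "B")]
    worst = max(loss_if_attack_a, loss_if_attack_b)  # attacker's best reply
    if worst < best_loss:
        best_p, best_loss = p, worst

print(f"defend A with probability {best_p:.2f}; expected loss {best_loss:.1f}")
# With these payoffs the defender mixes (p = 0.44) and holds the expected
# loss to 38, better than either pure strategy (60 or 80).
```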
Since each model is based on a different set of assumptions regarding the
characteristics of information systems, the motivations of organizations to protect
information, the goals of attackers, and the data required for validation, no single model
provides a comprehensive framework to guide investments in cyber security. It is often
unclear how a model for cyber security can be used in practice, applying actual instead of