Given that you have finite resources, what security threats do you take seriously and what threats can you afford to ignore? Answering this question requires making quantitative judgements, however rough, about the seriousness of different threats. To do this, we need to begin with some knowledge of the threats we face, and then rank those threats.
Making a threat catalog itself is not necessarily easy. Brainstorming sessions aimed at enumerating threats can easily be derailed by the first threat someone envisions. For example, if the first thing someone mentions is a cryptographic threat, it can drive the rest of the session into a discussion of cryptographic threats, missing a host of alternatives.
The obvious way to avoid this weakness is to begin with a historical survey. What exploits have been successfully mounted against similar systems in the past? Each of these deserves discussion and consideration, and each should form the core of a cluster of related threats. The problem is that in many security-critical domains, the historical record is very sparse. Nobody likes to publicize their failures, and as a result, it appears that many successful exploits have been suppressed. The recent (2010) stories about penetration of Google's security are typical: we know that something was stolen, but only a few details have leaked into the public discussion.
There is another problem. In many security-critical areas, nobody wants to talk about potential threats. I once asked an election official to tell me what he thought were the greatest vulnerabilities of his election system. His answer was "I don't even want to think about that. It would be improper for me to say anything." My impression is that he, like many other people working in security-critical positions, thought that if he said anything about how his system could be attacked, he would be doing two dangerous things: first, such a discussion could create a roadmap for attacks on his system, and second, having admitted to anyone that he thought about threats to his system would automatically make him a suspect if something did go wrong. Better not to think about the issue.
A random assortment of threats is very difficult to evaluate. A typical initial step in the evaluation is to organize a threat taxonomy, grouping threats that exploit related weaknesses together. A reasonable next step is to create a threat tree. Consider the following example:
Goal: Steal a car -- requires all of the following subgoals
    Subgoal: Break into car -- requires one of the following subsubgoals
        Subsubgoal: Find unlocked car
        Subsubgoal: Jimmy lock on car
    Subgoal: Start engine -- requires one of the following subsubgoals
        Subsubgoal: Hotwire the ignition
        Subsubgoal: Jimmy ignition lock
Note that there are two types of vertices in the tree, presented in outline form above. One type requires the completion of all of the subgoals under that goal. We refer to this as an and node in the tree. Sometimes the subgoals under an and node are a sequence of steps (as in the above example), while sometimes they involve the acquisition of resources; for example, to jimmy a lock, we need a jimmy (the tool) and we need the skill to use the jimmy.
The second type of node requires the completion of just one of the subgoals under it. We refer to this as an or node. You can break into a car by finding an unlocked car or you can break into a car by jimmying the lock. Do just one of these and you have achieved that subgoal.
One way to determine which threats are the most serious is to attempt to estimate the cost of each. Having constructed a threat tree, we can compute the cost of all of the threats summarized by the tree by attaching costs to each node in the tree. For leaf nodes, the cost is simply the inherent cost of the goal described in that leaf. For internal nodes, we must combine the costs of the children. In the case of an and node, the cost of that node is the sum of the costs of all the subgoals. In the case of an or node, the cost of the node is the minimum of the costs of its subgoals.
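As a sketch, the cost rules above can be written as a short recursive function. The tree encoding and the dollar figures on the leaves are illustrative assumptions, not data from any real analysis:

```python
# Minimal sketch of attack-tree cost evaluation: and nodes sum the
# costs of their subgoals, or nodes take the cheapest subgoal.
# All names and costs are hypothetical, chosen to mirror the
# car-theft example in the text.

def cost(node):
    """Recursively compute the cost of achieving a goal node."""
    kind = node["kind"]
    if kind == "leaf":
        return node["cost"]
    child_costs = [cost(c) for c in node["children"]]
    if kind == "and":          # all subgoals required
        return sum(child_costs)
    if kind == "or":           # any one subgoal suffices
        return min(child_costs)
    raise ValueError(f"unknown node kind: {kind}")

# The car-theft tree from the text, with made-up dollar costs.
steal_car = {"kind": "and", "children": [
    {"kind": "or", "children": [                      # break into car
        {"kind": "leaf", "name": "find unlocked car", "cost": 5},
        {"kind": "leaf", "name": "jimmy lock", "cost": 50},
    ]},
    {"kind": "or", "children": [                      # start engine
        {"kind": "leaf", "name": "hotwire ignition", "cost": 30},
        {"kind": "leaf", "name": "jimmy ignition lock", "cost": 40},
    ]},
]}

print(cost(steal_car))  # 5 + 30 = 35
```

The cheapest way to break in (5) plus the cheapest way to start the engine (30) gives the cost of the whole attack.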
The big problem is, what is the cost? Not all costs are easy to monetize. The cost of acquiring a skill is zero if you already have that skill, but otherwise it can be quite high. One approach to this problem is to back away from naming exact costs and instead work with ranges of costs or probability distributions of costs. Another approach is to back away from monetization and ask a simpler question: how many people must cooperate to achieve the goal? In this case, instead of speaking of the cost of acquiring a skill, you speak of adding another person to the attack team -- someone with that skill.
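One way the range idea can be sketched, assuming each leaf carries a (low, high) pair, is to add intervals at and nodes and take the componentwise minimum at or nodes. This is only one possible convention, and the numbers below are illustrative:

```python
# Sketch: propagate (low, high) cost ranges instead of single
# numbers. And nodes add intervals; for or nodes we take the
# componentwise minimum, i.e. the most optimistic and the most
# pessimistic of the cheapest options. Illustrative figures only.

def cost_range(node):
    if node["kind"] == "leaf":
        return node["cost"]            # a (low, high) pair
    ranges = [cost_range(c) for c in node["children"]]
    lows, highs = zip(*ranges)
    if node["kind"] == "and":
        return (sum(lows), sum(highs))
    return (min(lows), min(highs))     # "or" node

# Jimmying a lock: the tool is cheap, but the skill may cost nothing
# (you already have it) or a great deal (you must recruit or train).
jimmy_lock = {"kind": "and", "children": [
    {"kind": "leaf", "cost": (20, 20)},    # the jimmy itself
    {"kind": "leaf", "cost": (0, 5000)},   # the skill to use it
]}
break_in = {"kind": "or", "children": [
    {"kind": "leaf", "cost": (0, 500)},    # find an unlocked car
    jimmy_lock,
]}

print(cost_range(break_in))  # (0, 500)
```

Note how the skill cost shows up: the jimmy-lock subtree spans (20, 5020), so the unlocked-car option dominates at both ends of the range.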
Assuming you have successfully attached costs to each vertex in the tree, you now have the total cost of an attack on the system, and furthermore, you can extract, from the attack tree, the least-cost plan of attack. It is far more likely that you have a number of question marks in your tree. In that case, you can still trim the tree, throwing away the obviously expensive subgoals of or nodes before focusing on better estimates of the costs of the remaining nodes.
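Extracting the least-cost plan is a small extension of the cost computation: carry the list of leaf goals along with the cost, concatenating at and nodes and keeping only the cheapest child at or nodes. The tree and the costs are the illustrative ones from the car example:

```python
# Sketch of extracting the least-cost attack plan. Alongside the
# cost, return the leaf goals the cheapest attack uses: and nodes
# concatenate their children's plans, or nodes keep only the plan
# of their cheapest child. Names and costs are illustrative.

def cheapest_plan(node):
    """Return (cost, list_of_leaf_names) for the least-cost attack."""
    if node["kind"] == "leaf":
        return node["cost"], [node["name"]]
    results = [cheapest_plan(c) for c in node["children"]]
    if node["kind"] == "and":
        return (sum(c for c, _ in results),
                [step for _, plan in results for step in plan])
    return min(results, key=lambda r: r[0])   # "or" node

steal_car = {"kind": "and", "children": [
    {"kind": "or", "children": [                      # break into car
        {"kind": "leaf", "name": "find unlocked car", "cost": 5},
        {"kind": "leaf", "name": "jimmy lock", "cost": 50},
    ]},
    {"kind": "or", "children": [                      # start engine
        {"kind": "leaf", "name": "hotwire ignition", "cost": 30},
        {"kind": "leaf", "name": "jimmy ignition lock", "cost": 40},
    ]},
]}

print(cheapest_plan(steal_car))
# (35, ['find unlocked car', 'hotwire ignition'])
```

The same traversal supports the trimming mentioned above: any or-node child whose cost clearly exceeds a sibling's can be discarded before refining the remaining estimates.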
Given an accurate threat tree, from which the least-cost attack can be extracted, an intelligent attacker would naturally elect to use that attack. Therefore, if you have only limited resources for defense, expend those resources on raising the cost of the least-cost attack. Do not simply raise this cost arbitrarily, but rather, look at the costs of the remaining attacks. Raising the cost of one attack while another remains inexpensive does little good.
This entire approach looks better on paper than it does in practice. Attempts to work out the threat trees for real systems produce huge trees, and in general, too many of the costs are intangible and must rest on guesswork. Furthermore, as the trees grow, the need for automated tools becomes obvious. A few tools are on the horizon of being useful, but none has seen use beyond the proof-of-concept stage.