Security Threat Modeling Card Game (and more armband)

Microsoft’s come out with a neat card game to teach (and practice) threat modeling and vulnerability hunting. It’s a deck of cards describing common flaws; players try to link each flaw to the system under analysis. This is particularly useful for people who are new to security… think of it as a short course in the security mindset, in a box.

That said, its main flaw is the focus on canned threats — the invent-a-threat Aces only pay lip service to the idea that there might be a flaw category not on any of the cards. If you want to be really thorough, a systematic approach would be much more effective, especially given the ease with which someone could say “we fixed all the flaws in the card game, so we must be secure!” A systematic approach might take the form: “here’s the network cable, what can an attacker do? OK, here’s the NIC, what can an attacker do?” (Note that this is hypothetical; one obvious flaw is that some problems aren’t visible unless you consider interactions between layers, perhaps non-adjacent ones.)

Armband: Someone suggested making RMS measurements of the current through the wire around the arm. I presume they meant doing it thermally, so as to skip the RF test equipment prerequisite.
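The thermal approach rests on one identity: for any waveform, the power dissipated in a known resistance depends only on the RMS current. A minimal sketch of that relationship (my own illustration, not part of the suggestion; the sense-resistor value and power figure are made up):

```python
import math

def rms_current_from_heating(p_dissipated_w: float, r_sense_ohm: float) -> float:
    """Thermal RMS principle: regardless of waveform or frequency,
    heat in a known resistance gives P = I_rms^2 * R, so
    I_rms = sqrt(P / R). Measure the heat, skip the RF gear."""
    return math.sqrt(p_dissipated_w / r_sense_ohm)

# Hypothetical numbers: 10 mW dissipated in a 50-ohm sense resistor
print(rms_current_from_heating(10e-3, 50.0))  # ~0.0141 A, i.e. ~14 mA
```

In practice you’d infer the dissipated power from a temperature rise against a calibration curve, which is how thermistor-based RF power sensors sidestep frequency response entirely.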

This is a neat idea, but not feasible for me to do (short of picking up an RF RMS IC). I suspect it also only works at higher power levels, and I really have no idea what signal level would be needed to cause an effect. The field of microwave interaction with the human body is really poorly researched, at least in public.

There’s a rather more critical flaw, too: if the armband is working off of energy stored in the body’s near field, there may not be any current flowing through the wire between the antennas at all. I would actually expect this to be the case, since the wire’s impedance matching will be so bad as to make it unusable at even a few tens of megahertz, much less hundreds or thousands.
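To put a number on “so bad as to make it unusable”: the fraction of power reflected by a mismatched load follows directly from the reflection coefficient. A quick sketch, with a made-up load impedance purely for illustration:

```python
def mismatch(z_load_ohm: float, z0_ohm: float = 50.0):
    """Magnitude of the reflection coefficient, |Gamma| = |(Zl - Z0) / (Zl + Z0)|,
    and the fraction of incident power reflected, |Gamma|^2."""
    gamma = abs((z_load_ohm - z0_ohm) / (z_load_ohm + z0_ohm))
    return gamma, gamma ** 2

# Hypothetical: a random wire presenting ~1500 ohms to a 50-ohm source
gamma, reflected = mismatch(1500.0)
print(gamma, reflected)  # |Gamma| ~0.94: roughly 88% of the power bounces back
```

A matched load (`mismatch(50.0)`) reflects nothing; the worse the mismatch, the less power actually makes it into the wire.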

Going by Schantz*, the most logical model for the armband may be to consider the active area of the armband as two circles (one on each antenna) whose radius is proportional to the wavelength and whose width is proportional to the bandwidth, while all the rest is electrically inactive. This strikes me as the only plausible way for such a crude device to work — though keep in mind that the last time I even touched DIY antennas involved Pringles cans, a WiFi card, and some bearded geeks patiently explaining why a Schottky diode soldered across an N connector and connected to a panel meter could read out signal strength.
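For a sense of the length scales involved (my own back-of-envelope arithmetic, not Schantz’s model): the free-space wavelength is λ = c/f, so any wavelength-scale active region is larger than an armband until well into the gigahertz range.

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def wavelength_m(f_hz: float) -> float:
    """Free-space wavelength: lambda = c / f."""
    return C_M_PER_S / f_hz

for f in (100e6, 1e9, 2.4e9):  # 100 MHz, 1 GHz, 2.4 GHz
    print(f"{f / 1e6:>6.0f} MHz -> {wavelength_m(f) * 100:.1f} cm")
# 100 MHz is ~300 cm, 1 GHz is ~30 cm, 2.4 GHz is ~12.5 cm
```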

Anyone know antenna theory and want to comment on this?

* Schantz, Hans. “The Art and Science of Ultrawideband Antennas.” Artech House. If you’re interested in antennas and radio theory, this book rocks! Highly recommended.

“Elevation of Privilege, a game designed to draw people who are not security practitioners into the craft of threat modeling. The game uses a variety of techniques to do so in an enticing, supportive and non-threatening way.[…]

Each playing card shows a suit, a number, and a threat of the type exemplified by the suit. An example of the threat would be the 3 of Tampering, which reads “An attacker can take advantage of your custom key exchange or integrity control which you built instead of using standard crypto.” These threats, or hints, help non-security-experts find problems. Aces are slightly different: each reads “You have invented a new (Suit) attack.” This is designed to reward creativity and (quite literally) thinking outside the box. To make it easier to decide if something is covered elsewhere, the deck contains 6 reference cards which list all the threat hints. […]

Playing a card consists of reading it aloud, and explaining how it applies to the system being threat modeled, and putting it in the center of the table. Playing a card where a player knows of a compensating control is less exciting, but still valid, because it allows for discussion of compensating controls, and helps newcomers to threat modeling understand the cycle of discovery and mitigation.[…]

After the game, the scorekeeper should create bugs, one bug per threat identified, in whatever system a development team uses to track bugs. Those bugs should be triaged as any other security bugs, and appropriate mitigations or test cases created.”
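The card structure described in the excerpt is simple enough to sketch in code. A toy model, assuming the suits are the STRIDE threat categories the game is built around (the `Card` class and helper are my own illustration, not anything shipped with the game):

```python
from dataclasses import dataclass

# Elevation of Privilege's suits are the STRIDE threat categories
SUITS = ("Spoofing", "Tampering", "Repudiation",
         "Information Disclosure", "Denial of Service",
         "Elevation of Privilege")

@dataclass
class Card:
    suit: str   # one of SUITS
    rank: str   # "2".."10", "J", "Q", "K", or "A"
    hint: str   # the threat hint printed on the card

def is_invent_a_threat(card: Card) -> bool:
    """Aces reward inventing a new attack of the suit's type
    rather than matching a canned hint."""
    return card.rank == "A"

three_of_tampering = Card(
    "Tampering", "3",
    "An attacker can take advantage of your custom key exchange or "
    "integrity control which you built instead of using standard crypto.")
print(is_invent_a_threat(three_of_tampering))  # False
```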
