Here’s a defensive privacy idea. Set up a call-forwarding proxy so that your number never rings through directly; instead, you get notified first via, e.g., a pager, and then turn your phone on and either wait for the caller to try again or call back. This would be a highly effective defense against various ‘ping’ attacks seeking to figure out what your IMSI/IMEI du jour are (if the attacker has an IMSI catcher near you) or just your current location. Sadly, I don’t have the setup to try this (maybe it needs an Asterisk box?); if anyone does, I’d love to hear the results.
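For the Asterisk route, I imagine the dialplan would look something like the sketch below. This is untested and purely illustrative: the [incoming] context name is whatever your DID trunk lands in, and notify-pager.sh is a hypothetical helper script that sends the pager/SMS alert, not anything that ships with Asterisk.

```
; extensions.conf — call-screening proxy (untested sketch)
; The real handset never rings. We log the caller, fire off a pager
; notification via a hypothetical notify-pager.sh script, play a short
; prompt, and hang up. The callee then powers up their phone and waits
; for a retry or calls back.

[incoming]
exten => s,1,NoOp(Screened call from ${CALLERID(num)})
 same => n,System(/usr/local/bin/notify-pager.sh ${CALLERID(num)})
 same => n,Answer()
 same => n,Playback(vm-goodbye)   ; stand-in for a custom "try again later" prompt
 same => n,Hangup()
```

The point of the design is that the phone stays off (and unregistered with the network) until after the notification, so there’s no IMSI/IMEI to catch at the moment the probe call arrives.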
On to the article. In light of the recent Stuxnet news, a thoughtful piece covering the inherent tradeoffs between prioritizing offensive warfare and defensive security improvements in the computer domain.
Basically, if you emphasize keeping vulnerabilities secret so they can be used to hurt people (hopefully people you don’t like, but you never really know), that means those vulnerabilities don’t get fixed —
which makes everyone using the software (including maybe you and your friends) vulnerable to anyone who can find the hole.
At the same time, if you put the emphasis on making everything secure, then it’s harder for you to hurt other people if you get pissed at them.
Now scale this bit of thinking up to a nation-state level and you can see the dilemma.
Incidentally, while the article claims this is more or less unique to computer security, the same dynamic plays out to a huge degree in physical security. In the lock world, you hear variants on the phrase ‘N doesn’t want the design of their tool made public, because then the lock manufacturer will fix the lock’ more than a few times.
And, conversely, you also hear reasonably well-founded rumors of manufacturers paying significant sums to people who designed tools that could pick their locks, in order to keep those tools off the market until the locks could be fixed.
“Cybersecurity presents unique challenges to policymakers, especially when it comes to dealing with the twin goals of protecting one’s own networks while simultaneously developing tools and techniques to attack the networks of adversaries. Stuxnet used four previously unknown vulnerabilities in Microsoft Windows to infect its targets, vulnerabilities which were present in hundreds of millions of computers around the world and, once disclosed, were open for exploitation. The developers of Stuxnet chose to use these vulnerabilities for offence, instead of disclosing them to security firms and software vendors so they could be fixed, enabling other cyber actors to exploit the same vulnerabilities across a range of malware and attacks against governments, companies and citizens. […]
The example of Stuxnet demonstrates the difficult choices national policymakers must make if they wish to pursue cyber offence, as doing so means giving cyber defence a lower priority. Trying to keep vulnerabilities secret so that they can be used for offence will likely result in vulnerabilities going unfixed, thus hampering defensive efforts. […]
Ultimately, states must make a choice between prioritizing cyber offence or cyber defence. Both cannot be done well at the same time, and focusing on one lessens the ability to successfully accomplish the other. Although there is an increasingly loud cry for increased cybersecurity from virtually all states, both stated policy and unstated actions make it clear that many states are currently giving priority to cyber offence—in particular the US government has announced a number of initiatives to speed up military development of offensive cyber weapons. This will likely need to change if states want to match their rhetoric on the need for cyber defence with meaningful actions to protect themselves and their citizens.”