Sunday, January 2, 2011

Presumed Insecure

I had meant to post again sooner than this, but a number of unforeseen factors, including a death in the family, have conspired against me. Upcoming topics will include the NSA's declared stance of "presumed compromised" when building and analyzing its systems, the need for a backout plan before beginning any project, and lessons learned from being a klutz (keep bandages on hand!).

I'm back to work tomorrow, and only a few minor details remain before the class I T.A.'d last term is complete. Next term's class is taught, once again, by Scott Bradner: Harvard University Extension School's CSCI E-132, Security, Privacy, and Usability. It's a great class: Scott is one of the guys who has been there and done that. He's Harvard University's Chief Technical Security Officer, and has long been a mainstay of the IETF. The class itself is not a lab or practicum; it's designed to cover the major issues and concepts in information security and privacy while keeping application usability firmly in mind. After all, if your security measures become too much of an obstruction, people will find their way around them - but that's a subject for a later post.

Registration for the Spring Term is open. Classes start January 28th.

Monday, December 27, 2010

First Post!
When I first started blogging, back in the early years of the new century, I did so because it seemed that was what everyone in my line of work (Systems/Network Administration and Security) was doing. I had a few things to say, but I wasn't sure how far I would or could go. I did write a few posts, but quickly ran out of steam.
A few years on, I'm in a totally different place - as is the security environment. The basics are all still there: systems, networks, vulnerabilities, and threats, but they've evolved, as have security measures - even the basic nature of an IT organization's infrastructure has evolved at breakneck speed. We, the implementers and securers of systems and networks, have had to evolve along with these changes to keep our heads above water. If you don't keep swimming, you drown.
So, when it came to picking a name for this blog, I wanted to capture the essence of my approach to security: Assume that nothing is secure out of the box. Assume that even if the various components were secure, the act of connecting them together to form a larger system has opened new holes. Assume a default system build is porous, assume that your firewall rules are incomplete - adopt the presumption of insecurity, and go from there.
If you examine catastrophic failures of the past, you'll see that one of the common factors is that some basic tenet, some base presumption, was proven wrong at the worst possible time. You can't foresee every possible situation and actor, but if you limit your assumptions at the outset and begin from what you can test for yourself, you'll have nowhere to go but up. I've heard this attributed to the Russians during the Cold War, and it sounds right to me: "Trust, but verify". You can't assume that an attack won't succeed unless you've tested your system yourself.
That's my core belief: If you start from the presumption that your system, or whatever asset you've been entrusted with, is insecure, compromised, and has more holes than Swiss cheese, you're at a good point to begin improving your security. Test, fix holes, batten down the hatches, and test again, until further changes no longer improve your posture. I can't promise a bullet-proof defense, but you'll have minimized one of the factors that lead to total failure. That's the viewpoint that I hope will inform all my future posts. Check back, and hold me to it.
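Since "test it yourself" is the heart of that loop, here's a minimal sketch of what I mean: rather than trusting that a firewall rule blocks a service, try to connect and see. The host and port list below are made-up placeholders for illustration; substitute your own.

#!/usr/bin/env python
# Minimal sketch: verify, don't assume, that "closed" ports are actually closed.
# HOST and PORTS_BELIEVED_CLOSED are illustrative placeholders, not real values.
import socket

HOST = "192.0.2.10"                        # hypothetical server (TEST-NET address)
PORTS_BELIEVED_CLOSED = [23, 3306, 5900]   # telnet, MySQL, VNC

def is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except (socket.timeout, socket.error):
        return False
    finally:
        s.close()

if __name__ == "__main__":
    for port in PORTS_BELIEVED_CLOSED:
        if is_open(HOST, port):
            print("%s:%d is OPEN - go check your firewall rules" % (HOST, port))
        else:
            print("%s:%d is closed, as expected" % (HOST, port))

It's nothing fancy, but that's the point: a five-minute check like this turns a presumption into something you've actually verified.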
Now, out to shovel the driveway...