21st Century Risk Management Part 2: ARMing your infrastructure
By Drew Williams August 21, 2013
- Most IT security tools are built just fine, but how to deploy them securely is usually left out of the instruction manuals
- One approach is to treat risk management as a process-driven strategy, followed by the ARM methodology
I FIND it interesting that when I am at the movies: 1) I always want popcorn after the show starts, and 2) the scary stuff seems to happen just as I get settled back into my chair – just in time to be startled by the next Big Thing (which is usually when I toss said popcorn onto the heads of the people seated in front of me).
The trend, again – not news to many people in IT, but still alarming – is that bad stuff usually happens right after everybody gets to feeling comfortable in their surroundings. When everything is moving along nice and smoothly, out pop the zombies, and the world hangs by a thread.
How is it that the same civilisation that has created a system of complex, online trade routes, in which we can order virtually anything at any time of day and have it delivered to any address anywhere this side of the Klingon Empire, can’t solve the most common of risk management problems?
Therein lies the rub: IT administrators spend most of their effort on post-event analysis, where the teams focus on just that: post-event activity. In other words, only after the damage is done is the data evaluated.
Pardon the jump from Mr Roddenberry's universe to Mr Lucas' masterpiece, but it would be like trying to visit the planet Alderaan after the Death Star – not much left to look at.
Fact is, most IT security infrastructure tools and technologies are built just fine, but the process for deploying them securely is usually left out of the instruction manuals.
We call this trend the ‘Company X’ Principle: when devices ship from the manufacturer, they are prepared for operation based on the lowest common denominator of standard performance for ‘Company X’ (where ‘X’ equals a predetermined common operational infrastructure).
Once those devices hit the server rooms, assumptions are often made that they are configured and ready to go for Company X.
The problem extends to the reality that Company X is not the same as Companies Y, Z, F or Q, each of whose infrastructures needs to be configured for its own operational needs, and each of which relies on key strengths that require specific tuning of every device before it is installed into the infrastructure.
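To make the point concrete, here is a minimal sketch of a pre-deployment audit that flags settings still at their ‘Company X’ factory values. The setting names and defaults are hypothetical illustrations, not any vendor’s actual configuration; real baselines come from the vendor’s hardening guides.

```python
# Hypothetical factory defaults a device might ship with; a real audit
# would load these from a vendor- or benchmark-supplied baseline.
FACTORY_DEFAULTS = {
    "admin_user": "admin",
    "admin_password": "admin",
    "snmp_community": "public",
    "telnet_enabled": True,
}

def audit_device(config: dict) -> list:
    """Return a finding for each setting still at its factory default."""
    findings = []
    for key, default in FACTORY_DEFAULTS.items():
        if config.get(key) == default:
            findings.append(f"{key} is still set to the factory default")
    return findings

# Example: a device dropped into the server room completely unchanged.
device = {
    "admin_user": "admin",
    "admin_password": "admin",
    "snmp_community": "public",
    "telnet_enabled": True,
}
for finding in audit_device(device):
    print(finding)
```

The design point is that the audit runs before the device joins the infrastructure, so Companies Y, Z, F and Q each tune against their own baseline rather than trusting Company X’s.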
Based on current statistical analyses by analysts, vendors and independent security focus groups, the following issues appear on most ‘Top Ten’ lists:
- Obsolete Code
- Patch Exploits
- Server Exploits (especially Apache)
- Application Exploits (SQLi, XSS, buffer overflows)
- Unauthorised Access Control
- Authentication Inconsistencies
- Bugs & Non-secure Code
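To illustrate the ‘Application Exploits’ item above, here is a minimal sketch of SQL injection using Python’s standard sqlite3 module: a string-built query trusts attacker input, while a parameterised query binds it as a literal value. The table and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the payload rewrites the WHERE clause and returns every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Safe: the driver binds the payload as a plain value; nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(vulnerable), len(safe))  # prints: 2 0
```

The same bind-don’t-concatenate discipline is what separates ‘non-secure code’ from code that survives hostile input.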
One prudent approach is to look at risk management as a process-driven strategy: one that begins by identifying the assets that need protection, and then follows a three-stage, process-driven methodology called Adaptive Risk Management (ARM).
Think about Adaptive Risk Management as if the critical assets of the organisation were the payload of a rocket, with the ARM representing three stages of that rocket.
Stage One: Evaluate current policies and access to defined ‘assets’
In Stage One, we examine the ‘Policy’ sector of risk management. An effective operational policy is more than words on paper; it represents a work style – sort of the ‘Prime Directive’ for how businesses safely grow from Point A to Point Z (without fear of compromising the integrity of their assets along the journey).
While boards of directors are increasing pressure on their executives to understand how and why ‘Risk Management’ is becoming a business-logic-based tab in the organisation’s investment portfolio, CIOs (chief information officers) and risk officers find themselves scrambling more and more to justify the cost of protecting their complex infrastructures from potential apocalypses that may never occur (but might).
One place to start finding answers is in how the organisation weighs its efforts in establishing comprehensive operational risk management policies.
A robust operational policy is not only essential to securing the assets and balancing productivity with security, it becomes part of every security compliance mandate and framework (or should), considering each operational commonality, and also including a contingency plan for when (not ‘if’) something falls out of orbit.
We also need to be sure to understand what it is we are defining as those assets critical to the success of the organisation.
In the realm of ‘Security Operational Policies’, perhaps the best success in reducing risk can be measured by first authoring, monitoring and enforcing a strong policy on the ‘whos’ and ‘hows’ of accessing, managing and protecting operational assets.
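One way to make those ‘whos’ and ‘hows’ enforceable is to write them down as data rather than prose. The sketch below uses hypothetical asset and role names; the point is the deny-by-default shape, not the specific entries.

```python
# A 'whos and hows' policy expressed as data: asset -> role -> allowed actions.
# Asset and role names are invented for illustration.
POLICY = {
    "customer_db": {"dba": {"read", "write"}, "analyst": {"read"}},
    "payroll":     {"hr":  {"read", "write"}},
}

def is_permitted(role: str, asset: str, action: str) -> bool:
    """Deny by default: allow only what the written policy grants."""
    return action in POLICY.get(asset, {}).get(role, set())

print(is_permitted("analyst", "customer_db", "read"))   # True
print(is_permitted("analyst", "customer_db", "write"))  # False
print(is_permitted("dba", "payroll", "read"))           # False: never granted
```

Because the policy is data, monitoring and enforcement become the same check, and an auditor can read the grants directly instead of inferring them from configuration scattered across devices.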
Over the last several years, a significant number of compliance requirements have emerged, making ‘Governance, Risk and Compliance’ (GRC) a top priority in many organisations. Numerous hardware/software vendors and consulting firms have emerged as market-driven point-solution providers (as long as those ‘solutions’ map to the products and extended services they sell).
Organisations that need to update (or even author) a security policy, however, may discover that implementing the new requirements is difficult and costly, and often requires sifting through hundreds of pages of ‘findings’ and implied needs for more technology.
These trends are not only NOT the solution; they often open more doors to further risk and threat when the operational goals and risks facing Companies Y, Z, F and Q (you get the idea) are not properly considered.
In our third and final instalment, we will conclude the conversation about the Adaptive Risk Management methodology, and offer a few suggestions for how to move your organisation’s risk management initiatives from the ‘idea’ phase to the boardroom.
Drew Williams is the founder and CEO of international risk management consulting services firm Condition Zebra. He has also worked with the Internet Engineering Task Force and served on the 1999-2000 President’s Partnership for Critical Infrastructure Security (precursor to the Department of Homeland Security). He is a former member of the US Navy.
Related: 21st Century Risk Management Part 1: Managing risk means taking risks