17 September 2013

Implementation and Audit Concerns with Langner’s RIPE Framework



“There is nothing novel here and, to be honest, [Langner] includes many statements that I think are inaccurate or that I'd like to see him support with evidence,” an anonymous source states in response to Ralph Langner’s proposed ICS/SCADA security framework, which he calls the Robust ICS Planning and Evaluation framework, or RIPE. Langner believes RIPE to be a better approach to ICS security than NIST’s current draft Cyber Security Framework (PDF).

My source disagrees, and she has the experience to back it up: 30 years of government auditing, performance measurement, and total quality management (TQM), with the last five spent overseeing IT and ICS audits. She is also a former colleague of mine, and I can personally vouch for her attention to detail and her longevity in audit work.

“I’d like to see credible, verifiable support for his claim that the objective of corporate risk management is to minimize cost, not risk.” That objective, she states, is a huge assumption on Langner’s part. If the objective of risk management is in fact to minimize cost, why is it not dubbed cost management?

Furthermore, the “quality” performance measures listed in RIPE are not all quality, or effectiveness, measures. “Just because you call an input, output, or efficiency performance measure a ‘quality measure’ doesn't make it so. That mistake is unforgivable given Langner’s heavy emphasis on evaluation,” she explains. In essence, for a measure to be a quality measure, it must be part of a standardized set of technical specifications that define how to calculate quality. Quality, in terms of ICS, is a fuzzy concept. For purposes of ICS security, quality measures must come back to the basics: confidentiality, integrity, authorization, and, most importantly, availability.

It is no big secret in the ICS community, and the larger IT community, that component manufacturers almost completely forgo security. In many cases, software developers follow suit, expecting that customers will have installed third-party or standalone security solutions. The Department of Homeland Security and the ICS-CERT make this point clear in their various training programs. While it may be true that security-heavy procurement specs could force manufacturers to pay closer attention to component security, they could also significantly increase the cost of the products themselves, going against Langner’s own view of risk management as cost minimization. Additionally, lead times for component development might lengthen significantly should manufacturers properly implement security planning, integration, and testing processes.

Langner’s comparison of his framework to Sarbanes-Oxley (SOX) compliance is also troublesome. As most security professionals will note, compliance rarely equals security. If anything, compliance is simply a low-water mark. Perhaps more troubling, however, is the cost associated with ensuring SOX (and other) compliance standards. Again, if Langner is so interested in reducing costs for plant operators, why push for an expensive compliance process of questionable effectiveness at best? It would appear, instead, that RIPE, like SOX compliance, is actually an expensive process-driven solution that fails to focus on risk.

Her biggest grievance, however, is Langner’s claim that “If security characteristics of a specific plant are documented properly and accurately… third party experts can assess the security posture of a given plant without actually going there.” From an audit perspective, failure to verify controls is an unforgivable error. In no instance should a security assessor perform a risk assessment on paper only. The bottom line for any assessment is to trust but verify. Remote assessments fail to implement the vital second half of that rule.

Her comments regarding his framework are not all negative. In our own training efforts, she and I both stress some of the same points Langner stresses, especially knowledge of the control environment, up-to-date asset inventories, and continuous monitoring. Indeed, the eight domains that Langner identifies, including accurate and up-to-date inventories and topologies, established policies, and security training for staff, are old hat. Each of the controls in RIPE can be tied to an equivalent control in the SANS 20 Critical Security Controls.

“I like the voice he used in the piece; it’s almost sarcastic. And I really like his term ‘cyber ecosystem,’” she says. She also notes her appreciation for his recognition of the air gap myth. While it is not a new idea, having been discussed by Eric Byres, Éireann Leverett, and Billy Rios, the air gap myth has not been fully accepted by the ICS community at large.

In the end, the RIPE measures are not a well-balanced family of measures, a shortcoming that my source, as an auditor, and I, as a security professional, cannot forgive. If the measures are indeed unbalanced or misleading, then Langner’s notion of meaningful benchmarking is jeopardized. That is, if benchmarking were even to occur, which is questionable given the sensitive, guarded nature of ICS vulnerabilities.