Friday, November 6, 2015

Is Your Objective Risk Assessment Methodology Really Objective? Really?


I hear a lot about Risk Assessment Methodologies (RAMs) and making risk assessment objective these days.  Let me pass on some lessons learned in a previous attempt to make risk objective.


Most organizations that attempt to make risk objective do what I affectionately call 'bucketing'.  This is when you create buckets that define a risk and then assign them values.  The Common Vulnerability Scoring System (CVSS) from FIRST is a good example: each metric (Access Vector, Access Complexity, and so on) takes one of a fixed set of values, and those values are combined into a score.

You may use other attributes as well, such as whether a component is widely used or which team runs it.  I call this Bucketing: risks fall like raindrops all over the map, and you set out buckets to catch them so that each risk lands in one from which it can be given a score.
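To make bucketing concrete, here is a minimal sketch of a bucket-based score.  The metric names, bucket values, and additive formula are hypothetical illustrations of the idea, not the actual CVSS formula.

```python
# A minimal sketch of 'bucketing': each risk attribute is forced into one of a
# few fixed buckets, and the bucket values are combined into a single score.
# Metric names, values, and the additive formula are all hypothetical.

BUCKETS = {
    "exploitability": {"easy": 3.0, "moderate": 2.0, "hard": 1.0},
    "exposure":       {"internet": 3.0, "internal": 2.0, "local": 1.0},
    "impact":         {"high": 3.0, "medium": 2.0, "low": 1.0},
}

def score(risk: dict) -> float:
    """Sum the bucket values chosen for each attribute of the risk."""
    return sum(BUCKETS[metric][choice] for metric, choice in risk.items())

sqli = {"exploitability": "easy", "exposure": "internet", "impact": "high"}
print(score(sqli))  # 9.0
```

Every risk must be squeezed into one value per metric before it can be scored, which is exactly where the trouble described below begins.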

The Gotchas with Bucketing

Bucketing appears very alluring, but in practice it simply does not work the way its creators expected it to.  Like all good security professionals, the users of the RAM game it to their own ends.  The list below covers some of the tricks that occur in RAMs based on bucketing.
  1. All scores the same.  See Michael Roytman's talk where he discusses issues with CVE scoring.  Pay special attention to the issues with separation of scores.  If you use some form of bucketing for your RAM, you'll have a lot of possible bucket combinations, but in practice only a few of them will get used, which means you'll only get a few unique scores.
  2. No matter how many buckets, you'll never have enough.  The reason you'll have those unused buckets is the rain problem: no matter how many buckets you set out, you'll never capture every raindrop.  How that applies here is fairly straightforward.  You will start with a set of buckets such as those in CVSS.  But you will find that some risks fall between the buckets, so you will make more, smaller, buckets.  Then you will have so many buckets that no one can remember how they differ, so you'll combine buckets.  And no matter how many times you split or combine buckets, risks will always fall through the cracks between them.
  3. The JG Memorial Fire Axe.  This is a story of relative risk.  I was once told a power button on a mainframe was a HIGH RISK if the door to the mainframe wasn't locked.  However, an astute engineer pointed out that there was a fire axe on the wall and he could just as easily cut the cables.  In fact, anyone with physical access had ample ability to cause the mainframe to fail.  The point was that what mattered was not the absolute risk of having a power button, but the relative change it caused in aggregate risk.  Bucketing systems are simply not designed to handle the interplay of multiple risks.
  4. Back-Engineering.  This is the biggest problem with Bucketing.  The reality is you have a risk analyst in the process.  No matter how objective you make the rest of it, the analyst will look at the risk and immediately decide how significant it is.  From there, if the buckets they assigned don't add up to what they want (such as getting a 'low' score for a risk they thought should be 'high'), they'll simply change them to something that could still be true but makes the score what they want it to be.  After getting some experience with the RAM, they will get very good at back-engineering, to the point where every risk comes out with the score they want it to have, not the score the RAM originally suggested for it.
  5. Hypotheticals.  What enables Trick 4 is the fact that the score represents more than just an atomic risk.  Instead, it represents an entire context.  Take for example a SQL injection in a webapp.  That alone doesn't tell you enough to understand the risk; you have to make decisions about how easily it could be exploited, what its exposure is, and so on.  The analysts assigning the risk may discuss it and decide that the SQLi is really hard to exploit because it is blind and returns no debug information.  Another analyst may say "well, but if they knew the entire DB schema, it would be easy to exploit, and we can't prove they don't know the DB schema", so the group ranks it "easy" to exploit and it is listed as a 'high' risk.  This leads to Trick 6.
  6. No Documentation.  Because the buckets are self-documenting, right?  No need to write down that a hypothetical discussion changed the risk.  No need to capture that it came out a 'low' risk in the first analysis.  The buckets documented the risk and, therefore, no additional context need be documented.  This almost ensures the score will not be repeatable.  It does, however, suggest a better way.
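Trick 1 can be illustrated with a short simulation.  A hypothetical scheme with three metrics of three values each offers 27 combinations on paper, but if analysts habitually reuse only a few of those combinations, the assessed portfolio collapses to a handful of unique scores.  The values below are purely illustrative.

```python
from itertools import product

# Three hypothetical metrics, each with three possible bucket values.
values = {"easy": 3, "moderate": 2, "hard": 1}
all_combos = list(product(values.values(), repeat=3))
print(len(all_combos))  # 27 possible bucket combinations on paper

# In practice analysts tend to reuse the same few habitual patterns:
habitual = [(3, 3, 3), (3, 3, 2), (1, 1, 1)]
portfolio = [sum(combo) for combo in habitual * 50]  # 150 assessed risks
print(sorted(set(portfolio)))  # only 3 distinct scores: [3, 8, 9]
```

With so little separation between scores, prioritizing one 'high' over another becomes guesswork, which is exactly the separation problem Roytman describes.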

Other Options: Capturing Context

The good news is that the fix is a single word: CONTEXT.  Almost all of these problems stem from a lack of documentation of the risk's context.  To alleviate them, you simply need to document that context.  The easiest way is to write down, in narrative form, all the steps you see happening in the exploitation of the risk, what the impact would be, and the assumptions you've made.  Something like:
The attacker decides they want to attack us.  They've watched YouTube videos on hacking and have downloaded Kali, but not much else.  They run a scanner against our webapp which returns the login and password in comments in the code.  They log in and paste a SQLi into the DB query form on the admin page, which returns a file with the entire database.  They take it and post it on pastebin.
My experience has been that, once you fully document the context (how everyone is thinking about the risk), subjective scores tend to converge.  As such, you could simply ask your analysts to score the risk on a scale of 0-<TOP>, where 0 is cannot happen/no impact and <TOP> is will happen/the greatest impact they can think of.
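The context-first approach can be sketched as a small record that keeps the narrative next to the analysts' subjective scores, then reports the median score and the spread.  The field names, analyst labels, and the 0-10 scale here are assumptions for illustration.

```python
from statistics import median

# Keep the narrative context and the subjective scores together in one record,
# so the reasoning behind the score is always documented and repeatable.
risk = {
    "title": "SQLi in admin DB query form",
    "context": ("Attacker finds login credentials in page comments, logs in, "
                "and pastes a SQLi into the admin query form, dumping the DB."),
    "scores": {"analyst_a": 7, "analyst_b": 8, "analyst_c": 7},  # 0-10 scale
}

vals = list(risk["scores"].values())
print(median(vals))           # 7 - the score to report
print(max(vals) - min(vals))  # 1 - small spread: analysts largely agree
```

A wide spread is itself useful information: it signals that the analysts are imagining different contexts and that the narrative needs more detail.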

Another approach to consider is the Thomas Scoring System.  Russell Thomas has put a large amount of work into it and it is a very good way of not only capturing context but also linking that context to your score.  You can watch the video explaining it as well as read the blog or just download the tool he's created!  (I'd recommend watching the video, or at least skimming it, then downloading the tool.)

And there are even ways to improve on capturing context, however we'll leave those for another blog.

It Could Be Worse Than Bucketing

And though this post isn't about them, there are things even worse than Bucketing: not tracking risk at all, for example, or using a tool's report as your risk report.  Taking what your vulnerability scanner told you at face value is the quickest way to have leadership ignore you when you bring in the 1,500-page report listing 10,000 high risks.


In the end, the push towards objective risk management is a good thing.  That said, we have a long way to go.  If you make it to Bucketing, good for you.  It's a step in the right direction.  But don't consider the job done.  You're much better off taking the next small step: risk based on context!
