Friday, November 6, 2015

Is Your Objective Risk Assessment Methodology Really Objective? Really?

Introduction

I hear a lot about Risk Assessment Methodologies (RAMs) and making risk assessment objective these days.  Let me pass on some lessons learned in a previous attempt to make risk objective.

Bucketing

Most organizations that attempt to make risk objective do what I affectionately call 'bucketing'.  This is when you create buckets which define a risk and then assign them values.  For example, the Common Vulnerability Scoring System (CVSS) from FIRST defines a fixed set of buckets: exploitability and impact metrics, each with a handful of possible values.

You may use other values as well, such as whether a component is widely used or which team runs it.  I call this Bucketing: risks fall like raindrops all over the map, and you set out buckets to try to make sure each one lands somewhere you can score it from.
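
To make that concrete, here's a minimal sketch of how a bucketing-style score gets computed. The factor names, bucket values, and the simple sum are hypothetical illustrations of the pattern, not the actual CVSS formula:

```python
# A hypothetical bucketing scheme: each factor has a fixed set of values and
# the score is a simple sum. Illustrative only, not the CVSS math.
BUCKETS = {
    "exploitability": {"easy": 3, "moderate": 2, "hard": 1},
    "exposure":       {"internet": 3, "internal": 2, "isolated": 1},
    "impact":         {"high": 3, "medium": 2, "low": 1},
}

def bucket_score(choices):
    """Look up the bucket the analyst picked for each factor and add them up."""
    return sum(BUCKETS[factor][value] for factor, value in choices.items())

print(bucket_score({"exploitability": "easy",
                    "exposure": "internet",
                    "impact": "high"}))   # 9 -- one number from a few fixed choices
```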

The Gotchas with Bucketing

Bucketing appears very alluring, but in practice it simply does not work the way its creators expected it to.  Like all good security professionals, the users of the RAM game it to their own ends.  The list below covers some of the tricks that occur in RAMs based on bucketing.
  1. All scores the same.  See Michael Roytman's talk where he discusses issues with CVEs; pay particular attention to the issues with separation of scores.  If you use some form of bucketing for your RAM, you'll have a lot of options, but in practice you'll only use a few of them, which means you'll only get a few unique scores.  (There's a small sketch of this after the list.)
  2. No matter how many buckets, you'll never have enough.  The reason you'll have those unused buckets is the rain problem: no matter how many buckets you set out, you'll never capture every raindrop.  How that applies here is fairly straightforward.  You will start with a set of buckets such as those in CVSS.  But you will find that some risks are falling between the buckets, so you will make more, smaller buckets.  Then you will have so many buckets that no one can remember how they are different, so you'll combine buckets.  And no matter how many times you split or combine buckets, risks will always fall through the cracks between them.
  3. The JG Memorial Fire Axe.  This is a story of relative risk.  I was once told a power button on a mainframe was a HIGH RISK if the door to the mainframe wasn't locked.  However, an astute engineer pointed out that there was a fire axe on the wall and he could just as easily cut the cables.  In fact, anyone with physical access had ample ability to cause the mainframe to fail.  The point was that it was not the absolute risk of having a power button, but the relative change in aggregate risk that the risk causes.  Bucketing systems are simply not designed to handle the interplay of multiple risks.
  4. Back-Engineering.  This is the biggest problem with Bucketing.  The reality is you have a risk analyst in the process.  No matter how objective you make the rest of it, the analyst will look at the risk and immediately decide how significant it is.  From there, if the buckets they assigned don't add up to what they want (such as getting a 'low' score for a risk they thought should be 'high'), they'll simply change them to something that could still be true but makes the score come out the way they want.  After getting some experience with the RAM, they will get very good at back-engineering, to the point where every risk comes out with the score they want it to have, not the score the RAM originally suggested for it.
  5. Hypotheticals.  What enables Trick 4 is the fact that the score represents more than just an atomic risk.  Instead, it represents an entire context.  Take, for example, a SQL injection in a webapp.  That simply doesn't tell you enough to understand the risk; instead you have to make decisions about how easily it could be exploited, what its exposure is, and so on.  The analysts assigning the risks may discuss and decide that the SQLi is really hard to exploit because it is blind with no debug information.  Another analyst may say "well, but if they knew the entire DB schema, it would be easy to exploit, and we can't prove they don't know the DB schema," so the group ranks it "easy" to exploit and it is listed as a 'high' risk.  This leads to Trick 6.
  6. No Documentation.  Because the buckets are self-documenting, right?  No need to write down that a hypothetical discussion changed the risk.  No need to capture that it came out a 'low' risk in the first analysis.  The buckets documented the risk and, therefore, no additional context need be documented.  This almost ensures the score will not be repeatable.  It does, however, suggest a better way.
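
Before moving on, here's the small sketch promised in Trick 1. The factors, values, and the distribution of analyst choices are all made up; the point is only that when most factors collapse to the same one or two answers, very few distinct scores survive:

```python
# A made-up simulation of Trick 1: the scheme allows 27 combinations, but when
# analysts' choices collapse to one or two answers per factor, only a handful
# of distinct scores ever appear. All weights here are invented.
import random
from collections import Counter

random.seed(0)

VALUES = {
    "exploitability": {"easy": 3, "moderate": 2, "hard": 1},
    "exposure":       {"internet": 3, "internal": 2, "isolated": 1},
    "impact":         {"high": 3, "medium": 2, "low": 1},
}

def score(choices):
    return sum(VALUES[factor][value] for factor, value in choices.items())

def typical_choices():
    # Most factors default to the same one or two answers in practice.
    return {
        "exploitability": random.choices(["easy", "moderate"], weights=[8, 2])[0],
        "exposure": "internet",   # "we can't prove it isn't reachable"
        "impact": random.choices(["high", "medium"], weights=[7, 3])[0],
    }

scores = [score(typical_choices()) for _ in range(500)]
print(sorted(Counter(scores).items()))   # only three distinct scores across 500 risks
```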

Other Options: Capturing Context

The good news is that the fix is a single word: CONTEXT.  Almost all of these problems stem from a lack of documentation of the risk's context.  To alleviate them, you simply need to document the context.  The easiest way is to write down, in narrative form, all the steps you see happening in the exploitation of the risk, what the impact would be, and the assumptions you've made.  Something like:
The attacker decides they want to attack us.  They've watched youtube videos on hacking and have downloaded Kali, but not much else.  They run a scanner against our webapp which returns the login and password in comments in the code.  They login and copy and paste a SQLi into the DB query form on the admin page which returns a file with the entire database.  They take it and post it on pastebin.
My experience has been that, once you fully document the context (how everyone is thinking of the risk), subjective scores tend to converge.  As such, you could simply ask your analysts to score the risk on a scale of 0-<TOP>, where 0 is cannot happen/no impact and <TOP> is will happen/the greatest impact they can think of.
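
One way to keep that context attached to the score is to store it with the risk record itself. Here's a minimal sketch; the field names and the use of a median are my own illustrative choices, not a prescribed format:

```python
# A minimal sketch of keeping the narrative, assumptions, and analyst scores
# together so the score stays repeatable and explainable.
from dataclasses import dataclass, field
from statistics import median
from typing import Dict, List

@dataclass
class RiskRecord:
    title: str
    narrative: str               # the attack story, step by step
    assumptions: List[str]       # anything the story takes for granted
    scale_top: int               # the <TOP> of the 0-<TOP> scale in use
    analyst_scores: Dict[str, int] = field(default_factory=dict)

    def consensus(self):
        """Score once everyone has read (and argued over) the same context."""
        return median(self.analyst_scores.values())

record = RiskRecord(
    title="Blind SQLi in the admin query form",
    narrative=("Attacker finds credentials in page comments, logs in, pastes "
               "a SQLi into the admin query form, and exfiltrates the database."),
    assumptions=["Attacker does not already know the DB schema",
                 "Admin page is reachable from the internet"],
    scale_top=10,
    analyst_scores={"analyst_a": 7, "analyst_b": 6, "analyst_c": 7},
)
print(record.consensus())   # 7 -- and the 'why' is written down next to it
```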

Another approach to consider is the Thomas Scoring System.  Russell Thomas has put a large amount of work into it and it is a very good way of not only capturing context but also linking that context to your score.  You can watch the video explaining it as well as read the blog or just download the tool he's created!  (I'd recommend watching the video, or at least skimming it, then downloading the tool.)

And there are even ways to improve on capturing context; however, we'll leave those for another post.

It Could Be Worse Than Bucketing

And though this post isn't about them, there are things even worse than Bucketing.  "Not Tracking Risk," for example.  Or using the tool report as your risk report.  Taking what your vulnerability scanner told you at face value is the quickest way to have leadership ignore you when you bring in the 1,500-page report listing 10,000 high risks.

Conclusion

In the end, the push towards objective risk management is a good thing.  That said, we have a long way to go.  If you make it to Bucketing, good for you.  It's a step in the right direction.  But don't consider the job done.  You're much better off taking the next small step: risk based on context!

Monday, October 26, 2015

Apprenticeship and Infosec

So how do you learn to be an infosec professional? Honestly, most of the leaders in the field these days were the stuckie (i.e., the one who didn't say "not it" quickly enough) in the office when a security person was needed. While infosec academic programs exist, the reality is that almost all security positions require experience. As a friend put it on Twitter, "Schools can do a great job on teaching technology, but methodology and process require more than book knowledge." Now, even strategy and process can be taught academically, as military and criminal justice degrees show. But the reality is, even after completing basic training or your criminal justice degree, you're still a rookie.

The reality is that infosec is much more like a traditional profession than the newer technical professions.  And most traditional professions are based around apprenticeship.  If you were going to DO something in the old days, you didn't learn it so much in the university as in the back of a current practitioner's shop.

And that's still the case in many careers.  A welder who can weld what others cannot is valuable.  Air traffic controllers have a median pay of $122k/year.  Even in highly educated careers, the education is really just an introduction to the on-the-job training.  Medical doctors have residencies.  Engineers must study under a Professional Engineer to become one.  Teachers student teach.  Nurses precept.

So what about information security?  What really needs to be taught in a classroom?  Probably the basic controls and technology, though not in any depth as it'll have changed by the time the student enters the field anyway.  Probably general strategies and some basic processes.  After that though, why is there not a formal, controlled, apprenticeship process for information security as there is for so many other fields?  Why do infosec students not practice engineering security, working incidents, and gathering intelligence the same way doctors practice internal medicine, surgery, and triage medicine?  We all know the apprenticeship is happening one way or another, so why not formalize it? 

Marisa Fagan suggested a mentorship program almost half a decade ago and not much has changed since then.  Still, we've all matured a bit.  We now understand the importance of working with the large, existing institutions where we used to go it alone.  Maybe it's time to make apprenticeship an expected and formally defined part of the information security curriculum.

Tuesday, September 29, 2015

No Average Breach Timeline

Over at the Verizon Security Blog, I just published a new post: Incident Discovery and Containment: Average is Over.  In it I explain a little bit about discovery and containment times of incidents and breaches in the DBIR.  One big caveat: these aren't just criminal orgs installing malware or nation-state espionage.  They also include common mistakes and misuse.  (For example, if you just look at Cyber-Espionage pattern breaches, you find that the median time to discovery is 120 days.)

Thursday, August 6, 2015

Verum: How Skynet started as a context graph (bSides Las Vegas 2015)

Tuesday, I spoke at bSides Las Vegas in a talk titled Verum: How Skynet started as a context graph.  I covered two things in the talk: first, the problem infosec defense is dealing with; second, a machine learning algorithm and implementation called Verum that can put any piece of data in context and use that context to reason about a topic, coming to a conclusion with a given confidence.  I ended on a word of warning about general AI and letting algorithms get too smart.  If that's something you might find interesting, take a look!  I've also posted the slides for those who'd like them.

UPDATE:
I've also uploaded the subgraph memory that was retrieved during the demo.  This is the subgraph around IP 142.0.37.68.  Hopefully this allows people to have a visual anchor for what the algorithms were doing.
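
For those curious what "the subgraph around an IP" means in practice, here's a minimal sketch of that kind of query using networkx and made-up neighbors. It is not the Verum implementation, just the shape of the operation:

```python
# Pull the neighborhood subgraph around a node of interest from a larger
# context graph. The domain, hash, ASN, and second IP are placeholders.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("ip:142.0.37.68", "domain:evil.example.com"),
    ("domain:evil.example.com", "md5:d41d8cd98f00b204e9800998ecf8427e"),  # placeholder hash
    ("ip:142.0.37.68", "asn:AS64496"),
    ("asn:AS64496", "ip:198.51.100.7"),   # a neighbor two hops out
])

# Everything within two hops of the IP, plus the edges between those nodes.
context = nx.ego_graph(g, "ip:142.0.37.68", radius=2)
print(sorted(context.nodes()))
```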

Monday, July 27, 2015

Internal vs External Breach Detection

You may not believe this, but there are some serious differences between breaches discovered internally and those discovered externally.  You can see them in my new post over at the Verizon security blog!

Thursday, July 23, 2015

Twitter for Infosec

While a lot of people discuss infosec on Twitter and in other forums, they are dwarfed by the number of people who work in infosec but do not participate in the community.  This post, Twitter for Infosec, is for all those people working in infosec who have wondered about Twitter but aren't quite sure how to get started.

And don't forget Trey Ford's Blackhat Attendee Guide for those jumping into the deep end that is Infosec Summer Camp.

Wednesday, July 15, 2015

DBIR The Missing Section: Phishing

Go check out my new post at the Verizon security blog: DBIR The Missing Section: Phishing.  TL;DR: Yeah, lots of espionage and criminal activity for financial gain and stealing secrets.  But what's surprising is that exfiltration takes days, so even though the first email is clicked in 82 seconds on average, you still have time to do something about it!

Saturday, June 20, 2015

Diminishing Returns on Mitigations

So now that I have the DBIR Attack Graph, I wanted to test something out.  How does the shortest attack path from start to end change when you mitigate things in the graph?  The short answer is that it plateaus quickly, probably because there is always a direct connection from some action to some attribute.  Ultimately, that means you need to pick the attributes you're protecting, not try to stop everything.  Check out the full analysis in this blog post on the Verizon Security Blog.
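
If you want to play with the idea yourself, here's a toy version of the experiment (a made-up graph, not the DBIR attack graph): repeatedly mitigate the first action on the current shortest path and watch how little the path length moves, since some action always connects directly to an attribute:

```python
# Toy attack graph: "mitigate" the first action on the current shortest path
# each round and recompute. The path length barely moves for several rounds.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("start", "phishing"), ("start", "sqli"), ("start", "stolen_creds"),
    ("start", "malware"),
    ("phishing", "confidentiality"), ("sqli", "confidentiality"),
    ("stolen_creds", "confidentiality"),
    ("malware", "credentials"), ("credentials", "confidentiality"),
])

for mitigations in range(6):
    if not nx.has_path(g, "start", "confidentiality"):
        print(mitigations, "no path left")
        break
    path = nx.shortest_path(g, "start", "confidentiality")
    print(mitigations, len(path) - 1, path)
    g.remove_node(path[1])    # mitigate the first action on the shortest path
# Typical output: the path length stays at 2 for several rounds before changing.
```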

Monday, June 15, 2015

Privacy was a Passing Fad

The breach of OPM has a lot of people angry and scared about their privacy.  That's not surprising.  The federal government keeps a lot of information on its employees, and even more on those with clearances.  Meanwhile, large companies have massive amounts of data on us that we only implicitly shared.  Companies like Google simply profile what we do.  Other companies make a business of collecting information about us without our knowledge or consent.

Before Privacy


We think of privacy as an implicit right; however, it has a rather short history. Let's consider specialization and the division of labor.  Looking back through history, we can see that specialization was what made society possible. Specialization was intrinsically tied to the agricultural revolution: once a single person was able to provide food for many through farming, other people in the community were free to specialize.  This in turn allowed the formation of complex societies.

http://allempires.com/empires/rome133bc_14ad/senate_painting.jpg

It also, for the first time, allowed people to survive without contributing, freeloading so to speak. As such, it makes sense that those who sought privacy would be ostracized for not contributing to society. Tight social cohesion was seen as a priority and privacy was looked down upon.

http://www.1st-art-gallery.com/Jacob-I-Slavery/A-Village-Fair.html

Prior to the industrial revolution, communities were local and unable to scale significantly due to transportation and population density constraints. In such a world, it was nearly impossible to hide one’s actions. Housing would be small enough that most actions would occur outside of the home or with another family member present. Larger houses would have many staff within them that would be aware of all occurrences in the home. People would have to deal directly with their neighbors for goods and services, ensuring news spread from party to party.

The Dawn of Privacy


The industrial revolution brought with it a new trend. With the ability to support highly dense population centers and menial jobs requiring no special skills, people became interchangeable. You didn't need the person, you needed a person. As such there was less care about any specific person. As people traveled away from their ancestral roots to work, they began living in dense areas where they might have a small apartment all to themselves, allowing for complete privacy within their walls. There was no need to know your neighbors. There was no need to know those that provided you services. People were a cog in the machine of industry.

https://apeurohist.wikispaces.com/Industrial+Revolution

As efficient transportation became more available, it allowed people to spread out into the suburbs, increasing their isolation (almost in homage to C.S. Lewis’s The Great Divorce). Now a person could have a nice house and an acre of land of their own. They could buy their supplies at the supermarket without the need to ever learn about the people they interacted with. Their work life and family life were so physically separate that they could be two completely different and even incompatible lives.

http://www.ushistoryscene.com/uncategorized/levittown

This created a culture in which anything you could hide was OK, which led to the idea that anything that didn't hurt others was OK. It also led to the social norm that if you were caught doing something, it was implicitly nearly unforgivable. This is where we are in society today: anything that doesn't hurt others is OK, but if anyone finds out about it, it is implicitly so bad it must follow you forever.

The End of Privacy


We are now leaving this golden age of privacy due to the massive amount of data that has been and is being collected, as well as the tools that have become available to analyze it.  What these tools and data stores share is that they are meant to provide context that would otherwise not be known.  For an employer (such as OPM), they provide a context for an employee that helps the employer interact with, or make decisions about, that employee.  These tools can provide very helpful services, such as Google Now, Microsoft Cortana, Apple Siri, and Amazon Alexa.  They can also be used against users, such as by collecting information to sway a person toward decisions they would not otherwise make, or by making life-changing decisions about a person simply based on how an algorithm classifies them.

http://www.theverge.com/2014/4/2/5570866/cortana-windows-phone-8-1-digital-assistant

And the Internet of Things will only accelerate this situation.  The additional information provided through sensors on our bodies, in our homes, and always around us will allow a more complete determination of our context than ever before.  It is not something you will be able to get away from.  The power company will install a smart power meter.  Your TV will be connected to the internet.  And right now, count how many microphones are listening to you.  (Don't forget your smartphone and your laptop.)  It is naive to think that, once collected, this data will not affect us.  Whether it is a company going bankrupt, a breach, or simply the explicit use of the data, it has just as effectively robbed us of our privacy as if our neighbors, church, government, or complete strangers were aware of our every move.

How we Must Face This Reality



This is not something law will solve. Any law would invariably not outlaw such systems, but instead simply limit who was allowed to have them. They would be restricted to the government, which makes the rules, and to the corporations that effectively lobby for the right to maintain their own context graphs. Instead, the technology should be made available to the general public. While no one person has the resources to build the big data systems available to large organizations and the government, tools may be distributed among many small, separately managed data stores and still be effective, allowing a population to band together to build a data source equivalent to those maintained by the government. This will not return anyone’s privacy, but it will give everyone a consistent understanding of the level of privacy they have.

http://upload.wikimedia.org/wikipedia/commons/2/29/Bernard_d%27Agesci_La_Justice.jpg

To deal with this new reality, we are going to have to return to the principles that guided life before privacy. I believe this can be broken down into three fundamental principles:
  1. People should be productive members of society. 
  2. People should not do things they would be embarrassed about others knowing. 
  3. People should forgive others for their imperfections. 
I don't think it is surprising that these are all core tenets of most major religions. While the temporary availability of privacy has allowed these principles to become less important to a functioning society in recent history, they have never been wrong. Every person should contribute to society commensurate with his or her ability; this is the cornerstone of the very definition of a society. While the industrial revolution may have insulated people from the consequences of their actions, that is unlikely to continue. People will have to step up and take responsibility for what they do, even if they are not caught doing it. The simplest way to avoid this is to not do things you are not willing to take responsibility for. It is rare that something worth doing is not also worth taking responsibility for. Finally, forgiveness must again become a tenet of society. We cannot hold people’s imperfections against them, as all people are imperfect. Instead we must work to compensate for others' weaknesses, forgive their mistakes, and support their strengths. In doing so, we will build a better society that does not use privacy to hide its failings but uses truth to cement its future.