Wednesday, July 16, 2014

Security to Serve, not to Subjugate

A recurring theme in information security (and in many other disciplines that cut across verticals) is "I could solve the problem if I could just get everyone to follow a few simple rules."  We know, and they may even agree, that the simple rules are good practices worth following.  However, the rules are rarely followed.  When they are followed, any adversity causes them to fall by the wayside, and no one is particularly happy to follow them.

The fact is, even if we are benevolent rulers imposing a light burden, we are still acting as authorities over other groups in our organization.  Authority is rarely appreciated, regardless of the burden.  If we truly want the support of our organization, we need to serve it, not rule over it.  But how do we provide security through service?

A Model for Service
With a little adaptation, the Center of Excellence (CoE) model can provide cross-vertical competencies through service to the organization.  Our CoE will have three goals (services it provides):

  1. Evaluate Quality - The CoE will provide a repeatable approach to evaluating how well other groups in the organization are doing at infosec.
  2. Lessons Learned Sharing - The CoE will collect lessons learned about infosec from groups across the organization and distribute them to the rest of the groups.
  3. Support Execution - The CoE will support the execution of infosec in three ways, based on how the supported group wants to be supported.
    1. If the group knows how to do infosec, leave them alone.  Let them do their thing.
    2. If the group wants to know how to do infosec, teach them how to do it well.
    3. If the group doesn't want to deal with infosec, offer to do it for them.  Obviously they will still need to provide the resources, authority, etc., necessary for the CoE to provide this service.

It is important that the CoE not see itself as specialists proselytizing to the unwashed heathens.  The CoE serves others; it doesn't rule them and it isn't better than them.  To that end, the CoE should strive to provide its services when requested, offering them unsolicited only when absolutely necessary.  Also, the CoE need only charge for the third support option (3.3); the CoE should be internally funded to provide the other services.

One way to start developing this CoE is for the group to begin solving problems that are likely to arise before the CoE is even engaged.  If you look forward and develop solutions before the problems arise, then when groups come to you with questions, you will be able to serve them by solving their problems.  This will bring them back to you and help you establish your CoE of infosec service.  And by all means, don't be shy about your successes.  Make sure others know you are serving the organization and solving others' problems.  Soon they will be coming to you for infosec help, and you can use the opportunity to establish the CoE.

P.S.
The approach doesn't just work for information security.  It can work for any cross-cutting service: Data Analytics, Quality Assurance, etc.  Applied this way, the requirements become services rather than burdens.

Sunday, July 6, 2014

You the Outlier - Why Privacy/Anonymity is Important in a Big-Data World

In my previous piece, I argued that privacy is dead and that multi-persona anonymity needs to take its place.  That argument rests on a critical premise, though: that we need privacy (or anonymity) at all.  I hear many poor arguments in support of privacy.  Let's look at those first and then consider a better reason.

Being Held Accountable for Your Actions
Let's address all the poor reasons we hear.  The obvious argument against privacy is, "Why do you need privacy if you have nothing to hide?"  There are multiple lukewarm responses:
  1. "BECAUSE" - The concept that it is something you should 'just have'.
  2. What if the acceptability of my actions changes over time, or 'those in charge' decide my actions are a problem when I do not?
  3. No one is perfect.  Should that be held against us?  In perpetuity?
  4. What about the insurance company that will raise our rates when it finds out what we've done?
These are all weak responses for one reason: each assumes someone shouldn't be held accountable for their actions.  While I think forgiveness is at the foundation of humanity, I don't think escaping accountability for our own actions can be held up as the reason we need privacy.

Being Held Accountable for Others' Actions
In a big-data world, we are not necessarily judged by our actions, but by the profiles we match.  This is nothing new.  But while in the past an employer might have required employees to sign a release allowing it to inspect their driving records, and then fired those who received a ticket or a DUI, with massive data available the practice can be taken to an unprecedented level.

Instead of inspecting a driving record, an employer may install monitoring devices in personal vehicles.  The monitor carries a database of speed limits.  If you go more than 5 miles per hour over the limit, you receive a warning to slow down.  If you don't slow down within 6 seconds, your violation is reported, which could lead to your firing.
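
As a minimal sketch of how such a monitor might apply its rule: the 5 mph threshold and 6 second window come from the example above, while the telemetry format (a list of timestamped speed readings) and everything else are assumptions for illustration.

    GRACE_SECONDS = 6   # grace period from the example above
    THRESHOLD_MPH = 5   # allowed margin over the posted limit

    def evaluate(samples, speed_limit):
        """samples: list of (timestamp_seconds, speed_mph) readings."""
        warned_at = None
        for t, speed in samples:
            if speed > speed_limit + THRESHOLD_MPH:
                if warned_at is None:
                    warned_at = t            # first violation: warn the driver
                elif t - warned_at >= GRACE_SECONDS:
                    return "reported"        # still speeding after the grace period
            else:
                warned_at = None             # driver slowed down; reset
        return "warned" if warned_at is not None else "ok"

    # e.g. evaluate([(0, 38), (3, 39), (7, 40)], speed_limit=30) -> "reported"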

The first case is a crude model with very bold, red lines not to cross.  The second is a much more subtle model with ambiguous grey lines, one fed with every speed you have ever driven.  It says that those who regularly drive more than 5 miles per hour over the speed limit are a liability.  But where did that model come from?  How was it validated?  Was it validated at all?

The reason privacy (anonymity) is important is that every model has a large number of outliers, and there is a good chance you are an outlier in some model.

In a big-data world, we are judged against models.  "If a person exhibits A, B, and C, then they must be D."  Being D may mean being unemployable.  It may mean being paid less or paying more.  It may mean being excluded, untrusted, or any number of other things.  However, the model will have a number of outliers.  No one cares about them because, by definition, they are not the norm.  Still, on the flip side, everyone is probably an outlier in some model.  And being judged by a model to which you are an outlier is inherently being held accountable for others' actions.

In this case, you have done nothing wrong.  You will not do what the model accuses you of doing.  But you fit some model which you will not get to challenge and which may never have been critically assessed in the first place.
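
As a toy illustration of that "exhibits A, B, and C, therefore D" judgment: the traits, labels, and data below are entirely made up, not any real scoring system.

    def predict_liability(person):
        # The model: anyone matching all three profile traits is labeled a liability ("D").
        return person["speeds_often"] and person["young"] and person["urban_commute"]

    # An outlier: matches every input the model looks at,
    # yet has never actually had an accident or a claim.
    outlier = {"speeds_often": True, "young": True, "urban_commute": True,
               "accidents": 0, "claims": 0}

    print(predict_liability(outlier))  # True -- judged by the profile, not by the record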

This critique isn't meant to detract from the usefulness of models.  Models can co-exist with privacy and anonymity.  Models trained on real data still offer significant value in many areas, including trend analysis and decision analysis.

But we want to make sure models don't become the pre-cogs of Minority Report; otherwise, the world of Gattaca could easily become our future.  Privacy is not about escaping accountability for the things you did.  It's about not being held accountable for the things you didn't do.


Wednesday, July 2, 2014

Easy Security Acquisition

Intro
Now that the visibility of information security has grown, information security programs face a new problem: a bonanza of investments that can be made to 'enhance' a security program.  With so much money in the pool, there are many vendors doing all they can to encourage the purchase of their products.  So how is a company to choose its investments?

The Best Way isn't Always Best
Most people would immediately reach for a risk-based system.  The logic goes, "If I choose the projects that mitigate the most risk, I will make the greatest improvement in my security posture."  While this is true, there is a subtle technicality hidden in that statement.

The statement above requires an extremely mature risk program:
  1. It must be free of bias, because any bias will be reflected in the resulting acquisitions.
  2. It must include all areas of mitigation (identify, protect, detect, respond, recover) and all methods (Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities, and Policy).
  3. It must be tailored to the threats the organization faces as well as the vulnerable conditions that exist within the organization.
  4. It must consider the entire attack path, including the alternate branches an attack might take (coming in the window when the door is locked).
  5. It must capture all of these characteristics in a continuous manner across the organization.
While it is possible to have such a risk program, very few organizations do.

The Next Best Way
In lieu of the perfect risk program, the next best approach is Operations-Based Acquisition.  In this scenario, we assume our goal is to prevent attacks and that our security operations team is our last line of defense in preventing them.

The first thing we must do is ensure our security operations team is competent.  This means that if the investments haven't been made already, they will need to be made to build the team, develop its procedures, and train it.

However, once the team is established, it will be able to identify the opportunities for investment.  Instead of measuring investments by the decrease in risk, we measure them by the increase in the security operations team's efficiency.

We can look to the security operations team to inform this.  When they notice they are having to deal with attacks from a segment of the network that could be firewalled off, we segment the network and become more efficient.  When they notice they don't find out about attacks until the attacks are widespread, due to lack of visibility, we invest in IDSs and SIEMs.  When we notice human error eating up the security operations team's time, we increase training.  And the beauty is that the security operations team's time is measurable, so the return on the investment can be captured!
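
For instance, the return can be estimated directly from the hours the operations team gets back.  A back-of-the-envelope sketch follows; every number in it is a made-up placeholder, and only the hours saved would come from actually measuring the team's time.

    hourly_cost      = 75       # fully loaded cost per analyst-hour (assumed)
    hours_saved_week = 10       # ops-team hours the investment frees up per week (measured)
    investment_cost  = 40000    # purchase + deployment cost of the tool (assumed)

    annual_savings = hours_saved_week * 52 * hourly_cost
    payback_months = investment_cost / (annual_savings / 12)

    print(f"Annual time savings: ${annual_savings:,.0f}")
    print(f"Payback period: {payback_months:.1f} months")

The same arithmetic can be run for each candidate investment, giving a needs-driven ranking without requiring the perfect risk program described above.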

Conclusion
Is it perfect?  No.  Is it quick, easy, and useful?  Yes!  And it is certainly better than simply buying the newest tool based on the latest report of evil hackers!  It is measurable and it is needs-driven.  All in all, a good approach.