Thursday, January 30, 2014

Infosec, It's About What You Think You Know

Both the current core failure of infosec defense and its ultimate success are fundamentally tied to what you think you know. Let me explain.

First and foremost: We don't lose because of vulnerabilities. We lose because we believe we are in one infosec state while the threat realizes we are in a different, more vulnerable state. That means it's not whether the vulnerable condition exists that matters; it's whether the threat actor knows it does and we think it doesn't.

Second, if losing is about believing you are in one security state when you are actually in another, winning is about the threat actor believing they are in one infosec state when they are actually in another. We make this happen in the one place we control: our network.

Currently, threat actors can operate with impunity because our network is operating the way the threat actor believes it's operating. To tip the balance of power in infosec conflict, we need the network to be operating differently than the threat actor believes it's operating. To do that, we need to do a few things:

  1. We need to treat the SIEM like a big data warehouse. Our SIEM should be a network telemetry data warehouse. It needs to receive as much alert data as possible through integration layers capable of dealing with the specific data types being ingested. It needs to be able to pull in additional associated data. In the data marts, it needs to detect not just malice, but any activity outside of the normal network profile. (This leads to a separate question of how to profile a network, which I'll leave for another post.)
  2. The network telemetry data warehouse needs to be able to correlate detected anomalies with other data to piece together the picture of what is happening. Understanding the relationships between observations is critical to understanding the ground truth of the conflict.
  3. Most importantly, it needs to have a feedback loop: changing how the network operates based on what the SIEM believes is anomalous or malicious. When potential malice is detected, it needs to take a different route through the network. Malice needs to have different rules applied to its traffic. It needs to have different tools applied to delaying the threat, gathering intelligence, and responding to the threat to prevent negative impacts. This can all be accomplished in an automated feedback loop so that the network is pitting itself against any anomalous behavior (a minimal sketch of such a loop follows this list).
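To make the third point concrete, here is a minimal Python sketch of what such a feedback loop could look like. It assumes a REST-style SDN controller; the controller URL, endpoint paths, anomaly score field, threshold, and "quarantine" actions are all hypothetical placeholders for illustration, not any particular product's API.

    import requests  # assumes a REST-style SDN controller; endpoints below are hypothetical

    SDN_CONTROLLER = "https://sdn-controller.example.internal/api"  # placeholder
    ANOMALY_THRESHOLD = 0.8  # hypothetical score above which traffic gets rerouted

    def handle_telemetry_event(event):
        """React to a scored anomaly emitted by the telemetry data warehouse."""
        score = event.get("anomaly_score", 0.0)
        src = event["source_ip"]

        if score < ANOMALY_THRESHOLD:
            return  # within the normal network profile; leave routing alone

        # Reroute the suspect host onto an instrumented path: extra logging,
        # throttled rules, and decoy services to delay and observe the threat.
        requests.post(
            f"{SDN_CONTROLLER}/flows",
            json={
                "match": {"src_ip": src},
                "actions": ["mirror:ids-tap", "set-path:quarantine-vlan"],
            },
            timeout=5,
        )

        # Tell the warehouse what changed so analysts see the new ground truth.
        requests.post(
            f"{SDN_CONTROLLER}/notify",
            json={"event": "reroute", "src_ip": src, "score": score},
            timeout=5,
        )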

The network of the future is not a static battlefield, but a living, pulsating thing. The network uses the massive amount of telemetry data at its disposal and the broad flexibility provided by Software Defined Networking and virtualization to respond to perceived threats. It puts threat actors at the same type of disadvantage that defenders currently face. And, ultimately, the advantage in infosec conflict will tip in defense's favor because threat actors will be unable to trust that the network environment they believe they are operating in is the true network state.

Saturday, January 18, 2014

Gabe's Three Assumptions of Risk Assessment. i.e. the Chain of Trust


Over the years of discussing vulnerable conditions and risks, I've come up with three assumptions which help ground the risk assessment in reality:
  1. If the threat actor has superuser privilege, they can realize the risk.  (This one has some caveats; however, exceptions to this rule are so rarely implemented that they are unlikely to matter in practice.)
    1. This might not apply if you have immutable files
    2. Off-host logging may detect the threat actor
  2. If the threat actor has physical access, they can deny availability.
  3. If the threat actor has unlimited physical access, they can gain superuser.  (see Assumption 1)
This could probably be summed up with a single concept: the Chain of Trust.  In this case we are saying that the system's security rests on the superuser's security, and that both the system and the superuser rest on physical security.  If you haven't secured this chain of trust, any security established further along the chain is moot.

I believe Dan Kaminsky's take is, "If you have root, you can get root" (paraphrased).  So when doing risk assessments, if at any point you assume the threat actor has compromised something farther back in the Chain of Trust, the rest of the line of reasoning is suspect.
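Purely as an illustration (the layer names are made up), the rule can be encoded as an ordered list: if a scenario assumes a link at or before the layer your mitigation lives at is already broken, the mitigation is moot.

    # Illustrative only: encode the Chain of Trust as an ordered list and flag
    # mitigations that assume the threat already holds an earlier (or equal) link.
    CHAIN_OF_TRUST = ["physical", "superuser", "application"]

    def mitigation_is_moot(assumed_compromised: str, mitigation_layer: str) -> bool:
        """True if the scenario assumes a link at or before the mitigation's layer is broken."""
        return CHAIN_OF_TRUST.index(assumed_compromised) <= CHAIN_OF_TRUST.index(mitigation_layer)

    # "Encrypt data in memory against a threat that can already read all memory"
    # assumes superuser is compromised, so an application-layer mitigation is moot.
    print(mitigation_is_moot("superuser", "application"))  # True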

As an example:  "If the threat actor pushes the power button on the computer, they could turn it off and shut everything down.  Therefore we should lock the power button."  This assumes the threat has physical access, in which case they could pull the cables out of the computer, hit the emergency power off, or do any number of other things.

Alternately: "The bad guy can run code that can read all memory, so let's encrypt the data in memory."  This implies the threat already has superuser privileges and so could simply prevent the encryption, read the data prior to encryption, or copy the encryption key and decrypt.

So whether you are assessing risk or planning mitigations, remember the Assumptions and remember the Chain of Trust.

P.S. The Chain of Trust was not my idea but one that I got from Travis Howerton at Innovalysis.  It just fit well in explaining the three assumptions.

Thursday, January 16, 2014

Infosec Strategy in 1

Target, Neiman Marcus, Microsoft, and many, many more...

Corporate America has a huge security problem.  And it's not compromises.  It's a lack of strategic vision in cyber security.

With a never-ending litany of massive breaches, organizations are spending so much time plugging fingers in the dike that no one is stepping back to look at the whole levee.  Websites being compromised?  Buy WAFs.  Point of sale being compromised?  Put more tools on the PCI LAN.  China hacking people?  Get a cyber intelligence feed.  PHI/PII being leaked to pastebin?  Get DLP.  No one stops to ask the question, "Do these fit together?"  And when you don't, your infosec defense looks like this:
Friday’s Friendly Funny by Dave Blazek is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License.
Before thinking about point solutions, an organization must come up with a strategy.  I would suggest a Strategy Statement such as:
Delay threat actors from realizing risks until they give up or are detected and responded to.  Respond effectively.  Degrade gracefully and remediate effectively when threat actors realize risks.
This single statement sums up an entire infosec program, laying out specific steps that can be used to plan and measure the program.  Yours doesn't need to be the same, but it needs to be a clear and concise statement you can make measurable progress against.  This one lays out base truths:

  • That the program will be operations driven.
  • That risk is a fundamental element of the security program (You can read some of my views on risk here, here, here, and here.)
  • That the fundamental measurement of effectiveness is Delay vs Detection & Response.
  • That the organization should expect to operate in and recover from a compromised environment.
It also establishes the stages of incident life-cycle that drive the strategy:
  1. Delay
  2. Detect
  3. Respond
  4. Remediate
Calling the first step Delay is meant to be a bit controversial.  I think normally it would be 'deny', 'protect', 'deter', or something else.  However, as a community, we need to get out of the idea that if we just build it secure enough, the threat will go away and never come back.  Obviously, not all threats will stick with their attack; however, we need to plan our strategy for the ones that do, and in those cases all we are doing is delaying.

This is a statement we can easily track progress against in one, easy to read, table:
Infosec Defense Execution Strategy

You can download the Infosec Defense Execution Strategy spreadsheet, including an example.  We also add reporting and after-action review to the stages.  The stages can easily be modified to meet an organization's process.  The Defensive Execution Strategy also breaks each step out into discrete levels of completion:

  1. Define (Document what you want to do)
  2. Build (Create anything you need to do it)
  3. Train (Practice doing it)
  4. Grade (Measure how well you do it)
  5. (There is an implicit fifth step: if you find any deficiencies in your grading, you feed the measurement back into improving the step where the deficiency can be rectified.)
Within the levels of completion we define two specific things: Who and What.  Without 'Who', it is unclear who will actually get the work done.  If an organization doesn't know who will get the work done, you can almost guarantee no one will do it.  A good model to use is RACI: Responsible, Accountable, Consulted, Informed.

'What' is also critical to tracking the strategy.  There need to be deliverables that clearly show a step has been performed.  Managing based on deliverables significantly simplifies tracking of progress.  In the same vein, you need to know what products need to exist prior to starting a step.  If you don't, you have no way of measuring whether you are ready to begin.  Ultimately, the topic of management by deliverables could fill a book.
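As a rough sketch of how such a stage-by-level table might be tracked programmatically (the stage and level names come from the strategy above; the Cell fields, the "SOC lead" owner, and the example deliverable are hypothetical), consider:

    from dataclasses import dataclass

    STAGES = ["Delay", "Detect", "Respond", "Remediate"]
    LEVELS = ["Define", "Build", "Train", "Grade"]

    @dataclass
    class Cell:
        """One stage/level intersection: who owns it (RACI) and what proves it is done."""
        responsible: str = "unassigned"
        deliverable: str = "none defined"
        complete: bool = False

    # Build the empty execution-strategy matrix, then fill in what you know.
    matrix = {stage: {level: Cell() for level in LEVELS} for stage in STAGES}
    matrix["Detect"]["Define"] = Cell(
        responsible="SOC lead", deliverable="Detection use-case catalog", complete=True
    )

    # Progress rolls up per stage: how many levels have a completed deliverable.
    for stage in STAGES:
        done = sum(cell.complete for cell in matrix[stage].values())
        print(f"{stage}: {done}/{len(LEVELS)} levels complete")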

From this one table of levels of completion above, all information security projects can be planned.  This also helps keep the organization focused on more than just the 'build' step.  

And each stage can be decomposed.  Delay may be broken down into:
  1. Preventing incidents
  2. Operating in a compromised environment
Detection may be broken down into:
  1. Internal awareness
  2. External intelligence
  3. Prioritizing potential malice to investigate
  4. Facilitating correlation of prioritized information
(As an aside, #3 and #4 above are a fundamentally new way of looking at DFIR that is not yet widely adopted and deserves its own post.)

All projects and all security requirements should be traceable to the Strategy Statement through the Infosec Defense Execution Strategy and the various levels of decomposition.  With this as a starting point, organizations can see how all of their projects and requirements fit together, identify gaps, and form a unified defense that looks less like the first picture and more like this:
Image by Hao Wei, licensed under the Creative Commons Attribution 2.0 Generic license.