Tuesday, November 15, 2011

The Three Flavors of Security

A boss once told me, “In a ham and egg breakfast, the chicken is involved, but the pig’s committed”.
With security, there are three separate groups which have fundamentally different views on how to provide security.  Two are involved, one’s committed. 
We can learn a lot by considering how each views security and how integrating all approaches as opposed to focusing on a single one provides better security.
First, there are the builders: the engineers, designers, coders, testers, and integrators.  They approach security as something you build.  They expect the attacker to know everything about the system minus some minimal authentication information.  They fix code, secure configurations and repeatedly test to make sure everything is perfectly secured.  They are involved.
Second, there are the intelligence and counter-intelligence professionals.  They see security as secrecy: to secure something, hide it.  Intel documents all the places where people didn’t hide things and were consequently compromised. 
Counter-intel therefore believes nothing can be perfectly secured, so it is best to do everything in your power to prevent the attacker from gaining information.  The engineers abhor this approach as “security through obscurity”.  Intel and counter-intel are involved.
Third, there are the operators: they are committed.  Operations receives the output of engineering, intel, and counter-intel and has to make it work.  Security is not their job; it allows their job to happen. 
As such, they are likely to ignore any security that impedes operations.  They know their systems are imperfect.  They know they can’t prevent information from getting out there. 
Instead, they strive not to be perfect in either the intel or engineering sense, but simply to be better than the attacker.  They solve problems procedurally and will substitute labor for technical solutions (e.g. incident handling instead of an IPS). 
Any sound security solution needs to have a little of each.  Because operations is committed, all security needs to support them.  However, not all problems are solvable procedurally or with human capital. 
Engineering is required to provide operations the tools they need, as well as systems built to slow down the attacker and to fail gracefully when compromised.  Intel is needed to provide operations information to help them orient and act. 
Counter-intel is needed to help operations slow the loss of information.  Only when all areas work in concert toward the common operational goal is security realized.

Tuesday, November 1, 2011

Balkanizing the Internet

In light of the UK cyber security summit, I thought it might be appropriate to discuss the balkanization of the internet.

This is not a story about where the internet should go, or could go, but where it will go.  Market forces will simply guide us to this end.  Honestly, that's probably OK.

The internet is really not one contiguous environment.  Instead, due to the nature of service contracts and peering agreements, it's a mesh of interconnected information systems.  These information systems are already undergoing a balkanization as we speak.

Companies require business only be conducted within their network.  ISPs require strict agreements as well as providing some minimal security protections.  VPN services provide a completely open connection in which you provide your own security.

Some governments attempt to completely control the content of their information systems.  The FBI even suggested an alternate internet for critical systems.

In the end, what is important is that we explicitly recognize what is going on.  Through multiple technologies (remote desktops/shells, VPNs, hosting services, etc), we have the ability to choose an information system or systems to exist within.

We may choose to conduct our day-to-day personal network use within our home information system, buried within our ISP information system, buried within our country information system.  We may choose to host a website within an information system specifically designed to protect web servers.

We may conduct our business duties through a VPN to our corporate network.  And we may have one system residing on a VPN to an uncontrolled provider who does not restrict our actions but offers us no security.

At the conference, hopefully those leaders in attendance understand that they are making agreements about how their country or corporate information systems will interact with each other. 
However, they must realize that there will be information systems which will not agree to their rules, (and which they can then choose to defend themselves against).  They must also understand that people may not choose to agree to their terms for existing within their country or provider information system and instead have the choice to exist in another.

That's not to say that people won't have to pay for their physical connection, but in most places there are multiple options (cable, DSL, dial-up, satellite, cellular, RF/WiMAX/wireless, etc).

And even if you are restricted to a physical provider, no group has ever been able to block people's connectivity.  The ability of malware to circumvent even the best companies' security, or people to circumvent the great firewall of China, bears this out.

There is great potential for companies and countries to offer information systems which provide varying services (security, QoS, etc) in return for the member being burdened in various ways (payment, use agreements, etc).

If we ignore the inherent balkanization as well as people's freedom of choice, the internet will grow, but without the clarity which could provide people, companies, utilities, and governments the service and security they need at burdens they are willing to accept.

Thursday, October 27, 2011

Crayons and Firewalls: How to Plan Your Security and Meet Your Compliance

As infosec professionals, we tend to talk about a few things:
  • Attacking Stuff
  • Complaints about compliance
  • Complaints about how vulnerable stuff is and how no one wants to fix it
  • Complaints about complaining
I’m going to go a different direction with this blog.  I’m going to suggest a general approach to securing an Information System (IS) that can also help you meet your compliance responsibilities. 
In fact, I’m hoping that if you follow this approach, you’ll actually start to appreciate some of your compliance requirements.
All it's going to take are three steps:
  • Map Your IS
  • Document Your Threats and Targets
  • Place Your Security Controls

1.  Mapping Your IS
We’re going to do this grade-school style.  Grab some crayons or colored pencils and some butcher paper!
Now draw your IS, and start big!  If you have a big system that has multiple enclaves, start by drawing each enclave as a cloud and drawing where they are interconnected. 
You can then iterate into each enclave.  Hopefully you can make it down to the host level in each enclave.  However, if it's too big for that, don't worry about it.  We'll handle it in the next step.  No matter what, make sure to capture all inter-enclave and external connections.
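If the map outgrows paper, the same structure can live in a tiny script.  This is a minimal sketch with made-up enclave and connection names; the point is that every inter-enclave and external connection is captured explicitly, and simple questions ("what can the internet reach?") become answerable.

```python
# Hypothetical enclaves and connections -- substitute your own map.
enclaves = {
    "dmz": {"hosts": ["web01", "mail01"]},
    "corp": {"hosts": ["fileserver", "workstations"]},
    "scada": {"hosts": ["plc-gateway"]},
}

# Every inter-enclave and external connection, captured explicitly.
connections = [
    ("internet", "dmz"),
    ("dmz", "corp"),
    ("corp", "scada"),
]

def reachable(start, graph):
    """Return every enclave reachable from `start` over the connections."""
    edges = {}
    for a, b in graph:
        edges.setdefault(a, set()).add(b)
        edges.setdefault(b, set()).add(a)  # connections are bidirectional
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(edges.get(node, ()))
    return seen

print(reachable("internet", connections))
# In this example every enclave is reachable from the internet --
# exactly the kind of fact the map should make obvious.
```
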
2. Document Your Threats and Targets
Now that you have a map of the battlefield (because, for your purposes, that's what your IS is), it's time to place the bad guys (threats) and the things you want to defend (targets).  Your targets should be the equipment required for your business to accomplish its mission. 
Take those colored pencils and draw the targets right onto the map.  You may have multiple targets.  You may even need to prioritize them based on how important they are to your business's mission. 
Now draw your threats onto the map.  That includes both your insider threats as well as your external threats.  If you're not sure who or what your threats are, Google who's attacking people like you.  Figure out who wants what you have or to stop your business.
Be judicious as you plot threats and targets.  You can't protect everything from everything.  As a security professional you should already have a feeling for what your real threats and real critical targets are.  Draw the line and don't plot the threats and targets that aren't worth defending against.
3.  Place Your Security Controls
Now, like a general commanding his army, draw your defenses on the map.  These should fall into three overarching categories:
  1. Defenses - Things that inhibit attackers (firewalls, IPS, etc)
  2. Sensors - Things that detect attackers (includes some of your defenses)
  3. Response - Things that allow you to respond to attack (backup circuits, re-initializing VMs, blackholing traffic, etc)
As you place your defenses, keep in mind you are trying to have your DEFENSES delay a THREAT from reaching your TARGETS until your SENSORS detect the attack and your incident response team RESPONDS to the attack.
Now re-read the above sentence.  It is fundamental to information security (as well as most physical security).
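That sentence can even be sketched as arithmetic.  The numbers below are purely illustrative estimates you would make per threat/target pair:

```python
# Back-of-the-envelope model of the fundamental sentence: defenses buy
# DELAY, sensors cost DETECTION time, responders need RESPONSE time.
# All hour values are hypothetical -- plug in your own estimates.

def target_holds(delay_hours, detect_hours, respond_hours):
    """The target survives if defenses delay the threat longer than it
    takes to detect the attack plus respond to it."""
    return delay_hours > detect_hours + respond_hours

# Firewall + segmentation buy ~48h; IDS alerts in ~4h; team responds in ~8h.
print(target_holds(delay_hours=48, detect_hours=4, respond_hours=8))  # True
# A flat network buys ~2h of delay -- sensors and responders can't keep up.
print(target_holds(delay_hours=2, detect_hours=4, respond_hours=8))   # False
```
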
At its heart, this is an operational question, because what you choose depends significantly on how you plan to respond.  This is also an excellent opportunity to capture the policies required to execute your incident handling.  (It is no use identifying a firewall as a response tool if you lack the policies to change firewall rules in near-real time.)
If you feel a bit lost about what tools you have in your (defenses, sensors, response) toolbox, you're in luck!  The good news is the toolbox is already sitting on your hard drive.  The bad news is, it's your compliance controls.  NO NO NO.  WAIT!  DON'T LEAVE!  You're used to building for compliance and eking out some security.  I want you to build security and, where it makes sense, use compliance to do so.
Consider password requirements.  They can be an effective defense and sensor on your user interfaces.  They very likely also meet some of your compliance requirements. 
I've found that when using this method the defenses, sensors, and responses I picked were almost always one of my required controls.
Now, that said, there will be a set of compliance requirements that simply don't buy you any security.  That's ok.  Not every system is the same.  Simply implement those controls to pass your compliance testing.  Your auditors will appreciate that your system is both secure AND compliant and that the two even overlap!
Is this perfect?  I hope not.  Instead, please use this, find ways it can be improved, and share them with the security world.  Hopefully we'll be able to add:
5. Plan Network Defense
to the list of things infosec security professionals regularly talk about.

Wednesday, October 12, 2011

Security Without Patches

Let’s discuss something a bit awkward:  Not Patching.
As security professionals, our first assessment of a security problem is that it is due to a mistake in the code, the config, the RFC, or the user.  In any case, the code or config needs to be fixed to correct the mistake (because we certainly won't fix the user).
However, as auditors, we assess risks.  Every time we recommend a risk be mitigated by patching we are recommending our customers fight the security battle on the bad guy’s turf.
As Dusko Pavlovic points out in Gaming security by obscurity, the Fortification Principle implies that defense is at an inherent disadvantage when trying to use patches as our mitigation.
So instead, I propose something profound:  Secure your network without patching.  I don’t mean to never patch, but plan to only apply security patches, (and possibly configuration changes), as part of a regular deployment cycle.
In my last blog, I suggested establishing a context for the risk using a narrative.  Using the steps outlined in the blog, a narrative of the likelihood of a typical SQLi might look something like:
  • Attacker wants to embarrass your company
  • Attacker downloads backtrack and watches a Youtube video to learn how to conduct SQLi
  • Attacker runs SQLi scanner against company website and discovers SQLi
  • Attacker enumerates tables
  • Attacker dumps user table
  • Attacker leaves website and posts contents to Pastebin
The consequence is the embarrassment of your company, risk to your customers who reuse passwords, required immediate mitigation, etc.
Now I want YOU to come up with a solution.  One rule:  YOU CANNOT PATCH THE SQLi.  The narrative provides a powerful tool for this. 
If every step is required for the attack to happen, then disrupting any step mitigates the risk.  Alternately, it may be worth letting the attack happen if the consequences can be mitigated.
Did you do it?  Good!  Post your favorite mitigations in the comments below.  Having trouble?  Consider some of these ideas:
  • Why does the attacker want to embarrass your company?  Can you prevent that?  Can you make responsible disclosure more appealing?
  • You have an idea how the attacker is training.  Train internal staff the same way and allow them to test your systems.  This way you find the issues before the attacker.  You may find it early enough that you can use your normal development cycle to patch before you’re attacked.
  • As Dusko points out, when your attacker interacts with you, you have a valuable chance to both collect information as well as pass them the information you want them to have.
    • Can you prevent scanning for SQLi?
    • Can you inject fake SQLi returns to their tool?
    • Can you redirect the SQLi?
    • Can you detect the SQLi attempt and block or degrade your level of service to the user conducting the scan?
    • Can you detect the success of the SQLi and use it as an alert to take action yourself?
    • Can you make the attacker mistrust the data he receives back? (What if many of the entries in the user table look like fake accounts?  Will the attacker still be confident enough to publish them?)
  • Again, look at the options for step three.  The attacker is interacting with you.  Use it to your advantage.
  • Writing your user table to port 80 should not be normal. Use that knowledge to your advantage.
    • How can you detect it?
    • How can you prevent it?
    • How can you provide the attacker fake information?
    • Can you make the attacker think they’ve received fake information?
    • What about including every single account created by spam bots in the attacker’s dump of the table?
  • Most attackers believe that once they’ve completed the attack, they’re home free.  Law Enforcement doesn’t.  Just because you’ve been hacked doesn’t mean you don’t still have a chance to mitigate the attack.  You potentially collected a lot of information about the person as they accessed your systems.  If you can make them aware you know who they are and the consequences of posting the information, they may think twice about doing it.  Alternately, you may be able to provide them a positive incentive not to release it.
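As a concrete (and deliberately crude) sketch of the detect-and-degrade idea above: flag requests matching common SQLi patterns and tarpit a client once it trips a threshold.  The patterns, threshold, and IPs are illustrative assumptions, not a substitute for a real WAF.

```python
import re

# Illustrative SQLi probe patterns -- a real WAF uses far more than this.
SQLI_PATTERNS = re.compile(r"('|--|\bunion\b|\bselect\b|\bor\s+1=1\b)", re.I)

suspicious_hits = {}  # client IP -> count of suspicious requests seen

def handle_request(client_ip, query_string):
    """Serve normally until a client looks like a scanner, then tarpit it."""
    if SQLI_PATTERNS.search(query_string):
        suspicious_hits[client_ip] = suspicious_hits.get(client_ip, 0) + 1
    if suspicious_hits.get(client_ip, 0) >= 3:
        return "tarpit"  # degrade service for this client and alert handlers
    return "serve"

print(handle_request("203.0.113.9", "id=5"))                      # serve
print(handle_request("203.0.113.9", "id=5' OR 1=1 --"))           # serve (1st hit)
print(handle_request("203.0.113.9", "id=5' UNION SELECT name"))   # serve (2nd hit)
print(handle_request("203.0.113.9", "id=5'--"))                   # tarpit (3rd hit)
```
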
As for mitigating the consequences to your company, it’s worth considering what you lost.
  • What if the usernames were encrypted along with the passwords?  Would the attacker then have to decrypt them to make use of them?
  • What if the username/password were simply hashed together in the database instead of storing the clear-text username?
  • What if there were so many garbage records that anyone downloading the table could see that filtering out the legitimate users wouldn’t be worth the trouble?
  • What about seeding your user table with every name in the Atlanta, GA phone book followed by @nsa.gov?  Will the attacker trust the user table then?
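The hash-the-username-with-the-password idea can be sketched in a few lines.  The helper names and parameters below are illustrative assumptions; a real deployment should use a dedicated password KDF (scrypt, bcrypt, Argon2) with carefully chosen parameters.

```python
import hashlib
import hmac
import os

# Sketch: store only a salted hash of username+password, never the
# clear-text username. A dumped table then reveals neither who the users
# are nor their passwords. PBKDF2 parameters here are illustrative.

def make_record(username, password):
    """Create the (salt, digest) pair that gets stored in the database."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", f"{username}:{password}".encode(), salt, 100_000)
    return salt, digest

def check_login(username, password, salt, digest):
    """Recompute the hash from the submitted credentials and compare."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", f"{username}:{password}".encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("alice", "hunter2")
print(check_login("alice", "hunter2", salt, digest))    # True
print(check_login("mallory", "hunter2", salt, digest))  # False
```

The tradeoff is that you can no longer look up an account by username alone, so this only fits tables used purely for authentication.
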
Some of the above ideas are good, some aren’t.  The ideas aren’t what matter.  What matters is that you took the time to look at the ENTIRE attack narrative, chose multiple mitigations (corporate policy, personnel training, operational mitigations, technical solutions, or patching existing technology), and weighed their pros and cons.
By having multiple options, you can choose the one that costs you the least and costs the attacker the most.
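The weakest-link logic of the narrative (every step is required, so disrupting any step mitigates the risk) can be sketched numerically.  The step names and probabilities below are invented for illustration:

```python
# If every step is required, the attack is a chain: the likelihood of the
# whole attack is the product of the steps, and weakening any one link
# drops it. All probabilities here are illustrative, not measured.
steps = {
    "motive (wants to embarrass you)": 0.9,
    "means (learns SQLi from a video)": 0.9,
    "finds SQLi on your site": 0.5,
    "enumerates tables": 0.8,
    "dumps user table": 0.8,
    "posts contents to Pastebin": 0.9,
}

def chain_likelihood(steps):
    p = 1.0
    for likelihood in steps.values():
        p *= likelihood
    return p

print(f"baseline: {chain_likelihood(steps):.2f}")
# Mitigate one step -- say a WAF halves the chance the scanner finds SQLi.
steps["finds SQLi on your site"] = 0.25
print(f"with one mitigation: {chain_likelihood(steps):.2f}")
```
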

(Cross-posted at https://www.infosecisland.com/blogview/17304-Security-Without-Patches.html)

Thursday, October 6, 2011

Risk Management: Context is the Key

I feel it’s time for me to comment on risk management a bit.  I have a good amount of history with security risk management, most of it done poorly, (much of it done poorly by me). 
Ultimately, a Christmas-to-New-Year’s-Eve week of brainstorming led to an answer; but let’s talk about the problem first.
There is a core problem in risk management stemming from the history of those who implement it.  They are either technical people or traditional risk/management people. 
Technical people tend towards the “every security risk is important enough to fix” mantra, focusing on technical details and over-rating risks. 
Management/traditional risk people are used to much more tolerant definitions of likelihood (70% chance of happening is a 3 on the 5x5?) and impact quantifiable in dollars.
So what do we do?
The answer is context.  And context is captured through narrative.  For our purposes, a narrative describes how a vulnerability or vulnerabilities would be used to realize a risk and what the impact of the realization of the risk would be.
The narrative will still capture likelihood, but instead of a simple percentage, we’ll establish its context.  We want to establish the steps required for our threat to realize the risk. 
The steps, at the minimum, should include:
  • Means
  • Motive
  • Opportunity
  • Execution
  • Egress
We then give each step a 1-5 rating on how likely it is.  The words used for the numbers don’t matter.  It’s 1-5 whether it’s “can’t happen” to “certainty” or “really really unlikely” to “really really likely”. At the end, we’ll see which of the attacker’s steps are the ones preventing them from realizing our security risk.
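As a sketch, the scored narrative is just a small table, and the step currently gating the attacker is the lowest-rated one.  The ratings below are illustrative:

```python
# A scored narrative: each required step gets a 1-5 likelihood rating.
# The ratings here are invented for illustration.
narrative = {
    "means": 4,        # tools are freely available
    "motive": 5,       # we are a named target
    "opportunity": 2,  # the service is only reachable over VPN
    "execution": 3,
    "egress": 4,
}

# The step preventing the attacker from realizing the risk is simply
# the lowest-rated one -- that is where your mitigations are working.
gating_step = min(narrative, key=narrative.get)
print(f"likelihood is gated by: {gating_step} ({narrative[gating_step]}/5)")
```
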
The narrative also needs to capture the impact of realizing the risk.  This requires first a frank discussion of what would really happen to the company’s business should the risk be realized. 
The discussion should consider how compromising the company’s confidentiality, integrity, and availability would affect business operations.  All the discussion should be in the context of the company’s core functions rather than a single host. 
A remote root is not necessarily a high risk unless compromise of that system affects core business functions.
After the context is established, we can make a subjective assessment of the risk’s location on a normal 5x5 chart.  This gives us the ability to put the risk where we think it goes while forcing us to be able to justify our rating through the narrative.
By documenting the narrative, a security analyst establishes a single context of the risk for everyone.  Based on that context, all risk assessments should be fairly repeatable. 
While they may be a little different here and there, a shared context provides a reproducible risk assessment, even when done subjectively. 
And when you take this in front of the CIO, it should be very clear to him what stands between him and the threat and just what the consequences of not addressing it are.