Tuesday, December 16, 2014

The Opportunity to Create

We work in a great profession in information security.  Unlike other professions, which are bound by the physical world, we work with near-limitless scope: infosec's context spans not just the physical world but also the digital.  In addition, we work in a profession whose challenges are not just static, caused by physical and technical constraints, but also dynamic, caused by the competing interests of different people.  However, that same dynamism also hinders us in succeeding in our profession.

Information security has always had a combative context.  It's understandable as there is a clear offensive side, a clear defensive side, and rational actors existing on both sides.  We think about solutions in the context of winning the conflict.  This leads us to look for solutions based in force.  Either the force to overcome the other side's defenses or the force to absorb the other side's attacks.

There is another way though.  Instead of thinking of conflict, we can think of building something that simply transcends the conflict.  The same way a dancer complements their partner's movements rather than forcing their partner to do what they want, we can think of information security as the opportunity to create something that transcends the combat; to create something that makes the combat a suboptimal solution to the goals of those participating in it.

I can't say I know what those solutions are.  I'm sure they are much harder to find than simple us-versus-them solutions.  However, I think the transcendentals are a good starting place:

  1. Goodness: Is the solution good, not just in the morally relativistic sense that it is good in my own context, but in all stakeholder contexts?
  2. Truth: Is the solution true, again not just in the relativistic sense, but true for all stakeholders?
  3. Beauty: Is the solution beautiful to all stakeholders?

However, the transcendentals are very abstract concepts to apply.  In our day-to-day work, the following tenets may be much easier to test:
  1. If we are thinking of how a solution helps us gain an advantage over someone or beat someone, it is not the right train of thought.
  2. Finding solutions should include thinking about all stakeholders on all sides of the conflict and their needs.
  3. Finding solutions should include thinking about how the capabilities of all stakeholders can be integrated to create something greater than the sum of the parts.
  4. The solutions may not be technical in nature and may require the inclusion of stakeholders with non-technical skills to implement.
  5. We should be prepared to compromise and sacrifice to find the solution.

Hopefully, by considering these tenets as we think of how to solve information security problems, we can find solutions which transcend the daily conflict of information security.  Hopefully we can find solutions which prevent conflict not because the risk of losing is too great, but because there is no incentive to engage in it.

So the next time you are trying to solve an information security problem, test your approach to finding a solution against the tenets above.  If you find that your approach is inconsistent with the tenets, consider what you could do to bring it in line.  The better a solution meets these tenets, the more likely it is to be a long-lasting solution.

Wednesday, November 5, 2014

VERUM

It's taken years to design, months to build, and weeks to write about, but the blogs are finally up. Meet Verum, a Context Graph System:



Saturday, November 1, 2014

Cyber Attack Graph Schema (CAGS) 2.0

This post represents the update to the Cyber Attack Graph Schema (CAGS) 1.0 I published last year.  It incorporates many practical lessons learned from version 1.0.

Schema
  1. All property names discussed must be stored as lower case.
  2. The graph must be a directed multigraph.
  3. Node properties:
    1. 'class': Must be 'actor', 'event', 'condition', or 'attribute'. (required)
    2. 'value': An atomic value that the node represents.  For nodes of 'class':'event' and 'class':'condition', it will contain a string holding a narrative describing the event or condition.  For 'class':'actor' and 'class':'attribute', it will be a succinct description of the atomic, (e.g. <'class':'actor', 'value':'happy panda'> or <'class':'attribute', 'value':'8.8.8.8'>.)
    3. 'start_time': The time the atomic the node represents began to exist.  Time should be in ISO 8601 combined date and time format (e.g. 2014-11-01T10:34Z) (optional but encouraged)
    4. 'finish_time': The time the atomic the node represents ceased to exist.  Time should be in ISO 8601 combined date and time format (e.g. 2014-11-01T10:34Z) (optional but encouraged)
    5. 'uri': Uniform Resource Identifier in the form "class=<node class>&key=<node key>&value=<node value>" with <node class>, <node key>, and <node value> filled in.  If the node is of a class without a 'key' property ('actor', 'event', and 'condition'), "&key=<node key>" should be omitted.  Note that the prefix is not included and should be handled by the client and server. (optional but encouraged)
    6. 'comments': Provides a narrative of the node. (optional)
    7. 'cpt': A JSON string in the format defined here. (optional.  Likely unnecessary unless using a Bayesian network to predict actor attack paths.)
  4. Additional properties for nodes of 'class':'attribute':
    1. 'key': A succinct type of the atomic.  (e.g. <'class':'attribute', 'key':'ip', 'value':'8.8.8.8'>) (required)
  5. Edge properties:
    1. 'source': the id of the source node. (required) (see Note 4)
    2. 'destination': the id of the destination node. (required) (see Note 4)
    3. 'relationship': The relationship type.  The following table gives the relationship based on the 'class' of the source and destination nodes:

                              destination node 'class'
                              Actor          Attribute      Event        Condition
       source    Actor        described_by   described_by   leads_to     leads_to
       node      Attribute    described_by   described_by   influences   influences
       'class'   Event        leads_to       described_by   leads_to     leads_to
                 Condition    influences     described_by   influences   influences

    4. 'confidence': Float value between 0 and 1 representing the percent confidence that a relationship exists. (optional but implied to be 1 if not present)
    5. 'origin': The source of the relationship. (required)
    6. relationship chain: An edge may have a property of the same name as the relationship and following properties such that "property value"->"property name" forms a chain.  For example: <'relationship':'described_by', 'described_by':'x', 'x':'y'>.  A practical example is when two domains are linked by a relationship where one is the nameserver of the other.  The edge relationship would appear as <'relationship':'described_by', 'described_by':'nameserver'>. (optional)
    7. 'start_time': The time the relationship the edge represents began to exist.  Time should be in ISO 8601 combined date and time format (e.g. 2014-11-01T10:34Z) (required)
    8. 'finish_time': The time the relationship the edge represents ceased to exist.  Time should be in ISO 8601 combined date and time format (e.g. 2014-11-01T10:34Z) (optional but encouraged)
    9. 'uri': Uniform Resource Identifier in the form "source=<source hash>&destination=<destination hash>&relationship=<edge relationship><& relationship chain>&origin=<edge origin>".  (optional but encouraged)
      1. <source hash>, <destination hash>, <edge relationship>, <& relationship chain>, and <edge origin> should be filled in.
      2. The hash should be an md5 hash of the source or destination node URI in the URL namespace.
      3. All additional links in the relationship chain (as described in #6) should be included in order.
      4. Note that the prefix is not included and should be handled by the client and server. 
  6. Additional properties for edges of 'relationship':'leads_to':
    1. 'cost': The economic cost to traverse the edge.  (optional.  May be actor-specific and algorithmically generated.)
Notes:
  1. Nodes and edges may have additional properties, however they will not be validated and may be ignored by the attack graph.
  2. Nodes and edges missing values may be accepted by the server if the missing value can be implicitly filled in.  If the server cannot imply a required property, the node or edge should be denied.
  3. In some cases, various property naming requirements may be incompatible with a piece of software.  In that case, a name should be used in which removing non-alphanumeric characters and casting to lower case results in the same name as doing so to the standard property would.
  4. Most databases will automatically assign a unique identifier such as 'id', 'label', etc.  As the key for this value is normally hard coded in the database and varies from database to database, it is simply a node property that is tolerated.
  5. For reasoning behind schema decisions, see the comments to the CAGS 1.0 post.
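
To make the schema concrete, below is a minimal sketch in Python of a pair of nodes and an edge built per the rules above.  The helper functions are purely illustrative (they are not part of the schema or of Moirai), and reading the hash rule in 5.9.2 as a UUIDv3 of the node URI in the URL namespace is an assumption on my part.

    import uuid

    def build_node_uri(node):
        # "class=<node class>&key=<node key>&value=<node value>"; the 'key' part is
        # omitted for classes without a 'key' property ('actor', 'event', 'condition').
        parts = ["class=" + node['class']]
        if 'key' in node:
            parts.append("key=" + node['key'])
        parts.append("value=" + str(node['value']))
        return "&".join(parts)

    def build_edge_uri(src_node, dst_node, edge):
        # "source=<source hash>&destination=<destination hash>&relationship=...&origin=..."
        # The "md5 hash ... in the URL namespace" is interpreted here as a UUIDv3 (which
        # is md5-based) of the node URI; that interpretation is an assumption.
        src_hash = str(uuid.uuid3(uuid.NAMESPACE_URL, src_node['uri']))
        dst_hash = str(uuid.uuid3(uuid.NAMESPACE_URL, dst_node['uri']))
        pieces = ["source=" + src_hash,
                  "destination=" + dst_hash,
                  "relationship=" + edge['relationship']]
        # Any relationship-chain links (edge property 6 above) would be appended here in order.
        pieces.append("origin=" + edge['origin'])
        return "&".join(pieces)

    # An actor node and an attribute node (attributes also carry a 'key').
    actor = {'class': 'actor', 'value': 'happy panda'}
    actor['uri'] = build_node_uri(actor)          # class=actor&value=happy panda

    ip = {'class': 'attribute', 'key': 'ip', 'value': '8.8.8.8',
          'start_time': '2014-11-01T10:34Z'}
    ip['uri'] = build_node_uri(ip)                # class=attribute&key=ip&value=8.8.8.8

    # A 'described_by' edge from the actor to the ip attribute, per the relationship table.
    edge = {'source': 0, 'destination': 1,        # database-assigned node ids
            'relationship': 'described_by',
            'origin': 'example-dataset',
            'confidence': 1.0,
            'start_time': '2014-11-01T10:34Z'}
    edge['uri'] = build_edge_uri(actor, ip, edge)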

Schema Summary Explanation
This schema provides two fundamental abilities: (1) it allows the description of the context of an organization's information security posture, and (2) it allows the description of an organization's risks in the form of attack paths combined to form an attack graph.

Attack paths start with an actor and progress through conditions and events until they reach a condition which represents the consequence of the risk.  Because these attack paths can be interlinked through shared conditions and events, they form a graph.  Attributes provide context to the attack paths in the form of a robust graph around the core attack graph.
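
As a rough sketch of that structure, here is how a single attack path plus one context attribute might be laid out as a directed multigraph using the networkx library (an arbitrary tooling choice on my part; the node values are invented examples):

    import networkx as nx

    # CAGS requires a directed multigraph; networkx's MultiDiGraph provides one.
    g = nx.MultiDiGraph()

    # An attack path: actor -> event -> consequence condition, plus a context attribute.
    g.add_node('n1', **{'class': 'actor', 'value': 'happy panda'})
    g.add_node('n2', **{'class': 'event', 'value': 'phishing email delivered to finance staff'})
    g.add_node('n3', **{'class': 'condition', 'value': 'payment data exfiltrated'})
    g.add_node('n4', **{'class': 'attribute', 'key': 'ip', 'value': '8.8.8.8'})

    # Relationships follow the table above: the path itself uses 'leads_to',
    # while the attribute describing the actor uses 'described_by'.
    g.add_edge('n1', 'n2', relationship='leads_to', origin='example', start_time='2014-11-01T10:34Z')
    g.add_edge('n2', 'n3', relationship='leads_to', origin='example', start_time='2014-11-01T10:34Z')
    g.add_edge('n1', 'n4', relationship='described_by', origin='example', start_time='2014-11-01T10:34Z')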

The application of the attack graph is far too complex to describe in this post.  The current Moirai code is based on CAGS version 1.0 and will need to be updated for consistency with version 2.0.

Tuesday, October 28, 2014

Examination of the Cassandra Distributed Storage System

Attached is my review of the Cassandra top-level Apache Foundation project: Examination of the Cassandra Distributed Storage System.  This may be helpful to those looking at potential solutions for building scalable security solutions.

Friday, October 17, 2014

The Importance of Data

Based on a Twitter conversation with Wade Baker, I wrote a post on why data is so significant in information security.  Check it out here at the Verizon Security blog.  Twitter is just a bit too short to answer Wade's question, but I think the blog post does the question justice.

Wednesday, September 10, 2014

Application of a Buyer Readiness Model to Adoption of Cross-Vertical Initiatives

I just finished a talk at the Nashville Technology Council's Nashville Analytics Summit.  It covered the paper my colleague and I wrote: Application of a Buyer Readiness Model to Adoption of Cross-Vertical Initiatives, which covers how to sell an analytics initiative (or any other cross-vertical initiative, such as security) to internal stakeholders.  If you'd like, also check out the slides for our presentation: Your Boss Buys Analytics.

Thursday, August 21, 2014

Signal to Noise

Just posted a blog at Verizon Security about the signal-to-noise ratio advantage that attackers have over defenders.  Check it out here.

Wednesday, July 16, 2014

Security to Serve, not to Subjugate

A recurring theme in information security (and many other disciplines which cut across verticals) is "I could solve the problem if I could just get everyone to follow a few simple rules".  We know, and they may even agree, that the simple rules are good practices that should be followed.  However, the rules are rarely followed.  When they are followed, any adversity causes them to fall by the wayside, and no-one is particularly happy to follow the rules.

The fact is, even though we are benevolent rulers with a light burden, we are still acting as authorities over other groups in our organization.  Authority is rarely appreciated, regardless of the burden.  If we want to truly get the support of our organization, we need to serve them, not rule over them.  But how do we provide security through service?

A Model for Service
With a little adaptation, the Center of Excellence (CoE) model can provide cross-vertical competencies through service to the organization.  Our CoE will have three goals (services it provides):

  1. Evaluate Quality - The CoE will provide a repeatable approach to evaluating how well other groups in the organization are doing at infosec.
  2. Lessons Learned Sharing - The CoE will collect lessons learned about infosec from groups across the organization and distribute them to the rest of the groups.
  3. Support Execution - The CoE will support the execution of infosec in three ways, based on how the supported group wants to be supported.
    1. If the group knows how to do infosec, leave them alone.  Let them do their thing.
    2. If the group wants to know how to do infosec, teach them how to do it well.
    3. If the group doesn't want to deal with infosec, offer to do it for them.  Obviously they will still need to provide the resources, authority, etc., necessary for the CoE to provide this service.

It is important that the CoE not see themselves as specialists proselytizing to the unwashed heathens.  The CoE serves others; it doesn't rule them and it isn't better than them.  To that end, the CoE should strive to provide the services when requested, only providing them unsolicited when absolutely necessary.  Also, the CoE need only charge for bullet 3.3. The CoE should be internally funded to provide the other services.

One way to start developing this CoE is for the group to begin solving likely problems before the CoE is even engaged.  If you look forward and help develop solutions before the problems arise, when groups come to you with questions, you will be able to serve them by solving their problems.  This will bring them back to you and help you establish your CoE of infosec service.  And by all means, don't be shy about your successes.  Make sure others know you are serving the organization and solving others' problems.  Soon they will be coming to you for infosec help and you can use the opportunity to establish the CoE.

P.S.
The approach doesn't just work for information security. It can work for any service: Data Analytics, Quality Assurance, etc. By applying this approach, the requirements will not be burdens, but services.

Sunday, July 6, 2014

You the Outlier - Why Privacy/Anonymity is Important in a Big-Data World

In my previous piece, I argued that privacy was dead and multi-persona anonymity needs to take its place.  This is based on a critical premise though, that we need privacy (or anonymity).  I hear many poor arguments in support of privacy.  Let's look at those first and then consider a better reason.

Being Held Accountable for Your Actions
Let's address all the poor reasons we hear.  Obviously the argument against privacy is, "Why do you need privacy if you have nothing to hide?"  There are multiple lukewarm responses:
  1. "BECAUSE" - The concept that it is something you should 'just have'.
  2. What if the acceptability of my actions changes with the progression of time or 'those in charge' think my actions are a problem when I do not?
  3. No-one is perfect.  Should that be held against us?  In perpetuity?
  4. What about the insurance company who'll raise our rates when they find out what we've done?
These are all poor arguments for privacy for one reason: they all assume someone shouldn't be held accountable for their actions.  While I think forgiveness is at the foundation of humanity, I don't think escaping accountability for our actions can be held up as the reason for needing privacy.

Being Held Accountable for Others' Actions
In a big-data world, we are not necessarily judged by our actions, but by the profiles we match.  This is nothing new.  But while in the past an employer might require employees to sign a letter allowing the employer to inspect their driving records and then fire those who receive any tickets or a DUI, with massive data available this can be taken to an unprecedented level.

Instead of inspecting a driving record, an employer may install monitoring devices in personal vehicles.  The monitor has a database of speed limits.  If you go more than 5 miles per hour over, you receive a warning to slow down.  If you don't slow down within 6 seconds, your violation is reported, which could lead to your firing.

The first case is a crude model with very bold, red lines not to cross.  The second is a much more subtle model, with ambiguous grey lines.  It is one fed with every speed you have ever driven.  It says that those who regularly drive more than 5 miles per hour over the speed limit are a liability.  However, where did that model come from?  How was it validated?  Was it validated?

The reason privacy (anonymity) is important is that every model has a large number of outliers, and there is a good chance you are that outlier in some model.

In a big-data world, we are judged against models.  "If a person exhibits A, B, and C, then they must be D".  Being D may mean being unemployable.  It may mean being paid less or paying more.  It may mean being excluded, untrusted, or any other number of things.  However, in the model, there will be a number of outliers.  No-one cares for them as, by definition, they are not the norm.  Still, on the flip side, everyone is probably an outlier in some model.  And being judged by a model to which you are an outlier is inherently being held accountable for others' actions.

In this case, you have done nothing wrong.  You will not do what the model accuses you of doing.  But you fit some model which you will not get to challenge and which may never have been critically assessed in the first place.
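
As a toy illustration of the problem (the criteria and the person below are entirely made up), consider a simple "exhibits A, B, and C, therefore D" model:

    # A made-up "exhibits A, B, and C, therefore D" model.  It judges people by the
    # profile they match, not by anything they have actually done.
    def model_says_high_risk(person):
        return (person['drives_late_at_night']
                and person['credit_history_years'] < 2
                and person['moves_in_last_five_years'] >= 3)

    # A night-shift nurse who just finished school and relocated for work matches the
    # profile perfectly, despite having done nothing the model "accuses" her of.
    outlier = {'drives_late_at_night': True,
               'credit_history_years': 1,
               'moves_in_last_five_years': 3}

    print(model_says_high_risk(outlier))   # True: held accountable for others' actions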

This critique isn't meant to detract from the usefulness of models.  Models can co-exist with privacy and anonymity.  Models trained on real data still offer significant value in many areas including trends and decision analysis.

But we want to make sure models don't become the pre-cogs in Minority Report.  Otherwise, the movie Gattaca could easily become our future.  In the end, privacy is not about escaping accountability for the things you did.  It's about not being held accountable for the things you didn't do.


Wednesday, July 2, 2014

Easy Security Acquisition

Intro
Now that the visibility of information security has grown, information security programs are facing a new problem: the bonanza of investments that can be made to 'enhance' a security program.  With so much money in the pool, there are many vendors doing all they can to encourage the purchase of their product.  So how is a company to choose its investments?

The Best Way isn't Always Best
Most people would immediately go to a risk-based system.  The logic being, "If I choose the projects which mitigate the most risk, I will make the greatest improvements in my security posture."  While this is true, there is a subtle technicality hidden in that statement.

The statement above requires an extremely mature risk program. The risk program must not have any biases. It must include all areas of mitigation (identify, protect, detect, respond, recover) and methods (Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities and Policy). It must be tailored to the threats the organization faces as well as the vulnerable conditions that exist within the organization. It must consider the entire attack path and must consider alternate branches an attack might take (coming in the window when the door is locked).  It must capture all of these characteristics in a continuous manner across the organization.  Additionally, none of these characteristics can be biased, as the bias will then be reflected in the acquisition.  While it is possible to have such a risk program, very few organizations do.

The Next Best Way
In lieu of the perfect risk program, the next best way is Operations-Based Acquisition.  In this scenario, we are going to assume our goal is to prevent attacks and that our security operations team is our last line of defense in preventing attack.

The first thing we must do is ensure our security operations team is competent.  This means that if the investments haven't been made already, they will need to be made to build the team, develop procedures, and train the team.

However, once the team is established, they will be able to identify the opportunities for investment.  Instead of measuring investments by the decrease in risk, we measure them by the increase in the security operations team's efficiency.

We can look to the security operations team to help inform this.  When they notice that they are having to deal with attacks from a segment of the network that could be firewalled, we can segment the network and be more efficient.  When they notice that they don't find out about attacks until they are widespread due to lack of visibility, we can invest in IDSs and SIEMs.  When we notice human error taking lots of the security operations team's time, we can increase training.  And the beauty is that the use of the security operations team's time is measurable, and so the return on the investment can be captured!
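
As a back-of-the-envelope sketch of capturing that return (every number below is hypothetical), the value of an investment can be expressed in the analyst time it frees up:

    # Hypothetical numbers illustrating operations-based acquisition ROI.
    analyst_hourly_cost = 75.0        # fully loaded cost per analyst hour, in dollars
    hours_saved_per_week = 20.0       # e.g. alerts no longer triaged after segmenting the network
    investment_cost = 40000.0         # purchase plus deployment of the improvement

    annual_savings = analyst_hourly_cost * hours_saved_per_week * 52
    first_year_roi = (annual_savings - investment_cost) / investment_cost

    print("Analyst time recovered per year: ${:,.0f}".format(annual_savings))   # $78,000
    print("First-year ROI: {:.0%}".format(first_year_roi))                      # 95%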

Conclusion
Is it perfect?  No.  Is it quick, easy, and useful?  Yes!  And it is certainly better than simply buying the newest tool based on the newest report of evil hackers!  It is measurable and it is needs-driven.  All in all, a good approach.

Wednesday, June 18, 2014

Data: Defense's Home Field

If vulnerabilities are attack's home field, then data is defense's.

Vulnerabilities Are Attack's Advantage
When we talk in terms of vulnerabilities, attackers inherently have the advantage.  We have to defend against many.  They have to find few.  They can continuously look for them without our knowledge.  A new vulnerability's use may be the first time we become aware of it.  Simple imperfection means that there will always be vulnerabilities available to the attacker. Economically, it will always be more rewarding for the attacker to exploit vulnerabilities than for us to fix them.

The Goals of Defense
Data, on the other hand, is where defense has the advantage.  But to understand why, let's first step back and understand the goals of defense.  Attacks end in only one of three ways: the attacker reaches his/her goal (and likely causes a negative impact for us); defense prosecutes the attacker (whether by holding them accountable to company policy or to the law); or defense makes the cost of attack so high the attacker either can't or doesn't want to attack any more.

To come to either defensive win (prosecution or disengagement), defense needs data.  The attacker must be identified and profiled in either case.  To prosecute them, we must know who they are, where they are, and what they did to the point where we can prove it to others.  For disengagement, we need to know so much about them that it becomes too resource intensive for them to do something we don't know about.  (i.e. take action that we cannot identify as an incident, or as them.)

Data Is Attack's Disadvantage
If vulnerabilities economically benefit the attacker, data economically benefits defense.  To get data, defense must simply have sensors where data is being generated and a means to identify profiles within that data.

For attackers it is very resource intensive to not generate data.  In the real world, just sitting quietly generates data.  You generate a heartbeat and a heat signature, both of which can be sensed through walls.  The character Jack Reacher is based around the premise of someone minimizing the data they generate.  It takes a lot of time and effort for Jack to do so.  As can be seen from my blog on Multi-Persona Anonymity, it is very resource intensive to separate your profiles (i.e. to not generate data that links one 'you' to another 'you').

Every time an attacker touches a computer, they generate reams of data.  Every time they use the network.  Every time they interact with a server or run a program, they are generating huge amounts of information.  They are generating logs of who they are, where they are, what actions they took, and what the outcomes of those actions are.  Anything that can, in any way, be tied back to their profile as a threat actor can be used by defense to end the attack.

And the more data they generate and we collect, the easier it becomes.  We can build profiles of everything they do, forcing them to change everything from the computer they use to the timezone and geographical location the attack comes from.  We can force the attacker to create completely new tactics, techniques, and procedures, in addition to new tools, for every single attack they attempt.  Attackers will no longer be able to try and fail until they get the attack right.  Every time they fail, they both increase our ability to prosecute them and have to expend significant resources to completely change their profile before trying and failing again.
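
A rough sketch of what that profiling might look like (the observations and field names are invented): every data point tied back to the actor becomes part of a profile, and everything in the profile is one more thing the attacker has to change before the next attempt.

    from collections import defaultdict

    # Invented observations tied back to one threat actor's activity.
    observations = [
        {'actor': 'happy panda', 'src_ip': '203.0.113.7', 'timezone': 'UTC+8', 'tool': 'custom-rat-v1'},
        {'actor': 'happy panda', 'src_ip': '203.0.113.9', 'timezone': 'UTC+8', 'tool': 'custom-rat-v1'},
        {'actor': 'happy panda', 'src_ip': '203.0.113.7', 'timezone': 'UTC+8', 'tool': 'mimikatz'},
    ]

    # Fold every observation into a profile of the actor's attributes.
    profile = defaultdict(set)
    for obs in observations:
        for key, value in obs.items():
            if key != 'actor':
                profile[key].add(value)

    # Everything captured here is something the attacker must now change
    # to avoid being recognized on the next attempt.
    for key, values in profile.items():
        print(key, sorted(values))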

Investment Needed
To realize this advantage, some investment is needed.  We need the tools to parse sensor data into standard, interoperable formats such as STIX, CYBOX, CAGS, and VERIS.  We need integration of transport systems that move data between tools and organizations such as PxGRID, TAXII, IF-MAP, and Moirai.  And we need investment in tools to parse the data and build the profiles of attackers, an active area of research from individuals and companies such as the MLSec Project.

In Conclusion
With data, the "try, try, again" approach to attack will be over.  By stopping it, the vast majority of attackers will be priced out of the market, leaving defense to deal with truly dangerous threats who are willing and able to commit massive resources to the attack.  And defense will still have the advantage.

Thursday, May 8, 2014

Multi-Persona Anonymity

Recently Janet Vertesi, an assistant professor of sociology at Princeton University, tried to hide her pregnancy from the internet.  While she found it was extremely hard and some have questioned the value of going to the trouble, I believe her experiment may be seminal.  Here's why.

Anonymity vs Privacy
But before the why, a little discussion of privacy and anonymity.  There has been much debate about privacy, but it is assuredly dead.  This report proves it.  None of the things Janet did were private.  Each was logged, tracked, analyzed.  However, they were anonymous in that they were not correlated back to a central persona; to her.  I think this is the fundamental difference between privacy and anonymity.  Privacy means no-one or few know what you did.  Anonymity means no-one or few know it was you.

What Janet achieved was anonymity.  Her purchases were tracked and shared.  Her browsing habits were tracked and shared.  Her purchases were associated with an anonymous address (an Amazon delivery locker).  Communicating her pregnancy by phone or in person was probably the only private thing she did.  Even that was probably not a great idea, as the metadata from the phone calls and the phone call content itself could easily have been recorded.

Why What She Did Was Important
What Janet did was seminal:

  1. She proved you could disassociate multiple personal personas through anonymity, something that is no longer possible through privacy.
  2. She identified the touch points necessary to disassociate and proved they could be disassociated, at least for a short period.

To expand on the first bullet, people think they want privacy.  They don't.  They want to do things without everyone knowing they did them.  That will never again be accomplished through privacy.  However, it can be accomplished through anonymity.  The trick is to maintain multiple disjointed personas.  In this case, Janet had two: "a pregnant woman" and "indeterminately pregnant Janet".  However, people could have multiple personas: "Work", "Family", "Hobbies", etc.  All kept completely separate.

The 'how' of keeping them completely separate is captured in number two of what Janet did.  She determined what the touch points were. As an example:

  1. Communication
  2. Physical interaction.  (In this case transfer of goods)
  3. Economic interaction
  4. Authentication

She also identified ways of dealing with all of these: in-person communication, Amazon lockers, pre-paid debit cards/cash, separate email addresses to create accounts with.  Unfortunately, there are multiple issues with what she identified, such as monetary limits, potentially monitored phone calls, physically accessing the Amazon locker, etc.  This is where the technology community needs to come together.

The Future
We need to stop trying to ensure privacy and instead start trying to ensure anonymity between personas.  We already have the building blocks.  Bitcoin provides economic interaction.  VPNs, anonymizing proxies, Tor, etc. provide communications.  Crypto-currency based identities such as Namecoin can provide anonymous identities for personas to authenticate against.  Even physical interaction could be anonymized through things like full-body suits.  Just such a physical situation is envisioned in the movie Surrogates.  However, we need to make anonymizing multiple personas the explicit goal of the tools we create to ensure they provide the security we desire.

This does not eliminate the need for privacy.  There will be locations where a person's personas interact.  Historically, this is a person's home.  This is why it receives unique legal protection.  However, this could also be a business model, allowing people to change personas, allow interaction between personas or shed/create personas in privacy.  This would likely be a physical facility with little to no monitoring behind closed doors. An example even exists in the Game of Thrones universe.

Ultimately, it will end in an arms race.  Those attempting to associate different personas will compete against those maintaining different personas and the projects producing the tools that allow them to do so.  However, as abuses of privacy become more egregious, the practice of strictly maintaining multiple personas will become more socially acceptable and the act of attempting to associate personas more malevolent.

In Summary
What Janet did is seminal and should open our eyes to the world we really live in.  The sooner we start work on maintaining separate personas, the sooner we may be able to enjoy the benefits we will never again get from privacy.

Saturday, February 1, 2014

Of Vulnerabilities and Bullets

Where I explain why no-one cares about the vulnerability you found.

I've had many people try to convince me that vulnerabilities are the base unit of infosec.  My experience is that vulnerabilities are something, but not much.

But let's talk guns.  Everyone has heard the saying, "Guns don't kill people, people kill people."  Probably more accurate would be that bullets kill people, but only in very specific situations.  There must be a gun, a target (in the same area), and a threat actor to pull the trigger to go with that bullet.

Vulnerabilities work much the same way.  Vulnerabilities are like bullets: they play a part, but no more.  The same way there are innumerable bullets out there yet very few ever kill people, there are innumerable vulnerabilities, yet very few are used to realize a risk.  This is because, just like a bullet, a vulnerability must exist in a greater context that makes it part of a risk a threat actor can exploit with an impact that matters.  And of course the threat actor must exist to pull the trigger.

So, as a security researcher, if you feel that your vulnerabilities are not taken seriously, don't just consider the vulnerability when presenting it.  Consider the context the vulnerability is likely to exist in.  Consider whether a threat actor even exists to exploit the vulnerability.

If they do, convince people.  Show them where similar vulnerabilities have been exploited by threat actors.  Show them where the vulnerability helps known threat actors realize their stated goals.  Show them how the only part of the context preventing exploitation is that the threat actor simply hasn't made it to their organization.

Only then will the vulnerability matter.

Thursday, January 30, 2014

Infosec, It's About What You Think You Know

Both the current core failure of infosec defense and its ultimate success are fundamentally tied to what you think you know. Let me explain.

First and foremost: We don't lose because of vulnerabilities. We lose because we believe we are in one infosec state, and the threat realizes we are in a different, more vulnerable, state. That means that it's not whether or not the vulnerable condition exists that matters, it's whether the threat actor knows it does and we think it doesn't.

Second, if losing is about believing you are in one security state when you are in another, winning is about the threat actor believing they are in an infosec state when they are actually in another. We make this happen in the one place we control: our network.

Currently, threat actors can operate with impunity because our network is operating the way the threat actor believes it's operating. To tip the balance of power in infosec conflict, we need the network to be operating differently than the threat actor believes it's operating. To do that, we need to do a few things:

  1. We need to treat the SIEM like a big data warehouse. Our SIEM should be a network telemetry data warehouse. It needs to receive as much alert data as possible through integration layers capable of dealing with the specific data types being inputted. It needs to be able to pull in additional associated data. In the data marts, it needs to detect not just malice, but any activity outside of the normal network profile. (This leads to a separate question of how to profile a network which I'll leave for another post.)
  2. The network telemetry data warehouse needs to be able to correlate detected anomalies with other data to piece together the picture of what is happening. Understanding the relationships between observations is critical to understanding the ground truth of the conflict.
  3. Most importantly, it needs to have a feedback loop, changing how the network is operating based on what the SIEM believes is anomalous or malicious. When potential malice is detected, it needs to take a different route through the network. Malice needs to have different rules applied to its traffic. It needs to have different tools applied to delaying the threat, gathering intelligence, and responding to the threat to prevent negative impacts. This can all be accomplished in an automated feedback loop so that the network is pitting itself against any anomalous behavior. (A minimal sketch of such a loop follows this list.)
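
Here is that minimal sketch in Python.  The event fields, the baseline profile, and apply_policy() are all invented for illustration; a real implementation would drive an SDN controller or similar rather than printing.

    # Minimal sketch of the detect -> decide -> change-the-network feedback loop.
    # The baseline, event fields, and apply_policy() are invented for illustration.

    baseline = {'db01': {'allowed_ports': {443, 5432}}}    # the "normal network profile"

    def is_anomalous(event):
        profile = baseline.get(event['host'], {})
        return event['port'] not in profile.get('allowed_ports', set())

    def apply_policy(event):
        # Stand-in for the SDN/virtualization layer: reroute the flow through an
        # inspection segment, apply stricter rules, and start gathering intelligence.
        print("Rerouting {}:{} traffic for inspection".format(event['host'], event['port']))

    telemetry = [
        {'host': 'db01', 'port': 443},     # within the normal profile
        {'host': 'db01', 'port': 3389},    # outside the profile: treated as potential malice
    ]

    for event in telemetry:
        if is_anomalous(event):
            apply_policy(event)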

The network of the future is not a static battlefield, but a living, pulsating thing. The network uses the massive amount of telemetry data at its disposal and the broad flexibility provided by Software Defined Networking and virtualization to respond to perceived threats. It puts threat actors at the same type of disadvantage that defenders currently face. And, ultimately, the advantage in infosec conflict will tip in defense's favor because threat actors will be unable to trust that the network environment they believe they are operating in is the true network state.

Saturday, January 18, 2014

Gabe's Three Assumptions of Risk Assessment. i.e. the Chain of Trust


Over the years of discussing vulnerable conditions and risks, I've come up with three assumptions which help ground the risk assessment in reality:
  1. If the threat actor has superuser privilege, they can realize the risk.  (This one has some caveats, however exceptions to this rule are so rarely implemented they are likely to not matter in any practical cases.)
    1. This might not apply if you have immutable files
    2. Off-host logging may detect the threat actor
  2. If the threat actor has physical access, they can deny availability.
  3. If the threat actor has unlimited physical access, they can gain superuser.  (see Assumption 1)

This could probably be summed up with a single concept: the Chain of Trust.  In this case we are implying that the system's security depends on the superuser's security, and that both the system's and the superuser's security depend on physical security.  If you haven't secured this chain of trust, any security established further along the chain is moot.

I believe Dan Kaminsky's take is, "If you have root, you can get root" (paraphrased).  So when doing risk assessments, if at any point you assume the threat actor has compromised something farther back in the Chain of Trust, the rest of the line of reasoning is at issue.

As an example:  "If the threat actor pushes the power button on the computer, they could turn it off and shut everything down.  Therefore we should lock the power button." This assumes the threat has physical access in which they could pull the cables out of the computer, hit the emergency power off, or do any number of other things.

Alternately: "The bad guy can run code that can read all memory, so lets encrypt the data in memory."  This implies the threat already has superuser privileges and so could simply prevent the encryption, read prior to encryption, or copy the encryption key and decrypt.

So whether you are assessing risk or planning mitigations, remember the Assumptions and remember the Chain of Trust.

P.S. The Chain of Trust was not my idea but one that I got from Travis Howerton at Innovalysis.  It just fit well in explaining the three assumptions.

Thursday, January 16, 2014

Infosec Strategy in 1

Target, Neiman Marcus, Microsoft, and many, many more...

Corporate America has a huge security problem.  And it's not compromises.  It's a lack of strategic vision in cyber security.

With a never-ending litany of massive breaches, organizations are spending so much time trying to put fingers in the dike that no-one is stepping back to look at the whole levee.  Websites being compromised?  Buy WAFs.  Point of sale being compromised?  Put more tools on the PCI LAN.  China hacking people?  Get a cyber intelligence feed.  PHI/PII being leaked to pastebin?  Get DLP.  No-one stops to ask the question, "Do these fit together?"  And when you don't, your infosec defense looks like this:
Friday’s Friendly Funny by Dave Blazek is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License.
Before thinking about point solutions, an organization must come up with a strategy.  I would suggest a Strategy Statement such as:
Delay threat actors from realizing risks until they give up or are detected and responded to.  Respond effectively.  Degrade gracefully and remediate effectively when threat actors realize risks.
The above single statement sums up an entire infosec program, laying out specific steps that can be used to plan and measure the program.  Yours doesn't need to be the same, but it needs to be a clear and concise statement you can make measurable progress against.  This one lays out base truths:

  • That the program will be operations driven.
  • That risk is a fundamental element of the security program (You can read some of my views on risk here, here, here, and here.)
  • That the fundamental measurement of effectiveness is Delay vs Detection & Response.
  • That the organization should expect to operate in and recover from a compromised environment.
It also establishes the stages of incident life-cycle that drive the strategy:
  1. Delay
  2. Detect
  3. Respond
  4. Remediate
Calling the first step Delay is meant to be a bit controversial.  I think normally it would be 'deny', 'protect', 'deter', or something else.  However, as a community, we need to get out of the idea that if we just build it secure enough, the threat will go away and never come back.  Obviously, not all threats will stick with their attack; however, we need to plan our strategy for the ones that do, and those are the cases where all we are doing is delaying.

This is a statement we can easily track progress against in one easy-to-read table:
Infosec Defense Execution Strategy

You can download the Infosec Defense Execution Strategy spreadsheet, including an example.  We also add reporting and after action review to the stages.  The stages can easily be modified to meet an organization's process.  The Defensive Execution Strategy also breaks each step out into discrete levels of completion:

  1. Define (Document what you want to do)
  2. Build (Create anything you need to do it)
  3. Train (Practice doing it)
  4. Grade (Measure how well you do it)
  5. (There is an implicit 5th step: if you find any deficiencies in your grading, you feed the measurement back into improving the step where the deficiency can be rectified.)
Within the levels of completion we define two specific things: Who and What.  Without the who, it is unclear who will actually get the work done.  If an organization doesn't know who will get the work done, you can almost guarantee no-one will do it.  A good model to use is RACI: Responsible, Accountable, Consulted, Informed.

'What' is also critical to tracking the strategy.  There need to be deliverables which clearly show that a step has been performed.  Managing based on deliverables significantly simplifies tracking of progress.  In the same vein, you need to know what products need to exist prior to starting a step.  If you don't, you have no way of measuring whether you are ready to begin or not.  Ultimately the topic of management by deliverables could fill a book.

From this one table of levels of completion above, all information security projects can be planned.  This also helps keep the organization focused on more than just the 'build' step.  
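
As an illustration (the owners and deliverables below are examples, not prescriptions), one lightweight way to hold that table is a structure keyed by stage and level of completion, with the RACI 'who' and the deliverable 'what' attached to each cell:

    # Example tracking structure for the Infosec Defense Execution Strategy.
    # Stage names follow the post; owners and deliverables are illustrative.
    strategy = {
        'Delay': {
            'Define': {'who': {'Responsible': 'Security Architect', 'Accountable': 'CISO'},
                       'what': 'Documented delay objectives and controls'},
            'Build':  {'who': {'Responsible': 'Network Engineering', 'Accountable': 'CISO'},
                       'what': 'Segmentation and hardening changes deployed'},
            'Train':  {'who': {'Responsible': 'Security Operations', 'Accountable': 'SOC Manager'},
                       'what': 'Tabletop exercise notes'},
            'Grade':  {'who': {'Responsible': 'Red Team', 'Accountable': 'CISO'},
                       'what': 'Measured time-to-compromise report'},
        },
        # 'Detect', 'Respond', and 'Remediate' follow the same shape.
    }

    # Progress is then just a question of which cells have their deliverable.
    for stage, levels in strategy.items():
        done = [lvl for lvl, cell in levels.items() if cell.get('what')]
        print(stage, '->', ', '.join(done))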

And each stage can be decomposed.  Delay may be broken down into:
  1. Preventing incidents
  2. Operating in a compromised environment
Detection may be broken down into:
  1. Internal awareness
  2. External intelligence
  3. Prioritizing potential malice to investigate
  4. Facilitating correlation of prioritized information
(As an aside, #3 and #4 above are a fundamentally new way of looking at DFIR that is not yet widely adopted and deserves its own post.)

All projects and all security requirements should be traceable to the Strategy Statement through the Infosec Defense Execution Strategy and the various levels of decomposition.  With this as a starting point, organizations can see how all of their projects and requirements fit together, identify gaps, and form a unified defense that looks less like the first picture and more like this:
Image by Hao Wei, licensed under the Creative Commons Attribution 2.0 Generic license.