Wednesday, February 27, 2013

The Inside Liar

A significant topic for risk management these days is the insider threat.  While it is accepted that the insider is the most likely threat actor, we have very little ability to deal with the insider threat.  We generally treat them like an external threat:  we look for their malicious actions and consequences.  Whether it be sabotage or the loss of an organization's protected data, we see the results of the threat's actions and then attempt to find the root cause of what happened, hopefully associated with a threat actor (be it insider or external).  However, this is extremely hard.  The complexity of our information systems provides ample space to hide malice.

There is another way:  FIND THE LIE
If you ever watch an episode of COPS, after they have the situation under control, they break up the parties and ask a simple question, "So what happened here?"  The reason is that, once they have the statements of the various parties, their job changes.  They are no longer trying to find 'root cause', they are trying to figure out who is lying.  When they find the liar, they've found the threat.

This offers a unique way for us to find insider threats.  Rather than look for the consequences of their actions, look for the lies.  From the second they get to the door to the minute they leave, we expect our staff to assert information.  They assert that they have a legitimate badge and that the badge represents them.  They assert that they have a legitimate account on our information systems.  They assert that they have a legitimate reason to access information through access forms and user agreements.

To realize a consequence, our threat must lie.  Whether it's about why they are accessing information, why they are using a system, or why they are entering an area, they must lie.  Those lies provide us an opportunity to detect our insiders.  By processing the wealth of information we have on our users, we can look not for malicious actions and consequences, but instead for the lies that precede them.  While threats will actively attempt to cover up their malicious actions and consequences, they will continuously generate more and more lies as they do so, providing us additional opportunities to detect them.
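To make that concrete, here is a minimal sketch of one "find the lie" check: comparing two assertions our staff make every day, "my badge entry represents me being in the building" and "this console login is me at my desk."  The log feeds and field names are hypothetical placeholders, not any particular product's format.

from datetime import date

# Hypothetical feeds: user IDs that badged into the building today, and
# console (non-VPN) logins observed on the internal network today.
badge_entries = {"asmith", "bjones"}
console_logins = [
    {"user": "asmith", "host": "ws-114", "time": "09:02"},
    {"user": "cdoe",   "host": "ws-201", "time": "09:15"},  # no badge record
]

def find_contradictions(badge_entries, console_logins):
    """Return logins whose implicit assertion ('I am in the building')
    contradicts the badge system's record."""
    return [login for login in console_logins
            if login["user"] not in badge_entries]

for lie in find_contradictions(badge_entries, console_logins):
    print(f"{date.today()}: {lie['user']} logged in at {lie['host']} "
          f"({lie['time']}) with no badge record -- investigate")

Neither assertion has to be malicious on its own; it's the contradiction between them that earns a closer look.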

It is time we focused our defensive efforts where we have the advantage and where our attackers do not.  While threats may have the advantage when it comes to hiding their malicious actions within the vast complexity of our information systems, a lie begets a lie.  And a lie has short legs.

Monday, February 25, 2013

Infosec Management Is The Problem

I regularly hear keynotes, presentations, and press releases from industry and, in particular, the government, about how there are simply not enough information security professionals to do the jobs available.  They seem to imply that what we need is a larger pool of information security professionals.  I think this is patently untrue.

The unavailability of infosec professionals is a manufactured shortage.  It's created by many factors: specialization within information security, required certifications, fragmented compliance standards, required software qualifications, clearances, and job locations, to name a few.  Infosec, cybersec, or whatever you want to call it, is one of the coolest yet most accessible STEM jobs available.  There is a huge body of programmers, systems administrators, and technology professionals who would gladly work in infosec.  No, what we have is a management problem.

Management Problem 1:  Who we will hire.
If you are CYBERCOM, you want 4,000 qualified cyber security professionals.  Except you want them in the Washington D.C. area with a TS/SCI clearance.  If you are DHS, you still want them with a clearance as well as an arbitrary certification.  If you are a business, you want them with broad infosec knowledge, experience on the tools you've bought, certification in your area of specialty, and experience in your specific compliance regimen.  The frank fact is that deep experience is somewhat mutually exclusive with specialization in your particular business area.  As anyone gains experience, they're going to have it mostly in one technical area (whether it be IDS-firewall-SIEM management, pen-testing, systems administration, etc.) as well as one field of application (military, ICS, health care, payment, etc.).  They will know about the other areas, but not be experienced in them.  They will also be older, which means they're more likely to have a family, which means they likely need to live near the business.

All of this adds up to restricting the pool of 'qualified' applicants unnecessarily.  The person who understands NIST compliance will pick up PCI DSS compliance fairly quickly.  The person who can use Splunk will understand IBM Tivoli.  If the person can do the job, they will be able to gain whatever certification is necessary.

Alternately, an organization could simply hire out of the body of skilled programmers and systems administrators and plan to train them in security.  With some minor planning, this will produce much better employees anyway, as you can ensure their skill set, and most will enjoy the opportunity to improve their skills.  Frankly, the industry moves so fast that you're going to be continuously training your infosec professionals anyway.  Someone who has simply been the specialist in their area for years without continuous education is likely no more ready for the job than a newcomer to infosec, regardless of what their resume says.  What you really get with an experienced employee is maturity and acclimation to the business environment.

Management Problem 2: Getting them.
First, if you happen to have found the perfect person, they know it too.  They will expect to be very well paid (regardless of how mundane the actual job is).  Additionally, if the job description says you wanted an infosec deity yet your plan is to have them write firewall rules all day, they might not be amused.  If you hire highly technical, highly skilled people, give them a broad, highly technical job.  Give the firewall rule writing to an intern with a reference manual.  Also, be mindful of your location.  Do you really need the person on site, or would one week a month work?  Most people either love or hate the urban environments of the coasts.  Finally, if you're the government, don't require clearances for everyone.  There's a large portion of the infosec community who don't want to hold a clearance.  They are good, honest professionals.  They just don't want the obligations and hassle associated with a clearance.  Additionally, by leaving defensive work as unclassified as possible, you make it available to industry, which desperately needs it.


Management Problem 3: Keeping them.
So you gave a bunch of money to a rock star infosec professional.  Or maybe you hired a young gun to become one of your infosec gurus.  Like many things in life, they won't stay or be productive unless they are treated right.  The very first step is to figure out what type of person they are.  (Employers should have figured this out before they hired, but it can take some time.)  Some people enjoy 'turning the crank': they enjoy doing a somewhat repeatable job that has clear bounds.  If that's the type of infosec professional you hired, best not to ask them to architect your infosec defense.  Put them on change request review.  Alternately, if they are a creative self-starter, asking them to predominantly push Windows patches may not provide them a fulfilling work life.  Regardless, most infosec professionals have some ideas about how things could be done better.  Listen and act on them!  Clearly the status quo is not good enough.  Implicitly, that means new ideas will be required to reach an acceptable infosec posture.

Second, provide a career, not just a job.  Currently, infosec hiring is about desperately needing to fill some niche in an organization's security team.  This simply makes for pigeon-holed, unfulfilled employees.  The wealth of infosec training available is absolutely necessary to help employees grow in their career (as well as simply maintain their proficiency).  Additionally, there is a need to provide a career progression.  The guy watching the SIEM at night should know where he will be promoted next if he does well.  There should be traceability from his position to CISO, with a list of the skills necessary to move to each next level.  Finally, as stated above, give your employees a chance to have their ideas assessed and supported.  You hired them to solve problems.  Listen to the solutions they provide.  When you don't listen, people leave or simply give their ideas to someone else (most likely GitHub).

In Conclusion.
Any organization should be able to meet its security needs.  It won't do so by hiring perfect infosec professionals, but by hiring an appropriate mix of creative thinkers and skilled crank-turners, preparing them for the work the organization needs done, and then providing rewards and career growth for accomplishing it.  Until we realize infosec staffing is a management problem and not a labor force one, our information security will continue to lag.

Epilogue.
I do want to point out that most of these are not specific to infosec.  Any seasoned manager will notice that these are general management concepts.  They apply to almost any skilled labor force.  That said, I think the mysticism behind computer security has caused us to go blind on infosec management.  Organizations believe infosec is a type of voodoo that only an appropriate witch doctor can wield.  Consequently, organizations forget everything they know about management and instead hand the entire operation (management and all) over to whoever their current chosen witch doctor is.  Instead they should treat it as any other skilled profession.  Good management will lead to good recruiting.  No voodoo necessary.

Sunday, February 10, 2013

Defensive Construct Exchange Standard


It has come time to provide a standard for the transmission of information between defensive tools.  I understand that this is not a unique endeavor; however, all attempts to this point have been limited by a single approach: defining a set of attributes a construct must or should have.

Why do we need something else?  First, this is not a new SIEM.  This is not Arcsight, it's not Splunk, it's not CIF or ELSA.  This is not an information structure.  It's not STIX, VERIS, or Mandiant IOCs.  If anything, it's similar to TAXII or IDMEF.  However, all of these approaches (and the many other existing approaches) have a primary flaw: they have structure.  The fundamental issue is that no matter what tool we use, it will collect different data.  We will have similar fields (URLs, IPs, etc.) tool to tool, but each tool provides a slightly different construct with slightly different fields.  This prevents all but the most general indexing tools (such as Splunk or ELSA) from importing data without an importer designed specifically for that data (such as an Arcsight connector).

Also, basically all tools (other than Paterva's Maltego) take a database approach to storing data.  While this still allows searching the data for specific patterns (such as an IP address), it is less efficient, as linkages are implied only by the existence of the pattern in a row with other data.  Passing data as records may hide linkages that could otherwise be uncovered.



How is this different?  In the Defensive Construct Exchange Standard (DCES), we see constructs as a small graph (in the graph theory sense).  All the fields in the construct are represented as nodes and the nodes are linked with edges (rather than with a predefined construct or record format as in most other tools).  See the example below for a visual representation.  Because of this, sending parties and tools may provide any set or subset of fields, whether it be ones defined in STIX, CybOX, IDMEF, Unified2, or one specific to their needs.  Receiving parties may easily discard or replace portions of the format that are unimportant to them while adding their own information to the construct.  I'll detail some of the uses of this approach below.


What's the standard?  Initially, the standard is as follows.  I recognize that this is a very early approach and that working with tool builders, vendors, and stakeholders will be necessary to fully realize this standard:
  1. All discrete pieces of information within a construct will be given an individual node (in the graph theory sense).  All nodes within the construct are a type of Attribute.  The actual attribute value will be stored as a tuple within the node's "Metadata" attribute.
  2. All nodes will be linked to a node containing a construct ID generated by the construct originator.  (It will be recommended that those linking constructs into their own graphs generate a local construct ID so as to avoid conflicts within their graph.)
  3. The construct ID will be a child node of all Attributes within the construct.
  4. The nodes and edges will be represented in JSON.  They will be transmitted in accordance with the JSON format outlined by the @gephi graph streaming project (Gephi graph streaming).  In practice, all constructs should be transmittable as a grouping of 'add node' and 'add edge' messages, with the recipient deciding how to actually handle the information.  (A sender-side sketch follows this list.)
  5. Attributes within the construct may have their own Attributes.  (E.g., a threat construct's location Attribute may have a 'confidence' Attribute.  Likewise, Attributes within the construct may have a child Attribute representing a classification such as "company proprietary", "PII", etc.)  (Note this is less a rule of the standard than an explicit flexibility.)
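To make rule 4 concrete, here is a minimal sender-side sketch that emits a construct as newline-delimited 'add node'/'add edge' messages.  The helper name and the flat attribute dictionary are illustrative assumptions, not part of the standard:

import json
import uuid

def emit_construct(fields):
    """Emit one DCES construct: an 'an' (add node) message per Attribute,
    plus a construct ID node and 'ae' (add edge) messages linking every
    Attribute node to it, per rules 1-4 above."""
    messages = []
    messages.append(json.dumps(
        {"an": {"A": {"label": "Construct", "Class": "Attribute",
                      "Metadata": {"ID": str(uuid.uuid4())}}}}))
    for i, (name, value) in enumerate(fields.items()):
        node_id = chr(ord("B") + i)  # B, C, D, ... as in the example below
        messages.append(json.dumps(
            {"an": {node_id: {"label": name, "Class": "Attribute",
                              "Metadata": {name: value}}}}))
        messages.append(json.dumps(
            {"ae": {str(i + 1): {"source": "A", "target": node_id,
                                 "directed": True}}}))
    return "\n".join(messages)

print(emit_construct({"URL": "http://example.com/login", "DOMAIN": "example.com"}))

Note that only the envelope ('an'/'ae') is fixed; the Attribute names inside "Metadata" can come from STIX, CybOX, IDMEF, Unified2, or anywhere else.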

What a great idea, Gabe!  Can we see an example?  The following construct is used as an example in the STIX documentation, where it represents a link within a phishing email.  Using our new format, it forms a small graph of Attribute nodes hanging off a construct ID node, which would be represented in JSON as follows (node messages first, then edge messages, so a recipient never sees an edge referencing a node it doesn't yet have):
{"an":{"A":{"label":"Construct From X","Class":"Attribute","Metadata":{"ID":<value>}}}}\r
{"ae":{"1":{"source":"A","target":"B","directed":true}}}
{"ae":{"2":{"source":"A","target":"C","directed":true}}}
{"ae":{"3":{"source":"A","target":"D","directed":true}}}
{"ae":{"4":{"source":"D","target":"C","directed":true}}}
{"ae":{"5":{"source":"C","target":"B","directed":true}}}
{"ae":{"6":{"source":"A","target":"E","directed":true}}}
{"ae":{"7":{"source":"A","target":"F","directed":true}}}
{"ae":{"8":{"source":"A","target":"G","directed":true}}}
{"ae":{"9":{"source":"G","target":"F","directed":true}}}
{"ae":{"10":{"source":"F","target":"E","directed":true}}}
{"ae":{"11":{"source":"A","target":"H","directed":true}}}
{"ae":{"12":{"source":"A","target":"I","directed":true}}}
{"ae":{"13":{"source":"A","target":"J","directed":true}}}
{"ae":{"14":{"source":"J","target":"I","directed":true}}}
{"ae":{"15":{"source":"I","target":"H","directed":true}}}
{"an":{"B":{"label":"URL","Class":"Attribute","Metadata":{"URL":<value>}}}}
{"an":{"C":{"label":"DOMAIN","Class":"Attribute","Metadata":{"DOMAIN":<value>}}}}
{"an":{"D":{"label":"WHOIS","Class":"Attribute","Metadata":{"WHOIS":<value>}}}}
{"an":{"E":{"label":"DNS Query","Class":"Attribute","Metadata":{"DNS Query":<value>}}}}
{"an":{"F":{"label":"DNS Record","Class":"Attribute","Metadata":{"DNS Record":<value>}}}}
{"an":{"G":{"label":"DNS Record Type","Class":"Attribute","Metadata":{"Record Type":<value>}}}}
{"an":{"H":{"label":"DNS Query","Class":"Attribute","Metadata":{"DNS Query":<value2>}}}}
{"an":{"I":{"label":"DNS Record","Class":"Attribute","Metadata":{"DNS Record":<value2>}}}}
{"an":{"J":{"label":"DNS Record Type","Class":"Attribute","Metadata":{"Record Type":<value2>}}}}

How will this approach be used?  In the most basic sense, two tools or groups exchanging information can simply use this to exchange standard formats (such as an IDMEF message).  Alternately, it could easily be databased by tools such as Splunk or ELSA.  However, neither of these approaches makes use of the strength of the format; they simply provide backwards compatibility with previous approaches and workflows.

A better use would be to maintain threat and event data in a graph.  Graphs can be stored in memory, in a standard RDB, in a graph database, or in any of a number of formats.  When a DCES construct arrives at the receiving tool, the tool will likely parse the information, drop information it is uninterested in, and add information (such as a local ID) that it finds useful.  From there the construct can be stored and linked to the rest of the graph (based on common information such as a shared IP, an alert ID, or any other information that is present in both the construct and the graph).  This linkage may be permanent or temporary to allow searching of the graph for other related information.  This is similar to adding information to a graph in Maltego.
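As a minimal receiver-side sketch of that workflow (the function and prefix names are illustrative; the graph library here is networkx), note that the handler only needs to understand the generic 'an'/'ae' envelope, never any particular construct format:

import json
import networkx as nx

def ingest(stream_lines, graph=None, local_prefix="rcvd"):
    """Parse newline-delimited DCES messages into a directed graph,
    re-namespacing node IDs with a local prefix so incoming construct
    IDs cannot collide with ones already in our graph."""
    graph = graph if graph is not None else nx.DiGraph()
    for line in stream_lines:
        msg = json.loads(line)
        if "an" in msg:  # add node: attach its attributes to a local node
            for node_id, attrs in msg["an"].items():
                graph.add_node(f"{local_prefix}:{node_id}", **attrs)
        elif "ae" in msg:  # add edge: link the namespaced endpoints
            for edge in msg["ae"].values():
                graph.add_edge(f"{local_prefix}:{edge['source']}",
                               f"{local_prefix}:{edge['target']}")
    return graph

# Usage: g = ingest(open("construct.jsonl")); then link g into the main
# graph wherever a value (an IP, a domain) appears on both sides.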

Tool-wise, the clear benefit is that once a single DCES handler has been defined, there is no need to adjust it for the different construct formats it might receive.  Therefore a tool or organization can share and receive a much larger and more diverse set of information.  From an operational standpoint, this allows more robust collection and definition of threat actor (and non-threat actor) information.  It also allows new approaches to determining the reputation of an event (i.e., is it a false positive, or is it linked to other suspicious behavior?).

We're on the cusp of mounting an effective information security defense and putting all the information our threats reveal about themselves to use.  To do that, though, we must not just be able to tune big SIEMs to accept a specific set of information, but must be able to aggregate all information and understand its associations.  This format is a step in that direction.

Antivirus is Dead. Long live Antivirus!


Anti-virus is an oft-maligned tool in infosec. It clearly mitigates some risks. It also clearly misses many risks. But the discussion often misses some important questions: What is AV? What role should it play in our infosec posture? How do we measure if it is doing its role?

Originally, anti-virus was simple signatures used to detect viruses.  Now, as Kurt Wismer points out in his blog (debating AV effectiveness with security experts), the term AV is used to describe multiple different types of technologies.  There are signatures, heuristics, and, more recently, pseudo-white-list systems.  AV detects anomalies.  AV detects malware.  AV prevents malware execution.  AV prevents non-executed malware from spreading.  AV maintains a baseline of the system.  AV may address viruses, worms, rootkits, botnet clients, phishing scams, adware, or any combination.  It may monitor the network, the file system, and/or specific services such as http or email.  AV has become a catch-all, describing almost any host-based technology.  Because it is ingrained in even non-infosec minds that AV is necessary, everything is now AV.

However, this ubiquity prevents AV from being effectively incorporated into an overarching infosec strategy.  How can we possibly hope to use AV to do a specific job when it could potentially be doing so many jobs?  I've blogged previously (The Chicken and the Pig - Three Security Genres) on the different roles in information security.  AV straddles those responsibilities.  AV could be an integral part of your detection.  It could be part of your defensive prevention.  It could be used to deal with phishing attacks, untargeted self-propagating malware, drive-by web attacks, or strict white-listing of all code run on hosts.  It could be used simply to weed out general attacks, or in an attempt to stay abreast of the most modern, evolving attacks.

AV is not a cure-all.  Just because the name covers multiple technologies does not mean that buying one product covers you in all of those ways.  So what is a company to do?  Specific AV capabilities must be identified and targeted at specific company needs:
  1. Understand what assets you have to protect and what threats you face. You can take no action on your infosec posture without knowing this.
  2. Understand what technology is incorporated in the AV you have deployed (or are considering deploying). Understand in real terms what threats the AV is designed to address. What is it expected to detect? What is it expected to block? More importantly, what will it not detect and block?
  3. What other technologies or means do you have to address the gaps in your AV?
  4. Are the AV's capabilities redundant with capabilities you already have?
The individual capabilities of the AV need to fit in concert with all other capabilities you have to form a holistic infosec posture.
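As a toy sketch of steps 2 through 4, enumerate what each deployed AV capability actually addresses and diff that against the threats identified in step 1.  The capability and threat names below are illustrative assumptions, not any product's real feature list:

av_capabilities = {
    "signature detection": {"untargeted commodity malware"},
    "heuristic detection": {"repacked commodity malware"},
    "whitelisting": {"unauthorized code execution"},
}
threats_we_face = {"untargeted commodity malware", "phishing payloads",
                   "unauthorized code execution"}

# Which identified threats does the AV plausibly address, and which need
# another control entirely?
covered = set().union(*av_capabilities.values()) & threats_we_face
gaps = threats_we_face - covered
print("Addressed by AV:", sorted(covered))
print("Needs another control:", sorted(gaps))

Even this trivial bookkeeping forces the questions above to be answered in concrete terms rather than with "we have AV."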

Measuring the effectiveness of AV is an area where research is needed. While there are ways to measure it, they are not particularly targeted at any specific capability of the AV, nor are they targeted at any specific threat actor (or asset to protect). As we start to understand and articulate the individual technologies and capabilities provided by AV, we can align them to threat actor types and provide effective measures which explain what type of return on investment can be expected from deploying a specific AV tool in a specific scenario to protect a specific asset from a specific threat. Also, these metrics should have their temporal properties measured as all attacks and defenses happen over time.
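As a strawman for what such a measure might look like, here is the classic return-on-security-investment (ROSI) calculation applied to one specific AV capability against one specific threat type.  Every number is an illustrative assumption, not measured data, and note that it ignores the temporal properties just mentioned:

def annual_loss_expectancy(incidents_per_year, loss_per_incident):
    # ALE: expected yearly loss from one threat type with no new control
    return incidents_per_year * loss_per_incident

def rosi(ale_before, mitigation_rate, annual_cost):
    """ROSI = (risk reduced - cost of control) / cost of control,
    scoped to a single capability/threat pairing."""
    risk_reduced = ale_before * mitigation_rate
    return (risk_reduced - annual_cost) / annual_cost

# E.g., signature-based detection against untargeted commodity malware:
ale = annual_loss_expectancy(incidents_per_year=12, loss_per_incident=10_000)
print(f"ROSI: {rosi(ale, mitigation_rate=0.6, annual_cost=30_000):.0%}")

The interesting research question is where mitigation_rate comes from: today we have no principled way to estimate it per capability, per threat actor, or per asset.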

It's time for AV to evolve, not necessarily technologically, but in our consciousness.  It can no longer be a generic catch-all for things that sit on the host.  Instead, the specific technologies and capabilities need to be promoted so that they can be applied as a clear part of an infosec posture rather than as a generic Band-Aid.