Tuesday, November 29, 2016

How to Handle Being Questioned

In my post, How to Converse Better in Infosec, I laid out some rules for better infosec discussions.  A key tenet of that blog post was asking questions.  But what if you are on the receiving end of that?

To the questioned:

When expressing a view, being questioned feels like a challenge.  For me, it feels as if the other person doesn't believe me and is trying to catch me in a lie.  Frankly, maybe I did embellish a bit.  Maybe I made a statement based on something I thought I remembered hearing but don't quite remember where I heard it.  Or maybe I feel the statement is so obvious, the only reason someone would question it is if the other person wanted to try and take me down a rung.

It's OK.  If, as speakers, we feel we are in the right, we can treat all questions as if the questioner doesn't know the answer and is seeking help learning, or there is some ambiguity in the questioner's mind and they are just trying to help clarify it.  (Remember, for topics we are knowledgeable on, it is hard to see the subject from the perspective of a less-informed person.)  Answer with the intent of being as genuinely helpful as possible.  Have fun!  This is our chance to help someone out!

And if we don't have the answer, we can be polite and say so.  "I honestly can't demonstrate it right now.  If you'll allow me the time, I'll collect the information for you and get back to you.  And, in the event I can't, I'll let you know."  Everyone is wrong at some point.  Big people can admit it and only weak people don't accept it from others.

And to the questioner:

Be aware that you may be unintentionally putting the questioned person in an emotionally defensive position.  They may have all the answers and be able to explain them clearly.  They may be right, but need time to collect the evidence to demonstrate it.  They may be flat out wrong but not prepared to say so.

Be a good participant in the social dynamic.  If the other person can't answer, is evasive, or is demonstrating some technique to avoid answering, give them an out.  Say, "It's OK, let's pick this up again later."  Or "If you find/remember the answer, please message it to me."  If the question is unimportant to you, you lose nothing by letting it go until the questioned person brings it up to you again.  And if it is truly relevant to you, you can look it up yourself.  If you feel you can't let it go, ask yourself if you're truly practicing the principle of charity.

In conclusion

Remember, a conversation involves multiple people. You're all in it together. Either everyone wins or everyone loses. So help everyone win.

Tuesday, November 22, 2016

What is most important in infosec?

"To crush your enemies -- See them driven before you, and to hear the lamentation of their women!" - Conan the Barbarian

Maybe not.

Vulnerabilities

Recently I asked if vulnerabilities were the most important aspect of infosec.  Most people said 'no', and the most common answer instead was risk.  Risk is likelihood and consequence (impact).  (Or here for a more infosec'y reference.)  And as FAIR points out, likelihood is threat and vulnerability.  (Incidentally, this is a good time to point out that when we say 'vulnerability', we aren't always saying the same thing.)  While in reality, as @SpireSec points out, threat is probably more important, I suspect most orgs make it a constant 'TRUE', in which case 'likelihood' simply becomes 'vulnerability' in disguise.  I doubt many appreciate the economic relationship between vulnerability and threat.  As many people pointed out, the impact of the risk is also important.  Yet as with 'threat', I suspect it is rarely factored into risk in more than a subjective manner.  There were other aspects of risk mentioned, such as vulnerable configurations, asset management, and user vulnerability.  And there were other opinions, such as communication, education, and law.
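
To make the decomposition concrete, here is a minimal sketch in R.  The numbers are purely illustrative assumptions, not from any real analysis.

# FAIR-style decomposition: risk = likelihood x impact,
# where likelihood = threat x vulnerability.
# All numbers below are made up for illustration.
threat        <- 0.6   # probability a threat actor acts against us this year
vulnerability <- 0.3   # probability that action succeeds if attempted
impact        <- 5e5   # expected loss in dollars on success

risk <- (threat * vulnerability) * impact   # 90,000

# If an org treats 'threat' as a constant TRUE (i.e., 1),
# 'likelihood' collapses into 'vulnerability' in disguise:
risk_naive <- (1 * vulnerability) * impact  # 150,000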

Risk

The first big take-away is that, while we agree conceptually that risk is complex and that all its parts are important, practically we reduce 'risk' down to 'vulnerability' by not dynamically managing 'threat' or 'impact'.  While most organizations may say they're managing risk, very likely they're really just managing vulnerabilities.  At best, when we say 'managing', we probably mean 'patching'.  At worst, it's buying and blindly trusting a tool of some kind.  Because, without understanding how those vulnerabilities fit into the greater attack-surface of our organization, all we can do is patch and buy.  Which leads to the second take-away...

Attack Surface

The second take-away: "I think we need to change the discussion from vulns to attack surface."  Without understanding its attack surface, an organization can never move beyond swatting flies.  If an organization were a city trying to block attackers from coming in, what we do today is like blocking one lane of every road in.  Sure, you shut down a lot of little roads, but the interstates still have three lanes open.  And what about the airport, buses, and beaches?

Our Challenges

Unfortunately, if we can't move from vulns to full risk, our chances of moving beyond simple risk to attack surface are slim.  At least in FAIR, we have a methodology to manage based on full risk, if not attack surface.  However, unlike vulnerability data, threat and impact data is not easy to collect.  It's not easy to combine and clean.  And it's not easy to analyze and act upon.  (All the things vulnerability data is.)  We don't even have national strategic initiatives for threat and impact, let alone attack surface, the way we do for vulnerabilities (for example, bug bounties and I Am The Cavalry).

In Conclusion

Yet we continue to spend our money and patch vulnerabilities with little understanding of the risk they address, let alone how that risk fits into our overall attack surface.  But for those willing to put in the work, the tools do exist.  And eventually we will make assessing attack surface as easy as a vulnerability assessment.  Until then, though, we will continue to waste our infosec resources, wandering blindly in the dark.

P.S.

The third and final take-away is that the whole discussion completely ignores operations (the DFIR type, as opposed to the installing-patches type).  In reality, it may be a strategic decision, but the trade-offs between risk-based and operations-based security are better left for another day's blog.


Tuesday, October 18, 2016

Why Phishing Works


I've been asked many times why old attacks like phishing or the use of stolen credentials still work.  It's a good, simple question.  We are fully aware of these types of attacks and we have good ways of solving them.  Unfortunately, there's just as simple an answer:
"The reason attackers use the same methods of attack is we assume they won't work."
We conduct phishing training.  We install mail filters.  And when something gets through, we treat it as an anomaly.  A trouble ticket.  Yet, per the 2016 DBIR, about 12% of recipients clicked the attachment or link in a phishing email.  Imagine if that happened in aviation, say, if 12% of the bolts in an airplane failed on every flight.  They wouldn't simply take the plane in for repairs when bolts failed.  They'd build the plane to fly even when bolts fail.

This leads to a fundamental tenet of information security:

"Your security strategy CANNOT assume perfection.  Not in people. Not in processes. Not in tools.  Not in defended systems."

When you assume anything will work perfectly and treat failures as a trouble ticket, you cede an advantage to the attacker.  They are well aware that if they fire off 100 phishing emails, roughly a dozen will hit the mark.
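
As a quick back-of-the-envelope check, treating the DBIR click rate as an independent per-recipient probability (a simplification, but a useful one), here's what the attacker's odds look like in R:

# Probability at least one recipient clicks, as a function of emails sent.
# Assumes each recipient clicks independently at the 2016 DBIR rate of ~12%.
click_rate <- 0.12
emails <- c(1, 10, 30, 100)
p_at_least_one_click <- 1 - (1 - click_rate)^emails
round(p_at_least_one_click, 2)   # 0.12 0.72 0.98 1.00

Ten emails already give the attacker nearly three-to-one odds of a foothold.  That is why treating each success as an anomaly is a losing strategy.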


What To Do

Do what engineers have been doing for generations: engineer resilience and graceful degradation into the system.  Assume phishing, credential theft, malware, and other common attacks WILL succeed and plan accordingly.  Build around an operational methodology.  Work under the assumption that phishing has succeeded in your organization, that credentials have been stolen, that malware is present, and that your job is to find the attacker before they find what they're looking for.

Attackers are just some other guy or gal, sitting in their version of a cube, somewhere else in the world.  They want their attacks to happen quickly and with as little additional effort as possible.  They take advantage of the fact that we treat their initial action succeeding as an anomaly.  If we assume that initial action will be partially successful and force them to exert additional effort and actively work to remain undetected, we decrease their efficiency and shift the economics of infosec in our favor.

Thursday, September 22, 2016

How to Converse Better in Infosec

In a previous blog, I spoke a bit about what to do when the data doesn't seem to agree with what we think.  But what if it's not data you disagree with, but another person?

We've grown up in a world where the only goal in a conversation is to simply be right.  It is all around us and, unfortunately, drives how we converse with other professionals.  Whether it's a twitter thread or questions at the end of a conference talk, we tend to tear down others to build ourselves up.  The mantra "Defense has to be perfect, offense only has to succeed once" pushes us to expect perfection in our technical dialog even though no one and nothing is perfect.

Let's change that.  The next time you are on twitter, at a conference, or engaging in discussion with colleagues, try and follow the Principle of Charity.  I highly recommend you read the link, but the basic premise is:
Accept what the other says if it could be true.
Now, obviously it's more complex than that. It's more like "dato non concesso", which means "given, not conceded". You are accepting their statements where logic otherwise does not prevent you from doing so, not because you believe they are true, but simply because you believe they were given in good faith. It also means interpreting statements in the way most likely to be true.
If the other says something that sounds conditionally untrue, ask questions that would help clarify that it is true.
It doesn't mean you have to accept statements that can't be true. It doesn't mean you can't confirm your interpretation. And it doesn't mean you can't ask clarifying questions.  If the other's statement could be conditionally true, ask questions that help clarify that the conditions are those that make the statement true.
Do not ask questions or make statements to try and prove the other's assertion false.
It does, however, mean not nitpicking.  It does mean not taking statements out of context or requiring all edge cases be true.  If the other's position truly is false, you will simply fail at clarifying it as true.

And if we are doing this, we should do one more thing:
Expect others to follow the same principles.
We should not, as a community, accept members not following this principle.  Conversations contradictory to the Principle of Charity bring our community down and inhibit growth.  However, we will only root them out if we take a stand and speak out against them.  Whether at conferences, in blogs, in podcasts, on twitter, or anywhere else, it improves us none to tear down rather than build up.  I challenge you to adopt the Principle of Charity in your conversations, starting today, and make it a goal for the entire year!

Update: Also check out the follow-on blog: How to Handle Being Questioned!

Tuesday, August 30, 2016

Do You Trust Your Machine or Your Mind?

Data science is the new buzzword.  The promise of machine learning is to be able to predict anything and everything.  Yet it seems like the more data we have, the harder the truth is to find.  We hear about some data that doesn't sound right to us.  We ask questions and find out that there are assumptions and biases all over the data.  Even if the data is true, once it is analyzed, it becomes contaminated in some way.  With such things, how can we possibly trust it?  Instead, as Adam Savage put it, the best course of action seems to be: "I reject your reality and substitute my own."
https://twitter.com/n1suzie/status/490796035376427008

The reality of your mind is: "Your mind is crazy and tells you lies."  In assembling data into a complete picture, your brain has to do the same thing a data analysis process does.  (An analogy would be assembling the building blocks pictured below into a single creation like a castle or whale.)  It can do it, but the reality is it takes a lot of skill and a lot of thought.

Pieces for a mind to assemble into a single picture.


The downsides to doing it in your brain are:

  • There is no documentation of how the picture was formed from the data
  • There is no record of what data your mind included and excluded as it assembled its picture
  • It is much harder to question the process your mind used in creating its picture
  • It is very hard to maintain consistency, so that the picture your mind creates today is the one it will create a year from now given the same data
Your mind is a black box.  As Andy Ellis put it, "Systems are becoming too complex for risk analysis to be performed by System 1" (gut instinct).  He termed it "The Approaching Complexity Apocalypse".

This doesn't mean data doesn't have its faults.  No data is the knowledge it represents.  All data requires analysis to produce the picture from the data.  All data has underlying assumptions and biases.  You should expect your data sources to:

  • Publish the methodologies they use to produce the pictures from the data
  • Publish the provenance of the data
  • Publish the known assumptions and biases, both of the data and of the methodology
Also, data science is not quite classic science.  Classically, science follows the scientific method: a hypothesis is first established, and then tests are created to collect data to disprove that hypothesis.  If the tests fail to disprove it, the hypothesis is accepted.  In data science, we normally start with the data and use it to identify hypotheses that appear to be true.  XKCD highlighted the issue with this nicely:

https://xkcd.com/882/
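
To see the jelly-bean effect for yourself, here is a minimal simulation in R (the sample sizes, seed, and choice of test are arbitrary):

# Test 20 'colors' against pure noise at p < 0.05.
# Both samples come from the same distribution, so every
# 'significant' result is a false positive.  On average, about
# one of the twenty tests will come up significant anyway.
set.seed(882)
p_values <- replicate(20, t.test(rnorm(30), rnorm(30))$p.value)
sum(p_values < 0.05)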

There will always be unknown assumptions and biases in data, but if you use them as reasons to ignore the data, you put yourself at a disadvantage.  If you conduct 100 studies, none of which are individually statistically significant, but all predicting the same thing, you have strong evidence that the thing is true.
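
One way to make that intuition precise is Fisher's method for combining p-values.  Here's a sketch in R; the identical p-values are purely illustrative.

# Fisher's method: combine many individually non-significant results
p <- rep(0.2, 100)                     # 100 studies, each p = 0.2 on its own
stat <- -2 * sum(log(p))               # chi-squared with 2 * length(p) df
pchisq(stat, df = 2 * length(p), lower.tail = FALSE)  # far below 0.05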

On the other hand, this does not mean you should accept all data-based conclusions that come your way.  As multiple speakers in the bSides Las Vegas Ground Truth track suggested, machines and minds should work together.  The mind can help identify potential biases and assumptions, as well as potential improvements in the machine's methodology.  The machine can produce reproducible results to inform the mind's decisions.

The worst thing you can do is identify biases, assumptions, and flaws in the machine and then use them to justify the validity of your mind.  If you were to do so, you would need to document the methodology of your mind and subject it to the same scrutiny for biases, assumptions, and flaws.  At which point, the methodology would then be in the machine.

And if you can't make your mind and the machine agree, my preference is to trust whichever system is most thoroughly documented, investigated, and validated.  And that tends to be the machine.

Tuesday, May 10, 2016

The role of Pen Testing / Vuln Hunting in Information Security

Intro

At a security conference, ask someone in attendance what they do.  More than likely they are a consultant doing penetration testing, vulnerability hunting, or both.  Penetration testing and vulnerability hunting are mainstays of security testing, many times required by laws, regulations, or contracts.  They exist ubiquitously in information security.

But we don't have a good model for how they fit into improving defense.  The prevailing knowledge is that disclosing vulnerabilities leads to their mitigation, which leads to more security.  However, there is a counter-argument that disclosing vulnerabilities helps the attackers more than the defenders.  Can we build a model that takes both views into account?  Let's see.

So what do you 'do' here?

So what do penetration testers and vulnerability hunters actually 'do'?  If we think of information security as a game (a very high-stakes game), we could say that penetration testers and vulnerability hunters reveal paths on the game board that attackers can take to reach their objectives.  That raises the question:

How does this benefit the defenders?

Let's take four scenarios (summarized in the sketch after this list):
  1. No-one knows about the path:  In this case no-one benefits, no-one loses, because no-one knows. No change.
  2. Only the defender knows about the path: In this case, the defender either benefits none or actually loses as they expend resources to mitigate the path. Defender Cost.
  3. Both defender and attacker know about the path: In this case, the attacker either benefits some or none depending on whether they successfully exploit the path.  The defender probably loses some (mitigates the path) or loses a lot (is exploited) though there is the off chance they lose none due to the attacker's failed exploitation. Attacker potential Profit. Defender potential for more Cost.
  4. Only the attacker knows about the path: Here the attacker's chance to benefit goes up significantly as the defender is unaware of the path.  The defender, on the other hand, doesn't even have the chance to mitigate the path and can only lose.  And after exploitation, they return to scenario 3 and still lose as they mitigate the path. Attacker most Profit. Defender most Cost.
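
Here are the four scenarios roughly tabulated in R.  The outcome labels are informal summaries of the list above, not a formal game-theoretic model.

# Who knows about the path, and what each side stands to gain or lose
scenarios <- data.frame(
  defender_knows = c(FALSE, TRUE,  TRUE,  FALSE),
  attacker_knows = c(FALSE, FALSE, TRUE,  TRUE),
  attacker       = c("no change", "no change", "potential profit", "most profit"),
  defender       = c("no change", "cost", "potential for more cost", "most cost")
)
scenarios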

Conclusion

Based on the model above, penetration testers and vulnerability hunters can be most helpful by using their knowledge of paths to detect when attackers know them, and by disclosing those paths to defenders when the attackers already know of them.  This helps move from scenario 4 to scenario 3.  It's not ideal, but it's better than the status quo.

If only it were so simple

This model is admittedly naive.  It's a starting point, not an end-all-be-all.  Some things to consider:
  • There is a time lag from knowledge of a path to its weaponization or mitigation.  The model should take that into account.
  • Attackers and defenders are not homogeneous.  This model doesn't consider what some attackers/defenders know and what others do not.  Nor does it model the spread of that knowledge through the population.
  • This model relies on the defender's knowledge of the attacker's knowledge, something that will always be imperfect.
  • Paths are made up of individual pieces.  This model doesn't account for the rearranging of pieces of the path, combined with other information in the attacker/defender's knowledge, to form new paths.
This model is not perfect, but hopefully it's a start in how to consider the role of penetration testing/vulnerability hunting in information security.

Alexi Hawk's Impossible Data Set

As the author of the only unsolved puzzle in the DBIR Cover Challenge this year, I figured I should provide a bit of a write-up.  I apologize to all of the cover challenge participants, as it's quite literally 10 lines of code to solve, only two of which are actually functional (vs loading packages and naming stuff).

The Idea

First, where the puzzle came from.  I wanted to have a data-y puzzle in the challenge, but I also wanted it to be challenging for data science-y people.  To that end, I suggested, and the team approved, a puzzle based on a dataset, but with a twist.  The solution would not be from analyzing the data statistically.  Even then, our estimate going in was that it was the hardest puzzle of the bunch and likely wouldn't be solved.

The Setup

To create the puzzle, I used GIMP to create a raster image with the key text.  I then opened the image in python using the PIL package.  It lets you parse through each of the individual pixels and determine their RGB values.  I took all the pixels with RGB less than 10 (i.e. black) and saved them as a csv of (x, y) coordinates.

From there I transferred it to R.  Since the points are pixels (i.e. closer together than the size of a circle drawn at each location), I filtered down to 10% of the points.  Now, the first thing a good data scientist does is look at the data, so we couldn't have it be that obvious.  Instead, I added a third column with random points in the range of the first two columns.  Then I swapped the first and third columns.  If creating a scatter plot of the original data would have been looking at the text straight on, doing so now (on the first two columns) is like looking at the vertical location of each pixel plotted against a completely random horizontal location.

As we discussed the puzzle, someone else suggested doing something with polar coordinates.  So I did just that: I converted the cartesian coordinates into spherical form.  (Hopefully all the hints about spheres and looking at the ranges now make sense: two columns, the angles in radians, range from about 0 to 1.6, and one, the vector length, ranges from 0 to about 500.)
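
Putting that together, the setup looked roughly like this.  This is a sketch, not the exact original code; the file names, sampling details, and column handling are approximations.

# Rough reconstruction of the obfuscation steps (illustrative only)
library(pracma)
pts <- read.csv("pixels.csv")                       # (x, y) of the black pixels
pts <- pts[sample(nrow(pts), nrow(pts) %/% 10), ]   # keep roughly 10% of the points
pts$z <- runif(nrow(pts), 0, max(pts))              # random third column in range
pts <- pts[, c("z", "y", "x")]                      # swap the first and third columns
sph <- t(apply(as.matrix(pts), MARGIN=1, cart2sph)) # each point to (theta, phi, r)
write.csv(as.data.frame(sph), "alexi.csv", row.names=FALSE)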

The Payout

So, the solution to the dataset (in R) is as follows:
# Read in the file
alexi <- read.csv("http://cybercdc.global/static/alexi.csv")

# Convert each from spherical coordinates to cartesian
library(pracma)
back <- apply(alexi, MARGIN=1, sph2cart) 
(At this point, if you look at the data, you'll notice two integer rows and one numeric.  That wasn't intended and gives away the correct rows a bit.)
# The output of above is 1 point per column.  Change it to rows using the 't' (transpose) command. 
back <- t(back)
# Convert it back to a dataframe to make it easily plottable with ggplot
back <- as.data.frame(back)

# Give it column names to make it easy to refer to
names(back) <- c("V1", "V2", "V3")
# Scatter plot the correct two dimensions to view the data
library(ggplot2)
ggplot(back) + aes(x=V3, y=V2) + geom_point()


You may have to squish the vertical dimension a bit to read the text, but you'll see it.

Epilogue

Incidentally, I actually tested the spherical points to make sure that they wouldn't reveal the clue when visualized directly.  The sampling had to be adjusted so that visualizing columns V1 and V3 didn't reveal the activation text.

Sunday, April 10, 2016

Hybrid Cybers

At the Women in Cyber Security Conference, someone presented a slide titled "The Rise of the Cyber-Hybrid".  The concept was that to advance and develop in cyber security, people need multiple disparate skills (policy, law, regulatory, interpersonal skills, leadership, etc).  While I don't disagree that having these skills makes someone more employable, I do disagree that they are a requirement.


Hiring a Cyber Hybrid

Instead, this is really more of a list of skills needed on a team in general.  The higher up the org chart you go, the more of the skills are needed in aggregate.  As a hiring manager, it's easy to request that full list of skills in a single employee, for multiple reasons:

  • It's easier to get approval to hire one unicorn than a technical person plus a social sciences major.
  • You don't have to compromise.  The person has everything.
  • If one person presents all the skills needed for the team, there's much less risk and work required to build the team.
However, while hiring the unicorn sounds perfect in theory, the practicality is far from it.
  • People with all the skills, particularly disparate skills, are hard to find.  (And, let's face it, no-one is perfect.)
  • When you do find them, they are both expensive and demanding.
  • You may be hiring a hybrid who has multiple skills in the hope that, since they are pretty good at everything, they'll figure out the role you need as well, only to find out that is not the case.
  • If you do get them, they are hard to keep.
  • Even if kept, unless all their skills are continuously utilized, they will lose some subset of them, which is a loss for both the employee and your organization.
Plus, they have an effect on the rest of the team.
  • An entire team of unicorns is under-utilized.  If all are capable in all areas, yet you need 10 hours of technical work for each 1 hour of presenting, you are wasting a significant amount of the presenting skill on your team.
  • Having one overachiever encourages the underachievers with overlapping skills to underperform.
  • Having an overachiever that does everything can hurt morale for the rest of the team who have to work with someone who can do what they can do, plus all the things they can't.  It strongly encourages imposter syndrome.

The Alternative to Hiring a Cyber Hybrid

Instead of trying to hire a single baseball player who can play every position, take the Moneyball approach.  (Not that Moneyball was anything more than good, common-sense team building.)  Spread the skills you need across your team.

First, you need to understand the skills you need.  Do you really need someone versed in international law?  (Maybe. Maybe not.)  Do you need a skilled communicator?  (Almost absolutely yes.)  Build a matrix of the people you have and the skills you need, and note who can provide what.  Once you know what skills you are lacking or weak in, plan to hire to obtain those skills.  That might mean adding an English major to your forensics team or building a strong relationship with an editor.  It may mean hiring a marketing person with great interpersonal skills onto your pen test team to be the face to the customer.
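
As a toy sketch of such a matrix in R (the names and skill columns are entirely hypothetical):

# People x skills: TRUE means the person can provide that skill
skills <- data.frame(
  row.names  = c("alice", "bob", "carol"),
  forensics  = c(TRUE,  FALSE, TRUE),
  writing    = c(FALSE, TRUE,  TRUE),
  presenting = c(FALSE, TRUE,  FALSE),
  intl_law   = c(FALSE, FALSE, FALSE)
)
colSums(skills)   # skills with a count of 0 are your hiring (or training) gaps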

Many times people have the team they have and simply can't add head-count to get the skills they need.  The reality is you probably have what you need in your team; it just takes some skill to tease it out.
  1. First and foremost, care for the physical and emotional needs of your team.  I can't stress this enough.  If you don't, nothing else you do matters.  Your morale will be low and your team will underperform.  Everyone leaves for a reason and your team will leave you if their needs are not met.
  2. Understand the strengths of your team members and maximize them.  Find the guy who gets along well and use him for presenting.  Find the creative guy and have him suggest solutions to problems.  Find the thorough person and set them to checking technical details.  This is certainly harder than it sounds.  It takes a lot of talking with and observing your team and a lot of trial and error.  However, success is clear.  When the person's productivity shoots up, you know you've hit the nail on the head.
  3. Compensate for team members' weaknesses.  The creative person will probably have bad ideas and miss details.  Get the pessimist's opinion on the ideas and let the detailed person check them.  The social person may not be highly technical.  Let them skip out on the high tech work.  The detail person may have trouble coming up with diverse ideas.  Don't put pressure on them to come up with solutions to hard, complex problems that the creative person can solve.
  4. Continue to grow your team.  For the members who want to improve themselves, encourage them to pursue management training, interpersonal skill training (and make sure to save the budget to send them), technical training, or however else they wish to expand.  Then give them responsibilities that allow them to practice what they've learned.  Many times the skills people are interested in gaining will become just as important as the skills they had when hired.

Benefits

There are a lot of benefits to this approach (on top of avoiding the downsides of hiring the hybrid listed above).
  • Your team is more likely to succeed, and succeed as a team.
  • Each team member's skills are utilized.  No-one's skills (which you pay for through differences in their negotiated salary) are going unused.
  • Each individual team member is less expensive because you don't have to pay for skills you aren't using.  This also frees up money for additional training, which leads to the next bullet.
  • Morale is higher.  Each team member is contributing in a substantive way.  Hopefully each member is happier due to better fit between their role and skills.  And hopefully you have more flexibility to grow your team members.
And, as you go, you are making better-rounded people who are prepared to take the few roles where one person does need to have it all.  (These roles tend to be in small companies with one-person teams or in management positions where the manager must have the social skills to deal up the chain and the technical skills to deal down it.)

For Employees

This isn't a blank check to be a one-trick pony.  You may very well be the best person in the country at reverse engineering malware, and that ability may get you the job you want.  But the gal with a reasonable amount of technical experience plus many soft skills and skills from other disciplines is probably the more desirable employee in most roles.  Instead,
  • Make your boss's life easier.  The more of the skills in the WiCyS conference slide you possess, the more options they have in balancing their team.
  • Also, the more skills you have, the more valuable you are to your current and future employers.  That means more compensation, more options, and more flexibility.

Conclusion

In the end, no-one's perfect.  You can hunt the cyber hybrid, but you're probably better off hiring imperfect people and building a team greater than any one person with the same skills.  And, as an employee, always work to build those additional skills to help your team.

Thursday, January 7, 2016

Of Course the Network Diagrams are Bad!

As security professionals, we know network diagrams are critical to providing security.  It's the top control in the SANS CIS CSC top 20 controls.  Yet almost every organization we go to has network diagrams that are convoluted, out-of-date, missing things, or just plain wrong.  Our pen tests produce, within the span of the engagement, better network diagrams than what the organization has.  Why is that?

  1. Laying out networks is HARD!  Networks are really just graphs (in the mathematical sense), and graph layout consumes a lot of trees in the pursuit of academic publication.  Honestly, I have been laying out graphs for years, up to hundreds of thousands of nodes (where visualization tools tend to top out), and any graph with more than a dozen or two nodes is no longer self-explanatory (there's a small sketch of this below).  Just look at Figure 15 in the PHIDBR.  It's pretty, but let's be honest; you can't really draw any conclusions simply by looking at the graph.
  2. In reality, you'd need an artist: someone skilled in data visualization, with good artistic prowess, to build useful network diagrams.  Yet how's that going to work?  Do you hire an artist who knows your network?  Do you train your network guys in visual arts?  Do you hire a full-time position simply to draw beautiful network diagrams?
  3. And the network is always changing.  Those diagrams are likely to be obsolete as soon as they are completed.  Does the artist maintain them?  Do you hold back network changes for updates to the network diagram?
  4. And even if you do get a good set of network diagrams that your artist-in-residence keeps up-to-date, what level of detail are you creating them at? Are you creating block diagrams that generally show the top level of the system in the abstract?  Are you creating wiring diagrams for the racks down to the power and ground cables?  Are you creating every potential view in the DODAF?  The reality is when people say "Show me the network diagram." what they mean is "Show me the network diagram showing me exactly the things I'm interested in at the level of detail that I think is correct but that I have never communicated."
  5. And none of this even begins to touch on the issue of determining ground truth in how your network is connected.  It's hard when you know all the devices and can dump all the configurations.  It's damn near impossible in a practical network where people add things without telling anyone and use equipment that is not centrally managed.
The reality is it'd be surprising to find someone keeping great network diagrams, simply because of the amount of effort involved.  There are automated tools to help, but if a human can't easily make the network visually understandable, the software is not going to do better.  Also, the software suffers from the same problems related to level of detail, staying up-to-date, and accurately discovering the true network that a person doing the job manually would.
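
To feel the layout problem from point 1 yourself, here is a minimal sketch, assuming the igraph R package (the network size and density are arbitrary):

# Lay out a modest random network and see how quickly it stops
# being self-explanatory.  Real networks are bigger and messier.
library(igraph)
set.seed(1)
g <- sample_gnp(n = 50, p = 0.08)            # 50 nodes, ~8% edge probability
plot(g, vertex.size = 5, vertex.label = NA)  # already spaghetti at 50 nodes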

So are there solutions?  I don't know.  Probably not.  I think a real-time, interactive visualization system, rather than static pieces of paper, is better.  A system designed with a certain amount of artificial intelligence to learn and explore the network would probably help.  However, we simply may need to accept that we won't know our network fully.*

Knowing this, sympathize with organizations doing their best, and help plan a defense that accepts this reality.


* I saw this on twitter but can't find it again.  If anyone has proper attribution I'd be happy to add it.