It’s like 50/50 that I found this via work rather than via tumblr, so apologies if I’m failing to credit someone I follow for introducing me to this.
Some humorous bits:
So I brought up hacking contests and I asked this young man what the word Cyber meant. He told me that cyber was a word used exclusively by people in government to let everyone know that they didn’t understand how computers worked. I think maybe he was on to something. I think this definition is still universally accepted in the hacker community.
I was informed that a network monitoring approach was so effective that it continually discovered zero-day malware. To this day I don’t know what that means.
More seriously, the talk is making a case that the security community does a terrible job of communicating its ideas to policy folks (and additionally, that policy folks are not idiots to be ignored).
One point he makes is that the way the security community talks about exploits (which has fed into some hyperbolic warfare metaphors) encourages policy folks to see them as basically equivalent to weapons and to imagine that banning them will solve the problem. Worse, security folks respond by trying to explain technical details, rather than reaching for better metaphors or other persuasive techniques.
Around 2004 the global internet was struck by a worm that contained a remote exploit (a Von Neumann UDP probe, if you will) that came to be known as Sasser because of the name of the process it knocked over. I worked for a pentesting team that developed automation the day my company was struck by Sasser. My company was a global company with a Class A network and tens of thousands of globally distributed computers. The worm crippled us in hours. My team and I hatched a plan to scan our entire Class A using our toolset, automatically throw the same exploit as the worm, get a shell on computers in our company, and install the patch. We got authorization to execute this plan, executed it, patched thousands of computers all around the world, ended the worm outbreak on our network without a single administrator-installed patch, put a massive organization back in business, and got home in time for dinner. To do this, we built a framework which handled and used remote exploits and launched them against thousands of computers all over the globe. This framework engaged in global automated exploitation for the purpose of authorized access. I literally got to hack the planet, a fact I only realized years later.
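Structurally, the workflow he describes is just a scan-exploit-patch pipeline run across an address space. A minimal sketch of that control flow (every function and the simulated host inventory here are hypothetical stand-ins, not anything from the talk, and nothing touches a real network):

```python
# Hypothetical sketch of the scan -> exploit -> patch sweep described
# in the story. The hosts and the vulnerability probe are simulated.

def is_vulnerable(host):
    # Stand-in for probing the host for the unpatched service.
    return host["patched"] is False

def get_shell_and_patch(host):
    # Stand-in for throwing the exploit, getting a shell, and
    # installing the vendor patch over that shell.
    host["patched"] = True
    return True

def sweep(hosts):
    """Walk the address space; patch every host found vulnerable."""
    patched = 0
    for host in hosts:
        if is_vulnerable(host) and get_shell_and_patch(host):
            patched += 1
    return patched

# Simulated slice of the fleet: three hosts already patched, seven not.
fleet = [{"addr": f"10.0.0.{i}", "patched": i < 3} for i in range(10)]
print(sweep(fleet))  # -> 7
```

The point of the sketch is how dual-use the shape is: swap the body of `get_shell_and_patch` and the exact same loop is an attack tool or a remediation tool, which is his argument about language in a nutshell.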
So, that’s the story. I would tell this story, and the folks on the other side of the table would pause a bit, then they would say, “so you attacked your own network?”
And I would say, “no, we defended our network with a tool called an automated exploitation framework”.
And then they would look again at the proposed ban with new eyes.
He also brings up the idea that an exploit, at its heart, is really a “proof of vulnerability” (an idea he credits to Meredith Patterson):
Now I could explain this language choice easily: an exploit, free of context, is a word that will make people think it’s an attack; banning exploits and their containers is simply banning the act of attacking. A proof, free of context, is scientific evidence, and banning proofs is an attack on science. Of course, I would never explain this, because if you’re explaining, you’re losing. Let’s stop losing. No one should attack science.
If we describe our complex, dual-use landscape of information/code with single-use language, we will summon mythical monsters and see them used against us. The solution is clear. Rewrite the cyberwarfare secret decoder ring, using absolutely no jargon. There are no attack or defense tools.
Describe powerful things with simple, precise language that expresses their core truth. No one in power should ever be put in a position where they act against something whose true nature was hidden from them. This is our communications problem, our failure, and we can solve it.
My entire brand in my local hacker group is “Talking to non-techs is a valuable skill so don’t treat people like shit because they don’t spend their weekends coding, asshole.”
Which is also something that I have to explain to my boss about his customers pretty regularly.
It’s super fucking hard for people to take you seriously about security when you don’t explain risk in terms they can understand, and it’s super fucking hard for you to effectively explain things when you think doing so is beneath you.
Yeah, the way I see it is, if “explaining things to customers” is beneath you, I’m just going to assume “getting paid by customers” is also beneath you.