July 25, 2007
Jason had a great post today about what I would call “second-order networking” – asking someone in your network to connect you to someone that they themselves don’t know. This is the equivalent of making a 3rd-degree connection on LinkedIn, because the request goes from you to someone you know, and ultimately (through another contact) to a person that they don’t know.
From Jason’s post:
“In each case I was asking for my network contacts to hook me up. Here’s the interesting thing: in every case they did not know the person that I needed to talk to.
This presents an interesting decision….
But here is what I would do. I would take the opportunity to grow my own network and try and make the connection. Why? It’s easier to go to someone that you don’t know with a purpose…
It’s a great point – this type of networking not only allows you to expand your own network, but also helps your network expand theirs.
I loved the concept so much that I think I’m going to send a few emails… I could use an introduction or two.
July 20, 2007
That’s the punchline to an old physics joke about horse racing – it reflects the often unrealistic assumptions we make when creating academic models of real-world performance.
I got to thinking about this after Ken emailed me about his blog post, written after reading my previous post on ROI. I think his post definitely ends the ROI debate with some very smart (and diplomatic) comments from Larry Gordon.
More importantly, I spent some time with the Gordon-Loeb model for cyber-security investment after reading Ken’s post, and it reminded me of the aforementioned joke. While it’s an interesting paper from the perspective of provoking thought, I think there’s a lot more to security investment than the model suggests. For example:
“The parameter λ represents the monetary loss to the firm caused by a breach of security of the information set…. Even though we initially assume that this loss is a fixed value, we will investigate how changes in the value of the loss affect the firm’s security investment decision.”
This is where I get frustrated by a lot of infosec economic models (and why I was so simplistic in my own post) – we miss the point that information security does not only prevent loss, but (in most cases) has the side benefit of reducing operating risk. Think about it for a second… a vulnerability in a system is as much an issue of product quality as it is of security. (This can be discovered by a thought experiment: imagine a perfectly designed and perfectly implemented product with no defects – would vulnerabilities exist?)
As such, remediating the risk presented by security issues also reduces operating risk, leading to higher up-time, more environmental awareness, and better monitoring of system state. These aren’t just loss-prevention activities – they actually lead to increased efficiency and better effectiveness of technology.
I’ve yet to see a model take this into account – yet I often see CISOs make decisions on those criteria (usually intuitively, without a conscious understanding of why they’re doing it).
Which is why I hate the whole argument from formal economic terms. The fundamental question is always a simple one:
How much does my business increase its net profit because I have purchased this technology/implemented this process/bought more toilet paper/hired this person/etc.?
Ask that question, and the debate about whether you call it ROI, IRR, Rate of Return, Cost Reduction, or any number of other things goes away.
And you’re left with the only thing that really matters – a real horse that wins the race in the real world, not a spherical horse in a vacuum.
July 18, 2007
So, over at Anton’s blog, there’s a good roundup of the discussion of ROI in security. And Anton (among others) comes to the conclusion (with the help of his Economic Ph.D. wife) that there’s no way to have ROI from a product in security.
And I have to say, he’s right, because what he’s talking about isn’t ROI in economic terms.
And he’s wrong. Because the question of whether bringing in a product enables a business to make more money (whether by top-line growth or bottom-line cost reduction) is what’s important, whether we call it “return on investment”, “rate of return”, “cost savings”, or whether we call it cash in the bank.
Let’s create an example that Anton can’t help but love.
Suppose we have a business that’s just breaking even – the company isn’t making money or losing money. But they employ a team of 15 people to read the logs on their systems, each of whom are paid (fully-loaded) $100K/year.
Now, suppose the brilliant CISO of our fictional organization calls Anton, and brings in Log Logic at a cost of $100K. Our CISO then fires 14 of the 15 log watchers.
Over the course of the year, the company now posts a profit of $1.3 million – the $1.4 million in salaries it no longer pays the 14 fired people, less the $100K spent on the product. (Note: this ignores severance, etc. for simplicity.)
Now, did the product produce a return on the investment of $100K into it? You’d be hard-pressed to say that increasing company net profit by $1.3M as the result of a purchasing decision is not a return on the investment.
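For the spreadsheet-inclined, the arithmetic behind that number is trivial – these are the hypothetical figures from the example above, not real numbers from any company:

```python
# Figures from the hypothetical example above.
salary = 100_000        # fully-loaded annual cost per log watcher
team_before = 15        # log watchers before the purchase
team_after = 1          # log watchers kept after deploying the product
product_cost = 100_000  # one-time cost of the log-management product

salaries_saved = (team_before - team_after) * salary  # 14 * $100K = $1.4M
net_profit_change = salaries_saved - product_cost     # $1.4M - $100K = $1.3M
print(net_profit_change)  # 1300000
```

Whatever label you put on it, that last number is the one the CFO cares about.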
But the pedantic ones out there are right: it’s not strict “ROI”.
But I don’t care about ROI. I care about $1.3M in profit. Call it whatever you want – whenever you invest in something that enables you to bring in more money or reduce costs, it’s a smart decision, whether you can calculate it as strict ROI or not.
July 17, 2007
I’ve referenced my new job a couple of times recently without actually saying what I’m doing – I figure it’s time to explain where I ended up. But I’ll do that by way of a story about where the industry is and what I’m focusing on in my new role.
As you probably know, I’ve spent time in a whole bunch of different roles within the security community – vendor side, customer side, service provider, product vendor, consultant, etc. Most recently, I was spending time at a large insurance company on the east coast, working as a security architect in a particularly dysfunctional security organization. And I wouldn’t have traded it for anything – the dysfunction allowed me to see clearly a whole bunch of management and career strategies that wouldn’t have been evident otherwise.
But one thing in particular really started to bother me. We were an under-resourced group (what security team isn’t?), and we had to maximize the effectiveness of our investments. So we did a good amount of due diligence on products – there weren’t any paper evals. We were really bringing in products and putting them through their paces to make sure that they worked.
And even in that type of organization, I was seeing project completion rates of 20-30%. I had heard the statistics before that the IT industry completes less than 20% of its projects, and I was seeing even a relatively well-disciplined project management organization do not much better than that.
The main reason? Product inadequacy or unfriendliness.
A great example came when we were deploying an enterprise desktop application. This particular product is a market leader, with a great reputation for usability and some great reference customers. It passed pilot incredibly easily, so we made the decision to deploy it on 40,000 desktops and laptops throughout the organization. Of course, we didn’t have the resources or testing equipment to pilot the product on more than a few machines in a test environment. So, before moving forward, the project manager asked the sales engineers whether a 40,000-machine deployment was going to be a problem.
“Oh, no,” they replied. “Our architecture can handle that perfectly well.”
So, we went forward. And, when roll-out day came, we found out (the incredibly hard and painful way) that the machines could be deployed only 900 at a time. Our 1-week roll-out became a 2-month roll-out. There was much wailing and gnashing of teeth, and much abuse heaped on the vendor. But nothing could be done – that was all the product could do.
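As a sanity check on those numbers – the one-batch-per-business-day cadence is my assumption for illustration; the post only gives the totals:

```python
import math

# Roll-out math from the story above. The cadence (one 900-machine batch per
# business day) is an assumption, not something the vendor documented.
machines = 40_000
batch_size = 900

batches = math.ceil(machines / batch_size)  # 45 batches
weeks = batches / 5                         # business weeks at 1 batch/day
print(batches, round(weeks))                # 45 batches, ~9 weeks (~2 months)
```

Even at a much more aggressive cadence, "900 at a time" turns a flip-the-switch deployment into a months-long slog.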
That was just a single example, and, having spent time on the vendor side of the world, I know it’s not even a particularly egregious example of vendor sales exaggeration. I’ve seen sales people completely misrepresent product functionality to clients to get business.
To me, this type of exaggeration and misrepresentation is one of the biggest risks that information security teams face today – in the face of budgets that aren’t ever high enough, a 7-figure purchase of a product that doesn’t perform as advertised just isn’t acceptable. It’s the kind of thing that gets CISOs and their direct reports fired, and gives security a black eye within their organizations.
So, when I got a call from Greg Shipley at Neohapsis about a vacancy at the top of the Neohapsis Labs organization, I got incredibly excited, because I immediately saw the opportunity to help stem the tide of crappy information out there. Neohapsis has always had an amazing reputation for its product testing – from the old Network Computing reviews, to the work we do for individual clients validating that the product they’re about to deploy actually works the way it’s supposed to, to helping vendors prove that their product works as they’re about to advertise (hint: most of the time, they have to fix something after we look at it). The work I’m getting to do right now allows me to help fight bad product.
As I look at it, I’ve seen too many multi-million dollar security product engagements fail to be anything but cynical about it – the customers that use the lab before they deploy at least know that the product works as advertised. Or that it doesn’t. (If only I had known as a customer all of the things that I’ve learned in my first 3 weeks reviewing the old lab reports here, I’d have saved my team a lot of headaches and steered clear of some big mistakes.)
So, if you’re about to spend a few hundred thousand or a few million dollars on a product, a good idea might be to drop me an email before you do… we might be able to keep you from making a really big career limiting move.
July 16, 2007
Trances are going to be my other topic at Defcon this year. I’m really excited to be speaking with one of my favorite security bloggers, who is also a master NLP practitioner, Anton Chuvakin – we’re going to be talking about how NLP and hypnosis can enhance social engineering.
But what made me think of blogging this is that I’m currently reading a copy of The Illuminatus! Trilogy – to understand the workings of a master hypnotist, one need only read the first 15 pages. Just about every brilliant hypnotic trick and hypnotic language pattern is present in those 15 pages.
July 13, 2007
“Look folks, here’s the deal. There is no job security! YOU need to take care of your career, not just your job! Do you find yourself doing any career stuff, outside of your job? Don’t have time? Fine – you’ll have plenty of time, since a job search can take so long. Trust me, start doing a little every day, and it will add up. Do not wait until you are terminated to get moving. A little big-picture career stuff every day will go a long way.”
Usually, I’d comment here. But there’s nothing else to say.
(Except perhaps that, if you’re going to Defcon, you should come see my talk with the brilliant and funny Lee Kushner about how to create the real type of job security that Jason talks about in his post… we’re speaking on Saturday afternoon.)
July 12, 2007
Andy wrote recently about urgency in security. And I think he brought up some really good and really important points:
“There is a trend in information security… to tackle the urgent issues first. These are the issues that users are screaming about, management is on you about, auditors have written you up about – the things that get you noticed. No one gets noticed for the security flaw or vulnerability that they found, patched and as a result prevented a breach. You get noticed when you put out a fire that other people see. Even if that fire is in the middle of a field and is surrounded by a moat full of water. People see you out there jumping up and down putting out that fire and they applaud you.”
He goes on to talk about the importance of proactivity and having a plan, but, in my experience, a plan survives only until the first person who has the authority to quash the plan has their own pet fire that needs to be put out.
What is far more important than a plan for getting things done is a definition of what constitutes an emergency in your world. My thinking on this has been shaped a great deal by some of the ideas in the 4-Hour Work Week (which everyone on the planet should read… it’s that good).
We have a tendency to escalate to urgent a huge number of things that simply don’t need escalation. The questions you need to ask before jumping off into fire-fighting mode:
1. Who will be seriously injured if I don’t do this right now?
2. How much will it cost (in real $$$) if I don’t do this right now?
3. What opportunities would I be giving up to do this right now?
Obviously, if it’s a matter of injury to self or others, it really is a fire. In this case, injury doesn’t have to be physical – if you have a compromise in progress, there’s a pretty serious injury going on (as well as a loss of real $$$), and it’s worth moving on it right away.
Unfortunately, most often the “injury” in a given situation is the minor annoyance of someone deemed to be “important”. In that case, it’s appropriate to ask the person the prioritization question…
“I’m currently working on X, which will save/make us Y dollars. Would you like me to delay that task in order to help you right now?”
Realize, the answer may often be yes. But at that point, the importance (i.e., priority) of the decision has been made explicit. For those who have read Covey’s The Seven Habits, this is enough to move a task from Quadrant 3 to Quadrant 1 – from merely urgent to “urgent and important”.
Which is how you determine what’s a real fire anyway. (The 3 questions above are a guide to what ends up in Q1.)
July 11, 2007
… or I should be submitting a whole pile of talks to next year’s Blackhat. I just read the latest article over at Dark Reading about Matasano’s upcoming Blackhat talk where they take apart a protocol that is used for financial transactions that is *GASP* badly designed and implemented!
I don’t know if this is just Dave & Thomas going on their reputation as security bad-asses (which they are), but any time I’ve seen a protocol designed for use in a particular vertical, it had many of the same design flaws described in Kelly’s article. Whether in insurance, finance, health care, or whatever, this type of error abounds.
I remember a particular engagement at a large hospital that was running on one of these specialty protocols. The protocol was incredibly secure – if you connected to the appropriate port on any number of their systems, and issued a single byte command, it would send you the next patient record on its record stack. And if you issued a different (but equally complex) command, the system would allow you to input or modify whatever patient records it contained. No authentication, no authorization. No encryption.
And did I mention that this travelled over a wireless network?
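To make concrete how little “protocol” there actually was, here’s a sketch of what talking to a service like that looks like – the port number and command byte are invented for illustration, but the shape (open a TCP socket, send a single byte, read a record back in the clear) matches what we found:

```python
import socket

# Hypothetical sketch of the hospital "protocol" described above. The port
# and command byte are made up; the point is that there is no authentication,
# no authorization, and no encryption anywhere in the exchange.
def read_next_record(host: str, port: int = 9100) -> bytes:
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"\x01")   # the entire "give me the next record" command
        return s.recv(4096)  # a patient record comes back in plaintext
```

That’s the whole client. Anyone on the (wireless!) network segment could have written it in five minutes.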
I don’t know that what Dave & Thomas are presenting is that unique – it’s cool that they’re going to do it, and I’m excited to see the talk. But they’re just scratching the surface of a very, very, very large iceberg.
July 10, 2007
Did Chris Isaak just get the lyrics to the Star Spangled Banner wrong?
I swear, he just said: “Whose broad stripes and bright stars, through the perilous night“.
July 10, 2007
One of the most important things that was drilled into me when studying hypnosis was the concept of ecology. That is, that a person is a whole and complete system, and that, when making a significant change to one part of that system, it is absolutely vital to ensure that the system as a whole is not adversely affected.
“They treated 19 accident or rape victims for ten days, during which the patients were asked to describe their memories of the traumatic event that had happened 10 years earlier. Some patients were given the drug, which is also used to treat amnesia, while others were given a placebo. A week later, they found that patients given the drug showed fewer signs of stress when recalling their trauma.”
This is significant – as we learn more and more about neuroscience, we will be given more opportunities to affect our ecology as a system. But we need to remember that we are systems by nature, and that taking the bad memories out of the system won’t necessarily make the system perform better. It’s the same problem Prohibition faced – the intent was that taking the alcohol out of the system would improve the system. Instead, it created an entirely new business for people willing to break laws.
Imagine, for a second, how the world would be different if Gandhi, while riding the train back to India and pondering the hateful treatment of people he had recently seen in South Africa, had simply popped a pill and forgotten. Or if Mandela, upon leaving prison, had forgotten all about the treatment his tormentors gave him. Or if we, as a people, could simply forget atrocities like the Holocaust.
Would the system be better for never remembering the bad things?