How security surveys could be improved

"It is a matter of opinion, there is isn't any science behind it". A quote from a partner at a Big 4 accounting firm at a presentation I attended recently. In security, very few companies have the systems in place to capture the data to accurately perform quantitative risk assessments, assess security costs and investments in a true cost benefit manner nor measure the true cost of security incidents. Sadly this means that we are reliant on surveys as a source of insight into everything from emerging threats, benchmarks of maturity and whether we are investing in the right areas. So if this is the case, why aren't security surveys better and what could be done to make them better?


There seem to be plenty of security surveys, and especially at this time of the year, when people are evaluating how the last year went and trying to predict the next. They mostly seem to be run by consulting companies and governance institutes, with motives ranging from building business development opportunities to providing a service to their community of members. My particular experience came recently from an internal security person presenting the results of a survey he has been contributing to for over 5 years, and then a few weeks later a presentation from a Big 4 accounting firm to our team on the survey they had just run. I was quite interested in both because, having previously attempted to benchmark a security practice against a capability maturity model, write a security strategy and write business cases for security investments, I found the insight and methodology highly relevant.

Sadly though, when both were presenting, the main thought going through my head was whether this data was really accurate and whether its conclusions could be relied upon. This led me to ask a question at the end of the Big 4 presentation: to me it seems like this survey-based approach relies greatly on the wisdom of crowds principles, so how have you tried to align to these or something similar? The risk and technology partner presenting looked bemused; she started to stammer and didn't really seem to understand the question. I tried to explain: The Wisdom of Crowds is a book that talks about how the collective view of the crowd can be smarter than the smartest person(s) in the crowd, but this requires a few principles to be adhered to, these being diversity of opinion, independence, decentralisation and a method to aggregate the private judgements into a collective view. She seemed even more confused, but then provided a potential slip of brutal honesty: "It is a matter of opinion, there isn't any science behind it".
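
To make that aggregation principle concrete, here is a minimal Python sketch (entirely invented numbers, nothing from any real survey) of why averaging many independent, noisy opinions tends to beat most individuals, and why the effect collapses once errors are correlated:

import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0   # the quantity the crowd is estimating
CROWD_SIZE = 500

# Each respondent is individually noisy and slightly biased.
estimates = [random.gauss(TRUE_VALUE + random.uniform(-5, 5), 20)
             for _ in range(CROWD_SIZE)]

crowd_estimate = statistics.mean(estimates)
individual_errors = [abs(e - TRUE_VALUE) for e in estimates]

print(f"crowd estimate: {crowd_estimate:.1f} (true value {TRUE_VALUE})")
print(f"crowd error: {abs(crowd_estimate - TRUE_VALUE):.1f}")
print(f"median individual error: {statistics.median(individual_errors):.1f}")
# The aggregated estimate typically beats most individuals, but only while
# the errors are independent; correlated (herd) errors do not cancel out.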

The last part of that statement was what I found most disappointing. She was right that the responses were a collection of opinions, but why does that mean there is no science behind it? It seems to me that if you are going to spend a lot of money, time and resources to collect this information, print glossy brochures and publications and run a number of show-and-tell sessions to share the insights from it, why would you not employ some data scientists and people who run surveys and polls for a living, and put some scientific rigour behind it? Would this not greatly increase the confidence you have in the results, which would then allow greater success in meeting the underlying motives, e.g. selling work in the areas that are growing, or providing a service in the areas that need the most improvement / provide the best bang for buck?

To me it seems like some simple measures could align security surveys with the principles in The Wisdom of Crowds:

Diversity of opinion 
This seems to be the big one that most security surveys struggle with. At this presentation a colleague of mine asked: so who typically fills in this questionnaire? The answer: a range of people; usually it is sent to the CISO and he/she completes it or delegates it. That seemed reasonable, except there was not any real rigour in the methodology to ensure that true diversity was achieved. The danger here was that there was a high chance that someone in the security department completed the survey. This was highlighted by questions such as "is security delivering the value required by the business?" Having a truly diverse range of people in the organisation respond to this question seems like it would be valuable, rather than say 5% business, 95% security staff. The gentleman answering the question did not even have any stats on how many of the survey responses had been completed by someone in the security department vs. outside of it. This seems like a really simple thing that could be tracked in the survey, as sketched below. In fact, ideally the methodology would be more proactive and actually seek responses from as diverse a group as possible: from all different roles in the business, technology and security, from the C-level to the coal face.
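
As a rough illustration of how simple that tracking could be, here is a hedged Python sketch (the field names and responses are hypothetical) that reports the departmental mix of respondents:

from collections import Counter

responses = [  # in practice, exported from the survey tool
    {"role": "CISO", "dept": "security"},
    {"role": "firewall engineer", "dept": "security"},
    {"role": "GM Internet Banking", "dept": "business"},
    {"role": "developer", "dept": "technology"},
    {"role": "security analyst", "dept": "security"},
]

mix = Counter(r["dept"] for r in responses)
total = sum(mix.values())
for dept, n in mix.most_common():
    print(f"{dept:<12}{n:>3} ({100 * n / total:.0f}%)")
# If "security" dominates, answers to questions like "is security delivering
# the value required by the business?" should be treated with caution.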


Independence
The crowd became less intelligent when "members of the crowd were too conscious of the opinions of others and began to emulate each other and conform rather than think differently". You could see this in 2008 in the run-up to the financial crisis, with how much of the financial community was using the same models when it came to rating CDOs. This is a particularly insidious problem within the security community. If you have attended a security conference in the past year or so, it would have been amazing if there was not a majority of talks on mobile security, cloud, APTs etc. It is not that most companies have solved the basics, nor that the business has stopped caring about things like passwords or single sign-on; it is just that these topics are not sexy at the moment. Addressing this point could therefore be a simple benefit of doing the above: ensuring the responding parties are diverse. There could perhaps even be some filtering questions asked of the respondent which would allow measurement of their alignment with the herd on key opinions, e.g. can data be stored securely in a public cloud? Their responses to the main questions could still be collected, but you could cut the data in a way that took the independence of responses into account.
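
As a sketch of what that cutting might look like (the filtering questions and scoring below are hypothetical, not from any actual survey), each respondent could be scored on how closely their answers track the majority view:

from statistics import mean

FILTER_QUESTIONS = ["cloud_secure", "apt_top_threat", "mobile_top_risk"]

respondents = [  # yes/no answers to the filtering questions
    {"cloud_secure": True,  "apt_top_threat": True,  "mobile_top_risk": False},
    {"cloud_secure": True,  "apt_top_threat": True,  "mobile_top_risk": True},
    {"cloud_secure": False, "apt_top_threat": False, "mobile_top_risk": False},
]

# The majority (herd) view per filtering question.
majority = {q: mean(r[q] for r in respondents) > 0.5 for q in FILTER_QUESTIONS}

def herd_alignment(resp):
    """Fraction of filtering questions where this respondent matches the herd."""
    return mean(resp[q] == majority[q] for q in FILTER_QUESTIONS)

for i, r in enumerate(respondents):
    print(f"respondent {i}: herd alignment {herd_alignment(r):.2f}")
# The main survey results could then be reported twice: for all respondents,
# and for only those below some herd-alignment threshold.
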
Decentralisation 
It appears that if you are able to capture the knowledge of specialists, especially local knowledge that only they have, the overall collective decision or insight is smarter. In a security context this could be improved by ensuring the survey is truly global rather than limited to Western English-speaking countries, and again by ensuring that people at the coal face provide responses, e.g. the firewall engineer, the investigator, the penetration tester, in addition to the CISO who may have more generalist knowledge. The same applies in business and technology: include the teller and the machine operator as well as the GM of Internet Banking.
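
One way to make that proactive rather than accidental (the targets below are invented for illustration) is to set response quotas across region and role and chase the cells that are short, rather than accepting whoever happens to reply:

from collections import Counter

TARGET_PER_CELL = 10
REGIONS = ["EMEA", "Americas", "APAC"]
ROLES = ["CISO", "engineer", "business"]

# Responses received so far, keyed by (region, role); missing cells count as 0.
received = Counter({("EMEA", "CISO"): 14, ("APAC", "engineer"): 3})

for region in REGIONS:
    for role in ROLES:
        shortfall = TARGET_PER_CELL - received[(region, role)]
        if shortfall > 0:
            print(f"need {shortfall} more responses: {role} in {region}")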

Aggregation
Generally the survey itself is a good way of doing this. However some basic improvements could be made:

Ask the question with the end in mind - in many of the surveys the questions do not appear to be crafted by people with a high degree of experience in the polling or surveying field. They are probably written by security people. A question should always be driven by: what decision can we make with this data, which stakeholders would be interested in this and why (and does this match our actual target audience), and what insight does the answer being one option rather than another give me? Many of the surveys seem to want to move with the times (e.g. have cloud and mobile etc. in the responses). However, not only does changing the question every year reduce its value for historical trends, but given the way the question is asked, the end answer often does not provide anything valuable. Insights which security people are always interested in, and which the questions should be crafted to answer, seem to be:
  • What areas of security is my organisation performing well or poorly in (what is the maturity level) relative to my peers?
  • Where should I focus my investment and where could I make savings? Is the level of investment / spend on security technology, people and process commensurate with peers? Is more or less justified, and in what areas?
  • What areas of security are providing the most value to the business, and in which areas could the most value be provided?
  • Where is the next major set of incidents or regulation coming from and how could I get in front of the curve/demands?
  • Is the investment / focus I'm making in various areas of security actually resulting in improvements and benefits being realised?
  • How can I measure improvements or reductions in my security posture / risk exposure? How secure am I relative to peers and the threat environment?
  • Where is the lack of security controls actually costing me money or causing harm in terms of reputational damage, legal, contractual or regulatory impacts vs. being a theoretical risk?
  • What should be my top three priorities for the next year or within the next three years and how would I make some practical improvements in these areas?
There are probably many more, but my point is that these insights should be identified, with a clear mapping from survey question results to how each is answered. A pilot of the survey could be run, or even purely computer-based simulations of answers, to see how various answers would provide insight; this could then be used to tune the questions prior to the real survey.
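
A simulated pilot could be as simple as the following sketch (the question, options and weightings are all invented): generate plausible answer distributions under different assumed worlds and check whether the headline result would actually change between them:

import random
from collections import Counter

random.seed(1)
OPTIONS = ["identity", "cloud", "mobile", "awareness"]

def simulate_responses(weights, n=200):
    """Draw n simulated answers to a draft question such as
    'where will you focus security investment next year?'."""
    return Counter(random.choices(OPTIONS, weights=weights, k=n))

# Scenario A: a genuine spread of priorities; scenario B: everyone says cloud.
for name, weights in [("spread", [3, 3, 2, 2]), ("herd", [1, 9, 1, 1])]:
    counts = simulate_responses(weights)
    headline = counts.most_common(1)[0][0]
    print(f"{name}: {dict(counts)} -> headline '{headline}'")
# If the headline barely moves between very different assumed worlds, the
# question is not discriminating and should be redesigned before the survey.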

Basic statistics - understanding average vs. median, how outliers affect / skew your results, the value of standard deviations and confidence intervals, and the difference between correlation and causation. Actually presenting commentary and explanation and not just pretty graphs. The value of sorting and infographics. All of which, and more I'm sure, experienced people in running surveys would be able to provide, and all of which would greatly increase the value of the results. Surely a big global accounting company running an annual global survey could build a competent team of these people (e.g. hire the pollsters that run these for political campaigns) rather than asking managers, directors and partners to run them simply because they sell services in the risk and security space.
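
As a tiny worked example of the average vs. median point (the spend figures below are made up), a single outlier can make the headline average of a survey meaningless while the median stays honest:

import math
import statistics

# Reported annual security spend in $m from 12 hypothetical respondents;
# one large outlier drags the mean upwards.
spend = [1.2, 0.8, 1.5, 2.0, 1.1, 0.9, 1.3, 1.7, 1.0, 1.4, 1.6, 25.0]

mean_spend = statistics.mean(spend)
median_spend = statistics.median(spend)
stdev_spend = statistics.stdev(spend)
# Rough 95% confidence interval for the mean (normal approximation).
half_width = 1.96 * stdev_spend / math.sqrt(len(spend))

print(f"mean   {mean_spend:.2f} (95% CI +/- {half_width:.2f})")
print(f"median {median_spend:.2f}")
print(f"stdev  {stdev_spend:.2f}")
# The median (~1.35) describes the typical respondent far better than the
# mean (~3.29), which the single 25.0 outlier has skewed upwards.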


Summary (TL;DR)
The security industry has a high reliance on survey-based data. These surveys unfortunately seem to lack scientific rigour and could be improved by adhering to the principles in The Wisdom of Crowds:
  • Diversity of opinion - designing the survey to include business, technology and security respondents at all levels, across a broad range of organisations
  • Independence - testing for herd think with filtering questions and allowing the data to be cut by higher or lower independence
  • Decentralisation - looking for specialist local knowledge rather than generalists
  • Aggregation - designing questions to answer things people actually want to know, ensuring the responses can actually provide this and testing it. Improving the basic statistics and presentation

PS: no blogs for a whole year - an epic fail, and typically what happens to most blogs. No excuses really, just a lack of serious inspiration, combined with other hobbies taking time, working on projects I've already talked about (DLP, SIEM) and long delivery times on other projects. The goal is to write a couple of posts this Christmas break (the Oracle security stack and big data in SIEM are the targets); we'll see how we go. I don't think this will ever be a regular blog, but hopefully it will offer some interesting and valuable content, a bit of out-of-the-box thinking and some challenging of the norms. If you got this far, thanks for reading, and please comment / link me your content!


Picture source:  http://siliconangle.com/files/2012/11/BigDataFail-300x208.jpg
