
Doing Your Due Diligence on Security Scanning and Penetration Testing Vendors


All too often, development shops and IT professionals become complacent, depending on packaged scanning solutions or a utility belt of tools to provide security assurance testing of a hosted software solution.  In the past five years, a number of new entrants to the security evaluation and penetration testing market have created some compelling cloud-based solutions for perimeter testing.  These tools, while exceptionally useful for a sanity check of firewall rules, load balancer configurations, and even certain industry best practices in web application development, are starting to create a false sense of security in a number of ways.  As these tools proliferate, infrastructure professionals are becoming increasingly dependent upon their handsomely crafted reporting about PCI, GLBA, SOX, HIPAA, and all the other regulatory buzzwords that apply to certain industries.  If you’re using these tools, have you considered:

Do you use more than one tool?  If not, you should.  If you do, is there any actual overlap between their testing criteria?

There is a certain incestuous phenomenon that develops in any SaaS industry that sees high profit margins: entrepreneurs perceive cloud-based solutions as having a low barrier to entry.  This perception drives new market entrants to cobble together solutions to compete for share in the space.  But are these fly-by-night competitors meaningfully differentiated from their peers?

Sadly, I have found in practical experience that this is not the case.  Too many times I have enrolled in a free trial of a tool, or actually shelled out for some hot new cloud-based scanning solution, only to find that at best it duplicatively reports already-known vulnerabilities, with only false positives appearing as the ‘net new’ items brought to my attention.  Herein lies the rub: when new entrants to this market create competing products, an iterative reverse engineering goes on.  They run existing scanning products on the market against websites, review those results, and make sure they develop a solution that at least identifies the same issues.

That’s not good at all.  In any given security scan, you may see perhaps 20% of the total vulnerabilities a product is capable of finding show up as problems in a scan target.  Even if you were to scan multiple targets, you may see mostly the same kinds of issues in each subsequent scan.  Those using this as a methodology to build quick-to-market security scanning solutions are delivering sub-par offerings that may identify only 70% of the vulnerabilities other scanning solutions do.  eEye has put together similar findings in an intriguing report I highly recommend reading.  Investigating the research and development activities of a security scanning provider is an important due diligence step to make sure that when you get an “all clear” clean report from a scanning tool, that report actually means something.

How do you judge your security vendor in this regard?  Ask for a listing of all specific vulnerabilities they scan for.  Excellent players in this market will not flinch at giving you this kind of data for two reasons: (1) a list of what they check for isn’t as important as how well and how thoroughly they actually assess each item, and (2) worthwhile vendors are constantly adding new items to the list, so it doesn’t represent any static master blueprint for their product.
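
As a rough illustration, here is a minimal Python sketch of one way to quantify that overlap, assuming each vendor can export its findings (or its check list) to CSV with a column of common identifiers such as CWE or CVE numbers.  The file and column names below are hypothetical and would need to match your vendors’ actual exports.

# Minimal sketch: measure overlap between two scanners' exported findings.
# Assumes each export is a CSV with a column of common identifiers; the
# column name "cwe_id" and the filenames are hypothetical.
import csv

def load_findings(path, id_column="cwe_id"):
    """Read a scanner's CSV export and return the set of finding identifiers."""
    with open(path, newline="") as handle:
        return {row[id_column].strip() for row in csv.DictReader(handle) if row.get(id_column)}

def compare(report_a, report_b):
    a, b = load_findings(report_a), load_findings(report_b)
    shared = a & b
    print(f"Vendor A findings: {len(a)}")
    print(f"Vendor B findings: {len(b)}")
    print(f"Shared findings:   {len(shared)}")
    print(f"Unique to A: {sorted(a - b)}")
    print(f"Unique to B: {sorted(b - a)}")
    if a | b:
        print(f"Overlap: {100 * len(shared) / len(a | b):.0f}% of all reported items")

if __name__ == "__main__":
    compare("vendor_a_scan.csv", "vendor_b_scan.csv")   # hypothetical filenames

Even a crude comparison like this makes it obvious when a ‘new’ scanner is mostly re-reporting what your existing tool already finds.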

Does your tool test more than OWASP vulnerabilities?

Part of the problem with developing security testing tools is over-reliance on standardized vulnerability definitions and classifications.  While it is helpful to categorize vulnerabilities into conceptually similar groups to create common mitigation strategies and techniques, too often security vendors treat OWASP attack classifications as the definitive scope for probative activities.  Don’t get me wrong, these are excellent guides for ensuring the most common types of attacks are covered, but they do not provide a comprehensive test of application security.  Too often, types of testing such as incremental information disclosure, where various pieces of the system provide information that can be used to discern how to attack the system further, are relegated to manual penetration testing instead of being codified into scanning criteria.  Path disclosure and path traversal vulnerabilities are a class of incremental information disclosure that scanning tools do routinely test for, but they represent only a file-system-based test for this kind of security problem rather than part of a larger, systematic scanning approach.
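
To make that concrete, here is a minimal sketch of the kind of path traversal and path disclosure probing a scanner automates, assuming a hypothetical endpoint that serves files via a “file” query parameter.  Real scanners try far more payload encodings and platforms than this, and you should only run anything like it against systems you are authorized to test.

# Minimal sketch of path traversal / path disclosure probing against a
# hypothetical endpoint with a "file" query parameter.  The payload list
# and disclosure markers are illustrative, not exhaustive.
from urllib.error import HTTPError
from urllib.request import urlopen

TRAVERSAL_PAYLOADS = [
    "../../../../etc/passwd",
    "..%2f..%2f..%2f..%2fetc%2fpasswd",   # URL-encoded path separators
    "....//....//....//etc/passwd",       # simple filter-evasion variant
]

# Markers whose presence in a response suggests file contents or internal
# paths are being disclosed (incremental information disclosure).
DISCLOSURE_MARKERS = ["root:x:0:0:", "/var/www/", "/home/", "Fatal error"]

def probe(base_url, param="file"):
    for payload in TRAVERSAL_PAYLOADS:
        url = f"{base_url}?{param}={payload}"
        try:
            with urlopen(url, timeout=10) as resp:
                body, status = resp.read().decode(errors="ignore"), resp.status
        except HTTPError as err:          # error pages often leak paths too
            body, status = err.read().decode(errors="ignore"), err.code
        hits = [marker for marker in DISCLOSURE_MARKERS if marker in body]
        flag = "!" if hits else " "
        print(f"[{flag}] HTTP {status}  {payload!r}  markers={hits}")

if __name__ == "__main__":
    probe("https://staging.example.com/download")   # hypothetical target

Notice that the interesting signal is not just file contents coming back, but any response that leaks internal paths or stack traces: that is the incremental information disclosure an attacker builds on.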

Moreover, SaaS providers should consider DoS/DDoS weaknesses as security problems, not just customer relationship or business continuity problems.  These types of attacks can cripple a provider and draw its technical talent into mitigating the denial of service.  During those periods, as recent high-profile fake-outs have shown, attackers can either generate so much trash traffic that other attacks and penetrations are difficult to perceive or react to, or create opportunities for social engineering attacks to succeed against less sophisticated personnel while the big guns are tackling the larger attack.  Until weaknesses that allow high load to easily take down a SaaS application are included as part of vulnerability scanning, this will remain a serious hole in the testing methodology of a security scanning vendor.
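
As a rough sketch of what such a check could look like, the following measures whether average response time degrades sharply as concurrency rises, which is one crude signal of an endpoint that high load can take down.  The URL and concurrency levels are hypothetical, this is an illustration rather than a real load test, and it should only be pointed at environments you own and are authorized to stress.

# Crude sketch: does average response time degrade sharply under modest
# concurrency?  The endpoint and concurrency levels are hypothetical; this
# illustrates the idea and is not a substitute for real load testing.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_get(url):
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def degradation_check(url, levels=(1, 5, 20)):
    baseline = None
    for concurrency in levels:
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            timings = list(pool.map(timed_get, [url] * concurrency))
        average = sum(timings) / len(timings)
        baseline = baseline or average
        print(f"concurrency={concurrency:3d}  avg={average:.2f}s  "
              f"slowdown={average / baseline:.1f}x vs. a single request")

if __name__ == "__main__":
    degradation_check("https://staging.example.com/api/search?q=test")   # hypothetical

A scanning vendor that cannot tell you anything about this class of weakness is leaving a meaningful part of your attack surface unexamined.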

So, seeing CVE identifiers and OWASP classifications for reported items is nice from a reporting perspective, and it gives a certain credence to mitigation reports to auditors, but don’t let those lull you into a false sense of security coverage.  Ask your vendor what other types of weaknesses and application vulnerabilities they test for outside of the prescribed standard vulnerability classifications.  Otherwise, you will potentially shield yourself from “script kiddies”, but leave yourself open to targeted attacks and advanced persistent threats that have created embarrassing situations for a number of large institutions in the past year.

What is your mobile strategy?

Native mobile applications are the hot stuff right now.  Purists tout the HTML5-only route to mobile application development, but mobile web development alone has not been enough to satisfy Apple and gain access to the iOS platform since 2008, and consumers can still detect a web app that is merely a browser window.  They prefer the feature set that comes with native applications, including camera access, accelerometer data, and the use of physical phone buttons in application navigation.  The native experience is still too nice to pass up if you want to be at the head of the class in your industry.

If you’re a serious player in the SaaS market, you have or will soon have a native mobile application or hybrid-native deliverable. If you’re like most other software development shops, mobile isn’t your forte, but you’ve probably hired specific talent with a mobile skill set to realize whatever your native strategy is.  Are your architects and in-house security professionals giving the same critical eye to native architecture, development, and code review as they are to your web offering?  If you’re honest, the answer is: probably not.

The reason your answer is ‘probably not’ is that native development involves a whole different technology stack, set of development languages, and testing methodology, and the tools you invested in to secure your web application do not apply to your native application development.  This doesn’t mean your native applications are not vulnerable; it means they’re vulnerable in different ways that you don’t yet know about or test for.  This should be a wake-up call for enterprise software shops: the fact that a vulnerability exists only on a native platform does not diminish its seriousness.  It is trivial to spin up a mobile emulator to host a native application and use the power of a desktop or server to exploit that vulnerability at a scale that could cripple a business through disclosure or denial of service.

Your native mobile security scanning strategy should minimally cover two important surface areas:

1. Vulnerabilities in the way the application stores data on the device in memory and on any removable media

2. Vulnerabilities in the underlying API serving the native application

If you’re not considering these, then you probably haven’t selected a native application security scanning tool that checks for them either.
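
By way of illustration, here is a minimal sketch of both checks under some assumptions: the storage check runs against a copy of the app’s data directory pulled from a test device into a local folder, and the API check calls a hypothetical endpoint the native app would use.  The paths, URL, and patterns below are illustrative, not exhaustive.

# Minimal sketch of the two mobile surface areas listed above.  The local
# directory, endpoint URL, and secret patterns are hypothetical examples.
import re
from pathlib import Path
from urllib.error import HTTPError
from urllib.request import Request, urlopen

# Patterns that suggest secrets stored in plaintext on the device.
SECRET_PATTERNS = {
    "password field": re.compile(r"password\s*[=:]", re.IGNORECASE),
    "bearer token": re.compile(r"Bearer\s+[A-Za-z0-9\-_\.]{20,}"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def scan_app_storage(extracted_dir):
    """Surface area 1: plaintext secrets in files pulled from the device."""
    for path in Path(extracted_dir).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                print(f"[!] {path}: possible {label} stored in plaintext")

def check_api_requires_auth(endpoint):
    """Surface area 2: the app's backing API should reject anonymous calls."""
    try:
        with urlopen(Request(endpoint), timeout=10) as resp:
            print(f"[!] {endpoint} returned HTTP {resp.status} without credentials")
    except HTTPError as err:
        if err.code in (401, 403):
            print(f"[ ] {endpoint} correctly rejects unauthenticated requests")
        else:
            print(f"[?] {endpoint} returned HTTP {err.code}; review manually")

if __name__ == "__main__":
    scan_app_storage("./pulled_app_data")                      # hypothetical local copy
    check_api_requires_auth("https://api.example.com/v1/me")   # hypothetical endpoint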

In Conclusion

Security is always a moving target, as fluid as the adaptiveness of attackers’ techniques and the rapid pace of change in the technologies they attack.  Don’t treat security scanning and penetration testing as a checklist item for RFPs or a way to address auditors’ concerns: understand the surface areas, and understand the failings of security vendors’ products.  Understand that your assessments are valid only in the short term, and that re-evaluating your vendor mix and their offerings on a continual basis is crucial.  Only then will you be informed and able to make the right decisions to be proactive, instead of reactive, about the sustainability of your business.

 
Posted on May 29, 2013 in Security

 
