
Category Archives: Security

PowerShell one-liner to find outbound connectivity via WinRM

In controlled environments, it’s useful to know when outbound connectivity is not restricted to a predefined list of required hosts, as many standards like PCI require.  Here’s a helpful one-liner that will query your Active Directory instance for computer accounts that are enabled, and then for each of them try to connect to a site from that machine, as orchestrated by WinRM.  If you use this script, just know that you will probably see a sea of errors for machines that cannot be reached from your source host via WinRM.  My go-to site for testing non-secure HTTP is asdf.com, but you could use any target and port you desire based on what should not be allowed in your environment.  I have changed the snippet below to use example.com as a placeholder so I don’t spam the poor soul who runs asdf.com, but you should replace that with google.com or whatever host to which you wish to verify connectivity.

Invoke-Command -ComputerName (
    Get-ADComputer -Filter {Enabled -eq "True"} -Property Name, Enabled |
        ForEach-Object { $_.Name }
) -ScriptBlock {
    Test-NetConnection -Port 80 "example.com" | Select-Object TcpTestSucceeded
}

The output will look something like this:

 TcpTestSucceeded PSComputerName RunspaceId 
 ---------------- -------------- ---------- 
             True YOUR-HOST-1    d5fd044c-c268-460e-a274-d3253adc8ce2 
             True YOUR-HOST-2    98206f71-80c1-4e7e-a467-fec489c542ee 
            False YOUR-HOST-3    d0b6cf57-e833-44a6-a7bb-aebd4d854b5c 
             True YOUR-HOST-4    14af618b-1ca7-4c1f-bb56-ce58dbd4af94

It’s a great sanity check before an audit or after major changes to your network architecture or security controls.  Enjoy!
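
If you would rather capture the results and keep only the hosts where the outbound connection unexpectedly succeeded, a minimal follow-up sketch looks like this.  It assumes the same Get-ADComputer query as above and that writing a CSV to the local disk is an acceptable way to keep the evidence; the target host and output path are placeholders to adjust for your environment.

# Gather the enabled computer names once so they can be reused.
$computers = Get-ADComputer -Filter {Enabled -eq "True"} -Property Name, Enabled |
    ForEach-Object { $_.Name }

# Capture the results instead of letting them scroll by; suppress the WinRM connection errors.
$results = Invoke-Command -ComputerName $computers -ErrorAction SilentlyContinue -ScriptBlock {
    Test-NetConnection -Port 80 "example.com" | Select-Object TcpTestSucceeded
}

# Keep only the hosts that could reach the target, and save them for the audit trail.
$results | Where-Object { $_.TcpTestSucceeded } |
    Select-Object PSComputerName, TcpTestSucceeded |
    Export-Csv -Path .\outbound-connectivity.csv -NoTypeInformation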


Posted on June 24, 2017 in Programming, Security

 


SQL Injection with New Relic [PATCHED]


Background

First off, I have found New Relic to be a great application performance monitoring (APM) tool.  Its ability to link transaction performance from the front-end all the way to the back-end database queries that slow your web application is pretty awesome.  This feature lets you see specific queries that are running slowly, including the query execution plans and how much time is spent processing various parts of a database request.  Their online documentation includes screenshots of this slow-query interface.

What’s not so awesome is when your APM’s method for retrieving this data creates a SQL injection flaw in your application that wasn’t there before.  In October 2016, I became aware of some strange errors when a DBA was trying to load SQL Server trace files into PSSDiag, due to a formatting problem in the trace file itself.  Our DBA discovered that unclosed quotation marks were causing problems with PSSDiag loading trace files.  So, how could an unclosed quotation mark even be happening?  It’s a hallmark of a SQL injection exploit, and so I began digging.

It appeared our ORM (NHibernate at the time) was sending unparameterized queries, and one of the field values had an unescaped quotation mark, which was causing the error in PSSDiag.  However, in other cases the same query, unique to an area of our code, would be issued with parameters.  Upon further digging, it actually appeared our application was submitting the same query twice: first as the parameterized version, and a second time with the parameter values substituted directly into the query string, preceded by a SET SHOWPLAN_ALL.  It looked a bit like this:

exec sp_executesql N'INSERT INTO dbo.Table (A, B, C) 
VALUES (@p0, @p1, @p2);select SCOPE_IDENTITY()'
,N'@p0 uniqueidentifier,@p1 uniqueidentifier, @p2 nvarchar(50)'
,@p0='{Snipped}',@p1='{Snipped}',@p2=N'I don''t even'

Followed by:

SET SHOWPLAN_ALL ON
INSERT INTO dbo.Table (A, B, C)
VALUES ('{Snipped}', '{Snipped}', 'I don't even');select SCOPE_IDENTITY()

As you can see in the first example created by NHibernate, the word “don’t” was properly escaped; however, in the subsequent execution, it was not.  This second statement is sent by our very same application process, which New Relic instruments using the ICorProfilerCallback2 profiler hook to retrieve application performance statistics.  But it doesn’t just snoop on the process; it hijacks the application’s database connections to periodically piggyback on them, echoing requests to retrieve the metrics used to populate the slow queries feature.  The SET SHOWPLAN_ALL directive causes the statement that follows to return its execution plan rather than actually returning data.

(DBAs will note this is actually not a reliable way to retrieve this data at all, as parameterized queries can and often do have very different query execution plans when parameter sniffing and lopsided column statistics are in play.  But that’s how New Relic does it.)

This is pretty bad, because now virtually every user-provided input that is sent to your database, even if programmed using secure programming practices to avoid SQL injection flaws, becomes vulnerable when New Relic is installed with the Slow Queries feature enabled.  That being said, New Relic does not send this second ‘show plan’ and repeated statement set for every query.  It samples, appending it only onto some executions of any given statement.  An attacker attempting to exploit this would not be able to do so consistently; however, repeated attempts on something like the username field of a login screen, which in many systems is likely to be logged to a database table that stores usernames of failed login attempts, would occasionally succeed when the subsequent SHOWPLAN_ALL and unparameterized version of the original query is injected at the end of the request by New Relic.

Timeline

  • October 5, 2016: Notified New Relic
  • October 5: New Relic acknowledges issue and provides a workaround (disabling explain plans)
  • October 6: New Relic’s application security team responds with details explaining why they believe the issue is not exploitable as a security vulnerability. Their reasoning is based on the expected behavior of SHOWPLAN_ALL, which would not execute subsequent commands
  • October 6: I provide a specific example of how to bypass the ‘protection’ of the preceding SHOWPLAN_ALL statement that confirms this is an exploitable vulnerability.
  • October 6: New Relic confirms the exploit and indicates it is targeted for resolution in their upcoming 6.x version of the New Relic .NET Agent.  I confirm the issue in New Relic .NET Agent 5.22.6.
  • October 7: New Relic indicates they will not issue a CVE for this issue.
  • October 12: New Relic updates us that a fix is still in development, but a new member of their application security team questions the exploitability of the issue.
  • October 12: I provide an updated, detailed exploit to the New Relic security team to demonstrate how to exploit the flaw.
  • November 8: Follow-up call with New Relic security team and .NET product manager on progress.  They confirm they have resolved the issue as of the New Relic .NET Agent 6.3.123.0.
  • November 9: The .NET Agent release containing the fix is published.
  • May 26, 2017: Public disclosure

Conclusion

First off, I want to applaud New Relic on their speedy response and continued dialogue as we worked through the communication of this issue so they understood how to remediate it.  On our November 8 call, I specifically asked if New Relic would reconsider their stance of not issuing a CVE for the issue, or at least clearly identify 6.3.123.0 as a security update so developers and companies that use this agent would know they needed to prioritize this update.  They thoughtfully declined, and I did inform them that I would then be publicly disclosing the vulnerability if they did not.

Even if I don’t agree with it, I understand the position companies take about not proactively issuing CVE’s.  However, I do believe software creators must clearly indicate when action is needed by their users to update software they provide to resolve security vulnerabilities. Many IT administrators take the ‘if it’s not broken, don’t update it’ approach to components like the New Relic .NET Agent, and if no security urgency is communicated for an update, it could take months to years for it to be updated in some environments.  While some companies may be worried about competitors’ narratives or market reactions to self-disclosing, the truth is vulnerabilities will eventually be disclosed anyway, and providing an appropriate amount of disclosure and timely communications for security fixes is a sign of a mature vulnerability management program within a software company.

Also, be sure that any mitigation techniques you put in place actually work.  While working around the issue, we stumbled upon another bug, subsequently fixed in 6.11.613, where turning off the ‘slow query’ analysis feature per the New Relic documentation did not consistently work.

Given the potential gravity of this issue, I have quietly sat on this for almost 7 months to allow for old versions of this agent to be upgraded by New Relic customers, in the name of responsible disclosure.  I have not done any testing on versions of New Relic agents other than the .NET one, but I would implore security researchers to test agents from any APM vendor that collects execution plans as part of their solution for this or similar weaknesses.

 

Posted on May 26, 2017 in Security

 

Security Advisory for Financial Institutions: POODLE

Yesterday evening, Google made public a new form of attack on encrypted connections between end-users and secure web servers using an old form of encryption technology called SSL 3.0.  This attack could permit an attacker who has the ability to physically disrupt or intercept an end-user’s browser communications to execute a “downgrade attack” that could cause an end-user’s web browser to attempt to use the older SSL 3.0 encryption protocol rather than the newer TLS 1.0 or higher protocols.  Once an attacker successfully executed a downgrade attack on an end-user, a “padding oracle” attack could then be attempted to steal user session information such as cookies or security tokens, which could be further used to gain illicit access to an active secure website session.  This particular flaw is termed the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack.  At the time this advisory was authored, US-CERT had not yet published a vulnerability document, but had reserved advisory number CVE-2014-3566 for its publication, expected today.

It is important to know this is not an attack on the secure server environments that host online banking and other end-user services, but rather a form of attack on end-users themselves who are using web browsers that support the older SSL 3.0 encryption protocol.  For an attacker to target an end-user, they would need to be able to capture or reliably disrupt the end-user’s web browser connection in specific ways, which would generally limit the scope of this capability to end-user malware or to attackers on the user’s local network or in control of significant portions of the networking infrastructure the end-user was using.  Unlike previous security scares in 2014 such as Heartbleed or Shellshock, this attack targets the technology and connection of end-users.  It is one of many classes of attacks that target end-users, and it is not the only such risk posed to users who have an active network attacker specifically targeting them from their local network.

The proper resolution for end-users will be to update their web browsers to forthcoming versions that completely disable the older, susceptible SSL 3.0 protocol.  In the interim, service providers can disable SSL 3.0 support, with the caveat that IE 6 users, whose browsers rely on SSL 3.0 by default, will no longer be able to access those sites without making special settings adjustments in their browser configuration.  (But honestly, if you are keeping IE 6 a viable option for your end-users, this is only one of many security flaws to which those users are subject.)  Institutions that run on-premises software systems for their end-users may wish to perform their own analysis of the POODLE SSL 3.0 security advisory and evaluate what, if any, server-side mitigations are available to them as part of their respective network technology stacks.
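
For service providers running Windows servers that use the Schannel TLS stack (IIS, for example), a commonly documented way to disable SSL 3.0 on the server side is a registry change followed by a reboot; a minimal sketch is below.  Non-Windows stacks such as Apache, nginx, and most load balancers have their own configuration directives for this, and you should verify any change against your own environment before relying on it.

# Disable SSL 3.0 for incoming (server-side) connections on a Windows/Schannel host.
# A reboot is required for Schannel to pick up the change.
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server'

New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null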

Defense-in-depth is the key to a comprehensive security strategy in today’s fast-developing threat environment.  Because of the targeted nature of this type of attack, and its prerequisites for a privileged vantage point to interact with an end-user’s network connection, it does not appear to be a significant threat to online banking and other end-user services, and this information is therefore provided as a precaution and for informational purposes only.

All financial institutions should subscribe to US-CERT security advisories and monitor the publication of CVE-2014-3566 once released for any further recommendations and best practices.  Updated versions of Chrome, Firefox, Internet Explorer, and Safari that remove support for the older SSL 3.0 protocol will be announced through their respective vendors’ release notification channels.  Until US-CERT publishes its guidance, refer to the Google whitepaper directly at https://www.openssl.org/~bodo/ssl-poodle.pdf

 

Posted on October 15, 2014 in Security

 

Security Advisory for Financial Institutions: Shell Shock

“Shell Shock” Remote Code Execution and Compromise Vulnerability

Yesterday evening, DHS National Cyber Security Division/US-CERT published CVE-2014-6271 and CVE-2014-7169, outlining a serious vulnerability in a widely used command line interface (or shell) for the Linux operating system and many other *nix variants.  This software bug in the Bash shell allows files to be written on remote devices or remote code to be executed on remote systems by unauthenticated, unauthorized malicious users.  Because the vulnerability involves the Bash shell, some media outlets are referring to this vulnerability as Shell Shock.

Nature of Risk

By exploiting this parsing bug in the Bash shell, other software on a vulnerable system, including operating system components such as the OpenSSH server process and the Apache web server process, can be compromised. Because this attack vector allows an attacker to potentially compromise any element of a vulnerable system, effects from website defacement to password collection, malware distribution, and retrieval of protected system components such as private keys stored on servers are possible, and the US-CERT team has given it their highest-impact CVSS rating of 10.0.

Please be specifically aware that a patch was provided to close the issue for the original CVE-2014-6271; however, this patch did not sufficiently close the vulnerability.  The current iteration of the vulnerability is CVE-2014-7169, and any patches applied to resolve the issue should specifically state they close the issue for CVE-2014-7169.  Any devices that are vulnerable and exposed to any untrusted network, such as a vendor-accessible extranet or the public Internet, should be considered suspect, isolated, and reviewed by a security team, due to the ability of “worms”, or automated infect-and-spread scripts that exploit this vulnerability, to quickly affect vulnerable systems in an unattended manner.  Any affected devices that contain private keys should have those keys treated as compromised and reissued per your company’s information security policies regarding key management procedures.

Next Steps

All financial institutions should immediately review their own environments to determine that no other third-party systems that are involved in serving or securing the online banking experience, or any other publicly-available services, are running vulnerable versions of the Bash shell.  Any financial institution that provides any secure services with Linux or *nix variants running a vulnerable version of the Bash shell could be at risk no matter what their vendor mix. If any vulnerable devices are found, they should be treated as suspect and isolated per your incident response procedures until they are validated as not affected or remediated.  All financial institutions should immediately and thoroughly review their systems and be prepared to change passwords on and revoke and reissue certificates with private key components stored on any compromised devices.

For further reading on this issue, see the US-CERT entries for CVE-2014-6271 and CVE-2014-7169.

 

Posted on September 25, 2014 in Security

 

End-User Credential Security

This week’s announcement that a Russian crime syndicate has amassed 1.2 billion unique usernames and passwords across 420,000 websites would seem like startling news in 72-point font on the front of major newspapers, if it weren’t so sad that such announcements have become commonplace these days.  With four more months to go in the year, the haul is already higher than the estimated 823 million credentials compromised in the 2013 breaches affecting everyone from Adobe to Target.  It’s from Black Hat 2014 that I find myself thinking about what we as ISVs, SaaS providers, and security professionals can do to protect users in the wake of advanced persistent threats and organized, well-funded thieves wreaking havoc on the digital identities and real assets of our clients and customers.

Unlike Heartbleed or other server-side vulnerabilities, this particular credential-siphoning technique obviously targeted users themselves to affect so many sites and at least 542 million unique addresses, representing at least half that many unique users.  Why are users so vulnerable to credential-stealing malware?  To explore this issue, let’s immediately discard a tired refrain heard inside software houses everywhere: users aren’t dumb.  All too often, good application security is watered down to its least secure but most useful denominator out of an overabundance of concern that secure applications may frustrate users, lower adoption, and reduce retention and usage.  It is true that the more accessible the Internet becomes, the wider the spectrum of the audience that uses it, from the most expertly capable to the ‘last mile’ of great-grandparents, young children, and the technologically unsophisticated.  However, this should neither be grounds to dismiss end-user credential security as a concern squarely in the service provider’s court to address, nor an excuse to fail to provide adequately secure systems.  End-user education is our mutual responsibility, even if that means three more screens, additional prompts to confirm identity or action, or an out-of-band verification process.  Keeping processes as stupefyingly simple as possible because our SEO metrics show that’s the way to marginally improve adoption, reduce cart abandonment, or improve site usage times breeds complacency that ends up hurting us all in the long run.

Can we agree that 1FA needs to end?  In an isolated world of controlled systems, a username and password combination might have been a fair assertion of identity.  Today’s systems, however, are neither controlled nor isolated – the same tablets that log into online banking also run Fruit Ninja for our children, and we pass them over without switching out any concept of identity on a device that can save our passwords and re-present them without any authentication.  Small-business laptops often run without real-time malware scanning software, easily giving up credentials through keystroke logging, MitM attacks, cookie stealing, and a variety of other commonplace techniques.  Usernames and passwords fail us because they can be saved and cached just as easily as they can be collected and forwarded to command and control servers in Russia or elsewhere.  I’m not one of those anarchists advocating ‘death to the password’ (remember Vidoop?), but using knowledge-based challenges (password, out-of-wallet questions, or otherwise) as the sole factor of authentication needs to end.  And it needs to end smartly: sending an e-mail ‘out of band’ to an inbox loaded in another tab on the same machine, or an SMS message read by Google Voice in another tab, means your ‘2FA’ is really just one factor layered twice instead of two-factor authentication.  A few more calls into the call center to help users cope with 2FA will be far cheaper in the long run than the fallout of a major credential breach that affects your site’s users.

We need to also discourage poor password management: allowing users to choose short or non-complex passwords and merely warning them about their poor choices is no excuse – we should just flatly reject them.  At the same time, we need to recognize that forcing users to establish too complex a password will encourage them to establish a small number of complex passwords and reuse them across more sites.  This is one of the largest Achilles’ heels for end-users: when a compromise of one site does occur, and especially if you have removed the option for users to establish a username not tied to their identity (name, e-mail address, or otherwise), you have made it tremendously easier for those who have gathered credentials from one site to exploit them on your site.  Instead, we should consider nuances to our complexity requirements that would make it likely a user would have to generate a different knowledge-based credential for each site.  While that in and of itself may increase the chance a user would ‘write a password down’, a user who stores all their passwords in a password manager is still arguably more secure than the user who uses one password for all websites and never writes it anywhere.

Finally, when lists of affected user accounts become available in uploaded databases of raw credentials that are leaked or testable on sites such as https://haveibeenpwned.com/ – ACT.  Find the users who overlap with compromised credentials on other sites, and proactively flag or lock their accounts, or at least message them to educate and encourage good end-user credential security.  We cannot unilaterally force users to improve the security of their credentials, but we can educate them rather than making their eventual folly certain through our inaction.

 
 

The Wires Cannot Be Trusted; Does DRM Have Something to Teach Us?

In the continuing revelations about the depth to which governments have gone to subjugate global communications in terms of privacy, anonymity, and security on the Internet, one thing is very clear: nothing can be trusted anymore.

Before you write this post off as smacking of ‘conspiracy theorist’, take the Snowden revelations disclosed since Christmas, particularly regarding the NSA’s Tailored Access Operations catalog, which demonstrates the ways they can violate implicit trust in local hardware by infecting firmware at a level where even reboots and factory ‘resets’ cannot remove the implanted malware, or their “interdiction” of new computers, which allows them to install spyware between the time a machine leaves the factory and arrives at your house.  At a broader level, because of the trend in global data movement towards centralizing data transit through a diminishing number of top-tier carriers – a trend eerily similar to wealth inequality in the digital era – governments and pseudo-governmental bodies have found it trivial to exact control with quantum insert attacks.  In these sophisticated attacks, malicious entities (which I define for these purposes as those who exploit trust to gain illicit access to a protected system) like the NSA or GCHQ can slipstream rogue servers that mimic trusted public systems such as LinkedIn to gain passwords and assume identities through ephemeral information gathering to attack other systems.

Considering these things, the troubling realization is that this is not the failure of the NSA, the GCHQ, the US presidential administration, or the lack of public outrage to demand change.  The failure is in the infrastructure of the Internet itself.  If anything, these violations of trust simply showcase technical flaws we have chosen not to acknowledge to this point in the larger system’s architecture.  Endpoint encryption technologies like SSL were supplanted by successive versions of TLS because of underlying flaws not only in cipher strength, but in protocol assumptions that did not acknowledge all the ways in which the trust of a system or the interconnects between systems could be violated.  This is similarly true for BGP, which has seen a number of attacks that allow routers on the Internet to be reprogrammed to shunt traffic to malicious entities that can intercept it: a protocol that trusts anything is vulnerable because nothing can be trusted forever.

When I state nothing can be trusted, I mean absolutely nothing.  Your phone company definitely can’t be trusted – they’ve already been shown to have collapsed under government pressure to give up the keys to their part of the kingdom.  The very wires leading into your house can’t be trusted; they already are or someday will be tapped.  Your air-gapped laptop can’t be trusted; it’s being hacked with radio waves.

But individual, private citizens are facing a challenge Hollywood has faced for years – how do we protect our content?  The entertainment industry has been chided for years for its sometimes Draconian attempts to limit use and restrict access to data by implementing encryption and hardware standards that run counter to the kind of free access analog storage mediums, like the VHS and cassette tapes of days of old, provided.  Perhaps there are lessons to be learned from their attempts to address the problem of “everything, everybody, and every device is malicious, but we want to talk to everything and everybody, on every device”.  One place to draw inspiration is HDCP, a protocol most people except hardcore AV enthusiasts are unaware of, which establishes device authentication and encryption across each connection of an HD entertainment system.  Who would have thought that when your six-year-old watches Monsters, Inc., those colorful characters are protected by such an advanced scheme on the cord that just runs from your Blu-ray player to your TV?

While you may not believe in DRM for your DVD’s from a philosophical or fair-use rights perspective, consider the striking difference with this approach:  in the OSI model, encryption occurs at Layer 6, on top of many other layers in the system.  This is an implicit trust of all layers below it, and this is the assumption violated in the headlines from the Guardian and NY Times that have captured our attention the most lately: on the Internet, he who controls the media layers also controls the host layers.  In the HDCP model, the encryption happens more akin to Layer 2, as the protocol expects someone’s going to splice a wire to try to bootleg HBO from their neighbor or illicitly pirate high-quality DVD’s.  Today if I gained access to a server closet in a corporate office, there is nothing technologically preventing me from splicing myself into a network connection and copying every packet on the connection.  The data that is encrypted on Layer 6 will be very difficult for me to make sense of, but there will be plenty of data that is not encrypted that I can use for nefarious purposes: ARP broadcasts, SIP metadata, DNS replies, and all that insecure HTTP or poorly-secured HTTPS traffic.  Even worse, it’s a jumping off point for setting up a MITM attack, such as an SSL Inspection Proxy.  Similarly, without media-layer security, savvy attackers with physical access to a server closet or the ability to coerce or hack into the next hop in the network path can go undetected if they redirect your traffic into rogue servers or into malicious networks, and because there is no chained endpoint authentication mechanism on the media-layer, there’s no way for you to know.

These concerns aren’t just theoretical either, and they’re not to protect teenagers’ rights to anonymously author provocative and mildly threatening anarchist manifestos.  They’re to protect your identity, your money, your family, and your security.  Only more will be accessible and controllable on the Internet going forward, and without appropriate protections in place, it won’t just be governments soon who can utilize the assumptions of trust in the Internet’s architecture and implementation for ill, but idealist hacker cabals, organized crime rings, and eventually, anyone with the right script kiddie program to exploit the vulnerabilities once better known and unaddressed.

Why aren’t we protecting financial information or credit card numbers with media-layer security so they’re at least as safe as Mickey Mouse on your HDTV?

 


When All You See Are Clouds… A Storm Is Brewing

The recent disclosures that the United States Government has violated the 4th Amendment of the U.S. Constitution, and potentially other international law, by building a clandestine program that provides G-Men at the NSA direct taps into every aspect of our digital life – our e-mail, our photos, our phone calls, our entire relationships with other people and even with our spouses – are quite concerning from a technology policy perspective.  The fact that the US Government (USG) can by legal authority usurp any part of our recorded life – which is about every moment of our day – highlights several important points to consider:

  1. Putting the issue of whether the USG/NSA should have broad access into our lives aside, we must accept that the loopholes that allow them to demand this access expose weaknesses in our technology.
  2. The fact the USG can perform this type of surveillance indicates other foreign governments and non-government organizations likely can and may already be doing so as well.
  3. Given that governments are often less technologically savvy though much more resource-rich than malevolent actors, if data is not secure from government access, it is most definitely not secure from more cunning hackers, identity thieves, and other criminal enterprises.

If we can accept the points above, then we must accept that the disclosure of PRISM, and the connotations of the carefully but awkwardly worded public statements about the program, present both a problem and an opportunity for technologists to solve regarding data security in today’s age.  This is not a debate of whether we have anything to hide, but rather a discussion of how we can secure data, because if we cannot secure it from a coercive power (sovereign or criminal), we have no real data security at all.

But before proposing some solutions, we must consider:

How Could PRISM Have Happened in the First Place?

I posit an answer devoid of politics or blame, based instead on an evaluation of the present state of Internet connectivity and e-commerce.  Arguably, the Internet has matured into a stable, reliable set of services.  The more exciting phase of its development saw a flourishing of ideas much like a digital Cambrian explosion.  In its awkward adolescence, connecting to the Internet was akin to performing a complicated rain dance that involved WinSock, dial-up modems, and PPP, sprinkled with roadblocks like busy signals, routine server downtime, and blue screens of death.  The rate of change in equipment, protocols, and software was meteoric, and while the World Wide Web existed (what most laypeople consider wholly as “the Internet” today), it was only a small fraction of the myriad of services and channels for information to flow.  Connecting to and using the Internet required highly specialized knowledge, which both increased the level of expertise of those developing for and consuming the Internet and limited its adoption and appeal – a period some consider the net’s Golden Age.

But as with all complex technologies, eventually they mature.  The rate of innovation slows down as standardization becomes the driving technological force, pushed by market forces.  As less popular protocols and methods of exchanging information gave way to young but profitable enterprises that pushed preferred technologies, the Internet became a much more homogeneous experience both in how we connect to it and how we interact with it.  This shaped not only the fate of now-obsolete tech, such as UUCP, FINGER, ARCHIE, GOPHER, and a slew of other relics of our digital past, but also influenced the very design of what remains — a great example being identification and encryption.

For the Internet to become a commercializable venue, securing access to money, from online banking to investment portfolio management, to payments, was an essential hurdle to overcome.  The solution for the general problem of identity and encryption, centralized SSL certificate authorities providing assurances of trust in a top-down manner, solves the problem specifically for central server webmasters, but not for end-users wishing to enjoy the same access to identity management and encryption technology.  So while the beneficiaries like Amazon, eBay, PayPal, and company now had a solution that provided assurance to their users that you could trust their websites belonged to them and that data you exchanged with them was secure, end-users were still left with no ability to control secure communications or identify themselves with each other.

A final contributing factor I want to point out is that as other protocols drifted into oblivion, more functionality was demanded over a more uniform channel — the de facto winner becoming HTTP and the web.  Originally a stateless protocol designed for minimal browsing features, the web became a solution for virtually everything, from e-mail (“webmail”), to searching, to file storage (who has even fired up an FTP client in the last year?).  This was a big win for service providers, as they, like Yahoo! and later Google, could build entire product suites on just one delivery platform, HTTP, but it was also a big win for consumers, who could throw away all their odd little programs that performed specific tasks and just use their web browser for everything — now even Grandma can get involved.  A rich field of single-purpose tech companies was bought up or died out in favor of the oligarchs we know today – Microsoft, Facebook, Google, Twitter, and the like.

Subtly, this also represented a huge shift in where data is stored.  Remember Eudora, or your Outlook inbox file tied to your computer (in the days of POP3 before IMAP was around)?  As our web browser became our interface to the online world, and as we demanded anywhere-accessibility to those services and the data they create or consume, those bits moved off our hard drives and into the nebulous service provider cloud, where data security cannot be guaranteed.

This is meaningful to consider in the context of today’s problem because:

  1. Governments and corporate enterprises were historically unable to sufficiently regulate, censor, or monitor the internet because they lacked the tools and knowledge to do so.  Thus, the Internet had security through obscurity.
  2. Due to the solutions to general problems around identity and encryption relying on central authorities,  malefactors (unscrupulous governments and hackers alike) have fewer targets to influence or assert control over to tap into the nature of trust, identity, and communications.
  3. With the collapse of service providers into a handful of powerful actors, on a scale of inequity on par with the collapse of wealth distribution in America, there now exist fewer providers to surveil to gather data, and those providers host more data on each person or business that can be interrelated in a more meaningful way.
  4. As information infrastructure technology has matured to provide virtual servers and IaaS offerings on a massive scale, fewer users and companies deploy controlled devices and servers, opting instead to lease services from cloud providers or use devices, like smartphones, that wholly depend upon them.
  5. Because data has migrated off our local storage devices to the cloud, end-users have lost control over their data’s security.  Users have to choose between an outmoded device-specific way to access their data, or give up the control to cloud service providers.

There Is A Better Way

Over the next few blog posts, I am going to delve into a number of proposals and thoughts around giving control and security assurances of data back to end-users.  These will address points #2 and #4 above with solutions that layer over existing web technologies, not proposals to upend our fundamental usage of the Internet by introducing opaque configuration barriers or whole new paradigms.  End-users should have a choice about whether their service providers have access to their data, in a way that does not require Freenet’s darknets or Tor’s game-of-telephone style of anonymous but slow onion-routing answer to web browsing.  Rather, users should be able to positively identify themselves to the world, send and receive data in cloud-based applications without ever having to give up their data security, not have to trust the service provider, remain device-independent (able to access the same service securely anywhere), and not have to establish shared secrets (swap passwords or certificates).

As a good example, if you want to send a secure e-mail message today, you have three categorical options to do so:

  1. Implicitly trust a regular service provider:  Ensure both the sender and the receiver use the same server.  By sending a message, it is only at risk while the sender connects to the provider to store it and while the receiver connects to the provider to retrieve it.  Both parties trust the service provider will not access or share the information.  Of course, many actors, like Gmail, still do.
  2. Use a secure webmail provider:  These providers, like Voltage.com, encrypt the sender’s connection to the service to protect the message as it is sent, and send notifications to receivers to come to a secure HTTPS site to view the message.  While better than the first option, the message is still stored in a way that can be demanded by subpoena or snooped inside the company while it sits on their servers.
  3. Use S/MIME certificates and an offline mail client:  While the most secure option for end-to-end message encryption, this cumbersome method is machine-dependent and requires senders and receivers to first share a certificate with each other – something the average user is flatly incapable of understanding or configuring.

Stay tuned to my next post, where I propose a method by which anyone could send me a message securely, without knowing anything else about me other than my e-mail address, in a way I could read online or my mobile device, in a way that no one can subpoena or snoop on in between.

 

 
 


Doing Your Due Diligence on Security Scanning and Penetration Testing Vendors

All too often, development shops and IT professionals become complacent with depending on packaged scanning solutions or a utility belt of tools to provide security assurance testing of a hosted software solution.  In the past five years, a number of new entrants to the security evaluation and penetration testing market have created some compelling cloud-based solutions for perimeter testing.  These tools, while exceptionally useful for a sanity check of firewall rules, load balancer configurations, and even certain industry best practices in web application development, are starting to create a false sense of security in a number of ways.  As these tools proliferate, infrastructure professionals are becoming increasingly dependent upon their handsomely-crafted reporting about PCI, GLBA, SOX, HIPAA, and all the other regulatory buzzwords that apply to certain industries.  If you’re using these tools, have you considered:

Do you use more than one tool?  If not, you should.  And if you do, is there any actual overlap between their testing criteria?

There is a certain incestuous phenomenon that develops in any SaaS industry that sees high profit margins: entrepreneurs perceive cloud-based solutions as having a low barrier to entry.  This perception drives new market entrants to cobble together solutions to compete for share in the space.  But are these fly-by-night competitors competitively differentiated from their peers?

Sadly, I have found in practical experience this not to be the case.  Too many times have I enrolled in a free trial of a tool or actually shelled out for some hot new cloud-based scanning solution only to find, at best, existing known vulnerabilities duplicatively reported by this new ‘solution’, with only false positives appearing as the ‘net new’ items brought to my attention.  Herein lies the rub — when new entrants to this market create competing products, there is an iterative reverse engineering that goes on — they run existing scanning products on the market against websites, check those results, and make sure they develop a solution that at least identifies the same issues.

That’s not good at all.  In any given security scan, you may see, perhaps, 20% of the total vulnerabilities a product is capable of finding show up as a problem in a scan target.  Even if you were to scan multiple targets, you may only be seeing mostly the same kinds of issues in each subsequent scan.  Those using this as a methodology to build quick-to-market security scanning solutions are delivering sub-par offerings that may only identify 70% of the vulnerabilities other scanning solutions do.  eEye has put together similar findings in an intriguing report I highly recommend reading.  Investigating the research and development activities of a security scanning provider is an important due diligence step to make sure when you get an “all clear” clean report from a scanning tool, that report actually means something.

How do you judge your security vendor in this regard?  Ask for a listing of all specific vulnerabilities they scan for.  Excellent players in this market will not flinch at giving you this kind of data for two reasons: (1) a list of what they check for isn’t as important as how well and how thoroughly they actually assess each item, and (2) worthwhile vendors are constantly adding new items to the list, so it doesn’t represent any static master blueprint for their product.

Does your tool test more than OWASP vulnerabilities?

The problem with developing security testing tools is in part the over-reliance on the standardization of vulnerability definition and classifications.  While it is helpful to categorize vulnerabilities into conceptually similar groups to create common mitigation strategies and mitigation techniques, too often security vendors focus on OWASP attack classifications as the definitive scope for probative activities.  Don’t get me wrong, these are excellent guides for ensuring the most common types of attacks are covered, but they do not provide a comprehensive test of application security.  Too often the types of testing such as incremental information disclosure, where various pieces of the system provide information that can be used to discern how to attack the system further, are relegated to manual penetration testing instead of codified into scanning criteria.  Path disclosure and path traversal vulnerabilities are a class of incremental information disclosures that are routinely tested for by scanning tools, but they represent only a file-system basis test for this kind of security problem instead of part of a larger approach to the problem through systematic scanning.

Moreover, SaaS providers should consider DoS/DDoS weaknesses as security problems, not just customer relationship or business continuity problems.  These types of attacks can cripple a provider and draw their technical talent to the problem at hand, mitigating the denial of service attack.  During those periods, this can and has recently been used in high-profile fake-outs to either generate so much trash traffic that other attacks and penetrations are difficult to perceive or react to, or to create opportunities for social engineering attacks to succeed with less sophisticated personnel while the big-guns are trying to tackle the bigger attacks.  Until weaknesses that can allow for high-load to easily take down a SaaS application are included as part of vulnerability scanning, this will remain a serious hole in the testing methodology of a security scanning vendor.

So, seeing CVE identifiers and OWASP classifications for reported items is nice from a reporting perspective, and it gives a certain credence to mitigation reports to auditors, but don’t let those lull you into a false sense of security coverage.  Ask your vendor what other types of weaknesses and application vulnerabilities they test for outside of the prescribed standard vulnerability classifications.  Otherwise, you will potentially shield yourself from “script kiddies”, but leave yourself open to targeted attacks and advanced persistent threats that have created embarrassing situations for a number of large institutions in the past year.

What is your mobile strategy?

Native mobile applications are the hot stuff right now.  Purists tout the HTML5-only route to mobile application development, but mobile web development alone isn’t enough to satisfy Apple for access to the iOS platform (since 2008), and consumers can still detect a web app that is merely a browser window, preferring the feature set that comes from native applications, including camera access, accelerometer data, and the integration of the physical phone buttons into application navigation.  The native experience is still too nice to pass up if you want to be at the head of the class in your industry.

If you’re a serious player in the SaaS market, you have or will soon have a native mobile application or hybrid-native deliverable. If you’re like most other software development shops, mobile isn’t your forte, but you’ve probably hired specific talent with a mobile skill set to realize whatever your native strategy is.  Are your architects and in-house security professionals giving the same critical eye to native architecture, development, and code review as they are to your web offering?  If you’re honest, the answer is: probably not.

The reason your answer is ‘probably not’ is because it is a whole different technology stack, set of development languages, and testing methodology, where the tools you invested in to secure your web application do not apply to your native application development.  This doesn’t mean your native applications are not vulnerable; it means they’re vulnerable in different ways that you don’t yet know about or test for.  This should be a wake-up call for enterprise software shops: the fact that a vulnerability exists only on a native platform does not mitigate its seriousness.  It is trivial to spin up a mobile emulator to host a native application and use the power of a desktop or server to exploit that vulnerability on a scale that could cripple a business through disclosure or denial of service.

Your native mobile security scanning strategy should minimally cover two important surface areas:

1. Vulnerabilities in the way the application stores data on the device in memory and on any removable media

2. Vulnerabilities in the underlying API serving the native application

If you’re not considering these, then you probably have not selected a native application security scanning tool checking for these either.

In Conclusion

Security is always a moving target, as fluid as the adaptiveness of attackers’ techniques and the rapid pace of change in the technologies they attack.  Don’t treat security scanning and penetration testing as a checklist item for RFPs or a way to address auditors’ concerns — understand the surface areas, and understand the failings of security vendors’ products.  Understand your assessments are valid only in the short term, and that re-evaluation of your vendor mix and their offerings on a continual basis is crucial.  Only then will you be informed and able to make the right decisions to be proactive, instead of reactive, regarding the sustainability of your business.

 

Posted on May 29, 2013 in Security

 

Thwarting SSL Inspection Proxies

A disturbing trend in corporate IT departments everywhere is the introduction of SSL inspection proxies.  This blog post explores some of the ethical concerns about such proxies and proposes a provider-side technology solution to allow clients to detect their presence and alert end-users.  If you’re well-versed in concepts about HTTPS, SSL/TLS, and PKI, please skip down to the section entitled ‘Proposal’.

For starters, e-commerce and many other uses of the public Internet are only possible because the capability to encrypt messages exists.  The encryption of information across the World Wide Web is possible through a suite of cryptography technologies and practices known as Public Key Infrastructure (PKI).  Using PKI, servers can offer a “secure” variant of the HTTP protocol, abbreviated as HTTPS.  This variant encapsulates application-level protocols, like HTTP, using a transport-layer protocol called Secure Sockets Layer (SSL), which has since been superseded by a similar, more secure version, Transport Layer Security (TLS).  Most users of the Internet are familiar with the symbolism common to such secure connections: when a user browses a webpage over HTTPS, some visual iconography (usually a padlock) as well as a stark change in the presentation of the page’s location (usually a green indicator) shows the end-user that the page was transmitted over HTTPS.

SSL/TLS connections are protected in part by a server certificate stored on the web server.  Website operators purchase these server certificates from a small number of competing companies, called Certificate Authorities (CA’s), that can generate them.  The web browsers we all use are preconfigured to trust certificates that are “signed” by a CA.  The way certificates work in PKI allows certain certificates to sign, or vouch for, other certificates.  For example, when you visit Facebook.com, you see your connection is secure, and if you inspect the certificate, you can see the server certificate Facebook presents is trusted because it is signed by VeriSign, and VeriSign is a CA that your browser trusts to sign certificates.

So… what is an SSL Inspection Proxy?  Well, there is a long history of employers and other entities using technology to conduct surveillance of the networks they own.  Most workplace Internet Acceptable Use Policies state clearly that use of the Internet on a company-owned machine and company-paid bandwidth is permitted only for business purposes, and that the company reserves the right to enforce this policy by monitoring that use.  While employers can easily review and log all unencrypted traffic that flows over their networks – that is, any request for a webpage and the returned rendered output – the increasing prevalence of HTTPS as a default has frustrated employers in recent years.  Instead of being able to easily monitor the traffic that traverses their networks, they have had to resort to less specific ways to infer usage of secure sites, such as DNS recording.

(For those unaware and curious, the domain-name system (DNS) allows client computers to resolve a URL’s name, such as Yahoo.com, to its IP address, 72.30.38.140.  DNS traffic is not encrypted, so a network operator can review the requests of any computers to translate these names to IP addresses to infer where they are going.  This is a poor way to survey user activity, however, because many applications and web browsers do something called “DNS pre-caching”, where they will look up name-to-number translations in advance to quickly service user requests, even if the user hasn’t visited the site before.  For instance, if I visited a page that had a link to Playboy.com, even if I never click the link, Google Chrome may look up that IP address translation just in case I ever do in order to look up the page faster.)

So, employers and other network operators are turning to technologies that are ethically questionable, such as Deep Packet Inspection (DPI), which looks into all the application traffic you send to determine what you might be doing, and to downright unethical practices such as SSL Inspection Proxies.  Now, I concede I have an opinion here: SSL Inspection Proxies are evil.  I justify that assertion because an SSL Inspection Proxy causes your web browser to lie to its end-user, giving them a false assertion of security.

What exactly are SSL Inspection Proxies?  SSL Inspection Proxies are servers set up to execute a Man-In-The-Middle (MITM) attack on a secure connection, on behalf of your ISP or corporate IT department snoops.  When such a proxy exists on your network and you make a secure request for https://www.google.com, the network redirects your request to the proxy.  The proxy then makes a request to https://www.google.com for you, returns the results, and then does something very dirty — it creates a lie in the form of a bogus server certificate.  The proxy will create a false certificate for https://www.google.com, sign it with a different CA it has in its software, and hand the response back.  This “lie” happens in two ways:

  1. The proxy presents itself as the server you request, instead of the actual server you requested.
  2. The certificate handed back with the page response is a different one than what was actually presented by that provider, https://www.google.com in this case.

In this interchange, the browser asks for the real site, the proxy fetches the page on the browser’s behalf, and the response comes back signed by the proxy’s forged certificate rather than the site’s real one.

It sounds strange to phrase the activities of your own network as an “attack”, but this type of interaction is precisely that, and it is widely known in the network security industry as a MITM attack.  As you can see, a different certificate is handed back to the end-user’s browser than the one http://www.example.com actually presented.  Why?  Well, each server certificate that is presented with a response is used to encrypt that data.  Server certificates have what is called a “public key”, which everyone knows and which uniquely identifies the certificate, and they also have a “private key”, known in this example only by the web server.  A public key can be used to encrypt information, but only a private key can decrypt it.  Without an SSL Inspection Proxy – that is, in the normal case – when you make a request to http://www.example.com, the server first sends back the public key of its server certificate to your browser.  Your browser uses that public key to encrypt the request for a specific webpage as well as a ‘password’ of sorts, and sends that back to http://www.example.com.  The server then uses its private key to decrypt the request, processes it, and then uses that ‘password’ (called a session key) to send back an encrypted response.  That doesn’t work so well for an inspection proxy, because this SSL/TLS interchange is designed to thwart any interloper from being able to intercept or see the data transmitted back and forth.

The reason an SSL Inspection Proxy sends a different certificate back is so it can see the request the end-user’s browser is making and know what to pass on to the actual server as it injects itself into this interchange.  Otherwise, once the request came to the proxy, the proxy could not read it, because the proxy wouldn’t have http://www.example.com’s private key.  So, instead, it generates a public/private key pair and makes it appear to be http://www.example.com’s server certificate so it can act on the site’s behalf, and then uses the actual public key of the real server certificate to broker the request onward.
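
One quick way to see whether something is sitting in the middle is to look at the certificate chain your machine actually receives and check who issued it.  A rough PowerShell sketch follows; the hostname is a placeholder, and a corporate inspection proxy would typically show up as an unfamiliar internal CA in the chain rather than the public CA you expect.

# Open a TLS connection and capture the certificate the 'server' actually presents.
$targetHost = 'www.example.com'   # placeholder target
$tcp = New-Object System.Net.Sockets.TcpClient($targetHost, 443)
$ssl = New-Object System.Net.Security.SslStream(
    $tcp.GetStream(), $false,
    ([System.Net.Security.RemoteCertificateValidationCallback]{ $true })  # accept anything; we only want to look
)
$ssl.AuthenticateAsClient($targetHost)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)

# Walk the chain; an SSL Inspection Proxy shows up as an unexpected issuing CA.
$chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
[void]$chain.Build($cert)
$chain.ChainElements | ForEach-Object { $_.Certificate.Subject }

$ssl.Dispose(); $tcp.Close()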

Proposal

The reason an SSL Inspection Proxy can even work is because it signs a fake certificate it creates on-the-fly using a CA certificate trusted by the end user’s browser.  This, sadly, could be a legitimate certificate (called a SubCA certificate), which would allow anyone who purchases a SubCA certificate to create any server certificate they wanted to, and it would appear valid to the end-user’s browser.  Why?  A SubCA certificate is like a regular server certificate, except it can also be used to sign OTHER certificates.  Any system that trusts the CA that created and signed the SubCA certificate would also trust any certificate the SubCA signs.  Because the SubCA certificate is signed by, let’s say, the Diginotar CA, and your web browser is preconfigured to trust that CA, your browser would accept a forged certificate for http://www.example.com signed by the SubCA.  Thankfully, SubCA’s are frowned upon and increasingly difficult for any organization to obtain because they do present a real and present danger to the entire certificate-based security ecosystem.

However, as long as the MITM attacker (or your corporate IT department, in the case of an SSL Inspection Proxy) can coerce your browser into trusting the CA used by the proxy, the proxy can create all the false certificates it wants, sign them with the CA certificate your computer was coerced to trust, and most users would never notice the difference.  All the same visual elements of a secure connection (the green coloration, the padlock icon, and any other indicators shown by the browser) would be present.  My proposal to thwart this:

Website operators should publish a hash of the public key of their server certificate (the certificate thumbprint) as a DNS record.  For DNS top-level domains (TLDs) that are protected with DNSSEC, as long as the DNS record containing the hash for http://www.example.com is cryptographically signed, neither the corporate IT department of local clients nor a network operator could forge a certificate without creating a verifiable mismatch that clients could check for and warn end users about.  Of course, browsers would need to be updated to perform this kind of verification, in the form of a DNS lookup alongside the TLS handshake, but provided their resolvers checked for an additional certificate-thumbprint DNS record, this would be a relatively trivial enhancement to make.
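
To make the idea concrete, here is a rough PowerShell sketch of what the client-side check could look like.  The “_certhash” TXT record name and its contents are entirely hypothetical (they illustrate the proposal, not an existing standard), and www.example.com is just a placeholder host.

# Hypothetical check: compare the thumbprint published in DNS (signed via DNSSEC)
# against the certificate the network actually presents.
$hostName  = 'www.example.com'
$published = (Resolve-DnsName -Name "_certhash.$hostName" -Type TXT -DnssecOk).Strings

$tcp = New-Object System.Net.Sockets.TcpClient($hostName, 443)
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false,
    ([System.Net.Security.RemoteCertificateValidationCallback]{ $true }))
$ssl.AuthenticateAsClient($hostName)
$thumbprint = (New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)).Thumbprint
$ssl.Dispose(); $tcp.Close()

if ($published -contains $thumbprint) {
    'The presented certificate matches the DNS-published thumbprint.'
} else {
    'MISMATCH: the connection may be passing through an SSL Inspection Proxy.'
}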

EDIT (April 15, 2013): There is in fact an IETF working group now addressing this, very close to my original proposal! Check out the work of the DNS-based Authentication of Named Entities (DANE) group here: http://datatracker.ietf.org/wg/dane/.  On February 25, they published a working draft of this proposed solution as the new “TLSA” record.  Great minds think alike. 🙂

 
2 Comments

Posted by on September 15, 2012 in Ethical Concerns, Open Standards, Privacy, Security

 

Tags:

The Long Overdue Case for Signed Emails

A technology more than a decade old is routinely ignored by online banking vendors despite a sustained push to find technologies that counteract fraud and phishing: S/MIME.  For the unaware, S/MIME is a set of standards that define a way to sign and encrypt e-mail messages using a public key infrastructure (PKI), either to prove the identity of the message sender (signing), to scramble the contents of the message so that only the recipient can view it (encryption), or both.  A PKI scheme for secure communications is generally implemented with asymmetric pairs of public and private keys: in a signing scenario, the sender makes their public key available to the world, and anyone can use it to validate that only the corresponding private key could have been used to craft a message.
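
For a sense of how little machinery is involved, below is a minimal PowerShell sketch of signing and then verifying a message body with the .NET SignedCms (PKCS#7) classes that underpin S/MIME.  It assumes a Windows machine with a signing certificate (and its private key) already installed in the current user’s store; a real mail system would wrap the resulting blob in an application/pkcs7-mime MIME part.

Add-Type -AssemblyName System.Security   # loads the System.Security.Cryptography.Pkcs classes

# Pick a certificate with a private key from the current user's store (assumption:
# a certificate suitable for e-mail signing is already installed there).
$cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.HasPrivateKey } | Select-Object -First 1

# Sign the message body.
$body      = [System.Text.Encoding]::UTF8.GetBytes('Your monthly statement is now available.')
$content   = New-Object System.Security.Cryptography.Pkcs.ContentInfo(,$body)
$signedCms = New-Object System.Security.Cryptography.Pkcs.SignedCms($content, $false)
$signedCms.ComputeSignature((New-Object System.Security.Cryptography.Pkcs.CmsSigner($cert)))
$pkcs7 = $signedCms.Encode()             # DER-encoded PKCS#7 blob carried inside the S/MIME part

# The recipient's client decodes the blob and verifies both the signature and the
# signer's certificate chain; CheckSignature throws if either is invalid.
$verify = New-Object System.Security.Cryptography.Pkcs.SignedCms
$verify.Decode($pkcs7)
$verify.CheckSignature($false)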

This secure messaging scheme offers a way for financial institutions to digitally prove that any communication dressed up to look like it came from the institution was in fact crafted by them.  The technology both thwarts falsification of the ‘from’ address a message appears to be sent from and ensures that the content of the message (its integrity) is not compromised by changed facts or figures or by the introduction of other language, links, or malware by any of the various third parties involved in transferring an e-mail from origin to recipient.  The application for financial institutions is obvious in a world where over 95% of all e-mail sent worldwide is spam or a phishing scam.  Such gross abuse of the system threatens to undermine the long-term credibility of the medium, which, in a “green” or paperless world, is the only cost-effective way many financial institutions have to maintain contact with their customers.

So, if the technology is readily available and the potential benefits are so readily apparent, why haven’t digital e-mail signatures caught on in the financial services industry?  I believe there are several culprits here:

1. Education.  End-users are generally unaware of the concept of “secure e-mail”; because implementing digital signatures from a sender’s perspective requires quite a bit of elbow grease, colleagues today don’t send secure messages to each other.  Moreover, most financial institution employees are equally untrained in the concept of secure e-mail, how it works, and, much less, how to explain it to their customers in a way that is understandable and positions it as a competitive advantage.  Financial institutions have an opportunity to take a leadership role with digital e-mail signatures: as one of the most trusted vendors any retail customer will ever have, they can make secure e-mail communications the norm across the industry and drive both education and technology adoption.  Even elderly users and young children understand the importance of the “lock icon” in web browsers before typing in sensitive information such as a social security number, a credit card number, or a password; with proper education, users can learn to demand the same protection afforded by secure e-mail.

2. Lack of Client Support.  Unfortunately, as more users shift from desktop e-mail clients to web-based clients like Gmail and Yahoo Mail, they lose a number of features in these stripped-down, advertising-laden SaaS apps, one of which is often the ability to parse a secure e-mail.  The reasons for this are partially technological (it does take a while to re-invent the wheel that desktop clients like Outlook and Thunderbird mastered long ago), partially a lack of demand due to the aforementioned ‘education’ gap, and partially the unscrupulous motives of SaaS e-mail providers.  The last point deserves special attention because of the significance of the problem: providers of “free” SaaS applications are targeted advertising systems, which increasingly use not just the profile and behavior of end-users to build a targeted promotional experience, but the content of their e-mails as well.  Supporting S/MIME encryption runs counter to the aim of scanning the body of e-mails for context, since a provider supporting secure e-mail, Hotmail for instance, would be unable to peek into messages.  Unfortunately, this deliberate omission of encryption support in online e-mail clients has meant that digital signatures, the other half of the S/MIME technology, are often omitted as well.  In early 2009, Google experimented with adding digital signature functionality to Gmail; however, it was quickly removed after it was implemented.  If users came to demand secure e-mail communications from their financial institutions, these providers would need to play along.

3. Lack of Provider Support.  It’s no secret that most online banking providers have a software offering nearly a decade old, increasingly a mishmash of legacy technologies stitched together with antiquated components and outdated user interfaces into a fragile, minimally working fabric for an online experience.  Most have never gone back to add functionality to core components, such as e-mail dispatch systems, to incorporate technologies like S/MIME.  Unfortunately, because the customers technologically savvy enough to request such functionality represent a small percentage of the customer base, even over ten years later most online banking offerings still neglect to incorporate emerging security technologies.  While a bolt-on internet banking system has moved from a “nicety” to a “must have” for large financial services software providers, the continued lack of innovation and continuous improvement in their offerings is highly incongruent with the needs of financial institutions in an increasingly connected world where security is paramount.

S/MIME digital e-mail signatures are long overdue in the fight against financial account phishing.  As a larger theme, however, financial institutions either need to become better drivers of innovation at stalwart online banking companies to ensure their needs are met in a quickly changing world, or they need to identify the next generation of online banking software providers, who embrace today’s technology climate and incorporate it into their offerings as part of a continual improvement process.

 
Leave a comment

Posted by on June 16, 2010 in Open Standards, Security