
Category Archives: Security

PowerShell one-liner to find outbound connectivity via WinRM

In controlled environments, it’s useful to know when outbound connectivity is not restricted to a predefined list of required hosts, as many standards like PCI require.  Here’s a helpful one-liner that will query your Active Directory instance for computer accounts that are enabled, and then, for each of them, try to connect to a site from that machine, as orchestrated by WinRM.  If you use this script, just know that you will probably see a sea of errors for machines that cannot be reached from your source host via WinRM.  My go-to site for testing non-secure HTTP is asdf.com, but you could use any target and port you desire based on what should not be allowed in your environment.  I have changed the snippet below to example.com (which will not work) so I don’t spam the poor soul who runs asdf.com, but you should replace that with google.com or whatever host to which you wish to verify connectivity.

Invoke-Command -ComputerName (
    Get-ADComputer -Filter {Enabled -eq "True"} -Property Name,Enabled |
        foreach { $_.Name }
) -ScriptBlock {
    Test-NetConnection -Port 80 "example.com" | Select TcpTestSucceeded
}

The output will look something like this:

 TcpTestSucceeded PSComputerName RunspaceId 
 ---------------- -------------- ---------- 
             True YOUR-HOST-1    d5fd044c-c268-460e-a274-d3253adc8ce2 
             True YOUR-HOST-2    98206f71-80c1-4e7e-a467-fec489c542ee 
            False YOUR-HOST-3    d0b6cf57-e833-44a6-a7bb-aebd4d854b5c 
             True YOUR-HOST-4    14af618b-1ca7-4c1f-bb56-ce58dbd4af94
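
If the WinRM errors are too noisy for you, a variation like the one below quietly skips unreachable machines and lists only the hosts that could make the outbound connection.  This is just a sketch: as above, example.com and port 80 are placeholders for whatever target should not be reachable in your environment.

$computers = Get-ADComputer -Filter {Enabled -eq "True"} | foreach { $_.Name }

# Suppress errors for hosts WinRM cannot reach, and suppress the warning
# Test-NetConnection prints when the TCP connection fails.
$results = Invoke-Command -ComputerName $computers -ErrorAction SilentlyContinue -ScriptBlock {
    Test-NetConnection -Port 80 "example.com" -WarningAction SilentlyContinue |
        Select TcpTestSucceeded
}

# Only the machines that made an outbound connection they arguably should not have:
$results | Where-Object { $_.TcpTestSucceeded } | Select PSComputerName, TcpTestSucceeded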

It’s a great sanity check before an audit or after major changes to your network architecture or security controls.  Enjoy!


Posted on June 24, 2017 in Programming, Security

 


SQL Injection with New Relic [PATCHED]


Background

First off, I have found New Relic to be a great application performance monitoring (APM) tool.  Its ability to link transaction performance from the front-end all the way to the back-end database queries that slow your web application is pretty awesome.  This feature lets you see specific queries that are running slowly, including their execution plans and how much time is spent processing various parts of a database request; New Relic’s online documentation includes screenshots of what this interface looks like.

What’s not so awesome is when your APM’s method for retrieving this data creates a SQL injection flaw in your application that wasn’t there before.  In October 2016, I became aware of some strange errors a DBA was hitting while trying to load SQL Server trace files into PSSDiag, due to a formatting problem in the trace files themselves.  Our DBA discovered that unclosed quotation marks were preventing PSSDiag from loading the trace files.  So, how could an unclosed quotation mark even be happening?  It’s a hallmark of a SQL injection exploit, and so I began digging.

It appeared our ORM (NHibernate at the time) was sending unparameterized queries, and one of the field values had an unescaped quotation mark, which was causing the error in PSSDiag.  However, in other cases the same query, unique to one area of our code, would be issued with parameters.  Upon further digging, it actually appeared our application was submitting the same query twice: first as the parameterized version, and a second time with the parameter values substituted into the query string, sandwiched between SET SHOWPLAN_ALL statements.  It looked a bit like this:

exec sp_executesql N'INSERT INTO dbo.Table (A, B, C) 
VALUES (@p0, @p1, @p2);select SCOPE_IDENTITY()'
,N'@p0 uniqueidentifier,@p1 uniqueidentifier, @p2 nvarchar(50)'
,@p0='{Snipped}',@p1='{Snipped}',@p2=N'I don''t even'

Followed by:

SET SHOWPLAN_ALL ON
INSERT INTO dbo.Table (A, B, C)
VALUES ('{Snipped}', '{Snipped}', 'I don't even');select SCOPE_IDENTITY()

As you can see in the first example created by NHibernate, the word “don’t” was properly escaped; however, in the subsequent execution it was not.  This second statement is sent by our very same application process, which New Relic instruments using the ICorProfilerCallback2 profiler hook to retrieve application performance statistics.  But it doesn’t just snoop on the process: it actually hijacks database connections, periodically piggybacking an ‘echo’ of recent requests onto them to retrieve the metrics that populate the slow queries feature.  The SET SHOWPLAN_ALL directive causes the subsequent request not to return data, but only its execution plan.

(DBAs will note this is actually not a reliable way to retrieve this data at all, as parameterized queries can and often do have very different query execution plans when parameter sniffing and lopsided column statistics are in play.  But that’s how New Relic does it.)

This is pretty bad, because now virtually every user-provided input that is sent to your database, even if handled with secure programming practices to avoid SQL injection flaws, becomes vulnerable when New Relic is installed with the Slow Queries feature enabled.  That being said, New Relic does not send this second ‘show plan’ and repeated statement set for every query.  It samples, appending it only onto some executions of any given statement.  An attacker attempting to exploit this would not be able to do so consistently; however, repeated attempts on something like the username field of a login screen, which in many systems is logged to a database table that stores usernames of failed login attempts, would occasionally succeed when the subsequent SHOWPLAN_ALL and unparameterized version of the original query is appended to the request by New Relic.
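
To make the mechanics concrete, here is a purely hypothetical illustration of how the replayed statement loses the protection of parameterization.  The table name and payload are invented for this example, and this is not the specific bypass discussed in the timeline below.  Suppose a login form logs failed attempts, and an attacker types the value shown for @p0 into the username field.  The application sends a safe, parameterized statement:

-- The attacker-supplied value is just data here; the embedded quotes stay escaped.
exec sp_executesql N'INSERT INTO dbo.FailedLogins (Username) VALUES (@p0)'
,N'@p0 nvarchar(200)'
,@p0=N'bob''); SELECT name FROM sys.tables --'

The statement later replayed for the explain plan has the raw value substituted into the SQL text, so the same input now closes the string literal and smuggles in a second statement, with the trailing quote commented out:

SET SHOWPLAN_ALL ON
INSERT INTO dbo.FailedLogins (Username) VALUES ('bob'); SELECT name FROM sys.tables --')

Under SET SHOWPLAN_ALL ON that batch is only compiled rather than executed, which is the protection New Relic initially pointed to; defeating that last line of defense is the part I demonstrated to their security team and am not publishing here.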

Timeline

  • October 5, 2016: Notified New Relic
  • October 5: New Relic acknowledges issue and provides a workaround (disabling explain plans)
  • October 6: New Relic’s application security team responds with details explaining why they believe the issue is not exploitable as a security vulnerability. Their reasoning is based on the expected behavior of SHOWPLAN_ALL, which would not execute subsequent commands
  • October 6: I provide a specific example of how to bypass the ‘protection’ of the preceding SHOWPLAN_ALL statement that confirms this is an exploitable vulnerability.
  • October 6: New Relic confirms the exploit and indicates it is targeted for resolution in their upcoming 6.x version of the New Relic .NET Agent.  I confirm the issue in New Relic .NET Agent 5.22.6.
  • October 7: New Relic indicates they will not issue a CVE for this issue.
  • October 12: New Relic updates us that a fix is still in development, but a new member of their application security team questions the exploitability of the issue.
  • October 12: I provide an updated, detailed exploit to the New Relic security team to demonstrate how to exploit the flaw.
  • November 8: Follow-up call with New Relic security team and .NET product manager on progress.  They confirm they have resolved the issue as of the New Relic .NET Agent 6.3.123.0.
  • November 9: The .NET Agent with the issue fixed is released.
  • May 26, 2017: Public disclosure

Conclusion

First off, I want to applaud New Relic on their speedy response and continued dialogue as we worked through the communication of this issue so they understood how to remediate it.  On our November 8 call, I specifically asked if New Relic would reconsider their stance of not issuing a CVE for the issue, or at least clearly identify 6.3.123.0 as a security update so developers and companies that use this agent would know they needed to prioritize this update.  They thoughtfully declined, and I did inform them that I would then be publicly disclosing the vulnerability if they did not.

Even if I don’t agree with it, I understand the position companies take about not proactively issuing CVE’s.  However, I do believe software creators must clearly indicate when action is needed by their users to update software they provide to resolve security vulnerabilities. Many IT administrators take the ‘if it’s not broken, don’t update it’ approach to components like the New Relic .NET Agent, and if no security urgency is communicated for an update, it could take months to years for it to be updated in some environments.  While some companies may be worried about competitors’ narratives or market reactions to self-disclosing, the truth is vulnerabilities will eventually be disclosed anyway, and providing an appropriate amount of disclosure and timely communications for security fixes is a sign of a mature vulnerability management program within a software company.

Also, if you put any mitigation techniques in place, be sure they actually work.  While working around this issue, we stumbled upon another bug, subsequently fixed in 6.11.613, where trying to turn off the ‘slow query’ analysis feature per the New Relic documentation did not consistently work.

Given the potential gravity of this issue, I have quietly sat on this for almost 7 months to allow for old versions of this agent to be upgraded by New Relic customers, in the name of responsible disclosure.  I have not done any testing on versions of New Relic agents other than the .NET one, but I would implore security researchers to test agents from any APM vendor that collects execution plans as part of their solution for this or similar weaknesses.

 

Posted on May 26, 2017 in Security

 

Security Advisory for Financial Institutions: POODLE

Yesterday evening, Google made public a new form of attack on encrypted connections between end-users and secure web servers that use an old form of encryption technology called SSL 3.0.  This attack could permit an attacker who has the ability to physically disrupt or intercept an end-user’s browser communications to execute a “downgrade attack” that could cause the end-user’s web browser to fall back to the older SSL 3.0 encryption protocol rather than the newer TLS 1.0 or higher protocols.  Once an attacker successfully executed a downgrade attack on an end-user, a “padding oracle” attack could then be attempted to steal user session information such as cookies or security tokens, which could be further used to gain illicit access to an active secure website session.  This particular flaw is termed the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack.  At the time this advisory was authored, US-CERT had not yet published a vulnerability document, but had reserved advisory number CVE-2014-3566 for its publication, expected today.

It is important to know this is not an attack on the secure server environments that host online banking and other end-user services, but rather a form of attack on end-users themselves who are using web browsers that support the older SSL 3.0 encryption protocol.  For an attacker to target an end-user, they would need to be able to capture or reliably disrupt the end-user’s web browser connection in specific ways, which generally limits the scope of this capability to end-user malware, or to attackers who are on the user’s local network or who control significant portions of the networking infrastructure the end-user is using.  Unlike previous security scares in 2014 such as Heartbleed or Shellshock, this attack targets the technology and connections of end-users.  It is one of many classes of attack that target end-users, and it is not the only such risk posed to a user who has an active network attacker specifically targeting them from their local network.

The proper resolution for end-users will be to update their web browsers to versions, not yet released, that completely disable the older and susceptible SSL 3.0 technology.  In the interim, service providers can disable SSL 3.0 support, with the caveat that IE 6 users will no longer be able to access those sites without making special settings adjustments in their browser configuration.  (But honestly, if you are keeping IE 6 a viable option for your end-users, this is only one of many security flaws to which those users are exposed.)  Institutions that run on-premises software systems for their end-users may wish to perform their own analysis of the POODLE SSL 3.0 security advisory and evaluate what, if any, server-side mitigations are available to them as part of their respective network technology stacks.
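
For institutions hosting services on Windows and IIS, one common server-side mitigation is to disable SSL 3.0 in Schannel through the registry.  The sketch below is illustrative only: it requires administrative rights and a reboot, it should be tested against your supported client mix first, and non-Windows platforms have equivalent settings in their web server or load balancer configuration.

# Sketch: disable server-side SSL 3.0 in Schannel on Windows (takes effect after a reboot).
$base = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"
if (-not (Test-Path "$base\SSL 3.0\Server")) {
    New-Item -Path "$base\SSL 3.0\Server" -Force | Out-Null
}
New-ItemProperty -Path "$base\SSL 3.0\Server" -Name "Enabled" -Value 0 -PropertyType DWord -Force | Out-Null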

Defense-in-depth is the key to a comprehensive security strategy in today’s fast-developing threat environment.  Because of the targeted nature of this type of attack, and its prerequisites for a privileged vantage point to interact with an end-user’s network connection, it does not appear to be a significant threat to online banking and other end-user services, and this information is therefore provided as a precaution and for informational purposes only.

All financial institutions should subscribe to US-CERT security advisories and monitor the publication of CVE-2014-3566 for any further recommendations and best practices.  Updated versions of Chrome, Firefox, Internet Explorer, and Safari that remove all support for the older SSL 3.0 protocol will be announced through their respective vendors’ release notification channels.  Until the US-CERT publication is available, refer to the Google whitepaper directly at https://www.openssl.org/~bodo/ssl-poodle.pdf

 

Posted on October 15, 2014 in Security

 

Security Advisory for Financial Institutions: Shell Shock

“Shell Shock” Remote Code Execution and Compromise Vulnerability

Yesterday evening, DHS National Cyber Security Division/US-CERT published CVE-2014-6271 and CVE-2014-7169, outlining a serious vulnerability in a widely used command line interface (or shell) for the Linux operating system and many other *nix variants.  This software bug in the Bash shell allows files to be written on remote devices or remote code to be executed on remote systems by unauthenticated, unauthorized malicious users.  Because the vulnerability involves the Bash shell, some media outlets are referring to this vulnerability as Shell Shock.

Nature of Risk

By exploiting this parsing bug in the Bash shell, an attacker can compromise other software on a vulnerable system, including operating system components such as the OpenSSH server process and the Apache web server process.  Because this attack vector allows an attacker to potentially compromise any element of a vulnerable system, effects ranging from website defacement to password collection, malware distribution, and retrieval of protected system components such as private keys stored on servers are possible, and the US-CERT team has assigned it the highest-impact CVSS rating of 10.0.

Please be specifically aware that a patch was provided to close the issue for the original CVE-2014-6271; however, this patch did not sufficiently close the vulnerability.  The current iteration of the vulnerability is CVE-2014-7169, and any patches applied to resolve the issue should specifically state they close the issue for CVE-2014-7169.  Any devices that are vulnerable and exposed to any untrusted network, such as a vendor-accessible extranet or the public Internet, should be considered suspect, and should be isolated and reviewed by a security team, due to the ability of “worms”, or automated infect-and-spread scripts that exploit this vulnerability, to quickly affect vulnerable systems in an unattended manner.  Any affected devices that contain private keys should have those keys treated as compromised and reissued per your company’s information security policies regarding key management procedures.

Next Steps

All financial institutions should immediately review their own environments to determine that no third-party systems involved in serving or securing the online banking experience, or any other publicly available services, are running vulnerable versions of the Bash shell.  Any financial institution that provides secure services with Linux or *nix variants running a vulnerable version of the Bash shell could be at risk, no matter what their vendor mix.  If any vulnerable devices are found, they should be treated as suspect and isolated per your incident response procedures until they are validated as unaffected or are remediated.  All financial institutions should also immediately and thoroughly review their systems and be prepared to change passwords on, and revoke and reissue certificates with private key components stored on, any compromised devices.
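
For a quick spot check while you inventory systems, the widely circulated test for the original CVE-2014-6271 flaw is the one-liner below.  Keep in mind that a Bash build patched only for CVE-2014-6271 may still be exposed to CVE-2014-7169, so package versions should still be verified against your vendor’s advisory.

# Run on the system being checked. A vulnerable Bash prints "vulnerable" before the
# echoed test string; a patched Bash prints only "this is a test" (often with a
# warning about the ignored function definition).
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"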


 

Posted on September 25, 2014 in Security

 

End-User Credential Security

This week’s announcement that a Russian crime syndicate has amassed 1.2 billion unique usernames and passwords across 420,000 websites would seem like startling news in 72-point font on the front of major newspapers, if it weren’t so sadly commonplace an announcement these days.  With four months of the year still to go, that figure is already higher than the estimated 823 million compromised credentials tied to 2013 breaches from Adobe to Target, and it’s from Black Hat 2014 that I find myself thinking about what we as ISVs, SaaS providers, and security professionals can do to protect users in the wake of advanced persistent threats and organized, well-funded thieves wreaking havoc on the digital identities and real assets of our clients and customers.

Unlike Heartbleed or other server-side vulnerabilities, this particular credential-siphoning technique obviously targeted users themselves to affect so many sites, with at least 542 million unique addresses belonging to perhaps half that many unique users.  Why are users so vulnerable to credential-stealing malware?  To explore this issue, let’s immediately discard a tired refrain heard inside software houses everywhere: users aren’t dumb.  All too often, good application security is watered down to its least secure but most usable denominator out of an overabundance of concern that secure applications may frustrate users, lower adoption, and reduce retention and usage.  It is true that the more accessible the Internet becomes, the wider the spectrum of the audience that uses it, from the most expertly capable to the ‘last mile’ of great-grandparents, young children, and the technologically unsophisticated.  However, this should neither be grounds to dismiss end-user credential security as a concern squarely in the service provider’s court, nor an excuse to fail to provide adequately secure systems.  End-user education is our mutual responsibility, even if that means three more screens, additional prompts to confirm identity or action, or an out-of-band verification process.  Keeping processes as stupefyingly simple as possible because our SEO metrics show that’s the way to marginally improve adoption, reduce cart abandonment, or improve site usage times breeds complacency that ends up hurting us all in the long run.

Can we agree that 1FA needs to end?  In an isolated world of controlled systems, a username and password combination might have been a fair assertion of identity.  Today’s systems, however, are neither controlled nor isolated: the same tablets that log into online banking also run Fruit Ninja for our children, and we pass them over without switching out any concept of identity on a device that can save our passwords and replay them without any authentication.  Small-business laptops often run without real-time malware scanning software, making it easy to harvest credentials through keystroke logging, MitM attacks, cookie stealing, and a variety of other commonplace techniques.  Usernames and passwords fail us because they can be saved and cached just as easily as they can be collected and forwarded to command-and-control servers in Russia or elsewhere.  I’m not one of those anarchists advocating ‘death to the password’ (remember Vidoop?), but using knowledge-based challenges (passwords, out-of-wallet questions, or otherwise) as the sole factor of authentication needs to end.  And it needs to end smartly: sending an e-mail ‘out of band’ to an inbox loaded in another tab on the same machine, or an SMS message read by Google Voice in another tab, means your ‘2FA’ is really just one factor layered twice instead of two-factor authentication.  A few more calls into the call center to help users cope with 2FA will be far cheaper in the long run than the fallout of a major credential breach that affects your site’s users.

We also need to discourage poor password management: allowing users to choose short or non-complex passwords and merely warning them about their poor choices is no excuse; we should just flatly reject them.  At the same time, we need to recognize that forcing users to establish too complex a password will encourage them to establish a small number of complex passwords and reuse them across more sites.  This is one of the largest Achilles’ heels for end-users: when a compromise of one site does occur, and especially if you have removed the option for users to establish a username not tied to their identity (name, e-mail address, or otherwise), you have made it tremendously more likely that those who have gathered credentials from one site can exploit them on yours.  Instead, we should consider nuances to our complexity requirements that would make it likely a user has to generate a different knowledge-based credential for each site.  While that in and of itself may increase the chance a user will ‘write a password down’, a user who stores all their passwords in a password manager is still arguably more secure than the user who uses one password for every website and never writes it anywhere.

Finally, when lists of affected user accounts become available, whether in uploaded databases of raw leaked credentials or testable on sites such as https://haveibeenpwned.com/, ACT.  Identify which of your users overlap with compromised credentials on other sites, and proactively flag or lock their accounts, or at least message them to educate and encourage good end-user credential security.  We cannot unilaterally force users to improve the security of their credentials, but we can educate them; through our inaction, we can also make their eventual folly a certainty.

 
 

The Wires Cannot Be Trusted; Does DRM Have Something to Teach Us?

In the continuing revelations about the depth to which governments have gone to subjugate global communications in terms of privacy, anonymity, and security on the Internet, one thing is very clear: nothing can be trusted anymore.

Before you write this post off as smacking of ‘conspiracy theorist’, consider the Snowden revelations disclosed since Christmas, particularly regarding the NSA’s Tailored Access Operations catalog, which demonstrates the ways they can violate implicit trust in local hardware by infecting firmware at a level where even reboots and factory ‘resets’ cannot remove the implanted malware, or their “interdiction” of new computers, which allows them to install spyware between the time a machine leaves the factory and the time it arrives at your house.  At a broader level, because of the trend in global data movement towards centralizing data transit through a diminishing number of top-tier carriers (a trend eerily similar to wealth inequality in the digital era), governments and pseudo-governmental bodies have found it trivial to exact control with quantum insert attacks.  In these sophisticated attacks, malicious entities (which I define for these purposes as those who exploit trust to gain illicit access to a protected system) like the NSA or GCHQ can slipstream rogue servers that mimic trusted public systems such as LinkedIn to gain passwords and assume identities through ephemeral information gathering, and then use them to attack other systems.

Considering these things, the troubling realization is that this is not the failure of the NSA, the GCHQ, the US presidential administration, or of the lack of public outrage to demand change.  The failure is in the infrastructure of the Internet itself.  If anything, these violations of trust simply showcase technical flaws in the larger system’s architecture that we have chosen not to acknowledge until now.  Endpoint encryption technologies like SSL were supplanted by successive versions of TLS not only because of underlying flaws in cipher strength, but because of protocol assumptions that did not acknowledge all the ways in which the trust of a system, or of the interconnects between systems, could be violated.  The same is true for BGP, which has seen a number of attacks that allow routers on the Internet to be reprogrammed to shunt traffic to malicious entities that can intercept it: a protocol that trusts anything is vulnerable, because nothing can be trusted forever.

When I state nothing can be trusted, I mean absolutely nothing.  Your phone company definitely can’t be trusted; they’ve already been shown to have collapsed under government pressure and given up the keys to their part of the kingdom.  The very wires leading into your house can’t be trusted; they could already be tapped, or someday will be.  Your air-gapped laptop can’t be trusted; it’s being hacked with radio waves.

But individual, private citizens are now facing a challenge Hollywood has faced for years: how do we protect our content?  The entertainment industry has been chided for years over its sometimes Draconian attempts to limit use and restrict access to data by implementing encryption and hardware standards that run counter to the kind of free access analog storage media, like the VHS and cassette tapes of days of old, provided.  Perhaps there are lessons to be learned from their attempts to address the problem of “everything, everybody, and every device is malicious, but we want to talk to everything, everybody, on every device”.  One place to draw inspiration is HDCP, a protocol most people except hardcore AV enthusiasts are unaware of, which establishes device authentication and encryption across each connection of an HD entertainment system.  Who would have thought that when your six-year-old watches Monsters, Inc., those colorful characters are protected by such an advanced scheme on the cord that just runs from your Blu-ray player to your TV?

While you may not believe in DRM for your DVDs from a philosophical or fair-use rights perspective, consider the striking difference with this approach: in the OSI model, encryption occurs at Layer 6, on top of many other layers in the system.  This implies trust of every layer below it, and that is the assumption violated in the headlines from the Guardian and NY Times that have captured our attention the most lately: on the Internet, he who controls the media layers also controls the host layers.  In the HDCP model, the encryption happens more akin to Layer 2, as the protocol expects someone is going to splice a wire to try to bootleg HBO from their neighbor or illicitly pirate high-quality DVDs.  Today, if I gained access to a server closet in a corporate office, there is nothing technologically preventing me from splicing myself into a network connection and copying every packet on it.  The data that is encrypted at Layer 6 will be very difficult for me to make sense of, but there will be plenty of data that is not encrypted that I can use for nefarious purposes: ARP broadcasts, SIP metadata, DNS replies, and all that insecure HTTP or poorly secured HTTPS traffic.  Even worse, it’s a jumping-off point for setting up a MitM attack, such as an SSL inspection proxy.  Similarly, without media-layer security, savvy attackers with physical access to a server closet, or with the ability to coerce or hack into the next hop in the network path, can go undetected if they redirect your traffic into rogue servers or malicious networks, and because there is no chained endpoint authentication mechanism at the media layer, there is no way for you to know.

These concerns aren’t just theoretical, either, and they’re not about protecting teenagers’ rights to anonymously author provocative and mildly threatening anarchist manifestos.  They’re about protecting your identity, your money, your family, and your security.  Only more will be accessible and controllable on the Internet going forward, and without appropriate protections in place, it won’t just be governments who can exploit the assumptions of trust in the Internet’s architecture and implementation for ill, but idealist hacker cabals, organized crime rings, and eventually anyone with the right script-kiddie program, once these vulnerabilities become better known and remain unaddressed.

Why aren’t we protecting financial information or credit card numbers with media-layer security, so they’re at least as safe as Mickey Mouse on your HDTV?

 


When All You See Are Clouds… A Storm Is Brewing

The recent disclosures that the United States Government has violated the 4th Amendment of the U.S. Constitution, and potentially other international law, by building a clandestine program that provides G-Men at the NSA direct taps into every aspect of our digital life (our e-mail, our photos, our phone calls, our entire relationships with other people and even with our spouses) are quite concerning from a technology policy perspective.  The fact that the US Government (USG) can by legal authority usurp any part of our recorded life, which is about every moment of our day, highlights several important points to consider:

  1. Putting the issue of whether the USG/NSA should have broad access into our lives aside, we must accept that the loopholes that allow them to demand this access expose weaknesses in our technology.
  2. The fact the USG can perform this type of surveillance indicates other foreign governments and non-government organizations likely can and may already be doing so as well.
  3. Given that governments are often less technologically savvy, though much more resource-rich, than malevolent actors, if data is not secure from government access, it is most definitely not secure from more cunning hackers, identity thieves, and other criminal enterprises.

If we can accept the points above, then we must accept that the disclosure of PRISM, and what has been implied through carefully but awkwardly worded public statements about the program, presents both a problem and an opportunity for technologists to solve regarding data security in today’s age.  This is not a debate about whether we have anything to hide, but rather a discussion of how we can secure data, because if we cannot secure it from a coercive power (sovereign or criminal), we have no real data security at all.

But before proposing some solutions, we must consider:

How Could PRISM Have Happened in the First Place?

I posit an answer devoid of politics or blame, based instead on an evaluation of the present state of Internet connectivity and e-commerce.  Arguably, the Internet has matured into a stable, reliable set of services.  The more exciting phase of its development saw a flourishing of ideas much like a digital Cambrian explosion.  In its awkward adolescence, connecting to the Internet was akin to performing a complicated rain dance that involved WinSock, dial-up modems, and PPP, sprinkled with roadblocks like busy signals, routine server downtime, and blue screens of death.  The rate of change in equipment, protocols, and software was meteoric, and while the World Wide Web existed (what most laypeople consider wholly as “the Internet” today), it was only a small fraction of the myriad services and channels through which information flowed.  Connecting to and using the Internet required highly specialized knowledge, which both increased the level of expertise of those developing for and consuming the Internet and limited its adoption and appeal, a period some consider the net’s Golden Age.

But as with all complex technologies, eventually they mature.  The rate of innovation slows down as standardization, pushed by market forces, becomes the driving technological force.  As less popular protocols and methods of exchanging information gave way to young but profitable enterprises pushing their preferred technologies, the Internet became a much more homogeneous experience, both in how we connect to it and in how we interact with it.  This shaped not only the fate of now-obsolete tech, such as UUCP, FINGER, ARCHIE, GOPHER, and a slew of other relics of our digital past, but also influenced the very design of what remains: a great example being identification and encryption.

For the Internet to become a commercializable venue, securing access to money, from online banking to investment portfolio management to payments, was an essential hurdle to overcome.  The solution to the general problem of identity and encryption, centralized SSL certificate authorities providing assurances of trust in a top-down manner, solved the problem specifically for central server webmasters, but not for end-users wishing to enjoy the same access to identity management and encryption technology.  So while beneficiaries like Amazon, eBay, PayPal, and company now had a solution that assured their users that their websites belonged to them and that data exchanged with them was secure, end-users were still left with no ability to control secure communications or identify themselves to each other.

A final contributing factor I want to point out is that as other protocols drifted into oblivion, more functionality was demanded over a more uniform channel, with the de facto winner becoming HTTP and the web.  Originally a stateless protocol designed for minimal browsing features, the web became a solution for virtually everything, from e-mail (“webmail”), to searching, to file storage (who has even fired up an FTP client in the last year?).  This was a big win for service providers, who, like Yahoo! and later Google, could build entire product suites on just one delivery platform, HTTP, but it was also a big win for consumers, who could throw away all their odd little programs that performed specific tasks and just use their web browser for everything; now even Grandma can get involved.  A rich field of single-purpose tech companies were bought up or died out in favor of the oligarchs we know today: Microsoft, Facebook, Google, Twitter, and the like.

Subtly, this also represented a huge shift in where data is stored.  Remember Eudora, or your Outlook inbox file tied to your computer (in the days of POP3, before IMAP was around)?  As our web browser became our interface to the online world, and as we demanded anywhere-accessibility to those services and the data they create or consume, those bits moved off our hard drives and into the nebulous service provider cloud, where data security cannot be guaranteed.

This is meaningful to consider in the context of today’s problem because:

  1. Governments and corporate enterprises were historically unable to sufficiently regulate, censor, or monitor the Internet because they lacked the tools and knowledge to do so.  Thus, the Internet had security through obscurity.
  2. Due to the solutions to general problems around identity and encryption relying on central authorities,  malefactors (unscrupulous governments and hackers alike) have fewer targets to influence or assert control over to tap into the nature of trust, identity, and communications.
  3. With the collapse of service providers into a handful of powerful actors, on a scale of inequity on par with the collapse of wealth distribution in America, there now exist fewer providers to surveil in order to gather data, and those providers host more data on each person or business that can be interrelated in a more meaningful way.
  4. As information infrastructure technology has matured to provide virtual servers and IaaS offerings on a massive scale, fewer users and companies deploy controlled devices and servers, opting instead to lease services from cloud providers or use devices, like smartphones, that wholly depend upon them.
  5. Because data has migrated off our local storage devices to the cloud, end-users have lost control over their data’s security.  Users have to choose between an outmoded, device-specific way of accessing their data, or giving up control to cloud service providers.

There Is A Better Way

Over the next few blog posts, I am going to delve into a number of proposals and thoughts around giving control and security assurances of data back to end-users.  These will address points #2 and #4 above with solutions that layer over existing web technologies, not proposals to upend our fundamental usage of the Internet by introducing opaque configuration barriers or whole new paradigms.  End-users should have a choice about whether their service providers have access to their data, in a way that does not require Freenet’s darknets or Tor’s game-of-telephone style of anonymous but slow onion-routed web browsing.  Rather, users should be able to positively identify themselves to the world, send and receive data in a cloud-based application without ever having to give up their data security, do so without having to trust the service provider, access the same service securely from any device, and never have to establish shared secrets (swap passwords or certificates).

As a good example, if you want to send a secure e-mail message today, you have three categorical options to do so:

  1. Implicitly trust a regular service provider:  Ensure both the sender and the receiver use the same server.  The message is then only at risk while the sender connects to the provider to store it and while the receiver connects to the provider to retrieve it.  Both parties trust that the service provider will not access or share the information.  Of course, many providers, like Gmail, still do.
  2. Use a secure webmail provider:  These providers, like Voltage.com, encrypt the sender’s connection to the service to protect the message as it is sent, and send notifications to receivers to come to a secure HTTPS site to view the message.  While this is better than the first option, the message is still stored in a way that can be demanded by subpoena or snooped on inside the company while it sits on their servers.
  3. Use S/MIME certificates and an offline mail client:  While the most secure option for end-to-end message encryption, this cumbersome method is machine-dependent and requires senders and receivers to first share a certificate with each other – something the average user is flatly incapable of understanding or configuring.

Stay tuned for my next post, where I propose a method by which anyone could send me a message securely, knowing nothing about me other than my e-mail address, in a way I could read online or on my mobile device, and in a way that no one could subpoena or snoop on in between.

 

 
 
