
Category Archives: Open Standards

Thwarting SSL Inspection Proxies

A disturbing trend in corporate IT departments everywhere is the introduction of SSL inspection proxies.  This blog post explores some of the ethical concerns about such proxies and proposes a provider-side technology solution to allow clients to detect their presence and alert end-users.  If you’re well-versed in concepts about HTTPS, SSL/TLS, and PKI, please skip down to the section entitled ‘Proposal’.

For starters, e-commerce and many other uses of the public Internet are only possible because messages can be encrypted in transit.  Encryption of information across the World Wide Web is made possible by a suite of cryptographic technologies and practices known as Public Key Infrastructure (PKI).  Using PKI, servers can offer a “secure” variant of the HTTP protocol, abbreviated as HTTPS.  This variant encapsulates application-level protocols, such as HTTP, inside a transport-layer protocol called Secure Sockets Layer (SSL), which has since been superseded by a similar, more secure version, Transport Layer Security (TLS).  Most users of the Internet are familiar with the symbolism of such secure connections: when a user browses a webpage over HTTPS, some visual iconography (usually a padlock) as well as a distinct change in the presentation of the page’s location (usually a green indicator) show the end-user that the page was transmitted over HTTPS.

SSL/TLS connections are protected in part by a server certificate stored on the web server.  Website operators purchase these server certificates from a small number of competing companies, called Certificate Authorities (CAs), that can generate them.  The web browsers we all use are preconfigured to trust certificates that are “signed” by a CA.  The way certificates work in PKI allows certain certificates to sign, or vouch for, other certificates.  For example, when you visit Facebook.com and see that your connection is secure, you can inspect the connection and see that the server certificate Facebook presents is trusted because it is signed by VeriSign, and VeriSign is a CA your browser trusts to sign certificates.

So… what is an SSL Inspection Proxy?  Well, there is a long history of employers and other entities using technology to conduct surveillance of the networks they own.  Most workplace Internet Acceptable Use Policies state clearly that use of the Internet on company-owned machines and company-paid bandwidth is permitted only for business purposes, and that the company reserves the right to enforce this policy by monitoring that use.  While employers can easily review and log all unencrypted traffic that flows over their networks (that is, any request for a webpage and the returned content), the increasing prevalence of HTTPS as a default has frustrated employers in recent years.  Instead of being able to easily monitor the traffic that traverses their networks, they have had to resort to less specific ways to infer usage of secure sites, such as logging DNS queries.

(For those unaware and curious, the Domain Name System (DNS) allows client computers to resolve a domain name, such as Yahoo.com, to its IP address, 72.30.38.140.  DNS traffic is not encrypted, so a network operator can review the requests any computer makes to translate these names to IP addresses and infer where its users are going.  This is a poor way to survey user activity, however, because many applications and web browsers do something called “DNS prefetching”, where they look up name-to-number translations in advance to quickly service user requests, even if the user never visits the site.  For instance, if I visit a page that contains a link to Playboy.com, Google Chrome may look up that name-to-address translation even if I never click the link, just so it can load the page faster if I ever do.)
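To make that DNS step concrete, here is a minimal sketch in Python (the hostname is just an example) of the lookup a client performs before it can open a connection; when the query travels unencrypted over port 53, any resolver or operator on the path can log it:

```python
import socket

def resolve(hostname):
    """Resolve a hostname to its IP addresses, the same lookup a browser
    performs before connecting. The query itself is sent in the clear,
    which is what lets a network operator record it."""
    results = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    print(resolve("www.yahoo.com"))  # e.g. ['72.30.38.140', ...]
```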

So, employers and other network operators are turning to technologies that are ethically questionable, such as Deep Packet Inspection (DPI), which looks into all the application traffic you send to determine what you might be doing, and to the downright unethical practice of deploying SSL Inspection Proxies.  Now, I concede I have an opinion here: SSL Inspection Proxies are evil.  I justify that assertion because an SSL Inspection Proxy causes your web browser to lie to its end-user, giving them a false assurance of security.

What exactly are SSL Inspection Proxies?  SSL Inspection Proxies are servers set up to execute a Man-In-The-Middle (MITM) attack on a secure connection, on behalf of your ISP or corporate IT department snoops.  When such a proxy exists on your network and you make a secure request for https://www.google.com, the network redirects your request to the proxy.  The proxy then makes a request to https://www.google.com for you, returns the results, and then does something very dirty: it creates a lie in the form of a bogus server certificate.  The proxy creates a false certificate for https://www.google.com, signs it with a different CA it has in its software, and hands the response back.  This “lie” happens in two ways:

  1. The proxy presents itself as the server you requested, instead of the actual server.
  2. The certificate handed back with the page response is different from the one actually presented by the provider, https://www.google.com in this case.

This interchange would look like this:

It sounds strange to phrase the activities of your own network as an “attack”, but this type of interaction is precisely that, and it is widely known in the network security industry as a MITM attack.  As you can see, a different certificate is handed back to the end-user’s browser than the one http://www.example.com actually presented in the image above.  Why?  Well, each server certificate that is presented with a response is used to encrypt that data.  Server certificates have what is called a “public key” that everyone knows and which uniquely identifies the certificate, and they also have a “private key”, known in this example only by the web server.  A public key can be used to encrypt information, but only the private key can decrypt it.  Without an SSL Inspection Proxy, that is, in the normal case, when you make a request to http://www.example.com, the site first sends the public key of its server certificate back to your browser.  Your browser uses that public key to encrypt both the request for a specific webpage and a ‘password’ of sorts, and sends that back to http://www.example.com.  The server then uses its private key to decrypt the request, processes it, and uses that ‘password’ (called a session key) to send back an encrypted response.  That doesn’t work so well for an inspection proxy, because this SSL/TLS interchange is designed to thwart any interloper from being able to intercept or see the data transmitted back and forth.
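To illustrate that one-way property, here is a small sketch using the third-party Python cryptography package.  It is a deliberate simplification of the real handshake (which also involves certificate validation and negotiated session keys), but it shows that material encrypted with a public key can only be recovered with the matching private key:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The server holds the private key; the public key is what the
# certificate distributes to every client.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# A client encrypts a secret (think: the 'session key') with the public key...
ciphertext = public_key.encrypt(b"session key material", oaep)

# ...and only the holder of the private key can recover it.
assert private_key.decrypt(ciphertext, oaep) == b"session key material"
```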

The reason an SSL Inspection Proxy sends a different certificate back is so it can read the request the end-user’s browser is making and know what to pass on to the actual server as it injects itself into this interchange.  Otherwise, once the request reached the proxy, the proxy could not read it, because the proxy wouldn’t have http://www.example.com’s private key.  So, instead, it generates its own public/private key pair, makes it appear to be http://www.example.com’s server certificate so it can act on the site’s behalf, and then uses the actual public key of the real server certificate to broker the request onward.
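One practical way to spot such a proxy today is simply to look at the certificate your machine actually receives.  A minimal sketch with Python’s standard library follows (the hostname is just an example); behind an inspection proxy whose CA has been pushed into your trust store, the issuer printed here would be the proxy’s own CA rather than a public CA:

```python
import socket
import ssl

def describe_presented_certificate(hostname, port=443):
    """Connect to a host and report the certificate the client actually sees."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed fields of the presented certificate
    subject = dict(item for rdn in cert["subject"] for item in rdn)
    issuer = dict(item for rdn in cert["issuer"] for item in rdn)
    return subject.get("commonName"), issuer.get("commonName", issuer.get("organizationName"))

if __name__ == "__main__":
    subject_cn, issuer_cn = describe_presented_certificate("www.example.com")
    print("subject:", subject_cn)
    print("issuer: ", issuer_cn)
```

Running the same check from inside and outside the corporate network and comparing the issuers makes the substitution obvious.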

Proposal

The reason an SSL Inspection Proxy can even work is that it signs the fake certificate it creates on-the-fly with a CA certificate trusted by the end-user’s browser.  This, sadly, could be a legitimate certificate (called a SubCA certificate), which would allow anyone who purchases a SubCA certificate to create any server certificate they wanted, and it would appear valid to the end-user’s browser.  Why?  A SubCA certificate is like a regular server certificate, except it can also be used to sign OTHER certificates.  Any system that trusts the CA that created and signed the SubCA certificate also trusts any certificate the SubCA signs.  Because the SubCA certificate is signed by, let’s say, the DigiNotar CA, and your web browser is preconfigured to trust that CA, your browser would accept a forged certificate for http://www.example.com signed by the SubCA.  Thankfully, SubCAs are frowned upon and increasingly difficult for any organization to obtain, because they present a real and present danger to the entire certificate-based security ecosystem.

However, as long as the MITM attacker (or your corporate IT department, in the case of an SSL Inspection Proxy) can coerce your browser to trust the CA used by the proxy, the proxy can create all the false certificates it wants, sign them with the CA certificate your computer was coerced to trust, and most users would never notice the difference.  All the same visual elements of a secure connection (the green coloration, the padlock icon, and any other indicators the browser provides) would be present.  My proposal to thwart this:

Website operators should publish a hash of the public key of their server certificates (the certificate thumbprint) as a DNS record.  For DNS top-level domains (TLDs) that are protected with DNSSEC, as long as the DNS record that contains the hash for http://www.example.com is cryptographically signed, neither the corporate IT department of local clients nor a network operator could forge a certificate without creating a verifiable breach that clients could check for and then warn end-users about.  Of course, browsers would need to be updated to perform this kind of verification, in the form of a DNS lookup in conjunction with the TLS handshake, but provided their resolvers checked for an additional certificate-thumbprint DNS record, this would be a relatively trivial enhancement to make.
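A rough sketch of what such a client-side check might look like follows.  It assumes, purely for illustration, that the operator publishes the SHA-256 thumbprint of its certificate in a TXT record at _443._tcp.<host> and that the zone is DNSSEC-signed; the record name and format are hypothetical, and the DNS lookup uses the third-party dnspython package:

```python
import hashlib
import socket
import ssl

import dns.resolver  # third-party: dnspython


def presented_cert_thumbprint(hostname, port=443):
    """SHA-256 hash of the DER certificate the client actually received."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()


def published_thumbprint(hostname):
    """Hypothetical record: the operator publishes the thumbprint in DNS."""
    answer = dns.resolver.resolve(f"_443._tcp.{hostname}", "TXT")
    rdata = next(iter(answer))
    return rdata.strings[0].decode().lower()


def connection_is_clean(hostname):
    """True when the certificate presented matches the one the operator published."""
    return presented_cert_thumbprint(hostname) == published_thumbprint(hostname)
```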

EDIT (April 15, 2013): There is in fact an IETF working group now addressing this, very close to my original proposal! Check out the work of the DNS-based Authentication of Named Entities (DANE) group here: http://datatracker.ietf.org/wg/dane/ On February 25, they published a working draft of this proposed solution as the new “TLSA” record.  Great minds think alike. 🙂
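For completeness, a DANE TLSA record can be queried the same way.  This sketch (again using the third-party dnspython package, against an example name that may not actually publish such a record) just prints the certificate-association data the zone publishes, which a DANE-aware client would compare against the certificate presented during the handshake:

```python
import binascii

import dns.resolver  # third-party: dnspython

# TLSA records live at _<port>._<proto>.<host>; the usage, selector and
# matching-type fields describe how 'cert' should be compared against the
# server's certificate.
for rdata in dns.resolver.resolve("_443._tcp.www.example.com", "TLSA"):
    print(rdata.usage, rdata.selector, rdata.mtype,
          binascii.hexlify(rdata.cert).decode())
```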

 

Posted on September 15, 2012 in Ethical Concerns, Open Standards, Privacy, Security

 


The Long Overdue Case for Signed Emails

A technology more than a decade old is routinely ignored by online banking vendors despite a sustained push to find technology that counteracts fraud and phishing: S/MIME.  For the unaware, S/MIME is a set of standards that define a way to sign and encrypt e-mail messages using a public key infrastructure (PKI), either to prove the identity of the message sender (signing), to encrypt the contents of the message so that only the recipient can view it (encryption), or both.  The use of a PKI scheme to create secure communications is generally implemented with asymmetric pairs of public and private keys; in a signing scenario, the sender of messages makes their public key available to the world, and anyone can use it to validate that only the corresponding private key could have been used to craft a message.
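As a concrete illustration of the signing half, here is a minimal sketch using the third-party Python cryptography package.  It assumes the institution already has a signing certificate and private key on disk (the file names are placeholders); the output is a standard multipart/signed S/MIME message that a capable mail client can verify:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

# Placeholder paths: the institution's signing certificate and private key.
with open("bank-signing-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("bank-signing-key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

body = b"Dear customer,\r\n\r\nYour statement is ready.\r\n"

# Produce a detached S/MIME signature over the message body.
signed_message = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(body)
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)

print(signed_message.decode())  # multipart/signed MIME, ready to hand to an SMTP sender
```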

This secure messaging scheme offers a way for financial institutions to digitally prove that any communication dressed up to look like it came from the institution was in fact crafted by them.  The technology both thwarts falsification of the ‘from’ address from which a message appears to be sent and ensures the integrity of the message content is not compromised by any changing of facts or figures or the introduction of other language, links, or malware by any of the various third parties involved in transferring an e-mail from origin to recipient.  The application for financial institutions is obvious in a world where over 95% of all e-mail sent worldwide is spam or a phishing scam.  Such gross abuse of the system threatens to undermine the long-term credibility of the medium, which, in a “green” or paperless world, is the only cost-effective way many financial institutions have to maintain contact with their customers.

So, if the technology is readily available and the potential benefits are so readily apparent, why haven’t digital e-mail signatures caught on in the financial services industry?  I believe there are several culprits here:

1. Education. End-users are generally unaware of the concept of “secure e-mail”; since implementing digital signatures from a sender’s perspective requires quite a bit of elbow grease, colleagues today don’t send secure messages to each other.  Moreover, most financial institution employees are equally untrained in the concept of secure e-mail and how it works, much less how to explain it to their customers in a way that makes it understandable as well as a competitive advantage.  Financial institutions have an opportunity to take a leadership role with digital e-mail signatures: as one of the most trusted vendors any retail customer will ever have, they could make secure e-mail communication a norm across the industry and drive both education and technology adoption.  Even elderly users and young children understand the importance of the “lock icon” in web browsers before typing in sensitive information such as a social security number, a credit card number, or a password; with proper education, users can learn to demand the same protection afforded by secure e-mail.

2. Lack of Client Support.  Unfortunately, as more users shift from desktop e-mail clients to web-based e-mail clients like Gmail and Yahoo Mail, they lose a number of features in these stripped-down, advertising-laden SaaS apps, one of which is often the ability to parse a secure e-mail.  The reasons for this are partially technological (it does take a while to re-invent the same wheel desktop client applications like Outlook and Thunderbird mastered long ago), partially a lack of demand due to the aforementioned ‘education’ problem, and partially the unscrupulous motives of SaaS e-mail providers.  The last point I want to call special attention to because of the significance of the problem: providers of “free” SaaS applications are targeted advertising systems, which increasingly use not just the profile and behavior of end-users to develop a targeted promotional experience, but the content of their e-mails to understand a user’s preferences.  Supporting S/MIME encryption runs counter to the aim of scanning the body of e-mails for context, since a provider such as Hotmail would be unable to peek into encrypted messages.  Unfortunately, this deliberate omission of encryption support in online e-mail clients has meant that digital signatures, the second part of the S/MIME technology, are often also omitted.  In early 2009, Google experimented with adding digital signature functionality to Gmail; however, it was quickly removed after it was implemented.  If users came to demand secure e-mail communications from their financial institutions, these providers would need to play along.

3. Lack of Provider Support.  It’s no secret most online banking providers have a software offering nearly a decade old, which is increasingly a mishmash of legacy technologies stitched together with antiquated components and outdated user interfaces to create a fragile, minimally working fabric for an online experience.  Most have never gone back to add functionality to core components, like e-mail dispatch systems, to incorporate technologies like S/MIME.  Unfortunately, because the customers who are technologically savvy enough to request such functionality represent a small percentage of their customer base, even over ten years later most online banking offerings still neglect to incorporate emerging security technologies.  While a bolt-on internet banking system has moved from a “nicety” to a “must have” for large financial services software providers, the continued lack of innovation and continuous improvement in their offerings is highly incongruent with the needs of financial institutions in an increasingly connected world where security is paramount.

S/MIME digital e-mail signatures are long overdue in the fight against financial account phishing.  As a larger theme, financial institutions either need to become better drivers of innovation at stalwart online banking companies to ensure their needs are met in a quickly changing world, or they need to identify the next generation of online banking software providers, who embrace today’s technology climate and incorporate it into their offerings as part of a continual improvement process.

 

Posted on June 16, 2010 in Open Standards, Security

 

Facebook OpenGraph: A Good Laugh or a Chilling Cackle?

If you want to sell a proprietary technology for financial gain, or to increase user adoption for eventual financial gain once a model is monetized, the hot new thing is to call it “open” and assign the intellectual property rights to insignificant portions of the technology to a “foundation”.  The most recent case in point to fly across my radar is Facebook’s OpenGraph, a new ‘standard’ the company is putting forward to replace its existing Facebook Connect technology, a system by which third parties could integrate a limited number of Facebook features into their own sites, including authentication and “Wall”-like communication on self-developed pages and content.  The impetus for Facebook to create such a system is rather straightforward: by joining other players in the third-party authentication product space, such as Microsoft’s Windows Live ID, TriCipher’s myOneLogin, or OpenID, it can, at minimum, drive your traffic to its site for authentication, where it requires you to register for an account and log in.  These behemoths have much grander visions, though, for there’s a lot more in your wallet than your money: your identity is priceless.

Facebook and other social networking players make the majority of their operating income from targeted advertising, and displaying ads to you during or after the login process is just the beginning.  Knowing where you came from as you arrive at their doorstep to authenticate lets them build a profile of your work, your interests, or your questionable pursuits based on what comes through the browser “referrer header”, a header most modern web browsers send that tells a page “I came to your site through a link on site X”.  But, much more than that, these identity integration frameworks often require rich information describing the content of the site you were at, or even metadata that site collected about you that further identifies or profiles you, as part of the transaction that brings you to the third-party authentication page.  This information is critical to building value in a targeted marketing platform, which is all Facebook really is, with a few shellacs of paint and Mafia Wars added for good measure to keep users around and viewing more ads.

OpenGraph, the next iteration from their development shop with the same aim, greatly expands both the flexibility of the Facebook platform and the amount and type of information it collects on you.  For starters, the specification proposes that content providers and webmasters annotate their web pages with Facebook-specific markup that improves the semantic machine-readability of the page.  This will make web pages appear to light up and become interactive when viewed by users who have Facebook accounts, provided either the content provider has enabled custom JavaScript libraries that make behind-the-scenes calls to the Facebook platform or the user runs a Facebook plug-in in their browser, which does the same.  (An interesting aside: should Facebook also decide to enter the search market, they will have a leg up on a new content metadata system they’ve authored, though Google will almost certainly, albeit quietly, be noting and indexing these new fields too.)
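To make the “machine readability” point concrete, here is a small Python sketch that pulls the kind of og: properties the protocol defines out of a page.  The property names are real Open Graph fields, but the sample page itself is invented for illustration:

```python
from html.parser import HTMLParser


class OpenGraphParser(HTMLParser):
    """Collects <meta property="og:..." content="..."> tags from a page."""

    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if prop.startswith("og:"):
            self.properties[prop] = attrs.get("content", "")


# Invented sample page, annotated the way OpenGraph asks publishers to do.
sample_page = """
<html><head>
  <meta property="og:title" content="How to Find a New Job at a Competitor" />
  <meta property="og:type" content="article" />
  <meta property="og:url" content="http://www.example.com/Page.html" />
  <meta property="og:site_name" content="Example Careers" />
</head><body>...</body></html>
"""

parser = OpenGraphParser()
parser.feed(sample_page)
print(parser.properties)
# {'og:title': 'How to Find a New Job at a Competitor', 'og:type': 'article', ...}
```

Anyone who can read the page, not just Facebook, gets this structured description of what the page is about, which is exactly what makes the markup so valuable for profiling.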

However, even users not intending to reveal their web wanderings to Facebook do so when content providers add a ‘Like’ button to their web pages.  Both the IFRAME and JavaScript implementations of this make subtle calls back to Facebook, either to retrieve the Like image or to retrieve the face of a friend or the author to display.  Those who know what “clearpixel.gif” means realize this is just a ploy to use the delivery of a remotely hosted asset to mask the tracking of a user impression: when my browser makes a call to your server to retrieve an image, you not only give me the image, you also learn my IP address, which in today’s GeoIP-coded world means that, even if I’m not on a mobile device, you know roughly where I am by my IP alone.  If I am on a mobile device with an updated (HTML5) browser, then through Geolocation you may know precisely where I am, as reported by the GPS in my phone.  Suddenly, impression tracking becomes way cooler, and way more devious, as you can dynamically see where in the world viewers are looking at which content providers, all for the value of storing a username or password… or, if I never actually logged in, for no value added at all.  In fact, the content providers just gave this information away for free.

Now, wait for it…  what about this new OpenGraph scheme?  Using this scheme, Facebook can not only know where you are and what you’re looking at, but also who you are and the meaning behind what you’re looking at, through their proprietary markup combined with OpenID’s Immediate Mode, triggered through AJAX technology.  Combined with the rich transfer of metadata through JSON, detailing specific fields that describe content rather than just a URL reference, instead of knowing only what they could know a few years ago, such as “A guy in Dallas is viewing http://www.example.com/Page.html”, they know “Sean McElroy is at 32°46′58″N 96°48′14″W, and he’s looking at a page about ‘How to Find a New Job at a Competitor’, which was created by CareerBuilder”.  That information has to be useful to someone, right?

I used to think, “Hrm, I was sharing pictures and status updates back in 2001, what’s so special about Facebook?”, and now I know.  Be aware of social networking technology; it’s a great way to connect to friends and network with colleagues, but with it, you end up with a lot more ‘friends’ watching you than you knew you ever had.

References:

http://www.facebook.com/advertising/?connect

http://opengraphprotocol.org/

http://developers.facebook.com/docs/opengraph

http://openid.net/specs/openid-authentication-2_0.html

 

Structure vs. Creativity

The other day I was speaking with a friend on the east coast about some of the nuances of the HTTP protocol and the HTML/XHTML standards, which have changed over time; the conversation diverged, after I had answered his immediate question, into reminiscing about the actual content available today over the Internet.  This short session of remembering the “good old days” of circa 1995 got me thinking over the past few days about what really has changed on a fundamental level within the content of the World Wide Web itself over the past 15 years.

For one, search engines have dramatically improved.  For those who remember OpenText and AltaVista as some of the only search engines that allowed free-form queries of pages, rather than finding sites through canonical directories as Yahoo! once exclusively provided, getting a relevant search result was truly an art.  Finding a site on “june bugs” could yield any page that contained either word, whether contextually used together or simply talking about bugs while mentioning temperatures in the month of June.  Using tools that simply indexed words on pages required understanding a whole metalanguage for requiring and disallowing certain words or hosting domain names and for grouping words with Boolean operators.  Even with a mastery of the techniques needed to coerce relevant results, one usually had to wade through pages of them; I recall configuring AltaVista in particular to show 100 results per page so I could find results more quickly, since each successive “show next page” request took a while over my 14.4 modem.  Today, I rarely scroll down past the first three results on Google, much less ask for another page of results.

Second, we use very few client tools today to access content on the Internet.  Fifteen years ago, I fired up Trumpet Winsock to access the Internet on Windows 3.0, PowWow for IM, Netscape for web browsing, Eudora for e-mail and USENET newsgroups, wsFTP to actually download files, and HyperComm to connect to remote systems and surf non-HTTP sites.  Today, virtually everything 99% of Internet denizens do is within a web browser, from search, to downloading, to chatting.  Traffic has moved from various communication and sometimes vendor-specific protocols to a smaller subset of standards, mostly based on HTTP and XML, and therefore our browsers have turned into Swiss Army knives to tackle everything a user needs.

Third, collaboration in non-transient mediums has drastically changed.  If I wanted to share an idea 15 years ago, I opened my trusty Windows Notepad, typed up a quick HTML page (because, who doesn’t know hypertext markup language?), and FTP’ed it to my Geocities account to share with the world.  Someone, somewhere, would eventually construct a search query that linked to my page, or eventually my page might be included in a directory such as Yahoo!’s old format, and if that person wanted to praise or criticize my content, they could do the same thing on their own personal web page and link to my site.  Content was scattered across hundreds of different hosting providers, in visual designs and contextual organization that varied widely from page to page.  Today, the advent of wikis and other Web 2.0 collaborative workspaces has drastically lowered the knowledge barrier to entry for participating in the exchange of ideas; virtually anything you’d ever want to know about has a Yahoo! or LiveJournal group for it.  Web 2.0 truly is, as Tim Berners-Lee has argued, just jargon for the same thing we’ve been doing through CGI for over 15 years to make sites interactive and collaborative.  The “Web 2.0” buzzword doesn’t represent any fundamental technology evolution, but simply a proliferation of what has been available for a very long time.

So, I haven’t told anyone who has been around at least as long as I have anything they didn’t already know; but these three aspects highlight the fundamental change I see: as the Internet expands its reach, particularly to a new generation unfamiliar with the technical framework it is built upon, since such an understanding is no longer needed for its basic use, we are seeing a shift from a “loosely coupled, poorly organized” body of information to a “structured and organized” one.  Especially important, in my opinion, is that this shift is changing the quantity of content.  By virtue of writing these thoughts on a WordPress blog, I’ve chosen convenience over creativity.  I could write a web page and style it as I wish just as easily as type these thoughts, but I have made a conscious decision not to use my knowledge of HTML and FTP and to make this easier for other users to casually find, since I syndicate this blog onto my LinkedIn feed.  Consequently, I realize my thoughts may be littered with interjected advertisements by those providing this ‘convenience’, and I accept the limitations of the format: I cannot express my thoughts outside of the framework WordPress has provided for me.  Now, WordPress is pretty flexible, and I probably wouldn’t otherwise use advanced markup that I know WordPress cannot support; however, the limitations become much more pronounced in more popular formats.  A quick export and tally of my Facebook friends’ News Feed shows that 72% of the content my friends have written in the past two months consists of status updates.  Another 10% is photograph uploads, and the remaining 18% is a collection of ‘apps’, such as Mafia Wars and Farmville, which litter my ‘content’ feed with self-promoting advertisements for the apps themselves.  When I tried to paste this post into the ‘status update’ box on Facebook, I received an error stating that my formatting would be lost and that my content was too long.  Similarly, were I to microblog exclusively through a service like Twitter, the richness of my thoughts would be limited to 140 characters and stripped of all multimedia.  I now receive over 1,000 tweets a day from fewer than 20 friends, most of whom produce content no other way.  I say “content” loosely with regard to Twitter and Facebook, as the quality of posts in such limited space, versus personal web presences and blogs, is akin to the difference between the content of a lecture and the content of casual conversation, where one party may simply reply with “Okay.  Right.”  Posting much and often is a far cry from sharing thoughts and ideas.

It is my sincere desire that as we seek continued convenience to reach wider audiences and connect them in engaging communities, we do not let our desire for structure and searchability constrain the richness of our thought.  Similarly, among future generations we are quickly losing a level of technical aptitude still very relevant on today’s Internet, because they lack any necessity to understand the technologies we use to create the easy-to-use sites that attract the masses.  These skills, of the underlying protocols and interactions common to the whole infrastructure, aren’t taught in any university, and are specialty subjects at technical trade schools.  If we are going to embrace structure for accessibility’s sake, we must be careful not to box ourselves in creatively now, and then pass an empty box to the next generation.

 

Posted on January 2, 2010 in Open Standards, Social Media