
Category Archives: Social Responsibility

End-User Credential Security

This week’s announcement that a Russian crime syndicate has amassed 1.2 billion unique username and password combinations across 420,000 websites would be startling news in 72-point font on the front page of major newspapers, if it weren’t so sadly commonplace these days.  With four months of the year still to go, that figure already tops the estimated 823 million credentials compromised in 2013 breaches ranging from Adobe to Target.  Writing from Black Hat 2014, I find myself thinking about what we as ISVs, SaaS providers, and security professionals can do to protect users in the wake of advanced persistent threats and organized, well-funded thieves wreaking havoc on the digital identities and real assets of our clients and customers.

Unlike Heartbleed and other server-side vulnerabilities, this particular credential-siphoning campaign targeted users themselves, which is how it touched so many sites and at least 542 million unique e-mail addresses, affecting at least half that many unique users.  Why are users so vulnerable to credential-stealing malware?  To explore this issue, let’s immediately discard a tired refrain heard inside software houses everywhere: users aren’t dumb.  All too often, good application security is watered down to its least secure but most usable denominator out of an overabundance of concern that secure applications may frustrate users, lower adoption, and reduce retention and usage.  It is true that the more accessible the Internet becomes, the wider the spectrum of the audience that uses it, from the most expertly capable to the ‘last mile’ of great-grandparents, young children, and the technologically unsophisticated.  But that is neither grounds to dismiss end-user credential security as a concern squarely in the service provider’s court nor an excuse to fail to provide adequately secure systems.  End-user education is our mutual responsibility, even if that means three more screens, additional prompts to confirm identity or action, or an out-of-band verification process.  Keeping processes as stupefyingly simple as possible because our analytics show that’s the way to marginally improve adoption, reduce cart abandonment, or improve site usage times breeds complacency that ends up hurting us all in the long run.

Can we agree that 1FA needs to end?  In an isolated world of controlled systems, a username and password combination might have been a fair assertion of identity.  Today’s systems, however, are neither controlled nor isolated – the same tablets that log into online banking also run Fruit Ninja for our children, and we pass them over without switching out any concept of identity on a device that can save our passwords and replay them without any authentication.  Small-business laptops often run without real-time malware scanning, making it easy to harvest credentials through keystroke logging, MitM attacks, cookie stealing, and a variety of other commonplace techniques.  Usernames and passwords fail us because they can be saved and cached just as easily as they can be collected and forwarded to command-and-control servers in Russia or elsewhere.  I’m not one of those anarchists advocating ‘death to the password’ (remember Vidoop?), but using knowledge-based challenges (passwords, out-of-wallet questions, or otherwise) as the sole factor of authentication needs to end.  And it needs to end smartly: sending an e-mail ‘out of band’ to an inbox loaded in another tab on the same machine, or an SMS message read by Google Voice in another tab, means your ‘2FA’ is really just one factor layered twice rather than two-factor authentication.  A few more calls into the call center to help users cope with 2FA will be far cheaper in the long run than the fallout of a major credential breach that affects your site’s users.
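
As an illustration of what a genuinely separate second factor involves, here is a minimal sketch of TOTP verification (the algorithm behind most authenticator apps, per RFC 6238) using only the Python standard library; the secret, digit count, and drift window are illustrative choices, not any particular vendor’s parameters.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((for_time if for_time is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(secret_b32, submitted, drift_steps=1):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + step * 30), submitted)
        for step in range(-drift_steps, drift_steps + 1)
    )

if __name__ == "__main__":
    # The secret is provisioned once to the user's device (e.g., via QR code);
    # the server later verifies codes generated on that device, out of band from the browser.
    SECRET = "JBSWY3DPEHPK3PXP"  # demo value only
    print(verify_totp(SECRET, totp(SECRET)))  # True
```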

We also need to discourage poor password management: allowing users to choose short or non-complex passwords and merely warning them about their poor choices is no excuse – we should flatly reject them.  At the same time, we need to recognize that forcing users to establish overly complex passwords encourages them to create a small number of complex passwords and reuse them across more sites.  This is one of the biggest Achilles’ heels for end-users: when a compromise of one site does occur, and especially if you have removed the option for users to establish a username not tied to their identity (name, e-mail address, or otherwise), you have made it far more likely that those who gathered credentials from that site can exploit them on yours.  Instead, we should consider nuances to our complexity requirements that make it likely a user will have to generate a different knowledge-based credential for each site.  While that in and of itself may increase the chance a user will ‘write a password down’, a user who stores all their passwords in a password manager is still arguably more secure than the user who uses one password for every website and never writes it anywhere.
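
As a sketch of what ‘flatly reject’ could look like at registration time, assuming illustrative thresholds (the length minimum, character-class rule, and deny-list below are examples, not a standard):

```python
import re

# Illustrative policy values; tune for your own threat model and audience.
MIN_LENGTH = 12
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "iloveyou"}

def password_problems(candidate):
    """Return a list of reasons to reject the password; empty means acceptable."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if candidate.lower() in COMMON_PASSWORDS:
        problems.append("is on a list of commonly used passwords")
    classes = sum(bool(re.search(p, candidate)) for p in (r"[a-z]", r"[A-Z]", r"\d", r"[^\w\s]"))
    if classes < 3:
        problems.append("must mix at least three of: lowercase, uppercase, digits, symbols")
    return problems

# Reject, don't warn: the account is not created until the list comes back empty.
issues = password_problems("puppies123")
if issues:
    print("Rejected:", "; ".join(issues))
```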

Finally, when lists of affected user accounts become available, whether as uploaded databases of raw leaked credentials or as lookups on sites such as https://haveibeenpwned.com/ – ACT.  Find out which of your users overlap with credentials compromised on other sites, and proactively flag or lock their accounts, or at least message them to educate and encourage good end-user credential security.  We cannot unilaterally force users to improve the security of their credentials, but we can educate them, and what we must not do is make their eventual folly certain through our inaction.
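
One way to act on such lists, sketched here against what I understand to be Have I Been Pwned’s v3 ‘breachedaccount’ endpoint; the API key, rate limit, and exact response shape are assumptions to verify against the service’s current documentation.

```python
import time
import urllib.error
import urllib.parse
import urllib.request

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}?truncateResponse=true"
API_KEY = "YOUR-HIBP-API-KEY"  # placeholder; obtain a real key from the service

def account_is_breached(email):
    """Return True if HIBP reports the address in at least one breach (404 means no hits)."""
    req = urllib.request.Request(
        HIBP_URL.format(account=urllib.parse.quote(email)),
        headers={"hibp-api-key": API_KEY, "user-agent": "credential-hygiene-check"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Proactively flag (or force a reset for) overlapping accounts rather than waiting.
for email in ("user1@example.com", "user2@example.com"):
    if account_is_breached(email):
        print(f"{email}: appears in a known breach -- flag account and notify the user")
    time.sleep(2)  # stay under the service's rate limit
```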

 
 

The Wires Cannot Be Trusted; Does DRM Have Something to Teach Us?

In the continuing revelations about the depth to which governments have gone to subvert the privacy, anonymity, and security of global communications on the Internet, one thing is very clear: nothing can be trusted anymore.

Before you write this post off as smacking of conspiracy theory, consider the Snowden revelations disclosed since Christmas, particularly the NSA’s Tailored Access Operations catalog, which demonstrates the ways they can violate implicit trust in local hardware by infecting firmware at a level where even reboots and factory ‘resets’ cannot remove the implanted malware, or their “interdiction” of new computers, which lets them install spyware between the time a machine leaves the factory and the time it arrives at your house.  At a broader level, because of the trend toward centralizing global data transit through a diminishing number of top-tier carriers – a trend eerily similar to wealth inequality in the digital era – governments and pseudo-governmental bodies have found it trivial to exact control with quantum insert attacks.  In these sophisticated attacks, malicious entities (which I define for these purposes as those who exploit trust to gain illicit access to a protected system) like the NSA or GCHQ can slipstream rogue servers that mimic trusted public systems such as LinkedIn, harvesting passwords and assuming identities through ephemeral information gathering in order to attack other systems.

Considering these things, the troubling realization is that this is not a failure of the NSA, the GCHQ, the US presidential administration, or the lack of public outrage demanding change.  The failure is in the infrastructure of the Internet itself.  If anything, these violations of trust simply showcase technical flaws in the larger system’s architecture that we have chosen not to acknowledge until now.  Endpoint encryption technologies like SSL were supplanted by successive versions of TLS not only because of flaws in cipher strength, but because of protocol assumptions that did not acknowledge all the ways in which the trust of a system, or the interconnects between systems, could be violated.  The same is true of BGP, which has seen a number of attacks that allow routers on the Internet to be reprogrammed to shunt traffic to malicious entities that can intercept it: a protocol that trusts anything is vulnerable, because nothing can be trusted forever.

When I say nothing can be trusted, I mean absolutely nothing.  Your phone company definitely can’t be trusted – they’ve already been shown to have caved to government pressure and given up the keys to their part of the kingdom.  The very wires leading into your house can’t be trusted; they are already tapped, or someday will be.  Your air-gapped laptop can’t be trusted; it’s being hacked with radio waves.

But individual, private citizens now face a challenge Hollywood has faced for years: how do we protect our content?  The entertainment industry has been chided for its sometimes draconian attempts to limit use and restrict access to data by implementing encryption and hardware standards that run counter to the kind of free access analog storage media, like the VHS and cassette tapes of days of old, provided.  Perhaps there are lessons to be learned from its attempts to address the problem of “everything, everybody, and every device is malicious, but we want to talk to everything, everybody, on every device”.  One place to draw inspiration is HDCP, a protocol most people other than hardcore AV enthusiasts are unaware of, which establishes device authentication and encryption across each connection of an HD entertainment system.  Who would have thought that when your six-year-old watches Monsters, Inc., those colorful characters are protected by such an advanced scheme on the cord that just runs from your Blu-ray player to your TV?

While you may not believe in DRM for your DVDs from a philosophical or fair-use perspective, consider the striking difference in this approach: in the OSI model, encryption occurs at Layer 6, on top of many other layers in the system.  That design implicitly trusts every layer below it, and that is the assumption violated in the headlines from the Guardian and the NY Times that have captured our attention the most lately: on the Internet, he who controls the media layers also controls the host layers.  In the HDCP model, the encryption happens more akin to Layer 2, because the protocol expects someone to splice a wire to try to bootleg HBO from their neighbor or pirate high-quality DVDs.  Today, if I gained access to a server closet in a corporate office, there is nothing technologically preventing me from splicing myself into a network connection and copying every packet on it.  The data encrypted at Layer 6 will be very difficult for me to make sense of, but there will be plenty of unencrypted data I can use for nefarious purposes: ARP broadcasts, SIP metadata, DNS replies, and all that insecure HTTP or poorly secured HTTPS traffic.  Even worse, it’s a jumping-off point for setting up a MITM attack, such as an SSL inspection proxy.  Similarly, without media-layer security, savvy attackers with physical access to a server closet, or the ability to coerce or hack the next hop in the network path, can go undetected if they redirect your traffic into rogue servers or malicious networks, and because there is no chained endpoint authentication mechanism at the media layer, there’s no way for you to know.
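
To make that concrete, here is a rough sketch, using the third-party scapy library, of how much a purely passive tap learns without ever touching the Layer 6 encryption: every DNS lookup names the services you use, even when the subsequent HTTPS session is opaque.  Run it only on a network you own, with the necessary privileges.

```python
# Requires: pip install scapy, plus root/administrator privileges to sniff.
# For demonstrating exposure on your own network only.
from scapy.all import sniff, DNS, DNSQR, IP

def log_dns_query(packet):
    """Print who is asking for what -- metadata no Layer 6 cipher hides."""
    if packet.haslayer(DNS) and packet.haslayer(DNSQR) and packet[DNS].qr == 0:
        client = packet[IP].src if packet.haslayer(IP) else "?"
        name = packet[DNSQR].qname.decode(errors="replace").rstrip(".")
        print(f"{client} is resolving {name}")

# A spliced wire or mirrored switch port sees this for every host behind it.
sniff(filter="udp port 53", prn=log_dns_query, store=False)
```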

These concerns aren’t just theoretical, and they’re not about protecting teenagers’ rights to anonymously author provocative and mildly threatening anarchist manifestos.  They’re about protecting your identity, your money, your family, and your security.  More and more will be accessible and controllable over the Internet going forward, and without appropriate protections in place, it won’t be just governments who can exploit the assumptions of trust in the Internet’s architecture and implementation for ill, but idealist hacker cabals, organized crime rings, and eventually anyone with the right script-kiddie program, once the vulnerabilities are better known and remain unaddressed.

Why aren’t we protecting financial information and credit card numbers with media-layer security so they’re at least as safe as Mickey Mouse on your HDTV?

 


Will State Treasuries Get Wise to Geolocation?

Slowly, mobile users are becoming increasingly complacent about giving up the last remaining vestiges of privacy when using a mobile web browser or native apps to do the most rudimentary tasks.  Just five years ago, imagine the adoption rate of an application that required your exact geographic location and the rights to read the names and phone numbers of your entire digital Rolodex just to let you read front-page news headlines.  It would fester in digital obsolescence through outright rejection!  Today, it’s a different ballgame.

There are some interesting changes I can foresee coming out of these shifting norms that have nothing to do with the over-blogged concepts of targeted advertising or the erosion of our privacy.  There’s an awesome company called Square with a nifty credit card reader that plugs directly into the audio port of a mobile device to create an instant point-of-sale terminal with a lot of flexibility and little capital investment.  Even this can’t be called new by today’s blogosphere standards, but something that caught my attention while beta testing the service was its requirement to continuously track your fine GPS location as an anti-fraud measure.  Pretty sensible, but also pretty telling of things to come.

Anyone who’s been following the tech world recalls the recent tiffs between Amazon and various states, most recently California, that have tried to get a slice of the revenue generated by sales shipped to addresses in their state.  Large corporations can keep playing evasive maneuvers with state legislatures, while small brick-and-mortar retailers and state coffers continue to feel the squeeze as shoppers grow ever more comfortable making large-ticket purchases online, both to comparison shop and, quite obviously, to avoid paying state and local sales taxes.  A looming federal debt crisis that is decades away from a meaningful resolution means smaller distributions to states, leaving each to pick up a larger share of the tab for basic services, infrastructure improvements, and some types of entitlements.  States have reacted in two ways: first, by trying to squeeze the large online retailers with legislation, and second, by requiring state taxpayers to volunteer their “fair share” by paying use tax.

Who accurately reports their online purchases from the last tax year for the purposes of paying use tax?  Anyone who knows me is well aware of my almost maniacal love for budgeting tools that let me pull up a report of every online purchase I’ve made in a given period in a matter of seconds.  But many people who owe hundreds in state use taxes file their returns the same way as my parents, who purchase nothing online, and report zero in this box.

It is relatively trivial from a technology perspective, and predictably forthcoming from a policy perspective, that this free ride is about to end.  One-third of smartphone owners have made a mobile online purchase from their phone, and a full 20% use their device as a fully fledged mobile wallet.  47% of smartphone owners and 56% of tablet owners plan to purchase more products on their respective devices in the future.  With the skyrocketing adoption of mobile as a valid, trusted payments platform, it won’t be long before a majority of physical-goods transactions are made with these devices.  In the name of “safer, more secure transactions”, consumers will likely be prompted to reveal the location from which they make each purchase, and likely won’t think twice about doing so.

No matter how much we might muse to the contrary, neither legislators nor their more technically savvy aides are oblivious to the coming opportunity this shift will provide.  Imagine a requirement that any purchase log the location of the purchaser at the time the transaction was made, and charge online sales tax based on that location.  Since most mobile users spend their lives near their home location, this would keep a high percentage of taxes collected in this manner in the municipalities that provide services to the end consumer, reclaiming unreported taxable sales in a manner consistent with collections prior to this massive behavioral shift.  It also levels the playing field for small retailers, who have to collect the same rates on their sales.
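
A purely hypothetical sketch of that mechanism; the jurisdictions, bounding boxes, and rates below are invented for illustration, and a real system would need authoritative tax-boundary data.

```python
from dataclasses import dataclass

@dataclass
class Jurisdiction:
    name: str
    bounds: tuple          # crude bounding box (min_lat, max_lat, min_lon, max_lon) -- illustration only
    combined_rate: float   # invented state + local rate

JURISDICTIONS = [
    Jurisdiction("Dallas, TX (hypothetical)", (32.6, 33.0, -97.0, -96.5), 0.0825),
    Jurisdiction("Example County (hypothetical)", (40.0, 41.0, -90.0, -89.0), 0.0700),
]

def tax_for_purchase(amount, lat, lon):
    """Charge tax based on where the buyer was when the transaction was made."""
    for j in JURISDICTIONS:
        min_lat, max_lat, min_lon, max_lon = j.bounds
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            return round(amount * j.combined_rate, 2)
    return 0.0  # no match: fall back to origin- or destination-based rules

# A $500 tablet bought from a phone reporting a downtown Dallas fix:
print(tax_for_purchase(500.00, 32.78, -96.80))  # 41.25 under the invented 8.25% rate
```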

It’s an intriguing scenario, and one not far from reality.  It may be this, and only this, that creates a consumer backlash against the complacent acceptance of leaking geolocation for anything other than maps or yellow-pages-type applications.  It may also create scenarios where people travel to an adjoining town that has instituted free municipal WiFi and low tax rates, driving a new form of digital tax-haven tourism.

In any case, it’s definitely something to think about.

 

Sony’s Poor Behavior: What does this say about learning in America?

Ask any technical recruiter, or any quickly growing technology business, what the number-one challenge to growth in the external environment is, and the answer might surprise you.  In a resurgence of social media and the associated technologies that connect people, ideas, and cash, one reminiscent of late-’90s Silicon Valley, there’s no lack of innovation, imagination, or good business ideas out there.  With investment tax credits and freely flowing capital, fueled by low interest rates and desperate federal, state, and local attempts to ignite the engines of industry and the economy, lack of funding or tightness of credit isn’t the challenge it was two years ago.  Rather, the lack of sufficiently knowledgeable and adequately trained professionals in highly technical fields is the biggest roadblock to the economic expansion of the services industry.

The cost of labor for highly skilled software engineers is rising well above the rate of inflation, having increased over 25% in the past eight years.  (Just check the term “computer systems software engineers median annual salary” on WolframAlpha.)  Simple supply and demand sets the price points for wages in local markets, and this trend, broadly realized over the entire world, has to make one wonder: where is the supply of new talent, and why is it not keeping pace with the growth demands of various technology-dependent industry sectors?  I postulate there is a widening knowledge gap, analogous to the wealth gap in America, driven by the policy, legal, education, and cultural environments.

Specifically, legislation built to protect corporate innovations, including software algorithm patents, anti-circumvention mechanisms, and the Digital Millennium Copyright Act, is a double-edged sword that stifles learning by today’s technically inclined youth by positioning technologies in untouchable black boxes.  Consider for a moment a future electrical engineer in the 1950s and what his potential contributions to his field would have been if he couldn’t dismantle a radio and learn how its components work.  What if programming languages were kept out of college classes and restricted to corporations that could afford extortionate fees to access and learn them; would the networking revolution of the 1980s and 1990s ever have occurred?  If young men couldn’t open the hoods of their cars without going to jail, would we have any automotive innovation, or even mechanics?  While corporations must be able to earn protected profits to cover their costs of research and development, those same innovations must be allowed to be embraced and extended not only in the broader macro-economy, but also understood, adopted, and applied by the upcoming generation in higher education.

The higher education system itself, however, has been unable to keep pace in imparting technical knowledge, specifically in business applications.  B-schools churn out freshly minted grads who understand some of the ideas behind requirements analysis and abstract system design, but who lack a technical depth that cannot be dismissed as a difference of specialization, yet is required in today’s world, where technology permeates every level of business, industry, and life.  These b-school graduates then go out into the world, often with a deficient understanding of the applied technology needed to manage technical resources or apply them properly to real-world processes.  I believe the fault lies squarely with the lack of cross-disciplinary study plans: curricula integrate related topics within a college but fail to address the widening rift between engineers, who are able to understand the inner workings of the technology, and business majors, who receive only a brush of experience with key concepts.

When I asked one university dean why MIS majors were required to take only a single, general-purpose programming class, without any exposure to the reporting or data warehousing concepts degreed candidates will be expected to understand in their first professional job, the answer was startling.  That PhD replied, “We teach people to build businesses and manage technical talent.  They don’t need to understand how the technical work is done.”  Wrong.  Dead wrong.  Long past are the days when engineers can be enlisted for one-off projects and dismissed when their work is done.  In today’s world, businesses that don’t integrate automation, networking, communication, and social media technologies are being quickly replaced by more savvy, and often foreign, entities that understand the importance of every corporate level, from the boardroom to the mail room, embracing a cross-functional understanding of technology.

Restricting knowledge transfer is a sure-fire way to ensure you’ll never be able to procure enough of it.  A great case in point of such ignorance and short-sightedness can be found in the Sony vs. George Hotz drama currently unfolding in technical circles.  A young man, Hotz, dared to open his PS3 and learn how it works.  Pages and pages of TOSes, AUPs, and EULAs explicitly forbade him from doing so, and now, in retribution for sharing what he learned about the inside of the $600 black box he purchased, one of the largest companies in the world is actively suing him, and those he spoke to, applying the DMCA to keep what they learned to themselves.

The mass media has long abused and contorted the term “hacking” to cover virtually any illegal, unethical, or criminal act that remotely involves technology.  First and foremost, hacking in its true sense is learning what’s not obvious.  If we effectively criminalize this learning process, both legally and culturally, we can sit back and watch our economic output dwindle as other nations and cultures, whether through their abandonment of intellectual property protections or through a culture permissive of discovery and learning, prepare a more capable generation of tinkerers who, individually and in greater numbers, will show us up.  Sony’s behavior in suing young men for trying to learn how Sony does what it does is driven by the assumption that knowledge can be owned, controlled, and metered.  While Sony may be able to apply punitive measures against a handful of the curious, the attempt is not only futile (anyone remember what Napster did to the music recording industry?), it also creates a climate of fear and draconian policy that trickles down to further squelch those who want to learn: systematically, by instilling a fear that doing so will incur corporate wrath, and institutionally, by discouraging the organizations capable of imparting that knowledge from doing so as they attempt to shape ethical norms.

A society that fundamentally believes some knowledge should not be learned or shared is doomed to pay its dues to societies that value knowledge creation and knowledge transfer, and that raise future generations with the desire and ability to become as competent as their forebears and extend the reach of their contributions.

 

Facebook OpenGraph: A Good Laugh or a Chilling Cackle?

If you want to sell a proprietary technology for financial gain, or to increase user adoption for eventual financial gain once a model is monetized, the hot new thing is to call it “open” and assign the intellectual property rights to insignificant portions of the technology to a “foundation”.  The most recent case in point to fly across my radar is Facebook’s OpenGraph, a new ‘standard’ the company is putting forward to replace its existing Facebook Connect technology, a system by which third parties could integrate a limited number of Facebook features into their own sites, including authentication and “Wall”-like communication on self-developed pages and content.  The impetus for Facebook to create such a system is rather straightforward: by joining other players in the third-party authentication space, such as Microsoft’s Windows Live ID, Tricipher’s myOneLogin, or OpenID, it can, at a minimum, drive your traffic to its site for authentication, where it requires you to register for an account and log in.  These behemoths have far grander visions, though, for there’s a lot more in your wallet than your money: your identity is priceless.

Facebook and the other social networking players make the majority of their operating income from targeted advertising, and displaying ads to you during or after the login process is just the beginning.  Knowing where you came from as you arrive at their doorstep to authenticate lets them build a profile of your work, your interests, or your questionable pursuits based on the browser’s “referrer” header, which most modern web browsers send to tell a page “I came to your site through a link on site X”.  But much more than that, these identity integration frameworks often require rich information describing the content of the site you were on, or even metadata that site collected about you that further identifies or profiles you, as part of the transaction that brings you to the third-party authentication page.  This information is critical to building value in a targeted marketing platform, which is all Facebook really is, with a few coats of paint and Mafia Wars added for good measure to keep users around and viewing more ads.

OpenGraph, the next iteration out of their development shop with the same aim, greatly expands both the flexibility of the Facebook platform and the amount and type of information it collects about you.  For starters, the specification proposes that content providers and webmasters annotate their web pages with Facebook-specific markup that improves the semantic machine-readability of the page.  These pages appear to light up and become interactive when viewed by users who have Facebook accounts, whenever either the content provider has enabled custom JavaScript libraries that make behind-the-scenes calls to the Facebook platform or the user runs a Facebook plug-in in their browser that does the same.  (An interesting aside: should Facebook also decide to enter the search market, they will have a leg up with a content metadata system they authored; but again, Google will almost certainly, albeit quietly, be noting and indexing these new fields too.)
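
For the curious, the annotation itself is just a handful of meta tags in a page’s head.  A small sketch that emits the four core properties documented at opengraphprotocol.org (the example values here are made up):

```python
from html import escape

def open_graph_head(title, og_type, url, image):
    """Render the core Open Graph <meta> tags a content provider adds to a page."""
    properties = {
        "og:title": title,   # human-readable name of the object
        "og:type": og_type,  # e.g. "article" or "website"
        "og:url": url,       # canonical identifier for the object
        "og:image": image,   # representative image Facebook may fetch and display
    }
    return "\n".join(
        f'<meta property="{prop}" content="{escape(value, quote=True)}" />'
        for prop, value in properties.items()
    )

# Hypothetical page describing an article:
print(open_graph_head(
    title="How to Find a New Job at a Competitor",
    og_type="article",
    url="http://www.example.com/Page.html",
    image="http://www.example.com/cover.png",
))
```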

However, even users not intending to reveal their web-wanderings to Facebook do so when content providers add a ‘Like’ button to their web pages.  Either the IFRAME or JavaScript implementations of this make subtle calls back to Facebook to either retrieve the Like image, or to retrieve a face of a friend or the author to display.  Those who know what “clearpixel.gif” means realize this is just a ploy to use the delivery of a remotely hosted asset to mask the tracking of a user impression:  When my browser makes a call to your server to retrieve an image, you not only give me the image, you also know my IP address, which in today’s GeoIP-coded world, also means if I’m not on a mobile device, you know where I am by my IP alone.  If I am on my mobile device using an updated (HTML5) browser, through Geolocation, you know precisely where I am, as leaked by the GPS device in my phone. Suddenly, impression tracking became way cooler, and way more devious, as you can dynamically see where in the world viewers are looking at which content providers, all for the value of storing a username or password… or if I never actually logged in, for no value added at all.  In fact, the content providers just gave this information to them for free.
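
A minimal sketch of why a remotely hosted image is all it takes, using only the Python standard library; ‘tracker.example’ and the port are placeholders, and the logging shown is simply what any third party serving the asset is in a position to record.

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF -- the classic "clear pixel".
CLEAR_PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The asset request alone reveals the viewer's IP (hence rough location via GeoIP)
        # and, via the Referer header, exactly which page they were reading.
        print(f"impression: ip={self.client_address[0]} page={self.headers.get('Referer')} "
              f"ua={self.headers.get('User-Agent')}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(CLEAR_PIXEL)))
        self.end_headers()
        self.wfile.write(CLEAR_PIXEL)

if __name__ == "__main__":
    # Any page embedding <img src="http://tracker.example:8080/clearpixel.gif"> reports in.
    HTTPServer(("", 8080), PixelHandler).serve_forever()
```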

Now, wait for it… what about this new OpenGraph scheme?  Using it, Facebook can know not only where you are and what you’re looking at, but also who you are and the meaning behind what you’re looking at, through their proprietary markup combined with OpenID’s Immediate Mode, triggered through AJAX.  Combined with the rich transfer of metadata through JSON, detailing specific fields that describe the content rather than just a URL reference, instead of knowing only what they could know a few years ago, such as “A guy in Dallas is viewing http://www.example.com/Page.html”, they know “Sean McElroy is at 32°46′58″N 96°48′14″W, and he’s looking at a page about ‘How to Find a New Job at a Competitor’, which was created by CareerBuilder”.  That information has to be useful to someone, right?
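
Put together, the kind of record such an integration could assemble looks roughly like the following; this is a fabricated illustration with invented field names, not an actual Facebook payload.

```python
import json

# Fabricated illustration of the metadata bundle such an integration could assemble;
# the structure and field names are invented for this example only.
impression = {
    "viewer": {"identity": "Sean McElroy", "authenticated": True},
    "location": {"lat": 32.7828, "lon": -96.8039, "source": "HTML5 geolocation"},
    "object": {
        "og:title": "How to Find a New Job at a Competitor",
        "og:type": "article",
        "og:url": "http://www.example.com/Page.html",
        "og:site_name": "CareerBuilder",
    },
    "context": {"referrer": "http://www.example.com/jobs/", "timestamp": "2010-05-01T14:22:03Z"},
}

print(json.dumps(impression, indent=2))
```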

I used to think, “Hrm, I was sharing pictures and status updates back in 2001, what’s so special about Facebook?”, and now I know.  Be aware of social networking technology; it’s a great way to connect to friends and network with colleagues, but with it, you end up with a lot more ‘friends’ watching you than you knew you ever had.

References:

http://www.facebook.com/advertising/?connect

http://opengraphprotocol.org/

http://developers.facebook.com/docs/opengraph

http://openid.net/specs/openid-authentication-2_0.html