
The Cost of Speed

First off, I’m quite dissatisfied with my work.

But then again, isn’t every architect?  No matter how fantastically we break down and lay out complex enterprise systems, there’s always something to be dissatisfied with, even in the best logical designs, physical hardware, business logic, and user experiences.  We know well enough that enterprise software development is never complete.  Sure, user stories and discrete tasks can be marked “complete” in an issue tracking system, but large enterprise systems are virtual organisms that can be endlessly extended, refined, and improved upon.  There is no finish line, but rather a multidimensional cube of gradients where each metric of success is defined and measured by different stakeholders.  So, when I state I’m dissatisfied with my work, that’s not a state of being; it’s an acknowledgement that architecting and developing these systems is a continuum of satisficing stakeholders, not a process that is ever truly complete.  We should be dissatisfied, because if we are not, we are complacent.

Measurements of Success

However, just because the composition of large and complex systems has no discrete end doesn’t mean success cannot be measured.  There are plenty of metrics that can be derived for the various parties in an ISV and the client ecosystem, some of which carry real meaning, and some of which can be predictors of success.  When I look at a system, I intrinsically think about the technical metrics first – the layers of indirection, query costs, how chatty an interface is, cyclomatic complexity, interface definitions, the segregation of responsibility, patterns that are reusable and durable from one set of developers to the next, et cetera.  But architects must understand that while these metrics do play a role in the ultimate success, re-usability, and appeal of a solution, they are not the metrics a business user — usually the one who defines success at a more meaningful level, the going concern of a sustainable business — will consider.  Instead, these technical metrics contribute to other metrics that are the ultimate way in which a product’s success will be measured and judged.  Specifically, there are only three things that executive offices, sales, and prospects care about:

  1. What does the system do?  (What are the features and benefits?)
  2. What does the system look like when it does it?  (What’s the visual user experience?)
  3. How fast does the system do it?

Note that absent from that list is a metric worded like “How does the system do it?”  Inevitably the ‘how’ question is part of large Requests For Proposal (RFP’s), but in my experience, at the end of the day, those questions are mere pass-fail criteria that rarely play into an actual purchase decision or a contract renewal decision.  Quite often both junior and senior developers, and many times even management, fail to keep this in perspective.  If a solution can demonstrate what it does — and what it does is what a customer needs it to do — that it does it in a pleasing way, and that it does it fast, users are satisfied.

That last item, “How fast does the system do it?”, seems out of place, doesn’t it?   Now any whiny sales guy (I used to work with a lot of them; thankfully we have an awesome team where I’m at now) can tell you how a sluggish-feeling web page can tank a demo, or blame a two second render time for the bacon he didn’t bring home last quarter, and cloistered developers are used to brushing off those comments.  They really shouldn’t.  Speed directly determines the success of a product in three ways:

Users who have a slow experience are less likely to start to use the product

KISSmetrics put together a fantastic infographic on this subject that shows how page abandonment is affected by web page load times.

And let’s not fool ourselves — just because your product is served on an intranet, not for the fickle consumption of the B2C public Internet, your users are no less fickle or demanding.  Nor are you immune to this phenomenon because you utilize native clients or rich internet applications (RIA’s) to provide your product or service.  Users will abandon your way of accessing their data if it’s too slow, even if you think they are a captive audience.  For instance, in a world where data liberation is a real and powerful force — where users demand to export their data from your system to use the interface of their choice, or even worse, where users demand you provide API’s to your data so they can use your competitor’s user interface — no audience is captive.  Even worse for those of you providing a B2C public Internet service, page load times play into search engine optimization (SEO) ranking algorithms, meaning a slow site is less likely to even enter the consciousness of prospects who depend on a search engine to scope their perception of available services.

Users who have a slow experience are less likely to continue using a product

Let’s say you’ve enticed users with all your wonderful functionality and a slick Web 2.0 (I hate that term, for the record) user interface to visit your site, perhaps even sign up and take it for a spin.  Most developers fail to realize that a clunky web browsing experience in an application doesn’t just temporarily frustrate users; it affects their psychological perceptions about the credibility of your product (Fogg et al. 2001) as well as the quality of the service (Bouch, Kuchinsky, and Bhatti 2000).  In one case study that analyzed a large data set from an e-commerce site, a one second delay in page loads reduced customer conversion rates by 7%.

The above graphic is a visualization of a behavior model by BJ Fogg of Stanford University, describing how users’ motivation and ability create a threshold for taking action, and what triggers a product can use to entice users to cross that threshold depending on their position along this action boundary.  Truly fascinating stuff, but to distill it down into the context of this blog post — the marketing of your product and the value proposition of your service should be creating high motivation for your end users.  What a shame, then, if users never take action to use your product because you failed to reduce barriers to usage, lowering their ability and increasing complexity with a sluggish site.  Crossing that boundary is one hurdle, but ISV’s have the ability to move the boundary itself in the way they market, design, and implement the product.

The Cost-Speed Curvature

Okay, okay, you got it, right?  The product needs to be fast.  But how fast is fast enough?  You can find studies from the late 1990’s that say 8-10 seconds is the gold standard.  But back in reality, our expectations are closer to the 2-3 second threshold.  The wiggle room in that minuscule window is extraordinarily small: it accepts no excuses for the slow rendering speeds of ancient computers or low-powered mobile devices that might be using your site, the client’s low bandwidth, or buffer bloat in each piece of equipment between your server’s network card and your end user’s.  Not to mention, most sites aren’t simply delivering static, cacheable content.  They’re hitting farms of web servers behind load balancers, often using a separate caching instance, subject to the disk and network I/O of a database server and any number of components in between to execute potentially long-running processes — all of which needs to happen in a manner that still provides the perception of a speedy user experience.
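
To make that time budget concrete, here is a purely illustrative sketch in Python; every component name and millisecond figure below is an assumption invented for the example, not a measurement of any real system.

```python
# Purely illustrative: a hypothetical breakdown of a 2.5 second page budget.
# Every figure below is an assumption made up for the sake of the arithmetic.
budget_ms = 2500
components = {
    "DNS + TCP + TLS handshakes": 250,
    "Load balancer + web farm processing": 400,
    "Cache lookups": 100,
    "Database I/O": 600,
    "Server-side rendering": 350,
    "Network transfer of the response": 300,
    "Client-side parsing and rendering": 400,
}

spent = sum(components.values())
for name, ms in components.items():
    print(f"{name:40s} {ms:5d} ms")
print(f"{'Total':40s} {spent:5d} ms  (budget {budget_ms} ms, "
      f"{budget_ms - spent} ms of headroom)")
```

The point is not the particular numbers; it is that the entire chain has to fit inside a budget the end user perceives as instant.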

Now, exactly how to get your product or service faster isn’t my concern, and it’s highly dependent on exactly what you do and exactly how you do it — your technology stack and specific infrastructure decisions.  What I can tell you, though, is that you need an answer for your executive suite, board, or that ultimately impatient end user who, no matter how performant (or not) your system is, asks, “How can we make this faster?”  This answer shouldn’t be quantitative, as in, “We can shave 4 seconds off if we do Enhancement X, which will take two weeks”, unless you want to hear your words parroted back to you when you can’t deliver on such an unrealistic expectation.  Even if you have an amazing amount of profiled data points about each component of your system, quantifying improvements is a mental exercise with little predictable result in enterprise solutions.

Why?

Well, in any serious enterprise software solution, there is obviously code you didn’t write and pieces you didn’t architect.  Even if you were Employee #1, and didn’t inherit a mess from a predecessor team or architect, inevitably you’re using multiple black boxes in your interconnected system in the form of code libraries.  Even if you’re a big FOSS proponent and can technically look at any of the source code for those libraries, face it: in a real business you will never have the time to do so, nor, frankly, the nerdy interest.  While you can sample the inputs and outputs of each of those closed systems, you can predict, but you cannot quantify, how changing an input will affect the performance of a closed system producing an output.  Don’t try it; you will fail.

Instead, remember my opening paragraph — performance optimization, much like “feature completeness”, is not a goal; it is a process that continues over the life of the product.  Obviously, developers start this process by Googling StackOverflow et al. for “slow IOC startup” or “IIS memory issues in WCF services” or whatever the issue is with your particular technology stack, and will review the “me too” comments to see if they too made a “me too” misconfiguration or misdesign.  Maybe it’s “whoops, forgot to turn on web server GZIP compression” or “whoops, forgot to turn off debug symbols when I compile”.  Typically, these are low-hanging fruit — low risk to effect change, with a high potential impact.  But eventually you run out of simple “whoops!” Eureka moments or answers to simple questions, and you end up having to ask harder questions that have fewer obvious answers, thus requiring time spent specifically on researching those answers and developing solutions in-house.  When you think about it, there’s a real escalating cost for each unit of performance gain over the lifetime of the product for this very reason.  Graphed as a curve, I’ll call it the Marginal Cost of Speed:

And this is, in fact, a reality that must be thoroughly understood inside a development team all the way up through the executive suite.  Not unlike how relativity tells us that accelerating a mass to the speed of light would require infinite energy, the only way to get an instant page load or a zero-latency back-end process is to spend an infinite amount of resources achieving that goal.  I say this has to be understood at the development team level mostly because you will never, no matter how pragmatic and persuasive you are, convince the executive suite or the customer that you in fact cannot repeat the last thing you did that doubled performance, because the further you go down the performance optimization road, the narrower it gets and the longer the stretches between mile markers become.  The development team needs to fully understand what constitutes low-hanging fruit and must focus their efforts on those simple changes that effect the greatest change first (like the quick compression check sketched below), and not tackle such problems with an instinctive impulse to refactor.
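
As a minimal sketch of that kind of low-hanging-fruit check, here is one way to ask whether a server is actually honoring gzip compression, using only Python’s standard library and a hypothetical URL:

```python
import urllib.request

# A quick check for one piece of low-hanging fruit: does the server return
# gzip-compressed responses when the client says it accepts them?
URL = "https://www.example.com/"  # hypothetical endpoint, swap in your own

request = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(request) as response:
    encoding = response.headers.get("Content-Encoding", "(none)")
    body = response.read()  # still compressed; we only care about its size

print(f"Content-Encoding: {encoding}")
print(f"Bytes on the wire: {len(body)}")
if "gzip" not in encoding:
    print("Compression appears to be off -- a classic low-risk 'whoops' fix.")
```

The same spirit applies to the debug-symbols check and the rest of the low-hanging fruit: cheap to verify, cheap to fix, and worth ruling out before anything gets refactored.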

Likewise, the executive and marketing teams need to understand that developing a lightning-fast product is a last-mile problem: reaching that nirvana will require an increasing amount of time (cost) and resources (cost) to achieve.  The effort is an exercise in satisficing the parameters to find an acceptable middle ground.  Usually, though, the realities of product development aren’t treated like other externally governed constraints, simply because, being internal, they are perceived as not subject to any absolutes.  Put another way, customers of Amazon.com might abandon the site because shipping times for purchases are too long, but the company can’t just start comp’ing overnight service for everyone.  Well, they could do so, but the cost to acquire that customer just skyrocketed to a level that makes their business model unsustainable.  Similarly, the time spent on performance optimization has a real and measurable cost, and it can actually be quantified as a cost to acquire and retain a customer when you think about how a performant site directly impacts customer acquisition and retention.  Now, the business folks can definitely understand it in those terms.  But, they’ll still want it faster anyway.

Where To Sit

So, where do you then sit on that curve?  The real answer is, it doesn’t really matter how much you do or don’t want to make performance optimizations, particularly if they’re approaching the infinite-cost asymptote of that graph.  The answer is, you will have to sit wherever your competitors sit.  Most of us out there building the next great thing aren’t making markets; we’re creating displacement products.  For those of us doing so, we’ve got to chase after wherever our most successful competitor sits on the marginal cost of speed graph.  Now, to be fair, those guys have probably been working for a few years on their ascent up that cost-performance climb, and they probably have deeper pockets and more slack time to do so than you do if you’re breaking into a market, but there is a trade-off the suits can make.  The accumulated cost of the first 90% of the graph is less than that of the whole last 10%, so put another way: if you can be performant enough to satisfy 90% of the prospects who are 100% happy with your competitor’s product, that may well be enough to displace enough business to let you keep tackling that last mile another day.

Obviously, this question can’t be completely answered that way, because it’s highly dependent on your specific markets.  Are you entering a market with a democratic offering of grass-roots, home-grown alternatives, or are you tackling an oligarchic industry?  Are you targeting disparate customers, or are your customers banded together in trade associations — which translates to — how much does your reputation change with each success or each failure?  How easily are your customers allowed to back out of a contract if they find performance or other factors don’t match the vision sold to them?  These answers may mean that the “how fast does it need to be” target demands a disproportionately higher amount of resources and time to get the product where it needs to be to have a good, marketable value proposition.

In summary, you never really should sit anywhere on that curve, you should be climbing it.  It will cost you more the further you climb, but you should never feel like you’re done optimizing performance, and you should never stop continuously reviewing it.  Remember how I mentioned most of us are in the displacement business?  Even if you’re not, someday, someone else will be, looking to displace you.  That guy might be me someday, and rest assured, I won’t rest assured anywhere. 🙂

 

Posted on May 2, 2012 in User Experience

 

It’s About the Developers, Stupid!

Last week’s continued equity market shakeups were made even more volatile by a few headscratchers:  Google purchasing Motorola Mobility for USD$12.5 billion (nearly $735 thousand per issued patent held by the company), and HP musing about spinning off its PC manufacturing business and potentially buying Autonomy to become a software and consulting house, an apparent IBM redux.  Endless articles and commentaries are focusing on Google’s purchase of MMI, but the more interesting story to me is HP, and how its shift in business model is less about focusing on higher-margin lines of business and more about admitting failure in its purchase of Palm, and, more generally, in building sustainable developer ecosystems.

When big companies spend big money on massive acquisitions, they take on huge amounts of explicit, intrinsic, and opportunity risk that only a carefully designed strategy will vindicate.  When the stakeholders discuss only the balance-sheet terms of the deals they agree to, without really understanding the cultures of the external environments they depend upon, there’s a lot of unmitigated risk, and ultimately, a lot of avoidable waste.  Arguably, Palm faltered and became an acquisition target for HP not because they had an inferior product or platform, but because they failed to nurture a strong developer ecosystem after Jeff Hawkins and Donna Dubinsky left to form Handspring.  When iterations of the Palm OS failed to deliver critical platform feature requests to keep the offering competitive, Palm addressed the problem by releasing webOS, years later, and with a cavalier attitude that they could build a new developer community around the offering without needing to mend fences with their long-time supporters.

We know what happened there – Palm stumbled, and HP picked up a compelling technology offering in webOS.  But HP made the same competitive mistake as Palm – it failed to foster a developer community to propel webOS forward as the mobile operating system oligarchy was taking shape.  It, like Nokia with Symbian, did not appreciate the role of a thriving developer ecosystem in building a mobile brand, nor did it continuously invest in one.  Great technologies attract bright developers, who in turn make direct contributions to the ecosystem in the form of apps, frameworks, and cloud services, and indirect contributions by recommending technologies to ‘the suits’ who invest resources in leveraging them for their own ends.  This generates a current of innovation that can become self-sustaining, and it fills out direct-to-consumer ‘app stores’ with features that intrigue consumers, who make the ultimate platform selection through their purchases.  Let’s face it: when you walk into a brick-and-mortar mobile phone store, you’re not confronted by displays that put “smart phones” on one wall and “camera phones” on another, with old-style candy bar phones somewhere in the back – that was so four years ago!  Consumers today are targeted with marketing to compel them to choose an ecosystem — Android vs. iOS vs. Windows Phone 7.  The hardware has become less relevant to the purchasing decision, because there are few physical differentiators other than form factor (which Apple continues to win, hands-down).

Microsoft has understood this concept extremely well for decades, and it embraces the strategy by focusing on delivering excellent tool chains for developing applications that run on the platforms (operating systems) it sells.  Despite Steve Ballmer’s fanatically espoused enthusiasm on the matter, the company actually does make good on its word about investing in developers who invest in its technology.  It virtually gives away expensive integrated development environments to secondary and post-secondary schools and creates extensive supporting curricula, documentation, and living communities that attract bright people and encourage other young minds seeking to connect with the brightest of their peers working on its technology.

Microsoft’s not alone in this strategy, but they’re notable for how well they execute it.  Apple is one of the only notable exceptions to this process, attracting developers by rapidly building amazing market share.  Apple is a force to be reckoned with, for sure, but at the end of the day, “suits” decide to support iOS because of its market share, not because their technologists and in-house developers extol the “amazing development experience” of iOS.  Nokia tried this and failed.  RIM is failing despite having had, at one time, a great market share position, for a mixture of technology capability and community support reasons.

The lesson here, though, isn’t restricted to the multinational, large-cap platform developers — even small, agile start-ups must quickly understand the importance of these ecosystems and formulate strategies for building them in order to succeed.  Whether they’re implemented through open source software, direct-to-the-community adoption initiatives, or strategic partnerships between peer companies, small businesses depend upon rich technological feedback for continuous improvement that they cannot generate internally due to constrained early-stage resources.

HP, though, either doesn’t understand or doesn’t appreciate that the “how” of building a real, working platform ecosystem is critical not only for innovative start-ups, but also for large-cap software firms.  And though HP may be throwing in the towel on mobile devices, this is a lesson critically important for any software company, no matter what its distribution channel is: mobile, tablet, desktop, or enterprise servers.  The fact that HP doesn’t get it, or is too encumbered to act on it, is the biggest threat to its plan of spinning off its low-margin but reliably revenue-generating manufacturing segment and plugging ahead.

Ballmer should do his good deed for 2011 and ring them up with a tip: It’s all about the developers, stupid!

 

Posted on August 21, 2011 in Technology Policies

 

Will State Treasuries Get Wise to Geolocation?

Slowly, mobile users are becoming increasingly complacent about giving up the last remaining vestiges of privacy when it comes to using a mobile web browser or mobile native apps to do the most rudimentary tasks.  Just five years ago, imagine the adoption rate of an application that required your exact geographic location and the rights to read the names and phone numbers of your entire digital Rolodex just to let you read the front-page news headlines.  It would fester in digital obsolescence through outright rejection!  Today, it’s a different ballgame.

There are some interesting changes I can foresee coming out of these shifting norms that have nothing to do with the overblogged concepts of targeted advertising or the erosion of our privacy.  There’s an awesome company called Square that has a nifty credit card reader that plugs directly into the audio port of a mobile device to create an instant point-of-sale device with a lot of flexibility and little capital investment.  Even this can’t be called new by today’s blogosphere standards, but something that caught my attention in beta testing this service was its requirement to continuously track your fine GPS location as an anti-fraud measure.  Pretty sensible, but also pretty telling of things to come.

Anyone who’s been following the tech world recalls the recent tiffs between Amazon and various states, most recently California, that have tried to get a slice of the revenue generated by sales addressed to their state.  Large corporations can keep playing evasive maneuvers with state legislatures, while small brick-and-mortar retailers as well as state coffers continue to feel the squeeze as shoppers become increasingly comfortable and familiar with making large-ticket purchases online, both to comparison shop and, quite obviously, to avoid paying state and local sales taxes.  A looming federal debt crisis that is decades away from a meaningful resolution means fewer distributions to states, leaving each to pick up a larger share of the tab for basic services, infrastructure improvements, and some types of entitlements.  States have reacted in two ways: first, by trying to squeeze the large online retailers with legislation, and second, by requiring state taxpayers to volunteer their “fair share” by paying use tax.

Who accurately reports their online purchases for the last tax year for the purposes of paying use tax?  Anyone who knows me is well aware of my almost maniacal love for and usage of budgeting tools that let me pull up a report of every online purchase I’ve made in a given time period in a matter of seconds.  But many people who owe hundreds in state use taxes file their returns the same as my parents, who purchase nothing online, and report zero in this box.

From a technology perspective it would be relatively trivial, and from a policy perspective it is predictably forthcoming: this free ride is about to end.  One-third of smartphone owners have made a mobile online purchase from their phone, and a full 20% use their device as a fully-fledged mobile wallet.  47% of smartphone owners and 56% of tablet owners plan to purchase more products on their respective devices in the future.  With the skyrocketing adoption of mobile as a valid, trusted payments platform, it won’t be long before a majority of physical goods transactions are made with these devices.  In the name of “safer, more secure transactions”, consumers will likely be prompted to reveal the location from which they make each purchase, and likely won’t think twice about doing so.

No matter how much we might muse to the contrary, neither legislators nor their more technically savvy aides are oblivious to the coming opportunity this shift will provide:  Imagine a requirement that any purchase made would log the location of the purchaser at the time of the transaction, and charge online sales tax based on that location.  Since most mobile users spend their lives in their home location, this would keep a high percentage of taxes collected in this manner in the municipalities that provide services to the end consumer, reclaiming unreported taxable sales in a manner consistent with collections prior to this massive behavioral shift.  It also levels the playing field for small retailers, who have to collect the same rates on their sales.
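
Purely as a thought experiment, a minimal sketch of what such a point-of-purchase rule might look like; the jurisdictions, bounding boxes, and rates below are invented for illustration and bear no relation to real tax law.

```python
# Thought experiment only: every jurisdiction, bounding box, and rate here
# is invented for illustration and bears no relation to real tax policy.
HYPOTHETICAL_JURISDICTIONS = [
    # (name, (min_lat, max_lat, min_lon, max_lon), combined_rate)
    ("Metro County",   (32.6, 33.0, -97.0, -96.5), 0.0825),
    ("Adjoining Town", (33.0, 33.2, -97.0, -96.5), 0.0100),  # the "tax haven"
]

def rate_for_location(lat, lon, default=0.05):
    """Return (jurisdiction name, rate) for the box containing (lat, lon)."""
    for name, (lat_min, lat_max, lon_min, lon_max), rate in HYPOTHETICAL_JURISDICTIONS:
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return name, rate
    return "Unknown (default)", default

purchase_total = 499.00
name, rate = rate_for_location(32.78, -96.80)  # roughly where the buyer is standing
print(f"{name}: tax = ${purchase_total * rate:.2f} on a ${purchase_total:.2f} purchase")
```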

It’s an intriguing scenario, and one not far from reality.  It may be this, and only this, that creates a consumer backlash against the complacent acceptance of leaking geolocation for anything other than maps or yellow-pages-type applications.  Or it may create scenarios where people travel to an adjoining town that has instituted free municipal WiFi and low tax rates, driving a new form of digital tax-haven tourism.

In any case, it’s definitely something to think about.

 

Sony’s Poor Behavior: What does this say about learning in America?

Ask any technical recruiter, or any quickly growing technology business, what the number one external challenge to growth is, and the answer might surprise you.  In a resurgence of social media and the associated technologies that connect people, ideas, and cash, reminiscent of the late-90’s Silicon Valley, there’s no lack of innovation, imagination, or good business ideas out there.  With investment tax credits and freely flowing capital, fueled by low interest rates and desperate federal, state, and local attempts to ignite the engines of industry and the economy, lack of funding or tightness of credit isn’t the challenge it was two years ago.  Rather, the lack of sufficiently knowledgeable and adequately trained professionals in highly technical fields is the biggest roadblock to the economic expansion of the services industry.

The cost of labor for highly skilled software engineers is rising well above the rate of inflation, having increased over 25% in the past 8 years.  (Just check out the term “computer systems software engineers median annual salary” on WolframAlpha.)  Simple supply and demand sets the price points for wages in local markets, and this trend, broadly realized over the entire world, has to make one wonder:  Where is the supply of new talent, and why is it not keeping pace with the growth demands of various technology-dependent industry sectors?  I postulate there is a widening knowledge gap, analogous to the wealth gap in America, driven by the policy, legal, education, and cultural environments.

Specifically, the legislation built to protect corporate innovations, including software algorithm patents, anti-circumvention mechanisms, and the Digital Millennium Copyright Act, is a two-edged sword that stifles learning by today’s technically inclined youth by positioning technologies in untouchable black boxes.  Consider for a moment a future electrical engineer in the 1950’s and what his potential contributions to his field would be if he couldn’t dismantle a radio and learn how its components work.  What if programming languages were restricted from college classes and available only to corporations that could afford extortionate fees to access and learn them; would the networking revolution of the 1980’s and 1990’s have ever occurred?  If young men couldn’t open the hoods of their cars without going to jail, would we have any more automotive innovation, or even mechanics?  While corporations must be able to earn protected profits to cover their costs of research and development, those same innovations must be allowed to be embraced and extended not only in the broader macro-economy, but also understood, adopted, and applied by the upcoming generation in higher education.

The higher education system itself, however, has been unable to keep pace in imparting technical knowledge, specifically in business applications, leading to B-schools churning out freshly minted grads who understand some of the ideas behind requirements analysis and abstract system design, but who lack a technical depth that cannot be dismissed as a mere difference of specialization, and that is required in today’s world where technology permeates every level of business, industry, and life.  These b-school graduates then go out into the world, often with a deficient understanding of the application of technology required to manage technical resources or properly apply them to real-world processes.  I believe the fault lies squarely with the lack of cross-disciplinary study plans: programs integrate related topics within a college, but fail to address the widening rift between the engineers who are able to understand the inner workings of the technology and the business majors who receive only a brush of experience with key concepts.

When I inquired of one university dean why MIS majors were required to take only a single, general-purpose programming class, without any exposure to the reporting or data warehousing concepts that degreed candidates will be expected to understand in their first professional job, the answer was startling.  That PhD replied, “We teach people to build businesses and manage technical talent.  They don’t need to understand how the technical work is done.”  Wrong.  Dead wrong.  Long past are the days when engineers can be enlisted for one-off projects and dismissed when their work is done.  In today’s world, businesses that don’t integrate automation, networking, communication, and social media technologies are being quickly replaced by more savvy, and often foreign, entities that understand the importance of every corporate level, from the board room to the mail room, embracing a cross-functional understanding of technology application.

Restricting knowledge transfer is a sure-fire way to ensure you’ll never be able to procure enough of it.  A great case in point of such ignorance and short-sightedness can be found in the Sony vs. George Hotz drama currently unfolding in technical circles.  A young man, Hotz, dared to open his PS3 and learn how it works.  Pages and pages of TOS’s, AUP’s, and EULA’s explicitly forbid him from doing so, and now in retribution for sharing what he learned about what’s inside the $600 black box he purchased, one of the largest companies in the world is actively suing him, and those he spoke to, to keep what they learned to themselves by applying the DMCA against them.

The mass media has long abused and contorted the term “hacking” to apply to virtually any illegal, unethical, or criminal element that remotely involves technology.  First and foremost, hacking, in its true sense, is learning what’s not obvious.  If we effectively criminalize this learning process, both legally and culturally, we can sit back and watch our economic output dwindle as other cultures and nations, which, either through their abandonment of intellectual property protections or through a permissive discovery-and-learning culture, prepare a more capable generation of tinkerers who, individually and in greater numbers, will show us up.  Sony’s behavior in suing young men who are attempting to learn how it does what it does is driven by the assumption that knowledge can be owned, controlled, and metered.  While Sony may be able to apply punitive measures against a handful of the curious, the attempt to do so is not only futile (anyone remember what Napster did to the music recording industry?), but it creates a climate of fear and draconian policies that trickle down to further squelch those who want to learn: systematically, by instilling a fear that doing so will incur corporate wrath, and institutionally, by discouraging the organizations capable of imparting that knowledge from doing so as they attempt to shape ethical norms.

A society that fundamentally believes some knowledge should not be learned or shared is doomed to pay its dues to societies that value knowledge creation, knowledge transfer, and raising future generations with the desire and ability to become as competent as their forebears and extend the reach of their contributions.

 

P2P DNS: Not solving the real problem of centralized control

The more tech-savvy probably noted with passing interest this last week’s news blip from Peter Sunde, co-founder of The Pirate Bay (a notorious website for finding BitTorrent .torrent files for everything from public domain books to copyrighted music, video, and warez): a new peer-to-peer Domain Name System, in response to recent authoritarian US action in seizing domain names.  The specific instance causing so much cyberangst is that the Department of Homeland Security and Immigration and Customs Enforcement, bowing to the pressures of media giants, have shut down RapGodfathers.com.  By “shut down”, these enforcement agencies didn’t just confiscate server equipment; they actually seized DNS hostnames assigned by the site’s registrar, through ICANN.  Long has the rest of the world complained that IANA and ICANN, the bodies that assign all sorts of global numbering and addressing schemes, are puppets of the U.S. Government, and even a number of the American tech crowd feel that the actions of these bodies over time run counter to the perceived free and open nature of the Internet.

While DNS isn’t that important from a purely technological networking perspective (it is, after all, simply a redirection service), almost no denizens of the web could find Google, Facebook, or Bing without it.  DNS is a protocol that allows a simple name, such as example.com, to be translated into an IP address, serving the role of a phone book of sorts.  I’ll have to admit, just as I’d probably lose all my friends if I lost my EVO, since I depend on my address book over memorized phone numbers these days, I only know some of Google’s servers, my work, and my home IP address by heart; for everything else, I’m dependent on DNS to tell me (and my browser) where to find things.  In response to ICE’s assault on the principle that domain names should not be commandeered by governments, Sunde has started a project to offer up an alternative DNS service over peer-to-peer networks, to remove the ability of corporations or governments to seize domains.  Unlike failed ‘alternate root’ schemes in the past, this shift in technology would, as the thought goes, allow the domain name resolution service to be operated by consensus.  In such a world, ICE couldn’t have seized the RapGodfathers.com domain, nor could a corporation with a name similar to a private individual’s file a copyright claim to take a domain name away from them.  Do we have a fundamental right to let the public sign off on who gets to hold what URL properties?
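
As a minimal sketch of that phone-book lookup, here is what name-to-number resolution looks like from Python’s standard library; the names queried are just examples, and the answer comes from whichever resolver your operating system is configured to use, which is precisely the control point in dispute.

```python
import socket

# DNS as a phone book: turn a human-readable name into the numbers the
# network actually routes on. Which resolver answers this query (your ISP's,
# a public one, or a hypothetical P2P overlay) is the point of contention.
for name in ("example.com", "www.iana.org"):
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(name, 80)}
        print(f"{name} -> {', '.join(sorted(addresses))}")
    except socket.gaierror as err:
        # A seized or unregistered name simply fails to resolve.
        print(f"{name} -> no answer ({err})")
```

Swapping in a peer-to-peer resolver would change who answers that query, but, as argued below, not who controls the routes the traffic ultimately takes.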

The rhetoric on the issue has been amusing at best and eye-rolling at worst, with people like Keir Thomas making outlandish claims that an alternate DNS scheme will be ‘heartily embraced by terrorists and pedophiles’.  Sadly, such claims showcase a true lack of technical understanding about how the networking protocols of the Internet actually work.  Coming back to my phone book analogy, a P2P DNS scheme would be akin to GOOG-411 providing phone numbers instead of my local phone book (which sits unused, now 5 years old, mind you):  anyone can own a phone number or IP address, but the way you resolve a name to a number doesn’t really, on a true technical level, change anything about who controls access and availability to resources.  If I could configure my computer to point cocacola.com to illegal content, that doesn’t change the fact that the content was out there to point to in the first place, nor does it make it any easier to find for those not already seeking it.

The real threat is when governments start mandating control over a protocol that hasn’t yet become a household name — BGP.  Around in some form since 1982, BGP doesn’t translate human-recognizable names into network numbers; it actually describes where to route those numbers.  When the Great (Fire)wall of China censors where its citizens can go, it does so by dictating that the numbers it doesn’t want you to dial connect to non-existent places, or, more realistically in the network world, that the paths your request is routed along are wrong or dead-end.  Back to the analogy: controlling BGP is the end-game on the Internet — instead of taking over the phone book’s printing presses, you take over the phone company’s switching stations themselves.  For those wishing to make the Internet more autonomous and decentralized, the future of securing the existing global communications network from superpowers’ total control lies in alternatives to BGP, not DNS.

However, P2P BGP isn’t going to happen, because while DNS instructs your computer where to go to find information, a setting you can control yourself, BGP instructs your ISP’s routers where to send traffic, and you won’t ever control their hardware.  And really, the fundamental issue is that there’s no clear way to keep the current stack of networking protocols we collectively call the Internet free and open, as we like to believe it should be.  Instead, for those wanting to leverage the crowd to free the Internet from tyrannous regimes or powerful special interests, your best bet for the future is Freenet or Tor, layers that sit on top of the Internet’s infrastructure and provide their own.  They route requests and traffic through a “tunnel-atop-the-tunnels” approach that cannot be easily discerned or controlled.  If the history of Internet governance has taught us anything, it’s that if something can be controlled, the wrong entities end up controlling it.  The approach that Freenet and similar onion routing networks take is to remove control and technologically favor independent voices.  Instead of writing new technologies like P2P DNS to address yesterday’s problems, I heartily recommend those with the interest and aptitude look into key-routing networks like Freenet, which by their very design prevent eavesdropping and circumvent traditional control mechanisms.  Just in their awkward teenage years, these will be the technology tools of digital patriots in the future, not P2P DNS on a network protocol stack that is increasingly being pulled out of the grasp of its grandfathers and architects.

I will have to commend Sunde’s efforts, though, on the principle that, if you do some Google keyword searching, ICE’s seizure of RapGodfathers.com was only a speck on the web’s map until Sunde’s project was announced.  Raising awareness of who holds the keys to the words we write, read, and share is paramount in a world where most of the people who write, read, and share their thoughts over the Internet are otherwise generally without a clue as to how their ideas are allowed or blocked by the powers above.

 

Posted on December 3, 2010 in Ethical Concerns, Privacy

 

Be Assimilated, Or Be Ignored

An interesting exercise is to visualize tidbits of data as material widgets, units of value that can be bought and sold in a market economy controlled by the forces of supply and demand.  Though the comparison is completely relevant in an information- and services-based economy, we often don’t stop to put data, news, or information in the same contextual terms as the goods we find in supermarkets or the services we find in the phone book.  (What is a phone book, anyhow?)  All the same rules apply, however.

For example, if I can find white socks of equal quality cheaper at the store next door, that is a viable substitute for me, and I will vote for retailers with my purchasing power.   I am not, however, interested in purchasing raw cotton to spin my own socks.  I’m just not equipped with the time or skill to take raw inputs of that nature and create the outputs I desire; the cost to do so is far greater than the opportunity cost of forgoing work that makes a lot more sense for my skill set.  Similarly, if I can find my news on Twitter, Facebook, or other blogs, where others have distilled facts and raw data into commentary and analysis, and if I can determine the quality is sufficiently the same for my needs, then I won’t need to buy a newspaper, pay for online periodical archive access, or spend the opportunity cost of watching ads before each 30 second video on my local news channel’s website.

That’s nothing new.  What is new is that my economic substitutes, the other sources for my information consumption, have a key feature my previous choices did not:  aggregation.  Now, I don’t even have to look at this information in the layout provided for me.  I don’t need to view CNN’s promotions, Google’s AdWords, or Twitter’s obnoxious color schemes when all my news feeds come into my Microsoft Outlook or Google Reader tool.  I pick and choose not only what I want to consume, but the manner in which it’s displayed.  Were someone to make information unavailable for syndication or to add inline advertisements to the syndicated content itself, I perceive there are many equally valuable substitutes, so I can nix any offending feed and replace it with another that meets my consumption demands.
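
As a minimal sketch of what that aggregation layer does at its core, here is a tiny Python example that pulls a syndicated RSS 2.0 feed and re-presents just the headlines, stripped of the publisher’s layout; the feed URL is a hypothetical placeholder.

```python
import urllib.request
import xml.etree.ElementTree as ET

# What an aggregator does at its core: pull syndicated content and
# re-present it however the reader (not the publisher) chooses.
FEED_URL = "https://www.example.com/feed/"  # hypothetical RSS 2.0 feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# A plain RSS 2.0 layout: <rss><channel><item><title>/<link>...
for item in tree.getroot().iterfind("./channel/item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    print(f"{title}\n    {link}")
```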

Any viable business needs to consider not only the importance of providing its content for aggregation, to vie for a user’s attention among other feeds, but also the importance of building aggregation into its own offerings.  As aggregators begin to control not only where a user looks, but also provide more advanced options to filter which feeds are recommended for users and, further, which portions of feeds are selected for inclusion in a dashboard view, they will become the most important gatekeepers of the next decade.  They will control not only the screen real estate used to provide banner ads and inline contextual linkages to other promotional content, but they will also gain the power to shape what we think about and which ideas we are exposed to.

For the rest of us, the bloggers and content providers: don’t worry so much about your layout and formatting.  The way in which you deliver information loses relevancy compared to the actual value of the content you provide, and no matter how valuable you feel your analysis or commentary is, in a plugged-in world that encourages further social media interaction and feedback from smart people who may not be editorial experts, your offering is just a commodity.  You will become increasingly disconnected from your consumers, who will use the product of your data and information over channels you do not control and of which you may not even be aware.   You will lose your ability to monetize the delivery of your content, or at the very least, someone else will be giving you a fraction of the channel they now own, a pay-for-play access fee to their aggregation or social network users.  It may not be what we want, but it’s better than not being included in the new digital world order, as it were.  Or, in other words: prepare to be assimilated, or prepare to be ignored!

 

Posted on August 5, 2010 in Social Media

 

The Long Overdue Case for Signed Emails

A technology more than a decade old is routinely ignored by online banking vendors despite a sustained push to find technologies that counteract fraud and phishing: S/MIME.  For the unaware, S/MIME is a set of standards that define a way to sign and encrypt e-mail messages using a public key infrastructure (PKI), either to prove the identity of the message sender (signing), to scramble the contents of the message so that only the recipient can view it (encryption), or both (signing and encrypting).  PKI-based secure communication is generally implemented with asymmetric pairs of public and private keys; in a signing scenario, the sender makes their public key available to the world, and anyone can use it to validate that only the corresponding private key could have been used to craft the message’s signature.
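
As a minimal sketch of the signing half of S/MIME, assuming the pyca/cryptography library and hypothetical certificate and key files; this illustrates the standard itself, not how any particular banking platform implements it.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

# Hypothetical signing credentials issued to the institution; in practice
# these would come from a commercial CA and live in an HSM, not on disk.
with open("bank_signing_cert.pem", "rb") as f:
    certificate = x509.load_pem_x509_certificate(f.read())
with open("bank_signing_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

body = b"Dear customer,\r\n\r\nYour statement is ready.\r\n"

# Produce a detached S/MIME signature: the message stays readable, and the
# recipient's mail client can verify it came from the holder of the key.
signed_message = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(body)
    .add_signer(certificate, private_key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)

print(signed_message.decode("utf-8", errors="replace")[:400])
```

The output is a standard multipart/signed MIME message; a capable mail client verifies the signature against the certificate chain while the body itself remains readable.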

This secure messaging scheme offers a way for financial institutions to digitally prove that any communication dressed up to look like it came from the institution was in fact crafted by them.  The technology can both thwart the falsification of the ‘from’ address from which a message appears to be sent and ensure that the content of the message, its integrity, is not compromised by any changing of facts or figures or by the introduction of other language, links, or malware by any of the various third parties involved in transferring an e-mail from origin to recipient.  The application for financial institutions is obvious in a world where over 95% of all e-mail sent worldwide is spam or a phishing scam.  Such gross abuse of the system threatens to undermine the long-term credibility of the medium, which, in a “green” or paperless world, is the only cost-effective way many financial institutions have to maintain contact with their customers.

So, if the technology is readily available and the potential benefits are so readily apparent, why haven’t digital e-mail signatures caught on in the financial services industry?  I believe there are several culprits here:

1. Education.  End users are generally unaware of the concept of “secure e-mail”; since implementing digital signatures from a sender’s perspective requires quite a bit of elbow grease, colleagues today don’t send secure messages to each other.  Moreover, most financial institution employees are equally untrained in the concept of secure e-mail and how it works, much less how to explain it to their customers in a way that makes it understandable as well as a competitive advantage.  Financial institutions have an opportunity to take a leadership role with digital e-mail signatures: as one of the most trusted vendors any retail customer will ever have, they can create a norm of secure e-mail communications across the industry and drive both education and technology adoption.  Even elderly users and young children understand the importance of the “lock icon” in web browsers before typing in sensitive information such as a social security number, a credit card number, or a password — with proper education, users can learn to demand the same protection afforded by secure e-mail.

2. Lack of Client Support.  Unfortunately, as more users shift from desktop e-mail clients to web-based e-mail clients like Gmail and Yahoo Mail, they lose a number of features in these stripped-down, advertising-laden SaaS apps, one of which is often the ability to parse a secure e-mail.  The reasons for this are partially technological (it does take a while to re-invent the wheel that desktop client applications like Outlook and Thunderbird mastered long ago), partially a lack of demand due to the aforementioned education gap, and partially the unscrupulous motives of SaaS e-mail providers.  The last point deserves special attention because of the significance of the problem:  providers of “free” SaaS applications are targeted advertising systems, which increasingly use not just the profile and behavior of end users to develop a targeted promotional experience, but the content of their e-mails themselves to understand a user’s preferences.  Supporting S/MIME encryption runs counter to the aim of scanning the body of e-mails to determine context; on a secure e-mail platform, Hotmail, for instance, would be unable to peek into messages.  Unfortunately, this deliberate omission of encryption support in online e-mail clients has meant that digital signatures, the second part of the S/MIME technology, are often also omitted.  In early 2009, Google experimented with adding digital signature functionality to Gmail; however, it was quickly removed after it was implemented.  If users came to demand secure e-mail communications from their financial institutions, these providers would need to play along.

3. Lack of Provider Support.  It’s no secret most online banking providers have a software offering nearly a decade old, increasingly a mishmash of legacy technologies stitched together with antiquated components and outdated user interfaces to create a fragile, minimally working fabric for an online experience.  Most have never gone back to add functionality to core components, like e-mail dispatch systems, to incorporate technologies like S/MIME.  Unfortunately, because the customers technologically savvy enough to request such functionality represent a small percentage of their customer base, even over ten years later most online banking offerings still neglect to incorporate emerging security technologies.  While a bolt-on internet banking system has moved from a “nicety” to a “must have” for large financial services software providers, the continued lack of innovation and continuous improvement in their offerings is highly incongruent with the needs of financial institutions in an increasingly connected world where security is paramount.

S/MIME digital e-mail signatures are long overdue in the fight against financial account phishing.  As a larger theme, however, financial institutions either need to become better drivers of innovation at the stalwart online banking companies, to ensure their needs are met in a quickly changing world, or they need to identify the next generation of online banking software providers, who embrace today’s technology climate and incorporate it into their offerings as part of a continual improvement process.

 

Posted on June 16, 2010 in Open Standards, Security

 

Facebook OpenGraph: A Good Laugh or a Chilling Cackle?

If you want to sell a proprietary technology for financial gain, or to increase user adoption for eventual financial gain once a model is monetized, the hot new thing is to call it “open” and ascribe intellectual property rights to insignificant portions of the technology to a “foundation”.  The most recent case in point to fly across my radar is Facebook’s OpenGraph, a new ‘standard’ the company is putting forward to replace its existing Facebook Connect technology, a system by which third parties could integrate a limited number of Facebook features into their own sites, including authentication and “Wall”-like communication on self-developed pages and content.  The impetus for Facebook to create such a system is rather straightforward:  if it joins the other players in the third-party authentication product space, such as Microsoft’s Windows Live ID, Tricipher’s myOneLogin, or OpenID, it can, at a minimum, drive your traffic to its site for authentication, where it requires you to register for an account and log in.  These behemoths have much grander visions, though, for there’s a lot more in your wallet than your money: your identity is priceless.

Facebook and other social networking players make a majority of their operating income from targeted advertising, and displaying ads to you during or after the login process is just the beginning.  Knowing where you came from as you arrive at their doorstep to authenticate lets them build a profile of your work, your interests, or your questionable pursuits based on what comes through the browser’s “referrer header”, a value most modern web browsers send to pages that tells them “I came to your site through a link on site X”.  But, much more than that, these identity integration frameworks often require rich information describing the content of the site you were at, or even metadata that site collected about you that further identifies or profiles you, as part of the transaction that brings you to the third-party authentication page.  This information is critical to building value in a targeted marketing platform, which is all Facebook really is, with a few shellacs of paint and Mafia Wars added for good measure to keep users around, and viewing more ads.

OpenGraph, the next iteration from their development shop in the same vein, greatly expands both the flexibility of the Facebook platform and the amount and type of information it collects on you.  For starters, the specification proposes that content providers and web masters annotate their web pages with Facebook-specific markup that improves the semantic machine readability of the page.  This will make web pages appear to light up and become interactive when viewed by users who have Facebook accounts, provided either the content provider has enabled custom JavaScript libraries that make behind-the-scenes calls to the Facebook platform, or the user himself runs a Facebook plug-in in his browser, which does the same.  (An interesting aside: should Facebook also decide to enter the search market, they will have a leg up thanks to a content metadata system they authored, though Google will almost certainly, albeit quietly, be noting and indexing these new fields too.)
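
As a minimal sketch of what that markup looks like to a machine, here is a hypothetical annotated page and a small Python parser that extracts the og:* properties; the property names follow the published Open Graph conventions, while the page content is invented for the example.

```python
from html.parser import HTMLParser

# A hypothetical page head annotated with Open Graph properties, roughly as
# the protocol documentation describes (og:title, og:type, og:url, ...).
PAGE = """
<html><head>
  <meta property="og:title" content="How to Find a New Job at a Competitor" />
  <meta property="og:type" content="article" />
  <meta property="og:url" content="http://www.example.com/Page.html" />
  <meta property="og:site_name" content="Example Careers" />
</head><body>...</body></html>
"""

class OpenGraphParser(HTMLParser):
    """Collect og:* <meta> properties into a dictionary."""
    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if prop.startswith("og:"):
            self.properties[prop] = attrs.get("content") or ""

parser = OpenGraphParser()
parser.feed(PAGE)
for prop, value in parser.properties.items():
    print(f"{prop} = {value}")
```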

However, even users not intending to reveal their web-wanderings to Facebook do so when content providers add a ‘Like’ button to their web pages.  Either the IFRAME or JavaScript implementations of this make subtle calls back to Facebook to either retrieve the Like image, or to retrieve a face of a friend or the author to display.  Those who know what “clearpixel.gif” means realize this is just a ploy to use the delivery of a remotely hosted asset to mask the tracking of a user impression:  When my browser makes a call to your server to retrieve an image, you not only give me the image, you also know my IP address, which in today’s GeoIP-coded world, also means if I’m not on a mobile device, you know where I am by my IP alone.  If I am on my mobile device using an updated (HTML5) browser, through Geolocation, you know precisely where I am, as leaked by the GPS device in my phone. Suddenly, impression tracking became way cooler, and way more devious, as you can dynamically see where in the world viewers are looking at which content providers, all for the value of storing a username or password… or if I never actually logged in, for no value added at all.  In fact, the content providers just gave this information to them for free.

Now, wait for it…  what about this new OpenGraph scheme?  Using this scheme, Facebook can not only know where you are and what you’re looking at, but they know who you are, and the meaning behind what you’re looking at, through their proprietary markup combined with OpenID’s Immediate Mode, triggered through AJAX technology.  Combined with the rich transfer of metadata through JSON, detailing specific fields that describe content, not just a URL reference, now instead of knowing what they could only know a few years ago, such as “A guy in Dallas is viewing http://www.example.com/Page.html”, they know “Sean McElroy is at 32°46′58″N 96°48′14″W, and he’s looking at a page about ‘How to Find a New Job at a Competitor’, which was created by CareerBuilder”.  That information has to be useful to someone, right?

I used to think, “Hrm, I was sharing pictures and status updates back in 2001, what’s so special about Facebook?”, and now I know.  Be aware of social networking technology; it’s a great way to connect to friends and network with colleagues, but with it, you end up with a lot more ‘friends’ watching you than you knew you ever had.

References:

http://www.facebook.com/advertising/?connect

http://opengraphprotocol.org/

http://developers.facebook.com/docs/opengraph

http://openid.net/specs/openid-authentication-2_0.html

 

Structure vs. Creativity

The other day I was speaking with a friend on the east coast about some of the nuances of the HTTP protocol and the HTML/XHTML standards that have changed over time, a conversation which diverged, after I had answered his immediate question, into reminiscing about the actual content available today over the Internet.  This short session of remembering the “good old days” of circa 1995 got me thinking over the past few days about what really has changed on a fundamental level, over the past 15 years, within the content of the World Wide Web itself.

For one, search engines have dramatically improved.  For those who remember OpenText and AltaVista as some of the only search engines that allowed free-form queries of pages, rather than finding sites through canonical directories as Yahoo! used to only provide, getting a relevant search result was truly an art.  Searching for “june bugs” could yield any page that contained either word, whether the two were used together in context or the page simply talked about bugs and happened to mention temperatures in the month of June.  Tools that simply indexed the words on pages required understanding a whole metalanguage for requiring and excluding certain words or hosting domain names, and for grouping words with Boolean operators.  Even with a mastery of the techniques needed to coerce relevant results, one usually had to wade through pages of them; I recall configuring AltaVista in particular to show 100 results per page so I could scan more quickly, since each successive “show next page” request took a while over my 14.4 modem.  Today, I rarely scroll down past the first three results on Google, much less ask for another page of results.

Second, we use very few client tools today to access content on the Internet.  Fifteen years ago, I fired up Trumpet Winsock to get Windows 3.0 onto the Internet, PowWow for IM, Netscape for web browsing, Eudora for e-mail and USENET newsgroups, wsFTP to actually download files, and HyperComm to connect to remote systems and surf non-HTTP sites.  Today, virtually everything 99% of Internet denizens do happens within a web browser, from search to downloading to chatting.  Traffic has moved from assorted, sometimes vendor-specific, communication protocols to a smaller set of standards, mostly based on HTTP and XML, and our browsers have accordingly turned into Swiss Army knives that tackle everything a user needs.

Third, collaboration in non-transient mediums has drastically changed.  If I wanted to share an idea 15 years ago, I opened my trusty Windows Notepad, typed up a quick HTML page (because, who doesn’t know hypertext markup language?), and FTP’ed it to my Geocities account to share with the world.  Someone, somewhere, would eventually construct a search query that surfaced my page, or my page might eventually be included in a directory such as Yahoo!’s old format, and if that person wanted to praise or criticize my content, they could do the same thing on their personal web page and link to my site.  Content was scattered across hundreds of different hosting providers, in visual designs and contextual organization that varied widely from page to page.  Today, the advent of wikis and other Web 2.0 collaborative workspaces has drastically lowered the knowledge barrier to entry for participating in the exchange of ideas; virtually anything you’d ever want to know about has a Yahoo! or LiveJournal group for it.  Web 2.0 truly is, as Tim Berners-Lee has argued, just jargon for the same thing we’ve been doing through CGI for over 15 years to make sites interactive and collaborative.  The “Web 2.0” buzzword doesn’t represent any fundamental technology evolution, but simply a proliferation of what has been available for a very long time.

So, I haven’t told anyone who has been around at least as long as I have anything they didn’t already know, but these three aspects highlight the fundamental change I see:  as the Internet expands its reach, particularly to a new generation unfamiliar with the technical framework it is built upon (since such an understanding is no longer needed for basic use), we are seeing a shift from a “loosely coupled, poorly organized” body of information to a “structured and organized” one.  Especially important, in my opinion, is that this shift is changing the quantity and character of the content itself.  By virtue of writing these thoughts on a WordPress blog, I’ve chosen convenience over creativity.  I could write a web page and style it as I wish just as easily as I can type these thoughts, but I have made a conscious decision not to use my knowledge of HTML and FTP, and instead to make this easier for other users to casually find, since I syndicate this blog onto my LinkedIn feed.  Consequently, I realize my thoughts may be littered with interjected advertisements by those providing this ‘convenience’, and I accept the limitations of the format:  I cannot express my thoughts outside the framework WordPress has provided for me to do so.

Now, WordPress is pretty flexible, and I probably wouldn’t otherwise use advanced markup that I know WordPress cannot support; the limitations become much more apparent, however, in more popular formats.  A quick export and calculation of my Facebook friends’ News Feed shows that 72% of the content my friends have written in the past two months consists of status updates.  Another 10% is photograph uploads, and the remaining 18% is a collection of ‘apps’, such as Mafia Wars and Farmville, which litter my ‘content’ feed with self-promoting advertisements for the apps themselves.  When I tried to paste this post into the ‘status update’ box on Facebook, I received an error stating that my formatting would be lost and that my content was too long.  Similarly, were I to microblog exclusively through a service like Twitter, the richness of my thoughts would be limited to 140 characters and stripped of all multimedia.  I now receive over 1,000 tweets a day from fewer than 20 friends, most of whom produce content no other way.  I use the word “content” loosely with regard to Twitter and Facebook, because the quality of posts in such limited space, versus personal web presences and blogs, is akin to the difference between a lecture and a casual conversation in which one party may simply reply, “Okay.  Right.”  Posting much and often is a far cry from sharing thoughts and ideas.

It is my sincere desire that, as we seek continued convenience to reach wider audiences and connect them in engaging communities, we do not let our desire for structure and searchability constrain the richness of our thought.  Similarly, we are quickly losing, among future generations, a level of technical aptitude that is still very relevant to today’s Internet, because they no longer need to understand the technologies we use to create the easy-to-use sites that attract the masses.  These skills, covering the underlying protocols and interactions common to the whole infrastructure, aren’t taught in any university curriculum and remain specialty subjects at technical trade schools.  If we are going to embrace structure for accessibility’s sake, we must be careful not to box ourselves in creatively now, and then pass an empty box to the next generation.

 

Posted on January 2, 2010 in Open Standards, Social Media

 

Beginnings

Those who have known me over the years know I have been involved in some rather exciting research and development to bring new solutions to the financial services market.  I’ve been fortunate to work with talented teams at a number of places, and even more fortunate to have new opportunities to keep working with top talent in both the financial services business and product development arenas, as well as with excellent designers and software engineers, to bring to life the rough-sketched ideas that can and do change the way we spend, save, shop, plan, and think.

While I have blogged for years on several different sites about personal topics, this is my first WordPress blog and my first time committing to writing the array of creative thought floating around among the sharp minds and visionary leaders I have the opportunity to work with on a daily basis.  In thinking of a suitable name that captured my intentions for this blog, it really came down to one question: what’s important in financial services?  In tumultuous economic times, with brutal market forces leaving us in a precarious balance between high unemployment and a weak dollar worldwide, and with inflation on the horizon, how can we as designers, developers, and implementers make a difference?

The difference is that our currency is our thoughts.  Truly, knowledge is our capital, and sharing our thoughts and our solutions is what creates our mutual, long-term wealth.  Here’s to a good start!

 

Posted on November 6, 2009 in Uncategorized