Thoughts on passing the GIAC Security Essentials (GSEC)

Today I passed the GIAC Security Essentials Certification, also known as the GSEC.  I passed with a 95% on my first certification attempt, so I thought it might be useful to decompose my thoughts on this one for any who attempt it in the future.

My background is technical – I started my career in software engineering and database performance tuning, moved into engineering leadership roles, and eventually ended up pursuing my interests in cybersecurity, where I have been a CISO at two financial services firms.  Yet, I’m still very hands-on during the day, and I recently wrote a QUIC userspace implementation to learn the spec in the evenings.  I have previously earned the CISSP and CISM certifications, although these are more leadership and risk management focused credentials that don’t speak much to technical aptitude as it relates to security.  For that reason, as well as my personal desire to keep my technical skills sharp while also working at the executive level and leading a team, I decided to apply to, and was accepted into, the SANS Technology Institute’s (STI) Master of Science in Information Security Engineering program.  The first stop along the MSISE journey is the GSEC.

As part of the MSISE program, I pay tuition for a graduate class to gain access to SANS training and the associated GIAC exam, which provides me with a grade for my course.  This was my very first SANS training and my first GIAC exam.  There was an option for me to directly challenge the exam since I recently earned my CISSP, but my STI student advisor kindly recommended I take in the full training experience.  I was admittedly reluctant, both because I feel I am pretty strong technically and because it would have been slightly cheaper and faster for me to go straight to the GSEC exam, but the advice was well founded.

The SANS SEC 401 class by Dr. Eric Cole was outstanding.  Dr. Cole’s presentation style feels genuine and engaging over the self-paced OnDemand modality I chose.  I walked into this content with the preconceived notion that much of it would be review for me, and honestly, a lot of it was.  This isn’t to say the course is remedial, simply that as a builder of security programs, the concepts and advice weren’t new to me, but some of the technical pieces were.  I learned new and useful tools as part of this course, and I could see this as an excellent foundational course for current and aspiring security team members in any organization.  Finding high-quality training content is exceptionally valuable to me in my day job, and of course personally as a student taking this course.

As other GIAC alumni will tell you, the GSEC is an open-book exam, so developing good indexing skills is critical.  I followed Josh Armentrout’s index format, and I walked into the exam with about 4 pages of indexes I developed throughout the course.  Admittedly, the way I learn best is by reading, so I spent my time in SEC 401’s OnDemand video with Dr. Cole at 2x speed, scanning pages in the book as I went along for index-worthy concepts or terms.  I did not spend any time highlighting the books or listening to MP3s, just focusing on the audio and what I was reading.  I would finish a ‘day’ at 2x in about 2 nights of my time, devoting about 5 hours a night for a couple of weeks to get through it all with a worthwhile index.  There are no specific tips or tricks to the content: the course syllabus plainly states what will be covered, and that’s the reality of what OnDemand provided.  I will say: read your entire book.  Sometimes key concepts have interesting nuances that end on the back of a page in a trailing paragraph.  Don’t skip those.

With my course, in addition to the self-study quizzes in the OnDemand portal – which test the content of SEC401, not the GSEC – I received two GIAC practice tests and the final GIAC exam, ready to schedule.  While everything in the OnDemand portal is self-paced, repeatable, and not timed (other than the overall subscription access), the GIAC practice tests are delivered in the same format as the exam – timed – but they also provide explanations for any incorrectly answered questions.  The MSISE program has a learning community portal where generous souls who do not use both GIAC practice tests give them away to others who want extra shots.  While that’s awfully nice of them, and I was tempted to do the same, I found value in taking both practice tests to test and refine the quality of my index.  I’m glad I did, and I would suggest never giving away a practice test if you feel you could use it to improve your index or your comprehension of the breadth of the training topics.  (Hey, you paid for these practice tests, so you come first.)  I took my first practice test as an ‘open internet’ variant where I would quickly Google something to answer a question, but then make sure my notes were fully fleshed out with what external sources could add.  My last practice test was ‘closed internet, open book’ to mimic the actual exam experience, serving as a final check of my index for completeness, since that’s all I would have on test day.  Obviously, I carefully read the explanations for anything I answered incorrectly, tuned my notes, and did additional reading to make sure I did not repeat any misfires.

Finally, exam day came today!  I’m no stranger to these types of tests or Pearson Vue, so the experience was predictable and suitable.  It is interesting walking into a Pearson Vue with an armful of books since most exams they test for allow no notes or books.  I came in with all six course books, the lab workbook, the network quick reference guide, my index, and a separate page of notes I made about common ports and protocols that were not on the network quick reference guide but were mentioned elsewhere in the course material.  I used everything I brought in, if only to take the exam at a ‘leisurely’ pace and spend adequate time double checking my answers.

Unlike the CISSP or CISM, which are based on practical experience (with the exception of the CISSP’s strange obsession with fire suppression controls…), the GSEC was much more knowledge-based, specifically on the SEC401 training materials.  So, the right answer is less likely to come from things you already know (come on, you don’t really know ALL those nmap switches), but from what you have learned and can recall or find.  Arguably, this is a bit more realistic, as aren’t all technical folks somewhat dependent on their navigation of StackOverflow or Google-fu? 🙂

It’s hard to know from the outside whether SEC401 is custom tailored to the GSEC, or whether the GSEC is really testing SEC401, but they fit together like pieces of a puzzle.  Answers to questions often came nearly verbatim from the slides, or more often, the narrative, in the SEC401 books I had in tow.  That’s not a knock on the SANS content or the GIAC exam – I call this out simply to advise those studying for the GSEC to intimately know the SEC401 material as it is presented in the books.  Treat the high-quality OnDemand video as a wonderful supplement, but don’t go light on your reading and indexing of your spiral-bound friends.  Also, do the labs, and repeat them until you can recognize a screenshot of output from any tool you covered in the curriculum or in a lab.  If you can’t recognize a screenshot or command by sight, you probably aren’t soaking in the technical material at the level you need to demonstrate competency at the higher end of the spectrum.

This process got me from an 89% on my cavalier run through the first practice test, to a 92% on my second practice test, to a 95% on exam day.  There are really no tricks to doing well on the GSEC, and no tricks the exam will try to play on you.  It is plainly written, very technical, and you would be a fool not to be prepared with the associated SANS training and a well-crafted index before sitting down to make an attempt.  (Check out Lesley Carhart’s great post on studying and indexing too, if you have not already.)  Even if you think ‘I know all this’, you probably don’t have the GSEC cinched unless you give it serious attention and a good study.

I hope this helps someone out there!


Posted on January 30, 2019 in Uncategorized


Despite DoH and ESNI, OCSP leaves web activity insecure and not private


Certificate Transparency (CT) logs increasingly allow virtually every TLS certificate to be identified by serial number.  Since OCSP exchanges are unencrypted and contain the serial number of the certificate, which can be found in CT logs, as well as unsalted hashes of the certificate’s Distinguished Name and public key, they can easily be profiled to compromise the privacy of clients even in the presence of DoH and ESNI privacy protections.


A lot of great work has happened over the past few years in securing the web by strengthening encryption and improving user security indicators.  This helps users make informed decisions to keep their online activity secure and private and to thwart network adversaries from profiling users.  Man-in-the-middle attacks on the network often conjure images of someone breaking into a server room and installing some kind of interlocutor spyware device or splicing into a network cable.  Repeatedly, though, the internet service providers that bring the Internet to consumers’ homes have demonstrated they will use their privileged position on the network to sell private information about consumer internet use or degrade services from competitors.

Policy fixes like network neutrality are still in play, but these threats aren’t unlikely one-offs that target individuals; they are systemic abuses by technology providers.  Technology fixes, meanwhile, seek to make web activity, such as the names of websites one visits or the content they download, indiscernible to anyone except the requester and the actual website operator.


Significant strides in improving the strength of encryption that makes data in transit unreadable, such as TLS 1.3, have stamped out vulnerabilities that stem from aging cryptographic algorithms and ciphers, as well as certain threats to the confidentiality of communications when an encryption key is leaked to or obtained by a nation-state attacker.  However, metadata that is exchanged in the process of finding a server and securely establishing a connection – DNS and TLS with a Server Name Indication (SNI) – can still leak, and it poses both an existential privacy problem that is particularly troubling to vulnerable populations under repressive regimes and a method for sophisticated technology providers in ‘free’ societies to profile traffic for bandwidth discrimination, censorship, or profiteering.

A couple of standards have gained traction to address these weaknesses in DNS and TLS, with proposals termed DNS over HTTPS (DoH) and encrypted SNI (ESNI), respectively.


DNS resolution is a plaintext game of ‘telephone’: a client’s request to resolve a domain name into an IP address may traverse many different servers operated by many different entities before an answer is returned.  DoH moves this communication from an unencrypted channel to an encrypted one, which still requires one to trust the privacy policy of the entity servicing the request, but does not require presuming the good behavior of every intermediate network and DNS server in the mix.  This is a very good thing, and we will see much wider adoption rolling out over the next few years.
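To make the mechanics concrete, here is a minimal sketch (Python, standard library only; the resolver URL below is a placeholder, not a real endpoint) of how a DoH GET request is formed per RFC 8484: a standard DNS query in wire format, base64url-encoded without padding, passed as the dns parameter.

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (RFC 1035).

    Header: ID=0 (RFC 8484 recommends 0 for cache friendliness),
    flags=0x0100 (recursion desired), one question, A record (qtype=1).
    """
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_url(resolver: str, hostname: str) -> str:
    """Encode the query as the base64url 'dns' parameter of a DoH GET."""
    wire = build_dns_query(hostname)
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={b64}"

# Example (the resolver URL is a placeholder):
print(doh_url("https://doh.example/dns-query", "www.example.com"))
```

A DoH client then sends this URL over an ordinary HTTPS connection, so an on-path observer sees only a TLS session to the resolver, not the name being looked up.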


ESNI is a proposal to plug a hole in an extension of the Transport Layer Security (TLS) protocol (sometimes incorrectly referred to by the name of its obsolete predecessor, SSL), which allows encrypted communications to happen over a channel in a standard way for many applications.  In the web’s early days, a user’s browser would connect to a web server, and the server would return a signed certificate that could be used to set up a secure communications channel.

However, as the web matured, methods for hosting many different sites on the same server or set of servers took off, and there was no longer a 1:1 match between a domain name and a web server.  SNI is an extension that lets a client, like a web browser, specify “I want this site” so the provider can return the correct, unique certificate to set up the channel for that site, even though it could be serving lots of other sites too.  However, that “I want this site” is exchanged in plain text before the certificate is provided and before an encrypted channel is established.

That means savvy technology providers can simply look here, instead of logging DNS requests, to see which host names a customer is attempting to reach.  This is becoming far more viable as HTTPS Everywhere, user agent changes, and free certificate authorities like Let’s Encrypt make ‘secure by default’ the new reality for the web.  More TLS means more encryption, but also more consistency in finding hostnames in SNI fields.
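To illustrate just how exposed that field is, here is a small sketch (Python, standard library only; not a full TLS parser) that builds the server_name extension body defined in RFC 6066 and then reads the hostname straight back out of the raw bytes, exactly as a passive observer on the wire could.

```python
import struct

def build_sni_extension_body(hostname: str) -> bytes:
    """server_name extension body per RFC 6066: a ServerNameList with one
    entry of type 0 (host_name), all length-prefixed, all plaintext."""
    name = hostname.encode("ascii")
    entry = struct.pack(">BH", 0, len(name)) + name   # type + length + name
    return struct.pack(">H", len(entry)) + entry       # list length prefix

def read_sni(body: bytes) -> str:
    """What an on-path observer does: skip the length fields, read the name."""
    (list_len,) = struct.unpack_from(">H", body, 0)
    assert list_len == len(body) - 2
    name_type, name_len = struct.unpack_from(">BH", body, 2)
    assert name_type == 0  # host_name
    return body[5:5 + name_len].decode("ascii")

body = build_sni_extension_body("www.example.com")
print(read_sni(body))  # the hostname, visible before any encryption happens
```

ESNI’s goal is to encrypt exactly this blob, so that the read-back step becomes impossible for anyone without the right key.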


CT Logs

TLS is underpinned by a system of trust, particularly in the entities called Certificate Authorities that cryptographically sign certificates used to establish encrypted communications.  However, certificate authorities are fallible, and some have failed due to security breaches or by failing to abide by the rules and mis-issuing certificates.  Some of the most egregious offenses from failed certificate authorities like DigiNotar, Symantec, and WoSign/StartCom have resulted in technology solutions that make it possible to hold them accountable.  Certificate Transparency (CT) logs are a public ledger of certificates issued by authorities that allow their behavior to be monitored, but also create central clearinghouses of certificates that can be looked up by name or serial number.  More on that soon.


When a certificate is compromised, a certificate authority can revoke it.  While normally a certificate has a limited duration noted by an immutable expiration date embedded into it, certificates may be prematurely revoked if the holder or the authority is compromised.  The Online Certificate Status Protocol (OCSP) is a protocol clients like web browsers use to verify that a certificate they receive is still valid.  OCSP lets a client ask, “I just received this certificate, but is it valid?”  The request is obscure, but not secure:


The request contains a one-way hash of the distinguished name and the public key in the certificate, as well as the serial number of the certificate.  Unsalted hashes mean anyone could poll CT logs for all distinguished names, build their own hash lookup dictionary, and then compare an observed value against that dictionary.  The unhashed serial number makes this far easier still, as many CT logs support direct lookup of certificates by their serial number.  In the following screenshot, you can see a trivial lookup revealing the site my lab virtual machine was connecting out to.
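Here is a sketch of the dictionary attack the unsalted hashes permit (Python, standard library only; the names are invented for illustration, and plain strings stand in for the DER-encoded issuer names a real OCSP request hashes): precompute hashes of candidate names harvested from CT logs, then reverse any observed hash by lookup.

```python
import hashlib

# Candidate names, as could be harvested in bulk from CT logs.
# These are illustrative, not real harvested data.
candidates = [
    "CN=Example CA 1,O=Example Trust,C=US",
    "CN=Example CA 2,O=Example Trust,C=US",
    "CN=Another Issuing CA,O=Another Org,C=DE",
]

# Precompute an unsalted-hash -> name dictionary once...
dictionary = {hashlib.sha1(n.encode()).hexdigest(): n for n in candidates}

def deanonymize(observed_hash_hex):
    """...then any hash sniffed from a plaintext OCSP request is a lookup."""
    return dictionary.get(observed_hash_hex)

# Simulate observing one OCSP request on the wire:
observed = hashlib.sha1(candidates[1].encode()).hexdigest()
print(deanonymize(observed))
```

Salting the hashes, or simply encrypting the OCSP exchange, would break this kind of precomputation.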



This is not a new vulnerability.  In fact, RFC 6960, which defines OCSP, explicitly states:

Where privacy is a requirement, OCSP transactions exchanged using HTTP MAY be protected using either Transport Layer Security/Secure Socket Layer (TLS/SSL) or some other lower-layer protocol.

Incorrectly, some presume OCSP must be performed over insecure HTTP to address a ‘chicken and egg’ problem that would arise from trying to validate the certificate of a secure OCSP site in order to validate the certificate of another secure site.  While implementation details could be non-trivial, solutions like pinning the TLS certificates of well-known OCSP responders could address that challenge.

It is important, though, to consider that in the cat-and-mouse game between threats to privacy and privacy-protecting technologies, OCSP becomes a more readily available source of metadata on users as HTTPS adoption increases, CT logs become mandatory and pervasive, and insecure OCSP communications continue to dominate responder implementations.  As other privacy holes are closed by DoH and ESNI to keep users’ Internet activity private, OCSP is a challenge that must be addressed at scale as well.


Posted on January 5, 2019 in Uncategorized


In controlled environments, it’s useful to know when outbound connectivity is not restricted to a predefined list of required hosts, as many standards like PCI require.  Here’s a helpful one-liner that will query your Active Directory instance for computer accounts that are enabled, and then, for each of them, try to connect to a site from that machine, as orchestrated by WinRM.  If you use this script, just know that you will probably see a sea of errors for machines that cannot be reached from your source host via WinRM.  You can use any target and port you desire based on what should not be allowed in your environment.  I have blanked out the target host in the snippet below (so it will not work as-is, and so I don’t spam the poor soul who runs my usual non-secure HTTP test site); replace it with whatever host you wish to verify connectivity to.

Invoke-Command -ComputerName (Get-ADComputer -Filter {Enabled -eq "True"} -Property Name,Enabled |
    ForEach-Object { $_.Name }) -ScriptBlock {
    Test-NetConnection -Port 80 "" | Select-Object TcpTestSucceeded
}

The output will look something like this:

 TcpTestSucceeded PSComputerName RunspaceId 
 ---------------- -------------- ---------- 
             True YOUR-HOST-1    d5fd044c-c268-460e-a274-d3253adc8ce2 
             True YOUR-HOST-2    98206f71-80c1-4e7e-a467-fec489c542ee 
            False YOUR-HOST-3    d0b6cf57-e833-44a6-a7bb-aebd4d854b5c 
             True YOUR-HOST-4    14af618b-1ca7-4c1f-bb56-ce58dbd4af94

It’s a great sanity check before an audit or after major changes to your network architecture or security controls.  Enjoy!




PowerShell one-liner to find outbound connectivity via WinRM


Posted on June 24, 2017 in Programming, Security



SQL Injection with New Relic [PATCHED]



First off, I have found New Relic to be a great application performance monitoring (APM) tool.  Its ability to link transaction performance from the front-end all the way to back-end database queries that slow your web application is pretty awesome.  This feature lets you see specific queries that are running slowly, including the query execution plans and how much time is spent on processing various parts of a database request.  From their online documentation, the interface looks similar to this:

What’s not so awesome is when your APM’s method for retrieving this data creates a SQL injection flaw in your application that wasn’t there before.  In October 2016, I became aware of some strange errors when a DBA was trying to load SQL Server trace files into PSSDiag, due to a formatting problem in the trace file itself.  Our DBA discovered that unclosed quotation marks were causing problems with PSSDiag loading trace files.  So, how could an unclosed quotation mark even be happening?  It’s a hallmark of a SQL injection exploit, and so I began digging.

It appeared our ORM (NHibernate at the time) was sending unparameterized queries, and one of the field values had an unescaped quotation mark, which was causing the error in PSSDiag.  However, in other cases the same query, unique to an area of our code, would be issued with parameters.  Upon further digging, it actually appeared our application was submitting the same query twice: first as the parameterized version, and a second time with the parameter values replaced into the query string, sandwiched with a SET SHOWPLAN_ALL.  It looked a bit like this:

exec sp_executesql N'INSERT INTO dbo.Table (A, B, C) 
VALUES (@p0, @p1, @p2);select SCOPE_IDENTITY()'
,N'@p0 uniqueidentifier,@p1 uniqueidentifier, @p2 nvarchar(50)'
,@p0='{Snipped}',@p1='{Snipped}',@p2=N'I don''t even'

Followed by:

INSERT INTO dbo.Table (A, B, C)
VALUES ('{Snipped}', '{Snipped}', 'I don't even');select SCOPE_IDENTITY()

As you can see in the first example created by NHibernate, the word “don’t” was properly escaped; however, in the subsequent execution, it was not.  This second statement is sent by our very same application process, which New Relic instruments using the ICorProfilerCallback2 profiler hook to retrieve application performance statistics.  But it doesn’t just snoop on the process; it actually hijacks database connections to periodically piggyback an ‘echo’ of requests onto them to retrieve the metrics used to populate the slow queries feature.  The SET SHOWPLAN_ALL directive causes the subsequent request to return just the execution plan rather than actually returning data.

(DBAs will note this is actually not a reliable way to retrieve this data at all, as parameterized queries can and often do have very different query execution plans when parameter sniffing and lopsided column statistics are in play.  But that’s how New Relic does it.)

This is pretty bad, because now virtually every user-provided input that is sent to your database, even if programmed using secure programming practices to avoid SQL injection flaws, becomes vulnerable when New Relic is installed with the Slow Queries feature enabled.  That being said, New Relic does not send this second ‘show plan’ repeat of the statement for every query.  It samples, appending it only onto some executions of any given statement.  An attacker attempting to exploit this would not be able to do so consistently; although, repeated attempts on something like the username field of a login screen, which in many systems is likely logged to a database table that stores usernames of failed login attempts, would occasionally succeed when the subsequent SHOWPLAN_ALL and unparameterized version of the original query is injected at the end of the request by New Relic.
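The underlying failure mode – parameter values pasted back into the SQL text without escaping – is easy to reproduce outside of New Relic.  Here is a minimal sketch using Python’s sqlite3 as a stand-in for SQL Server (the table and input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE failed_logins (username TEXT)")

# Attacker-controlled input, e.g. the username field of a login form.
malicious = "nobody'); DELETE FROM failed_logins; --"

# Parameterized: the driver handles quoting; the value lands in the table as data.
conn.execute("INSERT INTO failed_logins (username) VALUES (?)", (malicious,))
assert conn.execute("SELECT COUNT(*) FROM failed_logins").fetchone()[0] == 1

# Naive substitution, akin to replaying the statement with the values pasted
# back in unescaped; executescript permits multiple statements, like a raw batch.
unsafe_sql = "INSERT INTO failed_logins (username) VALUES ('%s')" % malicious
conn.executescript(unsafe_sql)

rows = conn.execute("SELECT COUNT(*) FROM failed_logins").fetchone()[0]
print(rows)  # 0 -- the injected DELETE emptied the table
```

The parameterized INSERT stores the hostile string harmlessly as data; the naive substitution turns the very same string into a second, attacker-chosen statement.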


  • October 5, 2016: Notified New Relic
  • October 5: New Relic acknowledges issue and provides a workaround (disabling explain plans)
  • October 6: New Relic’s application security team responds with details explaining why they believe the issue is not exploitable as a security vulnerability.  Their reasoning is based on the expected behavior of SHOWPLAN_ALL, which would not execute subsequent commands.
  • October 6: I provide a specific example of how to bypass the ‘protection’ of the preceding SHOWPLAN_ALL statement that confirms this is an exploitable vulnerability.
  • October 6: New Relic confirms the exploit and indicates it is targeted for resolution in their upcoming 6.x version of the New Relic .NET Agent.  I confirm the issue in New Relic .NET Agent 5.22.6.
  • October 7: New Relic indicates they will not issue a CVE for this issue.
  • October 12: New Relic updates us that a fix is still in development, but a new member of their application security team questions the exploitability of the issue.
  • October 12: I provide an updated, detailed exploit to the New Relic security team to demonstrate how to exploit the flaw.
  • November 8: Follow-up call with the New Relic security team and .NET product manager on progress.  They confirm they have resolved the issue in an upcoming release of the New Relic .NET Agent.
  • November 9: The .NET Agent with the issue fixed is released.
  • May 26, 2017: Public disclosure


First off, I want to applaud New Relic on their speedy response and continued dialogue as we worked through the communication of this issue so they understood how to remediate it.  On our November 8 call, I specifically asked if New Relic would reconsider their stance of not issuing a CVE for the issue, or at least clearly identify it as a security update so developers and companies that use this agent would know they needed to prioritize it.  They thoughtfully declined, and I informed them that, if they did not, I would then be publicly disclosing the vulnerability.

Even if I don’t agree with it, I understand the position companies take about not proactively issuing CVEs.  However, I do believe software creators must clearly indicate when action is needed by their users to update software they provide to resolve security vulnerabilities.  Many IT administrators take the ‘if it’s not broken, don’t update it’ approach to components like the New Relic .NET Agent, and if no security urgency is communicated for an update, it could take months to years for it to be updated in some environments.  While some companies may be worried about competitors’ narratives or market reactions to self-disclosing, the truth is vulnerabilities will eventually be disclosed anyway, and providing an appropriate amount of disclosure and timely communication of security fixes is a sign of a mature vulnerability management program within a software company.

Also, be sure that any mitigation techniques you put in place actually work.  We stumbled upon another bug while working around the issue, subsequently fixed in 6.11.613, where trying to turn off the ‘slow query’ analysis feature per the New Relic documentation did not consistently work.

Given the potential gravity of this issue, I have quietly sat on this for almost 7 months to allow for old versions of this agent to be upgraded by New Relic customers, in the name of responsible disclosure.  I have not done any testing on versions of New Relic agents other than the .NET one, but I would implore security researchers to test agents from any APM vendor that collects execution plans as part of their solution for this or similar weaknesses.


Posted on May 26, 2017 in Security


Last weekend, I did some sprucing up of my public website.  It’s just a simple static one-pager, but why on earth keep a Windows box just to host that?  It was long overdue for me to move something simple onto something more cost-effective that I could manage more easily and securely.  In case others are looking for a quick recipe book on the same, here’s what I did last weekend:

Spin up an Encrypted Linux AMI in AWS

My objectives in this move were to (1) keep it simple, and (2) keep it secure.  I’ve already enjoyed great success using NGINX at Alkami in getting the best security posture possible for TLS termination, and NGINX can be more than just a reverse proxy – it can also work as a blazingly fast web server for static content too.  Dusting off my Apache skills just for this project seemed unnecessary, so for this recipe, we’re going to be setting up NGINX as the only server process for this static site.  If you aren’t familiar with NGINX… don’t fret – I’m going to make it easy to configure and explain each step along the way, although you can reference great AWS documentation here too.

Encrypt your AMI

If you’re a Linux guy, you probably have a distribution already in mind.  For this project, I’m fine with the standard machine image Amazon AWS puts together, and I don’t necessarily need to worry about which package manager I should use or what filesystem or startup configuration file layout I prefer to maintain.  Going with a plain vanilla Amazon Linux AMI (AMI ID ami-178ef900), I:

  1. Created a new AWS account.  This is very easy to do with a credit card, although I’ll be using the Free Usage tier of services for this recipe and don’t plan to go over those thresholds.
  2. Went to the EC2 console – that’s the Elastic Compute Cloud – and clicked the AMIs option under Images on the left.
  3. Searched for AMI ID ami-178ef900
  4. Right-clicked to select the result and chose Copy AMI, selected the Encryption option, and confirmed Copy AMI.
    1. Here you have an option of getting fancy with key management and creating a special key for this encrypted operating system image.  We don’t need to be fancy, we just need to be secure.  If we are using this AWS account just for a public website and for that single purpose, the default key for the account is just fine.

This part is important, because I want all my data at rest in my AMI to be encrypted.  “Why?”, you might ask?  Virtually anything you do in the cloud will have sensitive data at rest, at least in the form of SSH or website TLS certificate keys, if not critical corporate or client data.  Encryption is not an option – just do this.

Setup a Secure Security Group

Under Security Groups under Network & Security in the EC2 console, we’re going to define who can access our new AMI.  To start, we will only allow access to ourselves while we configure and harden it.  Only after we’re happy with the configuration will we open it up to the world.  To do this:

  1. In Security Groups, click Create Security Group at the top.
  2. Name your security group something simple, like webserversecuritygroup
  3. Add three Inbound rules
    1. HTTP from My IP only – this is how we will test insecure HTTP connections
    2. HTTPS from My IP only – this is how we will test secure HTTP connections
    3. SSH from My IP only – this is how you will connect to your new AMI with PuTTY or another terminal session manager
  4. By default your instance can connect Outbound anywhere.  Not a great idea for a production enterprise system.  For this recipe, we’re going to leave this with this default, but we could shore it up later once we get everything like OCSP working near the end.  Flipping on a lot of security early on can make this whole process much more painful, so our approach will be (1) to use secure defaults, (2) get functionality working, then (3) harden it.

Launch Your AMI

When the copy completed in about 5 minutes, I was able to right-click the encrypted AMI I had just copied from the source and click “Launch”.  Here I was able to select the options for the virtual machine I would boot with this AMI as the image, and in order:

  1. I used the t2.micro instance to keep it simple and free.
  2. Chose the webserversecuritygroup I created in the previous step
  3. Selected the VPC and subnet.  (If you aren’t sure about VPCs and subnets, your AWS account comes with a default one you can set up in the VPC option where you previously selected EC2.  The first option, just one public subnet, is fine for this application, because we won’t have back-end database or file servers that, in a more complex environment, we would architect into additional layers of security.  Anything more is completely unnecessary for this recipe, and quite honestly, I prefer to keep my various recipes in separate AWS accounts to make cost tracking easier.  Don’t complicate this for yourself – one public subnet is all you will need and use.)
  4. You will need to choose what SSH key you will use to connect to the instance in our post-modern, password-less world.  If you haven’t set up an SSH key yet, you will create one here and just download the .PEM file.
  5. Enabled protection against accidental termination.
  6. Clicked Review and Launch, then waited about 10 minutes for the machine to spin up.
  7. You will need to Download

Grab a Constant IP Address

While I was waiting for that, I wanted an Elastic IP.  AWS will generate a public IP for your EC2 instance, but that public IP isn’t guaranteed to stay with you.  We want that guarantee, and at about $3/mo for an Elastic IP address, it’s worthwhile not to have to muck with DNS updates any time I may reboot or rebuild a box and potentially suffer through downtime.  To get and use an Elastic IP:

  1. Go to Elastic IP’s under Network & Security in the EC2 console.
  2. Click Allocate New Address
  3. Once the EC2 instance we’re firing up is up and running, right-click your new address and choose Associate Address.  We kept it simple and only have a single EC2 instance in this AWS account, so it’s easy to select the only instance and associate the address with it.
  4. Thanks to the magic of the software-defined networking stack of AWS, you don’t need to mess with ifconfig or reboot your AMI once you make this change – you’re ready to go.

Prepare the Box

In an enterprise production system, we’d probably already have pristine golden images, fully patched and tailored for our needs.  Here we don’t – we just went with a reasonable default.  In either case, and especially in this one, we need to make sure we have all the latest patches, so we’ll:

  1. Connect to the box with PuTTY, using the key-based login of the .PEM file we generated or chose when we launched our AMI.
  2. Enter ec2-user as our username when prompted
  3. Enter the passphrase for our key in the .PEM file when prompted, and we’re in.
    1. And if you’re not in, either you don’t SSH much, or you’ve forgotten how to use PuTTY.  Documentation is your friend.
  4. Type sudo su to get root access
  5. Apply updates for your packages, type yum update
  6. Remember, we’re using NGINX, and it’s not installed on the default Linux AMI, so we’ll simply do yum install nginx to get it installed
  7. There are other nice things we’ll use in the epel-release that make using advanced NGINX features easier, so let’s also do yum install epel-release

Configure NGINX

NGINX is pretty simple to configure once you know which options need to be set.  Just like the Linux AMI, it comes out of the box with relatively sane defaults, and we’ll use those as a starting point.  There are two files of special significance: /etc/nginx/nginx.conf, which is the overall configuration for NGINX, and which can ‘include’ other files from /etc/nginx/conf.d/.  We’ll use this separation to make minimal changes to NGINX’s overall configuration and keep our site configuration centralized in one site-specific file, making it easy to add another site to the same box in the future.

Make sure of a few simple things first in your global /etc/nginx/nginx.conf file.  I’m going to reproduce mine below, with inline comments explaining each change.

# For more information on configuration, see:
#   * Official English Documentation:
#   * Official Russian Documentation:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    proxy_cache_path /tmp/nginx levels=1:2 keys_zone=default_zone:10m inactive=60m;
    proxy_cache_key "$scheme$request_method$host$request_uri";

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    include /etc/nginx/conf.d/*.conf;  # This is the line that brings in our site-specific configuration files

    index index.html;  # My site's default is just index.html, so I've simplified this line to make that the only default document

    server {
        listen 80 default_server;       # Listen on insecure HTTP IPv4 port 80 in this server block
        listen [::]:80 default_server;  # Also listen on insecure HTTP IPv6 port 80 in this server block
        server_name localhost;          # This will serve as a catch-all, regardless of the domain name specified
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            limit_except GET {
                deny all;  # Any HTTP verb that is not GET just gets denied.  Remember, we have a simple static site.
            }
            return 301;    # This whole 'server' block is for port 80 insecure traffic only.  We want users redirected to HTTPS always.
        }

        add_header Content-Security-Policy "default-src 'none'; script-src 'self'; img-src 'self'; style-src 'self'";  # We'll cover this CSP line later
        add_header Strict-Transport-Security "max-age=31536000" always;  # Instruct the browser to never again ask for this site except using HTTPS
        add_header X-Content-Type-Options nosniff;  # Tell the browser not to second-guess our Content-Type response headers; an old browser security problem
        add_header X-Frame-Options DENY;            # We don't use IFRAMEs, so if someone tries to frame this site in a user's browser, the browser should just error out
    }
}
As you’ll note from your own initial NGINX configuration file, I removed a lot of commented-out lines and added a few things, documented above with reasons.  The meat of our site, though, will be our next file, which enumerates the settings for our target domain.  Before we go through that, let’s establish exactly where the paths we’ll reference in that configuration file are going to live.  We’re going to put our website in /var/www/  We’ll place any secure keys, like our TLS certificate, in /var/www/  I want the logs for this site in a predictable place that isn’t co-mingled with other sites I might add in the future, so I’ll also need a /var/www/ directory.  So, let’s do this at the command line:

  1. Create the www directory: mkdir /var/www
  2. Create the site-specific directory: mkdir /var/www/
  3. Set permissions on these directories by issuing
    1. chmod 755 /var/www
    2. chmod 755 /var/www/
  4. Set ownership on these directories to the root user and the root group by issuing
    1. chown root:root /var/www
    2. chown root:root /var/www/
  5. We’d like to place our site files in the public directory, but we don’t want to have to act as root each time we edit them… so we’ll create that one slightly differently
    1. mkdir /var/www/
    2. chown root:ec2-user /var/www/
    3. chmod 775 /var/www/
    4. This way, anyone in the ec2-user group can also edit the files herein
  6. NGINX will need to read the keys for this site, so we’ll need to do some special permission settings on the key subdirectory
    1. mkdir /var/www/
    2. chown nginx:nginx /var/www/
    3. chmod 550 /var/www/
    4. Now, only NGINX can read the keys herein
  7. NGINX will need to write log files out, so we’ll do something mostly similar for the log directory
    1. mkdir /var/www/
    2. chown nginx:nginx /var/www/
    3. chmod 750 /var/www/
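Pulled together, the numbered steps above look like the script below.  I’m using a scratch PREFIX, a placeholder domain (example.com), and hypothetical subdirectory names (public, keys, logs) of my own choosing, so the layout can be dry-run without root; on the real server, PREFIX is empty and you run the commands as root.

```shell
# Sketch of the directory recipe above (placeholder names; see lead-in).
PREFIX="${PREFIX:-$(mktemp -d)}"
SITE="example.com"

# Steps 1-3: the top-level structure, world-readable
mkdir -p "$PREFIX/var/www/$SITE"
chmod 755 "$PREFIX/var/www" "$PREFIX/var/www/$SITE"

# Step 5: site content, editable by the ec2-user group
mkdir -p "$PREFIX/var/www/$SITE/public"
chmod 775 "$PREFIX/var/www/$SITE/public"

# Step 6: keys readable only by nginx
mkdir -p "$PREFIX/var/www/$SITE/keys"
chmod 550 "$PREFIX/var/www/$SITE/keys"

# Step 7: logs writable by nginx
mkdir -p "$PREFIX/var/www/$SITE/logs"
chmod 750 "$PREFIX/var/www/$SITE/logs"

# Steps 4/5/6/7 ownership must be done as root on the real box:
#   chown root:root     /var/www /var/www/$SITE
#   chown root:ec2-user /var/www/$SITE/public
#   chown nginx:nginx   /var/www/$SITE/keys /var/www/$SITE/logs
```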

Great!  Now we have a sturdy directory structure to work from.  One last thing we need to do before we configure our site file is establish what’s in that key subdirectory.  There are a few things we need:

  1. We need a certificate from our website issued by a certificate authority in a standard .PEM format.  You have a few options here:
    1. You can get a free DV cert from StartSSL.  This is the dirt-cheap solution, but given StartSSL was recently purchased by WoSign in a clandestine acquisition, and WoSign has had multiple serious security lapses, you should not trust or support this entity.
    2. You can get a free DV cert from Let’s Encrypt, if you’re savvy enough to set up the automated renewal these 90-day certificates require.  If you’re that savvy, though, you probably aren’t reading this blog, because you likely already know much of the NGINX configuration I’m about to describe.  In addition, you would need to automate the rest of the configuration to handle frequent certificate rotations and the updating of the key files and DNS entries behind some of the fancier things we will do near the end, like DANE.
    3. You could buy a relatively cheap DV certificate from an authority like GoDaddy
    4. You could pony up for a mid-tier OV certificate from an authority like Entrust
    5. If, and only if, you have a registered business with a DUNS number, you can get the top-tier assurance EV certificate from an authority like Entrust
    6. … and let’s face it, HTTP is so 1994.  Soon Chrome will warn users who visit HTTP sites that your site is insecure by default, and that’s not what you want to project, so you will pick one of the 5 options above.
  2. Your certificate is likely issued by an intermediate certificate, which is in turn signed by your certificate authority’s trusted root; that intermediate issues the certificate you acquire.  Sometimes, instead of three links in this chain (root, intermediate, leaf), there are four (root, intermediate1, intermediate2, leaf).  This is important because you will need a few files here:
    1. Your leaf’s private key, what I will call below
    2. Your leaf’s public key + your intermediate(s) public keys, what I will call below.
    3. Your leaf’s public key + your intermediate(s) public keys + your root certificate authority’s public key, what I will call below.
    4. Some very big and unique prime numbers used for Diffie-Hellman key exchange, what I will call below
  3. To accomplish this, you will use openssl.  You will not use online converters to which you upload your private keying material and let them do the work for you.  You will not use online converters.  You will never, ever upload your private key anywhere unencrypted, and you will never supply or keep its passphrase in any connected container.  Here are some openssl cheatsheet commands for you, presuming you obtained a .PFX file that contains your public and private key combined.
    1. Export the private key for your leaf certificate into a file from a file.  These literally are the keys to your kingdom – the private key without encryption or passphrase protection.
      openssl pkcs12 -in -out -nocerts -nodes
    2. Export the public key for your leaf certificate into a file from a file.
      openssl pkcs12 -in -out -clcerts -nokeys
    3. Export your issuer’s root public certificate into a file
      openssl pkcs12 -in -out -cacerts
    4. Create your DH primes for key exchange.  You don’t have to understand what this is in-depth, but you should understand it could take 10-15 minutes to complete.
      openssl dhparam -out 4096
    5. Now, let’s create that chained file.  OpenSSL strangely doesn’t export a chain in the proper order.  You can either manually save the intermediate certificate in a .PEM format (called in the example below) and do:
      OPTION 1) cat >
    6. OR, you could alternatively type this and hand edit the resulting file to order the exported certificates in reverse order
      OPTION 2) openssl pkcs12 -in -nodes -nokeys -passin pass:<password> -out
    7. And finally, we need to get our chained+root file, so we can do:
      cat >
    8. (Finding the right OpenSSL commands can be time consuming if you don’t know them already.  In this example, I’m presuming you generated your CSR using IIS or another Windows-based system and completed it there to get the resulting PFX we worked from, but if you read the OpenSSL documentation, it can handle many different input formats that don’t require Windows or a PFX artifact at all.)
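Since I can’t ship a real CA-issued PFX, here is a runnable sketch of the cheatsheet above that first manufactures a throwaway self-signed certificate and wraps it in a PFX; every file name in it (site.pfx, site.key, site.crt, dhparam.pem) is a placeholder of my choosing, not anything a CA gives you.

```shell
# Hypothetical end-to-end walk-through of the openssl commands above.
WORK=$(mktemp -d) && cd "$WORK"

# Stand-in for the PFX you'd export from IIS or another Windows system:
openssl req -x509 -newkey rsa:2048 -keyout leaf.key -out leaf.crt \
  -days 30 -nodes -subj "/CN=example.com" 2>/dev/null
openssl pkcs12 -export -in leaf.crt -inkey leaf.key \
  -out site.pfx -passout pass:changeit

# 1. Export the unencrypted private key -- the keys to your kingdom:
openssl pkcs12 -in site.pfx -passin pass:changeit \
  -out site.key -nocerts -nodes

# 2. Export the leaf's public certificate:
openssl pkcs12 -in site.pfx -passin pass:changeit \
  -out site.crt -clcerts -nokeys

# 3. With a real PFX, -cacerts -nokeys would pull out the CA certificates.

# 4. Custom DH parameters (use 4096 in production; 1024 here only for speed):
openssl dhparam -out dhparam.pem 1024 2>/dev/null

# 5/7. The chained files are plain concatenations, leaf first:
#   cat site.crt intermediate.crt          > chained.crt
#   cat site.crt intermediate.crt root.crt > chained_root.crt
```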

I know you thought we’d be done and ready to set up the NGINX configuration file by now… and we are.  But first, I want to explain some of the concepts and options we’re about to enable:

  1. This setup will only enable TLS 1.2 with 256-bit ECDHE and DHE RSA ciphers, leaving in the dust IE 10, Android 4.3 and earlier, and just about every Java client out there as of this writing.  I’m choosing security over accessibility so I get forward secrecy and that sweet, sweet 100% Protocol Support rating in Qualys.  If this were a production legacy site, you’d want to think hard about these options, because a granny on a Tracfone stuck on Android 4.2 could be frustrated by your choices here, frustrating your call center as well.
  2. We don’t want to deal with CRIME-mitigation, so gzip is going to be disabled.  A complex production site may want to weigh this or implement gzipped cache assets differently, but our use case will keep it simple.
  3. We will use custom Diffie-Hellman (DH) prime numbers.  Default implementations often use “well-known” primes that weaken your security and amplify the impact of vulnerabilities like LOGJAM and FREAK.
  4. We will enable OCSP stapling to improve page load times.  This means NGINX will occasionally reach out to your CA’s OCSP responders, so you can’t turn off outbound connectivity in your EC2 security group without ensuring DNS and the ports used for this lookup remain open.
  5. We are going to PIN our TLS certificate public key using HTTP Public Key Pinning (HPKP)
    1. This means the server will tell the browser, “You should expect to always see THIS certificate in a certificate chain coming from this site for at least THIS amount of time”
    2. It also means we need to get a 2nd certificate as a backup, which is not part of the certificate chain of the first certificate.
      1. Which means double your money to buy a second certificate… hopefully with a different expiry period from the first
      2. Or, you get a dirt-cheap DV certificate as your emergency backup, and you use an EV or OV certificate as your primary one.
    3. To generate these hashes, you can check out Scott Helme’s HPKP toolset – super useful!  Or, Qualys’ SSL Server Test can tell you at least the hash of the currently-presented certificate.
  6. We are going to instruct the browser that from now on, NEVER ask for this page over HTTP (or let Javascript make such a request) – HTTPS only from here on out.  This is the Strict-Transport-Security header, otherwise known as HSTS.
  7. We are also going to have a tight policy on what our website should do using the Content-Security-Policy header, also known as CSP.  Beware this header – the time it takes to test your policies is proportionate to the complexity and number of pages on your site.  If you are a web developer, you can open up Chrome DevTools or Firebug to view problems with your policy of “default-src ‘none’” and handle each type of error one by one to get a custom, strict policy.  Various groups debate the usefulness of CSP, and Google recently cast doubt on its efficacy.  I wanted the bells and whistles, so it was worth 15 minutes for me to get my one-page website working with it… but if you notice browser rendering problems, you will want to strike the relevant add_header line completely.
  8. We are going to instruct browsers not to guess on the MIME content types of our resources, but rather to just trust our Content-Type HTTP response headers.  Some older browsers had security issues in their code that tried to read files to determine this.  Modern browsers don’t have this issue (and older browsers won’t be able to speak the TLS 1.2 baseline requirement in this configuration anyway), but we simply want to deter the practice.
  9. Our site should never be in an IFRAME, so to protect from clickjacking, we instruct the browser to enforce this expectation.
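If you’d rather compute the HPKP pin-sha256 values yourself instead of using the toolsets mentioned above, this is the standard openssl pipeline: hash the certificate’s Subject Public Key Info (not the whole certificate) and base64 it.  I demonstrate against a throwaway self-signed certificate here – point it at your real leaf or backup certificate instead (example.com is a placeholder).

```shell
# Generate a throwaway certificate to pin against (stand-in for your real one):
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=example.com" \
  -keyout "$DIR/pin.key" -out "$DIR/pin.crt" 2>/dev/null

# Extract the public key, DER-encode it, SHA-256 it, then base64 the digest:
PIN=$(openssl x509 -in "$DIR/pin.crt" -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | base64)
echo "pin-sha256=\"$PIN\""
```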

And, without further ado, let’s use these files we created in our key subdirectory and the knowledge of the features we will enable to configure NGINX for our website:


server {
    listen 80;                 # For insecure HTTP port 80...
    server_name;               # And for either domain name, with and without the 'www'...

    # Discourage deep links by using a permanent redirect to the home page of the HTTPS site
    return 301 https://$host;  # Redirect to the HTTPS version
}

server {
    listen 443 ssl;            # But for secure HTTP port 443...
    server_name;               # And for either domain name, with and without the 'www'...

    # Server headers
    server_tokens off;         # Don't show the end-user the version of NGINX we run.  Security through obscurity...

    ssl_certificate /var/www/;      # We serve up the intermediaries and our leaf public key; mobile devices need this.
    ssl_certificate_key /var/www/;  # Our private site key used for the transport encryption
    ssl_protocols TLSv1.2;          # We are only going to enable TLS 1.2
    ssl_ciphers 'AES256+EECDH:AES256+EDH:!aNULL';  # First prefer Elliptic Curve Diffie-Hellman AES-256 or better, then regular DH AES-256 or better... or bust!
    ssl_prefer_server_ciphers on;   # If the client prefers different ciphers... too bad!  We make the rules of the cipher negotiation.

    # DH primes
    ssl_dhparam /var/www/;     # Use our custom DH parameters

    # For OCSP stapling
    ssl_stapling on;           # Enable OCSP stapling
    ssl_stapling_verify on;    # Make sure the stapling responses match our chained+root file
    ssl_trusted_certificate /var/www/;  # ... THIS chained+root file
    resolver;                  # Use these nameservers to resolve OCSP servers for the stapling

    # For Session Resumption (caching)
    ssl_session_cache shared:SSL:10m;  # Allow TLS resumption for up to 10 minutes to improve page-to-page navigation speed
    ssl_session_timeout 10m;           # Allow TLS resumption for up to 10 minutes

    # HPKP - public key pinning.  These are the hashes of the two leaf certificates I use for
    # public key pinning - start with a low max-age, then ratchet it up once tested.
    add_header Public-Key-Pins 'pin-sha256="qo5XNG/l96xuzO9F+syXML4wY3XAOM3J4r8mquhuwEs="; pin-sha256="RwJopnm+J6FZTS2jQBnGltzagjpTt62N8Oc4nGEW0Mo="; max-age=3600';

    location / {
        root /var/www/;        # Our website is served from this root directory
        index index.html;      # If no page is specified in a URL and index.html exists for a directory, serve it as the default document
        access_log /var/www/;  # Store the access log for this particular site in my custom log file
        expires 30d;           # Let the browser cache these pages for 30 days; tune if you manually update your static site often... but I bet you won't.
        proxy_cache default_zone;
        gzip off;              # Don't compress, so we avoid TLS issues like the CRIME attack

        limit_except GET {
            deny all;          # If the browser requests an HTTP verb other than HEAD or GET, deny it.
        }
    }

    add_header Content-Security-Policy "default-src 'none'; img-src 'self' data:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' '; font-src 'self'";
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options DENY;
}

Once your site is up and running, don’t forget to update the Security Group attached to your EC2 instance to make HTTP and HTTPS available from Anywhere.  Go ahead and leave SSH as “My IP”, or simply remove the SSH rule when you are done and add it back when you need it, since your IP can shift between connections to this server.



That’s all for this configuration installment.  Next time, I’ll probably be covering the how-to’s of DNSSEC, DANE, and OpenPGP PKA records for DNS-based security assertions and key publishing, but at least by the end of this article, you should be able to configure a relatively secure NGINX static content HTTP server, with many of the security bells and whistles enabled.


Posted by on September 13, 2016 in Uncategorized


First Impressions Matter

When it comes to researching vendors, first impressions matter so much.  I tend to judge any potential vendor by its sales apparatus, not just because it is the first impression, but because that positioning and interaction will tell you so much more than any press release, executive ‘corporate culture’ communication, or other third-party source of information on financial or industry strength.  Things I notice right off the bat that influence my decision to continue engagement or build trust:

Is the sales channel optimized?

Building great companies and great products is all about optimization at a later stage of an organization’s maturation life cycle.  Idea-driven founding staff are joined or replaced by data-driven staff as a company’s offering is validated and it grows to benefit from economies of scale and to show profitability to patient investors and equity holders.  The distance between my interest and the vendor’s name recognition is a marketing issue, but the distance between my identification of a vendor and getting a meaningful response from their sales organization is a sales/company issue.  If I’m clicking through a brochure-ware website to find the place to start engagement, filling out a general ‘Contact Us’ form, navigating a tedious phone tree, or heaven forbid, clicking a ‘mailto:’ link to type out my interest, then I’ve already learned a lot about your company.  I’ve learned one of the following statements is true:

  1. The number of client contacts you deal with through this channel is relatively small: you are new or slow to acquire customers through it
  2. Your company is too focused on the ideation and ‘fun’ phase of the business to optimize your sales channel – your company may not be mature enough for my needs
  3. Your company is too focused on serving existing customers (keeping the wheels on) to work on growing your business by optimizing sales channels – your company may not be ready for my needs
  4. Your company is mature but not thinking about data-driven results, which tells me your product probably isn’t either.

What is the quality of the first contact?

Did the person who responded to my inquiry bother to look up the domain of my e-mail address to check out what my company does?  Does that sales executive reference recent PR releases we made?  This is a high-quality contact, and it shows me your sales executives aren’t quote-monkeys or order-takers – they are relationship-builders.  Or did I just get a form letter thanking me for my form entry and letting me know someone may get back to me about whatever my interest might be?  If it is the latter, this tells me:

  1. Your company will require me to tell you what I need, and you probably won’t ask.  I’ll need to know what I want and be prepared to demand it.  Since from the start of the relationship there was little concern for finding a good fit, I will have extra heavy lifting to do.
  2. If you are asking what my interest is and you don’t already know, then that probably means you haven’t placed me in any segment or internal classification that represents the nature of my potential demand.  That tells me the out-of-the-box customization of the solution may be low, or if not, you are not capitalizing on the specialized needs of different classes of customers.
  3. If I get an “I don’t know” in the first conversation, that is okay, but it tells me I’m either working with someone who does not know their product well (new or inexperienced), or the sales group is not connected to the product group, which is a more fundamental problem.  The most important communication line is (in my view) between sales and product, and secondly between sales and operations, to ensure that: (1) pre-sales, the right solution is sold to the customer – if that doesn’t happen, everything else will fail – and (2) post-sales, the requirements are appropriately communicated to deliver a synchronized expectation and final result.

What is the speed of the first quality contact?

  1. If I get a poor-quality first contact very fast, I presume I’m talking to someone young and hungry.  This can be a good sign if I need a lot of attention or customization and you’re not a large player.  This is a very bad sign if you have a signature single product and are an established company, as I assume there’s inadequate sales training or high sales churn, both of which send a negative signal about your company’s position and our potential together.
  2. If I get a high-quality first contact very slowly, I’m not thrilled, but I’m willing to wait and pay for quality.  Not everyone is, but that’s how I do business.
  3. If I get a poor-quality contact very slowly, you really shouldn’t be in business, and you probably won’t be anymore very soon.



Posted by on August 25, 2015 in User Experience


Alkami: Genesis

In the summer of 2008, I was preparing a large strategic product shift within Myriad Systems, Inc. (MSI) to unify a suite of ancillary banking products I had built and managed: remote deposit capture, merchant capture, expedited payments, e-Statements, e-Notices, check imaging, and a one-to-one marketing solution, among many others.  A key opportunity presented itself: we had several large and progressive financial institution clients interested in what an MSI online banking offering could look like, particularly given the relatively poor user experience of the online banking offerings at the time.  This would have completed a big piece of the end-user product portfolio for MSI, and as daunting as building online banking from the ground up is, it stood to provide substantial strategic value to our whole suite.

Computer Services, Inc. began courting MSI and started a full acquisition in August of 2009. It was clear CSI’s intent was to maximize the value of the print and mail operational assets of MSI, but it had little interest in its online banking products other than to preserve existing revenue streams. This disinterest in the strategic vision of the online web applications as a product portfolio was the impetus for me to pursue my personal career interests of building a best-in-breed online banking solution outside of the MSI umbrella.

Jeff Vetterick and Richard Owens, two industry colleagues who had previously had stints at MSI, reached out when they heard of my desire to move on and continue building online banking, and encouraged me to contact Gary Nelson, an acquaintance who had been part of the very successful build and sale of Advanced Financial Services (AFS) to Metavante (an interesting and great story in and of itself) and who had interest in this as well.  After AFS, Gary had many interests and projects, a significant one being an idea to build a learning management system that provided tools for schools to impart educational content in an online tool where students would have a fictitious bank account balance and, through different learning modules, understand concepts of spending, budgeting, and the time-value of money.

When I spoke to Gary in September, I found this initiative was in wind-down: the project had exceeded its funding, and only an IT manager had been retained as a temporary contractor to document and turn over all the company’s assets.  Gary engaged me as a consultant to analyze the source code developed by that team and determine whether it had any value as an asset for sale as the company was closed up.  I reviewed the company’s source and patents, but when I started looking at the few cloud VMs and pulled open the Subversion repository where the source code was supposed to be, I found a shocking lack of value: what did exist were some architectural documents and some demoware in the form of static screens coded into a .NET MVC ‘shell project’ with no actual implementation or integration of the key concepts around educational content delivery and assessment.  Looking back at the Finnovate presentation this company’s team had given, I found only that minimal proof of concept presented on stage, and little more.

The internal company documentation in the form of wikis, agile storyboards, and some unorganized developer notes showed no cohesive technical direction or architectural plan.  When I began reviewing invoices from consultants and local contractors, a sad picture materialized: I felt Gary and the other investors had been somewhat duped by a mixture of technical ineptitude and probably some overbilling greed by individuals and local development ‘firms’.  I delivered the news that the assets I could find and review had little fire-sale value, other than perhaps one patent that had some intrinsic value but no implementation.  I exemplified the situation by opening the source code for the portion of the system that purported to calculate a ‘relationship score’ from how well an end-user understood financial literacy content and from their behavior in their accounts, transactions, and progress toward their financial goals; the source code simply ran in an endless empty loop, doing nothing.  Demoware.

After I delivered the news to Gary and began preparing for whatever my next endeavor would be, he suggested I reach out to Stephen Bohanon, a consultant with Catalyst Consulting Group who had previously been a high-performing salesperson with AFS.  After several discussions, it became clear Gary had an appetite to attempt a pivot into the financial technology web application space, and both Stephen and I were interested in building a world-class online banking solution – he as a formidably talented sales executive to build relationships and grow the organization, and I to grow a technical team that would architect and build our next-generation online banking user experience.

And with no pre-existing source code, and only great ideas, tremendous perseverance, and some money (thanks, Gary!), we founded Alkami.


Posted by on June 25, 2015 in Uncategorized


Security Advisory for Financial Institutions: POODLE

Yesterday evening, Google made public a new form of attack on encrypted connections between end-users and secure web servers using an old encryption technology called SSL 3.0.  This attack could permit an attacker who has the ability to physically disrupt or intercept an end-user’s browser communications to execute a “downgrade attack” that could cause an end-user’s web browser to fall back to the older SSL 3.0 encryption protocol rather than the newer TLS 1.0 or higher protocols.  Once an attacker successfully executed a downgrade attack on an end-user, a “padding oracle” attack could then be attempted to steal user session information such as cookies or security tokens, which could be further used to gain illicit access to an active secure website session.  This particular flaw is termed the POODLE (Padding Oracle On Downgraded Legacy Encryption) attack.  At the time this advisory was authored, US-CERT had not yet published a vulnerability document, but it has reserved advisory number CVE-2014-3566 for its publication, expected today.

It is important to know this is not an attack on the secure server environments that host online banking and other end-user services, but a form of attack on end-users themselves who are using web browsers that support the older SSL 3.0 encryption protocol.  To target an end-user, an attacker would need to be able to capture or reliably disrupt the end-user’s web browser connection in specific ways, which generally limits the scope of this capability to end-user malware, attackers on the user’s local network, or attackers who control significant portions of the networking infrastructure the end-user is using.  Unlike previous security scares in 2014, such as Heartbleed or Shellshock, this attack targets the technology and connection of end-users.  It is one of many classes of attack that target end-users, and it is not the only such risk posed to end-users who have an active network attacker specifically targeting them from their local network.

The proper resolution for end-users will be to update their web browsers to forthcoming versions that completely disable the older and susceptible SSL 3.0 technology.  In the interim, service providers can disable SSL 3.0 support, with the caveat that IE 6 users will no longer be able to access those sites without making special adjustments in their browser configuration.  (But honestly, if you are keeping IE 6 a viable option for your end-users, this is only one of many security flaws those users are subject to.)  Institutions that run on-premises software systems for their end-users may wish to perform their own analysis of the POODLE SSL 3.0 security advisory and evaluate what, if any, server-side mitigations are available as part of their respective network technology stacks.
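For NGINX operators, the interim server-side mitigation of disabling SSL 3.0 is a one-line change – a sketch, assuming the directive sits in your http or server block; equivalent settings exist in other servers (for example, Apache’s SSLProtocol directive):

```nginx
# Offer only TLS; SSLv3 is deliberately absent, so the POODLE downgrade
# target no longer exists on the server side.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```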

Defense-in-depth is the key to a comprehensive security strategy in today’s fast-developing threat environment.  Because of the targeted nature of this type of attack, and its prerequisites for a privileged vantage point to interact with an end-user’s network connection, it does not appear to be a significant threat to online banking and other end-user services, and this information is therefore provided as a precaution and for informational purposes only.

All financial institutions should subscribe to US-CERT security advisories and monitor the publication of CVE-2014-3566, once released, for any further recommendations and best practices.  The resolution for end-users – updated versions of Chrome, Firefox, Internet Explorer, and Safari that remove all support for the older SSL 3.0 protocol – will be announced through the respective vendors’ release notification channels.  For more information once published, refer to the Google whitepaper directly at


Posted on October 15, 2014 in Security


Alkami: A Retrospective

What a wild and crazy journey the last five years have been.

When I started this blog in 2009, it was shortly after I had inked a deal with an angel investor, journeyed down the road with him and my other co-founder, and established Alkami Technology.  Against significant odds, this October marks the five-year anniversary of a roller-coaster ride that has established Alkami as the clear leader in the online banking space.  Before jumping into this endeavor, I was no stranger to walking products from ideation to realization or to running enterprise services in a SaaS model.  But doing all of that against the tremendous downside risks of the start-up world, as the new kid on the block in a field of established, very well-funded competitors, has been challenging. Actually, it’s been brutal.

Reflecting on the past sixty months, I’ve started to pull together my notes from the early days, both before and after founding Alkami, and I will be commemorating this milestone with a series of blog posts on company history – the why and the how, as well as some valuable and hard-learned lessons along the way.  No one, and no company, finds tremendous success spontaneously.  While an Inc. 500 splash piece might portray a company’s success as a serendipitous fairy tale, only through a voracious appetite for risk, an iron stomach for failure, and a committed and skilled team does any great company find its footing.  It’s a great feeling to walk into the office every week and see the new, fantastic talent we’ve added to our team and the forward-leaning designs and concepts in our flagship solution.  It’s also deeply satisfying to know your personal efforts and sacrifices made that team and that company possible.

This series of posts will not be chest-beating or a self-congratulatory account of our accolades.  Our work is far from over, and I judge success on a much longer time horizon.  But it will be a real account of our origin story, entrepreneurship, missteps and course corrections, and moving from start-up to scale-out in a slow-sales-cycle, highly regulated industry.  It’s one thing to have a hip product idea you incubate through an accelerator and debut on a demo day. It’s a very different thing to bootstrap a firm and an entire platform in a market where you must answer a few hundred RFP questions just to get a prospect to talk with you, clear many more hurdles to land a single sale, and close many sales before earning that kind of investor attention.

Those pieces are now in place and solidifying every day as we carry an aggressive product and technical vision toward its successful conclusion.  I’m honored to have found great working partners and to have worked (and, for the most part, to still work) with some of the most committed and skilled people across a variety of disciplines along the way.  As we look back at five formative years, I’m eager to chronicle our story and to welcome others who will extend and craft our bright future. Stay tuned.


Posted on October 1, 2014 in Uncategorized


Security Advisory for Financial Institutions: Shell Shock

“Shell Shock” Remote Code Execution and Compromise Vulnerability

Yesterday evening, the DHS National Cyber Security Division/US-CERT published CVE-2014-6271 and CVE-2014-7169, outlining a serious vulnerability in Bash, a widely used command-line interface (or shell) for the Linux operating system and many other *nix variants.  This software bug allows unauthenticated, unauthorized malicious users to write files on remote devices or execute arbitrary code on remote systems.  Because the vulnerability involves the Bash shell, some media outlets are referring to it as Shell Shock.

Nature of Risk

By exploiting this parsing bug in the Bash shell, an attacker can compromise other software on a vulnerable system, including operating system components; the OpenSSH server process and the Apache web server process are known attack vectors. Because this vector potentially allows an attacker to compromise any element of a vulnerable system, consequences ranging from website defacement to password collection, malware distribution, and retrieval of protected materials such as private keys stored on servers are all possible, and US-CERT has assigned it its highest CVSS impact score of 10.0.
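To make the parsing bug concrete, the probe widely circulated for the original CVE-2014-6271 exports a function-like definition with a trailing command through an environment variable; a vulnerable Bash executes the trailing command while merely importing the environment.  A minimal sketch, to be run against a test system rather than production:

```shell
#!/bin/sh
# Export an environment variable that looks like a function definition
# followed by an extra command.  A vulnerable bash runs the extra command
# ("echo vulnerable") while importing the environment; a patched bash
# treats the variable as inert data and prints only "probe".
if env x='() { :;}; echo vulnerable' bash -c "echo probe" 2>/dev/null \
    | grep -q '^vulnerable$'; then
  echo "bash is vulnerable to CVE-2014-6271"
else
  echo "bash appears patched against CVE-2014-6271"
fi
```

Either way, the probe is harmless in itself: the injected command only prints a marker string.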

Please be specifically aware that a patch was issued to close the original CVE-2014-6271; however, that patch did not sufficiently close the vulnerability.  The current iteration is tracked as CVE-2014-7169, and any patch applied to resolve the issue should specifically state that it closes CVE-2014-7169.  Because “worms”, or automated infect-and-spread scripts that exploit this flaw, can compromise vulnerable systems in an unattended manner, any vulnerable device exposed to an untrusted network, such as a vendor-accessible extranet or the public Internet, should be considered suspect, isolated, and reviewed by a security team.  Any affected device that holds private keys should have those keys treated as compromised and reissued per your company’s information security policies and key management procedures.
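The follow-on CVE-2014-7169 flaw can be checked separately.  The probe circulating at the time abuses residual parser state so that, on a Bash patched only for CVE-2014-6271, the first word of the next command becomes a redirection target.  Again a sketch for a scratch directory on a test system, not a definitive scanner:

```shell
#!/bin/sh
# Run the probe from an empty scratch directory.  On a bash still
# vulnerable to CVE-2014-7169, the crafted variable corrupts parser state
# so that "echo" is treated as a redirection target and a file named
# "echo" appears on disk.  A fully patched bash simply prints "date".
cd "$(mktemp -d)" || exit 1
env X='() { (a)=>\' bash -c "echo date" >/dev/null 2>&1
if [ -e echo ]; then
  echo "bash is vulnerable to CVE-2014-7169"
else
  echo "bash appears patched against CVE-2014-7169"
fi
```

Running from a fresh temporary directory matters: the probe’s signal is the presence of a newly created file named `echo`.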

Next Steps

All financial institutions should immediately review their own environments to confirm that no third-party systems involved in serving or securing the online banking experience, or any other publicly available service, are running vulnerable versions of the Bash shell.  Any financial institution providing secure services on Linux or *nix variants with a vulnerable Bash could be at risk, regardless of vendor mix. If any vulnerable devices are found, they should be treated as suspect and isolated per your incident response procedures until they are validated as unaffected or remediated.  Institutions should also be prepared to change passwords on, and to revoke and reissue certificates whose private key components were stored on, any compromised devices.

For further reading on this issue:


Posted on September 25, 2014 in Security