
Last weekend, I did some sprucing up of my public website.  It’s just a simple static one-pager, but why on earth keep a Windows box just to host that?  It was long overdue for me to move something simple onto something more cost-effective that I could manage more easily and securely.  In case others are looking for a quick recipe book for the same, here’s what I did last weekend:

Spin up an Encrypted Linux AMI in AWS

My objectives in this move were to (1) keep it simple, and (2) keep it secure.  I’ve already enjoyed great success using NGINX at Alkami in getting the best security posture possible for TLS termination, and NGINX can be more than just a reverse proxy – it can also work as a blazingly fast web server for static content.  Dusting off my Apache skills just for this project seemed unnecessary, so for this recipe, we’re going to be setting up NGINX as the only server process for this static site.  If you aren’t familiar with NGINX… don’t fret – I’m going to make it easy to configure and explain each step along the way, although you can reference the great AWS documentation too.

Encrypt your AMI

If you’re a Linux guy, you probably have a distribution already in mind.  For this project, I’m fine with the standard machine image Amazon AWS puts together, and I don’t necessarily need to worry about which package manager I should use or what filesystem or startup configuration file layout I prefer to maintain.  Going with a plain vanilla Amazon Linux AMI (AMI ID ami-178ef900), I:

  1. Created a new AWS account.  This is very easy to do with a credit card, although I’ll be using the Free Usage tier of services for this recipe and don’t plan to go over those thresholds.
  2. Went to the EC2 console – that’s the Elastic Compute Cloud – and clicked the AMIs option under Images on the left.
  3. Searched for AMI ID ami-178ef900.
  4. Right-clicked the result and chose Copy AMI, selected the Encryption option, and confirmed Copy AMI.
    1. Here you have an option of getting fancy with key management and creating a special key for this encrypted operating system image.  We don’t need to be fancy, we just need to be secure.  If we are using this AWS account for this single purpose, a public website, the default key for the account is just fine.

This part is important, because I want all my data at rest in my AMI to be encrypted.  “Why?”, you might ask.  Virtually anything you do in the cloud will have sensitive data at rest, at least in the form of SSH or website TLS certificate keys, if not critical corporate or client data.  Encryption is not optional – just do this.
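If you’d rather script it than click through the console, the same encrypted copy can be expressed with the AWS CLI.  A hedged sketch: the `aws ec2 copy-image` command and its flags are real, but the region, the image name, and the stub that prints instead of calling AWS are my own illustration choices – remove the stub to run it against your account.

```shell
#!/bin/sh
# Dry-run stub: prints the call instead of making it.  Delete to run for real.
aws() { echo "aws $*"; }

# Copy the stock Amazon Linux AMI into an encrypted private copy.
# Omitting --kms-key-id uses the account's default EBS key, which is
# fine for a single-purpose account like this one.
aws ec2 copy-image \
  --source-region us-east-1 \
  --source-image-id ami-178ef900 \
  --name "amazon-linux-encrypted" \
  --encrypted
```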

Set Up a Secure Security Group

Under Security Groups under Network & Security in the EC2 console, we’re going to define who can access our new AMI.  To start, we will only allow access to ourselves while we configure and harden it.  Only after we’re happy with the configuration will we open it up to the world.  To do this:

  1. In Security Groups, click Create Security Group at the top.
  2. Name your security group something simple, like webserversecuritygroup
  3. Add three Inbound rules
    1. HTTP from My IP only – this is how we will test insecure HTTP connections
    2. HTTPS from My IP only – this is how we will test secure HTTP connections
    3. SSH from My IP only – this is how you will connect to your new AMI with PuTTY or another terminal session manager
  4. By default your instance can connect Outbound anywhere.  That’s not a great idea for a production enterprise system.  For this recipe, we’re going to leave this at the default, but we could shore it up later once we get everything like OCSP working near the end.  Flipping on a lot of security early on can make this whole process much more painful, so our approach will be (1) use secure defaults, (2) get functionality working, then (3) harden it.
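The same inbound rules can be scripted.  Another dry-run sketch: the `create-security-group` and `authorize-security-group-ingress` commands and flags are real AWS CLI, while the stub and the placeholder IP (a documentation address – substitute your own “My IP”) are mine.

```shell
#!/bin/sh
# Dry-run stub: prints each call instead of making it.
aws() { echo "aws $*"; }

MY_IP="203.0.113.10"   # hypothetical placeholder; use your own public IP

aws ec2 create-security-group \
  --group-name webserversecuritygroup \
  --description "Static site web server"

# Open HTTP, HTTPS, and SSH inbound -- to our IP only, for now.
for PORT in 80 443 22; do
  aws ec2 authorize-security-group-ingress \
    --group-name webserversecuritygroup \
    --protocol tcp --port "$PORT" --cidr "${MY_IP}/32"
done
```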

Launch Your AMI

When the copy completed in about 5 minutes, I was able to right-click the encrypted AMI I just copied from the source and click “Launch”.  From there I selected the options for the virtual machine that would boot from this AMI, in order:

  1. I used the t2.micro instance to keep it simple and free.
  2. Chose the webserversecuritygroup I created in the previous step
  3. Selected the VPC and subnet.  (If you aren’t sure about VPCs and subnets, your AWS account comes with a default one you can set up under the VPC option where you previously selected EC2.  The first option, just one public subnet, is fine for this application, because we won’t have back-end database or file servers that, in a more complex environment, we would architect into additional layers of security.  Anything more is completely unnecessary for this recipe, and quite honestly, I prefer to keep my various recipes in separate AWS accounts to make cost tracking easier.  Don’t complicate this for yourself – one public subnet is all you will need and use.)
  4. You will need to choose what SSH key you will use to connect to the instance in our post-modern, password-less world.  If you haven’t set up an SSH key yet, you will create one here and download the .PEM file.
  5. Enabled protection against accidental termination.
  6. Clicked Review and Launch, then waited about 10 minutes for the machine to spin up.
  7. You will need to download the .PEM file for your key if you created a new one in step 4.
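The console launch above has a CLI equivalent too.  A dry-run sketch: `run-instances` and its flags, including `--disable-api-termination` for the accidental-termination protection, are real AWS CLI, while every ID below is a hypothetical placeholder for values from your own account.

```shell
#!/bin/sh
# Dry-run stub: prints the call instead of making it.
aws() { echo "aws $*"; }

# Launch one free-tier t2.micro from the encrypted AMI, protected
# against accidental termination.  All IDs are placeholders.
aws ec2 run-instances \
  --image-id ami-00000000 \
  --instance-type t2.micro \
  --key-name my-ssh-key \
  --security-group-ids sg-00000000 \
  --subnet-id subnet-00000000 \
  --disable-api-termination \
  --count 1
```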

Grab a Constant IP Address

While I was waiting for that, I wanted an Elastic IP.  AWS will generate a public IP for your EC2 instance, but that public IP isn’t guaranteed to stay with you.  We want that guarantee, and at about $3/mo for an Elastic IP address, it’s worthwhile not to have to muck with DNS updates any time I may reboot or rebuild a box and potentially suffer through downtime.  To get and use an Elastic IP:

  1. Go to Elastic IPs under Network & Security in the EC2 console.
  2. Click Allocate New Address
  3. Once the EC2 instance we’re firing up is running, right-click your new address and choose Associate Address.  We kept it simple and have only a single EC2 instance in this AWS account, so it’s easy to select the only instance to associate this address to.
  4. Thanks to the magic of the software-defined networking stack of AWS, you don’t need to mess with ifconfig or reboot your AMI once you make this change – you’re ready to go.
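For the script-minded, those two steps look like this.  Dry-run sketch again: `allocate-address` and `associate-address` are real AWS CLI commands (with `--domain vpc` for a VPC-scoped address), and the instance and allocation IDs are placeholders you’d read back from your own account.

```shell
#!/bin/sh
# Dry-run stub: prints the call instead of making it.
aws() { echo "aws $*"; }

# Allocate a VPC-scoped Elastic IP...
aws ec2 allocate-address --domain vpc

# ...then bind it to our (placeholder) instance once it's running.
aws ec2 associate-address \
  --instance-id i-00000000 \
  --allocation-id eipalloc-00000000
```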

Prepare the Box

In an enterprise production system, we’d probably already have pristine golden images, fully patched and tailored for our needs.  Here, we don’t; we just went with a reasonable default.  But in either case, and especially in this one, we need to make sure we have all the latest patches, so we’ll:

  1. Connect to the box using PuTTY using the key-based login of the .PEM file we generated or chose when we launched our AMI.
  2. Enter ec2-user as our username when prompted
  3. Enter the password for our key in the .PEM file when prompted, and we’re in.
    1. And if you’re not in, either you don’t SSH much, or you’ve forgotten how to use PuTTY.  Documentation is your friend.
  4. Type sudo su to get root access
  5. Apply updates for your packages by typing yum update
  6. Remember, we’re using NGINX, and it’s not installed on the default Linux AMI, so we’ll simply do yum install nginx to make that happen
  7. There are other nice things we’ll use in the epel-release that make using advanced NGINX features easier, so let’s also do yum install epel-release
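The session above, collected in one place as a dry-run sketch for the 2016-era Amazon Linux AMI (sysvinit): the `run` stub just prints each command, and the final two lines – starting NGINX and enabling it at boot – are steps the walkthrough relies on implicitly but doesn’t show.

```shell
#!/bin/sh
# Dry-run stub: prints what would be run (as root) on the instance.
run() { echo "# would run: $*"; }

run yum -y update              # apply all current patches
run yum -y install nginx       # the only server process we need
run yum -y install epel-release

# Not shown in the walkthrough, but nginx serves nothing until started,
# and you'll want it back automatically after a reboot:
run service nginx start
run chkconfig nginx on
```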

Configure NGINX

NGINX is pretty simple to configure once you know what options need to be configured.  Just like the Linux AMI, it comes out of the box with relatively sane defaults, and we’ll use those as a starting point.  There are two files of special significance: /etc/nginx/nginx.conf, which is the overall configuration for NGINX, and that can ‘include’ other files from /etc/nginx/conf.d/.  We’ll use this separation to make minimal changes to NGINX’s overall configuration, and keep our site configuration centralized in one site-specific configuration file to make it easy to add another site to the same box in the future.

Make sure of a few simple things first in your global /etc/nginx/nginx.conf file.  I’m going to reproduce mine below, with my comments inline.

# For more information on configuration, see:
#   * Official English Documentation:
#   * Official Russian Documentation:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    proxy_cache_path /tmp/nginx levels=1:2 keys_zone=default_zone:10m inactive=60m;
    proxy_cache_key "$scheme$request_method$host$request_uri";

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    include /etc/nginx/conf.d/*.conf;  # This is the line that brings in our site-specific configuration files

    index index.html;  # My site's default is just index.html, so I've simplified this line to make that the only one served by default

    server {
        listen 80 default_server;       # Listen on insecure HTTP IPv4 port 80 in this server block
        listen [::]:80 default_server;  # Also, listen on insecure HTTP IPv6 port 80 in this server block
        server_name localhost;          # This will serve as a catch-all, regardless of domain name specified
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            limit_except GET {
                deny all;  # Any HTTP verb that is not GET just gets denied.  Remember, we have a simple static site.
            }
            return 301;    # This whole server block is for port 80 insecure traffic only; we want users redirected to HTTPS always.

            add_header Content-Security-Policy "default-src 'none'; script-src 'self'; img-src 'self'; style-src 'self'";  # We'll cover this CSP line later
            add_header Strict-Transport-Security "max-age=31536000" always;  # Instruct the browser to never again ask for this site except over HTTPS
            add_header X-Content-Type-Options nosniff;  # Tell the browser not to second-guess our Content-Type response headers; old browser security problem
            add_header X-Frame-Options DENY;  # We don't use IFRAMEs, so if someone tries to frame this site, the browser should just error out
        }
    }
}

As you’ll note from your own initial NGINX configuration file, I removed a lot of commented-out lines and added a few things I documented above with reasons.  The meat of our site, though, will be our next file, which enumerates the settings for our target domain.  Before we go through that, let’s establish exactly where the paths we’ll reference in the configuration file that follows are going to live.  We’re going to put our website content in a site-specific directory under /var/www/  We’ll place any secure keys, like our TLS certificate, in a key subdirectory there.  I’m going to want to find logs for this site in a predictable place that isn’t co-mingled with other sites I might add in the future, so I’ll also need a log subdirectory.  So, let’s do this, at the command line:

  1. Create the www directory: mkdir /var/www
  2. Create the site-specific directory: mkdir /var/www/
  3. Set permissions on these directories by issuing
    1. chmod 755 /var/www
    2. chmod 755 /var/www/
  4. Set ownership on these directories to the root user and the root group by issuing
    1. chown root:root /var/www
    2. chown root:root /var/www/
  5. We’d like to place our site files in the public directory, but we don’t want to have to act as root each time we edit them… so we’ll create that one slightly differently
    1. mkdir /var/www/
    2. chown root:ec2-user /var/www/
    3. chmod 775 /var/www/
    4. This way, anyone in the ec2-user group can also edit the files herein
  6. NGINX will need to read the keys for this site, so we’ll need to do some special permission settings on the key subdirectory
    1. mkdir /var/www/
    2. chown nginx:nginx /var/www/
    3. chmod 550 /var/www/
    4. Now, only NGINX can read the keys herein
  7. NGINX will need to write log files out, so we’ll do something mostly similar for the log directory
    1. mkdir /var/www/
    2. chown nginx:nginx /var/www/
    3. chmod 750 /var/www/

Great!  Now we have a sturdy directory structure to work from.  One last thing we need to do before we configure our site file is establish what’s in that key subdirectory.  There are a few things we need:

  1. We need a certificate for our website issued by a certificate authority in a standard .PEM format.  You have a few options here:
    1. You can get a free DV cert from StartSSL.  This is the dirt-cheap solution, but given StartSSL was recently purchased by WoSign in a clandestine acquisition, and WoSign has had multiple and serious security lapses, you should not be trusting or supporting this entity.
    2. You can get a free DV cert from Let’s Encrypt, if you’re savvy enough to set up the automated renewal these 90-day certificates require.  If you’re that savvy, though, you probably aren’t reading this blog, because you likely know much of the NGINX configuration I’m about to describe.  In addition, you would need to automate the rest of the configuration to handle frequent certificate rotations and the updating of subsequent key files and DNS entries for some of the fancier things we will do, like DANE, near the end.
    3. You could buy a relatively cheap DV certificate from an authority like GoDaddy
    4. You could pony up for a mid-tier OV certificate from an authority like Entrust
    5. If, and only if, you have a registered business with a DUNS number, you can get the top-tier assurance EV certificate from an authority like Entrust
    6. … and let’s face it, HTTP is so 1994.  Soon Chrome will warn users who visit HTTP sites that your site is insecure by default, and that’s not what you want to project, so you will pick one of the 5 options above.
  2. Your certificate is likely issued from a certificate authority’s trusted root certificate, which in turn has trusted an issuing certificate, which in turn has issued the certificate you acquire.  Or, sometimes instead of three links to this chain (root, intermediate, leaf), there are four (root, intermediate1, intermediate2, leaf).  This is important because you will need a few files here:
    1. Your leaf’s private key, what I will call below
    2. Your leaf’s public key + your intermediate(s) public keys, what I will call below.
    3. Your leaf’s public key + your intermediate(s) public keys + your root certificate authority’s public key, what I will call below.
    4. Some very big and unique prime numbers used for Diffie-Hellman key exchange, what I will call below
  3. To accomplish this, you will use openssl.  You will not use online converters to which you upload your private keying material and let them do the work for you.  You will not use online converters.  You will never, ever store your private key anywhere in unencrypted form, and even when it is encrypted, you will never supply or keep the passphrase for it in any connected container.  Here are some openssl cheatsheet commands for you, presuming you obtained a .PFX file that contains your public and private key combined.
    1. Export the private key for your leaf certificate into a file from a file.  These literally are the keys to your kingdom – the private key without encryption or passphrase protection.
      openssl pkcs12 -in -out -nocerts -nodes
    2. Export the public key for your leaf certificate into a file from a file.
      openssl pkcs12 -in -out -clcerts -nokeys
    3. Export your issuer’s root public certificate into a file
      openssl pkcs12 -in -out -cacerts
    4. Create your DH primes for key exchange.  You don’t have to understand what this is in-depth, but you should understand it could take 10-15 minutes to complete.
      openssl dhparam -out 4096
    5. Now, let’s create that chained file.  OpenSSL strangely doesn’t export a chain in the proper order.  You can either manually save the intermediate certificate in a .PEM format (called in the example below) and do:
      OPTION 1) cat >
    6. OR, you could alternatively type this and hand edit the resulting file to order the exported certificates in reverse order
      OPTION 2) openssl pkcs12 -in -nodes -nokeys -passin pass:<password> -out
    7. And finally, we need to get our chained+root file, so we can do:
      cat >
    8. (Finding the right OpenSSL commands can be time-consuming if you don’t know them already.  In this example, I’m presuming you may have generated your CSR using IIS and completed it in there or another Windows-based system to get the resulting PFX we worked from, but if you read the OpenSSL documentation, it can handle many different input formats that don’t require Windows or a PFX artifact at all.)
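Since the command listing above lost its file names, here is an end-to-end sketch with hypothetical names (site.pfx, site.key, and so on).  It fabricates a throwaway self-signed certificate just so the pkcs12 export and extract steps have something to chew on; with a real CA-issued .PFX you’d skip the first two commands, your `-cacerts` output would actually contain intermediates, and you’d use 4096-bit dhparam (1024 here only keeps the demo fast).

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
PASS="pass:demo"   # hypothetical PFX passphrase

# Fabricate a throwaway self-signed cert so the demo is self-contained.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.com" -keyout site.key.orig -out site.crt
openssl pkcs12 -export -inkey site.key.orig -in site.crt \
  -passout "$PASS" -out site.pfx

# 1. Private key, unencrypted -- guard this file carefully.
openssl pkcs12 -in site.pfx -passin "$PASS" -nocerts -nodes -out site.key
# 2. Leaf public certificate.
openssl pkcs12 -in site.pfx -passin "$PASS" -clcerts -nokeys -out site.pem
# 3. CA/intermediate certs (empty here; a real PFX usually carries them).
openssl pkcs12 -in site.pfx -passin "$PASS" -cacerts -nokeys -out ca.pem || true
# 4. Custom DH parameters (use 4096 for real; 1024 keeps this demo quick).
openssl dhparam -out dhparams.pem 1024
# 5. Chain files: leaf+intermediates, then leaf+intermediates+root.
cat site.pem ca.pem > site.chained.pem
cat site.chained.pem site.crt > site.chained+root.pem  # self-signed cert stands in for the root here

openssl rsa -in site.key -check -noout   # sanity-check the extracted key
```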

I know you thought we’d be done and ready to set up the NGINX configuration file by now… and we are.  But first, I want to explain some of the concepts and options we’re about to enable:

  1. This setup will only enable TLS 1.2 and 256-bit ECDHE and DHE RSA, leaving in the dust IE 10, Android 4.3 and earlier, and about every Java client out there as of this writing.  I’m choosing security over accessibility so I get the principle of Forward Secrecy, and that sweet, sweet 100% Protocol Support rating in Qualys.  If this was a production legacy site, you’d want to really think about these options, because a granny on a Tracfone stuck on Android 4.2 could be frustrated by your choices here, frustrating your call center as well.
  2. We don’t want to deal with CRIME-mitigation, so gzip is going to be disabled.  A complex production site may want to weigh this or implement gzipped cache assets differently, but our use case will keep it simple.
  3. We will use custom Diffie-Hellman (DH) prime numbers.  Default implementations often use “well-known” primes that weaken your security and amplify the impact of vulnerabilities like LOGJAM and FREAK.
  4. We will enable OCSP stapling to improve page load times.  This means NGINX will reach out to get OCSP responses from your root CA occasionally, so you can’t turn off your Outbound connectivity in your EC2 security group without ensuring DNS and the ports used for this lookup remain open.
  5. We are going to PIN our TLS certificate public key using HTTP Public Key Pinning (HPKP)
    1. This means the server will tell the browser, “You should expect to always see THIS certificate in a certificate chain coming from this site for at least THIS amount of time”
    2. It also means we need to get a 2nd certificate as a backup, which is not part of the certificate chain of the first certificate.
      1. Which means double your money to buy a second certificate… hopefully with a different expiry period from the first
      2. Or, you get a dirt-cheap DV certificate as your emergency backup, and you use an EV or OV certificate as your primary one.
    3. To generate these hashes, you can check out Scott Helme’s HPKP toolset – super useful!  Or, Qualys’ SSL Server Test can tell you at least the hash of the currently-presented certificate.
  6. We are going to instruct the browser that from now on, NEVER ask for this page over HTTP (or let Javascript make such a request) – HTTPS only from here on out.  This is the Strict-Transport-Security header, otherwise known as HSTS.
  7. We are also going to have a tight policy on what our website should do, using the Content-Security-Policy header, also known as CSP.  Beware this header – it takes time to test your policies, proportionate to the complexity and number of pages on your site.  If you are a web developer, you can open up Chrome DevTools or Firebug to view problems with your policy of “default-src 'none'” and handle each type of error one by one to get a custom, strict policy.  Various groups debate the usefulness of CSP, and Google recently cast doubt on its efficacy.  I wanted the bells and whistles, so it was worth 15 minutes for me to get my 1-page website working with it… but if you notice browser rendering problems, you will want to strike the relevant add_header line completely.
  8. We are going to instruct browsers not to guess on the MIME content types of our resources, but rather to just trust our Content-Type HTTP response headers.  Some older browsers had security issues in their code that tried to read files to determine this.  Modern browsers don’t have this issue (and older browsers won’t be able to speak the TLS 1.2 baseline requirement in this configuration anyway), but we simply want to deter the practice.
  9. Our site should never be in an IFRAME, so to protect from clickjacking, we instruct the browser to enforce this expectation.
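For item 5, the SPKI pin hashes can also be produced locally with openssl rather than a web tool.  This sketch generates a throwaway self-signed certificate as a stand-in for your real leaf and backup certificates; the pipeline itself – extract the public key, DER-encode it, SHA-256 it, base64 it – is the standard HPKP pin recipe.

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# Throwaway cert standing in for a real leaf certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.com" -keyout pin.key -out pin.crt

# Public key -> DER -> SHA-256 -> base64 = the pin-sha256 value.
PIN=$(openssl x509 -in pin.crt -pubkey -noout \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -binary \
      | openssl base64)

# Show the header line you'd build from it (start with a low max-age):
echo "add_header Public-Key-Pins 'pin-sha256=\"$PIN\"; max-age=3600';"
```

Run it once against each certificate (primary and backup) and list both pins in the header.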

And, without further ado, let’s use these files we created in our key subdirectory and the knowledge of the features we will enable to configure NGINX for our website:


server {
    listen 80;    # For insecure HTTP port 80...
    server_name;  # And for either domain name, with and without the 'www'...

    # Discourage deep links by using a permanent redirect to the home page of the HTTPS site
    return 301 https://$host;  # Redirect to the HTTPS version
}

server {
    listen 443 ssl;  # But for secure HTTP port 443...
    server_name;     # And for either domain name, with and without the 'www'...

    # Server headers
    server_tokens off;  # Don't show the end-user the version of NGINX we run.  Security through obscurity...

    ssl_certificate /var/www/;      # We serve up the intermediates and our leaf public key; mobile devices need this
    ssl_certificate_key /var/www/;  # Our private site key used for the transport encryption
    ssl_protocols TLSv1.2;          # We are only going to enable TLS 1.2
    ssl_ciphers 'AES256+EECDH:AES256+EDH:!aNULL';  # First prefer Elliptic Curve Diffie-Hellman AES-256 or better, then regular DH AES-256 or better... or bust!
    ssl_prefer_server_ciphers on;   # If the client prefers different ciphers... too bad!  We make the rules of the cipher negotiation.

    # DH primes
    ssl_dhparam /var/www/;  # Use our custom DH parameters

    # For OCSP stapling
    ssl_stapling on;         # Enable OCSP stapling
    ssl_stapling_verify on;  # Make sure the stapling responses match our chained+root file
    ssl_trusted_certificate /var/www/;  # ... THIS chained+root file
    resolver;  # Use these nameservers to resolve OCSP servers for the stapling

    # For Session Resumption (caching)
    ssl_session_cache shared:SSL:10m;  # Allow TLS resumption for up to 10 minutes to improve page-to-page navigation speed
    ssl_session_timeout 10m;           # Allow TLS resumption for up to 10 minutes

    # HPKP - public key pinning.  These are the hashes of the two leaf certificates I use for
    # public key pinning - start with a low max-age, then ratchet it up once tested out.
    add_header Public-Key-Pins 'pin-sha256="qo5XNG/l96xuzO9F+syXML4wY3XAOM3J4r8mquhuwEs="; pin-sha256="RwJopnm+J6FZTS2jQBnGltzagjpTt62N8Oc4nGEW0Mo="; max-age=3600';

    location / {
        root /var/www/;        # Our website is served from this root directory
        index index.html;      # If no page is specified in a URL and index.html exists for a directory, serve that as the default document
        access_log /var/www/;  # Store the access log for this particular site in my custom log file
        expires 30d;           # Let the browser cache these pages for 30 days; tune if you manually update your static site often... but I bet you won't
        proxy_cache default_zone;
        gzip off;              # Don't compress, so we avoid TLS issues like the CRIME attack

        limit_except GET {
            deny all;  # If the browser requests an HTTP verb other than HEAD or GET, deny it
        }

        add_header Content-Security-Policy "default-src 'none'; img-src 'self' data:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' '; font-src 'self'";
        add_header Strict-Transport-Security "max-age=31536000" always;
        add_header X-Content-Type-Options nosniff;
        add_header X-Frame-Options DENY;
    }
}

Once your site is up and running, don’t forget to update your EC2 attached Security Group to make HTTP and HTTPS available from Anywhere.  Go ahead and leave SSH as “My IP”, or simply remove it when you are done and add it back when you need it, as your IP can shift between the times you connect to this server.
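Before flipping the security group open, it’s worth a quick smoke test from your own IP.  A dry-run sketch (the host name is hypothetical, and the commands are stubbed to print so the sketch runs anywhere; run them for real against your own domain):

```shell
#!/bin/sh
# Dry-run stub: prints what you'd run from your workstation (nginx -t on the box).
run() { echo "# would run: $*"; }
HOST="example.com"   # hypothetical; substitute your domain

run nginx -t                                  # config syntax check, on the box
run curl -sIL "http://$HOST/"                 # expect a 301 over to HTTPS
run curl -sI "https://$HOST/"                 # expect the HSTS/CSP/HPKP headers
run openssl s_client -connect "$HOST:443" -status   # -status shows the stapled OCSP response
```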



That’s all for this configuration installment.  Next time, I’ll probably be covering the how-to’s of DNSSEC, DANE, and OpenPGP PKA records for DNS-based security assertions and key publishing, but at least by the end of this article, you should be able to configure a relatively secure NGINX static content HTTP server, with many of the security bells and whistles enabled.


Posted by on September 13, 2016 in Uncategorized