Thoughts on passing the GIAC Security Essentials (GSEC)

Today I passed the GIAC Security Essentials Certification, also known as the GSEC.  I passed with a 95% on my first attempt, so I thought it might be useful to share my thoughts on this one for anyone who attempts it in the future.

My background is technical – I started my career in software engineering and database performance tuning, moved into engineering leadership roles, and eventually ended up pursuing my interests in cybersecurity, where I have been a CISO at two financial services firms.  Yet, I’m still very hands-on during the day, and I recently wrote a QUIC userspace implementation to learn the spec in the evenings.  I have previously earned the CISSP and CISM certifications, although these are more leadership and risk management focused credentials that don’t speak much to technical aptitude as it relates to security.  For that reason, as well as my personal desire to keep my technical skills sharp while also working at the executive level and leading a team, I decided to apply to, and was accepted into, the SANS Technology Institute’s (STI) Masters of Science in Information Security Engineering program.  The first stop along the MSISE journey is the GSEC.

As part of the MSISE program, I pay tuition for a graduate class to gain access to SANS training and the associated GIAC exam, which provides me with a grade for my course.  This was my very first SANS training and my first GIAC exam.  I was given the option to directly challenge the exam since I recently earned my CISSP, but my STI student advisor kindly recommended I take in the full training experience.  I was admittedly reluctant, both because I feel I am pretty strong technically and because it would have been slightly cheaper and faster to go straight to the GSEC exam, but the advice was well founded.

The SANS SEC 401 class by Dr. Eric Cole was outstanding.  Dr. Cole’s presentation style feels genuine and engaging over the self-paced OnDemand modality I chose.  I walked into this content with the preconceived notion that much of it would be review for me, and honestly, a lot of it was.  This isn’t to say the course is remedial, simply that as a builder of security programs, the concepts and advice weren’t new to me, but some of the technical pieces were.  I learned new and useful tools as part of this course, and I could see this as an excellent foundational course for current and aspiring security team members in any organization.  Finding high quality training content is exceptionally valuable to me in my day job, and it certainly proved valuable for me personally in taking this course.

As other GIAC alumni will tell you, the GSEC is an open-book exam, so developing good indexing skills is critical.  I followed Josh Armentrout’s index format, and I walked into the exam with about 4 pages of indexes I developed throughout the course.  Admittedly, the way I learn best is by reading, so I spent my time in SEC 401’s OnDemand video with Dr. Cole at 2x speed, scanning pages in the book as I went along for index-worthy concepts or terms.  I did not spend any time highlighting the books or listening to MP3s, just focusing on the audio and what I was reading.  I would finish a ‘day’ at 2x speed in about 2 nights of my time, devoting about 5 hours a night for a couple of weeks to get through it all with a worthwhile index.  There are no specific tips or tricks to the content — the course syllabus plainly states what will be covered, and that’s the reality of what OnDemand provided.  I will say: read your entire book.  Sometimes key concepts have interesting nuances that end on the back of a page in a trailing paragraph.  Don’t skip those.

With my course, in addition to the self-study quizzes in the OnDemand portal – which test the content of SEC401, not the GSEC – I received two GIAC practice tests and the final GIAC exam, ready to schedule. While everything in the OnDemand portal is self-paced, repeatable, and not timed (other than the overall subscription access), the GIAC practice tests are delivered in the same format as the exam – timed – but they also provide explanations for any incorrectly answered questions.  The MSISE program has a learning community portal where generous souls who do not use both GIAC practice tests give away their tests to others who want extra shots.  While that’s awfully nice of them, and I was tempted to do the same, I found value in taking both practice tests to test and refine the quality of my index.  I’m glad I did, and would suggest never giving away a practice test if you feel you could use it to benefit your index or your comprehension of the breadth of the training topics.  (Hey, you paid for these practice tests, so you come first.)  I took my first practice test as an ‘open internet’ variant where I would quickly Google something to answer a question, but then make sure my notes were fully fleshed out with what external sources could add.  My last practice test was ‘closed internet, open book’ to mimic the actual exam experience, and this was a final test of my index for completeness, since that’s all I would have on test day.  Obviously, I carefully read the explanations for anything I answered incorrectly, tuned my notes, and did additional readings to make sure I did not repeat any misfires.

Finally, exam day came today!  I’m no stranger to these types of tests or Pearson Vue, so the experience was predictable and suitable.  It is interesting walking into a Pearson Vue with an armful of books, since most exams they administer allow no notes or books.  I came in with all six course books, the lab workbook, the network quick reference guide, my index, and a separate page of notes I made about common ports and protocols that were not on the network quick reference guide but were mentioned elsewhere in the course material.  I used everything I brought in, if only to take the exam at a ‘leisurely’ pace and spend adequate time double-checking my answers.

Unlike the CISSP or CISM, which are based on practical experience (with the exception of the CISSP’s strange obsession with fire suppression controls…), the GSEC was much more knowledge-based, specifically on the SEC401 training materials.  So, the right answer is less likely to come from things you already know (come on, you don’t really know ALL those nmap switches) and more likely to come from what you have learned and can recall or find.  Arguably, this is a bit more realistic; aren’t all technical folks somewhat dependent on their navigation of StackOverflow or Google-fu? 🙂

It’s hard to know from the outside whether SEC401 is custom tailored to the GSEC, or whether the GSEC is really testing SEC401, but they fit together like pieces of a puzzle.  Answers to questions often came nearly verbatim from the slides, or more often the narrative, in the SEC401 books I had in tow.  That’s not a knock on the SANS content or the GIAC exam – I call this out simply to advise those studying for the GSEC to intimately know the SEC401 material as it is presented in the books.  Treat the high-quality OnDemand video as a wonderful supplement, but don’t go light on your reading and indexing of your spiral-bound friends.  Also, do the labs, and repeat them until you can recognize a screenshot of output from any tool covered in the curriculum or in a lab.  If you can’t recognize a screenshot or command by sight, you probably aren’t soaking in the technical material at the level you need to demonstrate competency at the higher end of the spectrum.

This process got me from an 89% on my cavalier run through the first practice test, to a 92% on my second practice test, to a 95% on exam day.  There are really no tricks to doing well on the GSEC, and no tricks the exam will try to play on you.  It is plainly written, very technical, and you would be a fool not to be prepared with the associated SANS training and a well-crafted index before sitting down to make an attempt.  (Check out Lesley Carhart’s great post on studying and indexing too, if you have not already.)  Even if you might think ‘I know all this’, you probably don’t have the GSEC cinched unless you give it serious attention and solid study.

I hope this helps someone out there!


Posted on January 30, 2019 in Uncategorized


Despite DoH and ESNI, OCSP leaves web activity insecure and not private


Certificate Transparency (CT) logs increasingly allow virtually every TLS certificate to be identified by serial number.  Since OCSP requests and responses are unencrypted and contain the serial number of the certificate, which can be looked up in CT logs, as well as unsalted hashes of the issuing certificate’s Distinguished Name and public key, they can easily be profiled to compromise the privacy of clients even in the presence of DoH and ESNI privacy protections.


A lot of great work has happened over the past few years in securing the web by strengthening encryption and improving user security indicators.  This helps users make informed decisions to keep their online activity secure and private and thwarts network adversaries from profiling users.  Man-in-the-middle attacks on the network often conjure images of someone breaking into a server room and installing some kind of interloping spyware device or splicing into a network cable.  Repeatedly, though, the internet service providers that bring the Internet to consumers’ homes have demonstrated they will use their privileged position on the network to sell private information about consumer internet use or degrade services from competitors.

Policy fixes like network neutrality are still in play, but these threats aren’t unlikely one-offs that target individuals; they are systemic abuses by technology providers.  Technology fixes, meanwhile, seek to make web activity – such as the names of websites one visits or the content they download – indiscernible to anyone except the requester and the actual website operator.


Significant strides in improving the strength of encryption that makes data in transit unreadable, such as TLS 1.3, have squelched vulnerabilities that stem from aging cryptographic algorithms and ciphers, as well as certain threats to the confidentiality of communications when an encryption key is leaked to or compromised by a nation-state attacker.  However, metadata exchanged in the process of finding a server and securely establishing a connection – DNS and TLS with a Server Name Indication (SNI) – can still leak.  This poses both an existential privacy problem that is particularly troubling for vulnerable populations under repressive regimes and a method for sophisticated technology providers in ‘free’ societies to profile traffic for bandwidth discrimination, censorship, or profiteering.

A couple of standards have gained traction to address these weaknesses in DNS and TLS, with proposals termed DNS over HTTPS (DoH) and encrypted SNI (ESNI), respectively.


DNS resolution is a plaintext game of ‘telephone’ whereby a client’s request to resolve a domain name into an IP address may traverse many different servers operated by many different entities to look up and return the answer.  DoH moves this communication from an unencrypted channel to an encrypted one, which still requires one to trust the privacy policy of the entity servicing the request, but does not need to presume the good behavior of every intermediate network and DNS server in the mix.  This is a very good thing we will see rolling out much more widely in the next few years.
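To give a sense of what travels inside that encrypted channel, here is a minimal sketch of the wire-format DNS message a DoH client sends as the body of an HTTPS POST per RFC 8484.  The domain is just an example, and this builds only the query, not the HTTPS transport around it:

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a raw DNS query message, the POST body of an RFC 8484 DoH request."""
    # Header: id=0 (RFC 8484 recommends this for HTTP cache friendliness),
    # RD flag set, one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE (1 = A record) and QCLASS (1 = IN).
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question

query = build_dns_query("example.com")
```

A client would then send this body over HTTPS with a `content-type: application/dns-message` header to a resolver such as `https://cloudflare-dns.com/dns-query`, making both the question and the answer opaque to every on-path network.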


ESNI is a proposal to plug a hole in an extension of the Transport Layer Security (TLS) protocol (sometimes incorrectly referred to by its obsolete predecessor, SSL), which allows encrypted communications to happen over a channel in a standard way for many applications.  In the web’s early days, a user would connect to a web server, and that server would return a signed certificate that could be used to set up a secure communications channel.

However, as the web matured, methods for hosting many different sites on the same server or set of servers took off, and there was no longer a 1:1 match between a domain name and a web server.  SNI was an extension that lets a client, like a web browser, specify which hostname it wants so the web site provider can return the correct, unique certificate to set up the channel for that site, even though it could also be serving lots of other sites too.  However, that hostname is exchanged in plain text before the certificate is provided and before an encrypted channel is established.

That means savvy technology providers could just look here instead of logging DNS requests for similar data on which host names a customer is attempting to connect to.  This is becoming far more viable as HTTPS Everywhere, user agent changes, and free certificate authorities like Let’s Encrypt make ‘secure by default’ the new reality for the web.  More TLS means more encryption, but also more consistency in finding hostnames in SNI fields.
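You can see the plaintext SNI for yourself with nothing but the Python standard library: drive the start of a TLS handshake through in-memory buffers (no network needed) and inspect the raw ClientHello bytes, where the hostname sits unencrypted.  The hostname here is made up:

```python
import ssl

hostname = "secret-site.example"  # hypothetical site the user is visiting

ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)

# Start the handshake; it stalls waiting for the server's reply, but the
# ClientHello has already been written to the outgoing buffer.
try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    pass

client_hello = outgoing.read()
# The SNI extension carries the hostname unencrypted in the first flight.
print(hostname.encode() in client_hello)  # prints True
```

Even under TLS 1.3, which encrypts most of the rest of the handshake, this first flight remains readable to anyone on the path, which is exactly the gap ESNI aims to close.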


CT Logs

TLS is underpinned by a system of trust, particularly in the entities called Certificate Authorities that cryptographically sign certificates used to establish encrypted communications.  However, certificate authorities are fallible, and some have failed due to security breaches or by failing to abide by the rules and mis-issuing certificates.  Some of the most egregious offenses from failed certificate authorities like DigiNotar, Symantec, and WoSign/StartCom have resulted in technology solutions that make it possible to hold them accountable.  Certificate Transparency (CT) logs are a public ledger of certificates issued by authorities that allow their behavior to be monitored, but also create central clearinghouses of certificates that can be looked up by name or serial number.  More on that soon.


When a certificate is compromised, a certificate authority can revoke it.  While normally a certificate has a limited duration noted by an immutable expiration date embedded into it, certificates may be prematurely revoked if the holder or the authority is compromised.  The Online Certificate Status Protocol (OCSP) is a protocol clients like web browsers use to verify that a certificate they receive is still valid. OCSP lets a client ask, “I just received this certificate, but is it still valid?”  The request is obscure, but not secure:
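A sketch with the third-party `cryptography` package makes this concrete.  It mints a throwaway self-signed CA and leaf certificate (all names here are illustrative), builds the OCSP request a browser would send, and then parses the DER bytes back the way any on-path observer could:

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.ocsp import OCSPRequestBuilder, load_der_ocsp_request
from cryptography.x509.oid import NameOID

def make_cert(subject_cn, issuer_cn, signing_key=None):
    """Issue a throwaway certificate; self-signed when signing_key is None."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.now(timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer_cn)]))
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=30))
        .sign(signing_key or key, hashes.SHA256())
    )
    return cert, key

ca_cert, ca_key = make_cert("Toy CA", "Toy CA")
leaf_cert, _ = make_cert("www.example.test", "Toy CA", signing_key=ca_key)

# Build the OCSP request, then re-parse its unencrypted DER bytes as an
# eavesdropper would see them on the wire.
request = OCSPRequestBuilder().add_certificate(leaf_cert, ca_cert, hashes.SHA1()).build()
observed = load_der_ocsp_request(request.public_bytes(serialization.Encoding.DER))

print(observed.serial_number)           # plaintext serial number
print(observed.issuer_name_hash.hex())  # unsalted SHA-1 of the issuer's name
print(observed.issuer_key_hash.hex())   # unsalted SHA-1 of the issuer's key
```

Nothing in that request is encrypted; the question is only how much effort it takes an observer to turn those fields back into the site being visited.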


The request contains a one-way hash of the issuer’s distinguished name and public key, as well as the serial number of the certificate itself.  Unsalted hashes mean anyone could poll CT logs for all distinguished names, build their own hash lookup dictionary, and then compare observed values against that dictionary.  However, the unhashed serial number makes this far easier, as many CT logs support direct lookup of certificates by their serial number.  In the following screenshot, you can see a trivial lookup to find out which site my lab virtual machine was connecting out to.
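The hash-dictionary half of that attack fits in a few lines.  This is a simplified illustration with made-up issuer names: RFC 6960 actually hashes the DER-encoded issuer name, whereas this sketch hashes string forms for brevity, but the principle is the same because the hash is unsalted and the input space is enumerable from CT logs:

```python
import hashlib

# Hypothetical issuer distinguished names harvested from public CT logs
issuer_dns = [
    "CN=Toy Root CA,O=Example Trust",
    "CN=Acme Intermediate CA,O=Acme",
    "CN=Widget Issuing CA,O=Widget Co",
]

# Precompute unsalted SHA-1 hashes, mirroring the issuerNameHash in an OCSP CertID
lookup = {hashlib.sha1(dn.encode()).hexdigest(): dn for dn in issuer_dns}

# A name hash observed in a plaintext OCSP request captured on the wire
observed = hashlib.sha1(b"CN=Acme Intermediate CA,O=Acme").hexdigest()

# Unsalted means a simple dictionary lookup recovers the issuer; combined with
# the plaintext serial number, the exact certificate can be found in CT logs.
print(lookup[observed])
```

Once the issuer is recovered, the plaintext serial number pins down the exact certificate, and with it the site the client was checking.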



This is not a new vulnerability.  In fact, RFC 6960, which defines OCSP, explicitly states:

Where privacy is a requirement, OCSP transactions exchanged using HTTP MAY be protected using either Transport Layer Security/Secure Socket Layer (TLS/SSL) or some other lower-layer protocol.

Incorrectly, some presume OCSP must be performed over insecure HTTP to address a ‘chicken and egg’ problem that would arise from trying to validate the certificate of a secure OCSP site in order to validate the certificate of another secure site.  While implementation details could be non-trivial, solutions like pinning the TLS certificates of well-known OCSP responders could address that challenge.

It is important, though, to consider that in the cat-and-mouse game of threats to privacy and privacy-protecting technologies, OCSP becomes an ever more readily available source of metadata on users as HTTPS adoption increases, CT logs become mandatory and pervasive, and insecure OCSP communications dominate responder implementations.  As other privacy holes are closed by DoH and ESNI to keep users’ Internet activity private, OCSP is a challenge that must be addressed at scale as well.


Posted on January 5, 2019 in Uncategorized