
Monthly Archives: August 2013

A Brief Introduction to Part-of-Speech Tagging

A field of computer science that has captured my attention lately is computational linguistics: the inexact science of how to get a computer to understand what you mean.  This could be something as futuristic as Matthew Broderick’s battle with the WOPR, or something more practical, like Siri.  Whether the input is text entered at a keyboard or the far less structured stream of human speech, extracting the meaning behind parsed words is incredibly complex, and to someone like me, fascinating!

My particular interest of late is parsing, which, from a linguistic perspective, means breaking a string of characters into words, determining their meanings, and assembling them into a parse tree: a structure in which the meanings of individual words and the relationships between words are composed into a logical construct that enables higher-order functions, such as a personal assistant.  Having taken several foreign language classes, and having later sat on the other side of the table as an ESL teacher, I can appreciate the enormous ambiguity and complexity of any language (English perhaps most of all among the Germanic languages) when it comes to creating an automated process that parses input into meaningful logical representations.  Just discerning the meaning of an individual word, given the multitude of meanings that can be ascribed to any one sequence of characters, is quite a challenge.

Parsing Models

Consider this:  My security beat wore me out tonight.

In this sentence, what is the function of the word beat?  Beat can function as either a noun or a verb, but in this context, it is a noun.  There are two general schools of thought about assigning a tag for the part of speech (POS) each word in a sentence serves: iterative rules-based methods and stochastic methods.  In rules-based methods, like Eric Brill’s POS tagger, a prioritized set of rules sets forth language-specific axioms, such as “when a word appears to be a preposition, it is actually a noun if the preceding word is while”.  A complex set of these meticulously constructed conditions is used to refine a coarser dictionary-style assignment of POS tags.
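To make that concrete, here is a minimal sketch of a rule-driven refinement pass.  It is entirely my own toy construction, not Brill’s implementation; the rule shown is a commonly cited transformation of this style (retag a noun as a verb when the preceding tag is “to”), and the function and data names are hypothetical.

```python
def apply_rules(tagged, rules):
    """Apply each (prev_tag, old_tag, new_tag) rule in priority order."""
    for prev_tag, old_tag, new_tag in rules:
        for i in range(1, len(tagged)):
            word, tag = tagged[i]
            if tag == old_tag and tagged[i - 1][1] == prev_tag:
                tagged[i] = (word, new_tag)
    return tagged

# Coarse dictionary-style assignment: "increase" gets its most frequent tag, NN
sentence = [("to", "TO"), ("increase", "NN"), ("production", "NN")]
rules = [("TO", "NN", "VB")]  # retag NN as VB when the previous tag is TO
print(apply_rules(sentence, rules))
# [('to', 'TO'), ('increase', 'VB'), ('production', 'NN')]
```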

Stochastic methods, by contrast, are “fuzzier”: they build statistical models of how words should be tagged, not from a procedural, manual analysis of edge cases and their mitigations, but by training over pre-tagged corpora, in a manner hearkening to the training sets applied to neural networks.  These trained models then serve as the baseline for assigning tags to incoming text, though there is no practical way to correct a specific error or edge case short of retraining the entire model.  One very interesting concept is to treat part-of-speech tagging as a hidden Markov model: a probabilistic model of a process whose internal states cannot be observed directly and must instead be inferred from the observable inputs and outputs.
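As a rough illustration of the idea, here is a toy Viterbi decoder over a hand-built HMM for the “beat” sentence above.  The tags are the hidden states, the words are the observations, and the probabilities are invented for the example rather than estimated from any corpus.

```python
import math

# Hand-picked toy probabilities; a real tagger would estimate these from a
# tagged corpus. Tags: PRP$ (possessive pronoun), NN (noun), VB (verb).
start = {"PRP$": 0.5, "NN": 0.3, "VB": 0.2}
trans = {
    "PRP$": {"NN": 0.8, "VB": 0.1, "PRP$": 0.1},
    "NN":   {"NN": 0.6, "VB": 0.3, "PRP$": 0.1},
    "VB":   {"NN": 0.4, "PRP$": 0.4, "VB": 0.2},
}
emit = {
    "PRP$": {"my": 0.9},
    "NN":   {"security": 0.5, "beat": 0.5},
    "VB":   {"beat": 0.5},
}

def viterbi(words):
    """Return the most likely hidden tag sequence for the observed words."""
    states = list(start)
    # best[t][s] = (log probability of the best path ending in s, backpointer)
    best = [{s: (math.log(start[s]) + math.log(emit[s].get(words[0], 1e-6)), None)
             for s in states}]
    for word in words[1:]:
        layer = {}
        for s in states:
            prob, prev = max(
                (best[-1][p][0] + math.log(trans[p].get(s, 1e-6))
                 + math.log(emit[s].get(word, 1e-6)), p)
                for p in states)
            layer[s] = (prob, prev)
        best.append(layer)
    # Walk the backpointers from the most probable final state
    state = max(best[-1], key=lambda s: best[-1][s][0])
    path = [state]
    for t in range(len(best) - 1, 0, -1):
        state = best[t][state][1]
        path.append(state)
    return list(reversed(path))

print(viterbi(["my", "security", "beat"]))  # ['PRP$', 'NN', 'NN']
```

Note how the decoder tags beat as a noun here: the emission probabilities alone are ambiguous, but the transition model makes a noun following a noun the more probable path.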

This continues to be a good source of doctoral theses in computer science disciplines… papers that have caused me to lose too much sleep as of late.

Parsing Syntax

Even describing parts of speech can be as mundane as your elementary school grammar book, or as rich as the C7 tagset, which provides 146 unique ways to describe a word’s potential function.  While that is exceptionally expressive and specific, I have become rather fond of the Penn Treebank II tagset, which defines 45 tags that seem to provide enough semantic context for the key elements of local pronoun resolution and larger-scale object-entity context mapping.  Finding an extensively tagged Penn Treebank corpus proves difficult, however: it is copyrighted by the University of Pennsylvania, distributed through a public-private partnership for several thousand dollars, and the tagged corpus covers an almost exclusively narrow variety of topics and sentence structures, namely Wall Street Journal articles.  Obtaining it is critical as a reference check for writing a new Penn Treebank II part-of-speech tagger, and its cost prevents the construction of a more comprehensive Penn-tagged wordlist, which would be a boon for any tagger implementation.  However, the folks at NLTK have provided a 10% sample free under Fair Use, which has proved somewhat useful both for checking outputs in a limited fashion and for generating some more useful relative statistics about relationships between parts of speech within a sentence.

To produce some rudimentary probabilistic models to guide ambiguous POS mappings for individual words, I wrote a five-minute proof of concept that scanned the NLTK-provided excerpt of the WSJ Penn Treebank corpus and produced the probability of the next word’s part of speech given the previous word’s tag.  The full results are available in this gist.
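A sketch in the same spirit as that proof of concept (not the original code; the counting scheme and names are my own) might look like this:

```python
from collections import Counter, defaultdict

import nltk
nltk.download("treebank", quiet=True)  # the free 10% WSJ sample
from nltk.corpus import treebank

# Count tag-to-tag transitions across every sentence in the sample
transitions = defaultdict(Counter)
for sent in treebank.tagged_sents():
    tags = [tag for _, tag in sent]
    for prev, nxt in zip(tags, tags[1:]):
        transitions[prev][nxt] += 1

# P(next tag | current tag): e.g., what most often follows a determiner (DT)
total = sum(transitions["DT"].values())
for tag, count in transitions["DT"].most_common(5):
    print(f"DT -> {tag}: {count / total:.3f}")
```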

Future Musings

My immediate interest, whenever I get some free time on a weekend (which is pretty rare these days, given the exceptional pace of progress at our start-up), is pronoun resolution, the subject of this generation’s Turing Test: the Winograd Schemas.  An example of such a challenge is to get a machine to answer this kind of question: Joe’s uncle can still beat him at tennis, even though he is 30 years older. Who is older?  This kind of question is easy for a human to answer, but very, very hard for a machine to infer, because (a) it can’t cheat by Googling a suitable answer, as some of the less impressive Turing Test contestant programs now do, and (b) it requires not only the ability to successfully parse a sentence into its respective parts of speech, phrases, and clauses, but also the ability for a computer to resolve the meaning of a pronoun.  That’s an insanely tough feat!  Imagine this:

“Annabelle is a mean-spirited person.  She shot my dog out of spite.”

A program could infer that “my dog” is a dog belonging to the person providing the text.  Being able to do this has obvious applications in the real world, and it has been done before.  But imagine the leap in context, exponentially harder to overcome, required to resolve “She”.  This requires not only intra-sentence relationships among noun phrases, possessive pronouns, direct objects, and adverbial clauses, but also the ability to carry context forward from one sentence to the next, building a growing “mental map” of people, places, and things, and building a profile of each as more information or context is provided.  And if you think that’s not hard enough to define… imagine two additional words appended to the sentence:

, she said.

That would, to a human, indicate dialog, which requires a wholly separate, Inception-style frame of reference between contextual frames.  The parser is reading text about things that is actually being conveyed by other things; both sets of frames have their own unique, but not necessarily separate, domains and attributes.  I’m a very long way off from ever getting this “free time” diversion anywhere close to functioning as advertised… but, then again, that’s what exercises on a weekend are for: not doing, but learning. 🙂

 

 

Robustness in Programming

(For my regular readers: I know I promised this post would detail ‘a method by which anyone could send me a message securely, without knowing anything else about me other than my e-mail address, in a way I could read online or on my mobile device, in a way that no one can subpoena or snoop on in between.’  A tall order, for sure, but still something I am working to complete in an RFC format.  In the meantime…)

I have the benefit of supporting an engineering group that is seeing tremendous change and growth, well past the ideation and proof-of-concept stages and into the validation and scaling phases of a product timeline.  One observation I’ve made amid the many lessons taught and learned during this company and product growth spurt has been the misapplication of Jon Postel’s Robustness Principle.  Many technical folks are at least familiar with it, and can often quote the adage: “Be conservative in what you do, be liberal in what you accept from others”.  Unfortunately, like many good pieces of advice, it is taken out of context when applied to software development.

First off, robustness, while it sounds positive, is not a trait you always want.  This can be confusing for the uninitiated, considering antonyms of the word include “unfitness” and “weakness”.  On a macro scale, you want a system to be robust; you want a product to be robust.  However, if you decompose an enterprise software solution into its components, and those pieces into their individual parts, the concerns do not always need to be robust, and in some cases should not be.

For instance, should a security audit log be robust?  Imagine a highly secure software application that must carefully log each access attempt to the system.  This system is probably designed so that many different components can write data to the log, and imagine the logging system is simple and writes its output to a file.  If this particular part of the system were robust, as many developers define it, it must attempt, as best it can, to accept and log any message posted to it.  Implemented this way, however, it is subject to CRLF injection attacks, whereby any component that can connect to it can insert a delimiter that allows it to add false entries to the security log.  Of course, you developers will say, you need to do input checking and not allow such a condition to pass through to the log.  I would go much further and state that you must be as meticulous as possible about parsing, throwing exceptions or raising errors for as many invalid conditions as possible.  Each exception that is not thrown is an implicit assumption, and assumptions are the root cause of nine of the OWASP Top 10 vulnerabilities in web applications.
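To illustrate the stricter alternative, here is a minimal sketch, with hypothetical function and file names, of a log writer that fails fast on control characters rather than robustly accepting them:

```python
import re

# Reject ASCII control characters (including CR and LF) outright
_CONTROL_CHARS = re.compile(r"[\x00-\x1f\x7f]")

def write_audit_entry(log_file, message):
    """Append one audit message, refusing anything that could forge an entry."""
    if _CONTROL_CHARS.search(message):
        # Fail fast rather than quietly "fixing" the input: a control character
        # in an audit message is evidence of a bug or an attack, not noise.
        raise ValueError("audit message contains control characters")
    log_file.write(message + "\n")

# Usage: the second call raises instead of appending a forged "entry"
with open("audit.log", "a") as log:
    write_audit_entry(log, "login failed for user alice")
    write_audit_entry(log, "profile updated\nlogin succeeded for user mallory")
```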

Robustness can be, and often is, an excuse born of laziness.  Thinking about edge cases, and about the assumptions software developers make with each method they write, is tedious.  It is time consuming.  It does not advance a user story along its path in an iteration.  It adds no movement toward delivering functionality to your end users.  Recognizing and mitigating your incorrect assumptions, however, is an undocumented but critical requirement in the development of every piece of a system that stores, or may ever come in contact with, protected information.  Those who rely on the Robustness Principle must not interpret “liberal” to mean “passive” or “permissive”, but rather “extensible”.

Returning to the logging system I posited earlier, consider how such a system could remove assumptions but still be extensible.  The number and format of each argument that comprises a log entry should be carefully inspected.  If auditing text must be descriptive, shouldn’t such a system reject a zero- or two-character event description?  While information systems should be localizable and multilingual, shouldn’t all logs be written in one language, with any characters not of that language omitted and unique system identifiers within the log language’s character set used instead?  If various elements are co-related, such as an account number and a username, shouldn’t they be checked for an association instead of blindly accepted as stated by the caller?  If the log should be chronological, shouldn’t an event dated in the future, or too far in the past, be rejected?  Each of these leading questions exposes a vulnerability that a careful assessment of input checking can address, but which is wholly against most developers’ interpretations of the Robustness Principle.
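Translating those leading questions into code, a sketch of such a validating entry point might look like the following; the window sizes, minimum lengths, and the accounts lookup are illustrative assumptions, not prescriptions:

```python
from datetime import datetime, timedelta, timezone

def validate_entry(event_time, description, account_number, username, accounts):
    """Raise on the first violated assumption; log only what survives."""
    now = datetime.now(timezone.utc)
    # Chronology: reject events dated in the future or too far in the past
    if not (now - timedelta(days=7) <= event_time <= now + timedelta(minutes=1)):
        raise ValueError("event time outside the acceptable window")
    # Descriptiveness: a zero- or two-character description conveys nothing
    if len(description) < 3:
        raise ValueError("event description too short to be meaningful")
    # One log language: restrict entries to the log's character set
    if not description.isascii():
        raise ValueError("description contains characters outside the log language")
    # Co-related fields: the username must actually own the account number
    if accounts.get(account_number) != username:
        raise ValueError("account number and username are not associated")
```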

However, robustness is not about taking whatever is given to you; it is about very carefully checking what you get and, if and only if it passes a litany of qualifying checks, accepting it as an answer to an open-ended question rather than relying on a defined set of responses, when possible.  A junior developer might enumerate all the error states he or she can imagine in a fixed list or “enum” and accept only those values as valid input to a method.  While that is a form of input checking, it is wholly inextensible: the next error state any other contributor wishes to add will require a recompile and redeploy of the logging piece, and potentially of every other consumer of that component.  Robustness need not require that all data be free-form; it must simply be written with foresight.
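As a sketch of the distinction, with hypothetical names throughout: a registry of error states stays strict about what it accepts while remaining open for extension, unlike a closed enum baked into the logger.

```python
_known_error_states = set()

def register_error_state(code):
    """Called by any component, at startup, to declare its own error states."""
    if not (isinstance(code, str) and code.isidentifier()):
        raise ValueError(f"malformed error state code: {code!r}")
    _known_error_states.add(code)

def log_error(code, detail):
    if code not in _known_error_states:
        # Strict, not permissive: only registered states are accepted
        raise ValueError(f"unregistered error state: {code!r}")
    print(f"{code}: {detail}")

register_error_state("DISK_FULL")  # a new component adds its own state
log_error("DISK_FULL", "audit volume at capacity")
```

New error states arrive at load time, so no recompile or redeploy of the logging piece is needed, yet unknown codes are still rejected.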

Postel wrote his “law” with reference to TCP implementations, but he never suggested that TCP stack implementers liberally accept segments with such boundless blitheness that they infer the syntax of whatever bits they receive.  Rather, they should not impose an understanding of data elements that are not pertinent to the task at hand, nor enforce one specific interpretation of a specification upon upstream callers.  And therein lies my second point: robustness is not about disregarding syntax, but about imposing a convention.  Robust systems must fail as early and as quickly as possible when syntax, especially, has been violated or cannot be accurately and unambiguously interpreted, or when the context or state of the system is invalid for the operation.  For instance, if a component receives a syntactically valid message but can determine that the context is wrong, such as a request for information from a user who lacks authorization to that data, every conceivable permutation of invalid context should be checked.  Failing to consider each, in a blasé fashion, to leave room for a future feature that may someday depend on an assumption made in the present, crosses another threshold beyond extensibility into culpable disregard.

In conclusion, building a robust system requires discretion in interpreting programming “laws” and “axioms”, and the expert realization that no one-liner assertion was meant by its author as a principle so general that it applies at every level of technical scale in the architecture and design of a system.  To those who would disagree with me, I would say: to be “robust” yourself, you have to accept my argument. 😉

 