“Notwithstanding Anything Contained Herein”

A blog about financial agreements and language.

The Perilous Complexity of Information Extraction from Financial Contracts

March 7, 2021, 3:38 p.m.

A stark repeating pattern from my years of applying NLP to financial services is this: most seemingly straightforward information extraction tasks lure you with the sweet siren song of simplicity, allowing you to get just comfortable enough before the bottom falls out. #contracts #information_extraction #nlp

I don’t mean for the following statement to come across with the cynicism that it undoubtedly will, but maybe that will give it a more warning-like character.  In any case: for machine-learning information extraction models, most of what they actually can do with financial contracts isn’t particularly useful, and the rest isn’t particularly interesting.

Let me give an example. A typical task for machine learning might be to recognize and extract maturity dates.  On its surface, this seems like a fairly coherent and straightforward problem: securities and loans typically have maturity dates, so all that’s needed is to recognize and return the span of characters which corresponds to the maturity date.  This is about as vanilla as tasks get.  And if this were all the task required, and all the nuance it contained, that would settle the question.

Now, one would be forgiven for believing that the following logic should apply to most problems of a similar kind to the maturity date example just described, i.e., “find X and retrieve it”:

  1. All contracts of type A have information X

  2. Information X must be expressed within the contract 

  3. Some span of text corresponds to information X unambiguously and definitively

  4. That text can be isolated and retrieved by a sufficiently complex model

However, most, if not all, of these presuppositions are false in practice.  In financial contracts, often what seem to be simple tasks quickly balloon with irritating complexity.  A data scientist, believing (1-4) above and dedicated to the words-as-numbers school of NLP, would be stumped by the existence of the following highly typical variations:

  • The use of the term “Final Payment Date” in lieu of “Maturity Date” to describe the date on which the principal is to be paid in full.

  • Calculation of maturity dates by reference to some other date (e.g., “5 years from the Closing Date”)

  • A comparative expression (e.g., “the earliest of” some set of document-defined dates)

  • A conditional definition (e.g., “shall be November 15, 2022 if the Springing Maturity Condition does not apply or else November 15, 2023.”)

  • Ambiguous definition of the maturity date in terms of some other date (the identity of which could be any defined date; e.g., “‘Maturity Date’ means the Tranche R Maturity Date or the Tranche T Maturity Date, as applicable.”)
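Each of these variations defeats a literal-date extractor in its own way.  A few toy clauses (paraphrases of the bullet points above, invented for illustration) make the failure modes concrete: a pattern that only recognizes literal dates finds nothing in most of them, and in the conditional clause it finds two candidates with no way to choose:

```python
import re

# A literal-date pattern of the kind the naive approach relies on.
DATE = re.compile(r"[A-Z][a-z]+ \d{1,2}, \d{4}")

clauses = [
    # "Final Payment Date" used in lieu of "Maturity Date"
    '"Final Payment Date" means the date on which the principal is to be paid in full.',
    # calculated by reference to another date
    '"Maturity Date" means the date falling 5 years from the Closing Date.',
    # comparative expression over other defined dates
    '"Maturity Date" means the earliest of the Term Loan Maturity Date and the Revolver Maturity Date.',
    # conditional: two candidate dates, no principled way to pick one
    '"Maturity Date" shall be November 15, 2022 if the Springing Maturity Condition does not apply or else November 15, 2023.',
    # defined in terms of other defined terms
    '"Maturity Date" means the Tranche R Maturity Date or the Tranche T Maturity Date, as applicable.',
]

for clause in clauses:
    matches = DATE.findall(clause)
    # Only the conditional clause yields any literal dates, and it yields two.
    print(len(matches), matches)
```

Zero matches and two matches are both wrong answers; neither the absence nor the surplus tells the model which, if any, span to return.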

To some extent, each of these issues can be mitigated in practice by expanding the annotation schema or by widening the scope of what constitutes a “correct” return from the model.  For example, the schema could treat “Final Payment Dates” as maturity dates, and could allow an expression (e.g., “Tranche R Maturity Date”) to be returned in place of a literal date; both adjustments would count as correct outcomes, and both are well within the abilities of a well-designed machine-learning model to supply.

But, returning to my point above, even if “Tranche R Maturity Date or Tranche T Maturity Date” were to be output, or the earliest of some dates calculated, or some exact date inferred from a statement and reference to some other date, is that information ultimately useful to the financial institution employing it? 

To my way of thinking, this return is about as useful as answering a question with another question.  Instead of wondering what the maturity date is, we are left wondering what the Tranche R Maturity Date is, and whether it or the Tranche T Maturity Date is the one that should be extracted, with no principled way of knowing which.  Even worse, instead of wondering what the maturity date is, we are suddenly compelled to wonder what else is lurking in the document to modify that date.

In the end, what is needed is a complete understanding of a document.  That means everything: the definition of every date, the properties of every tranche, a model of every scenario that might toggle between Tranche T Maturity Date and Tranche R Maturity Date, and so on.  Only when that information is available can a complex question like “What is the maturity date?” be answered in a way that saves time or provides value.  If what we actually want is precise, useful information extracted from a contract, we shouldn’t use models that readily answer back with another question.

Information extraction from financial contracts requires understanding actual contracts: their form, their language, and the way they work.  And that takes work, for which there is, in the end, no good substitute or shortcut via machine-learning models.