Dec. 9, 2021, 10:46 a.m.
As an acquaintance of mine used to say: “all models are wrong.” All models, neural networks among them, are also just tools, albeit very useful and very powerful ones. They do not necessarily learn or reveal anything of significance about whatever question they are put to. We should resist the temptation to imbue them with significance or to misunderstand their utility.
Read More →
Nov. 11, 2021, noon
Problems that should be approached with machine learning can look extremely similar to problems that should be approached in some other way. There is often no theoretical reason you cannot tackle a problem with machine learning if it can be expressed in those terms, but we should always ask ourselves what we actually should be doing, not just what we can do.
Read More →
Creativity vs Wrongness: The Case of ChatGPT
On Chinese Rooms, Roman Numerals, and Neural Networks
When Everything Looks Like a Nail
Language Technology Needs Linguists, But Companies Need Convincing
Financial Contracts Are More Fintech than LegalTech
The Archaic Language Whereby Lawyers Draft
The Paper Hard Drive, or, Where are Our Contracts Anyway?
The Perilous Complexity of Information Extraction from Financial Contracts