Aug. 23, 2023, 11:03 a.m.
LLMs generate text creatively, but they can also adhere to literal or factual information when prompted. By mixing creative variability with literal information, the LLMs currently on the market manipulate us into applying theory of mind to things that have no mind. This manipulation works decisively to their advantage, but probably not to the user's.
Feb. 22, 2023, 10:05 a.m.
ChatGPT is a genuine advance: a truly general, intention-comprehending, generative interactive agent that has captured the attention of the AI space. However, the widely held and pernicious assumption of monotonically increasing AI performance, born of inevitable hype, creates the flawed expectation that we are just a few iterations away from something that won't make (many) mistakes. Yet the history of AI development is littered with genuine advances made right into dead ends.
Dec. 17, 2021, 10:53 a.m.
The Dunning-Kruger effect isn’t real. We should all be more skeptical of social science, especially pop psychology favorites. The real fools of Dunning-Kruger are those who have uncritically trusted social science, or lacked the curiosity to investigate and think critically about the actual research.
Dec. 9, 2021, 10:46 a.m.
As an acquaintance of mine used to say, "all models are wrong." All models, neural networks among them, are also just tools, albeit very useful, very powerful ones. They do not necessarily learn or say anything of significance about whatever question they are put to. We should resist the temptation to imbue them with significance or to misunderstand their utility.
Nov. 11, 2021, noon
Problems that should be approached with machine learning can look extremely similar to problems that should be approached another way. There is often no theoretical reason you cannot tackle a problem with machine learning if it can be expressed as one, but we should always be asking what we actually should be doing, not just what we can do.
Creativity vs Wrongness: The Case of ChatGPT
On Chinese Rooms, Roman Numerals, and Neural Networks
When Everything Looks Like a Nail
Language Technology Needs Linguists, But Companies Need Convincing
Financial Contracts Are More Fintech than LegalTech
The Archaic Language Whereby Lawyers Draft
The Paper Hard Drive, or, Where are Our Contracts Anyway?
The Perilous Complexity of Information Extraction from Financial Contracts