“Notwithstanding Anything Contained Herein”

A blog about financial agreements and language.

Goldilocks and the Three LLMs

Aug. 23, 2023, 11:03 a.m.

LLMs generate text creatively, but they can also adhere to literal or factual information when prompted. By mixing creative variability with literal information, the LLMs currently on the market manipulate us into applying theory of mind to things that don’t have one. This manipulation is decisively to their advantage, but probably not the user’s.

Read More →

Creativity vs Wrongness: The Case of ChatGPT

Feb. 22, 2023, 10:05 a.m.

ChatGPT is a genuine advance: a truly general, intention-comprehending, generative interactive agent that has captured the attention of the AI space. However, the widely held, pernicious assumption of monotonically increasing AI performance, born of inevitable hype, creates the flawed expectation that we are just iterations away from something that won’t make (many) mistakes. But the history of AI development is littered with genuine advances made right into dead ends.

Read More →

The Real Dunning-Kruger Fools

Dec. 17, 2021, 10:53 a.m.

The Dunning-Kruger effect isn’t real. We should all be more skeptical of social science, especially pop-psychology favorites. The real fools of Dunning-Kruger are those who have uncritically trusted social science, or who lacked the curiosity to investigate and think critically about the actual research.

Read More →

On Chinese Rooms, Roman Numerals, and Neural Networks

Dec. 9, 2021, 10:46 a.m.

As an acquaintance of mine used to say: “all models are wrong.” All models, among them neural networks, are also just tools, albeit very useful, very powerful tools. They do not necessarily come to learn or say anything of significance about whatever question they are put to. We should resist the temptation to imbue them with significance or to misunderstand their utility.

Read More →

When Everything Looks Like a Nail

Nov. 11, 2021, noon

Problems best approached with machine learning can look extremely similar to problems best approached in some other way. There’s often no theoretical reason you cannot apply machine learning to a problem that can be expressed in its terms, but we should always be asking ourselves what we actually should be doing, not just what we can do.

Read More →