

I remember doing something like this with the OG ChatGPT around when it first came out to the public: I gave it a bunch of jokes to explain to see how well it did. I wasn’t particularly rigorous, but I noticed that it did pretty well with puns and wordplay, and when it didn’t “get” a joke it would often assume it was an obscure pun or wordplay joke and make up an explanation along those lines. That made sense to me, given that it was a large language model: its sense of humor would naturally be language-based.





I’m sure it would be relatively straightforward to create a client that randomly blocks 90% of the content to simulate a smaller system.
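A minimal sketch of that idea in Python, assuming the client can filter items before display (the function name, the uniform-random filter, and the seeded RNG are my assumptions, not any real client's API):

```python
import random

def filter_content(items, block_fraction=0.9, seed=None):
    """Randomly drop a fraction of items to simulate a smaller content pool.

    Each item independently survives with probability (1 - block_fraction),
    so with the default 0.9 roughly 10% of items get through.
    """
    rng = random.Random(seed)  # seeded for reproducible runs
    return [item for item in items if rng.random() >= block_fraction]

# Hypothetical usage: feed of 1000 posts, show only ~10% of them.
posts = [f"post-{i}" for i in range(1000)]
visible = filter_content(posts, block_fraction=0.9, seed=42)
```

Seeding the generator makes a given simulated "smaller system" repeatable, which matters if you want to compare experiences across sessions rather than reshuffle what's hidden every time.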