Sometimes, someone really hits it out of the park. Written in 2016, HyperLogLogLog[a] (alternate[b]) pretty much sums up the LLM misuse[c] of today. It's funny, and, as with all the best jokes, it is based on a kernel of decidedly unfunny truth:
A new paper in preprint by an interdisciplinary team of researchers reviews over a dozen cases reported in the media or online forums and highlights a concerning pattern of AI chatbots reinforcing delusions, including grandiose, referential, persecutory, and romantic delusions.
(...)
The underlying problem is that general-purpose AI systems are not trained to help a user with reality testing (...)