3 Comments
David Hsing

Every time someone comments “but humans make mistakes just like they do,” it makes me cringe. The public needs to be educated that, yes, as you’ve said, those things don’t refer to anything at all https://davidhsing.substack.com/p/why-neural-networks-is-a-bad-technology and, on top of that, about all the harms they bring https://davidhsing.substack.com/p/generative-ai-does-far-more-harm

Emile van Bergen

Here's another harm: pushing people who are at risk of a manic or psychotic episode over the edge. https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

Chris Sotraidis

This writing and stance remind me of Douglas Hofstadter's writing on this from a couple of years ago in The Atlantic. LLMs, with the transformer architecture in its current form, are certainly not mirrors. I do strongly think that before people interact with them, they should be keenly aware of how they work and what they distinctly are not.

I do think it might be interesting to revisit some of Hofstadter's ideas on modeling human cognition, which were spelled out in his Fluid Concepts book from the '90s and realized in one form in Copycat: https://en.wikipedia.org/wiki/Copycat_(software)

His quote says it best --

"To fall for the illusion that vast computational systems “who” have never had a single experience in the real world outside of text are nevertheless perfectly reliable authorities about the world at large is a deep mistake, and, if that mistake is repeated sufficiently often and comes to be widely accepted, it will undermine the very nature of truth on which our society—and I mean all of human society—is based."
