This is a retrospective of week 24, 2024 (2024-06-10–2024-06-16).
This week, I read an article about the persistent inaccuracies that plague Large Language Models (LLMs). The authors argue that it’s more accurate and useful to describe “AI hallucinations” as bullshit: these models replicate human language with no concern for truth, which is a serious problem wherever accuracy matters.
The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.
—Michael Townsen Hicks, James Humphries, Joe Slater, ChatGPT is bullshit
The potential harms of LLMs are enormous because we instinctively search for, and find, meaning in normal-sounding language.
In a famous paper called “On the Dangers of Stochastic Parrots”, AI researchers Emily Bender, Timnit Gebru, and their co-authors predicted exactly this phenomenon. See, humans are hardwired for language. If we see a string of words that make grammatical sense, we naturally search for and find meaning in it, and we naturally tend to assume that there must be a mind like ours behind it, even when all we’re looking at is word-salad barfed out by a probabilistic parrot. As the researchers wrote, “…the tendency of human interlocutors to impute meaning where there is none can mislead both…researchers and the public into taking synthetic text as meaningful.” And that makes language models dangerous, because it means that we naturally trust what they say. The potential harms of that are enormous.
—Adam Conover https://youtu.be/ro130m-f_yk?feature=shared&t=905
In short, we need to use our discernment and stop believing the bullshit.
But let’s score a point for humanity here and use our soft squishy human brains to do one more thing that Silicon Valley’s dumb algorithm can’t. Let’s use it to tell the difference between a truth and a lie, and stop believing their bullshit.
—Adam Conover https://youtu.be/ro130m-f_yk?feature=shared&t=1414
It is important not to be blinded by sophisticated language, whether or not it is produced by an LLM.
There is now a degree of sophistication in insanity that gives it a sort of a coating of seeming plausibility, because if you articulate your insanity with enough sophistication, and obscure terms, and conceptual tie-ups that sort of get you all stuck in conceptual knots, you may lose sight of reason and clarity. You may think: Well, behind this incredibly complex story there may be something real that I just can’t understand, but the guy saying it can. No, the guy saying it cannot. He’s insane.
—Bernardo Kastrup https://youtu.be/pqYjsLcfE1g?feature=shared&t=2384
This week I’ve also started reading Bridging Science and Spirit: The Genius of William A. Tiller’s Physics and the Promise of Information Medicine by Nisha J. Manek. I will come back to this book next week.