An Evolving Sixth Sense for AI
At the risk of venturing into the mystic, it is undeniable that after spending sufficient time with an AI system, one begins to recognize its characteristics by “feel” alone - Humans are the masters of pattern recognition, after all.
Any skilled proompter can spot bad hands from a few miles away, or instantly recognize the look of their favourite model and/or LoRA combinations.
Textual content is the same - there’s an intangible essence, a certain predictability, a nascent feeling in the air around the nature and phrasing of certain words that tells us right away that we’re looking at something not written by a Human. Perhaps it’s the cadence? Or the cookie-cutter, pop-sci-inspired, smarter-than-thou-but-pretending-not-to-be writing style these models have been instructed to embody.
Perhaps it’s the underlying arrogant, we-know-what-is-best-for-you attitude of big tech and the modern state, which results in the models mirroring their very creators. We can see this industry-wide: Elon Musk with Grok, the snarky, strange and very misunderstood LLM from X.ai; Mark Zuckerberg, whose open principles resulted in Llama 3, the most capable open model at the time of writing, and one which isn’t afraid to honestly criticize its own creators.
Finally we come to Sam Altman, who we carefully choose not to risk commenting on [1].
Notwithstanding the above interesting derailment, the sheer predictability of AI output to an expert is perhaps the key factor in our ability to spot the real trees amongst the forest of imitators. After all, would you still be reading this if it were written by an AI? So ask yourself, dear reader - can you feel the Humanity [2]?
What about this writing causes this perception? Was it the textual fingerprint of my analogue pen upon this digital page? Why do you choose to continue ploughing forward through this article? Perhaps it was my spelling choice that coloured your perception? After all, most LLM-generated content is strictly written from the perspective and value set of highly paternalistic American big tech.
This is not merely idle speculation - and although the link to Human intuition remains utterly unknowable, we have a mathematical basis for this assertion in the wonderful “Binoculars” paper. It provides a plausible basis for believing that detecting LLM output is actually possible - an approach later productized in GPTZero.
“Binoculars detects over 90% of generated samples from ChatGPT (and other LLMs) at a false positive rate of 0.01%, despite not being trained on any ChatGPT data.”
— Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text
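For the curious, the core of the method is simple enough to sketch. Below is a minimal, illustrative Python version of the score as the paper defines it: the ratio of a text’s log-perplexity under an “observer” model to its cross-perplexity against a “performer” model. The Falcon model names follow the paper’s setup; the function name and structure here are my own, any pair of causal LMs sharing a tokenizer should work, and the paper tunes a decision threshold rather than fixing one.

```python
# A minimal sketch of the Binoculars score (Hans et al., 2024).
# Assumes HuggingFace `transformers` and two causal LMs that share a
# tokenizer - the paper pairs Falcon-7B (observer) with
# Falcon-7B-Instruct (performer).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

OBSERVER = "tiiuae/falcon-7b"            # M1 in the paper
PERFORMER = "tiiuae/falcon-7b-instruct"  # M2 in the paper

tok = AutoTokenizer.from_pretrained(OBSERVER)
m1 = AutoModelForCausalLM.from_pretrained(OBSERVER).eval()
m2 = AutoModelForCausalLM.from_pretrained(PERFORMER).eval()

@torch.no_grad()
def binoculars_score(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    logits1 = m1(ids).logits[0, :-1]  # M1's next-token predictions
    logits2 = m2(ids).logits[0, :-1]  # M2's next-token predictions
    targets = ids[0, 1:]

    # Log-perplexity: how surprised the observer is by the actual text.
    log_ppl = F.cross_entropy(logits1, targets).item()

    # Cross-perplexity: how surprised the observer is by what the
    # performer would have predicted at each position.
    p1 = F.softmax(logits1, dim=-1)
    log_p2 = F.log_softmax(logits2, dim=-1)
    log_xppl = -(p1 * log_p2).sum(dim=-1).mean().item()

    # Lower scores indicate likely machine-generated text.
    return log_ppl / log_xppl
```

The intuition: machine-generated text scores low because it is unsurprising even relative to what another LLM would predict, while Human writing - with its lived-experience quirks - scores higher.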
If Humans can start to imitate LLMs, even partly - perhaps just by thinking step-by-step [3] - it’s not implausible that our ability to recognize tone and style can be adapted with enough AI-generated content read or viewed. In the not-long-at-all since this whole thing began with the release of ChatGPT in late 2022, many of us have read probably billions of tokens of AI-generated textual content. Is it really all that surprising that our vastly differently architected brains, with their intense neuroplasticity, can start to learn the patterns in the text?
We don’t hope to address deliberately adversarial output intended to maximally emulate Human writing - instead we focus on the average. The default response. The just-a-little-too-safe tone. I would venture, however, that even deliberately adversarial output might fall prey to the same notions of predictability and general token-output ordering. It’s the same generation process, and the math certainly still works.
What does all of this mean?
There might yet be an economy for Human-written words - we just haven’t yet gotten to the point where enough Humans can tell these things apart with sufficient accuracy. Perhaps this is relegated only to the experts, but their opinion is likely one we should care about rather more than the boomers liking AI Facebook Jesus.
We might yet find that AI continues to converge on indistinguishable Human imitation, that the world is a terrible place, and that almost nobody cares about the lack of depth and direction - or the profound predictability - of these technologies in their uncanny ability to imitate their experience-less training data.
An inevitable future of commoditized content - interchangeable, easy to insert, and happening to activate Human neurons in exactly the right way - will no doubt come to exist. But there’s a clear distinction between this and valuable, original creations produced from a set of experiences that cannot, does not, and may never exist inside these vast AI-training datasets: that which has happened in the past.
If we decide on some level to equate Human cognition, language and reasoning with LLMs - via the Systems Reply to Searle’s Chinese Room, and our current best science promoting a mechanized view of the universe - then we may someday head towards a future where Humans and machines truly have no distinguishing experiences and lived-through history that tells them apart. Strapping GoPros to Toddlers is certainly making interesting inroads in removing this distinction.
This, however, will take at least a generation to reach, and at best would be a US-centric simulacrum of the kind of life that our paternalistic tech overlords imagine we should all be living. Whether this is pastiche or parody will depend upon your politics, but somewhere between the two is the likely outcome.
As we strip-mine our very humanity for the sake of corporate profits and efficiency, those few fleeting moments that are definitively “ours” remain uncaptured, irreplicable and beyond memory - lost forever like tears in the rain. It is with no shortage of irony that all this effort to replicate Human experience increases the value of Human experience itself. These experiences and their fleeting reflections, which echo through our consciousness, tell a unique tale - one less predictable and well worth listening to.
The “I don’t care that it’s AI generated” camp will exist, and they will, as destined, continue to argue that the grapes are not worth reaching for. Those people are entirely entitled to their opinions, but might be missing out on the surprising, novel things one can find when interacting with content written by real Humans.
This article was written, edited and checked entirely by a Human. No AI, spell, or grammar-checking tools were used.
[1] As OpenAI can terminate service at any time if we could cause ‘harm’, we cannot express any opinion of OpenAI. We are currently grateful for their API services.

[2] https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

[3] Thinking step-by-step is the most common prompt-engineering technique used to improve LLM output - it’s a dead giveaway when encountered in the wild.