Welcome back to the Lamb_OS Substack! I am always humbled – almost awed, in truth – to attract readers who attend voluntarily (I don’t get it either, but we all know that thing about gift horses…). So, if you are a regular reader of this Substack (or better yet, a subscriber!), you already know that you – the readers – are the only reason I do this. So as always, thank you for visiting!
Today I’m going to offer some of my opinions on artificial intelligence, or AI. The term “Artificial Intelligence” generally refers to the application of computer hardware and software systems to do things that, well, seem smart. And by “smart”, we mean systems capable of achievements that we consider valuable, the kind usually ascribed only to a limited class of human talent.
But AI systems have been posited to be more than merely “smart”. AI systems are also seen as fundamentally disruptive – a “new” technology that could change everything, for better and for worse. Also predicted is the emergence of some future version of AI that will attain general intelligence not only equal to, but (of course) exceeding, collective human intelligence. This proposed future version is called “AGI”, for artificial general intelligence. Most alarming, we have recently been warned that one or another of these versions of AI poses an existential threat to humanity.
I recognize that every new technology of substance is disruptive. But that is where my agreement ends. My position is that the characterizations of AI offered in the previous paragraph are more than mere gross overstatements. They are, instead, fundamental mischaracterizations.
I do not take this position lightly. I have studied the differences between biological beings and computational systems for over 50 years, and I simply find it impossible to align myself with what have become mainstream characterizations of AI. My position owes to a studied conclusion about the basis of the adaptive behavior that is intrinsic to living things. Biological systems, and only biological systems, have design properties that allow for a flexibility that is inherently out of the reach of AIs. This in turn implies that AGI is a fool’s errand.
My hope is to leave the reader with at least a foundational basis for my viewpoint. Going forward, I am going to call the body of overstatements and mischaracterizations of AI The AGI Fallacy.
I propose that the AGI Fallacy ultimately boils down to a small handful of conceptual errors. Unfortunately, it is not sufficient merely to list them and be done with it. The consequences of overlooking what appear to be trivial differences between the inherent properties of complex systems can grow exponentially via propagation of error. It will take some effort to “unbundle” the confusion.
Nonetheless, one must start somewhere. Having thought the issue through over several rewrites of this post, I am going to start by addressing what I consider a core implication of the AGI Fallacy.
Of the many significant issues associated with the AGI Fallacy, they all appear to begin at the same place: the nature of intelligence. The current Zeitgeist is that AI systems are intelligent in the same way that people are intelligent. They are not. It is telling that journalists who have investigated such claims also generally reach this conclusion.
I direct the reader to two excellent essays published in The New York Times, back when ChatGPT and Bing first appeared on the scene – now over a year ago. The title of Cade Metz’s piece sums it up pretty well: “A.I. Is Not Sentient. Why Do People Think It Is?” To quote,
“Some of the pioneers were engineers. Others were psychologists or neuroscientists. No one, including the neuroscientists, understood how the brain worked. (Scientists still do not understand it.)”
The essay by Kevin Roose, entitled “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” focused more on the “hallucinations” offered by Microsoft’s Bing chatbot. To wit,
“…in general with A.I. models, the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”
Granted, these are opinions, but I agree with them. They are not uninformed. I will address these arguments further in my next Substack post. For those who cannot wait, see my article “Artificial Intelligence is Neither”, posted on LinkedIn.
Yet it remains true that most scientists and engineers who create generative AI systems fail to appreciate what their own systems can and cannot do. They instead often pontificate on the “unlimited potential” of the automata they create. They are sold on the conclusion that it is only a matter of time before AI achieves staggering abilities that will impact the whole of civilization, either for better or, more frequently, for worse.
Now, I get that intelligence means different things to different people, even experts. Many disagree as to its nature, and how to properly define it. Furthermore, it is not wrong to assert that generative AI systems are more capable – by orders of magnitude – than previous iterations of A.I.
But that does not justify the assertion that AI systems are intelligent in the same sense that people are. Nor does it imply that these automata pose a threat to our species. We have adapted to social media, after all. Very few AI alarmists predict that humanity faces existential risk from Facebook, TikTok, or other such platforms. Those platforms are recognized as unfortunate sources of misinformation, hate speech, trolling, doxing, and so forth. But such computer programs should not be considered existential threats. They do not pose a threat of human extinction. They are not like nuclear weapons or unchecked climate change. They cannot “learn” how to make our planet uninhabitable. Why, then, do so many otherwise well-informed individuals assume that generative AI is so dangerous? Why do we worry that we are on the cusp of an uprising of the machines?
The answer is that AI programs seem more human-like. They interact with us through language and without assistance from other people. They can respond to us in ways that imitate human communication and cognition. It is therefore natural to assume that the output of generative AI implies human intelligence. The truth, however, is that AI systems are capable only of mimicking human intelligence. By their nature, they lack defining human attributes such as sentience, agency, meaning, or the appreciation of human intention.
In my next Substack, I will focus on the handful of conceptual errors that leads to the confusion I call the AGI Fallacy. For while generative AI offers a new generation of tools that are, indeed, impressive, these tools are neither as human-like nor as threatening as they are made out to be.
After all, I hope to convince you, my valuable readers, that only humans can be inhumane.