Why AGI Will Never Arise from Man-Made Automata (AI)
Hello and welcome to the Lamb_OS Substack! As always, know that I greatly value you - the readers and subscribers for whom I write these posts. I am just getting started with this endeavor, so I am still feeling my way through the process. But I’m quite motivated to continue the effort. For one thing, I feel like a lifetime of studied inquiry has resulted in some ideas – answers, really – that are worth sharing. Many of these are not mainstream ideas, which only makes me want to share them more. Why might I be someone with ideas that have value? For more about me (and the origin of the odd name of my Substack), take a look at my very first post. Otherwise, just keep reading.
I want to start out today with a shout-out to my colleague Devansh for mentioning this Substack in his widely read newsletter and Substack. As a result of his reference to my work, I have a slew of new subscribers, and I am honored to welcome you all and to have you on board. Devansh invited me to be a guest author for his Substack and newsletter back in July, and we have stayed in touch ever since. He is a great source of information on all things tech and AI, a serious and respected data scientist, and has an unusual relationship with chocolate milk.
Today’s post follows up on my previous one, “AI Is Truly Brainless”. As I argued there, I believe that the latest generation of AI systems shows admittedly impressive capabilities. However, these capabilities are not at all like human intelligence. In fact, they are, necessarily, inherently different from biological intelligence. To arrive at this conclusion, one has to have studied both biology and computation. I know I’ve mentioned that I’ve been doing just that for many decades, if only because I want to emphasize that I’ve had time to, well, look into this in some detail. I recognize the impressive recent advances in generative AI models, and in no way dismiss them. Nonetheless, I strongly feel AI based on computation can in no way lead to artificial general intelligence. Period. I am acutely aware that this is a strong statement, and I do not make it lightly. It is also very much outside the mainstream of current thinking. I am, to be sure, by no means alone in the assertion. However, my own basis for this conclusion differs from all the others I have come across. I therefore very much want to share my reasoning as to why expecting AGI to emerge from AI, generative or otherwise, is “a fool’s errand.”
Allow me to elaborate. Biological systems are characterized by several properties that necessarily distinguish them from man-made automata. By necessarily, I mean that these properties can arise only from a cellular substrate. This is not because living systems are “magical” – they follow the laws of physics just like everything else. Rather, they differ in their underlying design properties. Moreover, these design properties arise from within the nuclei of the cells that are the building blocks of biological tissue. In other words: no living cells, no AGI.
I am overwhelmingly convinced that two of these design properties are inextricably linked to necessary substrates of generalized intelligence. These properties are Adaptive Functioning and its close cousin, Agency. If I am correct that these properties arise from the deepest individual attributes of cells, then it follows that AGI based on computation is simply not on the horizon.
What are these two forces – Agency and Adaptive Functioning? These, and how they necessarily underlie generalized intelligence, will be the subject matter of my next post. For now, I want to focus on how these two mighty forces arose in the first place.
The biochemist Nick Lane lays out the origins of life on earth in his amazing book The Vital Question. He emphasizes how life on earth arose from what are best considered “historical accidents” involving the ways in which the elements carbon and hydrogen combined in the primordial environment of our planet, some 4 billion years ago. These unlikely circumstances gave rise to bacteria. Another singular such accident in turn gave rise to multicellular life forms some three billion years later. Only a few hundred million years after that, and voila, the earth saw the earliest of creatures that would go on to evolve nervous systems. As I see it, both agency and adaptive functioning – from which true intelligence arises – owe to the cascade of the events above and their repercussions.
Most importantly, at the core of it all lies DNA. The relationship between DNA and generalized intelligence, via the pathways of agency and adaptive functioning, is what makes generalized intelligence possible only for DNA-based entities. Given sufficient time, and another unlikely historical accident or two, DNA became the basis of “EarthNet” (as opposed to “SkyNet”). Thus humans, the species called Homo sapiens, are the true “Terminators” who took over the earth. And it is us, and not the descendants of “generative pre-trained transformers” (or GPTs), who are the prime candidates for wiping out the future of humanity.
In ending this post, I want to reiterate that the great majority of experts in both computer science and neuroscience respond to my ideas with what is essentially derision. “How can you make such a claim? You can’t predict the abilities of future AI systems!” Au contraire! If I am correct, one can predict that no matter how advanced a machine we create – what is, in essence, a toaster oven – it will not result in AGI.
I have kept this post succinct because it describes the very essence of the AI-to-AGI fallacy. In the posts that follow, I intend to pursue this thesis, descending through different layers of evidence, one level at a time, until I have fully laid out my conclusion. For today, the bottom line has been stated, and I hope succinctly: The design principles underlying biological systems – cellular architecture, DNA, ATP synthase, and others – which give rise to agency and adaptive functioning, are necessary precursors to AGI. And, after 50 years of hammering this all out, I will very likely maintain this conclusion until I expire.
Thanks again for reading!