Welcome back to the Lamb_OS Substack! As always, I am thankful to have my subscribers and readers stop by! If you are a regular reader of @Lamb_OS, then you – the readers – know you are the only reason I do this. So as always, thank you for visiting!
Let me begin (and end) by introducing myself as Dr. William A. Lambos, neuroscientist, data scientist, and licensed neuropsychologist. I write a lot about AI, but not exclusively. See my previous screeds on this Substack to learn more.
Please Subscribe today if you have not already done so. This Substack is free! So, please subscribe for whatever reason appeals to you. But I’d hope you do so for the value it offers.
GEN-AI PROFIT$ ARE RIGHT AROUND THE CORNER... ANY DAY NOW!
Unless you are new to Lamb_OS (in which case, Welcome! We’ll follow up with you below), you already know that around here, we’re deeply skeptical of the state of Artificial Intelligence products and services today, in Q1 2024. In fact, I have gone quite far out on a limb in predicting that this year, in 2024, GPT models will prove utterly nonviable, even for “recreational use”; the stock valuations of companies associated with Generative AI will plummet; and the next “AI Winter” will begin. So here we are again. And as before, there are two levels of zombie ideas by which your brain can be consumed. The first level is believing the hype that Gen AI is the next Internet. When there’s this much hype, something must be happening, right? True, if we include the collapse of the Gen AI models and their ungodly costs. That’ll show the Magnificent 7 tech companies!
The second way one’s brain can be eaten in the GPT space is by getting suckered. In such cases, only the buy side experiences the bloodletting; the sellers are frauds.
Either way, like Tolstoy’s unhappy families, GPT businesses will fail in different and idiosyncratic ways. My advice is to be especially cautious about “General Purpose” or “Multi-modal” GPTs, more so than about specialized or regulated-use systems. Be skeptical about poor performance, worried about high-cost errors, more worried about unsustainable power requirements, more worried yet about nefarious (intentionally harmful) use, seriously worried about social harm, and most worried about the business model.
MOST IMPORTANTLY, recognize that the letters ‘A-G-I’ are radioactive. They should be considered the “VOLDEMORT” phrase of the AI marketplace. The sooner a company says anything at all about AGI, the higher the chances that it is (a) being run by imbeciles, or (b) a deliberate scam. Listen…can you hear “AGI…is coming”? It’s the sound of people about to get a dozen eggs in their faces (each). People who should know better. The “AGI Fallacy” remains the ultimate fool’s errand in automata studies.
Now consider the mania at the center of the current “AI Value Explosion”: the marketplace frenzy over GPTs. I had planned to take a somewhat deeper dive than usual to explain how these models work, but that will have to wait. My aim was to show how, when, and why GPTs fail (and they do, all the time). And when GPTs fail - and they will fail spectacularly - then comes the next AI Winter.
To summarize the biggest problems facing the deployment of systems like GPT-4: these models are unsustainable. Here are the main problems, none of them solvable:
They are buckling under their own weight. These models require more training data than exists in the known universe.
Then there are the problems of copyright violation and of paying creators for the content these models hoover up.
The data sets on which these models were trained are undisclosed. There is no way to examine the training data.
Seriously erroneous outputs (answers, work product, recommendations).
If humans are stupid enough to hook these idiot boxes up to actuators, then they will be able to generate events “IRL.” Some of these will harm and/or kill people.
No one has yet shown positive cash flow from training and deploying these ventriloquists and starving-artist painters.
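The data-hunger claim above can be sanity-checked with a back-of-envelope sketch. The numbers here are rough, illustrative estimates I am supplying, not figures from this article: a Chinchilla-style ratio of roughly 20 training tokens per model parameter, and a generous pool of about 30 trillion tokens of usable public text.

```python
# Back-of-envelope check of the "running out of training data" claim.
# Both constants are rough, hypothetical estimates for illustration only.

TOKENS_PER_PARAM = 20        # approximate compute-optimal tokens-per-parameter ratio
PUBLIC_TEXT_TOKENS = 30e12   # generous estimate of usable, deduplicated public text

def tokens_needed(params: float) -> float:
    """Tokens a compute-optimal training run would want for a model this size."""
    return params * TOKENS_PER_PARAM

# Compare demand against the total pool for 1B-, 100B-, and 10T-parameter models.
for params in (1e9, 1e11, 1e13):
    need = tokens_needed(params)
    print(f"{params:.0e} params -> {need:.0e} tokens "
          f"({need / PUBLIC_TEXT_TOKENS:.1%} of the public text pool)")
```

Under these assumptions, a 10-trillion-parameter model would want several times more text than the entire public pool contains, which is the point of the bullet above: scaling up eventually demands more data than exists.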
Unfortunately, as surely as the sky is blue (unless it’s an eclipse), we are collectively and/or individually stupid enough to make such hook-ups sooner rather than later. Being human does not guarantee intelligent outputs. But unlike these imbecilic pattern-matching programs, human behaviors are carefully regulated and enforced, at least to the best of a society’s intent and ability. GPTs are already unleashed, trashing productivity and the environment and creating negative cash flows that grow exponentially.
So given the very big issues listed above, and given that even the typically most ebullient investors are starting to talk about an “AI Bubble,” what can we expect on the business side of AI→GPT→AGI→💣? Yes, GPTs will blow up AI, and, as every time before, the field will go bust once again.
Why Does AI Always Fail?
This is a simple question with a simple answer. I’m going to start with what I consider the best quote about AI out there now, courtesy of Cade Metz and The New York Times:
“Though the field’s founding fathers thought the path to re-creating the brain would be a short one, it turned out to be very long. Their original sin was that they called their field “artificial intelligence.” This gave decades of onlookers the impression that scientists were on the verge of re-creating the powers of the brain when, in reality, they were not.” (Emphasis mine).
It’s interesting to realize that highly complex automated “decision systems” - the complex algorithmic control systems now all but “in charge” of industrial facilities (including nuclear reactors), commercial aircraft autopilot systems, and other such technologies - have been around for decades. But since they weren’t called Artificial Intelligence, nobody fooled themselves into thinking they were creating anything other than a new service or gadget.
As frustrating as it may be, we should strenuously avoid using the term “Artificial Intelligence” in any capacity. Think about how people would view GPTs if they were not sold with the illusion that human-like intelligence is on the way - next year. If we instead described these models as “machine learning tools,” we might be able to establish and grow a new field, “automata studies.”
But once you commit the original sin, your future is doomed.
Summary
GPTs are fascinating. It will be quite interesting to follow the status of these transformer models and other deep-learning approaches, especially after the coming market retraction and correction (the “AI winter”).
It never ceases to astonish me that otherwise very bright people can’t see (or refuse to see) something so obvious: that language - the name we call something - can exert an enormous effect on what we do with it and what we expect of it, and can even convince us to invest great sums of money in something that has never made a penny in 70+ years.
Good luck!!!
Bill Lambos