The Next “AI Winter” Has Already Begun...
The Post-Post AI Apocalypse That Never Happened Will Come at a Cost of Hundreds of Billions of Dollars. We won’t see any of it. (Updated Jan. 29, 2024).
Welcome back to the Lamb_OS Substack! As always, I am thankful to have my subscribers and readers stop by. If you are a regular reader of @Lamb_OS, then you know that you - the readers - are the only reason I do this. So as always, thank you for visiting!
Let me begin as always by introducing myself: I am Dr. William A. Lambos, a neuroscientist, data scientist, and licensed clinical neuropsychologist. I write a lot about AI, but not exclusively; see my previous screeds on this Substack to learn more. If you want to learn more about me - including why I called this “@Lamb_OS” - click here.
Oh, and BTW, this Substack is still free! So, please subscribe to support me for whatever reason appeals to you. But I suggest you do so for the value it offers.
* * *
This is the third (and final) installment of a three-part series about the upcoming AI winter (there have been at least three to date, starting around 1970). As I’ve stated throughout this series of posts, I predict the next one is less than nine months away. I expect it will result in an enormous retrenchment, including a minimum loss of $500 billion in the market caps of the “Big Tech + AI” companies (in particular, four of the “Big 7” - more on this later).
I’ll make my full argument shortly. But do you want to hear something odd? When I started this series of posts, I wanted to find out whether I was alone in predicting an AI Winter arriving no later than this fall. In December, when I was writing Part I of the series, it seemed no one else was (and I was looking hard for like-minded souls). But since then? Just one month later, it has somehow become quite a popular topic - in the two weeks after Part I was published on December 30th, I found at least 25 new publications with eerily concordant content. I know who reads my posts, and they include several of Bloomberg’s tech columnists. In fact, with only a single exception, the same folks I cite as references throughout this post have been adding such content on a daily basis. “Such as who?” you ask. Just today (January 29th, 2024), Bloomberg’s “Tech Daily” newsletter posted a piece called “Google and Microsoft Are Under Pressure to Disclose AI Earnings,” written by columnist Jackie Davalos, in which she writes:
“Billions of dollars in investments, numerous chatbots and promises to embed generative AI into products have captivated investors and driven some tech stocks to record highs…But a year into the AI boom, tech companies have yet to show how the hype translates into profits.”
As self-aggrandizing as this must sound, it just seems too big a coincidence to be, well, a coincidence. I think I may have started something! And I’m thrilled - not because I lack self-esteem and need the support (even if both are true), but because I am too well-versed in this area (and have been for five decades) not to know when I’m having déjà vu all over again.
* * *
OK, enough bickering (at least with myself) over who started what. The fact remains that evidence has been accumulating for at least the last six months that companies invested in AI are headed for very tough times. And those among them who have been making claims about the incipient emergence of AGI - always the most embarrassing of AI’s epic fails - those who imbibed the Kool-Aid of the AGI Fallacy are headed for existential threats of their own, the very kind that have been all the rage. The remainder of humanity may still be at risk of extinction, but not from any robot uprising or emergent superintelligence.
What are the specific factors that I believe will drive the upcoming AI Winter? Here are just a few areas in which the stinky stuff will (continue to) hit the proverbial fan¹:
Copyright, fair use, and intellectual property infringement litigation. These lawsuits are already in progress, growing exponentially, and a threat to the content creators necessary to sustain industries in nearly every economic sector. Gary Marcus² has written extensively on this issue, as well as on related problems with Gen AI (itself a misnomer). From his recent post “Things are about to get a lot worse for Generative AI,” Gary writes:
“…the infringement problems are not easily fixed. They may well be unfixable…. Unless and until somebody can invent a new architecture that can reliably track provenance of generative text and/or generative images, infringement – often not at the request of the user — will continue. A good system should give the user a manifest of sources; current systems don’t.”
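Marcus’s “manifest of sources” is worth dwelling on, because it names a concrete engineering gap. To make the idea tangible, here is a purely hypothetical sketch - my own illustration in Python, not anyone’s real API - of the kind of per-output provenance record such a system would need to emit; every class, field, and threshold below is invented for illustration:

    # Hypothetical sketch only: no current generative system emits such a
    # manifest; every name and field here is invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class SourceAttribution:
        source_id: str    # e.g., a URL or training-corpus record identifier
        license: str      # license under which the source was obtained
        influence: float  # estimated contribution to the output, 0.0 to 1.0

    @dataclass
    class ProvenanceManifest:
        output_text: str
        attributions: list[SourceAttribution] = field(default_factory=list)

        def dominant_sources(self, threshold: float = 0.1) -> list[SourceAttribution]:
            """Return the sources estimated to have shaped this output the most."""
            return [a for a in self.attributions if a.influence >= threshold]

The data structure is the easy part. The hard part, as Marcus’s quote makes clear, is the architecture: today’s generative systems have no reliable way to compute those influence estimates in the first place, which is exactly why the infringement problem may be unfixable.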
Highly publicized and embarrassing business and government failures specific to AI/AGI deployments. Over the previous six to nine months, most attempted deployments of GPT/DSI-based systems (i.e., “Gen AI”) have produced little beyond embarrassing, botched rollouts. In the best cases, the rollouts have been scaled-down and narrowly specialized projects or applications, and even the successful ones remain extraordinarily expensive to keep running. And the deployments still in the final stages of testing and evaluation? As the Inc. Magazine article cited below makes clear, the best of these might offer 5 percent or so of increased productivity, while the enormous cost of running the data centers that power these applications poses yet another layer of threat. Few adopters are in any position to absorb the kinds of losses being shouldered by the progenitors of systems such as ChatGPT, DALL-E, Stable Diffusion, and GAN-based architectures. Smaller companies face development and operating costs that imply decade-long amortization periods; most will never break even (see the quote below).
It should therefore be no surprise that, after failing to show economic viability, the vast majority of such deployments are already being abandoned. Inc. Magazine’s “Microsoft, Cruise, and 4 More A.I. Business Fails From 2023: The hottest new technology of the year didn't boost everyone's business” puts it succinctly:
2023 was the year A.I. captured the imagination of entrepreneurs around the world. Since ChatGPT's public unveiling in November 2022, the business world has been racing to take advantage of generative A.I.'s ability to help companies rapidly scale for a fraction of the cost it would previously take to do so. But as with any gold rush, there are more losers in the A.I. race than winners. Let's take a look at some of the biggest A.I.-related business failures of 2023.
How does a unicorn go from a $4 billion valuation and nearly $1 billion in funding to dead in the water within two years? Just look at Olive A.I. The company was launched out of Columbus, Ohio, in 2012 by former NSA officer Sean Lane, who wanted to make communication easier between the myriad medical professionals who make up a patient's care team…Olive received $400 million in new funding and grew its employee count to over 1,400 people.
But just a year later in 2022, things were looking rocky. The company laid off 450 employees in July 2022, and an additional 215 employees were let go in February 2023. In April, the company began selling off its various business components and tools before finally shuttering in October (Emphasis added).
Thus, weighed against the reputational risks and economic losses, AI systems dating all the way back to 1958 have yet to produce a single example worth the requisite investment. Anyone know an actual business using IBM’s Watson? Neither do I. Companies that were - or might yet be - scammed into investing in such systems rightly feel they were sold a bill of goods, not an app or system they could successfully deploy. In last week’s “Reasons Why Generative AI Pilots Fail To Move Into Production,” Forbes writes:
“Executives in every large organization were convinced at the outset of 2023 that artificial intelligence (AI) and generative AI (gen AI) would have a big impact on how their companies operate. Organizations invested enormous resources into gen AI, expended a lot of energy thinking about how the technology could be best put to use, and initiated numerous pilots aligned to those use cases. Most of those Proof of Concept (POC) pilots – approximately 90% – will not move into production in the near future, and many will never move into production.”
The last of the reasons for the coming AI winter is my personal favorite: the unscalable wall between today’s crappy, derivative, and minimally useful GPT offerings and the claims that they are “a half step” away from achieving AGI. Bloomberg ran an article on January 25th titled “AI Companies Are Obsessed with AGI. No One Can Agree What Exactly It Is.” It makes the simple point that until people can agree on what defines human-level intelligence, no one can say whether AGI is possible, let alone when it might arrive. Better yet is the following from a guest essay in The New York Times, published in mid-2023 by Evgeny Morozov (author of “To Save Everything, Click Here: The Folly of Technological Solutionism”), who writes:
“After so many Uber- and Theranos-like traumas, we already know what to expect of an A.G.I. rollout. It will consist of two phases. First, the charm offensive of heavily subsidized services. Then the ugly retrenchment, with the overdependent users and agencies shouldering the costs of making them profitable.”
The first two debacles above should by themselves be enough to make potential customers and investors run away from “AI-based” business offerings for years to come. But even if people ignore those red flags - having succumbed either to the opiate of hype or to the greed that sustains their delusions - the third problem will be the spike through the heart of our current “GPT Revolution.” The death knell of the current hype cycle can already be heard if one listens carefully, and it will grow too loud to ignore over the next six months.
Ultimately, AI Winters recur for the same reason: the utter inability to scale the capabilities of whatever the period’s “AI du jour” happens to be - overpromised, underdelivered, and, most importantly, predicted to scale to AGI. These systems have yet to make good on any of their promises. As for the AGI Fallacy, there has not been a single example of anything other than spectacular failure to offer something - anything! - that approaches even a modicum of human intelligence. If readers want a better grasp of what such intelligence implies, I suggest they look at any of my AI-themed posts prior to this three-part “AI Winter” series. It’s all there.
For the current hype cycle, we are less than a year away from the coming implosion. So grab some popcorn, get comfy, and tune in to watch the promised limitless profits of Gen AI evaporate. Or, if you prefer, read along as the dire predictions of the existential threat of AGI - which of course will never come - are revisited, as somehow, inexplicably, the “experts” got AI and AGI all wrong yet again. And then you can either laugh or cry (depending on your investment portfolio) when these companies founder and report having lost staggering amounts of other people’s money.
Perhaps the saddest truth, however, is that the biggest AGI con artists will live to delude or scam another day. For the AI Winter of 2024, the damage will be absorbed by four of the “Big 7” tech companies: Microsoft, Google, Meta, and Tesla. They will very likely each lose hundreds of billions in market cap, but they will not go bankrupt, even if things get dicey for a while. As Mr. Marcus and I have both noted repeatedly, such companies have always made their money outside of AI/AGI. Most sell user data, like Meta and Google. OpenAI will be absorbed by Microsoft, which will still have software and cloud computing services to sell. For these companies, AI and its inevitable hubris of promised AGI will turn out to have been just another investment that didn’t pan out (the metaverse, anybody?). As for Tesla, both the implosion of X and the eventual rollback of FSD will hurt, but it will continue to earn at least a small profit developing and selling otherwise very decent automobiles. Oh, and if you think autonomous driving is coming anytime soon, please note that as of yesterday, when using FSD, my Model 3 still cannot drive itself as well as a drunken twelve-year-old.
I hope this series of posts has shown that, whatever umbrella they next come under, AI companies will never stop doing what they do now: make small advances in pattern recognition or statistical analytics; hype those minor advances as the most powerful tools ever crafted by humanity while their valuations temporarily skyrocket; and finally, inevitably, claim AGI is coming any minute now. And lest we forget, the next “new old thing” will pose existential threats to humanity that require raising many billions of dollars more!
Of course, it’s never more than five to seven years before they again blow up like a bad SPAC on Wall Street. The “lather, rinse, and repeat” cycle will never stop, and so the next, next AI Winter will begin, in earnest, again. What will the next, next generation of AI/AGI Zombies look like? In truth, I have no clue. History, however, dictates that as long as there is money to be made, delusions to propagate, investors to bleed, and a society sufficiently misinformed to be frightened by nonsense such as AGI, the cycle will continue. My advice: let’s see what the Zombie Masters come up with next! At least it will give me something to write about…
Thanks as always for reading and see you next week!
Bill
“Be kind, for everyone you meet is engaged in an enormous struggle.”
— Philo of Alexandria
¹ Those who want to hear more, and ideally who are fans of Jay-Z, should see Gary Marcus’s post “OpenAI’s Got 9.9 Problems, and Twitch Ain’t One” for more details.
² Gary Marcus and I have rather similar histories, and I hope he appreciates my work as much as I do his. To compare our modus operandi: I am the more dogmatic, willing to make what ‘Dr. Evil’ would describe as “outrageous assertions,” while Gary typically addresses the same topics in a more nuanced manner. I like making definitive and specific predictions, with stated time frames; Gary seems to agree with these, but his prognostications are comparatively toned down and avoid “use by” dates. Nonetheless, we often write about very similar subjects, and I hope our readers agree that we complement each other’s work in a healthy manner. I like to think I take the lead and Gary improves upon my takes - though I suspect Gary would call that an astute observation with the timing exactly backwards. I acknowledge that we could both be right.