The 2028 Doomer Crisis
The AI zeitgeist continues: a new viral essay explores the dark possibilities of AI and its effect on the world as we know it.
Truly, it’s as if more and more people are waking up to the Rock Rebellion, albeit in a way that remains fixed in the status quo, reactive rather than proactive, as if the world as we know it is foundational to life as we know it. But the world as we know it isn’t that old.
The new piece is “The 2028 Global Intelligence Crisis” by Alap Shah and Citrini. Though the preface states that the piece is “a scenario, not a prediction,” it has been credited with the market dip on February 23. I wasn’t even aware of the correlation between the piece and the dip until I saw a short by Kyla Scanlon. She alluded to inconsistencies in the piece (the Fed and other institutions would react, she says), yet it’s striking to me nonetheless. The piece is written from the perspective of two years in the future, mid-2028, and contains hard numbers that paint a bleak macroeconomic picture driven by short-term gains in AI. This makes the scenario salient, though naturally one must question its merits. Granted, the authors’ perspective on the financial implications of AI is beyond the scope of my knowledge; my reasoning is confined to what I understand of the fundamentals underlying the technology and its theoretical limits in terms of value generation. My initial impression is that the piece is “doomer” rhetoric, in spite of its “not a prediction” warning.
As I stated in my recent piece responding to “Something Big is Happening” by Matt Shumer, as well as to the Super Bowl AI ads, the Rock Rebellion is happening, and we must look past the bullshit to the nuance if we hope to understand how to adapt to the new world. Fundamentally, the promise of the LLM paradigm is limited; there can be no artificial general intelligence (AGI) until we establish a scientific theory of consciousness. I believe this with firm conviction. Yann LeCun, the prominent AI pioneer who publicly quit his position as Chief AI Scientist at Meta over disagreements with Zuckerberg, is right to assert that LLMs are a dead end in the pursuit of AGI. Without a doubt, LLMs and LLM-based AI agents are cutting down on the tedious tasks of white-collar work, threatening a significant fraction of jobs in the economy. But we are still in the paradigm established in 2012 with AlexNet, wherein GPUs enabled neural networks to realize deep learning at a scale necessary for practical application. Now the name of the game is scaling, leading to rapid buildout of compute infrastructure in datacenters with notable externalities. The fact that this piece, which assumes AGI within two years, is credited with market volatility underscores the waning AI hype I previously wrote about.
While Alap Shah brands himself on Twitter as an “Optimistic AI realist,” the realest reality check comes from his piece’s top commenter, “Peter.” Peter offers the perspective of someone within the tech industry, working in cloud and AI. While critical of Shah’s and Citrini’s piece, Peter acknowledges that there can be real economic impact, particularly among SaaS businesses facing competition from AI-accelerated in-house solutions: “beliefs around the technology that don’t really require the technology’s assistance to have very real economic impacts.” I think what he is talking about is the gap between the tangible deliverables that LLM-based technology can offer and the promises that business leaders make. If I ran a company that relied on a SaaS provider, I could theoretically build an in-house alternative using LLM agents; even if I didn’t, I could use that possibility to negotiate a more amenable contract with my SaaS provider. As more companies do this, SaaS margins shrink, and with them a sizable chunk of the economy. Peter suggests that it’s less about whether the “vibe coded” application outperforms SaaS and more about whether the application can dig into the valuation of the SaaS provider.
I actually think SaaS is long overdue for a correction. Now, I have no experience with Salesforce and little experience with SaaS in general, so take this with a grain of salt. I have long held the conviction that software is a high-margin, low-value industry. Obviously, software has become quite advanced and capable; however, the sky-high valuation of software providers is like a house of cards built on an unstable foundation. Just look at chat technology as an example. Back in my day, we had software like AIM, ICQ, and MSN Messenger. Then we got Slack and, more recently, Discord. Even if there were improvements in the engineering and infrastructure, the value gain was marginal from an end-user perspective. Features stacked upon features without changing the underlying paradigm. It’s a recurring pattern in software: successful adoption is based more on trends than on fundamentals. So while SaaS witnessed explosive growth over the past decade, AI can lower the bar for a viable service and undercut the very vendor dependence on which that unsustainable growth was built.
At the risk of sounding like a broken record, I think it’s important to remember that Shah and Citrini are finance bros, not tech bros. This is evident from their piece, which uses hard numbers to convincingly sketch a financial future where “AI bullishness” is “actually bearish.” Money drives decisions, which sucks, because people who understand money and business don’t always grasp the nuances of technology. Their myopia got us into this mess, where SaaS companies are valued obscenely high even though they offer nothing new. Business leaders need to understand technology better so they can ask the hard-hitting, fundamental questions. It can’t just be a numbers game. We can’t throw money at software companies that promise wide margins without offering something substantially innovative. That’s what creates bubbles, not infrastructure.