
Debate Two: Which approach to AI development will prevail?
In reading radiology scans, AI generates too many false positives, while humans generate too many false negatives. The best results come from well-trained humans complemented by AI.
Altman’s Law states that the intelligence of a model roughly equals the log of the resources required to train and run it. Translation: model accuracy and performance improve with exposure to more data and compute, but each additional increment of intelligence requires exponentially more resources. Massive amounts of data can serve as a substitute for “intelligence.”
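The logarithmic relationship is easy to see with a toy calculation. The sketch below is purely illustrative; the scaling constant and the resource units are hypothetical choices, since Altman’s Law is a rough empirical claim rather than a precise formula.

```python
import math

# Illustrative sketch of Altman's Law: intelligence ~ log(resources).
# The scaling constant K and the resource units are hypothetical;
# the "law" is a rough empirical claim, not a precise formula.
K = 1.0

def model_intelligence(resources: float) -> float:
    """Intelligence score implied by Altman's Law: I ≈ K * log10(resources)."""
    return K * math.log10(resources)

# Each 10x increase in resources buys only a constant increment of "intelligence":
for r in (1e3, 1e4, 1e5, 1e6):
    print(f"resources = {r:.0e} -> intelligence ≈ {model_intelligence(r):.1f}")
# Prints 3.0, 4.0, 5.0, 6.0: linear gains demand exponential resource growth.
```

The takeaway: linear gains in “intelligence” demand exponential growth in data and compute, which is why each model generation is dramatically more expensive than the last.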
Humans are flawed and biased. Human intelligence is plagued by cognitive distortions. The “Scaling Camp,” led by OpenAI, advocates strict adherence to Altman’s Law to develop a form of “alternative intelligence.” Fundamentally, this approach constructs models adept at recognizing patterns. To date, the Scaling Approach has produced models that falter on toxicity, truthfulness, reasoning, and common sense. The models are unable to make sense of situations that deviate even slightly from the training data or the creators’ assumptions.
The rival camp, led by Google (i.e., the “Structured Model Approach”), criticizes the Scaling Camp for creating models that “repeat everything but understand nothing.” Their approach rejects Altman’s Law and asserts that developing better AI is not a matter of data volume. The Structured Model camp believes engineers should teach machines to learn as humans do.
Debate Three: Is the technology/AI sector a classic financial market bubble?
Asset bubbles form when investors chase performance, lose discipline, and abandon “rules-based” decision making.
Investors have recently become more selective. The Google camp has performed better than the OpenAI camp. The divergence represents an important change from the past three years, when everything AI-related rose together. It could reflect “proof of concept”: Google’s Gemini has outperformed ChatGPT in several technical areas.
Concerns regarding massive capital expenditures could also explain the divergence. To satisfy projected data and power demands, OpenAI would need an incremental 30 gigawatts (GW) of capacity by 2030. Each GW requires roughly $50B in incremental spending, implying about $1.5T of additional capital outlays. GPT-5 consumes 2.5 times as much power as GPT-4. The AI build-out will require $2T in annual industry revenues just to cover the cost of power.
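A back-of-the-envelope check on those figures, using only the round numbers cited above:

```python
# Back-of-the-envelope check on the capital-expenditure figures cited above.
# All inputs are the article's round numbers, not independent estimates.
incremental_gw = 30      # incremental capacity OpenAI is projected to need by 2030
capex_per_gw = 50e9      # ~$50B of incremental spending per gigawatt

total_capex = incremental_gw * capex_per_gw
print(f"Implied incremental build-out: ${total_capex / 1e12:.1f} trillion")  # $1.5 trillion

# The article's separate, industry-wide estimate: ~$2T of annual revenue
# needed just to cover the cost of power for the AI build-out.
```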
These data points raise fundamental concerns regarding an AI bubble.
At the initiation of a bubble, there is always a promising innovation or new paradigm. External capital fuels rapid growth in the speculative phase. Gradually, “herd behavior becomes self-reinforcing.” Buyer enthusiasm propels valuations well beyond reasonable levels. Skeptics are ignored, ridiculed, and dismissed.
Late entrants jump on the bandwagon out of fear of missing the “next big thing.” The perceived risk of not investing overwhelms prudent considerations. Excessive investment leads to excess capacity. Valuations exceed reasonable or realistic parameters.
A bubble bursts because of either “endogenous” (i.e., internal) or “exogenous” (i.e., external) forces. The sector becomes vulnerable to a “phase shift” – i.e., a small incremental change that has a massive impact.
Upon bursting, the virtuous cycle turns vicious. Valuations collapse. Investors lose capital. Borrowers default on loans. Skeptics are vindicated. The media disavow their role in hyping the bubble and hunt for people to blame. Eventually, in the final stage – i.e., applications – enduring, sustainable value is created.
The future of AI lacks visibility, and most predictions will prove invalid. When the dust settles in a few years, the impact of AI is likely to be somewhere between the most optimistic and most pessimistic forecasts.