Three Relevant Debates About AI

By Ray Ryan, CFA

 

Ray Ryan is the president of Patten and Patten, an investment management firm and registered investment adviser in Chattanooga. Ray is a CFA charterholder, a member of the advisory board for UTC’s College of Business, and an adjunct professor of finance at UTC. He is a graduate of Princeton University, where he had the privilege of taking a course taught by former Federal Reserve Chairman Ben Bernanke.

Technological innovations often produce “step function” increases in capacity. Thomas Kuhn, the well-known historian and philosopher of science, argued that transformative innovation requires a paradigm shift, and a paradigm shift requires a revolution in thought.

That is the appropriate context for evaluating prognostications about artificial intelligence (AI), framed here as three debates likely to be settled in the next decade.

 

1. Revolutionary innovations disrupt platforms and systems; evolutionary technologies merely extend existing capabilities. Which best describes AI?

2. There are two rival “camps” with different approaches to building large language models (LLMs). One camp argues that AI improves only with more data. The rival camp argues that AI improves only through better model engineering. Which camp is likely to prevail?

3. The telecom/dot-com bubble reflected extreme investor mania. Herd mentality contributed to massive excess capacity that required 15 years to absorb. AI infrastructure costs to date dwarf the amounts spent to build out the internet. Is AI a bubble, and will it experience a similar boom-bust cycle?

Debate One: Is AI evolutionary or revolutionary?

It is too early to tell.

New technologies generate both fear of disruption and excitement about future applications. Amara’s Law holds that society tends to overestimate a technology’s short-term impact and underestimate its long-term impact. The ultimate impact of AI will reflect how it is used. For example, LLMs could become “the authority,” with humans relegated to a subordinate role as “interpretive authorities.” That outcome is assured if people merely accept ChatGPT responses without thinking critically.

History supports the evolutionary case. A rudimentary form of AI helped the Allies win World War II, and its progression since has been gradual. Today, AI is everywhere but often invisible: search algorithms, chatbots, call centers, and automated functions.

To qualify as revolutionary, AI must lead to regime shifts. Regime shifts involve transformative innovations. Transformative innovations require infrastructure on which to erect technology platforms. Platforms facilitate scale, and scale boosts adoption. Adoption, in turn, depends on applications for end users.

For most use cases, AI will complement human activity. AI could enhance human intelligence, improve efficiency, and reduce overall errors. In other cases, Agentic AI makes decisions without human input and could prove more efficient for mundane tasks.


Debate Two: Which approach to AI development will prevail?

In radiology, AI generates too many false positives while humans generate too many false negatives; the best results come from well-trained humans complemented by AI.

Altman’s Law states that the intelligence of a model roughly equals the log of the resources required to train and run it. Translation: model accuracy and performance improve through exposure to more data. Massive amounts of data can serve as a substitute for “intelligence.”
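The logarithmic relationship can be sketched in a few lines of Python. This is a toy illustration with an arbitrary scaling constant, not OpenAI’s actual scaling curve; the function name and numbers are assumptions for the sake of the example:

```python
import math

def model_intelligence(resources: float, k: float = 1.0) -> float:
    """Toy form of 'Altman's Law': a model's intelligence grows with
    the log of the resources (data and compute) used to train and run
    it. k is an arbitrary scaling constant."""
    return k * math.log10(resources)

# Each 10x increase in resources buys only the same fixed gain:
for r in (1e3, 1e4, 1e5, 1e6):
    print(f"resources = {r:>9,.0f}  intelligence = {model_intelligence(r):.1f}")
```

The implication of a log curve is that linear gains in capability demand exponential growth in spending, which foreshadows the capital-expenditure concerns raised in Debate Three.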

Humans are flawed and biased, and human intelligence is plagued by cognitive distortions. The “Scaling Camp,” led by OpenAI, advocates strict adherence to Altman’s Law to develop a form of “alternative intelligence.” Fundamentally, this approach constructs models adept at recognizing patterns. To date, the scaling approach has produced models that falter on toxicity, truthfulness, reasoning, and common sense; they are unable to make sense of situations that deviate even slightly from their training data or from their creators’ assumptions.

The rival camp, led by Google (i.e., the “Structured Model Approach”), criticizes the Scaling Camp for creating models that “repeat everything but understand nothing.” Their approach rejects Altman’s Law and asserts that developing better AI is not a matter of data volume. The Structured Model camp believes engineers should teach machines to learn as humans do.

Debate Three: Is the technology/AI sector a classic financial market bubble?

Asset bubbles form when investors chase performance, lose discipline, and abandon “rules-based” decision making.

Investors have recently become more selective. The Google camp has outperformed the OpenAI camp, an important change from the past three years, when everything AI-related rose. The divergence could reflect “proof of concept”: Google’s Gemini has performed better than ChatGPT in several technical areas.

Concerns regarding massive capital expenditures could also explain the divergence. To satisfy projected data and power demands, OpenAI would need an incremental 30 gigawatts (GW) of capacity by 2030, and each gigawatt requires roughly $50 billion in incremental spending. GPT-5 consumes 2.5 times more power than GPT-4. The AI build-out will require $2 trillion in annual industry revenues just to cover the cost of power.
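The capital-expenditure arithmetic implied by those figures is easy to check. The sketch below uses only the numbers cited above; the underlying figures are projections, not verified costs:

```python
# Back-of-the-envelope capex check using the figures cited in the text.
incremental_gw = 30        # projected incremental capacity OpenAI needs by 2030
cost_per_gw = 50e9         # ~$50 billion of incremental spending per gigawatt

total_capex = incremental_gw * cost_per_gw

print(f"Implied incremental build-out: ${total_capex / 1e12:.1f} trillion")
# Implied incremental build-out: $1.5 trillion
```

A $1.5 trillion build-out for a single firm, against an industry needing $2 trillion a year in revenue just to cover power, is the scale driving the bubble question.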

These data points raise fundamental concerns regarding an AI bubble.

At initiation of a bubble, there is always a promising innovation or new paradigm. External capital fuels rapid growth in the speculative phase. Gradually, “herd behavior becomes self-reinforcing.” Buyer enthusiasm propels valuations well beyond reasonable levels. Skeptics are ignored, ridiculed, and dismissed.

Late entrants jump on the bandwagon out of fear of missing the “next big thing.” The perceived risk of not investing overwhelms prudent considerations. Excessive investment leads to excess capacity. Valuations exceed reasonable or realistic parameters.

A bubble bursts because of either “endogenous” (i.e., internal) or “exogenous” (i.e., external) forces. The sector becomes vulnerable to a “phase shift” – i.e., a small incremental change that has a massive impact.

Upon bursting, the virtuous cycle turns vicious. Valuations collapse. Investors lose capital. Borrowers default on loans. Skeptics are vindicated. The media disavow their role hyping the bubble and hunt for people to blame. Eventually, in the final stage – i.e., applications – enduring, sustainable value is created.

The future of AI lacks visibility, and most predictions will prove invalid. When the dust settles in a few years, the impact of AI is likely to be somewhere between the most optimistic and most pessimistic forecasts.
