
AI from the C-Suite
by Joe Merrill, CEO, OpenTeams
Executives are told that larger AI models mean smarter AI. The data say otherwise. There is a point where adding more information makes an AI system less reliable, not more. Past a certain threshold, every extra gigabyte of data introduces noise, bias, and spurious correlations that look like intelligence but erode precision.
When models pass that “optimal inference point,” they begin to mirror our collective biases or hallucinate instead of producing new insight or an accurate answer.
That means leaders should think twice before trusting today’s giant language models with critical decisions. These systems are designed to reproduce patterns, not discover truth. They excel at writing summaries and predicting the next word, but their vast memory turns against them when you need reasoning that is accurate, novel, or strategic.
Every correct insight requires a minimum amount of information to infer it. Give a model less than that, and it guesses poorly. Give it more than that, and it starts finding relationships that don’t exist.
Think of a stock analyst who has perfect data from ten companies. Add in every rumor from the internet, and her accuracy drops. The same happens inside a neural network. The tensors (the mathematical grids of weights that store what a model “knows”) fill with connections between words, ideas, and signals that are statistically related but conceptually false.
At small scales, more data means better generalization. At massive scales, it creates statistical clutter that buries cause under correlation. The large model becomes confident but shallow, authoritative but brittle. It will tell you what most people think, or hallucinate something highly correlated but not causal. It will not tell you what is correct with a high degree of fidelity.
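The analyst thought experiment above can be simulated directly. Here is a minimal sketch in Python with NumPy (the sample sizes and feature counts are illustrative assumptions, not measurements from any real model): with a fixed number of observations, the strongest purely spurious correlation in the data grows as more irrelevant variables are added.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 30  # a small, fixed number of observations, like the ten-company analyst

# A target series that, by construction, has no real relationship to anything.
y = rng.standard_normal(n_samples)

def max_spurious_correlation(n_features: int) -> float:
    """Strongest |correlation| between y and n_features columns of pure noise."""
    X = rng.standard_normal((n_samples, n_features))
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    return max(corrs)

# More irrelevant inputs -> a stronger "relationship" that does not actually exist.
for k in (10, 100, 1000):
    print(f"{k:>5} noise features -> strongest spurious correlation: "
          f"{max_spurious_correlation(k):.2f}")
```

With only 30 observations, a thousand unrelated variables will almost always contain one that correlates with the target at roughly 0.5 or higher purely by chance. Nothing causal has been discovered; the dataset simply got big enough to guarantee a coincidence.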
The biggest models, those trained on nearly all public text, are not discovering patterns. They are recycling consensus. Training them on everything ever written doesn’t make them omniscient; it makes them overfit to human bias.
To make them useful again, companies layer tools on top: retrieval engines, guardrails, and alignment filters. Those tools keep them safe, but also force them to follow existing orthodoxy. The model can only echo what the world already believes. It cannot surprise you with a new idea or optimize anything beyond average.
That is why these systems struggle with innovation, causation, and high-stakes reasoning. They are tuned to imitate, not to think.
For leaders, the message is clear:
1. Don’t use LLMs for judgment calls. They are superb assistants for drafting, summarizing, and brainstorming, but not for decisions that require factual certainty or causal understanding.
2. Size isn’t a proxy for intelligence. The best model for your organization may be smaller, specialized, and trained only on data relevant to your domain.
In short: the era of “bigger is better” in AI is ending. The next frontier is precision, building models that know enough, but not too much. The future is applied AI inside the company, close to subject matter experts who know how to optimize data for training.
The statistical models that underpin all modern AI contain a fundamental truth: every insight has an optimal information load. Beyond that load, spurious correlation and bias grow faster than truth.
This is why AI must adjust so that non-AI experts, whose domain knowledge is essential for judging the optimal training load, can build high-fidelity, insightful models easily.
Deciding what data trains a model will define the next generation of AI strategy and determine which companies truly lead in the age of intelligent systems. Putting your AI into your subject matter experts’ hands is your best strategy.
Researchers have been mapping this problem for years. A few key ideas connect directly to the argument above.