In the late 90s, a young medical physicist at the Mayo Clinic faced a problem. He was trying to use a computer to understand the human body. But the tools of the time were siloed, expensive, and rigid. So, he did what any tinkerer would do: he decided to build his own tools.
That physicist is Travis Oliphant, the creator of NumPy and SciPy: the mathematical bedrock on which modern data science and, by extension, modern AI is built. If you have looked at a weather forecast, traded a stock, or interacted with an LLM, you have walked across a bridge that Travis Oliphant helped build.
Oliphant argues that we are at a crossroads. We have to choose between renting our intelligence and owning it. Currently, most of us interact with AI through “black boxes”: centralized models owned by a few massive tech giants. We send them our private thoughts and proprietary data every day. We trade our privacy for convenience.
Oliphant warns that this “convenience” is a trap. He calls it the “One Moon Fallacy”: we are being told that AI is like a single moon that everyone has to travel to, and that only three or four big companies can build the rocket to get there. That is simply not true.
AI is just math. It is a recipe that has been shared by scientists for decades. You don’t need to rely on a giant tech company to use it.
For Oliphant, the solution is open source. He breaks the AI stack into three distinct layers that any modern leader needs to understand:
1. The Code: The instructions that run the engine.
2. The Weights: The “intelligence” or patterns learned during training.
3. The Data: The raw fuel that informs the weights.
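The three layers above can be made concrete with a toy sketch. This is not any particular model, just an illustration of how the pieces separate: the code is a few lines of published math, the weights are arrays of numbers (here random stand-ins for what an open-weights model would let you download), and the data is whatever you feed in.

```python
import numpy as np

# 1. The Code: a generic forward pass. Nothing here is secret;
# the math (matrix multiply plus a nonlinearity) has been
# published and shared by scientists for decades.
def forward(weights, biases, x):
    for W, b in zip(weights, biases):
        x = np.maximum(W @ x + b, 0.0)  # linear map + ReLU
    return x

# 2. The Weights: the "intelligence" learned during training.
# These are random stand-ins; with an open-weights model you
# download the real arrays and run them on your own machine.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]

# 3. The Data: the input you supply -- and, during training,
# the raw fuel that shaped the weights in the first place.
x = np.array([1.0, 0.5, -0.2])

print(forward(weights, biases, x))
```

Owning all three layers means you can run this loop entirely on hardware you control; renting means the weights, and often your data, live behind someone else's API.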
The danger isn’t that AI will become “too smart.” The danger is that it will become too concentrated. If only three companies in the world hold the weights for global decision-making, the tool becomes a weapon for undue, mass influence.
Oliphant refuses to be scared. He remains a stubborn optimist. But his optimism is conditional. It depends on a radical shift in how we educate ourselves. We must learn new skills.
If AI handles the boring stuff, the 10,000-row spreadsheets and the boilerplate code, what is left for humans? Oliphant proposes three things we need to teach.
Oliphant believes in a future where AI acts as a translator, and therefore a facilitator of deeper understanding between fields and people. A rocket scientist and a sales executive will be able to understand each other better and collaborate more effectively. This is how AI will increase human connection: by bridging the gap between the language of expertise and genuine understanding.
Thousands of workers are being let go by tech giants under the banner of “AI efficiency”.
Companies hired too many people during the pandemic. Now, they are cleaning house. Blaming AI is just a convenient excuse. The real danger isn’t job loss today; it is the “Mentorship Gap.”
If a senior engineer uses AI to do the work of three junior engineers, who trains the next generation?
We need to create new ways for young people to learn, or the pipeline of talent will dry up.
The one piece of advice Oliphant would give:
The most powerful agent isn’t AI. It’s you. Ask questions. Own your data. Own your future.