For most of my career I have been an optimist about technology.
I have lived through nearly every major technology wave of the past forty years — the personal computer revolution, the rise of the internet, the telecom boom, smartphones, big data, and now artificial intelligence. In every case there were bold predictions and a great deal of hype. Some came true. Many did not. And once the technology arrived, people almost always found uses that no one had anticipated.
But artificial intelligence now feels different to me.
Over the past several months I have begun to feel something I have never felt before during a technology revolution: genuine concern about where this one may lead.
Several recent developments have made me stop and reconsider the trajectory of AI — and the implications for society.
My unease began with the unprecedented rate of adoption. Historically a new technology faces a “chasm” after its introduction: a period in which the market figures out how the technology will actually be used before mass adoption occurs. Geoffrey Moore famously described this in Crossing the Chasm (Moore, 1991).
But Generative AI has been different.
Since ChatGPT’s public release on November 30, 2022, adoption has been unprecedented:
- 1 million users within the first five days
- Over 100 million within the first two months
- By early 2026, estimates suggest hundreds of millions of users worldwide
According to UBS research, ChatGPT became the fastest growing consumer application in history (UBS Global Research, 2023).
There was no adoption pause. No chasm. Just continuous, accelerating growth.
I was an early and enthusiastic user. I believed strongly in the possibilities.
But recently my optimism has shifted toward concern.
Four developments changed my view.
1. The Possibility of Near-Term Superintelligence
At an AI conference in early 2026, OpenAI CEO Sam Altman suggested that by the end of this decade more of the world’s intellectual capacity could reside in data centers than outside of them.
In other words, the collective cognitive capability of AI systems could exceed that of humans across many domains.
Altman has repeatedly suggested that superintelligent systems may arrive sooner than most people expect, potentially within this decade.
In a widely circulated essay, he wrote:
“We are now confident we know how to build AGI as we have traditionally understood it.”
— Sam Altman, Reflections (2025)
He has also speculated publicly that AI could eventually outperform humans in leadership roles — even potentially doing a better job running major organizations.
The exact timeline is uncertain. Progress may slow. Breakthroughs may stall.
But if current trajectories continue, the transition could happen far sooner than society is prepared for.
2. The Coming Revolution in Robotics
A recent personal experience reinforced this concern.
While waiting several hours in pre-op before a medical procedure, I had time to observe the medical staff at work, including my highly skilled and dedicated surgical team, and to reflect on their jobs.
It struck me that robots will eventually perform many surgeries.
That idea might sound futuristic, but it is already partially happening.
Robotic-assisted surgery systems such as the da Vinci Surgical System are widely used today and have been deployed in millions of procedures worldwide (Intuitive Surgical, 2024).
But these systems still rely heavily on human surgeons.
The next phase will involve AI-powered autonomy.
For robots to perform complex tasks like surgery, they must learn the way humans do — through experience and sensory input.
Researchers are now building AI models that combine:
- Vision
- Language
- Physical interaction
- Reinforcement learning
- Real-world simulation environments
Examples include vision-language-action models and world models used to teach robots to interact with the physical world.
Companies like Tesla, Figure AI, Boston Dynamics, and Sanctuary AI are investing heavily in this next generation of robots that can independently act in and with the physical world.
NVIDIA CEO Jensen Huang recently described robotics as “the next multi-trillion-dollar AI industry” (NVIDIA GTC Conference, 2024).
If robots can perform surgery someday, then we must ask a bigger question.
What happens to other forms of physical work?
We already see white-collar jobs being transformed by AI.
If robotics reaches maturity, then eventually many forms of blue-collar labor may also be affected — from manufacturing to logistics to transportation.
3. The Expansion of Military AI
Another development that worries me is the increasing push to deploy AI in military systems.
Governments around the world are racing to integrate AI into defense technologies.
The U.S. Department of Defense has already launched initiatives such as Project Maven, which uses AI to analyze battlefield data and drone footage (U.S. Department of Defense, 2017).
Military leaders increasingly view AI as a strategic advantage.
Former Google CEO Eric Schmidt, who chaired the U.S. National Security Commission on Artificial Intelligence, warned that AI could reshape warfare:
“Artificial intelligence will transform every aspect of military operations.”
— NSCAI Final Report, 2021
But autonomous weapons systems also raise profound ethical concerns.
A world where machines participate in lethal decision-making — especially if escalation occurs faster than human decision cycles — carries risks we have never faced before.
4. Self-Improving AI Systems
Finally, there is a technical development that many people outside the AI field are not yet aware of.
Modern AI systems are beginning to demonstrate early forms of recursive improvement.
AI coding systems can now write large amounts of software with minimal human guidance.
As these tools improve, they may increasingly assist in building future generations of AI systems themselves.
Compounding the concern is something AI researchers openly acknowledge: even the companies building these systems do not fully understand how they work internally.
Anthropic CEO Dario Amodei has spoken openly about this challenge.
Large neural networks function as complex systems with billions or trillions of parameters.
Researchers can measure their outputs and train them effectively, but the internal reasoning process remains partially opaque.
Amodei has described interpretability — understanding exactly how these models think — as one of the most important unsolved problems in AI safety.
What I Fear May Be Coming
Taken together, these developments lead me to three concerns about the future.
1. Massive Displacement of Knowledge Workers
Large language models may increasingly perform work done by:
- Software developers
- Lawyers
- Accountants
- Researchers
- Writers
- Marketing professionals
- Teachers
- Corporate executives
These professions rely heavily on information processing — something AI systems are improving at rapidly. This is happening now.
2. The Rise of Robotic Labor
At the same time, robotics may begin replacing forms of work that involve physical interaction with the world.
This could affect:
- Manufacturing
- Warehousing
- Transportation
- Construction
- Logistics
- Law enforcement
And eventually even highly skilled professions such as surgeons, laboratory technicians, and specialized medical roles.
Some of these changes may improve safety and efficiency.
But the scale of economic disruption could be enormous. I predict we will see humanoid robots providing domestic help in homes by the end of 2027, autonomous surgical robots within five years, and robotic law-enforcement officers within five years.
3. Autonomous Weapons and Strategic Risk
My final concern involves the intersection of AI and military decision-making.
If autonomous systems begin making strategic or tactical decisions at machine speed — especially when nuclear weapons remain part of global arsenals — the potential consequences become existential.
Technology historically advances faster than governance.
And AI may be advancing faster than any technology before it. I expect to see humanoid robot soldiers on the battlefield within two years, and I fear a nuclear strike and retaliation carried out by autonomous weapon systems within two to five years.
Final Thoughts
Every major technological revolution reshapes society.
Most ultimately improved human living standards and created entirely new industries.
But artificial intelligence may represent something fundamentally different.
For the first time, humanity is creating systems that may not just rival but soon surpass our own cognitive capabilities.
The question is not whether AI will change the world.
The question is how quickly it will happen, and whether society is prepared for the consequences: mass job displacement and, possibly, nuclear strikes launched without human intervention.
Sources and References
UBS Global Research. (2023). ChatGPT fastest growing consumer application in history.
Moore, Geoffrey. (1991). Crossing the Chasm.
NSCAI Final Report. (2021). National Security Commission on Artificial Intelligence.
Intuitive Surgical. (2024). da Vinci Surgical System global usage statistics.
Altman, Sam. (2025). Reflections.
NVIDIA GTC Conference. (2024). Jensen Huang keynote on robotics and AI.