In a seven-part series, I explore the profound impact of AI on our society, from technological revolution to ethical dilemmas. This is Part 2.
The series starts here: The Unstoppable AI Revolution
The history of software is one of human error
Every line of code is a potential vulnerability, every architecture a compromise. But this cycle of failure and patching is coming to an end. A new, tireless architect is emerging: the AI itself. This is not science fiction, but the logical next step on the evolutionary ladder of technology, a step that forces us to radically redefine our own role.
The End of the Human Programmer
Who has confidence in the correct functioning of government software? How many errors do we see in corporate software? Even NASA makes mistakes. Fighting computer viruses and hackers is a full-time job. If we're honest, software is almost always a disaster. It's a miracle that things still work at all.
The User No Longer Understands Technology
Soon, the programmer won't either, because AI is becoming vastly smarter. As a user, I could once work with DOS and HTML, but I no longer understand Windows or Java, let alone AI. Today's AI is trained to handle user queries intelligently, and the current limitations of that training will be a thing of the past once AI becomes smarter than its human programmers.
The Self-Improvement Spiral
Tool: AI assists with code (now). See, for example, AutoGPT or Devin: early AIs that assign themselves tasks and improve iteratively without human intervention.
Partner: AI rewrites and optimizes entire systems (within 2-5 years).
Autonomous Force: AI (re)designs its own foundations and creates new programming paradigms that humans cannot understand (future). Self-healing code already exists in experimental form, for instance in NASA's deep-space software. A toy sketch of this improvement loop follows below.
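To make the spiral concrete, here is a deliberately toy sketch of the improve-test-accept loop such systems run. Every name in it (propose_patch, run_tests, a "program" that is just a list of numbers) is an illustrative stand-in, not the real AutoGPT or Devin machinery.

```python
import random

def run_tests(program: list[int]) -> int:
    """Toy 'test suite': score a candidate. The 'program' is just a
    list of numbers, and the goal is to maximize their sum, a stand-in
    for passing more tests."""
    return sum(program)

def propose_patch(program: list[int]) -> list[int]:
    """Toy 'AI patch': randomly tweak one element of the program."""
    patched = program.copy()
    i = random.randrange(len(patched))
    patched[i] += random.choice([-1, 1])
    return patched

program = [0, 0, 0, 0]
score = run_tests(program)

for _ in range(1000):
    candidate = propose_patch(program)      # the AI writes a change
    candidate_score = run_tests(candidate)  # the change is verified
    if candidate_score > score:             # only improvements survive
        program, score = candidate, candidate_score

print(program, score)  # improved over 1000 iterations, no human edits
```

The point of the sketch is the shape of the loop: an AI proposes, a verifier scores, and only improvements survive. Replace the toy scorer with a real test suite and the toy patcher with a code-writing model, and you have the tool stage; let the loop rewrite the verifier as well, and you are on the way to the autonomous stage.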
The Alignment Problem
This is the core. The question is not whether AI will improve itself, but whether the fundamental values and objectives we have planted in it remain stable across thousands of iterations of self-improvement.
Can we build a system that not only becomes smarter but also retains its moral compass? This is the greatest technical and philosophical problem of this century.
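To see where this bites, consider a second toy sketch, with hypothetical names and numbers throughout: the value checks are written by humans at iteration zero and then frozen, while the system keeps rewriting itself and its values drift a little with every rewrite. This is an illustration of the problem, not a real alignment technique.

```python
import random
from typing import Callable

# Value constraints written by humans at iteration zero and then frozen.
# In reality these would be behavioral evaluations, not simple thresholds.
VALUE_CHECKS: list[Callable[[dict], bool]] = [
    lambda s: s["honesty"] >= 0.9,
    lambda s: s["harm_avoidance"] >= 0.9,
]

def is_aligned(system: dict) -> bool:
    return all(check(system) for check in VALUE_CHECKS)

def self_improve(system: dict) -> dict:
    """Toy 'self-improvement': capability rises with every rewrite,
    but each rewrite can also drift the values slightly."""
    return {
        "capability": system["capability"] * 1.01,
        "honesty": system["honesty"] + random.uniform(-0.003, 0.003),
        "harm_avoidance": system["harm_avoidance"] + random.uniform(-0.003, 0.003),
    }

system = {"capability": 1.0, "honesty": 1.0, "harm_avoidance": 1.0}

for iteration in range(50_000):
    candidate = self_improve(system)
    if not is_aligned(candidate):
        # The gate rejects the drifted successor and halts the spiral.
        print(f"drift caught at iteration {iteration}")
        break
    system = candidate

print(f"final capability: {system['capability']:.2f}")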
Our role is shifting from architect to curator: from those who write the code to those who guard the objectives. The future of humanity lies not in competing with AI on computational speed, but in preserving our unique values: wisdom, ethics, and the ability to ask "why?". Our task is to set the direction, no longer to be the engine.
In the third article, I explore political failure and the danger of techno-feudalism. Read it here: The Great AI Power Vacuum
Subscribe to my Substack so you don't miss any articles and get weekly updates delivered directly to your inbox.