Is there such a thing as too much automation?
At a family wedding the other weekend, I fell into conversation with a relative who has several decades of experience in the aerospace industry. He bemoaned a growing problem among the younger engineers who work for him. It seems that some of these highly-paid professionals have not developed the ability to look at a finished piece of work and say - “That doesn’t seem right” - because they rely on their advanced computer systems to do the validation. When the computer makes a mistake, they do not have the breadth of experience to realize it.

This point resonated with me for the simple reason that I experience it every day. I’m a professional automator - I automate software processes for a living - and I spend a lot of time inside the Amazon Web Services cloud. AWS handles the compute, storage and networking details for me so I can focus on higher-level tasks, which is both nice and worrisome. Nice because I can get more done in less time, worrisome because I don’t get the opportunity to grapple with the implementation details of server and network virtualization. I understand those things on a theoretical level, but I don’t get to play with them much, and this sometimes hampers my grasp of what’s really going on beneath all the automation.
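For a concrete taste of what that abstraction feels like, here’s a minimal boto3 sketch - the AMI, subnet and region below are placeholder values, nothing real - showing how one call stands in for a mountain of infrastructure work:

```python
# A minimal sketch, not production code: one high-level API call
# provisions a server. All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder network placement
)

# Behind this one call: hypervisor scheduling, block storage allocation,
# virtual NIC attachment, routing, firewall rules - none of which I ever
# have to touch, or learn.
print(response["Instances"][0]["InstanceId"])
```

Everything interesting happens below that call, and the whole point of the service is that I never see it.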
I worry that my generation of young STEM professionals - scientists, engineers, technologists of all stripes - is not getting the opportunity to build the hands-on experience we need in our fields because the last generation’s “boring stuff” has been automated away. I worry that we are less effective and less prepared for unexpected circumstances because of this trend.
Then the new generation needs more rigorous training! you may say. But I think automation always has an atrophying effect on the human ability to perform the task it replaces. Tesla and other makers of driver-assistance systems are struggling with the implications of this phenomenon, and it’s at least as old as cruise control: when an automated system is running part of the show, human attention inevitably drifts.
Then make the automation perfect! you may say. Or so close to perfect that only a handful of highly-trained professionals are ever needed to keep an eye on the infrastructure. Modern automation is so advanced - think self-driving cars - that we actually refer to it with the mystical-sounding buzzword “artificial intelligence.” With AI at the controls, why should future generations ever have to learn how to do repetitive, uninteresting work? Let’s focus, you may say, on doing the things we haven’t figured out how to automate: pushing boundaries, inventing new processes, having creative epiphanies at the cutting edge of human endeavor.
But if and when we reach the point where human workers are only supplying new creative ideas and automated AI systems are carrying out the grunt work, we may soon find ourselves in a pretty dire predicament.
The bicycle and the merry-go-round
Steve Jobs famously hoped that the personal computer would be a “bicycle for the mind”. AI tools that increase human productivity are like Jobs’ bicycle: they help us move faster while exercising our own capabilities. But I don’t believe AI that keeps us from having to learn entry-level tasks will really bolster mankind’s creative power. I don’t think we can expand our creative horizons unless people really understand the fundamentals of the fields they’re trying to expand, and that understanding can only be gleaned by years of practice, failure and incremental growth.

Unlike a bicycle, a merry-go-round doesn’t require any effort to ride. It even has the illusion of forward motion, but it’s not going anywhere, and it’s not making you stronger. Here’s a weirdly plausible future scenario: AI never matches human creativity, but excels at optimizing pre-defined, repetitive tasks. Humans struggle to find entry-level work in cutting-edge fields due to the prevalence of AI assistants. Society’s increasing reliance on AI perpetuates status quo thinking and suppresses innovation, and the very technology that was supposed to enhance our abilities actually traps us in a loop of self-reinforced inertia. Welcome to the merry-go-round.
We’re already seeing this trend - of all places - in human resources, a discipline you might think would be hard to automate. Many companies now use predictive algorithms to screen job applicants for “soft skills”, seeking a perfect fit for the company culture. The automation is great at identifying candidates who match a pre-defined set of parameters. But what if the parameters are wrong? What if the AI is reinforcing a toxic culture? Without a human who understands the system, who will break the circle?
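To see how small the gap is between “great at matching parameters” and “blindly reinforcing them,” here’s a toy sketch - the parameters and candidate fields are invented for illustration, not drawn from any real screening product:

```python
# Toy illustration, not a real screening system: the "culture fit"
# parameters are hard-coded assumptions. The filter faithfully
# optimizes for them whether or not they are right.
CULTURE_FIT = {
    "min_assertiveness": 0.8,  # what if quiet candidates do better work?
    "max_job_changes": 2,      # what if career-changers bring fresh ideas?
}

def screen(candidate: dict) -> bool:
    """Return True if the candidate matches the pre-defined parameters."""
    return (
        candidate["assertiveness"] >= CULTURE_FIT["min_assertiveness"]
        and candidate["job_changes"] <= CULTURE_FIT["max_job_changes"]
    )

applicants = [
    {"name": "A", "assertiveness": 0.9, "job_changes": 1},
    {"name": "B", "assertiveness": 0.6, "job_changes": 4},  # silently rejected
]

print([a["name"] for a in applicants if screen(a)])  # ['A']
```

The code does exactly what it was told, and nothing more. Every hire it approves makes the parameters look more correct, and nobody ever asks whether candidate B would have been the better engineer.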
The Two Rules of Automation
AI true believers are fond of imagining a future in which infinitely self-improving AI will either wipe us off the planet or become our benevolent overlord. But the real danger of AI may be a lot more immediate, and a lot more subtle, for the simple reason that it’s not infinitely self-improving now and probably never will be. I keep these two rules in mind at work:

- If you don’t understand an automated process that helps you do your job, you had better hope the automation never fails.
- Automation always fails.
Automation may fail obviously, with alarms and error messages, or it may fail insidiously, as a suboptimal process gets entrenched because no human knows or cares enough to push for improvements. But it will fail, and we need people who understand systems well enough to handle those failures when they happen. The more of us who are content to let someone else’s process handle our work without taking the time to understand what’s really going on, the faster we’ll slide toward a uniquely depressing version of the AI apocalypse - one where computer programs that are sort of okay have replaced people who have lost the ability to make them better. And the merry-go-round will slowly spin to a stop.