From turning ordinary language into code, to making quick work of repetitive research tasks, AI is making good on its promise to liberate people from drudgery, freeing them up to focus on higher level tasks. But sometimes, exorcising drudgery can stifle innovation.
The current wave of generative language transformers has been a set of tools looking for a viable application since their release. One place where the industry seems to be finding traction is in writing computer code. By extensively augmenting the generative transformers with classic machine learning, computer scientists have shown that the algorithms can compose software with some minimal level of reliability. The output can be judged more reliably than that of a general language model—the code the tool produces either works or it doesn't. It is good or bad. The input to the coding tool is recognizable English; the user types "write code that accomplishes X." No expertise is required to use the tool—think Alexa or Siri. This seems like a win.
As a goal, eliminating drudgery by automating and streamlining the coding process has been around for most of the lifetime of modern computing. This application of large language models promises to replace the middling coder performing the most routine tasks and allow great coders to focus on breakthroughs, especially in AI.
But that elimination of the middling coder and the drudgery of routine work speaks volumes as a goal for the industry. Picture a bell curve of software development performance. On the far right lies the exceptional output of developers leading new, innovative work—the upper tail of the curve, i.e., Malcolm Gladwell's outliers. On the left tail lie the poor performers, and in the middle are all the middling coders performing the quotidian, marginally rewarding coding work. Automate everything to the left of the star performers with AI, and an efficient system would seem to result. There would be nothing left but star software developers pushing the boundaries of the field.
Except that is not how statistics work. Without the mean, there can be no tails to the curve. Exceptional coders do not self-identify, and all exceptional software innovators started as middling coders.
Replace the routine work at the mean of the curve with AI and you eliminate the path that the best coders took to become proficient and then to excel. Innovation will ultimately suffer. In other words, the marketing goal of AI for software composition, as with OpenAI Codex, reflects a naïve, myopic, unsustainable perspective. Metaphorically, the industry wants to eat its own seed corn.
Extend that to other applications in which AI is said to be delivering humans from drudgery. For example, Deep Research is “an agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you.” Who could possibly disagree with that? Students and researchers lead the list when it comes to looking for shortcuts that expedite finishing that term paper and getting that manuscript published—a marketplace ripe for the picking.
Unfortunately, this innovation stifles innovation. When humans do the drudgery of literature search, citation validation, and due research diligence—the things OpenAI claims for Deep Research—they serendipitously see things they weren't looking for. They build on ideas of others that they hadn't considered before and are inspired to form altogether new ideas. They also learn cognitive skills, including the ability to filter information efficiently and to recognize discrepancies in meaning.
In my own field of systems analysis, I have seen decades of researchers cite incorrect information—and expand it into a self-perpetuating world view. Critical thinking leads a researcher to question work that others took as foundational and to spot the error. Tools such as Deep Research are incapable of spotting the core truth and so will perpetuate misdirection in research. That is the opposite of good innovation.
Sometimes drudgery has utility in developing human understanding and wisdom. Other times, it is merely drudgery. As a student engineer, I learned drafting at a table with pencil and gum eraser. Computer-aided design made all of that obsolete, for the better.
Tools that simulate electronics, aerodynamics, mechanics, and virtually all of the applied sciences facilitate design and development, and lie at the core of modern innovation. Physicists, chemists, and biologists use AI-enabled simulations to study what cannot be seen. Similar AI tools in video and web design have made expression and art more accessible to creators. AI is wonderful stuff.
How do we take advantage of these AI developments without stifling the hard work of becoming proficient in the core skills that ultimately provide the foundation of innovation? If I want to write poetry in another language, I need a certain level of fluency in that language. AI may assist me in becoming fluent ("conjugate the verb to be"), but if the tool becomes a crutch, at some point the tendency is to ask the tool, "write a poem about a summer day in French." I might modify the output to my own taste, but the level of creativity is diminished.
What is needed is the systems view, the critical thinking, the human factors analysis of AI. Those who have invested capital in the current wave of AI tools want a return on that investment. We cannot count on them to think critically beyond that return, and we must read what they publish with that in mind. Critical thinking must come from those who take the systems view and who can question the net utility to human knowledge, understanding, and wisdom in the long term. That may be academics, policymakers, and those of us advising the policymakers. The American incubator of innovation and creativity may depend on it.