Is AI being overhyped?
A popular recent cartoon shows a marketing employee editing a PowerPoint deck about NFTs (an extremely unsuccessful form of blockchain-based digital asset) and simply replacing the word NFT with AI, leaving everything else unchanged. The point is that AI is the latest overhyped fad, with no real substance.
With more and more companies talking up their use of AI without making a case for exactly how it will help them, it’s worth taking a critical look at the industry.
A History of AI Hype
Sceptics believe the whole AI industry is overhyped, and that parallels can be drawn to the dot-com bubble. Something similar has happened before: when the original AI algorithms were developed and their potential began to be recognised, nation-states and businesses invested heavily. When the promised technologies failed to arrive, interest and funding waned, producing the so-called “AI winters”. This happened several times: first in the 1950s and 1960s, when hopes for AI-driven machine translation were dashed; then in the 1970s and 1980s, when neural networks failed to deliver the promised machines capable of reasoning; and again later, as strategic and military applications fell short of government hopes.
However, these cycles of hype and disappointment are misleading. Much of the theory behind this earlier AI research was sound, but it was held back by the limited computing power and datasets of the time. In fact, modern AI systems still use and build upon many of the algorithms and concepts of the 1970s and 1980s; neural networks remain at the heart of programs such as ChatGPT and image-generation software. Machine translation through AI is now possible and highly effective. And AI’s current strategic implications for drone warfare have made it a focus of militaries everywhere. So one could argue that the AI hype of the past was not wrong, merely early.
The Failure of Driverless Cars
Despite this, modern critics still have several examples to draw upon. Perhaps the strongest is the failure of the driverless car to arrive. Driverless cars have reportedly been two or three years away from the mainstream for at least a decade, and while trials have been carried out around the world, at the time of writing no country has moved to mainstream adoption.
Instead, companies like Uber have given up on the technology entirely. Crashes involving Tesla cars with self-driving features have attracted intensely negative press coverage, and some argue the technology will never be ready.
Part of the problem AI faces in this sector is that, in matters of life and death, governments and regulators are unwilling to hold AI to the same standard as humans, who cause an enormous number of crashes themselves: an estimated 1.25 million road deaths worldwide each year. Rather than asking AI to equal or surpass that fairly low bar for driver safety, regulators appear to expect AI to reach a level where it is virtually never responsible for any accident. While understandable, that standard may not be realistic for something as complex as driving, even for the very best AI. For driverless cars to go mainstream and succeed, what may be needed is a change in government attitudes and a higher tolerance of risk, rather than a fundamental technological breakthrough.
That said, the technology has had some recent successes. China currently allows robotaxis in designated zones across several of its cities, and it has been reported that Tesla will test its own forthcoming robotaxi there.
Generative AI’s Copyright Challenges
AI sceptics such as the psychologist and cognitive scientist Gary Marcus have argued that the very structure of generative AI products may leave them vulnerable to continued copyright claims. Citing the New York Times’ ongoing copyright-infringement litigation against OpenAI, in which the paper has shown that ChatGPT will reproduce NYT articles for users almost verbatim, Marcus argues:
“Systems like DALL-E and ChatGPT are essentially black boxes. GenAI systems don’t give attribution to source materials because at least as constituted now, they can’t.”
If Marcus is right, generative AI may indeed have to scale back some of its ambitions. It might, for example, have to refrain from offering answers that reference recent events or news stories, or fall back on a far weaker dataset for image creation, which would make the products noticeably worse. But that would not end generative AI; it would just reduce its scope.
The State of Modern AI
Perhaps the biggest difference between the current state of AI and the hyped technologies of the past is that the current version can be implemented quickly and is already making companies money. Individual users and companies of all sizes have proved willing to put their money where their mouths are, paying for tools ranging from ChatGPT to Stable Diffusion. This in turn is creating a wider ecosystem in which companies can offer services such as AI training and AI strategy consulting.
Risks of AI Being Too Powerful
As modern AI has in fact gone from strength to strength, one final line of argument can be made: that AI is overhyped not because it is not good enough, but because it is too good, and that governments will be forced to regulate it so heavily that it can barely function.
And it is true that generative AI can be used in harmful ways. Combined with voice-imitation software, it can create extremely convincing scam calls, and in an age of fake news, deepfakes could make things worse. Teachers worldwide have complained about students using ChatGPT to write their essays for them.
And yet, like any tool, AI can also be used for good. Its pattern-matching abilities make it well suited to identifying and stopping scams and fake videos. And the essays-and-exams model of education was criticised long before AI arrived for teaching students to study for the test rather than to genuinely learn. Perhaps it is time to reform that system and create individually tailored curriculums for students.
Of course, that would not be practical for a teacher working alone. But a teacher with help from AI? That might be a different story.