The past year has been a roller coaster in the AI world, and no doubt many people are giddy from the constant stream of advances and reversals, the relentless hype and equally relentless fear-mongering. But let's take a step back: AI is a powerful and promising new technology, yet the conversation around it is not always sincere, and it generates more heat than light.
AI is of interest to everyone, from PhD students to primary school children, and for good reason. Not every new technology makes us question the fundamental nature of human intelligence and creativity, and lets us generate an infinite variety of dinosaurs fighting with lasers.
This broad appeal means that the debate over what AI is, should be, and shouldn't be has spread from trade conferences like NeurIPS, to specialist publications like this one, to the front pages of supermarket checkout news magazines. The threat and/or promise of AI (in a general sense; that lack of specificity is part of the problem) has seemingly become a household topic overnight.
On the one hand, it must be validating for researchers and engineers who have spent decades toiling in relative obscurity on what they believe is an important technology to see it so widely considered and noticed. But like the neuroscientist whose paper results in a headline like "Scientists have pinpointed the exact center of love," or the physicist whose ironically named "god particle" sparks a theological debate, it must surely be frustrating to see one's work batted around among the hoi polloi (that is, unscrupulous pundits, not innocent laypeople) like a beach ball.
"AI can now…" is a very dangerous way to start a sentence (although I'm sure I've done it myself), because it's very hard to say for sure what AI is actually doing. It can certainly beat any human at chess or Go, and it can predict the structure of protein chains; it can answer any question with confidence (if not correctness), and it can do a remarkably good impersonation of any artist, living or dead.
But it's hard to tell which of these things are important, and to whom, and which will be remembered in five or ten years as brief diversions, like so many innovations we've been told would change the world. The capabilities of AI are widely misunderstood because they have been actively misrepresented, both by those who want to sell or invest in the technology and by those who fear or underestimate it.
Obviously there's a lot of potential in something like ChatGPT, but those building products with it would like nothing more than for you (possibly a customer, or at least anyone who comes across it) to think it's more powerful and less error-prone than it really is. Billions are being spent to ensure that AI is at the heart of services of all kinds, not necessarily to make them better, but to automate them the way so much else has been automated, with mixed results.
Not to invoke the nebulous "they," but they (that is, companies like Microsoft and Google, which have a huge financial stake in AI succeeding in their core businesses, having invested so much in it) are not interested in changing the world for the better so much as in making more money. They are companies, and AI is a product they sell or hope to sell; that's not a slander against them, just something to keep in mind when they make their claims.
On the other hand, you have people who rightly fear that their role will be eliminated, not because of actual obsolescence, but because a gullible manager has swallowed the "AI revolution" hook, line, and sinker. People don't read ChatGPT's output and think, "oh no, this software does what I do." They think, "this software seems to do what I do, to people who don't understand what I do either."
That is a real danger if your work is systematically misunderstood or undervalued, as much work is. But it's a problem of management style, not of AI. Fortunately, we have bold experiments like CNET's attempt to automate its financial advice columns: the graves of such ill-advised efforts will serve as grim trail markers for those tempted to make the same mistakes in the future.
But it is just as dangerous to dismiss AI as a toy, or to say it will never do this or that, simply because it can't now, or because one has seen an example of it failing. It's the same mistake the other side makes, in reverse: proponents see a good example and say, "This shows it's over for concept artists"; detractors see a bad example (perhaps even the same one!) and say, "This shows it can never replace concept artists."
Both build their houses on shifting sand. But, of course, both clicks and eyeballs are the fundamental currency of the online world.
And so you have these dueling extreme takes that command attention, not because they're thoughtful, but because they're reactive and extreme, which shouldn't surprise anyone, since, as we've all learned over the past decade, conflict drives engagement. What feels like a cycle of hype and disappointment is just fluctuating visibility into an ongoing and not very helpful argument about whether AI is essentially this or essentially that. It has the feel of people in the '50s arguing over whether we'd colonize Mars or Venus first.
The reality is that many of those concept artists, not to mention novelists, musicians, tax consultants, lawyers, and members of any other profession that sees AI intruding in one way or another, are actually excited and interested. They know their job well enough to understand that even a very good imitation of what they do is fundamentally different from actually doing it.
Advances in AI come more slowly than you might think, not because there aren't breakthroughs, but because those breakthroughs are the result of years and years of work that isn't as photogenic or shareable as stylized avatars. The most important development of the last decade was the paper "Attention Is All You Need," but we didn't see that on the cover of Time. It's certainly noteworthy that, as of this or that month, AI is good enough to do certain things, but think of it less as AI "crossing a boundary" and more as AI moving further along a long, long gradient, a continuum that even its most gifted practitioners cannot see more than a few months down.
All this is just to say: don't get caught up in the hype or the doomsaying. What AI can or can't do is an open question, and if someone says they know, check whether they're trying to sell you something. What people might want to do with the AI we already have is something we can and should talk about more. I can live with a model that can ape my writing; I'm only aping a dozen other writers myself anyway. But I would rather not work for a company that algorithmically determines wages or decides who gets fired, because I wouldn't trust the people who put that system in place. As usual, the technology isn't the threat; it's the people using it.