Last month, Google’s artificial intelligence branch DeepMind unleashed AlphaGo Zero, the latest iteration of its ground-breaking Go-playing machine. The team that built it certainly deserves applause: it plays Go really well; it is, in fact, the best Go player in the world. “Best” might not even cut it; it could well be unbeatable.
And while that’s all well and good, if you happened to see any headlines about it, no one would have blamed you for reaching towards the closest Arnie-shaped robot you could find. Headlines such as “It’s able to create knowledge itself”, “Supercomputer learns 3,000 years of human knowledge in 40 days”, or (my personal favourite) “How the demon plays Go” are more science-fiction than sober science fact.
Of course, this particular breakthrough is hardly the first to suffer from reality-stretching reporting. We are all tired of reading how red wine/chocolate/olives are the new miracle cancer cure one month, only to be revealed as the source of all cancer in the next. If only we could know in advance in which months it’s safe to eat them!
If, like me, you’re the boring sort who chases up the papers that originate these claims, the language there could not be more different. All the contradictions tend to melt away into a puddle of small sample sizes, margins of error, maybe-ifs, or “when tested in lab mice”. No one really wants to read about (2 ± 3)% increases in cancer rates in mice, so is it all the journalists’ fault for chasing the clicks? Actually, not entirely; research from Cardiff University shows that most exaggerated or misleading claims were “already present in the text of the press releases produced by academics and their establishments”.
In an ideal world, even if the press release is their first encounter with that bit of research, a writer would then take the time to read the paper themselves, contact the authors, and speak to related research groups elsewhere. That of course takes time, time that might just not be available in a busy newsroom, and too often a news article simply becomes a reshuffled press release. And while it may be obvious why a university’s PR department would want to hype any research going on in their institution, the scientists themselves aren’t always blameless. Too often, any media engagement is seen as ‘not my problem’ – a reality that, hopefully, seems to be changing.
I think things are a bit different in the AlphaGo case, though. Hype aside, this is a genuine breakthrough. The original machine decisively beat the best human players in the world last year, and AlphaGo Zero trounced its older sibling 100 games to nil. These are the sort of results that no margin of error can dissolve away.
The first, and perhaps most important, difference is where this research is coming from. DeepMind is a company, after all; despite its (and, more generally, Silicon Valley’s) ‘we come in peace’ attitude, its priorities and objectives are simply different from those of a university research group. At the very least, there seems to be less pressure to stay modest – that line about creating knowledge came straight from AlphaGo Zero’s lead researcher.
AI research is also a field loaded with baggage right from its beginnings. Even discounting the countless books, films, and video games on the subject, it might just be impossible not to humanise something that, by most definitions of the word, is able to learn. Cultural references can be helpful to a writer: just drop a nice Skynet or HAL reference in the lede and go on from there. But do these crutches end up doing more harm than good?
Not many people have the time or the will to do their own in-depth review of every topic under the sun. The constant, contradictory claims about the links between food and disease have certainly left a general feeling that it’s all just a bunch of hokum. While I doubt anyone will be storming DeepMind’s offices with pitchforks any time soon, as the field picks up pace we could see a backlash similar to the one GM foods have faced.
One thing seems clear: fixing these issues will require concerted effort from every group in the chain between discovery and the public. In the not-so-far future, the discussion of ethics within AI research is bound to move to the legislative arena. Over-inflated claims and clickbait headlines about future developments in the field should not set the tone of that conversation, or we run the risk of having the debate decided on alarmist straw men.