I went to a Q&A by the “godfather of AI” Geoff Hinton the other night, and was invited to a (mostly) academic dinner afterwards. In the slightly awkward period at the start, while we were milling around (also often called “pre-dinner drinks”), I began chatting to a very eminent computer scientist who had retired some years ago, having mostly – I gather – worked in academic labs.
By way of making conversation, the Very Eminent Computer Scientist asked me what I “do”. That is an oddly complicated question at the best of times, because I’m a chronic overthinker, don’t have a convenient Richard Scarry job title, and I do a few different things. I’m also an anomaly in that sort of setting as I’ve been working in technology for nearly 30 years without ever doing the kinds of things that generally equate to status: I’m not an academic, I’ve never worked at Google, and I’m a very long way from being fantastically wealthy. Nevertheless, when people make polite conversation before dinner, they don’t want a full spiel about how, actually, you work in an emerging field in which job titles are not yet fully formed and your practice is angled towards realising an inclusive feminist vision of digital technologies because, well, life is short and people are hungry, so instead I said, “I help people understand how technologies work in the world” and dropped the vague “tech ethics” catch-all, and he replied, “How interesting, and, may I ask, what qualifies you to do this?”
Being asked by a professor “what qualifies you to do this?” while standing in the Senior Common Room of a Cambridge college is quite daunting, so I emitted a vaguely incoherent freefall of word association – the sort you kick yourself about while replaying a slew of potential pithy aperçus. I explained that I’d spent twenty years making and commissioning digital services and products, some of them used by millions of people, and so my practice was based on observing what happens to a technology when it goes out into the world: how it’s adapted and changed, and how every technology is essentially unfinished until it’s used by people. Mercifully, at that moment, we were ushered into dinner, but “what qualifies you to do this” stuck with me, and I wished I’d had a better answer.
The question of “what qualifies you” to understand a technology is particularly relevant at the moment, as we enter the nth week of Sam Altman’s AI Hype Roadshow, a cavalcade of open letters and AI doomspeak from World-Leading Authorities, in which the term “AI” has been a compelling vehicle for a wide range of as-yet imaginary concepts.
In this instance, the ability to understand a technology is neither here nor there, because the point has not been to discuss any of the relevant technologies. Instead, the project of Altman and his merry band of doomsayers appears to be to capture power and create obfuscation by making new myths and legends. If there was a teachable moment, then the lesson has not been one about the potential of technologies but about the importance of media literacy.
And this is by no means a new move; it just happens – this time – to have been astonishingly effective. For several decades, tech companies have been aware that political influence is as important as technological innovation in shaping future market opportunities: from tactical advertising to political lobbying to creating well-paid public-policy jobs that have improved the bank balances of many former politicians and political advisers, the importance of getting in first with a compelling political story has played a critical role in creating, expanding, and maintaining their incredibly lucrative markets.
The current “existential threat” framing is effective because it fits on a rolling news ticker, diverts attention from the harms being created right now by data-driven and automated technologies, and confers huge and unknowable potential power on those involved in creating those technologies. If these technologies are unworldly, godlike, and unknowable, then the people who created them must be greater than gods; their quasi-divinity transporting them into state rooms and on to newspaper front pages without needing to offer so much as a single piece of compelling evidence for their astonishing claims. This grandiosity makes the hubris of the first page of Stewart Brand’s Whole Earth Catalog seem somewhat tame, and it assumes that nobody will pull back the curtain and expose it as a market-expansion strategy rather than a moment of redemption. Nobody will ask what the words really mean, because they don’t want to look like they don’t really understand.
And yet, really, it’s only a narrative trick: the hidden object is not a technology, but a bid for power. This is a plot twist familiar from Greek myths, cautionary tales and superhero stories, and it’s extremely compelling for journalists because most technology news is boring as hell. Altman’s current line is roughly, “please regulate me now because I’m not responsible for how powerful I’m going to become – and, oh, let’s just skip over all the current copyright abuses and potentially lethal misinformation because that’s obvs small fry compared with when I unintentionally abolish humanity”. If it reminds me of anything, it’s the cartoon villain Dr Heinz Doofenshmirtz from Phineas and Ferb, who makes regular outlandish claims before trying, and failing, to take control of the Tri-State Area. The difference is, of course, that Phineas and Ferb always frustrate his plan.
My point is not so much that we need Phineas and Ferb to come and sort this all out, but that we need to stop normalising credulity when individuals with power and money and fancy titles say extraordinary things. When I went to Hinton’s Q&A in Cambridge last week, he spoke with ease and expertise about neural nets, but admitted he knows little about politics or regulation or people beyond computer labs. These last points garnered several laughs from the audience, but they weren’t really funny; they spoke to a yawning gap in the way that technology is understood, spoken about and covered in the media. Computer science is a complex discipline, and those who excel at it are rightly lauded, but so is understanding and critiquing power and holding it to account. Understanding technologies requires also understanding power; it needs media literacy as well as technical literacy; incisive questioning as well as shock and awe.
If there is an existential threat posed by OpenAI and other technology companies, it’s the threat of a few individuals shaping markets and societies for their own profit. Elite corporate capture is the real existential risk, but it looks much less exciting in a headline.