
What does a Harry Potter fanfic have to do with OpenAI?


Have you ever puked and had diarrhea at the same time? I have, and when it happened, I was listening to a fan-made audiobook version of Harry Potter and the Methods of Rationality (HPMOR), a fan fiction written by Eliezer Yudkowsky.

No, the dual-ended bodily horror was not incited by the fanfic, but the two experiences are inextricable in my mind. I was shocked to find years later that the 660,000-word fanfic I marathoned while sick has some bizarre intersections with the ultra-wealthy technorati, including many of the figures involved in the current OpenAI debacle.

Case in point: In an Easter egg spotted by 404 Media (one too minor for anyone else to notice, even me, someone who's actually read the thousand-odd-page fanfic), there's a once-mentioned Quidditch player in the sprawling story named Emmett Shear. Yes, the same Emmett Shear who co-founded Twitch and was just named interim CEO of OpenAI, arguably the most influential company of the 2020s. Shear was a fan of Yudkowsky's work, following the serialized story as it was published online. So, as a birthday present, he was gifted a cameo.

Shear is a longtime fan of Yudkowsky's writing, as are many of the AI industry's key players. But this Harry Potter fanfic is Yudkowsky's most popular work.

HPMOR is an alternate-universe rewrite of the Harry Potter series, which begins with the premise that Harry's aunt Petunia married an Oxford biochemistry professor instead of the abusive dolt Vernon Dursley. So, Harry grows up as a know-it-all kid obsessed with rationalist thinking, an ideology that prizes experimental, scientific reasoning to solve problems, eschewing emotion, religion and other imprecise measures. It's not three pages into the story before Harry quotes the Feynman Lectures on Physics to try to settle a disagreement between his adoptive parents over whether or not magic is real. If you thought the original Harry Potter could be a bit frustrating at times (why doesn't he ever ask Dumbledore the most obvious questions?), prepare for this Harry Potter, who could give the eponymous "Young Sheldon" a run for his money.

It makes sense that Yudkowsky runs in the same circles as many of the most influential people in AI today, since he himself is a longtime AI researcher. In a 2011 New Yorker feature on the techno-libertarians of Silicon Valley, George Packer reports from a dinner at the house of billionaire venture capitalist Peter Thiel, who would later co-found and invest in OpenAI. As "blondes in black dresses" pour the men wine, Packer dines with PayPal co-founders like David Sacks and Luke Nosek. Also at the party is Patri Friedman, a former Google engineer who got funding from Thiel to start a nonprofit that aims to build floating, anarchist sea civilizations inspired by the Burning Man festival (after 15 years, the organization doesn't appear to have made much progress). And then there's Yudkowsky.

To further connect the parties involved, behold: a 10-month-old selfie of now-ousted OpenAI CEO Sam Altman, Grimes and Yudkowsky.

Yudkowsky isn't a household name like Altman or Elon Musk. But he tends to crop up repeatedly in the stories behind companies like OpenAI, and even behind the great romance that brought us children named X Æ A-Xii, Exa Dark Sideræl and Techno Mechanicus. No, really: Musk once wanted to tweet a joke about "Roko's Basilisk," a thought experiment about artificial intelligence that originated on LessWrong, Yudkowsky's blog and community forum. But, as it turned out, Grimes had already made the same joke about a "Rococo Basilisk" in the music video for her song "Flesh Without Blood."

HPMOR is quite literally a recruitment tool for the rationalist movement, which finds its virtual home on Yudkowsky's LessWrong. Through an admittedly entertaining story, Yudkowsky uses the familiar world of Harry Potter to illustrate rationalist ideology, showing how Harry works against his cognitive biases to become a master problem-solver. In a final showdown between Harry and Professor Quirrell (his mentor in rationalism who turns out to be evil), Yudkowsky broke the fourth wall and gave his readers a "final exam." As a community, readers had to submit rationalist theories explaining how Harry could get himself out of a fatal predicament. Thankfully, for the sake of happy endings, the community passed.

But the moral of HPMOR isn't simply to be a better rationalist, or as "less wrong" as you can be.

"To me, so much of HPMOR is about how rationality can make you incredibly effective, but incredibly effective can still be incredibly evil," my only other friend who has read HPMOR told me. "I feel like the whole point of HPMOR is that rationality is irrelevant at the end of the day if your alignment is to evil."

But, of course, we can't all agree on one definition of good versus evil. This brings us back to the upheavals at OpenAI, a company that's attempting to build an AI smarter than humans. OpenAI wants to align this artificial general intelligence (AGI) with human values (such as the human value of not being killed in an apocalyptic, AI-induced event), and it just so happens that this "alignment research" is Yudkowsky's specialty.

In March, hundreds of notable figures in AI signed an open letter calling for all "AI labs to immediately pause for at least 6 months."

Signatories included Meta and Google engineers, founders of Skype, Getty Images and Pinterest, Stability AI founder Emad Mostaque, Steve Wozniak and even Elon Musk, a co-founder of OpenAI who stepped down in 2018. But Yudkowsky didn't sign the letter, and instead penned an op-ed in TIME Magazine arguing that a six-month pause isn't radical enough.

"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," Yudkowsky wrote. "There's no proposed plan for how we could do any such thing and survive. OpenAI's openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all."

While Yudkowsky argues for the doomerist approach when it comes to AI, the OpenAI leadership kerfuffle has highlighted the wide range of differing beliefs about how to navigate technology that's possibly an existential threat.

Acting as the interim CEO of OpenAI, Shear (now one of the most powerful people in the world, and not a Quidditch seeker in a fanfic) is posting memes about the different factions in the AI debate.

There are the techno-optimists, who support the growth of tech at all costs, because they think any problems caused by this "growth at all costs" mentality will be solved by tech itself. Then there are the effective accelerationists (e/acc), which seems to be kind of like techno-optimism, but with more language about how growth at all costs is the only way forward because the second law of thermodynamics says so. The safetyists (or "decels") support the growth of technology, but only in a way that's regulated and safe (meanwhile, in his "Techno-Optimist Manifesto," venture capitalist Marc Andreessen decries "trust and safety" and "tech ethics" as his enemies). And then there are the doomers, who think that when AI outsmarts us, it will kill us all.

Yudkowsky is a leader among the doomers, and he's also someone who has spent the last few decades running in the same circles as what seems like half the board of OpenAI. One popular theory about Altman's ousting is that the board wanted to appoint someone who aligned more closely with its "decel" values. So, enter Shear, who we know is inspired by Yudkowsky and also considers himself a doomer-slash-safetyist.

We still don't know what's happening at OpenAI, and the story seems to change about once every 10 seconds. For now, techy circles on social media continue to fight over decel versus e/acc ideology, using the backdrop of the OpenAI chaos to make their arguments. And in the midst of it all, I can't help but find it fascinating that, if you squint at it, all of this traces back to one really tedious Harry Potter fanfic.
