Grief at the end of history
Epistemic status: emotional ranting and grief. (And yes, I'm aware of the structural incentives of capitalism and not deluded that it's all about individuals; allow me to indulge anyway.)
My days are absolutely filled with reading about and discussing AI x-risk. How to think about it. What to do. How to think about what to do.
And I just keep coming back to how wrong it feels to yield to this logic: the best thing we can do is to join everyone else in doing the thing that is causing our destruction!
And it's quite likely that that's not a strategic response given the situation we now find ourselves in.
But my thoughts keep coming back to the root cause: why the fuck did we start an AI race in the first place? We could have avoided so many downstream problems and tough decisions if we just had avoided this one upstream thing!
I know I'm not saying anything new; I just feel such incredible despair at what an absolute fucking disaster OpenAI has been. Seeing the slow train wreck play out over a decade.
Scott Alexander wrote this about the situation prior to OpenAI.
"Speculatively, DeepMind hoped to get all the AI talent in one place, led by safety-conscious people, so that they could double-check things at their leisure instead of everyone racing against each other to be first."
Have a look at this email. It was sent by Sam Altman to Elon Musk in 2015 — the email that initiated OpenAI.
"Been thinking a lot about whether it's possible to stop humanity from developing AI.
I think the answer is almost definitely not.
If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.
Any thoughts on whether it would be good for YC to start a Manhattan Project for AI? My sense is we could get many of the top ~50 to work on it, and we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. Obviously we'd comply with/aggressively support all regulation."
And so they went ahead and started OpenAI, split up the AGI builders, and initiated the AI race by competing with DeepMind. DeepMind tried and failed to outbid the salaries of all the prospective OpenAI employees.
They somehow, despite all having read Bostrom's "Superintelligence", came to the conclusion that AGI should be open-sourced (???) and spread that meme (it took them several years to roll that back).
Then they burned massive amounts of timeline by making huge gains towards AGI.
Then Elon Musk left OpenAI and eventually ended up building a competing AGI company.
Then lots of safety-minded people left in order to build *another* AGI lab to compete with OpenAI and DeepMind.
Then they massively intensified the AI race by releasing ChatGPT, drawing hundreds of billions of dollars of investment into the AI industry, waking up China, and burning even more timeline.
Then the safety-minded people on the board tried to get Sam Altman out, failed, and were themselves removed from the board.
Then one of the supposed safety people, Ilya Sutskever, left to create yet *another* competing AGI company.
They said they were speeding towards AGI in order to avoid a compute overhang — but then they began pushing for gargantuan build-outs of computing infrastructure, deliberately trying to increase the compute overhang.
Then OpenAI started campaigning against AI regulation.
Now they want to turn it from a for-profit owned by a non-profit into a straight-up for-profit, where the goal, instead of "ensur[ing] AI benefits all society", will be to "Pursue charitable initiatives in sectors such as health care, education, and science".
As Scott Alexander put it: "Pessimistically, it sounds like they’re trying to change the deal from “investors can’t capture the Singularity for themselves, and profits get paid out as UBI” to “investors will capture the Singularity, and we’ll buy off everyone else’s birthright by funding some hospitals or something pre-singularity”."
So if we go back to the original reasons Sam Altman gave in that first email, arguably every single one of them has now been invalidated. Sam probably correctly assumed that AGI was going to be built by someone at some point, but he made it way more likely, and way sooner. Given the change in the company's goal, the tech won't "belong to the world". OpenAI probably won't be controlled by the nonprofit. And they definitely aren't aggressively supporting all regulation.
But apparently, the risk of Google DeepMind being the ones to create AGI was soooo much worse than the huge risk of extinction you impose by initiating a race dynamic!
The thing that gets to me is that all these people were steeped in AI risk literature. They knew about the risk of extinction. They knew about the extreme dangers of an arms race. Sam Altman wrote in 2015 that the "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."
And yet they went ahead and did it anyway!
As Eliezer Yudkowsky put it:
"Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival; previously there was a nascent spirit of cooperation, which Elon completely blew up to try to make it all be about *who*, which monkey, got the poison banana, and by spreading and advocating the frame that everybody needed their own "demon" (Musk's old term) in their house, and anybody who talked about reducing proliferation of demons must be a bad anti-openness person who wanted to keep all the demons for themselves.
Nobody involved with OpenAI's launch can reasonably have been said to have done anything else of relative importance in their lives. The net impact of their lives is their contribution to the huge negative impact of OpenAI's launch, plus a rounding error.
Previously all the AGI people were at the same conference talking about how humanity was going to handle this together. Elon Musk didn't like Demis Hassabis, so he blew that up. That's the impact of his life. The end."
I'm sorry, but fuck these people. These people will very likely be responsible for omnicide. These people will likely be responsible for prematurely killing you and everyone you love. These people will likely be responsible for destroying the future of humanity.
To channel Greta Thunberg, but for AI risk:
"You have stolen my dreams and my future. And yet I'm one of the lucky ones. People will suffer. People will die. We are in the beginning of a mass extinction, and all you can do is hype up AGI and talk about fairy tales of massive economic growth. How fucking dare you!"
I don't know about you guys, but as events have been unfolding, it has become more and more real. I've been immersed in the AI x-risk discussion for the past 11 years, but for most of that time it was too abstract; it didn't quite sink in. Now it's becoming more and more emotionally internalised — that we are likely witnessing a rather imminent demise. Here is something Aella tweeted in response to JD Vance's jingoistic and accelerationist talk at the Paris AI Action Summit:
"We're all dead. I'm a transhumanist, I love tech, I desperately want aligned AI, but at our current stage of development, this is building the equivalent of a planet-sized nuke. The reason is boring and complicated and technical, so midwits in power don't understand the danger. It's really an enormity of grief to process. I live my life as though the planet has a few more years left to live - e.g. i've stopped saving for retirement."
*It's really an enormity of grief to process.*