The AI doomers are licking their wounds


How a comparatively small subculture suddenly rose to prominence

Animation of a glitching warning sign
Illustration by The Atlantic

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

For a moment, the AI doomers had the world’s attention. ChatGPT’s launch in 2022 felt like a shock wave: That computer programs could suddenly evince something like human intelligence suggested that other leaps might be just around the corner. Experts who had worried for years that AI could be used to develop bioweapons, or that further development of the technology might lead to the emergence of a hostile superintelligence, finally had an audience.

And it’s not clear that their pronouncements made a difference. Although politicians held plenty of hearings and made numerous proposals related to AI over the past couple of years, development of the technology has largely continued without meaningful roadblocks. To those concerned about the destructive potential of AI, the risk remains; it’s just no longer the case that everybody is listening. Did they miss their big moment?

In a recent article for The Atlantic, my colleague Ross Andersen spoke with two notable experts in this group: Helen Toner, who sat on OpenAI’s board when the company’s CEO, Sam Altman, was abruptly fired last year, and who resigned after his reinstatement, and Eliezer Yudkowsky, the co-founder of the Machine Intelligence Research Institute, which is focused on the existential risks posed by AI. Ross wanted to know what they learned from their time in the spotlight.

“I’ve been following this group of people who are concerned about AI and existential risk for more than 10 years, and during the ChatGPT moment, it was surreal to see what had until then been a relatively small subculture suddenly rise to prominence,” Ross told me. “With that moment now over, I wanted to check in on them, and see what they’d learned.”



AI Doomers Had Their Big Moment

By Ross Andersen

Helen Toner remembers when everyone who worked in AI safety could fit onto a school bus. The year was 2016. Toner hadn’t yet joined OpenAI’s board and hadn’t yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. “It was, like, 50 people,” she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline.

But things were changing. The deep-learning revolution was drawing new converts to the cause.

Read the full article.


What to Read Next


P.S.

This year’s Atlantic Festival is wrapping up today, and you can watch sessions via our YouTube channel. A quick recommendation from me: Atlantic CEO Nick Thompson speaks about a new study showing a surprising relationship between generative AI and conspiracy theories.

— Damon
