Produced by Pause AI, a global activist group that co-organized the protest, it ended with this plea to the reader: "Pause AI till we all know what the hell Step 2 is."
In the South Park episode "Gnomes," which first aired in 1998, Kenny, Kyle, Cartman, and Stan discover a community of gnomes that sneak out at night to steal underpants from dressers. Why? The gnomes present their pitch deck: "Phase 1: Collect underpants. Phase 2: ? Phase 3: Profit."
The gnomes' business plan has since become one of the great internet memes, used to satirize everything from startup strategies to policy proposals. Memelord in chief Elon Musk once invoked it in a talk about how he planned to fund a mission to Mars. Right now, it captures the state of AI. Companies have built the tech (Step 1) and promised transformation (Step 3). How they get there is still a big question mark.
As far as Pause AI is concerned, Step 2 must involve some kind of regulation. But exactly what it should call for and who will enforce it are up for debate.
AI boosters, on the other hand, are convinced that Step 3 is salvation and tend to gloss over the middle bit. They see us racing toward sunny uplands on the back of an "economically transformative technology," as OpenAI's chief scientist, Jakub Pachocki, put it to me a few weeks ago. They know where they want to go, roughly: it's hazy up there and still some way off. But everyone's taking a different route. Will they all make it? Will anyone?
For every big claim about the future, there's a more sober assessment of how the rubber meets the road, one that quells the hype. Consider two recent studies. One, from Anthropic, predicted which kinds of jobs are going to be most affected by LLMs. (A takeaway: managers, architects, and people in the media should prepare for change; groundskeepers, construction workers, and those in hospitality, not so much.) But its predictions are really just guesses, based on what sorts of tasks LLMs seem to be good at rather than how they actually perform in the workplace.
Another study, put out in February by researchers at Mercor, an AI hiring startup, tested several AI agents powered by top-tier models from OpenAI, Anthropic, and Google DeepMind on 480 workplace tasks frequently performed by human bankers, consultants, and lawyers. Every agent they tested failed to complete most of its tasks.
Why is there such broad disagreement? There are a number of factors. For a start, it's important to consider who's making the claims (and why). Anthropic has skin in the game. What's more, the people telling us that something big is about to happen have reached that conclusion largely on the basis of how fast AI coding tools are improving. But not all tasks can be hacked with coding. Other studies have found that LLMs are bad at making strategic judgment calls, for example.