Platformer - OpenAI's alignment problem
Here’s this week’s free column — a look at the wild drama at OpenAI and what it means for very real concerns about AI safety. Do you value independent reporting on content moderation during wartime? If so, your support would mean a lot to us. Upgrade your subscription today and we’ll email you first with all our scoops — like our recent interview with a former Twitter employee whom Elon Musk fired for criticizing him. ➡️
I.

Less than two months ago, on stage at the Code Conference, I asked Helen Toner how she thought about the awesome power that she’d been entrusted with as a board member at OpenAI. Toner has the power under the company’s charter to halt OpenAI’s efforts to build an artificial general intelligence. If the circumstances presented themselves, would she really stop the company’s work and redirect employees to other projects?

At the time, Toner demurred. I had worded my question inelegantly, suggesting that she might be able to shut down the company entirely. The moment passed, and I never got my answer — until this weekend, when the board Toner serves on effectively ended OpenAI as we know it. (She declined to comment when I emailed her.)

By now I assume you have caught up on the seismic events of the past three days at OpenAI: the shock firing on Friday of CEO Sam Altman, followed by company president Greg Brockman quitting in solidarity; a weekend spent negotiating their possible returns; ex-Twitch CEO Emmett Shear being installed by the board as OpenAI's new interim CEO; and minority investor Microsoft swooping in to create a new advanced research division for Altman and Brockman to run. By mid-afternoon Monday, more than 95 percent of OpenAI employees had signed a letter threatening to quit unless Altman and Brockman were reinstated. It seems all but certain that there will be more twists to come.

I found this turn of events as stunning as anyone, not least because I had just interviewed Altman on Wednesday for Hard Fork. I had run into him a few days before OpenAI’s developer conference, and he suggested that we have a conversation about AI’s long-term future. We set it up for last week, and my co-host Kevin Roose and I asked about everything that has been on our minds lately: about copyright, about open-source development, about building AI responsibly and avoiding worst-case scenarios. (We’ve just posted that interview, along with a transcript, here.)

The days that followed revealed fundamental tensions within OpenAI about its pace of development and the many commitments of its CEO. But in one fundamental respect, the story remains as confusing today as it was on Friday when the board made its bombshell announcement: why, exactly, did OpenAI’s board fire Sam Altman?

The official explanations have proliferated. The board’s original stated reason was that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” In an all-hands meeting the next day, the company’s chief scientist and a member of the board, Ilya Sutskever, suggested that the removal had been necessary “to make sure that OpenAI builds AGI that benefits all of humanity.” To some employees, the remark suggested that the firing may have been connected to concerns that OpenAI was unduly accelerating the development of the technology.

Later on Saturday, the chief operating officer, Brad Lightcap, ruled out many possible explanations while blaming the situation on poor communication. “We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety or security/privacy practices,” he told employees. “This was a breakdown in communication between Sam and the board.”

The following evening, after he was named CEO, Shear also ruled out a safety explanation.
“The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that,” he posted on X, without elaborating.

The cumulative effect of these statements was to all but exonerate Altman. In a few short years, he had led the company from a nonprofit research effort to a company worth as much as $90 billion to investors. And now he was being driven out over some unspecified miscommunications? It didn’t add up.

In their silence, the board ensured that Altman became the protagonist and hero of this story. Altman’s strategic X posts, cleverly coordinated with his many allies at the company, gave him the appearance of a deposed elected official about to be swept back into power by the sheer force of his popularity.

By the time Sutskever apologized for his role in the coup Monday morning, and expressed his desire to reinstate Altman and Brockman to their roles, I had officially lost the plot. By some accounts, Sutskever had spearheaded the effort to remove Altman from his post. Now that he’s had some time to read the room, apparently, he’s changed his mind.

So far much of the attention this story has received has focused, understandably, on the value that has been destroyed. ChatGPT is the most compelling product in a generation, and had it been left to develop according to plan it likely would have cemented OpenAI’s position as one of the three or four most important technology companies in the world.

But to most people, it will never matter how much OpenAI was worth. What matters is what it built, and how it deployed it. What jobs it destroyed, and what jobs it created. What capabilities the company’s technology has — and what features it disabled before release.

Navigating those tensions is the role of OpenAI’s nonprofit board. And while it botched its role badly, that can’t be the end of the story. Someone has to navigate those tensions. And before we blame this weekend’s fiasco on OpenAI’s unusual structure, or on nonprofit governance broadly, it’s worth considering why the board was set up this way in the first place.

II.

When they launched OpenAI in 2015, the founders considered several models. One was a public-sector project funded by the government — but our government being what it is, they saw no feasible way to spin up such an effort. Another was a venture-backed startup. But the founders believed the superintelligence they were trying to build should not be concentrated in the hands of a single for-profit company.

That left the nonprofit model, which they hoped “would be the most effective vehicle to direct the development of safe and broadly beneficial AGI while remaining unencumbered by profit incentives.” They raised $1 billion, including $100 million from Elon Musk, and set to work.

Four years later, they discovered what everyone else who tries to train a large language model eventually realizes: $1 billion doesn’t get you very far. To continue building, they would have to take money from private investors — which meant setting up a for-profit entity underneath the nonprofit, similar to the way the Mozilla Foundation owns the corporation that oversees revenue operations for the Firefox browser, or how the nonprofit Signal Foundation owns the LLC that operates the messaging app.

While the membership of the board has changed over the years, its makeup has always reflected a mix of public-benefit and corporate interests. At the time of Altman’s firing, though, it had arguably skewed away from the latter.
In March, LinkedIn co-founder Reid Hoffman left the board; he co-founded a for-profit rival two months later. That left three OpenAI employees on the board: Altman, Brockman, and Sutskever. And it left three independent directors: Toner, Quora co-founder Adam D’Angelo, and entrepreneur Tasha McCauley.

Toner and McCauley have worked in the effective altruism movement, which seeks to maximize the leverage of philanthropic dollars to do the most good possible. The reputation of effective altruism cratered last year along with the fortunes of one of its most famous adherents, Sam Bankman-Fried. Bankman-Fried had sought to generate wealth as rapidly as possible so he could begin giving it all away, ultimately defrauding FTX customers out of billions of dollars.

Taken to this extreme, EA can create harmful incentives. But for rank-and-file EAs, the movement’s core idea was to apply some intellectual rigor to philanthropy, which too often serves only to flatter the egos of its donors. EA groups often invest in areas that other funders neglect, and were early funders of research into the potential long-term consequences of AI. They feared that advances in AI would begin to accelerate at a rapid clip, and then compound, delivering a superintelligence into the world before we had made the necessary preparations. The EAs were interested in this long before large language models were widely available.

And the thing is, they were right: we are currently living at a time of exponential improvement in AI systems, as anyone who has used both GPT-3 and GPT-4 can attest. AI safety researchers will be the first to tell you that this exponential progress could hit a wall: that we won’t be able to solve the research questions necessary to see a similar step-change in functionality if and when a theoretical GPT-5 is ever trained and released. But given the progress that has been made just in the past couple of years, the EAs would say, shouldn’t we do some worst-case-scenario planning? Shouldn’t we move cautiously as we build systems that can aid in the discovery of novel bioweapons, or can plan and execute schemes on behalf of cybercriminals, or can corrupt our information sphere with synthetic media and hyper-personalized propaganda?

If you are running a for-profit company, questions like these can be extremely annoying: they can suffocate product development under layers of product, policy, and regulatory review. If you’re a nonprofit organization, though, these can feel like the only questions to ask.

OpenAI was founded, after all, as a research project. It was never meant to compete on speed. And yet, as The Atlantic reported over the weekend, ChatGPT itself launched last year less out of the company’s certainty that it would be beneficial to society than out of fear that Anthropic was about to launch a chatbot of its own. Time and again over the next year, OpenAI’s for-profit arm would make moves to extend its product lead over its rivals.

Most recently, at its developer conference, the company announced GPTs: custom chatbots that represent a first step toward agents that can perform higher-level coordination tasks. Agents are a top concern of AI safety researchers — and their release reportedly infuriated Sutskever.

And during all this time, was Altman squarely focused on OpenAI? Well … not really. He was launching Worldcoin, his eyeball-scanning crypto orb project. He was raising a new venture fund that would focus on “hard” tech, according to Semafor.
He was also seeking billions of dollars to create a new company that would build AI chips, Bloomberg reported.

Silicon Valley has a long history of founders running multiple companies. Steve Jobs had roles at Apple and Pixar simultaneously; Jack Dorsey served as CEO of Twitter and Square at the same time. What’s different in the Altman case is that OpenAI was driven by a public mission, and one that would seem to foreclose certain for-profit extracurricular work. If OpenAI is designed to promote cautious AI development, and its CEO is working to build a for-profit chip company that might accelerate AI development significantly, the conflict seems obvious. And that’s before you even get into the question of what Altman told the board about any of this, and when.

None of which is to say that the board couldn’t have found a way to resolve this situation without firing Altman, particularly given the billions of dollars at stake. But the board’s job is explicitly to set aside concerns about money in favor of safe AI development. And its members would not be the first people to break with Altman over fears that OpenAI has not been true to its mission.

III.

However valid these concerns may have been, it’s now clear that the board overplayed its hand. The success of ChatGPT had made the nonprofit an afterthought in the minds of its 700-plus employees, to say nothing of the world at large. And Altman is a popular leader, both inside OpenAI and as a kind of roving AI diplomat. The board was never going to win a fight with him, even if it had communicated its position effectively.

Now the board stands to lose everything. One former employee described OpenAI to me over the weekend as “a money incinerator.” As successful as it is, ChatGPT is not close to being profitable. There’s a reason Altman was out raising new capital — the company needs it to keep the lights on. And now, with everything that has transpired, it is extremely difficult to imagine how the company could raise the funds necessary to power its ambitions — assuming it even has any employees left to execute on them.

On one hand, history may show that the board was a good steward of its mission. On the other, though, the board was also responsible for being a good steward of OpenAI as an institution. In that, it failed entirely.

It is understandable to look at this wreckage and think — well, that’s what happens when you mix nonprofit governance with for-profit incentives. And surely there is something to that point of view. At the same time, though, it’s worth asking whether we would still be so down on OpenAI’s board had Altman been focused solely on the company and its mission. There’s a world where an Altman, content to do one job and do it well, could have managed his board’s concerns while still building OpenAI into the juggernaut that until Friday it seemed destined to be.

That outcome seems preferable to the world we now find ourselves in, where AI safety folks have been made to look like laughingstocks, tech giants are building superintelligence with a profit motive, and social media flattens and polarizes the debate into warring fandoms. OpenAI’s board got almost everything wrong, but it was right to worry about the terms on which we build the future — and I suspect it will now be a long time before anyone else in this industry attempts anything other than the path of least resistance.

On the emergency podcast today: Kevin and I trade notes on the weekend’s events.
And then, we present our interview from Wednesday with Altman, in the final hours before OpenAI got turned upside-down. (This episode will take the place of our usual Friday episode; we’ll be back next week after the holiday.)

Apple | Spotify | Stitcher | Amazon | Google | YouTube
Those good posts: Send us good posts from Bluesky and Threads! Just reply to this email.

Talk to us: Send us tips, comments, questions, and posts: casey@platformer.news and zoe@platformer.news.

By design, the vast majority of Platformer readers never pay anything for the journalism it provides. But you made it all the way to the end of this week’s edition — maybe not for the first time. Want to support more journalism like what you read today? If so, click here: