Microsoft just laid off one of its responsible AI teams
Would you believe that we're back with another Platformer scoop? This one concerns the fate of a key team focused on responsible artificial intelligence at Microsoft, and what happens now that they're gone. We could have put this behind the paywall, but we think sharing news like this is in the public interest. Upgrading your subscription today, even for just $10, helps pay our bills and lets us continue doing this work. Plus, paid subscribers get more: recently, they received our scoop about Twitter's latest layoffs and how those cuts appear to position Boring Company CEO Steve Davis as a likely candidate for Twitter CEO. We'd love to email you those scoops every week. Subscribe now and we'll send you the link to join us in our chatty Discord server, where we now post all of our breaking news first.
As the company accelerates its push into AI products, the ethics and society team is gone

I.

Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned.

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company's AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.

"Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this," the company said in a statement. "Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice. […] We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey."

But employees said the ethics and society team played a critical role in ensuring that the company's responsible AI principles were actually reflected in the design of the products that shipped.

"People would look at the principles coming out of the office of responsible AI and say, 'I don't know how this applies,'" one former employee says. "Our job was to show them and to create rules in areas where there were none."

In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger "responsible innovation toolkit" that the team posted publicly.

More recently, the team had been working to identify risks posed by Microsoft's adoption of OpenAI's technology throughout its suite of products.

The ethics and society team was at its largest in 2020, when it had roughly 30 employees, including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.

In a meeting with the team following the reorg, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move swiftly. "The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers' hands at a very high speed," he said, according to audio of the meeting obtained by Platformer.

Because of that pressure, Montgomery said, much of the team was going to be moved to other areas of the organization.

Some members of the team pushed back. "I'm going to be bold enough to ask you to please reconsider this decision," one employee said on the call. "While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we've had. And they are significant."

Montgomery declined. "Can I reconsider? I don't think I will," he said. "Cause unfortunately the pressures remain the same. You don't have the view that I have, and probably you can be thankful for that. There's a lot of stuff being ground up into the sausage."

In response to questions, though, Montgomery said the team would not be eliminated. "It's not that it's going away — it's that it's evolving," he said. "It's evolving toward putting more of the energy within the individual product teams that are building the services and the software, which does mean that the central hub that has been doing some of the work is devolving its abilities and responsibilities."

Most members of the team were transferred elsewhere within Microsoft. Afterward, remaining ethics and society team members said the smaller crew made it difficult to implement their ambitious plans.

About five months later, on March 6, remaining employees were told to join a Zoom call at 11:30AM PT to hear a "business critical update" from Montgomery. During the meeting, they were told that their team was being eliminated after all.

One employee says the move leaves a foundational gap in the user experience and holistic design of AI products. "The worst thing is we've exposed the business to risk and human beings to risk in doing this," they explained.

The conflict underscores an ongoing tension for tech giants that build divisions dedicated to making their products more socially responsible. At their best, they help product teams anticipate potential misuses of technology and fix any problems before they ship. But they also have the job of saying "no" or "slow down" inside organizations that often don't want to hear it — or of spelling out risks that could create legal headaches for the company if surfaced in discovery. And the resulting friction sometimes boils over into public view.

In 2020, Google fired ethical AI researcher Timnit Gebru after she published a paper critical of the large language models that would explode into popularity two years later. The ensuing furor led to the departures of several more top leaders within the department and diminished the company's credibility on responsible AI issues.

Members of the ethics and society team said they generally tried to be supportive of product development. But they said that as Microsoft became focused on shipping AI tools more quickly than its rivals, the company's leadership grew less interested in the kind of long-term thinking the team specialized in.

It's a dynamic that bears close scrutiny. On one hand, Microsoft may now have a once-in-a-generation chance to gain significant traction against Google in search, productivity software, cloud computing, and other areas where the giants compete. When it relaunched Bing with AI, the company told investors that every 1 percent of market share it could take away from Google in search would translate into $2 billion in annual revenue.

That potential explains why Microsoft has so far invested $11 billion in OpenAI and is currently racing to integrate the startup's technology into every corner of its empire. It appears to be having some early success: the company said last week that Bing now has 100 million daily active users, with one-third of them new since the search engine relaunched with OpenAI's technology.

On the other hand, everyone involved in the development of AI agrees that the technology poses potent and possibly existential risks, both known and unknown.
Tech giants have taken pains to signal that they are taking those risks seriously — Microsoft alone has three different groups working on the issue, even after the elimination of the ethics and society team. But given the stakes, any cuts to teams focused on responsible work seem noteworthy.

II.

The elimination of the ethics and society team came just as its remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released tools powered by OpenAI to a global audience.

Last year, the team wrote a memo detailing brand risks associated with the Bing Image Creator, which uses OpenAI's DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft's first public collaborations with OpenAI.

While text-to-image technology has proved hugely popular, Microsoft researchers correctly predicted that it could also threaten artists' livelihoods by allowing anyone to easily copy their style.

"In testing Bing Image Creator, it was discovered that with a simple prompt including just the artist's name and a medium (painting, print, photography, or sculpture), generated images were almost impossible to differentiate from the original works," researchers wrote in the memo.

They added: "The risk of brand damage, both to the artist and their financial stakeholders, and the negative PR to Microsoft resulting from artists' complaints and negative public reaction is real and significant enough to require redress before it damages Microsoft's brand."

In addition, last year OpenAI updated its terms of service to give users "full ownership rights to the images you create with DALL-E." The move left Microsoft's ethics and society team worried. "If an AI-image generator mathematically replicates images of works, it is ethically suspect to suggest that the person who submitted the prompt has full ownership rights of the resulting image," they wrote in the memo.

Microsoft researchers created a list of mitigation strategies, including blocking Bing Image Creator users from using the names of living artists as prompts and creating a marketplace to sell an artist's work that would be surfaced if someone searched for their name.

Employees say neither of these strategies was implemented, and Bing Image Creator launched into test countries anyway. Microsoft says the tool was modified before launch to address concerns raised in the document, and that the memo prompted additional work from its responsible AI team.

But legal questions about the technology remain unresolved. In February 2023, Getty Images filed a lawsuit against Stability AI, makers of the AI art generator Stable Diffusion. Getty accused the AI startup of improperly using more than 12 million images to train its system. The accusations echoed concerns raised by Microsoft's own AI ethicists. "It is likely that few artists have consented to allow their works to be used as training data, and likely that many are still unaware how generative tech allows variations of online images of their work to be produced in seconds," employees wrote last year.
Those good tweets

For more good tweets every day, follow Casey's Instagram stories.

Talk to us

Send us tips, comments, questions, and ethical AI principles: casey@platformer.news and zoe@platformer.news.

By design, the vast majority of Platformer readers never pay anything for the journalism it provides. But you made it all the way to the end of this week's edition — maybe not for the first time. Want to support more journalism like what you read today? If so, click here.