The team was tasked with anticipating the dangers of new technologies, raising questions about the company’s commitment to responsible use of AI

Having just invested billions in its partnership with OpenAI, the tech behemoth behind ChatGPT, Microsoft is positioning itself at the forefront of today’s boom in generative AI. And in a move seemingly destined to spur alarming headlines, the company has also just laid off its ethics and society team: the employees whose job it was to ensure AI products are designed and deployed responsibly, The Verge reports. Once a 30-member department, the team was cut to just seven employees during an October 2022 restructuring, before those who remained became casualties of the company’s mass layoffs this week.

The decision to eliminate the ethics department while simultaneously doubling down on AI offerings has drawn sharp criticism. The team Microsoft eliminated was tasked with identifying how this new technology could be misused, and amid a boom in generative AI, the company is under “immense pressure” to move recent OpenAI models into customers’ hands at a “very high speed,” according to Vice President of AI John Montgomery. “We appreciate the trailblazing work Ethics & Society did to help us on our ongoing responsible AI journey,” the company said in a statement, insisting that the team’s dissolution does not reflect a declining investment in responsible use of the technology. But those assurances have done little to quell public concern.

While Microsoft maintains an active Office of Responsible AI, charged with creating rules and principles to govern the company’s AI initiatives, many believe that this department alone can’t identify all the risks. Former Microsoft employees emphasize that it was the ethics and society team’s job to consider the potential negative outcomes of AI and to ensure that the company’s responsible AI principles were actually embodied in its products. The team sometimes even ran role-playing exercises such as “Judgment Call” to help designers envision dangers they might otherwise fail to recognize.

According to computer scientist Ben Zhao, the consequences of applying Silicon Valley’s “move fast and break things” mentality to the development of AI products could be disastrous. “The decision-makers at Microsoft and Meta and Google and so on—the ‘powers that be’ in big tech—do not seem to have a core sense of ethics,” he says, citing multiple lawsuits over the use of artists’ work in training data sets. “The team Microsoft laid off is the one who warned them about the fact that misappropriating images from artists was legally and ethically questionable. But those recommendations were basically just ignored. And now teams like this are being laid off… That’s not a good trend.”

In recent months, Zhao has been busy cleaning up the fallout. Together with a group of researchers at the University of Chicago, he has led the development of Glaze, a tool intended to protect artists from having their styles replicated without permission. Glaze alters images at the pixel level: the changes are almost undetectable to the human eye, but enough to stop machine-learning models from accurately mimicking an artist’s work. Zhao sees it not as a full solution, but as a stopgap for those whose livelihoods depend on their ability to profit from art and who are hit hardest by the AI boom, namely illustrators and commission artists whose clients could now turn to generative models like Stable Diffusion and DALL-E.
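For the technically curious, the general idea can be sketched in a few lines of code. The snippet below is not Glaze itself: the team’s actual method targets the style representations inside text-to-image diffusion models, while here a generic pretrained CNN stands in as the feature extractor, and the step sizes and pixel budget are illustrative assumptions. It shows the underlying principle, though: nudge each pixel within a tiny bound so the image looks unchanged to a person, while its machine-readable features drift away from the original.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

EPSILON = 4 / 255    # per-pixel budget: keeps the change nearly invisible
STEPS = 50           # optimization iterations
STEP_SIZE = 1 / 255  # size of each pixel nudge

# Stand-in feature extractor (an assumption for illustration; Glaze targets
# the style features of text-to-image diffusion models instead).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
for p in extractor.parameters():
    p.requires_grad_(False)

def cloak(path: str) -> torch.Tensor:
    """Return a copy of the image whose pixels barely change but whose
    machine-readable features drift away from the original."""
    img = T.Compose([T.Resize((224, 224)), T.ToTensor()])(
        Image.open(path).convert("RGB")
    ).unsqueeze(0)
    with torch.no_grad():
        original_feats = extractor(img)  # what a style mimic would latch onto
    delta = torch.zeros_like(img, requires_grad=True)
    for _ in range(STEPS):
        feats = extractor((img + delta).clamp(0, 1))
        # Minimizing the negative distance pushes the features apart.
        loss = -F.mse_loss(feats, original_feats)
        loss.backward()
        with torch.no_grad():
            delta -= STEP_SIZE * delta.grad.sign()  # projected gradient step
            delta.clamp_(-EPSILON, EPSILON)         # stay within the budget
            delta.grad.zero_()
    return (img + delta).clamp(0, 1).detach()
```

The cloaked output looks essentially identical to the original to a human viewer, but a model extracting features from it no longer “sees” the same image, which is the property a tool like Glaze relies on.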

The implications of rushing new technologies into users’ hands before they’ve been vetted by, say, an ethics team could extend far beyond the art world. Meta’s private AI language model, LLaMA, was recently leaked online, for example, and cybersecurity experts warn that unauthorized use could fuel malicious scams. (“Get ready for loads of personalized spam and phishing attempts,” tweeted cybersecurity expert Jeffrey Ladish following the news.)

Some argue that open access to generative AI models is a good thing, but few are happy about Microsoft’s decision to slash the team charged with interrogating those models’ ethical implications while rapidly scaling up its public offerings, especially after its recently released Bing AI chatbot made headlines for displaying erratic behavior and supplying users with incorrect information. New AI developments seem likely to revolutionize countless industries, but judging from the controversies they’ve already spurred, we may as well buckle up: it’s going to be a bumpy ride.
