Who’s Responsible for Irresponsible AI?

Software does whatever it’s programmed to do. The primary factor behind AI ethics is the people who design and create it.

MIT Initiative on the Digital Economy
Apr 21, 2022


By Peter Krass

Talk about artificial intelligence (AI) being ethical and responsible is a bit misleading. Software itself is neither ethical nor responsible; it just does what it’s been programmed to do. The greater concern is the people behind the software. Unfortunately, said panelists at the March 31 Social Media Summit (MIT@SMS), the ethics of many AI developers and their companies fall short.

Some irresponsible or biased practices are due to a kind of high-tech myopia, said Rumman Chowdhury, Twitter’s director of machine learning ethics, transparency and accountability. In Silicon Valley, “people fall into the trap of solving the problems they see right in front of their faces,” she said, and those are often problems faced by the privileged. As a result, she added, “we can’t solve, or even put adequate resources behind solving larger issues of imbalanced data sets or algorithmic ethics.”
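To make the “imbalanced data sets” concern concrete, a first step many teams take is simply measuring how training examples are distributed across groups. The sketch below is a minimal illustration, not anything Chowdhury described; the imbalance_ratio helper, the group labels, and the example counts are all assumptions.

```python
# A purely illustrative first-pass audit of dataset imbalance. The group
# labels and the example data are assumptions for demonstration only.
from collections import Counter

def imbalance_ratio(group_labels):
    """Return the ratio of the largest group's count to the smallest's.
    1.0 means perfectly balanced; large values signal skew."""
    counts = Counter(group_labels)
    return max(counts.values()) / min(counts.values())

# Example: training rows tagged by demographic group.
labels = ["A"] * 900 + ["B"] * 80 + ["C"] * 20
print(imbalance_ratio(labels))  # -> 45.0 (group C is badly underrepresented)
```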

“The most fascinating part of working in responsible AI and machine learning is that we’re the ones that get to think about these systems, truly as socio-technical systems,” Chowdhury said.

Myopia also can be seen in business-school students, noted Renée Richardson Gosline, the panel’s moderator and a senior lecturer in management science at MIT Sloan School and a leader at the MIT IDE.

MBA students “have all of these wonderful ideas for companies that they’d like to launch,” she said. “But the ethics of the AI conversation oftentimes lags behind other concerns that they have.”

‘Massive Harms’

Panelist Chris Gilliard, professor of English at Macomb Community College and an outspoken social media critic, took a much more direct stance. “We should do more than just wait for AI developers to become more ethical,” he insisted. Instead, Gilliard advocates stringent government intervention. In his view, the tradeoff for sophisticated technology should not be surveillance and the loss of privacy:

“If we look to the ways that other industries work…there are mechanisms so that you are typically not allowed to just release something, do massive amounts of damage, and then perhaps address those damages later on.”

Gilliard acknowledged that his pro-regulation stance is opposed in Silicon Valley, where unfettered innovation is coveted. “Using that as an excuse for companies to perpetuate all manner of harms has been a disastrous formulation,” Gilliard said, “not just for individuals, but for countries and society and democracy.”

Panelists, clockwise from top left: Chris Gilliard; moderator Renée Richardson Gosline; Suresh Venkatasubramanian; and Rumman Chowdhury.

Chowdhury acknowledged the responsibility corporations bear. “In industry, doing responsible AI means that you are ensuring that what you are building is, at the very least, not harming people at scale, and you are doing your best to help identify and mitigate those harms,” she said. Beyond that, she added, “responsible AI is also about enabling humans to flourish and thrive.” Many startups are building on these ideas as they develop their companies, she said, and ethical AI may actually “drive the next wave of unicorns.”

Being Ethical Together

Suresh Venkatasubramanian, Assistant Director of the U.S. Office of Science and Technology Policy (OSTP), a branch of the White House, offered a pragmatic perspective. He maintained that “there isn’t one single thing that government or industry or academia needs to do to address these broader questions. It’s a whole coalition of efforts that we have to build together.”

Those efforts, he added, could include “guardrails” and best practices for software development, testing new products on the same populations that will ultimately use them, and more rigorous testing to protect people from what he called “discriminatory impacts.”
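As one concrete example of such a guardrail, a pre-release audit might compare a model’s positive-decision rates across groups. The sketch below is a minimal illustration, not a method the panel endorsed; the audit_selection_rates helper, the 0.8 threshold (the informal “four-fifths rule”), and the example data are all assumptions.

```python
# A minimal sketch of a pre-release disparate-impact check. The helper name,
# the 0.8 "four-fifths rule" threshold, and the example data are assumptions.
from collections import defaultdict

def audit_selection_rates(decisions, groups, threshold=0.8):
    """Flag groups whose positive-decision rate falls below `threshold`
    times the highest group's rate."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Example: a model's yes/no decisions alongside each applicant's group.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(audit_selection_rates(decisions, groups))  # -> {'B': 0.25}
```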

Chowdhury summed it up by saying that “responsible AI is not this thing you do on the side after you build your tech. It is actually a core part of ensuring your tech is durable.”

She urged companies to “carve out meaningful room for responsible AI practices, not as a feel-good function, but as a core business value.”

Venkatasubramanian agreed that articulating ethical values and rights is important. But once that’s done, he added, it’s time to “allow our technologists and our creative folks to build technologies that can help us respect those rights.”

3 Organizations Working for More Ethical Tech

SMS panelists are not just thinking about making tech more ethical; they’re also working with these groups, and others, to make change happen:

· Startups & Society Initiative: Works to accelerate the adoption of more world-positive, ethical, and socially responsible practices in technology firms.

· Parity Responsible Innovation Fund: Invests in technological innovation that preserves or protects privacy and security rights, and ethical use of emerging technologies such as AI and quantum computing.

· National AI Research Resource Task Force: Charged with developing a roadmap for a shared national computing and data infrastructure for AI research, jointly coordinated by the U.S. National Science Foundation (NSF) and the Office of Science and Technology Policy (OSTP).
