What Does Ethical AI Look Like?

As the power of generative AI is realized, policies and practices to keep it in check are the topic of debate.

MIT IDE
MIT Initiative on the Digital Economy


By Paula Klein

Using AI ethically may seem obvious to some, yet experts from various fields are grappling with multiple — and sometimes conflicting — governance and regulatory approaches being adopted by private sector companies and nations around the globe. What ethics should apply to the new world of AI, and what guardrails are needed?

At the recent 2024 MIT AI Conference: Tech, Business, and Ethics, sponsored by the MIT Industrial Liaison Program, MIT and industry speakers addressed specific concerns such as better auditing tools, uniform training standards, risk avoidance, and coordinated human-machine interaction to improve today’s machine learning models and their application. They also raised some overarching philosophical issues that surface when data science collides with the nuances of social norms and human behavior.

To Julie Shah, leader of the Interactive Robotics Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), a core goal of AI should be “positive-sum automation where machines and humans are designed to work together to increase productivity, transparency and flexibility.”

Shah, also a professor in MIT’s Department of Aeronautics and Astronautics, described the need for a “bottom-up” approach to employee involvement so that success can be measured by “enhanced performance and satisfaction of human teams.”

It’s generally assumed that ethical AI means taking a safe, secure, humane, and environmentally friendly approach to AI. But we’re not there yet.

For example, sensors and other AI-enabled systems have deployment limitations as well as vulnerabilities, said Retsef Levi, MIT Sloan Professor of Operations Management. To optimize human-machine collaboration, Levi said, we need to assess what humans excel at (e.g., context and nuance) against the strengths of machines (e.g., speed and repetition).

[Slide: “What Machines Do Best” vs. “What Humans Do Best”]

From a broad perspective, Levi is also concerned about the “erosion of human capabilities and task execution” when we rely too heavily on AI systems for operations and decision-making. “What is the impact on [human] resilience” in this new environment? “What are the societal consequences?”

Research Responsibility

Also at issue are the difficulties of devising appropriate governance and regulatory guidelines for generative AI in academic research as well as business markets. While benefits for efficiency and speed are clear, threats to labor, reliability and accuracy loom large.

Aude Oliva, Director of the MIT-IBM Watson AI Lab, focused on the datasets used to train large language models, the capabilities and failure modes of foundation models, and the impact of scaling. “We’re living in a nonlinear world,” she said, where change — from human to mechanical intelligence — can happen very rapidly, over time, or not at all. “We need open data sets, new architectures, energy efficiency, and transparency” to fuel ongoing innovation.

Oliva noted that it’s unclear how long it could take to progress from human perception and reasoning to pattern matching, and eventually to complex AI pattern generation, which would require human characteristics such as common sense.

Oliva, also Director of Strategic Industry Engagement at the MIT Schwarzman College of Computing, contrasted the broad scope of the U.S. executive order recently issued by the White House with the narrower focus on AI risk outlined by the EU. Although “the EU and U.S. are looking at different issues, both are important” to international AI development.

In addition to government intervention, individual industries and companies need to adapt and adopt new procedures to accommodate AI-driven business. Prasanna Sattigeri, Principal Research Scientist at IBM Research AI and the MIT-IBM Watson AI Lab, noted that change management continues to be critical. “Culture has to change and is changing very quickly,” he said, helping businesses and employees absorb new technologies and use them appropriately.

Forging Ahead

Shah of CSAIL also emphasized the need to push ahead despite obstacles and the unknown. Many previous efforts — from GM’s use of robots in 1982 to a U.S. Navy plan for robotic vessels in 2002 — have failed to deliver, she said.

But we can’t look for “zero-sum automation or measure against present-day human output.” AI success requires “tremendous time, efforts and costs…but designing for humans and automation together” from the outset will accelerate change.

From an industry perspective, Teddy Ort, Senior Director of Robot Perception & AI at Symbotic, said the biggest threat is not getting on board with AI and being left behind.

Elenna Dugundji, Research Scientist at the MIT Center for Transportation and Logistics, spoke about AI advances in pharmaceutical industry supply chains, including procurement, demand forecasting, and storage analysis. She is “optimistic about where we are now, the future, and how we’ve used AI to do tasks more efficiently and reliably. Generative [AI] took our world by storm and I’m really interested to see how it evolves,” she said. At the same time, critical thinking and continuous checks remain vital.

See the full agenda and watch videos from the event here.
