
What is Ethical AI?

With the explosion of artificial intelligence (AI) from science fiction into mainstream technology, ethical AI is increasingly under scrutiny. Ethical AI concerns how machines are trained and how they use our data to make, or assist with, decisions that impact human lives. But is there a way to formalize ethics, a human philosophy, in the context of machines?

We are honored to publish this uniquely insightful interview on various aspects of ethical AI. Prof. Michael Jordan, Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at Berkeley and a thought leader in machine learning and AI, sits down with Prof. Shafi Goldwasser, a Turing Award winner, Chief Scientist and Co-founder of Duality, and the Director of the Simons Institute for the Theory of Computing at Berkeley. Their fascinating conversation explores how the human values of privacy, fairness, transparency, and trust must be built into AI to make it ethical, and why this is also good for business.

Prof. Jordan explains his interest in the intersection of three disciplines, where "economics meets computer science meets statistics." These three disciplines often come together when real-world problems are being solved, such as building commerce, transportation, or logistics systems.

Another important topic discussed is the limitations and the expected evolution of AI and ML systems, considering the massive growth in their applications and the scale of the populations affected by them.

Ethical AI and Human Values

The two discuss the importance of making the human ethical values of privacy, fairness, transparency, and trust integral parts of AI and ML applications, and the role that computer science, economics, and data science, as well as the social sciences and law, must play in responsibly using these powerful technologies in mass deployments.

Goldwasser and Jordan also discuss the need to ensure that the public is well educated on the terminology of AI applications, and on how people might be affected by such applications when they are used for decision-making support in the public and private sectors.

Finally, Jordan and Goldwasser discuss trust and what can be done to enhance people's trust in machine-learning systems: how audit, recourse, and benchmarking can support higher levels of trust in decisions supported by machines.

To summarize, in Prof. Jordan’s own words: “…The mature thing to be doing is thinking about systematic ways that technology and humans interact.”

Watch the full interview below.
