Date/Time
Date(s) - 29/04/2020
7:00 pm - 8:30 pm

Speaker: Prof. Dr. Christoph Lütge, Director, TUM Institute for Ethics in Artificial Intelligence (IEAI – https://ieai.mcts.tum.de/)

Register in advance for this webinar:

https://zoom.us/webinar/register/WN_eRzDGkQoTkaM57sCNER1_g

After registering, you will receive a confirmation email containing information about joining the webinar.

Artificial Intelligence (AI) systems and applications are more than merely technical innovations. In the decades to come, they will shape societies and the lives of billions of people across the globe. They will raise, and are already raising, ethical questions related to technological innovation in many new ways.

Recently, a number of ethical principles and policies for AI have been proposed, by the OECD, the EU High Level Expert Group on AI, the AI4People Group (for the European Parliament), by Chinese institutions, as well as others. Notably, there is considerable overlap between these approaches. The main question for the future will be how to implement these principles at the level of concrete AI systems. The Technical University of Munich’s Institute for Ethics in Artificial Intelligence (TUM IEAI) is one of the key institutes worldwide where this interdisciplinary research is already being conducted. This presentation will focus particularly on AI in the health sector.

Christoph Lütge is Full Professor of Business Ethics and Director of the Institute for Ethics in Artificial Intelligence at Technical University of Munich (TUM). He has a background in business informatics and philosophy, having taken his PhD at the Technical University of Braunschweig in 1999 and his habilitation at the University of Munich (LMU) in 2005. He was awarded a Heisenberg Fellowship in 2007. His most recent books are: “The Ethics of Competition” (Elgar, 2019) and “Ethik in KI und Robotik” (Hanser, 2020, with coauthors). He has held visiting positions at Harvard, University of Pittsburgh, University of California (San Diego), Taipei, Kyoto and Venice. He is a member of the Scientific Board of the European AI Ethics initiative AI4People as well as of the German Ethics Commission on Automated and Connected Driving. He has also done consulting work for the Singapore Economic Development Board and the Canadian Transport Commission.

TUM has long been a driving force in researching the mutual interactions of science, technology and society. With financial support from Facebook, TUM launched the IEAI in October 2019. The IEAI is chartered to conduct independent, game-changing research into wide-reaching, ethical and responsible applications of AI. The new institute follows the university’s bold creation of the Munich Center for Technology in Society (MCTS) in 2012, whose mission is to better understand and reflexively shape the multiple interactions between science, technology and society.

The IEAI conducts inter-, multi-, and transdisciplinary research that promotes active collaboration between the technical, engineering and social sciences, while also actively courting interaction with a wide group of international stakeholders from academia, industry and civil society. This approach enables the IEAI to comprehensively address a growing group of ethical challenges arising at the interface of technology and human values while aiding in the development of thoroughly operational ethical frameworks in the field of AI.

The IEAI’s research addresses challenges such as privacy; safety; ethics, fairness and diversity; public discourse; transparency and accountability; governance and regulation; and social responsibility and sustainability. As a platform for meaningful cooperation between industry, civil society and academia, the IEAI organizes workshops, conferences and seminars to promote exchange between a wide range of important stakeholders.