The ELLIS Unit "SAM" was founded in 2020 as part of the ELLIS Society, which seeks to establish internationally visible, top-level research facilities for machine learning and modern AI in Europe. The SAM Principal Investigators have agreed to work jointly on both the foundations for enhanced functionalities of Artificial Intelligence and Machine Learning (AIML) systems and the pressing needs for security, privacy, and trustworthiness that arise from the widespread use of such systems.
Future Artificial Intelligence and Machine Learning systems will capture reality through a multitude of sensors, interact with humans, derive knowledge, and influence our lives. These systems need to be based on accurate and robust models of the physical and digital worlds as well as on powerful methods for perception and knowledge inference to cope with the complexity of the real world. They will make autonomous decisions and enable enhanced functionalities, e.g., for personalized assistants, immersive multi-person platforms, and autonomous systems. This ELLIS Unit will research the foundational algorithmic solutions needed to enable systems capable of such new functionalities.
Our ELLIS Unit is organized into two research areas led by nine Principal Investigators. The PIs of the Unit form a distinguished group at different levels of seniority, including four ERC Grant recipients (holding three ERC Starting Grants, one ERC Advanced Grant, and one ERC Consolidator Grant).
It will be crucial for AIML systems to derive understanding from data, and to simulate and anticipate future situations by processing and analyzing the growing volume of digital information from applications in business, science, and everyday life. Data arises in a multitude of modalities, such as high-dimensional sensor data, visual content, text and language, and social media. As society depends more and more on data-driven computer systems, the underlying methods must be explainable.
AIML systems will comprise large and dynamic assemblies of interacting computing systems. They will jointly affect ever more aspects of our lives, including security- and privacy-relevant tasks. We need AIML systems that we can trust and that are secure and transparent.
We address key research objectives in a coordinated effort across two research areas. Each area focuses on cutting-edge research questions that together constitute the scientific agenda.
Each PI has a strong research record on both research objectives. These objectives also form the basis for our two research areas (RAs), each of which addresses both objectives in different domains.
RA1 – Visual Understanding, Visual Synthesis, Privacy, and Security
Machine learning has enabled capabilities of visual understanding that had been elusive for decades. Similarly, machine learning has enabled new visual synthesis and simulation algorithms of unprecedented realism and computational efficiency. However, scaling these initial case-specific successes up to entire complex real-world scenes remains a major open challenge. Moreover, current AIML methods are neither robust nor do they lend themselves to the inclusion of prior knowledge and interpretability. In addition, research on privacy and security concerns, such as model stealing or the inference of privacy-relevant information, is in its infancy. This RA aims to address both the enhanced functionality and the trustworthiness of machine-learning-driven visual understanding and synthesis.
RA2 – Human Centered Machine Learning
As machine learning models trained on big data are used to assist or replace human decision makers in domains that affect people's lives, concerns are being raised about the ethics of such data-driven decision making. Similar concerns are fueled by the possibilities that advanced learning-based methods offer for understanding and synthesizing images and videos. Perhaps not surprisingly, these developments have been accompanied by a growing number of missteps, from discrimination against minorities to the spread of misinformation and increasing societal polarization. This RA will focus on human-centered machine learning models and algorithms, specifically designed to avoid such missteps going forward and to guarantee fairness, accountability, transparency, explainability, and privacy for all.
SAM brings together researchers from four research institutions on or near the Saarland Informatics Campus (SIC): the Max Planck Institute for Informatics (MPI-INF), the Max Planck Institute for Software Systems (MPI-SWS), Saarland University (UdS), and the CISPA Helmholtz Center for Information Security (CISPA).