Webinar on the ELI Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration

13.04.2022

On 13 April 2022, ELI organised a webinar on the ELI Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration.

The webinar provided an opportunity to discuss the recently published ELI Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration, which lay down the foundation for supplementing European legislation on artificial intelligence (AI) in the specific context of public administration. They do so in a manner that will not hinder innovation but will provide solid safeguards to improve citizens’ confidence in the use of the technology in this field.

The webinar was opened by ELI President Pascal Pichonnaz (Professor, University of Fribourg), who underlined that the ELI Model Rules are a significant and timely contribution to legal discussions and attempts to regulate AI in Europe. They take a proactive approach to assessing the impact of algorithmic decision-making systems (ADMSs) and to ensuring appropriate follow-up.

Team Members then went on to present the ELI Model Rules. Project Co-Reporter Jens-Peter Schneider (Professor, University of Freiburg) explained the process of designing, developing, training and testing an ADMS to be used by public authorities, as well as the screening exercise foreseen in the Model Rules, which take a risk-based approach in determining whether an impact assessment is needed.

Project Team Member Jonathan Dollinger (Research Assistant and Doctoral Candidate, University of Freiburg) focused on the impact assessment procedure, which starts with scoping (Article 5 of the ELI Model Rules) and continues with an impact assessment report (Article 6 of the ELI Model Rules). The report documents the several steps an implementing authority must take to arrive at a comprehensive assessment of an ADMS.

Project Team Member Katarzyna Ziolkowska (Doctoral Candidate, University of Warsaw; Junior Researcher and Lecturer; Junior Associate, Kochański and Partners) went on to present the additional provisions for high-risk ADMSs: the involvement of experts in the auditing of such systems and the involvement of the public (by way of collecting feedback).

Project Team Member Karolina Wojciechowska (Researcher and Assistant Professor, Attorney-at-Law) added that, in order to ensure transparency, the Model Rules foresee that the (extended) impact assessment report should be made publicly available. Where necessary, such a report should be redacted to protect confidential information. In certain situations, the Model Rules also foresee a review and repetition of the assessment.

Jens-Peter Schneider then reflected on common questions raised by different ELI bodies and Members during the drafting of the Model Rules. He underlined, among other things, that the Model Rules, which complement the proposed EU AI Act, aim to strike a balance between innovation and risk management, and that the impact assessment does not constitute a licensing procedure.

Yordanka Ivanova (Legal and Policy Officer, Artificial Intelligence Policy Development and Coordination Unit (CNECT.A.2), European Commission) reflected on the compatibility and complementarity of the ELI Model Rules with EU law. She congratulated the Team on its important work, which is highly relevant, not least for the proposed AI Act that is still under discussion. She also reflected on the Model Rules' possible compatibility with initiatives at national level, on the legal basis for the EU to prescribe an impact assessment, and on the interaction of the ELI Model Rules with the impact assessment under the EU's General Data Protection Regulation (GDPR).

Marc Rotenberg (President, Center for AI and Digital Policy (CAIDP); Adjunct Professor, Georgetown Law) also congratulated the Team on its impressive report. He pointed out that CAIDP similarly identified the need for an impact assessment in its ‘Artificial Intelligence and Democratic Values’ report. The ELI Model Rules therefore provide a key building block for governing AI in a manner that is not only accountable but also consistent with democratic values. He further reflected on the experience in the USA, the work of the Council of Europe and the EU, the blurred line in risk assessment between what might be considered high-risk applications and prohibited applications, and the challenges ahead.

The presentations were followed by a Q&A session with participants, during which the distinction between low- and high-risk systems, the need to consider positive State obligations under human rights law during an impact assessment, and the protection of trade secrets were discussed, among other things.

The Project Team’s PowerPoint presentation is available here and the webinar recording is available below.

To learn more about the ELI Model Rules, please click here.