ELI Issues Guidance on the Use of Algorithmic Decision-Making Systems by Public Administration


The ELI Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration lay down the foundation for supplementing European legislation on artificial intelligence (AI) in the specific context of public administration. They do so in a manner that will not hinder innovation but will provide solid safeguards to improve citizens’ confidence in the use of the technology in this field.

Because it discharges the State's public functions, public administration processes far more data than most private entities. New technologies, such as AI, can therefore play a significant role in the modernisation and overall improvement of the functioning of public administration. At the same time, however, guaranteeing the transparency, correctness and security of the processed data is fundamental. The scope for implementing AI in the operation of public administration is therefore limited by the principle of legality, the need to ensure a high degree of reliability of the technologies used, and the need to ensure respect for citizens’ rights.

Public administration is, as a result, confronted with specific challenges in the deployment of AI and, more generally, of algorithmic decision-making systems (ADMSs), even where those systems do not use specific AI technologies, such as machine learning. The use of such systems raises particular problems relating to the principle of good administration. In addition, issues such as transparency, accountability, compliance and non-discrimination are especially relevant in the context of public administration.

The Model Rules are a significant and timely contribution to legal discussions and attempts to regulate AI in Europe. While inspired to some extent by EU law, and compatible not only with existing EU law, but also with the law currently being drafted, the ELI Model Rules were designed so as not to be dependent on EU law. As such they could serve as inspiration to national (including non-EU) and EU legislators, to governments and administrations.

There are various ways in which concerns raised by algorithmic decision-making can be addressed. The central idea underlying the Model Rules is an Impact Assessment. As the variety of situations in which algorithmic decision-making is employed precludes a one-size-fits-all approach, the Model Rules adopt a tiered approach, distinguishing high-risk systems warranting an Impact Assessment (Annex 1), low-risk systems which do not (Annex 2), and systems that cannot readily be classified ex ante as falling within either Annex. In the latter case, an Impact Assessment is required if a risk evaluation (screening procedure) reveals that the system poses at least a substantial risk.

The output consists of 16 articles, which set out the Impact Assessment procedure and provide additional safeguards for high-risk systems, such as scrutiny of the Impact Assessment by an expert board and the opportunity for public participation.

The Model Rules were prepared under the leadership of Professor Marek Wierzbowski (as Chair) and Project Co-Reporters Judge Marc Clément, Professor Paul Craig and Professor Jens-Peter Schneider, who commented:

‘Artificial intelligence and algorithmic decision-making systems transform public administration. Such transformation needs rigorous assessment of potentially positive and negative impacts of these technologies. The Model Rules drafted as a team-effort and discussed widely within the unique framework provided by the European Law Institute help to ensure that such innovation in public administration is in accord with the rights of European citizens.’

More information about the project and the full report are available here. A webinar on the topic, open to the public free of charge, will take place on 13 April 2022 from 12:00–13:30 CET. To register, please click here.