Brussels, 06 October 2022
WK 13423/2022 INIT
LIMITE
TELECOM
WORKING PAPER
This is a paper intended for a specific community of recipients. Handling and
further distribution are under the sole responsibility of community members.
CONTRIBUTION
From:
General Secretariat of the Council
To:
Working Party on Telecommunications and Information Society
Subject:
Artificial Intelligence Act - DE comments on 2nd part of 3rd compromise proposal
(document 12549/22; Title IA, Arts 30-85, Annexes V-IX)
Delegations will find in the Annex the DE comments on 2nd part of 3rd compromise proposal on
Artificial Intelligence Act (document 12549/22; Title IA, Arts 30-85, Annexes V-IX).
DE comments on second part of third compromise proposal on AIA (document
12549/22; Title IA, Arts 30-85, Annexes V-IX)
General comment:

Please note that the following views are preliminary, as we are still examining the proposal. We reserve the right to make further comments. We also refer to the comments we submitted on 14 September 2022 and within the CWP on 29 September 2022.

Germany is of the opinion that the specific characteristics of the public administration (and in particular those of the security, migration and asylum authorities, as well as the tax and customs authorities, including the FIU) can be better accommodated in a separate, specific technology act or in a separate section in the Regulation (referred to in this document as "separate regulation"). The provisions in the separate regulation should be exhaustive. For details, we refer to our paper on the separate regulation.

We would also like to state that double regulation should be avoided.
Reference: Recital 61

Third compromise proposal: (61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient. An appropriate involvement of small and medium enterprises in the elaboration of standards supporting the implementation of this Regulation is essential to promote innovation and competitiveness in the field of artificial intelligence within the Union. Such involvement should be appropriately ensured in accordance with Article 5 and 6 of Regulation 1025/2012.

Drafting suggestion: (61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist and a standardization request following Art. 10 of Regulation (EU) No 1025/2012 has not been successful for the reasons described in Art. 41 of this Regulation. Concerning these reasons, it is to be noted that Regulation (EU) 1025/2012 provides that the European standardisation organisation should adopt a standard within the deadline set out in the request of the Commission. However, possible delay may occur in the delivery of such harmonised standards due to the technical complexity of that standard. Such possibility should be considered thoroughly before having recourse to the adoption of common specifications, which should remain an exceptional measure. An appropriate involvement of small and medium enterprises in the elaboration of standards supporting the implementation of this Regulation is essential to promote innovation and competitiveness in the field of artificial intelligence within the Union. Such involvement should be appropriately ensured in accordance with Article 5 and 6 of Regulation 1025/2012.

Comment: Addition 1: Art. 41 does not allow for CS anymore if existing hENs are "insufficient". Addition 2: copied from the Nonpaper on CS from the Commission to clarify the conditions for CS as described in Art. 41. Alternatively, Recital 40 from the Machinery Regulation (as also stated in the Appendix of the Nonpaper) could be copied here: "(40) The current EU standardisation framework which is based on the New Approach principles and on Regulation (EU) No 1025/2012 represents the framework by default to elaborate standards that provide presumption of conformity with the relevant essential health and safety requirements of this Regulation. European standards should be market-driven, take into account the public interest, as well as the policy objectives clearly stated in the Commission's request to one or more European standardisation organisations to draft harmonised standards, within a set deadline and be based on consensus. However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, common specifications for the essential health and safety requirements of this Regulation as an exceptional fall back solution to facilitate the manufacturer's obligation to comply with the health and safety requirements, when the standardisation process is blocked or when there are delays in the establishment of an appropriate harmonised standard. If such delay is due to the technical complexity of the standard in question, this should be considered by the Commission before contemplating the establishment of common specifications."
Reference: Art. 3

Third compromise proposal: (28) 'common specifications' means a set of technical specifications, other than a standard, containing solutions providing a mandatory means to comply with certain requirements and obligations established under this Regulation;

Drafting suggestion: (28) 'common specifications' means a document as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012, other than a standard, containing solutions providing a means to comply with the essential requirements and obligations established under this Regulation;

Comment: We recommend referring to the definition in Regulation (EU) 1025/2012. Art. 41(4) of this Regulation states, in line with that, that producers may use technical solutions that are equivalent to those outlined in the CS to prove conformity. Thus, CS are voluntary and not mandatory. This choice reflects the fact that harmonised standards are also voluntary, unless the Regulation in question defines them to be mandatory – this is not the case for AI. As harmonized standards, and following the rules of the NLF, CS can only cover essential requirements, not all requirements of the Regulation in question or those that the regulator wishes to be covered.
Reference: Article 4b(1)

Comment: Article 4b refers to implementing acts regarding specifying and adopting requirements established in Title III, Chapter 2. From our point of view – even though these requirements may need some more examination and assessment – we would prefer that the conditions for specifying and adopting these requirements be formulated directly in the AI Act.
Reference: Article 4b(1)

Third compromise proposal: …When fulfilling those requirements, the generally acknowledged state of the art shall be taken into account, including as reflected in relevant harmonised standards or common specifications.

Drafting suggestion: Delete "including as reflected in relevant harmonised standards or common specifications."

Comment: Otherwise, delegated acts might be an option. In general, the application of harmonised standards and common specifications is voluntary. By pointing out harmonised standards and common specifications in this context, it sounds like a required mandatory application.
Reference: Article 4b(5)

Third compromise proposal: In order to ensure uniform conditions for the implementation of this Regulation as regards the information to be shared by the providers of general purpose AI systems, the Commission may adopt implementing acts in accordance with the examination procedure referred to in Article 74(2).

Comment: If not regulated directly in the AI Act – which we would prefer – we would suggest guidelines on the proposed cooperation of providers.
Reference: Article 35

Third compromise proposal: 2. The Commission shall make publicly available the list of the bodies notified under this Regulation, including the identification numbers that have been assigned to them and the activities for which they have been notified. The Commission shall ensure that the list is kept up to date.

Drafting suggestion: The Commission shall make the list of the bodies notified under this Regulation, including the identification numbers that have been assigned to them and the activities for which they have been notified, accessible to the public in NANDO. The Commission shall ensure that the list is kept up to date.

Comment: Regardless of the drafting suggestions, the German Federal Government is concerned that the publication of the list of notified bodies in the area of law enforcement, as provided for in Article 35(2) of the AI Regulation, could encourage illegal influence on or research of such bodies, for example by foreign services. Do the Commission or other Member States share this view? Should an exemption clause be included to enable the Member State concerned, under conditions to be defined in more detail, to refrain from publication in individual cases if and to the extent that interests in the protection of secrets conflict with this? DEU security authorities are in favour of this.
Reference: Articles 40 and 41

Comment: In the area of law enforcement, there are specific requirements for IT security, confidentiality, protection of fundamental rights and data protection, as well as specific technical requirements. In DEU, it is being discussed whether it can be ensured that such specific requirements of the security sector can be taken into account within the framework of the standards according to Article 40 and the specifications according to Article 41 of the draft Regulation. How do the COM and the other Member States see this? Should the amendments introduced in Article 41(2) ensure this for "common specifications"?
Reference: Art. 41

Third compromise proposal:
1. The Commission is empowered to adopt, after consulting the AI Board referred to in Article 56, implementing acts establishing common technical specifications for the requirements set out in Chapter 2 of this Title, or, as applicable, with requirements set out in Article 4a and Article 4b, where the following conditions have been fulfilled:
(a) no reference to harmonised standards covering the relevant essential safety or fundamental right concerns is published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012;
(b) the Commission has requested one or more European standardization organisations to draft a harmonised standard for the requirements set out in Chapter 2 of this Title;
(c) the request has not been accepted by any of the European standardization organisations or the standard is not delivered within the deadline.
1a. Before preparing a draft implementing act, the Commission shall inform the committee referred to in Article 22 of Regulation (EU) No 1025/2012 that it considers that the conditions in paragraph 1 are fulfilled.
2. In the early preparation of the draft implementing act establishing the common specification, the Commission shall fulfil the objectives referred to in Article 40(2) and gather the views of relevant bodies or expert groups established under relevant sectorial Union law. Based on that consultation, the Commission shall prepare the draft implementing act. When preparing the common specifications referred to in paragraph 1, the Commission shall fulfil the objectives referred to in Article 40(2) and gather the views of relevant bodies or expert groups established under relevant sectorial Union law.
3. High-risk AI systems or general purpose AI systems which are in conformity with the common specifications referred to in paragraph 1 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title or, as applicable, with requirements set out in Article 4a and Article 4b, to the extent those common specifications cover those requirements.
4. Where providers do not comply with the common specifications referred to in paragraph 1, they shall duly justify in the technical documentation referred to in Article 11 that they have adopted technical solutions that are at least equivalent thereto.

Drafting suggestion:
1. The Commission is empowered to adopt, after consulting the AI Board referred to in Article 56, implementing acts establishing common technical specifications for the requirements set out in Chapter 2 of this Title, or, as applicable, with requirements set out in Article 4a and Article 4b, where the following conditions have been fulfilled:
(a) no reference to harmonised standards covering the relevant essential requirements is published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012;
(b) the Commission has requested one or more European standardization organisations to draft a harmonised standard for the requirements set out in Chapter 2 of this Title;
(c) the request has not been accepted by any of the European standardization organisations or the standard is not delivered within the deadline.
Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2) of this Regulation.
1a. Before preparing a draft implementing act, the Commission shall inform the committee referred to in Article 22 of Regulation (EU) No 1025/2012 that it considers that the conditions in paragraph 1 are fulfilled.
2. In the early preparation of the draft implementing act establishing the common specification, the Commission shall fulfil the objectives referred to in Article 40(2) and gather the views of relevant bodies or expert groups established under relevant sectorial Union law. Based on that consultation, the Commission shall prepare the draft implementing act. The Commission, when preparing the common specifications referred to in paragraph 1, shall fulfil the objectives referred to in Article 40(2) and gather the views of relevant bodies or expert groups established under relevant sectorial Union law.
3. High-risk AI systems or general purpose AI systems which are in conformity with the common specifications referred to in paragraph 1 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title or, as applicable, with requirements set out in Article 4a and Article 4b, to the extent those common specifications cover those requirements.
4a. Where providers do not comply with the common specifications referred to in paragraph 1, they shall duly justify in the technical documentation referred to in Article 11 that they have adopted technical solutions that are at least equivalent thereto.
4b. When references of a harmonised standard are published in the Official Journal of the European Union, implementing acts referred to in this Article, or parts thereof which cover the same essential requirements, shall be repealed.
4c. When a Member State considers that a common specification does not entirely satisfy the essential requirements set out in this Regulation as stated above, it shall inform the Commission thereof with a detailed explanation, and the Commission shall assess that information and, if appropriate, amend the implementing act establishing the common specification in question.

Comment: Following the NLF, harmonized standards and hence CS can only deal with essential requirements, not all requirements of the Regulation or those that the regulator wishes to be covered.
Implementing acts follow comitology rules: the practical rules and general principles to be followed on comitology are laid down in Regulation (EU) No 182/2011. Accordingly, the Commission consults a committee composed of representatives of all Member States and chaired by the Commission on draft implementing acts. These committees use two types of procedures: examination and advisory. These procedures differ in their voting rules and in the way their votes influence the Commission's possibilities to adopt the implementing act in question. The choice of procedure for a given act is made by the EU legislator, and depends on the nature of the implementing powers that are laid out in the basic act (regulation, directive or decision).
CS are to be a temporary safety-net if and only if harmonized standards do not exist. The Regulation has to be clear about what happens if standards are then later cited in the OJEU, and also about what happens if the CS carry some problems with respect to other Regulations or standards touched upon by the CS. Hence there need to be more practical rules for CS in this Regulation. All copied from the Nonpaper's Appendix.
Reference: Article 42

Drafting suggestion: 3. For high-risk AI systems where the provider is a credit institution regulated by Directive 2013/36/EU or an entity regulated by Directive 2009/138/EC, Directive (EU) 2016/2341, Directive 2014/65/EU resp. Directive (EU) 2015/2366, Directive 2009/65/EC and Directive 2011/61/EU, conformity is assumed when these entities fulfil the requirements following Directive 2013/36/EU, Directive 2009/138/EC, Directive (EU) 2016/2341, Directive 2014/65/EU, Directive (EU) 2015/2366, Directive 2009/65/EC resp. Directive 2011/61/EU, to the extent those Directives cover the requirements set out in this Regulation.

Comment: As the entities regulated by Directive 2013/36/EU, Directive 2009/138/EC, Directive (EU) 2016/2341, Directive 2014/65/EU, Directive (EU) 2015/2366, Directive 2009/65/EC resp. Directive 2011/61/EU already follow the highest standards and double regulation has to be avoided, conformity of high-risk AI systems provided by them should be assumed when they fulfil the respective requirements, to the extent that those requirements cover the requirements set out in this Regulation.
Reference: Article 43

Drafting suggestion: Replace "set out in Chapter 2 of this Title" with "set out in this Regulation".

Comment: Providers should demonstrate compliance with the requirements of the entire Regulation in the conformity assessment procedure. The reference (only) to Chapter 2 would, for example, omit the quality management system or the post-market surveillance system.
Reference: Article 46(4)

Third compromise proposal: The obligations referred to in paragraphs 1 to 3 shall be complied with in accordance with Article 70.

Drafting suggestion: 4. The obligations in this provision only apply as far as secrecy obligations do not conflict.

Comment: Information from the notified body can only be transmitted as far as transmission does not interfere with secrecy obligations, especially regarding operative scenarios. This may already be covered by Art. 70, but we would still like to emphasize that this is an important issue for us, which may be addressed here as well.
Reference: Article 47

Comment: We suggest adding a paragraph to empower the EU-COM to extend the validity of an authorisation to the territory of the Union for a limited period of time by means of implementing acts.
Reference: Article 47(1a)

Third compromise proposal: In a duly justified situation of urgency for exceptional reasons of public security or in case of specific, substantial and imminent threat to the life or physical safety of natural persons, law enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay, and if such authorisation is rejected, its use shall be stopped with immediate effect.

Comment: Germany is of the view that the presidency's supplementary proposal is generally reasonable. However, Germany considers that the requirements of the derogation regulation in Article 47(1a) of the proposal are too unspecific. In particular, it remains unclear exactly what "duly justified situation of urgency for exceptional reasons of public security" means. Discussions are under way in Germany regarding whether, from the point of view of protection of fundamental rights, provisions should be included for safeguards and an arrangement for the legal consequences of violations of the rule. Moreover, discussions are also under way in Germany on whether the market surveillance authorities should be informed before the provisional putting into service in such cases, to enable verification of the criteria. From an operational point of view, the proposal also does not yet answer the question of the usability of intelligence obtained from the deployment of a non-certified AI system in urgent cases. Germany sees a need for this question to be addressed in the proposed regulation. As a whole, regulation of AI should include provisions to balance fundamental rights aspects with operational aspects in these urgent cases, providing legally secure certification that ensures that data and information from these systems can be used; in doing so, the AI Act should in particular stipulate the conditions under which the certification procedure affects the legitimacy of measures based on the provisional deployment of AI systems. Please refer to the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs and migration authorities) „[Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration]“.
Reference: Article 51

Comment: There is a fear that the disclosure of all the law enforcement agencies' AI applications in operation or development will facilitate the assessment of an overall picture of the operational capabilities of the respective agencies. This database could potentially be used to identify capability gaps or to create thematic profiles of individual countries. This in itself could pose a security risk and affect the capabilities of the authorities. Do the Commission or other Member States share this view? The newly added para 2 is not sufficient to address this issue.
Please also refer to the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs and migration authorities) „[Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration]“.
Reference: Article 51(2)

Third compromise proposal: Before using a high-risk AI system, users of high-risk AI systems that are public authorities, agencies or bodies, with the exception of law enforcement, border control, migration or asylum authorities, shall register themselves in the EU database referred to in Article 60 and select the system that they envisage to use.

Drafting suggestion: 2. Before using an AI system, the relevant public authorities shall register the system used by the public authority in the EU database referred to in Article 60a.

Comment: We thank the presidency for introducing an obligation for public authorities to register the use of AI systems in a public database. However, we would like to ask for some changes and refer to our comment regarding Art. 60a (new).
Reference: Article 52

Drafting suggestion: NEW (1) The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend this Regulation by establishing, after having consulted relevant stakeholders, a common Union scheme for describing and rating the environmental sustainability of AI systems placed on the market or put into service in its territory. The scheme shall, in a first step, by one year after the entering into force of this Regulation, establish a definition of AI systems' sustainability and set out a small number of easy-to-monitor indicators related, for example, to good practice through energy-efficient programming or to data centre resource efficiency. In a second step, by 2027, the scheme shall set up a lean methodology to measure and rate AI systems based on the indicators. The indicators and methodology shall be updated in light of technical progress. The scheme shall only concern direct environmental impacts of AI systems and may allow for exceptions for SMEs.

Comment: In order to accommodate the AI-specific environmental and sustainability aspects, appropriate changes should be made. DE proposes laying down horizontal transparency rules in Art. 52a in order to enable providers and users to lower the energy and resource consumption caused by the development and the application of AI systems, and to contribute to reaching the goal of carbon neutrality.
This proposal of a horizontal transparency requirement aims at reporting a limited number of easy-to-monitor sustainability indicators of AI systems. These might entail simple, binary statements on whether AI providers follow a good practice regarding energy-efficient programming ("green coding"), or whether the computing power originates from certified data centres that, for example, generate their own renewable energy, obtain green electricity, use waste heat or employ more sustainable cooling techniques.
The definition of sustainability indicators is best left to an expert committee; therefore, no pre-determinations should be made. This committee should also take into account the ease of monitoring and reporting to minimize burdens for AI providers, particularly SME providers. It should be composed of a broad range of experts from science, business, civil society and standardization organisations. It is conceivable that the AI Board may be involved or take over (a part of) the function of the expert committee.
While we emphasize that many AI products lead to major environmental benefits, our goal is to ensure that the positive environmental outcome of an AI system is not, as an undesirable side effect, partially negated by poor energy and resource efficiency. The proposed reporting requirement firstly aims at creating incentives for AI providers to raise their sustainability ambitions and, in the medium term, to increase the demand for more sustainable computing power provision.
Even though energy prices spike and chips become scarce, we do not witness significant changes in AI development practices, as performance criteria other than energy efficiency predominate in design and sourcing choices. In addition, it is often not transparent for AI developers how much energy their models and programs actually consume, due to the time-based, flat-rate pricing of cloud service providers. Thus, as price signals do not sufficiently incentivize more sustainable practices, transparency requirements present a necessary additional and unrestrictive incentive.
Highlighting credible sustainability information also offers advantages in the marketing of AI systems. It is only through such information that a distinctiveness, ideally a unique selling proposition, can be established, which gives European providers a competitive advantage in the long run ("sustainable AI made in Europe").
Due to the pace of development and the many crossroad decisions on the direction of AI development underway, the AI Act offers a timely and flexible option to address sustainability issues of AI systems, in contrast to the long, complex and detailed procedures under the Ecodesign Regulation.
Further clarifications are given in recital 70.
Reference: Article 52(3)

Third compromise proposal: However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.

Drafting suggestion: Reinsert "or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU,".

Comment: The deleted sentence should be reinserted (in the suggested Title IV A).
Reference: Title IVA (new)

Drafting suggestion:

TITLE IVA
INFORMATION TO BE PROVIDED TO NATURAL PERSONS

Art. 52b
Information to be provided for high-risk AI systems

1. Users of high-risk AI systems shall provide the person affected by a decision at least partially determined by the output of the AI system ("Affected Person") with standardized information about
(a) the fact that an AI system has been used within the context of the decision-making process;
(b) a reference to the EU database as referred to in Art. 51, 60 and Annex VIII;
(c) the general role and purpose of the AI system in the decision-making process;
(d) the relevant data categories of the input data;
(e) information provided to the user pursuant to Article 13 paragraph 3 letters b and d; and
(f) the right to receive an explanation upon request according to paragraph 3.
The information shall be provided at the time the decision is communicated to the affected natural person.

2. Where a high-risk AI system is used for automated individual decision-making, including profiling, within the meaning of Article 22 of Regulation (EU) 2016/679, information according to Articles 13(2)(f), 14(2)(g) and 15(1)(h) of Regulation (EU) 2016/679 shall also comprise information according to paragraph 1 (b), (d), (e) and (f).

3. Users of high-risk AI systems shall provide the affected natural person, upon his or her request, in addition to the standardized information provided according to paragraph 1, with a concise, complete, correct and clear explanation of the individual input data relating to the affected natural person and the relevant data categories on the basis of which the decision was made.

4. Paragraph 1 (e), paragraph 2 insofar as it refers to paragraph 1 lit. (e), and paragraph 3 shall not apply to the use of AI systems that are authorised by law to detect, prevent, investigate and prosecute criminal offences or to prevent a threat to critical infrastructure, life, health or physical safety of natural persons.

5. Paragraphs 1 to 3 shall not apply to the use of AI systems
(a) for which exceptions from, or restrictions to, the obligations under this Article follow from Union or Member State law (such as a prohibition or restriction to disclose information covered by paragraphs 1 and 2 to the affected person), which lays down appropriate other safeguards for the affected person's rights and freedoms and legitimate interests, when such an exception or restriction respects the essence of the fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society; or
(b) that have only minor influence within the decision-making process.

6. Information according to paragraphs 1 to 3 shall be given in a concise, transparent, intelligible and easily accessible form appropriate to different kinds of disabilities, using clear and plain language.

Article 52c
Relation to Title III

Obligations under this Title shall not affect the requirements and obligations set out in Title III of this Regulation.

Comment: Please also move Art. 52(1) and (2) to this title as new Art. 52a.

One of the primary reasons why AI is being regulated at all is to protect individuals from the risks generated by AI systems to fundamental rights and to create trust. In this context, the need for transparency is one of the main factors explicitly addressed by the AIA. The GDPR already grants certain rights to information to natural persons/data subjects. However, the GDPR does not sufficiently cover constellations where AI systems are involved. For example, Articles 22, 13(2)(f), 14(2)(g) and 15(1)(h) address automated processing, but these provisions only cover cases where natural persons are directly exposed to automated decision-making. This would – at least according to the wording of the GDPR – not cover the constellation that AI is used to prepare a decision ultimately made by a human (for example, an AI might provide a credit rating score that a bank employee uses to decide on the granting of a loan to a natural person). This constellation may have consequences for the natural person that can be just as serious as where the natural person is directly exposed to automated processing, and therefore gives rise to a similar need for protection. It is necessary for an affected natural person to understand the risks which they are being subjected to in order to be able to seek redress. Therefore, we consider it necessary to include an obligation of the user to provide the affected natural person with standardized information on the use and general function of the AI system, and to include a substantive right for affected persons to request further information on the input data and the relevant data categories in constellations where an AI system is used to prepare a human decision.

We also consider it necessary to supplement the existing information requirements in the GDPR with some further information that seems necessary specifically in the context of AI systems, in order to provide natural persons with all relevant knowledge to understand their situation. With the suggested Article 52b, we aim to avoid any duplication or overlap with existing rights under the GDPR, and merely to supplement them only to the extent necessary, as it is important to avoid legal uncertainty regarding the relation of this Regulation to the GDPR.

In addition to the implementation of Art. 52b new, the following sentence should be added to Recital 43: „Natural persons affected by decisions at least partially determined by high-risk AI systems (this includes decisions that were made after a high-risk AI system provided a recommendation for the decision) placed on the EU market or otherwise put into service should be informed in an appropriate, easily accessible and comprehensible manner about the use of the AI system, the role and purpose of the AI system in the decision-making process, the logic involved and the main parameters of decision-making. Such information could be provided in electronic form, for example, when addressed to the public, through a user's website while providing the link to this website at the time the decision is communicated to the affected person. For this purpose, with regard to the standardized information to be provided under para. 1 and 2, the user should be able to utilise the information received from the provider according to article 13 paragraph 3 letters b and d. With regard to the individual explanation according to para. 3, the affected natural person must be provided with the individual input data relating to the affected natural person and the relevant data categories that serve as the main parameters on the basis of which the output was given."

Furthermore, the term "affected natural person" should be defined in Art. 3 AIA. The rights of the persons affected are limited by the wording to individual persons. This does not include the protection or representation of collective interests. This means that particularly vulnerable groups or groups at risk of discrimination can exercise their rights less effectively. Possibilities for collective enforcement still need to be examined within the federal government.

Generally, it has to be made sure that Union or Member State law containing prohibitions of disclosure or relevant restrictions on the affected person's right of access to the information covered by Art. 52a (new) remains unaffected, especially in the area of law enforcement. For example: in case of suspicion of money laundering, the competent authority (Financial Intelligence Unit, "FIU") is prohibited from revealing information to the affected person (based on Art. 41 para. 4 EU-act). Therefore, we suggest adding para. 4 or a similar provision inspired by Art. 23 GDPR, saying that the obligations or rights granted under Art. 52a (new) can be restricted by Union or Member State law that e.g. prohibits or restricts the user of the AI system from revealing the information, provided that such a prohibition or restriction respects the essence of the fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society. We are still discussing details on this provision.

Paragraph 2 only covers situations that are already covered by automated processing in accordance with Article 22 of the GDPR (i.e., constellations where a natural person is exposed directly to an AI system). In these constellations, information obligations under the GDPR are extended to certain further, AI-specific information according to paragraph 1.

We are still discussing whether AI systems used for the prevention of a threat to critical infrastructure, life, health, physical safety of natural persons or public safety should also be excluded from the obligations of par. 1-3, and may comment on this later.

Article 52c corresponds to the current Article 52(4) and should apply to the entire title.
Reference: Article 53(1a)

Third compromise proposal: 1a. National competent authorities may establish AI regulatory sandboxes for the development, training, testing and validation of innovative AI systems, before their placement on the market or putting into service. Such regulatory sandboxes may include testing in real world conditions supervised by the national competent authorities.

Drafting suggestion: 1a. National competent authorities may establish AI regulatory sandboxes for the development, training, testing and validation of innovative AI systems under the direct supervision, guidance and support by the national competent authority, before their placement on the market or putting into service. Such regulatory sandboxes may include testing in real world conditions supervised by the national competent authorities.

Comment: Add "under the direct supervision, guidance [..] by the national competent authority": the key element of supervision and guidance by the national competent authority was deleted by deleting the whole Article 53(1) and should be returned.
Add "support": especially for start-ups it is very important that competent authorities – within their legal possibilities – act as supporters in ensuring compliance, e.g. through mentoring, personal exchange or customized guidance. The impressive examples of data regulatory sandboxes by the French CNIL and the British ICO also explicitly "support" the projects. The term "support" is also used in the EU Commission's Better Regulation Toolbox Tool #69 on regulatory sandboxes (page 597).
Reference: Article 53(1b)

Third compromise proposal: d) enhance authorities' understanding of the opportunities and risks of AI systems as well as of the suitability and effectiveness of the measures for preventing and mitigating those risks;

Drafting suggestion: d) enhance authorities' understanding of the opportunities and risks of AI systems as well as of the suitability and effectiveness of the measures for preventing and mitigating those risks;

Comment: To prevent diverging approaches, the key objective of regulatory learning with its different facets (better understanding of opportunities and risks, contribution to effective implementation and development of standards and specifications) should be returned. The Council conclusions on regulatory sandboxes (para 10) as well as the Commission's Better Regulation Toolbox (page 595) highlight regulatory learning as a crucial feature of regulatory sandboxes.
Third compromise proposal: e) contribute to the uniform and effective implementation of this Regulation and, where appropriate, its swift adaptation, notably as regards the techniques in Annex I, the high-risk AI systems in Annex III, the technical documentation in Annex IV;

Drafting suggestion: e) contribute to the uniform and effective implementation of this Regulation and, where appropriate, its evidence-based swift adaptation, notably as regards the techniques in Annex I, the high-risk AI systems in Annex III, the technical documentation in Annex IV;

Comment: See comment on d).
Third compromise proposal: f) contribute to the development or update of harmonised standards and common specifications referred to in Articles 40 and 41 and their uptake by providers.

Drafting suggestion: f) contribute to the development or update of harmonised standards and common specifications referred to in Articles 40 and 41 and their uptake by providers.

Comment: See comment on d).
Drafting suggestion: g) contribute to the possible future evidence-based advancement of this Regulation and, where appropriate, of other Union and Member States legislation through regulatory learning.

Comment: Add new para: the Council conclusions on regulatory sandboxes (para 10) as well as the Commission's Better Regulation Toolbox (page 595) highlight regulatory learning as a crucial feature of regulatory sandboxes. Regulatory sandboxes should contribute to resilient and relevant legislation through facilitating regulatory learning.
Reference: Article 53(2a)

Third compromise proposal: Participation in the AI regulatory sandbox shall be based on a specific plan referred to in paragraph 6 of this Article that shall be agreed between the participant(s) and the national competent authoritie(s) or the European Data Protection Supervisor, as applicable. The plan shall contain as a minimum the following:

Drafting suggestion: Participation in the AI regulatory sandbox shall be based on a specific plan referred to in paragraph 6 of this Article that shall be agreed between the participant(s) and the national competent authoritie(s) or the European Data Protection Supervisor, as applicable. The plan shall contain as a minimum the following:

Comment: The requirements to the specific plan should be returned. The lessons from sandboxes are only comparable if there is a common framework. Harmonizing the rules concerning this specific plan of participation helps regulatory learning as well. It is important to have a clear objective in mind when operating a regulatory sandbox. If the context of participation is documented well, it is easier to compare the results of the sandbox with sandboxes that have taken place under the supervision of other national competent authorities.
Third compromise proposal: a) description of the participant(s) involved and their roles, the envisaged AI system and its intended purpose, and relevant development, testing and validation process;

Drafting suggestion: a) description of the participant(s) involved and their roles, the envisaged AI system and its intended purpose, and relevant development, testing and validation process;

Comment: See comment above.
Third compromise proposal: b) the specific regulatory issues at stake and the guidance that is expected from the authorities supervising the AI regulatory sandbox;

Drafting suggestion: b) the specific regulatory issues at stake and the guidance and support that is expected from the authorities supervising the AI regulatory sandbox;

Comment: See comment above; add "support": especially for start-ups it is very important that competent authorities – within their legal possibilities – act as supporters in ensuring compliance, e.g. through mentoring, personal exchange or customized guidance.
Drafting suggestion: bb) the novelty of the specific regulatory issue, compared to the annual reports referred to in Article 53(5), and whether analyzing this regulatory issue in the regulatory sandbox contributes to the objectives of Article 53(1b)(c) and (d);

Comment: Additionally, we propose a new provision 2a(bb). Note that this does not require participants to have a novel regulatory issue in order to participate in the sandbox. Whether a regulatory issue is novel can also become clear during the sandbox.
Third compromise proposal: c) the specific modalities of the collaboration between the participant(s) and the authoritie(s), as well as any other actor involved in the AI regulatory sandbox;

Drafting suggestion: c) the specific modalities of the collaboration between the participant(s) and the authoritie(s), as well as any other actor involved in the AI regulatory sandbox;

Comment: See comment above.
Third compromise proposal: d) a risk management and monitoring mechanism to identify, prevent and mitigate any risk referred to in Article 9(2)(a);

Drafting suggestion: d) a risk management and monitoring mechanism to identify, prevent and mitigate any risk referred to in Article 9(2)(a);

Comment: See comment above.
Drafting suggestion: (dd) obligations for the participants to provide the authority with information needed for the authority's evaluation of the project.

Comment: An evaluation on the basis of current and accurate data is crucial in order to enhance authorities' understanding and to allow for regulatory learning. COM's Better Regulation Toolbox (p. 597) stresses that the main evaluation criteria (and that includes also the data and data source) should be established ex ante.
Third compromise proposal: e) the key milestones to be completed by the participant(s) for the AI system to be considered ready to exit from the regulatory sandbox.

Drafting suggestion: e) the key milestones to be completed by the participant(s) for the AI system to be considered ready to exit from the regulatory sandbox.

Comment: See comment above.
Drafting suggestion: (2b) After an AI regulatory sandbox has ended, the participant(s) and the national competent authoritie(s) or the European Data Protection Supervisor, as applicable, shall draw up an exit report. This exit report shall contain as a minimum the following:
a) The plan referred to in paragraph 2a of this Article;
b) An evaluation of the specific regulatory issues that were at stake during the AI regulatory sandbox, including a problem definition and proposed solutions;
c) Whether the key milestones referred to in paragraph 2a(e) of this Article have been completed;
d) A conclusion on the lessons learnt, specified in the following categories of use:
1. An improved understanding of the implementation of the AI regulatory sandboxes;
2. Improved methods of supervision by national competent authorities;
3. A revised or novel interpretation of this Regulation.

Comment: In various national regulatory sandboxes, it is common practice to issue an exit report after the sandbox has concluded. We propose to include this practice in the AI Act as well. The exit reports focus more specifically on the case at hand, instead of the more vaguely drafted "annual reports" (which also focus on the implementation of sandboxes). In order to truly utilize lessons learnt, they must first be defined. The national competent authorities are in the best position to do this, right after a sandbox has ended. Under paragraph 5a, the exit reports will then be used by the AI Board and the Commission to improve interpretation, guidance, communication and amendments regarding this Regulation.
Reference: Article 53(3)

Third compromise proposal: 3. The participation in the AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities supervising the sandbox. Those authorities shall exercise their supervisory powers in a flexible manner within the limits of the relevant legislation, using their discretionary powers when implementing legal provisions to a specific AI sandbox project, with the objective of supporting innovation in AI in the Union. Any significant risks to health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.

Drafting suggestion: …with the objective of supporting innovation in AI in the Union. Any significant risks to health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. The authorities shall cooperate with the participants of the sandbox to develop and implement a mitigation plan to enable a resumption of the testing process without undue delay.

Comment: The former formulation should be reinserted and amended, since significant risks to health and safety and fundamental rights require immediate mitigation (and, failing that, suspension). The authorities shall support the participants in developing and implementing the mitigation.
Third compromise proposal: Provided that the participant(s) respect the sandbox plan and the terms and conditions for their participation as referred to in paragraph 6(c) and follow in good faith the guidance given by the authorities, no fines shall be imposed by the authorities for infringement of applicable Union or Member State legislation, including the provisions of this Regulation.

Comment: We welcome the amendment. However, in order to ensure the protection of fundamental rights, we remain critical of the fact that fines should continue to be excluded even in the case of infringements that lead to high risks for the rights and freedoms of natural persons.
The Council Legal Service should verify whether, with regard to the financial market, a sector-specific clarification is necessary. The rules and provisions for participation in a sandbox programme should not contradict the harmonized financial market regulation. We would therefore advise to consult with COM (DG FISMA) on this specific financial sector related question.
Drafting suggestion: 5a.
1. After an AI regulatory sandbox has ended, the national competent authority shall share the exit report of that sandbox with the AI Board and the Commission.
2. The AI Board shall use the annual reports of paragraph 5 of this Article and the exit reports it receives according to paragraph 1 in the exercise of its tasks as listed in Article 58.
3. The Commission shall use the annual reports of paragraph 5 of this Article and the exit reports it receives according to paragraph 1 in the exercise of its tasks in Articles 4, 7, 11(3) and 58a.
The exit reports shall be shared on a confidential basis and in accordance with Article 70.

Comment: To ensure that sandboxes will deliver more than vaguely defined annual reports, this paragraph requires the AI Board and the Commission to utilize the exit reports that have been drawn up by the national competent authorities. As these exit reports may contain sensitive information that should be kept confidential, an explicit reference to Article 70 has been made. This also prevents a situation in which participants may be reluctant to participate in sandboxes because they are afraid that their trade secrets or other sensitive information will be made public.
Reference: Art. 53(6)

Third compromise proposal: 6. The detailed modalities and the conditions for the establishment and the operation of the AI regulatory sandboxes under this Regulation, including the eligibility criteria and the procedure for the application, selection, participation and exiting from the sandbox, and the rights and obligations of the participants shall be set out in implementing acts. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).

Drafting suggestion: […] with the examination procedure referred to in Article 74(2). These modalities and conditions shall foster innovation and shall take into account particularly the special circumstances of participating small and medium-sized enterprises.

Comment: Add "These modalities and conditions shall foster innovation and shall take into account particularly the special circumstances of participating small and medium-sized enterprises": the objectives of the regulatory sandboxes should be to foster AI innovation (recital 71). In order to promote innovation, it is important that the interests of small-scale providers are taken into particular account (recital 73). Both must be reflected in the regulatory sandboxes' modalities and conditions.
Reference: Art. 53(6)

Drafting suggestion: (bb) provisions for a possible subsequent introduction into permanent operation;

Comment: Add "provisions for a possible subsequent introduction into permanent operation": in order to provide innovators with transparent and reliable investment conditions, perspectives for scaling the AI systems outside the regulatory sandbox should be set up.
Drafting suggestion: (cc) the modalities for the evaluation of the sandbox and the transfer of results into the legislative process;

Comment: As emphasized in recital 72, one objective of the regulatory sandboxes is to enhance the competent authorities' oversight and understanding. The EU Commission's Better Regulation Toolbox (pages 595 and 597) and the Council Conclusions on regulatory sandboxes (para 10) also stress the objective of advancing regulation through regulatory learning. To achieve this overarchingly, clear rules shall be set up.
Third compromise proposal: 7. When national competent authorities consider authorising testing in real world conditions supervised within the framework of an AI regulatory sandbox established under this Article, they shall specifically agree with the participants on the terms and conditions of such testing and in particular on the appropriate safeguards. Where appropriate, they shall cooperate with other national competent authorities with a view to ensure consistent practices across the Union.

Drafting suggestion: 7. When national competent authorities consider authorising testing in real world conditions supervised within the framework of an AI regulatory sandbox established under this Article, they shall ensure that the testing in real world conditions takes place according to the requirements of Articles 54a and 54b. Where appropriate, they shall cooperate with other national competent authorities with a view to ensure consistent practices across the Union.

Comment: In order to avoid fragmentation, national competent authorities should not draw up their own terms and conditions but should ensure compliance with the relevant requirements of Articles 54a and 54b when allowing testing in real world conditions in an AI regulatory sandbox.
Reference: Art. 54(1)

Third compromise proposal: (ii) public safety and public health, including disease prevention, control and treatment of disease and improvement of health care systems;

Drafting suggestion: (ii) public safety, long-term care and public health, including disease prevention, control and treatment of disease and improvement of health care systems;

Comment: In the current draft, AI in the field of care cannot be trained and supported in regulatory sandboxes. Therefore we propose the addition of "long-term care".
Third compromise proposal: (v) a high level of efficiency and quality of public administration and public services;

Drafting suggestion: (v) a high level of efficiency and quality of e-government;

Comment: "Public administration and public services" seems to be far too vague and should be replaced by "e-government". According to the GDPR, a legal basis for the processing of personal data should be clear and precise. The inclusion of the public sector could be ensured by our proposal "e-government".
Drafting suggestion: (vii) ensuring or increasing data protection and data security in AI systems or other technology;

Comment: We suggest adding this purpose to the list. There are risks for data protection and security regarding technology in general, but also AI (such as membership attacks), and counter-measures are currently still under research. It would be in the public interest to foster such research as well by providing regulatory sandboxes. This could also help providers increase legal certainty (Art. 53(1b)(c)) and reduce risks and costs by defining and implementing appropriate mitigation measures.
Reference: Art. 54a(5)

Third compromise proposal: 5. Any subject of the testing in real world conditions, or his or her legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking his or her informed consent. The withdrawal of the informed consent shall not affect the activities already carried out and the use of data obtained based on the informed consent before its withdrawal.

Comment: Since "informed consent" is a different form of consent than in Art. 6(1) lit. a and Art. 7 GDPR, we might suggest clarifying this in a recital. Further, we suggest that the possibility to revoke consent be included directly in the definition in Art. 3(51).
Art. 56

Third compromise proposal:
Article 56
Establishment and structure of the European Artificial Intelligence Board

Comment:
WE CONTINUE TO SUGGEST AN ORIENTATIONAL DEBATE REGARDING THE AI BOARD. THIS DEBATE SHOULD INCLUDE THE GENERAL ALIGNMENT OF THE BOARD AND THE SCOPE, ITS MEMBERS AND POSSIBLE RULES OF PROCEDURE. WE WOULD LIKE TO RESERVE THE OPPORTUNITY TO MAKE FURTHER COMMENTS AFTER THAT PROPOSED DEBATE.
Art. 60

Third compromise proposal:
Article 60
EU database for stand-alone high-risk AI systems listed in Annex III

Comment:
The German Federal Government is concerned that public access to the EU database of high-risk AI applications provided for in Article 60(5) of the AI Regulation could clash with justified security interests of the Member States. There are fears that even the publication of all AI applications that are operated or under development by the security authorities would make it easier to gain an overall picture of the operational capabilities of the authorities in question. Using this database, potential gaps in capabilities could be identified, or profiles of the focus areas of individual countries could be compiled. This could represent a security risk in itself and could impact the capabilities of the authorities. Do the Commission or the other Member States share this assessment? Should an exception be included in this regard in Articles 60 and 61, enabling the Member State in question, under certain conditions which must first be defined in more detail, to refrain in individual cases from publishing AI applications where security interests are at odds with this? German security authorities are in favour of this. Please also refer to the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs and migration authorities) „[Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration]“.
Art. 60a

Drafting suggestion:
Article 60a
EU database for stand-alone AI systems used by public authorities

1. The Commission shall, in collaboration with the Member States, set up, maintain and manage an EU database to enable the public to be adequately informed about AI systems placed on the market and containing the information referred to in paragraph 2 concerning AI systems used by the public authorities registered in accordance with Article 51(2).

2. The data listed in Annex VIIIb shall be entered into the EU database by the public authorities. The Commission shall provide them with technical and administrative support.

3. Art. 60 par. 3-5a shall apply accordingly.

Comment:
Due to the unique role and responsibility public authorities bear, the sensitive personal data they have access to, the consequential effects their decisions have on individuals, and thus their primary obligation to respect, protect and fulfil fundamental rights, public authorities should be subject to more stringent transparency requirements when using AI systems. Hence, any deployment of AI systems – regardless of their level of risk – by or on behalf of public authorities should be registered within a separate EU database in addition to the registration as high-risk AI in the database referred to in Article 60.

We refrain from drafting an Annex VIIIb for this comment. However, the database should include the name of the AI system and a short description of its intended purpose, as well as the name, address and contact details of the public authority by whom or on whose behalf it is used. In the field of law enforcement, however, the possible security risk arising from the database must also be considered. Please also refer to the comment above (no. 130) and the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs and migration authorities) „[Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration]“.

We are also still discussing this topic under aspects such as operating expenses, especially how to avoid excessive operating expenses, as well as how often the database should be updated.
Art. 61(2)

Third compromise proposal:
In order to allow the provider to evaluate the compliance of AI systems with the requirements set out in Title III, Chapter 2 throughout their life cycle, the post-market monitoring system shall actively and systematically collect, document and analyse relevant data, which may be provided by users or which may be collected through other sources, on the performance of high-risk AI systems. This obligation shall not cover sensitive operational data of users of AI systems which are law enforcement authorities.

Comment:
There is concern that the mere disclosure of all law enforcement agencies' AI applications in operation or development will facilitate an assessment of an overall picture of the relevant agencies' operational capabilities. Should an exception be included in Article 61 which would allow the Member State concerned, under conditions to be defined in more detail, to refrain from publication in individual cases if and to the extent that confidentiality interests conflict with this? DEU security authorities are in favour of this.
Art. 61(4)

Third compromise proposal:
For high-risk AI systems covered by the legal acts referred to in Annex II, Section A, where a post-market monitoring system and plan is already established under that legislation, the post-market monitoring documentation as prepared under that legislation shall be deemed sufficient, provided that the template referred to in paragraph 3 is used. The first subparagraph shall also apply to high-risk AI systems referred to in point 5(b) of Annex III placed on the market or put into service by financial institutions that are subject to requirements regarding their internal governance, arrangements or processes under Union financial services legislation.

Comment:
It remains unclear whether the AI Act imposes additional requirements on entities already regulated by comprehensive financial sector regulation. Please specify how the AI Act avoids double regulation for the highly regulated financial sector while also avoiding regulatory gaps.
Art. 62(4)

Third compromise proposal:
For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents shall be limited to those referred to in Article 3(44)(c) and be made to the national competent authority chosen for this purpose by the Member States where that incident occurred.

Drafting suggestion:
For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents shall be made to the national competent authority chosen for this purpose by the Member States where that incident occurred under this legislation. If the serious incident is limited to those referred to in Article 3(44)(c), the national competent authority referred to in the preceding sentence shall forward the notification of serious incidents to the competent authority chosen for this purpose.

Comment:
Under Regulation (EU) 2017/745 and Regulation (EU) 2017/746 there are serious incidents that directly or indirectly lead to the death of a patient, user or other persons. It is important that every notification of a serious incident is reported to the competent authority for medical devices or IVDs. If the serious incident is limited to those referred to in Article 3(44)(c), the national competent authority for medical devices or IVDs should forward the notification of serious incidents to the competent authority chosen for this purpose.
Art. 62a

Drafting suggestion:
CHAPTER 2A
DATA ACCESS FOR VETTED RESEARCHERS

Article 62a
Data Access for Vetted Researchers
1. Upon a reasoned request from a public or private body to be determined by each Member State, providers shall, within a reasonable period as specified in the request, provide access to training, validation and testing datasets used by the provider to vetted researchers who meet the requirements in paragraph 2 of this Article, for the sole purpose of conducting research that contributes to the development, training, validation and testing of AI systems within the existing legal framework, in particular with regard to bias monitoring, detection and correction of such systems, and that is related to a public interest. Access to personal data shall be provided in anonymised or at least pseudonymised form as long as this is possible without jeopardising the research purpose.
2. Upon a duly substantiated application from researchers, the responsible body shall award them the status of vetted researchers and issue data access requests pursuant to paragraph 1, where the researchers demonstrate that they meet all of the following conditions:

(a) researchers shall be affiliated to a research organisation as defined in Article 2(1) of Directive (EU) 2019/790 of the European Parliament and of the Council;

(c) the application submitted by the researchers justifies the necessity and proportionality, for the purpose of their research, of the data requested and the timeframes within which they request access to the data, and they demonstrate the contribution of the expected research results to the purposes laid down in paragraph 1;

() the planned research activities will be carried out only for the purposes laid down in paragraph 1;

(f) researchers shall commit and be in a capacity to preserve the specific data security and confidentiality requirements corresponding to each request. In particular, a protection concept shall be provided with the request, containing a description of the research purpose, the intended use of the information, measures taken to protect the interests of the data subject, and technical and organisational measures taken to protect personal data.
3. The provider may refuse the requested information if trade secrets are affected and the public interest in the research does not outweigh the interest in confidentiality. The provider may refuse access to personal data if the legitimate interests of the data subject outweigh the public interest in the research. Where the data holder claims compensation for making data available, such compensation shall not exceed the technical and organisational costs incurred to comply with the request including, where necessary, the costs of anonymisation and of technical adaptation.
4. The public or private body that awarded the status of vetted researcher and issued the access request in favour of a vetted researcher shall issue a decision terminating the access if it determines, following an investigation either on its own initiative or on the basis of information received from third parties, that the vetted researcher no longer meets the conditions set out in paragraph 2. Before terminating the access, the body shall allow the vetted researcher to react to the findings of its investigation and its intention to terminate the access. As soon as the vetted researcher no longer meets the conditions set out in paragraph 2, the vetted researcher shall report this circumstance to the market surveillance authority.

Comment:
Regarding the data access for vetted researchers, in the field of law enforcement the possible security risk arising from data access must also be considered. Therefore, there is a need for exemptions from the requests laid down in Article 62a. These exemptions are still under discussion. We may provide further comments.
Art. 64

Third compromise proposal:
5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law intended to protect fundamental rights has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the market surveillance authority to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within reasonable time following the request.

Drafting suggestion:
5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law intended to protect fundamental rights, including the right to non-discrimination, has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the market surveillance authority to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within reasonable time following the request.
Art. 70

Comment:
In DEU, although lit. (e) has been included in Art. 70(1) and references to Article 70(2) have been inserted at various points in the text, there is still discussion on whether further requirements on secrecy and the guarantee of confidentiality and data protection are necessary. For example, Art. 70(2) of the AI Regulation provides only a few concrete requirements for confidentiality, and only for the case that authorities from the law enforcement sector are themselves developers or providers of AI applications. We refer to the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs and migration authorities) „[Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration]“.
Art. 70(2)

Third compromise proposal:
Without prejudice to paragraph 1, information exchanged on a confidential basis between the national competent authorities and between national competent authorities and the Commission shall not be disclosed without the prior consultation of the originating national competent authority and the user when high-risk AI systems referred to in points 1, 6 and 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities, when such disclosure would jeopardise public and national security interests. This obligation to exchange information shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

Drafting suggestion:
(1) … shall not be disclosed without the prior approval (…)

Comment:
Mere consultation in Art. 70(2) is not sufficient.
Art. 73

Drafting suggestion:
"(new [NR]) Before adopting a delegated act, the Commission shall consult experts designated by each Member State in accordance with the principles laid down in the Interinstitutional Agreement of [13 April 2016] on Better Law-Making."
Art. 82a

Drafting suggestion:
Article 82 new
Amendments to Directive (EU) 2020/1828 on Representative Actions for the Protection of the Collective Interests of Consumers

3. The following is added to Annex I of Directive (EU) 2020/1828 on Representative Actions for the Protection of the Collective Interests of Consumers:
"Regulation xxxx/xxxx of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts"

Comment:
The AI Act must allow representative actions to be used to defend natural persons' rights collectively. This should apply in the case of illegal commercial practices, or in obtaining compensation in case of harm suffered by a group of natural persons. Natural persons must be able, via authorised organisations, to jointly bring a court case to obtain compensation for damages arising from the same source (e.g. multiple consumers harmed by the same non-compliant AI system or practice). Without adding the AI Act to Annex I of the RAD, consumers would have no way of exercising their rights collectively.
Art. 83

Comment:
DEU further suggests excluding large-scale IT systems established by the legal acts listed in Annex IX from the obligations of users of high-risk AI systems set forth in Art. 29 (in connection with Art. 12 and Art. 11), regardless of the date the systems have been placed on the market or put into service, since these systems are already regulated with regard to those obligations, and the obligations laid down in the AI Act may conflict with the obligations laid down in the existing legislation.

If the amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system, it should then be considered, as a question of legal technique, whether any obligations of users of high-risk AI systems under the AI Act should be implemented directly within the legal acts listed in Annex IX themselves.

Furthermore, the suggested exemption is without prejudice to Art. 83(2) of the Commission's proposal, according to which the requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex IX, to be undertaken as provided for in those respective acts.

Please also refer to the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs) „[Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration]“.
Art. 83(3)

Third compromise proposal:
Without prejudice to paragraphs 1 and 2, high-risk AI systems that have been placed on the market before [date of application of this Regulation referred to in Article 85(2)] may continue to be made available on the market until 3 years after [date of application of this Regulation referred to in Article 85(2)].

Drafting suggestion:
Deletion

Comment:
We ask for the deletion of the new Art. 83(3), which goes much too far and changes the approach of the AI Act (as a regulation directed at the future). We need a sufficient grandfathering rule for high-risk AI systems that came onto the market before the Regulation came into force. It cannot be that systems in operation have to be taken off the market after 3 years and cannot be made available after a certain target date. For example, the acquisition of further licences for AI systems which are already in use must remain possible on a permanent basis. It is highly unlikely that the requirements of the AI Act can be implemented retroactively for AI systems that are already developed and on the market. Moreover, new development of certain AI systems can take much longer than 3 or even 10 years, if it is possible at all. The implications of this change to Art. 83 are huge and may lead to problems for entire sectors.

We will submit concrete examples in due time to further elaborate on the problems we anticipate.
Kindly indicate the Member State you are representing in the title and when renaming the document. To specify the relevant provision, please indicate the relevant Article or Recital in the 1st column and copy the relevant sentence or sentences as they appear in the current version of the text into the 2nd column. For drafting suggestions, please copy the relevant sentence or sentences from a given paragraph or point into the 3rd column and add or remove text. Please do not use track changes, but highlight your additions in yellow or use strikethrough to indicate deletions. You do not need to copy entire paragraphs or points to indicate your changes; copying and modifying the relevant sentences is sufficient. To provide an explanation and reasoning behind your proposal, please use the 4th column.