Brussels, 04 October 2022
WK 13231/2022 INIT
LIMITE
TELECOM
WORKING PAPER
This is a paper intended for a specific community of recipients. Handling and
further distribution are under the sole responsibility of community members.
CONTRIBUTION
From:
General Secretariat of the Council
To:
Working Party on Telecommunications and Information Society
Subject:
Artificial Intelligence Act - DE comments on 1st part of 3rd compromise proposal
(doc. 12206/1/22 REV 1; Arts 1-29, Annexes I-IV)
Delegations will find in the Annex the DE comments on the 1st part of the 3rd compromise proposal on the Artificial Intelligence Act (doc. 12206/1/22 REV 1; Arts 1-29, Annexes I-IV).
MEMBER STATE comments on first part of third compromise proposal on AIA
(document 12206/1/22 REV 1; Arts 1-29, Annexes I-IV)
Columns: Reference | Third compromise proposal | Drafting suggestion | Comment
Comment: Please note that the following views are preliminary, as we are still examining the proposal. We reserve the right to make further comments. We also refer to the comments that we handed in on 14th September 2022 and within the CWP on 22 September 2022.

Germany is of the opinion that the specific characteristics of the public administration (and in particular those of the security, migration and asylum authorities, as well as the tax and customs authorities, including the FIU) can be better accommodated in a separate, specific technology act or in a separate section in the Regulation (referred to in this document as "separate regulation"). The provisions in the separate regulation should be exhaustive. For details, we refer to our paper on the separate regulation.
Reference: Recital 5(a)

Third compromise proposal: The harmonised rules on the placing on the market, putting into service and use of AI systems laid down in this Regulation should apply across sectors and, in line with its New Legislative Framework approach, should be without prejudice to existing Union law, and in particular without prejudice to Union law notably on data protection, consumer protection, fundamental rights, employment and product safety, to which this Regulation is complementary.

Drafting suggestion: The harmonised rules on the placing on the market, putting into service and use of AI systems laid down in this Regulation should apply across sectors and, in line with its New Legislative Framework approach, should be without prejudice to existing Union law, and in particular without prejudice to Union law notably on data protection including any supplementing provisions of national law, consumer protection, fundamental rights, employment and product safety, to which this Regulation is complementary.

Comment: We appreciate these further clarifications concerning the relationship between this Regulation and data protection law, but we suggest including any supplementing provisions of national law more clearly within the whole Regulation (see Recital 58a, for example Recital 9).
Reference: Recital 6a

Third compromise proposal: Machine learning approaches focus on the development of systems capable of learning and inferring from data to solve an application problem without being explicitly programmed with a set of step-by-step instructions from input to output. Learning refers to the computational process of optimizing from data the parameters of the model, which is a mathematical construct generating an output based on input data. The range of problems addressed by machine learning typically involves tasks for which other approaches fail, either because there is no suitable formalisation of the problem, or because the resolution of the problem is intractable with non-learning approaches. Machine learning approaches include for instance supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning, statistical techniques for learning and inference (including for instance logistic regression, Bayesian estimation) and search and optimisation methods.

Drafting suggestion: Machine learning approaches include for instance supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning, statistical techniques for learning and inference (including for instance Bayesian estimation) and search and optimisation methods.

Comment: To avoid misunderstandings, we suggest deleting "logistic regression".
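By way of illustration of the notion of learning described above (an editor's sketch in Python, not part of the compromise text or the DE submission; all data is synthetic): a logistic regression fitted by gradient descent is precisely a "computational process of optimizing from data the parameters of the model", which is why its listing among machine learning techniques is debated.

    # Illustrative sketch: logistic regression trained by gradient descent.
    # "Learning" here is the optimisation of the parameters w and b from data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))              # input data
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # outputs to be learned

    w, b = np.zeros(2), 0.0                    # parameters of the model
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # model output for the inputs
        w -= 0.5 * (X.T @ (p - y) / len(y))     # update parameters from data
        b -= 0.5 * float(np.mean(p - y))        # (gradient of the log-loss)

    print("learned parameters:", w, b)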
Reference: Recital 9

Third compromise proposal: …Publicly accessible spaces should not include prisons or border control areas. Some other areas may be composed of both not publicly accessible and publicly accessible areas, such as the hallway of a private residential building necessary to access a doctor's office or an airport.

Drafting suggestion: […] Publicly accessible spaces should not include prisons, security restricted areas at airports and border control areas. […]

Comment: In our understanding, security restricted areas at airports are not publicly accessible spaces. For clarification, this should be addressed in Recital 9. We also ask whether the assumption that "publicly accessible spaces should not include prisons, security restricted areas at airports and border control areas" should be included in the articles of the AI Act (Art. 3(39)), not just the recitals.
Reference: Recital 37

Third compromise proposal: […] Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by SMEs, including start-ups, for their own use.

Drafting suggestion: Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by providers that are micro and small-sized enterprises as defined in the Annex of Commission Recommendation 2003/361/EC, for their own use.

Comment: To ensure consistency with the changes in Annex III No. 5(b).
Reference: Recital 58a

Third compromise proposal: …In particular, it is appropriate to clarify that this Regulation does not affect the obligations of providers and users of AI systems in their role as data controllers or processors stemming from Union law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. …

Drafting suggestion: […] In particular, it is appropriate to clarify that this Regulation does not affect the obligations of providers and users of AI systems in their role as data controllers or processors stemming from Union law on the protection of personal data including any supplementing provisions of national law in so far as the design, the development or the use of AI systems involves the processing of […]

Comment: We appreciate these further clarifications concerning the relationship between this Regulation and data protection law, but we suggest including any supplementing provisions of national law more clearly within the whole Regulation (in recitals, e.g. in Recital 58a, as well as in articles).
Reference: Recital 79a

Third compromise proposal: Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. A specific safeguard procedure should be set for ensuring adequate and timely enforcement against AI systems presenting a risk to health, safety and fundamental rights. The procedure for such AI systems presenting a risk should be applied to high-risk AI systems presenting a risk, prohibited systems which have been placed on the market, put into service or used in violation of the prohibited practices laid down in this Regulation and AI systems which have been made available in violation of the transparency requirements laid down in this Regulation and present a risk.

Drafting suggestion: Where necessary for their mandate, equality bodies as the relevant national bodies in cases of discrimination, as well as other national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, should also have access to any documentation created under this Regulation.
Reference: Art. 1

Third compromise proposal: This Regulation lays down: … (c) harmonised transparency rules for certain AI systems

Drafting suggestion: (c) harmonised transparency rules for certain AI systems … and rights for natural persons

Comment: For DEU, rights for natural persons need to be included in the AI Act. Otherwise, it would not be possible for individuals to legally address the impact of harmful AI applications vis-à-vis the user. However, regarding LEAs it might be necessary to amend or restrict certain rights of natural persons to ensure a balanced regulation, i.e. a feasible method to operate public administration while being bound to respect the fundamental rights of the individual as such.
Reference: Art. 2

Third compromise proposal: This Regulation applies to: …

Drafting suggestion: (new) Member States remain free to take measures at national level to protect minors (persons below the age of 18 years) in accordance with UNCRC General Comment No. 25.

Comment: DEU considers it necessary to include an opening clause for the adoption of national rules in the area of protection of minors.

Drafting suggestion: 8. This Regulation is without prejudice to Union law on the protection of personal data, in particular Regulation (EU) 2016/679 and Directive 2002/58/EC, including any supplementing provisions of national law.

Comment: We suggest a clarification by way of the suggested addition in Article 2(7).
Reference: Art. 2a

Drafting suggestion: (1) Member States may, by law or collective agreements, provide for more specific rules for AI systems used in the employment context, in particular to ensure the protection of employees' rights, freedoms and health and safety at work.
(2) Each Member State shall notify to the Commission those provisions of its law which it adopts pursuant to paragraph 1 and, without delay, any subsequent amendment affecting them.

Comment: The AI Act must not be a "regulatory ceiling" for specific requirements imposed by Member States in the area of employment. The more generalised rules of the AI Act might prove insufficient for the specific nature of the employment relationship with its structural imbalance of power. Therefore, it is necessary to ensure that Member States and social partners are still able to set more specific rules for AI deployed in the context of employment, without generally precluding the use of AI applications in the employment context. This includes requirements for employers deploying AI systems and granting rights to workers and their representatives regarding the use of AI at the workplace.
Reference: Art. 3

Third compromise proposal: For the purpose of this Regulation, the following definitions apply: … 'artificial intelligence system' (AI system) means a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts;

Drafting suggestion: 'artificial intelligence system' (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts;

Comment: For DEU, it is very important that the scope of the Regulation is clear so providers know whether their systems have to comply with it. By contrast, the newly added requirement that a system must be designed to operate with a "certain level of autonomy" in order to be classified as an AI system creates legal uncertainty. Whilst DEU understands that autonomy is at present one of the signifying components of an AI system, a "certain level" is too broad. The required threshold remains unclear, at least until the CJEU provides further clarification. DEU therefore proposes "elements of", since this is a prerequisite which is verifiable: a system either operates with (elements of) autonomy, or it does not.
DEU further suggests that the term "autonomy" be defined or explained in the sense of this Regulation in the recitals. In addition, it would be helpful to amend Recital 6b.
Reference: Art. 3(23)

Third compromise proposal: (23) 'substantial modification' means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation, or results in a modification to the intended purpose for which the AI system has been assessed. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.

Drafting suggestion: (23) 'substantial modification' means a change to the AI system following its placing on the market or putting into service which exceeds updates and technical adaptations and results in a new assessment of the compliance with the requirements set out in Title III, Chapter 2 of this Regulation, or results in a modification to the intended purpose for which the AI system has been assessed. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance optimisation, except changes regarding the intended purpose and use, that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.
For AI systems related to products covered by the Machinery Regulation listed in Annex II, section A, the definition of "substantial modification" according to the Machinery Regulation listed in Annex II, section A applies.

Comment: We consider the term "affects" as too vague for practical use; therefore, further clarification is needed.
Furthermore, for AI systems that continue to learn after being placed on the market or put into service, it should be specified which changes do not constitute a substantial modification: 1. changes for performance optimisation and minor software changes should be allowed; 2. changes to the intended purpose and use must be excluded.
This last addition is necessary in case the AI system is related to a product covered by the Machinery Regulation listed in Annex II, section A. Otherwise, e.g. a substantially modified AI system which is part of a machinery would follow the definition of a substantial modification of the AI Regulation and not the definition of "substantial modification" according to the Machinery Regulation.
Reference: Art. 3(33)

Third compromise proposal: (33) 'biometric data' means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;

Drafting suggestion: (33) 'biometric data' means personal data as defined in point 14 of Article 4 of Regulation (EU) 2016/679;

Comment: A divergence of terminology must be avoided.
Reference: Art. 3(51)

Comment: Since "informed consent" is a different form of consent than in Art. 6(1) lit. a and Art. 7 GDPR, we suggest clarifying this in a recital. Further, we suggest that the possibility to revoke consent be included directly here in the definition and not in Art. 54a(5); or at least there should be a reference to Art. 54a(5) here.
Reference: New article

Drafting suggestion: For high-risk AI systems covered by Regulation (EU) 2017/745 or Regulation (EU) 2017/746, the requirements set out in this Regulation shall apply to the extent to which these requirements are more specific than the sector-specific requirements and provisions with the same objective, nature or effect, such as requirements on data and data governance, AI system validation and AI systems that continuously learn in the field.

Comment: This Regulation should complement and strengthen existing provisions, such as Regulations (EU) 2017/745 and (EU) 2017/746, relating to ensuring the compliance of medical device AI systems and controls on those products by notified bodies. However, because of the specific nature of, and the specific risks related to, medical device AI systems, in accordance with the principle of lex specialis, this Regulation should apply only in so far as there are no specific provisions with the same objective, nature or effect in Regulations 2017/745 and 2017/746. In so far, the requirements of this Regulation on data and data governance, AI system validation, AI systems that continuously learn in the field and the necessary qualification of notified bodies are fully applicable. Other provisions of this Regulation should not apply in the areas covered by more specific provisions set out in Regulations (EU) 2017/745 and (EU) 2017/746.
Reference: Art. 4b

Third compromise proposal: including as reflected in relevant harmonised standards or common specifications.

Comment: In general, the application of harmonised standards and common specifications is voluntary. Pointing out harmonised standards and common specifications in this context makes it sound as if their application were mandatory.
Reference: Art. 5

Third compromise proposal: 1. The following artificial intelligence practices shall be prohibited:

Comment: Remote real-time biometric identification in public spaces through AI must be ruled out by European law. However, retrograde biometric identification, e.g. during the evaluation of evidence, must not be ruled out by European law. DEU reserves the right to an in-depth comment regarding biometric identification systems at a later stage; final discussions are still ongoing.

Drafting suggestion: (g) AI systems that substitute human judges in judicial proceedings when issuing judicial decisions on the merits;

Comment: AI systems should in no event be used to replace human judges. To ensure this, mere regulation through classification as high-risk AI is not sufficient in our view. Thus, we suggest explicitly including this aspect in the list of prohibited AI.

Drafting suggestion: (h) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending;

Drafting suggestion: (i) AI systems intended to be used by public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

Drafting suggestion: (j) the placing on the market, putting into service or use of an AI system that is for comprehensive, systematic surveillance and monitoring of employee performance and behaviour without a specific reason and that is suited to creating psychological pressure to adapt in a way that significantly inhibits employees in their self-determination and the free development of their personality.

Comment: Certain AI systems used in the work environment can enable employers to systematically and comprehensively monitor their employees. Especially in a digital work environment, such systems can monitor almost every step of an employee and, for example, process data on communication, applications used or an employee's search history. Using this data, these AI systems can accurately track employee performance and behaviour, generate scores on an employee's likelihood of quitting or their productivity, indicate which employees might be spreading negative sentiment, and ultimately create comprehensive profiles of employees. If an employer monitors his workforce in this way without a specific reason, it can lead to immense psychological pressure and endanger the mental health of employees. In order to effectively deal with these dangers, it is necessary to ensure that systems designed for such a purpose cannot legally be placed on the market in the first place.
Reference: Art. 6

Third compromise proposal: 2. An AI system intended to be used as a safety component of a product covered by the legislation referred to in paragraph 1 shall be considered as high risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to above mentioned legislation. This provision shall apply irrespective of whether the AI system is placed on the market or put into service independently from the product.

Comment: Our examination is ongoing, especially regarding the role of safety components.
Reference: Art. 6

Third compromise proposal: (3) AI systems referred to in Annex III shall be considered high-risk in any of the following cases:
(a) the output of the system is immediately effective with respect to the intended purpose of the system without the need for a human to validate it;
(b) the output of the system consists of information that constitutes the sole basis or is not purely accessory in respect of the relevant action or decision to be taken by the human, and may therefore lead to a significant risk to the health, safety or fundamental rights.

Drafting suggestion: Deletion

Comment: While we understand the intent of the Presidency's work, this proposal falls short in many regards, so we propose its deletion.
As we understand it, the provider must make the assessment to determine whether its AI system is a high-risk AI system. Under the new proposal, this classification now depends not only on the area of application chosen, but also on the specific use in each case. This will not be known to the provider, so he will have to anticipate typical use cases. However, he will hardly be able to ensure that his AI system will later only be used for these specific use cases in individual cases. In this respect, there is a lack of enforcement possibilities. The intended powers of the market surveillance authorities against providers do not lead any further, since the duty of users to use the system as intended in Art. 29(1) only applies to high-risk AI systems. In addition, supervisory rights of the authorities against individual users are per se not very effective in enforcing the law.
Also missing are obligations of the provider for non-high-risk systems, e.g. documentation obligations on how he reached his classification. Currently, the proposal therefore contains numerous possibilities for circumvention and leads to gaps in protection.
Reference: Art. 7(3)

Third compromise proposal: 3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:
(a) the high-risk AI system(s) concerned no longer pose any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2;
(b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.

Drafting suggestion: Deletion
Reference: Art. 10(3)

Third compromise proposal: Training, validation and testing data sets shall be relevant, representative, and to the best extent possible, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.

Comment: DEU discusses how it can be ensured that the specifications in the Regulation correspond to the current state of the art in the development of AI and the current scientific standard for ensuring AI that is as error-free and unbiased as possible. In this context, the question is raised, for example, to what extent a requirement that training data be "error-free" and "complete" corresponds to the current state of scientific research on AI development that is as error- and bias-free as possible. What is the view of the Commission or other Member States on this issue?
It also remains unclear how a data set should be prepared so that it is regarded as a "representative" data set, since representativeness is not a technical term in statistical data science (unlike, e.g., the term random sample).
Please also refer to the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs and migration authorities): "Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration".
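To illustrate the representativeness point raised in the comment above (an editor's sketch in Python, not part of the DE submission; the group labels and population shares are invented for the example): one common reading of "representative" compares subgroup proportions in a data set against a reference population, yet even a true random sample deviates from the target shares, and the provision does not say which attributes must match or how small the deviation must be.

    # Illustrative sketch: checking subgroup shares of a data set against an
    # assumed reference population. All figures are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    population_share = {"group_a": 0.6, "group_b": 0.3, "group_c": 0.1}
    sample = rng.choice(list(population_share), size=500,
                        p=list(population_share.values()))  # a random sample

    for group, target in population_share.items():
        observed = float(np.mean(sample == group))
        print(f"{group}: target {target:.2f}, observed {observed:.2f}, "
              f"deviation {abs(observed - target):.3f}")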
Reference: Art. 11

Third compromise proposal: 1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to-date.

Comment: For reasons of secrecy and protection of methods, updating the technical documentation pursuant to Art. 11(1) is also viewed critically for a provider as soon as the AI system is used in the law enforcement area. Could a solution in this area also be to shift certain obligations under Article 16 from the provider to the user, if and to the extent that state secrecy interests require this? Do the Commission or other Member States also see these issues?
Reference: Art. 14

Comment: It is important to us that the provider does not reduce the risks identified in the risk management system according to Article 9 solely through human oversight. Articles 9 and 14 need to be reconciled so that providers reduce risks according to the principle of integrated safety and security (reduce risk first by design, second by described safety and security measures, third by information about residual risks, and fourth by the user). It seems necessary to describe the intended human-machine interface in more detail in order to enable human oversight and to increase explainability, traceability and trust in an AI system.
Reference: Art. 14(5)

Third compromise proposal: For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. The requirement for a separate verification shall not apply to high-risk AI systems used for the purpose of border control, in cases where Union or national law considers the application of this requirement to be disproportionate.

Comment: The DEU security authorities are concerned whether the provisions on the four-eyes principle in Article 14(5) of the draft Regulation in the area of law enforcement could cover cases of application that currently exist for procedures within the meaning of Annex III No. 1(a) of the draft Regulation and, in particular, could also affect constellations in which only one person acts on the side of the authority authorised to intervene, so that the four-eyes principle provided for in the AI Regulation could lead to disproportionate compliance costs in this respect.
DEU welcomes the attempt made in Article 14(5) to exempt AI systems from the dual control obligation. However, the exemption has too high hurdles (formal law). In this respect, the question also arises as to why the exemption is limited to the area of "border control", and which border control processes and decisions should now be covered by Art. 14(5) AI Act. DEU also asks for an explanation of the legal basis of Art. 14(5) and the exception included.
Please also refer to the separate position paper handed in, proposing necessary diverging regulations for public administration (especially LEAs and migration authorities): "Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration".
Reference: Art. 16

Third compromise proposal: Obligations of providers of high-risk AI systems

Comment: In DEU it is being discussed whether further requirements for the protection of confidentiality interests must be established in Art. 16 for cases in which high-risk AI systems of a (private) provider are used in the law enforcement area. Specifically, one solution could be that the information required under Article 16 of the draft Regulation is not transmitted to the national authority by the (private) provider, but rather by the user directly. Please also see our separate paper titled "Regulation of AI – taking greater account of the specific characteristics of the public administration, particularly in the fields of security and migration" on this issue.
Reference: Annex III

Third compromise proposal: (1) AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons without their agreement

Drafting suggestion: Biometric identification systems: … (b) emotion recognition systems, (c) biometric categorisation systems.

Comment: DEU reserves the right to an in-depth comment regarding biometric identification systems at a later stage; final discussions are still ongoing.
DEU asks that biometric categorisation systems and emotion recognition systems be included in Annex III No. 1, as these systems pose comparable risks to fundamental rights as biometric identification systems.
Third compromise proposal: 2. Management and operation of critical infrastructure:

Drafting suggestion: 2. Management and operation of critical infrastructure and protection of environment and emission intensive industries:

Comment: Addition in reference to proposed high-risk area 2(b).

Third compromise proposal: (a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity;

Drafting suggestion: (a) AI systems intended to be used as safety components in the management and operation of critical infrastructure used for the supply of water, gas, heating and electricity and the collection, treatment and discharge of wastewater;

Comment: Remove road traffic here from the list and add a separate point 2(aaa), because of the other EU sector regulation to be referenced there in general terms, e.g. the ITS Directive.
AI use in sanitation may be conceivable, especially concerning wastewater disposal. Failure or malfunction of such AI may cause detrimental effects on health and hygiene on a larger scale. Hence, the functioning of wastewater disposal is essential to public health and as such to public infrastructure, and should be regarded as high-risk.
Third compromise proposal: (aa) AI systems intended to be used to control or as safety components in the management and operation of critical digital infrastructure;

Drafting suggestion: AI systems intended to be used to control or as safety components in the management and operation of critical digital infrastructure, with the exception of process optimisation methods for complex machines and plants, virtual digital assistants, predictive maintenance and programmable logic controllers (PLCs);

Comment: We only consider AI systems used as safety components in the management and operation of critical digital infrastructures (such as process optimisation methods for complex machines and plants, virtual digital assistants, predictive maintenance, programmable logic controllers (PLCs)) to be high-risk AI if their results actually lead directly, i.e. without human validation, to implementation (i.e. autonomously operating systems) or where the results are the only basis for the relevant action or decision to be taken by a human. Therefore, process optimisation methods for complex machines and plants, virtual digital assistants, predictive maintenance and programmable logic controllers (PLCs) should not fall under 2(aa).
It is our understanding that AI systems which do not pose a significant risk to health, safety or fundamental rights are not considered high risk.
We ask the COM to integrate a recital that clearly states which systems are considered critical (digital) infrastructures and to give concrete examples.
Drafting suggestion: (aaa) AI systems intended to be used as safety components in the management and operation of critical infrastructure used for road traffic, if not regulated in sector-specific acts.
(aaaa) AI systems intended to be used in the management and operation of public warning systems as well as AI systems intended to be used as safety components in the management and operation of technical systems for the protection against extreme weather events such as floods and droughts.

Comment: Road traffic is highly regulated, e.g. by the ITS Directive, including delegated acts, and road safety regulation. Those more specific acts (existing and new ones) should prevail. It is also our understanding that AI systems which do not pose a significant risk to health, safety or fundamental rights are not considered high risk.
Public warning systems may, for instance, alert the population based on AI-steered predictions about cases of extreme weather such as floods. With a view to possible large-scale ramifications, especially to human health as well as property, such stand-alone systems should be regarded as posing a high risk. The failure or malfunction of safety components in protection systems against extreme weather events may also result in serious harms. Such systems could encompass e.g. systems for the opening/closing of locks or of transport networks adjacent to water in cases of floods, or reservoirs and barrage dams in cases of droughts.
Third compromise proposal: (b) AI systems intended to be used to control emissions and pollution.

Drafting suggestion: (b) AI systems intended to be used to control industrial activities of energy industries or processing of metals, mineral industry, chemical industry and waste management referred to in the Industrial Emissions Directive (IED) 2010/75/EU.

Comment: AI systems deployed to control activities of emission-intensive industries, which affect the release of substantial amounts of emissions and pollution, pose significant risks of harm to the environment, therefore infringing the fundamental right to a high level of environmental protection and resulting in immediate or mediate risks to health and safety. In particular, technical errors of AI systems which are used to monitor and control operational processes in industrial plants may lead to malfunctions resulting in major environmental damage, such as the release of toxic substances.
Third compromise proposal: 3. Education and vocational training:

Comment: In the view of DEU, it is necessary to sharpen the wording of the use case. For example, what is meant by "educational training institutions"? Does this include both private and public institutions? What exactly is meant by "institution"?

Third compromise proposal: (b) AI systems intended to be used for the purpose of assessing natural persons with a view to evaluating learning outcomes, including when those outcomes are used for steering the learning process of natural persons in educational and vocational training institutions or programmes at all levels, and for assessing participants in tests commonly required for admission to educational institutions.

Comment: What constitutes "steering the learning process"? For instance, does this include mobile applications for learning languages? What are "programmes" in this context?
Third compromise proposal: 4(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation based on individual behavior or personal traits or characteristics and for monitoring and evaluating performance and behavior of persons in such relationships.

Drafting suggestion: AI intended to be used to make decisions on promotion and termination of work-related contractual relationships, for task allocation that is based on individual behavior, or personal traits or characteristics, or potentially affects a natural person's health, safety, fundamental rights or legitimate interests, and to monitor and evaluate performance and behavior of persons in such relationships.

Comment: The addition of "task allocation based on individual behavior or personal traits or characteristics" limits the scope of this use case too much. If only task allocation based on individual traits of a single person is regulated, essentially only discriminatory cases are likely to be covered. However, AI systems for task allocation can also pose other dangers. For example, decisions of AI systems used in warehouses or by transportation platforms can be based on (from a worker's perspective) external factors like customer needs, the type of goods to be transported, traffic, weather or efficiency. If an AI system micro-manages workers with granular instructions, it can lead to a loss of autonomy and dignity for the workers while performing their work. Demanding instructions and shift schedules based on such external factors can be exhausting and stressful for workers.
Third compromise proposal: 5. Access to and enjoyment of essential private services and essential public services and benefits:
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

Comment: In our opinion, statutory social insurance schemes (e.g. pension insurance, health and long-term care insurance) are covered by Annex III No. 5(a). Also covered are insurance policies in which property-like entitlements to social benefits are acquired. Does COM agree?
Third compromise proposal: (b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by providers that are micro and small-sized enterprises as defined in the Annex of Commission Recommendation 2003/361/EC for their own use;

Comment: It is our understanding that AI systems used by credit agencies to establish a credit score for natural persons which will be used for other purposes than the evaluation of their creditworthiness (e.g. access to essential services such as housing, electricity and telecommunication services) do fall under No. 5(b) and are therefore considered high-risk. This is important to us because these systems have a significant impact on the lives of natural persons. Flawed systems pose a significant threat to people's ability to participate fully in society. In that sense, we ask the Pres/COM to further specify which processes are precisely covered by the use cases "evaluation of creditworthiness" and "establishment of credit scores" (e.g. access to Buy-Now-Pay-Later offerings), as well as to clarify which entities are subject to this high-risk use case.
On the other hand, entities already regulated by comprehensive financial sector regulation should only be included insofar as the AI Act poses additional requirements, e.g. reporting provisions on fundamental rights issues, the EU database or (potential) obligations towards affected persons (see Art. 52a and 52b).
Concerning Annex III points 5(b) and (d), we ask the Pres/COM to thoroughly analyse and present to the WP whether, and in which areas, the existing European financial and insurance sector regulation sufficiently covers the regulatory areas covered by the AI Act, to avoid regulatory gaps and duplication with existing regulation.
Third compromise proposal: (d) AI systems intended to be used for insurance premium setting, underwritings and claims assessments.

Drafting suggestion: (d) AI systems not covered under (a) intended to be used for health insurance and long-term care insurance premium setting, underwriting and claim assessment or for decisions on the provision of benefits and services.

Comment: AI systems for health insurance and long-term care insurance must be added as cases of high-risk AI. Highly sensitive data is processed in this area, and decisions in this area can have particularly far-reaching consequences. This is necessary despite the fact that:
- AI systems used by institutions covered by Union and Member State financial market regulation for insurance premium setting, underwriting and claims assessment are subject to extensive regulation and strict supervision, and
- insurance services provided by insurance companies covered by sector-specific financial market regulation should in principle not be included in the AI Act, as also stated by EIOPA.
Concerning Annex III points 5(b) and (d), we ask the Pres/COM to thoroughly analyse and present to the WP whether, and in which areas, the existing European financial and insurance sector regulation sufficiently covers the regulatory areas covered by the AI Act, to avoid regulatory gaps and duplication with existing regulation.
With regard to the term "premium setting", we assume that the reporting procedure of the statutory social insurance is not covered by the term "premium setting" insofar as it is carried out by the health insurance.
Drafting suggestion: (new e) AI systems intended to be used in access to housing.

Comment: Housing is correctly named as an essential service in the recitals and should therefore be included here. It is an area where EU anti-discrimination legislation applies, and also objectively one of the main areas where discrimination occurs and where robust protections are needed. This justifies classifying AI used in this area as high-risk.

Drafting suggestion: (new e) AI systems intended for or used in the context of debt collection services.

Comment: These systems could be potentially harmful for vulnerable persons who are indebted. If the AI system makes a mistake in this context, this could exclude the indebted person from participation in the economy.

Drafting suggestion: (new f) AI systems intended for personalised pricing within the meaning of Article 6(1)(ea) of Directive 2011/83/EU.

Comment: AI systems used for personalising prices could potentially discriminate against consumers based on ethnicity, income and other variables. This could lead to a divide between consumers on the market and decrease economic price transparency.
Third compromise proposal: 6. Law enforcement:
(d) AI systems intended to be used by law enforcement authorities or on their behalf to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences;

Comment: We ask for further clarification. The description of AI systems covered by lit. (d) should be clear-cut. It must be ensured that systems without risk to health, safety or fundamental rights are not covered. For DEU, it is very important that lit. (d) is defined more narrowly in this respect. At the same time, systems that pose a risk to the above-mentioned protected interests must remain covered.
DEU discusses whether the definition provided in Article 3(4) of Directive (EU) 2016/680 is too broad for the classification as high-risk AI, and whether a definition should be included in the Regulation itself, or a concrete description of the facts deemed critical in Annex III. For DEU, for example, it is important in this context that this definition does not include in particular the tasks of an FIU in the sense of "the core function of an FIU is the receipt, analysis and transmitting of suspicious transaction reports identified and filed by the private sector". This is especially true if these suspicious transaction reports are related to financial transactions of natural persons. What do the Commission or other Member States think about the need to clarify (f)?
Third compromise proposal: 7. Migration, asylum and border control management:
(d) AI systems intended to be used by competent public authorities or on their behalf to examine applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.

Comment: We ask for further clarification. The description of AI systems covered by lit. (d) should be clear-cut. It must be ensured that systems without risk to health, safety or fundamental rights are not covered. For DEU, it is very important that lit. (d) is defined more narrowly in this respect. At the same time, systems that pose a risk to the above-mentioned protected interests must remain covered.
Third compromise proposal: (8) Administration of justice and democratic processes:
(a) AI systems intended to be used by a judicial authority or on their behalf for researching and interpreting facts and the law and for applying the law to a concrete set of facts.

Drafting suggestion: applying the law to a concrete set of facts whenever these systems provide predictions, recommendations or suggestions to the user with respect to a specific case.

Comment: In order to more clearly distinguish AI systems that are classified as high-risk from AI systems that are intended for purely ancillary activities and that therefore have no direct impact on the decision of a specific case, we suggest a further addition in Annex III No. 8(a). Alternatively, this could be added in Recital 40.
For specifying the relevant provision, please indicate the relevant Article or Recital in the 1st column and copy the relevant sentence or sentences as they are in the current version of the text into the 2nd column. For drafting suggestions, please copy the relevant sentence or sentences from a given paragraph or point into the 3rd column and add or remove text. Please do not use track changes, but highlight your additions in yellow or use strikethrough to indicate deletions. You do not need to copy entire paragraphs or points to indicate your changes; copying and modifying the relevant sentences is sufficient. For providing an explanation and the reasoning behind your proposal, please use the 4th column.