This is an HTML version of an attachment to the access to documents request 'EU Agencies and the AI Act'.

Ref. Ares(2021)6483454 - 21/10/2021
Personal data

A united and strengthened research and innovation community striving for excellence
Joining forces at all levels, from basic research to deployment, will be key to overcoming fragmentation and creating synergies between the existing
networks of excellence.
In your opinion how important are the three actions proposed in sections 4.B, 4.C and 4.E of the White Paper on AI (1-5: 1 is not important at
all, 5 is very important)?
Are there any other actions to strengthen the research and innovation community that should be given a priority?
Focusing on Small and Medium Enterprises (SMEs)
The Commission will work with Member States to ensure that at least one digital innovation hub per Member State has a high degree of
specialisation on AI.
In your opinion, how important is each of these tasks of the specialised Digital Innovation Hubs mentioned in section 4.D of the White Paper in
relation to SMEs (1-5: 1 is not important at all, 5 is very important)?
Are there any other tasks that you consider important for specialised Digital Innovation Hubs?
Section 2 - An ecosystem of trust
Chapter 5 of the White Paper sets out options for a regulatory framework for AI.
In your opinion, how important are the following concerns about AI (1-5: 1 is not important at all, 5 is very important)?
Do you have any other concerns about AI that are not mentioned above? Please specify:
Do you think that the concerns expressed above can be addressed by applicable EU legislation? If not, do you think that there should be
specific new rules for AI systems?
Other, please specify
If you think that new rules are necessary for AI systems, do you agree that the introduction of new compulsory requirements should be limited to
high-risk applications (where the possible harm caused by the AI system is particularly high)?
Other, please specify:
Do you agree with the approach to determine “high-risk” AI applications proposed in Section 5.B of the White Paper?
Other, please specify:
If you wish, please indicate the AI application or use that is most concerning (“high-risk”) from your perspective:
In your opinion, how important are the following mandatory requirements of a possible future regulatory framework for AI (see section 5.D of the
White Paper) (1-5: 1 is not important at all, 5 is very important)?
In addition to the existing EU legislation, in particular the data protection framework, including the General Data Protection Regulation and the
Law Enforcement Directive, or, where relevant, the new possibly mandatory requirements foreseen above (see question above), do you think
that the use of remote biometric identification systems (e.g. face recognition) and other technologies which may be used in public spaces needs
to be subject to further EU-level guidelines or regulation?
Please specify your answer:
Do you believe that a voluntary labelling system (Section 5.G of the White Paper) would be useful for AI systems that are not considered
high-risk, in addition to existing legislation?
Do you have any further suggestion on a voluntary labelling system?
What is the best way to ensure that AI is trustworthy, secure and respectful of European values and rules?
Please specify any other enforcement system:
Do you have any further suggestion on the assessment of compliance?
Section 3 – Safety and liability implications of AI, IoT and robotics

The overall objective of the safety and liability legal frameworks is to ensure that all products and services, including those integrating emerging
digital technologies, operate safely, reliably and consistently and that damage having occurred is remedied efficiently.
The current product safety legislation already supports an extended concept of safety protecting against all kinds of risks arising from the product
according to its use. However, which particular risks stemming from the use of artificial intelligence do you think should be further spelled out to
provide more legal certainty?
In your opinion, are there any further risks to be expanded on to provide more legal certainty?
Do you think that the safety legislative framework should consider new risk assessment procedures for products subject to important changes
during their lifetime?
Do you have any further considerations regarding risk assessment procedures?
Do you think that the current EU legislative framework for liability (Product Liability Directive) should be amended to better cover the risks
engendered by certain AI applications?
Do you have any further considerations regarding the question above?
Do you think that the current national liability rules should be adapted for the operation of AI to better ensure proper compensation for damage
and a fair allocation of liability?
Please specify the AI applications:
Do you have any further considerations regarding the question above?
Thank you for your contribution to this questionnaire. In case you want to share further ideas on these topics, you can upload a document here:
Related publication

External reference AIConsult2020
Type Public consultation
Lead Service CNECT
Full title Consultation on the White Paper on Artificial Intelligence - A European Approach
Short title White Paper on Artificial Intelligence - a European Approach
Feedback start date 20/02/2020 12:42:26
Number of weeks 16
Feedback end date 14/06/2020 00:00:00
Internal reference NOPLAN/2020/0001
Target groups The European Commission wishes to consult stakeholders with an interest in artificial intelligence (AI):
• AI developers and deployers
• companies and business organisations
• Small and Medium-sized Enterprises (SMEs)
• public administrations
• civil society organisations
• academics
• citizens
Consultation objective The public consultation aims to give stakeholders the opportunity to express their views on the questions
raised and policy options proposed in the White Paper on Artificial Intelligence.