
Consultation on the inception impact
assessment for adapting liability rules
to the digital age and circular economy
Google’s Comments
Google welcomes the consultation on the inception impact assessment (IIA) regarding adapting liability rules for software and artificial intelligence, and the opportunity to provide feedback as part of a multi-stakeholder discourse on this important issue.
Overall, Google believes that Europe’s current liability framework remains fit for purpose, being both effective and technology neutral, so sweeping changes should be approached with caution. While the evaluation of the Product Liability Directive (the Directive) identified hypothetical challenges under the existing framework, we have yet to see real-world evidence of problems that warrant altering such a fundamental underpinning of European law and running the risk of severe unintended consequences. In particular, a strict liability regime is unnecessary and ill-suited to the properties of software and AI systems, and would have a profound chilling effect on innovation and digitization in Europe, disproportionately impacting European SMEs.
In addition, any potential changes to liability rules should take into account the effects of the recently proposed AI Act (AIA). The AIA introduces additional obligations on providers and users of high-risk AI applications intended to increase the safety and trustworthiness of AI systems put on the EU market. This will address many of the potential problems identified in the evaluation and IIA. Rather than prematurely adding new rules on liability that could create redundant or conflicting requirements and add further complexity, the Commission should, at a minimum, evaluate the impact of the AIA on the gaps identified in the evaluation before introducing any changes to liability frameworks.
Additional points in response to the two categories of policy options proposed by the Commission in the IIA are included below:
1. Options to adapt strict liability rules to the digital age and
circular economy
Any initiative to introduce “strict liability” for AI systems/digital content/software should be approached with great caution. Globally, strict liability frameworks are reserved for abnormally hazardous situations, as they preclude any consideration of intent or negligence. Introducing strict liability would mean that anyone involved in developing or operating an AI system could be held liable for problems they had no awareness of or influence over. This could lead to misplaced responsibility if the AI system was simply a conduit rather than the source of harm (for example, if an operator used an automatic translation system that mistranslated medical advice, even though the system is not intended for medical use).
Furthermore, while software providers do extensive testing and debugging before releasing software, bugs almost always become apparent over time and are fixed by updates or in later releases. This includes bugs that create cyber vulnerabilities in software that can be exploited by malicious actors. Despite decades of effort, it has proven impossible to entirely eradicate bugs from software due to the complexity of writing code, and this is generally accepted as an inherent feature of software development. If software developers are subject to strict liability for any bug in their code, it could effectively forestall the deployment of virtually all software in Europe, and disproportionately impact European SMEs.
Applying strict liability to software updates and refurbishments would further disincentivize software deployment and maintenance by effectively removing any time limitation on strict liability, making it harder for producers to extend the useful lives of digital products and address bugs and vulnerabilities in software. Such a drastic change would destroy the current well-functioning balance struck between business innovation and consumer protection, and is unnecessary.¹ Damages due to defects that occurred after a product has been put into circulation can already be covered under national tort or delict laws. In addition, software providers have limited control over security updates actually being accepted by end users, meaning this approach could make software providers directly liable for the omissions of other market participants.

¹ See e.g. Astrid Seehafer and Joel Kohler, “Künstliche Intelligenz: Updates für das Produkthaftungsrecht?”, EuZW Heft 6/2020, 213.
There are also fundamental issues with the notion of extending the types of damage for which losses are recoverable via strict liability to non-material damages. Doing so would put a disproportionate burden on software developers. For example, if a software crash led to the loss of a term paper and a student failing their class, the software developer could be sued for compensation of all the consequential damages, e.g. an additional year of studies and lost income. Where non-material damages are claimed under a fault-based or breach of contract claim, they are only recoverable if the claimant can prove that the losses were caused by the relevant failure of standard of care or breach of contract, and were not too remote (amongst other factors). Applying strict liability equitably to these less proximate forms of damage will be effectively impossible without the kind of detailed analysis (factual and legal) of the relationship between cause and effect that occurs in a fault-based or contract claim. Strict liability is only appropriate in the clearer-cut cases of personal injury and damage to property that have direct and severe consequences for consumers.
Furthermore, many of the specific types of non-material damages referenced in the IIA raise their own challenges. For example, damages for privacy infringements are notoriously difficult to quantify, and the procedural burdens for all parties involved would be disproportionate. These risks to consumers are also already covered by the GDPR, which imposes severe penalties, providing a strong incentive for producers to protect their customers’ privacy and personal data. Environmental damages, as another example, would raise particularly complex issues of causation and remoteness that would be difficult to address under a strict liability regime. It is also hard to imagine how environmental damage could be associated with an individual consumer in a way that would be appropriate to a consumer protection framework like the Directive.
Rather than revise the Directive as proposed and risk exposing a wide swath of intangible products, including software, to strict liability, a sensible middle ground would be to clarify when software should be treated as a quasi-product. In Google’s view, this should apply to software that is used in a manner more like a product than a service, and which has the potential to cause physical damage to persons or property. Such software will normally be subject to special regulation already; an example is software used as a medical device, which is already treated as a quasi-product under the Medical Devices Regulation. Adherence to relevant safety standards under existing regulation could serve as the basis for case-by-case exemptions, similar to the approach taken in the AIA. To deter unreasonable claims against AI system operators, there should also be an exemption for cases where evidence shows that an accident was caused by another party or “force majeure”.
As far as the harmonisation of strict liability of operators/users of AI systems is concerned, clarity around scope would be vital to provide legal certainty for AI system operators. While having potentially conflicting definitions of “high risk” compared to the AIA is not ideal, it is a reasonable compromise given that the assessment of liability by nature requires a narrower, compensation-oriented framing than more general ex-ante regulation. A possible approach could be to provide an exhaustive list of “high risk” AI applications which play a significant role in situations where strict liability already applies (e.g., nuclear power plants, aviation). Adherence to existing safety standards that mitigate the heightened risk could provide exemptions on a case-by-case basis, similar to the approach taken in the AIA. It should be acknowledged, however, that most AI systems referred to as having a “specific risk profile” are already subject to national sectoral strict liability rules, many of which remain unharmonised. Even motor vehicle liability, presented by the Commission as a guiding example, is not harmonised, even though it presents severe safety risks.
2. Other options to address proof-related and procedural obstacles to getting compensation
As a general rule, we believe that alleged victims should continue to be required to prove what
caused them harm under the liability framework. The strict liability regime of the Directive is
designed to protect consumers, not to provide a simple route for consumers to obtain judicial
remedies, beyond the strict liability itself. Any proposals to reduce obstacles should consider
the advantageous rights and remedies claimants already have under the Directive.
The burden of proving causation should only be altered if, given the properties of a specific AI system, establishing proof would create an unreasonable obstacle for the alleged victim. In making this determination, factors to take into account include the likelihood that the technology contributed to the harm (e.g., if there are known defects), the nature and scale of the harm claimed, and the degree of ex-post traceability of contributing processes within the technology.
The proposed AIA is designed to address many of the perceived challenges relating to how the properties of AI impact the burden of proof by introducing mandatory requirements around risk management, data and data governance, technical documentation, record-keeping, transparency and provision of information to users, human oversight, as well as accuracy, robustness and cybersecurity. For example, requirements around documentation, transparency and the provision of information to users are specifically designed to address the issues around opacity and complexity, which should eliminate or drastically reduce the need for the proof-related and procedural amendments to the liability framework. Any changes to the liability framework should reflect a rigorous analysis of real-world problems that persist after the AIA is in place.
Finally, changing the conditions under which claims can be made under the Directive is unnecessary and carries substantial risks. The rationale for a minimum damage threshold in the Directive was to avoid overburdening the courts with frivolous litigation. This risk is arguably greater for software-related claims, given that a defect in a single line of code (out of millions) that was used in millions of devices could (under this proposal) be the basis for a strict liability claim from each individual user. Removing the threshold is also unnecessary, since nothing would prevent consumers from bringing actions in the national courts for lower-value claims; they simply cannot rely on strict liability under the Directive when doing so. Similarly, the Directive stresses that it is in the interests of both the injured person and of the producer to have a uniform period of limitation for bringing a claim. The limitation period provides an incentive to diligently pursue damages claims and provides legal certainty to all parties involved.
Google thanks the Commission for the opportunity to provide feedback, and would welcome the opportunity to discuss these points further.