Artificial Intelligence and Cybersecurity (Legal Aspects)
The development of artificial intelligence (or the extent to which today's systems, often based on machine learning principles, are labelled artificial intelligence) brings significant legal challenges, whether in liability for AI systems, copyright, privacy, data processing, or cybersecurity. Cybersecurity plays a crucial role in ensuring that AI systems are resilient to attempts to alter their use, behaviour, performance or characteristics.
Cyber-attacks on AI systems can exploit AI-specific assets, such as training datasets (e.g., through disruption of these datasets) or trained models (e.g., through adversarial attacks), or they can exploit weaknesses in the AI system's digital assets or underlying ICT infrastructure. Providers of AI systems therefore need to take appropriate measures to ensure a level of cybersecurity that reflects the risks of each specific system. From a legal perspective, the set of security measures should not be prescribed dogmatically: the legislator usually lacks the necessary technical knowledge, and any predetermined set of measures would significantly constrain the further development of cybersecurity and could, as a result, undermine the security of individual systems.
Dominik is a senior associate at Pierstone, a law firm, where he advises on all aspects of privacy, cybersecurity, telecommunications, ICT and IP law. He regularly publishes and lectures on a wide variety of topics spanning privacy, technology, digitalization and artificial intelligence. He co-authored the academic commentaries on the GDPR and its Czech implementing law prepared and published by Pierstone.