
AI Risk Management in the Information Security Context
The advancement of artificial intelligence (AI) is fundamentally transforming how organizations and individuals approach information security. While current AI systems, particularly large language models (LLMs) and automated decision-making tools, bring innovation and benefits, they also introduce new risks that can compromise the confidentiality, integrity, availability, and authenticity of information. This paper presents a systematic approach to managing risks associated with AI systems and their impact on organizational information security.
AI technologies represent powerful tools that come with specific vulnerabilities. These include systematic biases in models, potential exploitation by malicious actors, and threats to user privacy. The paper analyzes key risk areas, including model discrimination and toxicity, with emphasis on documented cases of AI system failures in critical decision-making processes.
The paper further addresses privacy and security concerns, such as attacks on AI models, training data manipulation, and insufficient dataset protection. Additional risks discussed include the spread of disinformation through deepfakes, synthetic media, and automated platforms, all of which can undermine societal trust. Special attention is given to malicious actors who use AI for cyber attacks, social engineering, and financial fraud. The overview also covers risks in human-AI interaction, where excessive trust in AI systems can lead to erroneous decisions or loss of autonomy.
The paper proposes a strategic approach to AI risk management, encompassing systematic risk analysis, prioritization based on impact and probability, and the design of appropriate mitigation strategies. The presented framework combines technical measures, such as model safeguards and monitoring; process controls, including auditing and continuous improvement; and, importantly, a governance framework that defines roles and responsibilities. The paper also presents practical procedures for risk identification and mitigation.
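To make the prioritization step concrete, the following minimal Python sketch illustrates one common scoring convention, a risk-matrix score computed as impact times likelihood on a 1-5 scale. The risk entries, scales, and AIRisk class are illustrative assumptions for this sketch, not elements of the proposed framework.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    impact: int      # 1 (negligible) .. 5 (critical); illustrative scale
    likelihood: int  # 1 (rare) .. 5 (almost certain); illustrative scale

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: impact x likelihood.
        return self.impact * self.likelihood

# Illustrative entries only; a real register would come from the
# organization's own risk analysis.
register = [
    AIRisk("Training data poisoning", impact=4, likelihood=2),
    AIRisk("Prompt injection via user input", impact=3, likelihood=4),
    AIRisk("Sensitive data leakage to cloud AI service", impact=5, likelihood=3),
]

# Prioritize mitigation work: highest combined score first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

In practice the same scoring feeds the mitigation design: risks above an organization-defined threshold get dedicated controls, while lower-scoring items may be accepted or monitored.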
Conference participants will gain an overview of current trends, best practices, and available standards in AI risk management. The aim is not only to raise awareness of these issues but also to provide concrete steps for implementing security measures in organizations of various sizes and sectors.
The practical implementation section addresses three key perspectives. The organizational view outlines how to implement controls aligned with AI Act requirements for both on-premise and cloud AI services, including a basic evaluation of AI service dependencies. The AI solution provider's perspective presents core security requirements for AI system development and maintenance, covering both on-premise enterprise solutions and AI cloud services. The individual user's perspective provides essential guidelines for the secure use of generative AI, including data handling practices and output validation.
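As an illustration of the output-validation guideline for individual users, the sketch below shows a simple check of generative AI output before it is used or shared. The validate_output helper and the specific patterns are hypothetical examples assumed for this sketch; a real deployment would apply the organization's own data-loss-prevention rules.

```python
import re

# Hypothetical patterns for material that should be reviewed before
# AI-generated text leaves the organization.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN-like string": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "API key marker": re.compile(r"(api[_-]?key|secret)\s*[:=]", re.IGNORECASE),
}

def validate_output(text: str) -> list[str]:
    """Return the labels of all patterns found; empty list means no match."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Contact me at jane.doe@example.com, api_key = abc123"
findings = validate_output(draft)
if findings:
    print("Review before use, flagged:", ", ".join(findings))
```

Pattern matching of this kind catches only obvious leaks; the guideline in the paper also calls for human review of factual claims, since generated output can be fluent yet wrong.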
The paper draws on the latest findings and practical case studies, offering inspiration for organizations seeking to use AI technologies responsibly. Its outputs include practical elements such as surveys of AI risk experiences, recommended resources, and standards for risk management in the rapidly evolving world of AI.
Jiří Diepolt
Jiří Diepolt is a leading expert in artificial intelligence risk management and information security, combining deep technical knowledge with practical experience in implementing AI governance frameworks. With over 25 years of experience in IT security and risk management, he specializes in identifying and mitigating risks associated with AI systems, particularly in financial services and critical infrastructure sectors.
His expertise ranges from traditional information security to cutting-edge AI challenges, including bias detection, model security, and AI compliance frameworks. As an independent IT advisor and auditor, he helps organizations navigate the complex landscape of AI implementation while ensuring robust security measures and regulatory compliance, including ISO 27001 certification processes and AI-specific risk assessments.
In his previous role as Member of the Board and COO/CIO at NEY spořitelní družstvo, he established comprehensive IT and information security frameworks. During his tenure at KPMG as Senior Advisor, he led numerous information security engagements, including security management reviews, IT security audits, and risk analyses.
He currently focuses on developing and implementing AI risk management frameworks, DORA & NIS2 compliance, and delivering specialized training programs in these areas. His recent work includes pioneering methodologies for AI risk assessment and governance in financial institutions.
He graduated from the University of Economics, Prague, Faculty of Informatics and Statistics, and actively contributes to professional discussions about emerging AI challenges, cybersecurity, and risk management practices.
