Kaspersky calls for ethical use of AI in cybersecurity

We propose six principles of ethical use of AI in the cybersecurity industry — to be presented at the global Internet Governance Forum.


The rapid development of AI systems, and the push to deploy them everywhere, is a source of both optimism and concern. AI can help humans in many different areas, as the cybersecurity industry knows firsthand. We at Kaspersky have been using machine learning (ML) for almost 20 years, and we know for a fact that without AI systems it's simply not possible to defend against the huge array of cyberthreats out there. Over this time we've also identified a wide range of issues associated with AI, from training it on flawed data, to malicious attacks on AI systems, to the use of AI for unethical purposes.

Various international organizations and discussion platforms have already developed general principles for ethical AI (the UNESCO recommendations, for example), but more specific guidelines for the cybersecurity industry have yet to be commonly accepted.

To apply AI in cybersecurity without negative consequences, we propose that the industry adopt a set of ethical principles for AI, the first version of which we are presenting at the UN Internet Governance Forum in Kyoto, Japan. These principles will, of course, need to be discussed and refined by the wider cybersecurity community, but we are already adhering to them ourselves. Here they are in brief.

Transparency

Users have the right to know if a security provider uses AI systems, as well as how these systems make decisions and for what purposes. This is why we are committed to developing AI systems that are interpretable to the maximum extent possible, with all necessary safeguards in place to ensure they produce valid outcomes. Anyone can get acquainted with our code and workflows by visiting one of our Kaspersky Transparency Centers.
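By way of illustration only, here's a minimal sketch of what an interpretable verdict can look like: a simple linear classifier over file features whose per-feature contributions are reported alongside the verdict itself. The feature names and training data are invented for the example and don't describe our actual models.

```python
# Illustrative sketch: explaining a detection verdict with an interpretable
# linear model. All feature names and data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["entropy", "num_imports", "is_packed", "writes_registry"]

# Hypothetical training data: rows are files, columns are the features above.
X = np.array([
    [7.8, 3, 1, 1],   # packed, high-entropy sample -> malicious
    [4.1, 42, 0, 0],  # ordinary application        -> benign
    [7.5, 5, 1, 1],
    [3.9, 57, 0, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = malicious, 0 = benign

model = LogisticRegression().fit(X, y)

# For a new sample, report not just the verdict but why: each feature's
# signed contribution (weight * value) to the decision function.
sample = np.array([7.6, 4, 1, 1])
contributions = model.coef_[0] * sample
verdict = model.predict(sample.reshape(1, -1))[0]

print(f"verdict: {'malicious' if verdict else 'benign'}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```

The point of the sketch is the output format, not the model: a verdict a user can interrogate feature by feature is far easier to audit than a bare score.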

Safety

Among the threats facing AI systems is the manipulation of input datasets to produce inappropriate decisions. Therefore, we believe that AI developers must prioritize resilience and security.

To this end, we adopt a whole range of practical measures to deliver high-quality AI systems: AI-specific security audits and red teaming; minimal use of third-party datasets in training; and an array of technologies for multilayered protection. Where possible, we favor cloud-based AI (with all the necessary safeguards in place) over locally installed models.
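As a simplified illustration of one such resilience measure, the sketch below screens a training set for anomalous samples before a model is fitted, using a generic outlier detector. The data, contamination rate, and detector choice are assumptions made for the example, not a description of our production pipeline.

```python
# Illustrative sketch: screening a training set for anomalous (potentially
# poisoned) samples before model training. Parameters are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 8))    # typical samples
poisoned = rng.normal(loc=6.0, scale=0.5, size=(10, 8))  # injected outliers
X_train = np.vstack([clean, poisoned])

detector = IsolationForest(contamination=0.05, random_state=0).fit(X_train)
mask = detector.predict(X_train) == 1  # 1 = inlier, -1 = suspected outlier

X_sanitized = X_train[mask]
print(f"kept {mask.sum()} of {len(X_train)} samples; "
      f"dropped {(~mask).sum()} suspected outliers")
```

A real pipeline layers many such checks on top of one another, alongside audits and red teaming; no single filter catches every poisoning attempt.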


Human control

Although our ML systems can operate autonomously, their results and performance are constantly monitored by experts. Verdicts of our automated systems are fine-tuned as required, and the systems themselves are adapted and modified by experts to resist fundamentally new and/or highly sophisticated cyberthreats. We combine ML with human expertise, and we are committed to always maintaining this element of human control in our systems.
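A minimal sketch of the idea, with a hypothetical threshold and queue names invented for the example: automated verdicts above a confidence level are accepted as-is, while borderline cases are routed to a human analyst.

```python
# Illustrative sketch of a human-in-the-loop gate: high-confidence verdicts
# are accepted automatically; everything else is queued for expert review.
# The threshold and data structures are hypothetical.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.95

@dataclass
class Triage:
    auto_verdicts: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def route(self, sample_id: str, label: str, confidence: float) -> None:
        if confidence >= CONFIDENCE_THRESHOLD:
            self.auto_verdicts.append((sample_id, label))
        else:
            # Low-confidence cases go to a human analyst, whose decision
            # can also be fed back into the system as new training data.
            self.review_queue.append((sample_id, label, confidence))

triage = Triage()
triage.route("file-001", "malicious", 0.99)  # auto-accepted
triage.route("file-002", "benign", 0.71)     # sent to an analyst
print(len(triage.auto_verdicts), "automatic,",
      len(triage.review_queue), "for review")
```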

Privacy

AI cannot be trained without big data, some of which may be personal. An ethical approach to its use must therefore respect individuals' right to privacy. In information-security practice, this can involve a variety of measures: limiting the types and quantity of data processed; pseudonymization and anonymization; narrowing the composition of collected data; ensuring data integrity; and applying technical and organizational measures to protect it.
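For illustration, here's a minimal sketch of two of these measures, pseudonymization and data minimization, applied to a hypothetical telemetry record. The field names, key handling, and record layout are assumptions for the example; real deployments also cover retention, integrity, and organizational controls.

```python
# Illustrative sketch: pseudonymize a direct identifier with a keyed hash and
# drop fields that aren't needed, before telemetry leaves the device.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-regularly"  # hypothetical; stored separately from data

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed hash: the same input maps to the same pseudonym,
    but the original value cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {
    "device_id": "user-laptop-42",  # direct identifier
    "os_version": "11.0",           # needed for threat statistics
    "battery_level": 87,            # not needed: dropped (minimization)
}

ALLOWED_FIELDS = {"device_id", "os_version"}  # data minimization
sanitized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
sanitized["device_id"] = pseudonymize(sanitized["device_id"])
print(sanitized)
```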


Developed for cybersecurity

AI in cybersecurity must be used solely for defensive purposes. This forms an integral part of our mission to build a secure world in which tomorrow’s technologies enhance all our lives.

Open for dialogue

We believe that only through working together can we overcome the obstacles associated with the adoption and use of AI for security. For this reason, we promote dialogue with all stakeholders to share best practices in the ethical use of AI.

Read more about our principles of ethical use of AI in security.
