Advocating for the secure use of AI technologies, Kaspersky has signed the AI Pact, a European Commission initiative that aims to prepare organizations for the implementation of the AI Act – the first-ever comprehensive legal framework on AI worldwide, adopted by the European Union (EU). Signing the pledge reflects Kaspersky’s wider commitment to promoting and facilitating the prudent and responsible use of AI technologies, recognizing the importance of this stance within the cybersecurity field.
The EU AI Act seeks to encourage trustworthy AI in the European region and beyond by ensuring that AI technologies comply with safety and ethical principles and by addressing AI-associated risks. Enacted in 2024, the AI Act is set to become fully applicable[1] in mid-2026. The AI Pact aims to ease the transition to the new regulation by inviting organizations to proactively work on implementing the AI Act’s key provisions.
By signing the pledge, Kaspersky has taken on three core commitments relating to AI technology use, namely to:
· adopt an AI governance strategy to foster the uptake of AI in the company and work towards future compliance with the AI Act;
· carry out a mapping of AI systems provided or deployed in areas that would be considered high-risk under the AI Act;
· promote awareness and AI literacy among the company’s staff and other persons dealing with AI systems on its behalf, taking into account their technical knowledge, experience, education, and training, as well as the context in which the AI systems are to be used.
In addition to the core commitments, Kaspersky has pledged to profile foreseeable risks to the rights of persons who might be affected by the use of AI systems, ensure that individuals are informed when they are interacting directly with an AI system, and inform employees about the deployment of AI systems in the workplace.
"As we witness the rapid deployment of AI technologies, it's
crucial to ensure that the drive for innovation is balanced with proper risk
management," comments Eugene Kaspersky, founder and CEO of
Kaspersky. "Having been an advocate for AI literacy and
the sharing of knowledge about AI-related risks and threats for years, we're
happy to join the ranks of organizations working to help companies responsibly
and securely benefit from AI technologies. We'll be working to further advance
transparent and ethical AI practices and contribute to building confidence in
this technology."
Kaspersky has vast experience employing AI in its protection technologies. For close to 20 years, the company has used AI-powered automation to enhance the security and privacy of its customers and to detect the widest range of cyberthreats. With AI systems becoming increasingly prevalent, Kaspersky is committed to sharing the expertise accumulated in its AI Technology Research Center to ensure that organizations are well positioned to tackle the risks that come with AI system deployment.
To assist practitioners in implementing AI systems and equip them with specific recommendations on how to deploy AI securely, Kaspersky experts have developed the “Guidelines for Secure Development and Deployment of AI Systems,” which were presented during the 2024 UN Internet Governance Forum. Beyond security considerations, Kaspersky emphasizes the ethical use of AI technologies and has formulated principles for the ethical use of AI systems in cybersecurity, inviting other cybersecurity providers to adopt and follow them.
About Kaspersky AI Technology Research Center
Our experts at the Kaspersky AI Technology Research Center have been working with AI in cybersecurity and Secure AI for almost 20 years to help discover and counter the broadest range of threats. Our team contributes AI expertise, based on their research, to enhance our solutions, from AI-powered threat detection and alert triage to GenAI-powered Threat Intelligence.
[1] With certain exceptions outlined in the regulation: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689