Cyber Range Technology Enables Compliance with AI Regulations

As AI systems become more sophisticated and impactful across a wide range of applications, from self-driving cars to healthcare to finance, it is important to ensure that they are used safely and ethically. AI systems can make decisions or take actions with unintended consequences that harm individuals or society. Moreover, these systems are often complex and opaque, which makes it difficult to hold them accountable under traditional liability regimes. To address these emerging concerns, governments around the world are adopting AI liability regulations that require extensive testing of AI systems in controlled virtual environments.

On September 28, 2022, the European Commission adopted two proposals related to AI liability. The first updates the existing rules on the strict liability of manufacturers for defective products, including smart technology and pharmaceuticals; it is intended to give businesses the legal certainty to invest in new and innovative products and to ensure that victims receive fair compensation for damage caused by defective products, including digital and refurbished ones. The second proposal is a first-of-its-kind effort to harmonize national liability rules for AI across the European Union, making it easier for victims of AI-related damage to obtain compensation. Together, the new rules are meant to ensure that victims have the same level of protection when harmed by AI products or services as they would in any other situation.

The new rules are in line with the objectives of the AI White Paper and the Commission’s 2021 AI Act proposal, which adopts a risk-based approach to the use of AI systems. For example, high-risk AI systems are subject to strict obligations before they can be put on the market, including adequate risk assessment and mitigation systems, high-quality datasets to minimize risks and discriminatory outcomes, logging of activity to ensure traceability, clear and adequate information for the user, and appropriate human oversight measures. AI systems considered high-risk include those used in critical infrastructure (such as transport), educational or vocational training, safety components of products, employment, essential private and public services, law enforcement, migration and border control, administration of justice and democratic processes, and remote biometric identification systems.
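To make the traceability obligation more concrete, here is a minimal sketch of audit logging wrapped around a model's prediction path. The class name, log fields, and the toy credit-scoring function are illustrative assumptions rather than anything prescribed by the AI Act or drawn from a particular product; a real deployment would write to tamper-evident storage and treat sensitive inputs appropriately.

```python
import json
import logging
import time
import uuid

# Hypothetical illustration of "logging of activity to ensure traceability":
# every prediction is recorded with the inputs, outputs, model version, and
# the operator responsible, so a decision can later be reconstructed.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")


class TraceableModel:
    """Wraps a prediction function and emits one audit record per call."""

    def __init__(self, model_name, model_version, predict_fn):
        self.model_name = model_name
        self.model_version = model_version
        self.predict_fn = predict_fn

    def predict(self, features, operator_id=None):
        request_id = str(uuid.uuid4())
        started = time.time()
        output = self.predict_fn(features)
        audit_log.info(json.dumps({
            "request_id": request_id,        # links the decision to this record
            "timestamp": started,
            "model": self.model_name,
            "version": self.model_version,
            "input": features,               # or a hash, if the data is sensitive
            "output": output,
            "operator_id": operator_id,      # supports human-oversight review
            "latency_s": round(time.time() - started, 4),
        }))
        return output


if __name__ == "__main__":
    # Toy scoring function standing in for a real high-risk AI component.
    model = TraceableModel("credit_scorer", "1.0.0", lambda f: {"score": 0.72})
    print(model.predict({"income": 52000, "age": 34}, operator_id="analyst-7"))
```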

The AI Act also establishes governance measures, including national competent market surveillance authorities to oversee the new rules, the creation of a European Artificial Intelligence Board to facilitate their implementation and drive the development of standards for AI, and regulatory sandboxes to support responsible innovation. The goal of these and other similar regulations around the world is to create a framework for developing secure, safe and trustworthy AI systems with appropriate accountability measures.

To ensure compliance with these requirements, high-risk AI systems must undergo extensive testing in a controlled virtual environment, such as a cyber range. This helps mitigate the risk of accidents, errors, or malicious attacks that could have serious and harmful consequences. In addition to testing the functionality of the AI system itself, cyber ranges are used to assess how the system interacts with other systems and devices in its environment, identifying potential impacts, dependencies, or vulnerabilities.
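Below is a hedged sketch of what scenario-based testing in a cyber-range-style environment could look like for a simple autonomous-driving component. The scenario names, the stand-in controller, and the fault-injection case are hypothetical; a real cyber range drives far richer simulations, but the pattern of exercising the system against nominal, edge-case, and adversarial scenarios and recording any deviation from the expected behavior is the same.

```python
# Hypothetical sketch of pre-deployment scenario testing; not a real cyber-range API.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    sensor_input: dict
    expected_action: str


def autonomous_controller(sensor_input):
    """Stand-in for the AI component under test."""
    if sensor_input.get("obstacle_distance_m", 100.0) < 10.0:
        return "brake"
    if sensor_input.get("sensor_fault"):
        return "safe_stop"
    return "continue"


SCENARIOS = [
    Scenario("nominal_driving", {"obstacle_distance_m": 80.0}, "continue"),
    Scenario("pedestrian_ahead", {"obstacle_distance_m": 4.0}, "brake"),
    # Fault injection: the range also simulates degraded or malicious inputs.
    Scenario("lidar_dropout", {"sensor_fault": True}, "safe_stop"),
]


def run_range_tests(controller, scenarios):
    """Run every scenario and collect mismatches between expected and actual actions."""
    failures = []
    for sc in scenarios:
        action = controller(sc.sensor_input)
        if action != sc.expected_action:
            failures.append((sc.name, sc.expected_action, action))
    return failures


if __name__ == "__main__":
    failures = run_range_tests(autonomous_controller, SCENARIOS)
    if failures:
        for name, expected, got in failures:
            print(f"FAIL {name}: expected {expected}, got {got}")
    else:
        print(f"All {len(SCENARIOS)} scenarios passed")
```

The recorded results of such runs, together with the audit logs sketched earlier, are the kind of evidence a provider could present to demonstrate that required testing and traceability obligations were met.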