Europe has taken the lead on AI regulation – a move that will shape the development of autonomous transport across the continent.
Volvo Autonomous Solutions’ virtual driver may incorporate AI systems, particularly in modules such as perception and motion planning. These systems could be classified as high-risk under the AI Act if they are used as safety components. That classification would trigger strict requirements on classification (Art. 6), data and robustness (Arts. 10 & 15), and human oversight (Art. 14). This article focuses on these three areas of the AI Act because they map directly to how we engineer and operate. But first, a brief overview of what the AI Act is and how it will impact AI development and deployment.
The AI Act is the world’s first unified rulebook for AI systems, with most of its requirements applying across the European Union from August 2026. The Act sets rigorous, uniform standards for the industry in all EU member states. With the legislation, the European Parliament and the Council aim to create better conditions for the development of AI while ensuring it remains safe, trustworthy, and transparent.
Article 3 of the Act defines an AI system as a machine-based system that infers from the input it receives how to generate outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments. The Act bans certain uses of these systems (e.g., social scoring and untargeted scraping of facial images), sets strict rules for high-risk AI, and adds transparency rules for AI that interacts with people or generates content.
Under the Act, providers and deployers of AI have new responsibilities, and non-compliance carries the risk of significant fines.
Some stakeholders’ critiques of the Act highlight gaps we need to be conscious of. The Act asks companies to judge much of their own risk level, which can mean some systems may not be treated as strictly as they should be. Many of the day-to-day rules for “how to comply” are still being turned into common standards, meaning some details are still in progress. Also, today’s liability rules are clearer on physical injury or property damage than on societal harms like discrimination or misinformation. And parts of the law were written with older, predictive AI in mind, while newer general-purpose and generative models raise different risks. The Act is not perfect, yet it is a step in the right direction.
Article 6 outlines what classifies AI systems as “high-risk”, particularly when they perform safety-critical functions. But the label can easily be misinterpreted: high-risk does not mean unsafe. It means safety-critical.
For example, an AI that manages braking and steering of a loaded truck is safety-critical; an AI that transcribes meeting notes is not. Classification depends on function: an AI system that, in case of failure, could cause harm to people is considered high-risk. Compliance under the AI Act depends on correctly identifying which functions are safety-critical and which are not. This classification is reviewed continuously as systems evolve.
Classification isn’t always pre-cleared by a regulator. For many systems, the provider performs a conformity assessment and documents why it is (or isn’t) high risk. That flexibility speeds market entry but also creates a self-assessment loophole if not backed by solid evidence and post-market monitoring.
Our virtual driver is a key safety component, which means that any AI systems it incorporates may fall into this high-risk category. As a result, we would be required to demonstrate that our virtual driver has been developed with safety as a priority. This is where our safety case framework becomes essential.
A safety case is a structured and evidence-based argument that demonstrates an autonomous transportation solution is acceptably safe for its intended purpose and complies with applicable regulatory and ethical requirements. It provides assurance that the solution is not only designed to be safe in theory but has been verified and validated to operate safely under real-world conditions. By maintaining comprehensive and continuously updated safety cases, we ensure ongoing conformity with the EU AI Act’s requirements for risk management, transparency, and lifecycle monitoring.
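To make this concrete, a safety case can be expressed as a hierarchy of claims, where each claim is either backed directly by evidence or decomposed into sub-claims, following the general pattern of notations such as GSN (Goal Structuring Notation). The sketch below, in Python, is purely illustrative; the class names, fields, and the example fragment are assumptions made for this article, not V.A.S.’s actual safety case tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A verifiable artifact backing a claim, e.g. a test report or audit log."""
    description: str
    artifact_uri: str  # where the report, log, or dataset lives

@dataclass
class Claim:
    """A safety assertion, decomposed into sub-claims until each leaf
    is directly supported by evidence."""
    statement: str
    argument: str = ""  # why the sub-claims and evidence are sufficient
    sub_claims: list["Claim"] = field(default_factory=list)
    evidence: list[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim holds if it has direct evidence, or every sub-claim holds."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)

# Hypothetical fragment of a safety case for a perception module.
top = Claim(
    statement="The virtual driver is acceptably safe within its operational design domain",
    argument="Decomposed per subsystem; each verified against its safety requirements",
    sub_claims=[
        Claim(
            statement="Perception detects workers under all expected site conditions",
            evidence=[Evidence("Validation report: fog/dust/night scenarios",
                               "reports/perception-validation.pdf")],
        ),
    ],
)
print(top.is_supported())  # True only once every leaf claim carries evidence
```

The point of the structure is that the argument stays auditable: when a system evolves, the affected leaf claims and their evidence can be updated without rebuilding the whole case.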
Article 10 of the AI Act requires training, validation, and testing data to be relevant, representative, and free from errors or bias. In simpler terms: data must reflect real use, be as accurate as possible, and be well documented. In this context, bias includes but is not limited to discrimination. It also means ensuring the system can recognize the full range of scenarios it may encounter – workers of different heights and clothing, unusual obstacles, and varying site conditions.
We collect data from a wide range of scenarios, from dusty quarries and foggy mornings to night operations. As part of our AI compliance framework, we need to implement robust data management practices. This includes data cleaning and annotation to address inconsistencies, bias audits with attention to edge cases (such as unusual worker postures), and the use of simulation to generate data for rare events. All data must be traceable and documented, with clear records of provenance and usage. Traceability must also extend to model and software outputs, together with ongoing performance monitoring during operation.
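As an illustration of what this traceability implies in practice, a provenance record might attach metadata like the following to every batch of data. This is a minimal sketch; the schema, field names, and values are hypothetical and do not describe an actual V.A.S. data pipeline.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DataProvenanceRecord:
    """Traceability metadata attached to a batch of training/validation data.
    All field names are illustrative, not a real V.A.S. schema."""
    dataset_id: str
    source: str                    # e.g. "quarry-site-07/front-camera" or "simulation"
    collected_at: datetime
    conditions: dict               # weather, lighting, site type, ...
    annotation_version: str
    cleaning_steps: list[str]      # documented transformations applied
    bias_audit_ref: str            # link to the audit covering this batch
    consumed_by_models: list[str]  # model versions trained or validated on it

record = DataProvenanceRecord(
    dataset_id="VAL-2025-0142",
    source="simulation",           # rare-event scenario generated in simulation
    collected_at=datetime(2025, 3, 14, tzinfo=timezone.utc),
    conditions={"weather": "fog", "lighting": "dawn", "site": "quarry"},
    annotation_version="labels-v3.2",
    cleaning_steps=["dedup", "blur-faces", "drop-corrupt-frames"],
    bias_audit_ref="audits/edge-cases-q1.md",
    consumed_by_models=["perception-v7.1"],
)
```

With records like this, every model output can in principle be traced back through the software version to the data it was trained on – the chain of custody that Article 10 documentation presupposes.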
The Act also requires resilience against faults and cyber-attacks, under Article 15. At V.A.S. we perform risk analyses and define rigorous mitigation plans. We track publicly known vulnerabilities and keep third-party software up to date. We maintain internal response plans for cyber incidents, and our robustness work focuses on resilient behavior under fault or attack conditions, supported by strong operational oversight.
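One simplified way to picture the robustness requirement is a watchdog that stops trusting stale sensor data: if a perception stream stalls, whether through a fault or an attack, the system degrades to a minimal-risk maneuver instead of driving on. The sketch below is illustrative only; the class, threshold, and callback names are assumptions, not V.A.S. production code.

```python
import time

SENSOR_TIMEOUT_S = 0.2  # illustrative threshold, not a real V.A.S. parameter

class PerceptionWatchdog:
    """Escalates to a safe stop if perception data stops arriving in time."""

    def __init__(self, trigger_safe_stop):
        self._last_frame = time.monotonic()
        self._trigger_safe_stop = trigger_safe_stop

    def on_frame(self) -> None:
        """Called whenever a fresh perception frame arrives."""
        self._last_frame = time.monotonic()

    def check(self) -> None:
        """Called each control tick; degrades to safe behavior on a stall."""
        if time.monotonic() - self._last_frame > SENSOR_TIMEOUT_S:
            # Fail to a safe state: a minimal-risk maneuver rather than
            # continuing to act on stale or missing data.
            self._trigger_safe_stop(reason="perception timeout")

def safe_stop(reason: str) -> None:
    print(f"Initiating minimal-risk maneuver: {reason}")

watchdog = PerceptionWatchdog(trigger_safe_stop=safe_stop)
# In the control loop: watchdog.on_frame() per sensor message,
# watchdog.check() on every tick.
```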
Under Article 14, the AI Act requires human oversight of high-risk systems, which is already built into our operations. During early phases, safety drivers remain in the cab until it is safe to remove them. Supervisors then monitor operations from control centers using live dashboards. They receive real-time alerts and can activate an emergency stop at any time. Logs and incidents are audited, and operators are continuously trained. Even with full autonomous deployment, humans remain in the loop.
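In software terms, human oversight implies a command channel in which a supervisor’s emergency stop takes priority over any autonomy logic and every intervention is audit-logged. The following sketch illustrates that idea under assumed names (SupervisorCommand, OversightChannel, Vehicle); it is a toy model, not our control-center implementation.

```python
from enum import Enum, auto

class SupervisorCommand(Enum):
    ACKNOWLEDGE_ALERT = auto()
    PAUSE_MISSION = auto()
    EMERGENCY_STOP = auto()

class Vehicle:
    """Stand-in for the vehicle control interface."""
    def emergency_stop(self) -> None:
        print("vehicle: full stop")
    def pause(self) -> None:
        print("vehicle: mission paused")

class OversightChannel:
    """Routes control-center commands to the vehicle. The emergency stop is
    handled before any autonomy logic and cannot be overridden by it."""

    def __init__(self, vehicle: Vehicle):
        self.vehicle = vehicle

    def handle(self, cmd: SupervisorCommand) -> None:
        if cmd is SupervisorCommand.EMERGENCY_STOP:
            self.vehicle.emergency_stop()  # highest priority, always honored
            self.log(cmd)
            return
        if cmd is SupervisorCommand.PAUSE_MISSION:
            self.vehicle.pause()
        self.log(cmd)  # every supervisor command is audit-logged

    def log(self, cmd: SupervisorCommand) -> None:
        print(f"AUDIT: supervisor issued {cmd.name}")

channel = OversightChannel(Vehicle())
channel.handle(SupervisorCommand.EMERGENCY_STOP)
```

The design choice worth noting is that the stop path runs before, and independently of, everything else – the property Article 14 is after when it asks for meaningful human control.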
Compliance requirements will soon become standard in tenders and contracts, and many customers may already expect them. For our customers, compliance provides assurance that our systems are designed and tested to the required standards, which reduces regulatory risk and streamlines approval processes with authorities.
Unlike fragmented solutions where responsibility is split among multiple providers, Volvo Autonomous Solutions is the single point of contact for customers under our transport-as-a-service model. Customers do not need to coordinate between software developers, fleet operators, and OEMs; V.A.S. manages the entire chain within a unified safety and compliance framework. This approach builds trust, reduces risk, and prepares us to support customers globally as AI regulations inspired by the EU Act are adopted elsewhere, much like the worldwide influence GDPR had on privacy standards.
The EU AI Act is reshaping the framework for autonomy. At Volvo Autonomous Solutions, we see it as confirmation of the path we already follow: safety first.
It is a productive step in the right direction, although there are still gaps to be filled. We need to treat this technology responsibly and ensure that all actors are held accountable. By August 2, 2026, almost all requirements of the AI Act will apply. Our preparations are well underway, with clear timelines and dedicated teams in place.
We are conducting a detailed gap analysis and integrating the findings into our product roadmaps. In parallel, we are refining the required documentation and processes – from the quality management system to risk management and safety cases. We’ve set up a cross-functional AI Act compliance program, led by a core team from Technology, Product, Legal & Compliance, and Digital & IT.
Compliance is not a one-time certificate; it is an operating framework. We will continue to update, refine, and improve the safety of our products. This is how we keep autonomy safe at scale: before, during, and after deployment.