
Newsfeed
June 17, 2025
Regulation (EU) 2024/1689 (the AI Act), published in July 2024, applies across all sectors, including insurance. The AI Act follows a risk-based approach and classifies AI systems into four categories according to their risk level: prohibited, high-risk, limited-risk and minimal-risk. The AI Act defines a comprehensive set of governance and risk management measures with which high-risk systems must comply, alongside the requirements already in place under sectoral legislation.
AI systems with limited or minimal risk under the AI Act can operate without additional measures beyond transparency rules (e.g., informing customers that they are interacting with AI), the promotion of AI literacy among staff, and voluntary codes of conduct. However, their use by insurance companies and intermediaries remains subject to the governance and risk management rules in sectoral legislation.
Article 3(1) of the AI Act defines an “AI system” as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
On 10 February 2025, EIOPA released for consultation a draft opinion on governance and risk-management principles for the responsible use of AI in insurance.
EIOPA emphasizes the need for accurate, unbiased data and explainable AI outputs, along with monitoring and redress mechanisms for affected customers.
The draft opinion provides guidance on applying existing insurance sector legislation like Solvency II and IDD to AI systems. It aims to mitigate AI risks while maximizing benefits and promoting good supervisory practices.
To avoid regulatory overlaps, the draft opinion does not cover prohibited or high-risk AI systems under the AI Act; instead, it follows a flexible, principles-based approach. Following the consultation, EIOPA intends to consider the feedback received, develop an impact assessment based on the answers to the questions in the consultation paper, and revise the opinion accordingly.
As of 2 February 2025, providers and deployers of AI systems, including insurance service providers, must ensure AI literacy among their staff. The Commission’s Q&A clarifies AI literacy requirements, emphasizing informed deployment and awareness of AI opportunities and risks.
The document elaborates that the concept of AI literacy, as referenced in Article 4 of the AI Act, is based on the definition provided in Article 3(56) of the AI Act. According to this definition, “‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”.
The Commission further clarifies that Article 4 of the AI Act does not impose an obligation to measure the AI knowledge of employees. However, it states that AI providers and deployers should ensure an adequate level of AI literacy, taking into account the technical knowledge, experience, education and training of employees.
The Commission Q&As section provides valuable insights regarding the definition of AI literacy (e.g., which target groups are in scope), compliance with Article 4 (e.g., the minimum content of an AI literacy programme, specific requirements for financial services, and how to document actions taken to comply with Article 4), enforcement of Article 4 (e.g., who will enforce it and the consequences of non-compliance), and the AI Office’s approach to AI literacy (e.g., guidelines).
On 14 May 2025, the European Parliament’s Economic and Monetary Affairs Committee (ECON) published a draft report on the impact of artificial intelligence on the financial sector. The report highlights AI applications in fraud detection, customer support, compliance, and personalised advice. While acknowledging AI risks, it calls for clear regulatory guidance and support for responsible AI use without introducing new legislation.
Pointing to regulatory overlaps and legal uncertainties, which can limit the use of AI and complicate compliance for financial institutions, ECON favours responsible use of AI over new restrictive legislation. The motion for a resolution sets out a series of requests to the European Commission.
The ECON vote is scheduled for 13 October 2025 and the Plenary vote for November 2025.
On 6 June 2025, the European Commission launched a consultation on the implementation of the AI Act’s rules for high-risk AI systems, including those used for creditworthiness evaluation and insurance risk assessment.
The AI Act identifies two types of ‘high-risk’ AI systems: (1) those used as safety components of products covered by the Union’s harmonised product safety legislation; and (2) those that can significantly affect people’s health, safety or fundamental rights in the specific use cases listed in the AI Act.
Stakeholders, including providers and developers of high-risk AI systems, businesses and public authorities using such systems, as well as academia, research institutions, civil society, governments, supervisory authorities, and citizens in general are invited to share their views.
The questionnaire is divided into five sections. The consultation is open until 18 July 2025.