The EU's Plan to Regulate AI-Based Assistants Such as ChatGPT
The European Union has introduced a new voluntary code of practice aimed at helping providers of artificial intelligence (AI) models comply with the EU's Artificial Intelligence Act (AI Act). The code, published on July 10, 2025, serves as a practical toolkit for providers, particularly those of general-purpose AI models, for implementing the AI Act's key provisions on transparency, copyright, and security [4].
The EU's AI Act, politically agreed in late 2023 and fully applicable by August 2026, is the world's first comprehensive legal framework regulating AI, with a strong focus on human-centric, trustworthy, and safe AI systems [1][2][3]. The Act sets strict, risk-based rules emphasizing transparency, human oversight, data quality, and safety in AI deployment across Europe.
Key provisions of the EU AI Act include a risk-based classification system, transparency requirements, accountability and documentation obligations, data quality and privacy safeguards, and enhanced requirements for high-risk AI systems [1][2][3]. High-risk AI systems, such as those used in critical areas like law enforcement, employment, education, and infrastructure safety, must meet stringent obligations, including documentation, risk management, data quality, non-discrimination, and third-party conformity assessments [2][3].
The newly introduced voluntary Code of Practice offers structured guidance and documentation tools to help providers meet their legal obligations under the Act. It comprises chapters on transparency, copyright, and safety and security, with the last chapter aimed chiefly at general-purpose AI models posing systemic risk [4].
The Transparency Chapter provides a user-friendly Model Documentation Form, enabling providers to document necessary information clearly and comprehensively. This supports compliance with the Act’s transparency obligations, helping providers demonstrate how they inform users about AI capabilities and risks [4].
The Copyright Chapter guides providers on respecting intellectual property rights in AI training data and outputs, helping navigate copyright issues relevant to content generation and dataset use [4].
The Safety and Security Chapter offers best practices and standards for ensuring safety and robust security measures against misuse and vulnerabilities, and is particularly relevant to providers of the most advanced AI models, which are subject to systemic-risk obligations [4].
The code is expected to have a significant impact on companies transitioning to the new European regulatory framework for AI. Existing models such as GPT-4 will have to comply with the new rules from next year. Providers who do not adhere to the code must develop their own approach to demonstrating compliance, which will likely involve greater effort [4].
The EU Commission views the code as an important tool for helping companies transition to the new European regulatory framework. The code is not mandatory, but providers who sign it can document their "good intentions" and benefit from a reduced administrative burden and greater legal certainty [4].
The code includes a form that lets providers record technical details in a standardized way, making that information more accessible to supervisory authorities and to downstream AI developers [4]. For models posing systemic risk, the code additionally calls for risk reports and the involvement of external auditors [4].
In summary, the EU AI Act sets strict, risk-based rules for AI deployment across Europe, emphasizing transparency, human oversight, data quality, and safety. The voluntary Code of Practice complements the Act with structured guidance and documentation tools covering transparency, copyright compliance, and security safeguards, helping providers meet their legal obligations while boosting user trust [1][2][3][4].
The approach outlined in the Code of Practice thus centers on structured guidance and documentation tools for AI providers, covering transparency, copyright, and safety and security, with the strictest measures reserved for the most advanced models. Providers who adhere to the code can benefit from a reduced administrative burden and greater legal certainty by demonstrating their commitment to complying with the EU's Artificial Intelligence Act.