
Rapidly establishing acceptable AI reliability benchmarks

AI Authenticity Standards Launched Globally in Geneva to Combat Misinformation, Ensure Content Accountability, and Foster Digital Trust in an Increasingly AI-Centric World.

AI Regulation Establishment in Geneva: Defining Reliability in the Era of Artificial Intelligence


In the digital age where artificial intelligence (AI) and the erosion of content authenticity are growing concerns, the AI and Multimedia Authenticity Standards Collaboration (AMAS) has emerged as a beacon of hope. A multi-stakeholder initiative led by the World Standards Cooperation (WSC), AMAS brings together key players from technology companies, research institutions, civil society organizations, and international standards bodies.

AMAS's mission is to establish a globally harmonized framework of standards for the authenticity, provenance, and trustworthiness of multimedia content. The collaboration aims to counter misinformation, deepfakes, and synthetic content misuse by promoting transparency, accountability, and ethical innovation in digital content creation and distribution.

The WSC, a partnership of the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU), spearheads AMAS. This month, AMAS launched two papers to help navigate the ethical and technical challenges of synthetic media and AI.

The first paper, a technical roadmap, provides a framework for developing standards related to digital media authenticity. The second paper, a policy guidance document, details how international standards can serve as the foundation for governance frameworks in the age of generative AI.

Key objectives of AMAS include mapping existing standards and identifying gaps in digital media authenticity, provenance, watermarking, and rights management. The collaboration also aims to develop interoperable, international standards that enable content traceability, rights declaration, and authenticity verification across platforms and industries.

AMAS plays a central role in combating the risks posed by AI-generated misinformation. The collaboration supports the development of technical standards for provenance tracking, digital watermarking, and asset identifiers, making it easier to distinguish authentic from manipulated content. AMAS also promotes transparency by advocating for open, interoperable standards, ensuring that verification tools can be widely adopted and are not locked into single-vendor ecosystems.
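The provenance-tracking idea described above can be illustrated with a minimal sketch. This is not the AMAS, C2PA, or any specific standard's mechanism, only a simplified model of the general technique: a publisher binds a content hash and metadata into a manifest, signs it, and any later verifier can detect whether the content was altered. The key, manifest fields, and function names here are all hypothetical.

```python
# Illustrative provenance/authenticity check (hypothetical, simplified model):
# bind a SHA-256 content hash plus metadata into a manifest, sign it with
# HMAC-SHA256, and verify both the signature and the hash on playback.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key for this sketch


def create_manifest(content: bytes, creator: str) -> dict:
    """Bind the content's hash and metadata into a signed provenance manifest."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify(content: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and the content matches its hash."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and record["sha256"] == hashlib.sha256(content).hexdigest())


original = b"authentic media bytes"
manifest = create_manifest(original, creator="Example Newsroom")
print(verify(original, manifest))              # True: untouched content verifies
print(verify(b"manipulated bytes", manifest))  # False: any edit breaks the hash
```

Real standards replace the shared HMAC key with public-key signatures so that anyone can verify without holding the signing secret, which is what makes open, cross-platform verification tools possible.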

The broad coalition of AMAS includes standards bodies, industry leaders, researchers, and civil society organizations. This inclusive, multi-sector approach ensures that standards reflect diverse perspectives and are practical for real-world deployment.

By grounding the evolution of digital authenticity standards in international cooperation, AMAS seeks to prevent a future in which society can no longer trust what it sees, hears, or reads online. The tools to detect and manage AI-generated media must evolve quickly to keep pace with their increasing sophistication and affordability.

Gilles Thonet, Deputy Secretary-General of the IEC, stated that international standards provide guardrails for the responsible, safe, and trustworthy development of AI. Silvio Dulinsky, Deputy Secretary-General of ISO, emphasized the need for practical, scalable solutions for synthetic media challenges.

AMAS represents a coordinated, international response to the urgent challenges of AI-driven misinformation and synthetic media. By harmonizing standards, fostering transparency, and empowering both creators and consumers with tools to verify authenticity, AMAS seeks to build a digital ecosystem where trust is both possible and scalable—laying the groundwork for responsible innovation and resilient public discourse in the age of AI.

Across technology companies, research institutions, civil society organizations, and international standards bodies, AMAS unites stakeholders behind a single goal: international standards for the authenticity, provenance, and trustworthiness of multimedia content as a counter to misinformation, deepfakes, and synthetic content misuse. Its interoperable standards for content traceability, rights declaration, and authenticity verification are designed to work across platforms and industries, keeping verification tools scalable rather than locked into single-vendor ecosystems.
