Does TikTok's planned use of content labels effectively tackle AI-driven misinformation on the platform?

AI content labeling on TikTok marks a modest advance in the fight against AI-driven misinformation, though significant challenges lie ahead.

TikTok Introduces Content Labels to Battle Artificial Intelligence Hoaxes

In a significant move towards transparency and authenticity, social media giants such as Meta and Google are set to apply labels to AI-generated content on their platforms, following TikTok's lead.

TikTok, the pioneer in this initiative, will identify and label AI-generated content using digital watermarking based on the Coalition for Content Provenance and Authenticity (C2PA) standard, starting in May. The same system is already employed by TikTok's Chinese counterpart, Douyin, in China.

The digital watermarking standard not only identifies AI-generated content but also embeds important metadata into images and videos, making it tamper-evident and providing a simple and reliable means of verifying the provenance of online material. TikTok's AI effects will be clearly labeled with "AI," and creators have been provided with guidelines for consistent labeling.
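To illustrate the tamper-evident idea in miniature: a C2PA-style manifest records, at creation time, who generated the content and a cryptographic hash of it, so any later edit breaks verification. The sketch below is a deliberately simplified model, not the real C2PA format; the function names and the `generator` field are illustrative assumptions.

```python
import hashlib


def attach_manifest(content: bytes, generator: str) -> dict:
    """Attach a simplified provenance manifest (a stand-in for a C2PA
    manifest) recording the generator and a hash of the content."""
    return {
        "content": content,
        "manifest": {
            "generator": generator,
            "content_hash": hashlib.sha256(content).hexdigest(),
        },
    }


def verify(asset: dict) -> bool:
    """Re-hash the content; any edit made after signing breaks the match."""
    expected = asset["manifest"]["content_hash"]
    return hashlib.sha256(asset["content"]).hexdigest() == expected


asset = attach_manifest(b"ai-generated-frame", "example-ai-model")
assert verify(asset)           # untouched content verifies
asset["content"] = b"edited"   # tampering after signing
assert not verify(asset)       # verification now fails
```

Real C2PA manifests additionally carry a digital signature over the manifest itself, so the metadata cannot simply be rewritten to match the edited content.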

Google, too, has announced its plans to use Content Credentials, a similar system. Google joined the C2PA steering committee in February and is developing its own digital watermarking toolkit, SynthID. The tech giant plans to incorporate Content Credentials into platforms like YouTube and Google Images.

However, embedding personally identifiable information into content metadata could renew concerns about user privacy on TikTok. Some platforms, like X, eschew labeling practices in favour of relying on user-created "Community Notes" to highlight false or misleading information.

Numerous tech giants, including Meta and Google, have announced plans to implement C2PA's Content Credentials. While this is an important step in the right direction, creating a truly reliable system to authenticate AI-generated content will take time.

It's worth noting that screenshots of AI-generated images do not retain the original file's metadata, which could allow the detection system to be bypassed. A study by the University of Maryland found that applying Gaussian noise to distort an image's watermark pattern can also defeat detection algorithms.
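The screenshot loophole follows directly from where the provenance data lives: it sits in the file alongside the pixels, and a screenshot re-renders only the pixels. The toy model below (a hypothetical simplification, not the actual C2PA layout) shows why a verifier then has nothing left to check.

```python
import hashlib

# Hypothetical AI-generated image with a C2PA-style manifest attached.
PIXELS = b"\x10\x20\x30" * 100
original = {
    "pixels": PIXELS,
    "manifest": {"content_hash": hashlib.sha256(PIXELS).hexdigest()},
}


def verify(asset: dict) -> bool:
    """Check the manifest by recomputing the content hash; an asset
    with no manifest at all cannot be verified."""
    manifest = asset.get("manifest")
    if manifest is None:
        return False  # no provenance data survives to be checked
    return hashlib.sha256(asset["pixels"]).hexdigest() == manifest["content_hash"]


def screenshot(asset: dict) -> dict:
    # A screenshot re-renders the visible pixels; file-level metadata
    # such as the manifest is simply not copied across.
    return {"pixels": asset["pixels"], "manifest": None}


assert verify(original)                  # labeled content verifies
assert not verify(screenshot(original))  # the screenshot carries no manifest
```

This is why provenance metadata alone is insufficient and is typically paired with in-pixel watermarks, which in turn face the noise-based attacks described above.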

TikTok has also introduced a new feature for creators to inform followers about AI-generated content. This initiative is part of a broader scheme to provide a layer of scrutiny over the use of AI in content creation, a crucial step towards maintaining the integrity and authenticity of content on social media platforms.
