In the coming months, Meta will flag images that users post on Facebook, Instagram and Threads if they are found to have been generated by AI, the company announced. The images will carry both visible markers and invisible watermarks, along with metadata embedded in the image files.

An example of AI markup on an image. Photo: Meta

Meta says it has worked with industry partners to agree on common technical standards to signal that certain content has been created with the help of artificial intelligence.

“The ability to detect these signals will allow us to flag AI-generated images that users post on Facebook, Instagram and Threads. The company is currently developing this capability, and in the coming months it will start applying labels in all languages supported by each app,” says Meta.

What people at Meta are saying

When photorealistic images are created using Meta AI, the company takes several steps to make sure people are aware that AI was involved, including placing visible markers that users can see on the images, as well as invisible watermarks and metadata embedded in the image files.

Combining invisible watermarks with embedded metadata in this way makes these invisible indicators more robust and helps other platforms identify them. This is an important part of Meta’s responsible approach to building generative AI features.

Because AI-generated content appears across the internet, Meta works with other companies in the industry, in forums such as the Partnership on AI (PAI), to develop common standards for identifying it.

The invisible markers used for Meta AI images – IPTC metadata and invisible watermarks – follow PAI best practices. Meta is building industry-leading tools that can identify invisible markers at scale – specifically, the “AI-generated” information in the C2PA and IPTC technical standards – so that images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock can be labeled as those companies carry out their plans to add metadata to images created with their tools.
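To make the metadata side of this concrete: the IPTC standard defines a “digital source type” vocabulary, and the value for AI-generated content is the NewsCodes URI `trainedAlgorithmicMedia`, typically carried in an image’s embedded XMP packet. The sketch below is a deliberately simplified illustration, not Meta’s actual detector: the function name `looks_ai_generated` and the naive byte scan are assumptions for demonstration, since a real implementation would parse the XMP XML properly and also verify C2PA manifests.

```python
# Simplified sketch: check whether an image file's raw bytes contain the
# IPTC "trainedAlgorithmicMedia" digital-source-type URI, which standards-
# compliant AI tools embed in the image's XMP metadata.
# NOTE: a naive substring scan for illustration only; production tooling
# parses the XMP packet and C2PA manifest instead.

# Real IPTC NewsCodes URI identifying media created by a trained AI model.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain the AI digital-source-type URI
    (hypothetical helper name)."""
    return AI_SOURCE_TYPE in image_bytes

# Usage: pass the raw bytes of an image file.
# with open("photo.jpg", "rb") as f:
#     print(looks_ai_generated(f.read()))
```

The key design point is that the signal lives in the file itself, which is why any platform that receives the image can detect it, provided the metadata has not been stripped.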

While the industry works toward this capability, Meta is adding a feature that lets people disclose when they share AI-generated video or audio, so the company can label it. Users will be required to use this disclosure and labeling tool when posting organic content with photorealistic video, or with audio that was digitally created or altered, and Meta may apply penalties if they fail to do so. If Meta determines that digitally created or altered content (images, video or audio) poses a particularly high risk of materially misleading the public on an important matter, it may add a more prominent label to give people more information and context.