The expansion comes after Meta's Oversight Board, an outside group funded by the company, reviewed the existing policy in February, when a doctored video of President Biden remained on Facebook because it did not violate Meta's rules. The board criticized the policy as “incoherent” and called on the company to expand it to cover cases where audio, as well as video, has been deceptively altered to depict people doing things they did not do.
“We agree with the Oversight Board's assertion that the existing approach is too narrow,” Monika Bickert, Meta's vice president of content policy, said in a statement.
“Our Manipulated Media Policy was written in 2020, when realistic AI-generated content was rare and our primary concern was video.”
In response to the Oversight Board's recommendation, Meta also agreed not to remove digitally created media unless it violates other rules; instead, such content will carry a label indicating that it has been modified. Starting next month, the company will begin applying a “Made with AI” label to content that users disclose is AI-generated or that Meta determines was created with AI.
In February, Meta announced plans to develop a system to identify AI-generated content that users create with the tools of other tech companies that have agreed to embed AI identifiers and watermarks.
Meta's expanded AI policy is likely a welcome development for civil society groups and experts who have warned that AI-generated misinformation is already spreading online in a crucial election year. However, experts also warn that the labeling strategy may not catch all misleading AI-generated content. While some companies, including Meta, have agreed to ensure watermarks are added to AI-generated content, others have not.