
You can now make takedown requests for AI-generated YouTube videos that mimic your likeness

YouTube has changed its privacy policies to allow people to request the removal of AI-generated content that mimics their appearance or voice.

“If someone has used AI to modify or create artificial content that looks or sounds like you, you can ask to have it removed,” YouTube’s updated privacy guidelines state. “To be eligible for removal, the content must reflect a real or artificially altered version of your likeness.”

YouTube made the change quietly in June, according to TechCrunch, which first reported on the new policy.

A request for removal will not be granted automatically; instead, YouTube’s privacy policy states that the platform may give the uploader 48 hours to remove the content themselves. If the uploader does not take action within that time, YouTube will initiate a review.

The Alphabet-owned platform says it will consider several factors to decide whether to remove a video:

  • Whether the content is altered or synthetic
  • Whether the content is disclosed to viewers as altered or synthetic
  • Whether the person can be uniquely identified
  • Whether the content is realistic
  • Whether the content contains parody, satire, or other public interest value
  • Whether the content features a public figure or well-known individual engaging in sensitive behavior such as criminal activity, violence, or endorsing a product or political candidate.

YouTube also notes that it requires “first-party claims,” meaning that only the person whose privacy was violated can file a claim. However, there are exceptions, including when the request is made by a parent or guardian; when the person in question does not have access to a computer; when the request is made by the person’s legal representative; and when a close relative makes a request on behalf of a deceased person.

Notably, the removal of a video under this policy does not count as a “strike” against the uploader, meaning it will not lead to the uploader facing a ban, loss of ad revenue, or other penalties. That’s because the policy falls under YouTube’s privacy guidelines rather than its Community Guidelines, and only Community Guidelines violations lead to strikes.

The policy is the latest in a series of changes YouTube has made to address the problem of deepfakes and other controversial AI-generated content appearing on its site.

Last fall, YouTube announced it was creating a system that would allow its music partners to request the removal of content that “imitates an artist’s unique singing or rapping voice.”

This comes after a number of deepfake songs went viral last year, including the infamous “Fake Drake” track, which garnered hundreds of thousands of streams before being pulled from streaming services.

YouTube also announced that AI-generated content on its site must be labeled as such and introduced new tools that allow uploaders to add labels that alert viewers to the fact that the content was created by AI.

“Creators who consistently choose not to disclose this information may face content removal, suspension from YouTube’s partner program, or other penalties,” YouTube said.

And regardless of labels, AI-generated content will be removed if it violates YouTube’s community guidelines, the platform said.

“For example, a synthetically created video that depicts realistic violence may still be removed if its goal is to shock or disgust viewers.”

YouTube is not alone in trying to tackle the problem of deepfakes; TikTok, Meta, and others have been working to address it after controversies over deepfakes surfacing on their platforms.


Incoming legislation

The problem is also being addressed at the legal level. The US Congress is debating a number of bills, including the No AI FRAUD Act in the House of Representatives and the NO FAKES Act in the Senate, that would extend the right of publicity to cover AI-generated content.

Under these bills, people would be given an intellectual property-style right in their voice and likeness, allowing them to sue the creators of unauthorized deepfakes. Among other things, the proposed laws are intended to protect artists from having their work or likeness stolen, and individuals from being exploited by sexually explicit deepfakes.


Even as it works to minimize the negative impact of AI-generated content, YouTube is also working on AI technology.

The platform is in talks with the three major labels – Sony Music Entertainment, Universal Music Group, and Warner Music Group – to license their music to train AI tools capable of making music, according to a Financial Times report last month.

That follows YouTube’s partnership last year with UMG and WMG to build AI music tools in collaboration with music artists.

According to the FT, YouTube’s previous efforts to create AI music tools fell far short of expectations. Only 10 artists signed up to help develop YouTube’s Dream Track tool, which was intended to bring AI-generated music to YouTube Shorts, the video platform’s answer to TikTok.

YouTube hopes to sign up “a bunch” of artists as part of its new effort to build AI music tools, people familiar with the matter told the FT.

Music Business Worldwide

