Meta, the parent company of Facebook and Instagram, has announced that it is developing AI detection tools in response to the increasing use of AI-generated content, such as deepfakes, and the risks it poses on social media platforms. The company will also label AI-generated content to help users distinguish between authentic and fake material.
Fake AI-generated content
Fake AI-generated content refers to media, such as text, images, videos, or audio, created by artificial intelligence (AI) systems. This content can be highly convincing, from photorealistic images to imitations of human writing styles and voices, which makes it increasingly difficult to discern truth from fabrication in the digital world.
President Joe Biden on Deepfakes
President Joe Biden and many other political figures have recently been targeted by deepfake technology. In a recent interview, President Biden called such AI-generated content crazy and urged communications organizations to do something to label it.
On top of that, a fake robocall imitating President Joe Biden's voice was recently found to be targeting citizens and urging them not to vote.
Meta to label AI-generated content
With the recent Senate hearing in mind, Meta has revealed plans to build AI detection tools that trace AI-generated images, videos, and other media and mark them with watermarks so that users can tell them apart from authentic content. Meta is also adding features that may ask users to disclose when they share an AI-generated image or video so that the content can be labeled, and the company may apply penalties to those who fail to do so.
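Meta has not published the technical details of its watermarking and labeling system, but the basic idea of attaching a provenance flag to a file can be sketched in a few lines of Python. The snippet below is only an illustration: the "ai_generated" metadata key and the helper functions are invented for this example, and a real system would rely on invisible watermarks and industry metadata standards rather than a simple PNG text chunk.

# Purely illustrative sketch: embed and read a hypothetical "ai_generated"
# flag in a PNG file's metadata. Meta's actual scheme is not public.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, adding a hypothetical provenance flag to its metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # made-up key, for illustration only
    image.save(dst_path, pnginfo=metadata)

def looks_ai_generated(path: str) -> bool:
    """Check whether the illustrative flag is present in a PNG's text chunks."""
    image = Image.open(path)
    return getattr(image, "text", {}).get("ai_generated") == "true"

if __name__ == "__main__":
    label_as_ai_generated("photo.png", "photo_labeled.png")
    print(looks_ai_generated("photo_labeled.png"))  # True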