Twitter proposes its first deepfake policies and asks for public feedback

On Monday, Twitter proposed its first deepfake policies, which would help Twitter users differentiate manipulated videos from original ones. At the end of the draft, Twitter also asks the public, through a set of feedback questions, how these manipulated videos should be handled.

The deepfake policies aim to help users distinguish original videos, pictures, and audio from altered, fake ones. With advances in technology, it has become easy to manipulate or fabricate videos using artificial intelligence. This can be done by anyone with a laptop and an Internet connection.

These fake videos make people believe that something has really happened when it actually hasn’t. This could be a serious problem during the presidential election, where people might be misled by false news. Manipulated videos might end up ruining a presidential candidate’s reputation.

Last month, Twitter’s safety team had already announced that it would be taking feedback from the public on whether manipulated videos should merely be flagged or completely removed from Twitter.

Twitter defines synthetic and manipulated media as “any photo, audio or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning”.

To address this problem, Twitter has proposed several deepfake policies, which are to:

1. Place a notice next to Tweets that share synthetic or manipulated media;

2. Warn people before they share or like Tweets with synthetic or manipulated media; or 

3. Add a link—for example, to a news article or Twitter Moment—so that people can read more about why various sources believe the media is synthetic or manipulated.

Twitter says it will remove any tweet containing a deepfake that “could threaten someone’s physical safety or lead to other serious harm”. We should also note that the company banned deepfake pornography last year. To improve its deepfake policies, Twitter is conducting a survey and collecting tweets carrying the hashtag “#TwitterPolicyFeedback”.

The survey started Monday and closes on November 27th at 6:59 p.m. ET. Once the company has reviewed all the feedback, it will release a more formal and definitive version of the deepfake policy. The policy will come into effect one month after its official release.

Ensuring users see authentic news is a major problem that companies like Facebook and YouTube have been facing. Take, for instance, the circulation of a fake video of House Speaker Nancy Pelosi; YouTube removed the false video at the time. Whose responsibility it is to decide whether news is fake remains a big debate. After Facebook announced that it would not ban political ads, it faced a series of problems.

Twitter, by contrast, announced that it would ban all political ads. Meanwhile, senators asked 11 tech giants to find a solution for “sharing, removing, archiving, and confronting the sharing of synthetic content as soon as possible”. As a result, Twitter came up with this policy. Amazon has also joined Facebook and Microsoft in the Deepfake Detection Challenge (DFDC) to detect fake videos.
