This Thursday's update allows Instagram users to flag content they consider to be fake. Based on that feedback and other signals, such as how long the post has been up and the associated account's previous activity, the platform determines whether the publication needs to be reviewed by independent fact-checkers. It is not yet known how long this process may take.
And how can you flag content you consider to be fake? Tap the three-dot menu in the top-right corner of an Instagram post, select “is inappropriate” and then choose “false information”. If Instagram finds that the post is indeed false, it will not be deleted, but rather “minimized”: demoted in the Explore tab and in hashtag pages.
The account behind the post will not be notified while the content is under review, nor will it be told whether the fact-checker ultimately rules the post false. Reviews will be handled by Facebook's independent fact-checkers, the same partners the company began working with a few years ago in response to the fake-news problem.
Quoted by The Guardian, an Instagram spokesperson described this as just an “initial step” in a “broader approach to combating misinformation”.
Back in May, the platform had already launched a program letting users flag fake content for review by independent fact-checkers. On another front, this time to combat bullying on the platform, Instagram released two updates in July, aimed at leaving less and less room for offensive interactions on the social network.