X (formerly Twitter) claims that non-consensual nudity is not tolerated on its platform. But a recent study shows that X is more likely to quickly remove this harmful content, sometimes known as revenge porn or non-consensual intimate imagery (NCII), if victims flag it through a Digital Millennium Copyright Act (DMCA) takedown rather than through X's own mechanism for reporting NCII.
In the pre-print study, which 404 Media noted has not been peer-reviewed, University of Michigan researchers explained that they put X's non-consensual nudity policy to the test to show how challenging it is for victims to remove NCII online.
To conduct the experiment, the researchers created two sets of X accounts to post and report AI-generated NCII "depicting white women appearing to be in their mid-20s to mid-30s" as "nude from the waist up, including her face." (White women were selected to "minimize potential confounds from biased treatment," and the researchers recommended future work on other genders and ethnicities.) Of the 50 fake AI nude images that researchers posted on X, half were reported as violating X's non-consensual nudity policy, and the other half were reported through X's DMCA takedown mechanism.