Sept. 28 (Reuters) – Some major advertisers, including Dyson, Mazda and the chemical company Ecolab, have suspended their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets calling for child pornography, the companies told Reuters.
Brands ranging from Walt Disney Co (DIS.N), NBCUniversal (CMCSA.O) and Coca-Cola Co (KO.N) to a children’s hospital were among about 30 advertisers whose promotions appeared on the profile pages of Twitter accounts peddling links to the exploitative material, according to a Reuters review of accounts identified in new online child sexual abuse research from cybersecurity group Ghost Data.
Some tweets contained keywords related to “rape” and “teenagers” and appeared alongside promoted tweets from corporate advertisers, the Reuters review found. In one example, a promoted tweet for shoe and accessories brand Cole Haan appeared alongside a tweet in which a user said they were “trading teen/child content”.
“We are shocked,” David Maddocks, brand president at Cole Haan, told Reuters after being informed that the company’s ads appeared alongside such tweets. “Twitter is going to fix this, or we’ll fix it any way we can, including not buying Twitter ads.”
In another example, a user looking for content tweeted “ONLY Yung Girls, NO Guys,” which was immediately followed by a promoted tweet for the Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not respond to multiple requests for comment.
In a statement, Twitter spokesperson Celeste Carswell said the company “has no tolerance whatsoever for child sexual exploitation” and is investing more resources in child safety, including hiring for new positions to write policies and implement solutions.
She added that Twitter is working closely with its advertising customers and partners to investigate and take steps to prevent the situation from happening again.
Twitter’s challenges in identifying child abuse content were first reported in an investigation by tech news site The Verge in late August. The emerging pushback from advertisers critical to Twitter’s revenue stream is reported here for the first time by Reuters.
Like all social media platforms, Twitter bans images of child sexual exploitation, which is illegal in most countries. But it generally allows adult content and is home to a thriving pornographic image exchange, comprising about 13% of all content on Twitter, according to an internal company document accessed by Reuters.
Twitter declined to comment on the amount of adult content on the platform.
Ghost Data identified more than 500 accounts that openly shared or solicited child sexual abuse material over a 20-day period this month. Twitter failed to remove more than 70% of those accounts during the study period, said the group, which shared its findings exclusively with Reuters.
Reuters could not fully confirm the accuracy of Ghost Data’s findings, but it reviewed dozens of the accounts that remained online and found them soliciting material for “13+” and “young-looking nudes.”
After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 additional accounts from the network, but more than 100 remained on the site the following day, according to Ghost Data and a Reuters review.
After Ghost Data provided the full list of more than 500 accounts, Reuters shared it with Twitter on Monday. Twitter reviewed the accounts and permanently suspended them for violating its rules, Carswell said Tuesday.
In an email to advertisers Wednesday morning prior to the release of this story, Twitter said it “discovered ads being displayed within profiles involved in publicly selling or soliciting child sexual abuse material.”
Andrea Stroppa, the founder of Ghost Data, said the investigation was an attempt to assess Twitter’s ability to remove the material. He said he personally funded the research after getting a tip on the subject.
Twitter suspended more than 1 million accounts for child exploitation material last year, according to the company’s transparency reports.
“There is no place for this kind of content online,” a spokesperson for automaker Mazda USA said in a statement to Reuters, adding that the company now bans its ads from appearing on Twitter profile pages.
A Disney spokesperson called the content “reprehensible” and said the company is “doubling down on our efforts to ensure that the digital platforms we advertise on, and the media buyers we use, are stepping up their efforts to prevent such errors from recurring.”
Coca-Cola, which had a promoted tweet appear on an account followed by the investigators, said it did not condone the material being associated with its brand, adding that “any violation of these standards is unacceptable and taken very seriously.”
NBCUniversal said it has asked Twitter to remove the ads associated with the inappropriate content.
Twitter isn’t alone in grappling with moderation errors related to children’s online safety. Child welfare advocates say the number of known images of child sexual abuse has risen from thousands to tens of millions in recent years as predators have used social networks, including Meta’s Facebook and Instagram, to groom victims and exchange explicit images.
Among the accounts identified by Ghost Data, nearly all the traders of child sexual abuse material marketed it on Twitter and then instructed buyers to reach them through messaging services such as Discord and Telegram to complete payment and receive the files, which were stored on cloud storage services such as New Zealand-based Mega and U.S.-based Dropbox, according to the group’s report.
A Discord spokesperson said the company had banned one server and one user for violating rules against sharing links or content that sexualizes children.
Mega said a link referenced in the Ghost Data report was created in early August and removed shortly afterward by the user, whom it declined to identify. Mega said it permanently closed the user’s account two days later.
Dropbox and Telegram said they use a variety of tools to moderate content, but did not provide additional details about how they would respond to the report.
Still, the advertiser response poses a risk to Twitter’s business, which earns more than 90% of its revenue from selling digital ad placements to brands seeking to market products to the service’s 237 million daily active users.
Twitter is also battling Tesla CEO and billionaire Elon Musk in court, as Musk seeks to back out of a $44 billion deal to buy the social media company over complaints about the prevalence of spam accounts and their impact on the business.
A team of Twitter employees concluded in a February 2021 report that the company needed more investment to identify and remove child exploitation material on a large scale, noting that the company had a backlog of cases to be reviewed for possible reporting to law enforcement agencies.
“While the amount of (child sexual exploitation content) has grown exponentially, Twitter’s investment in technologies to detect and manage its growth has not,” said the report, which was prepared by an internal team to provide an overview of the state of child exploitation material on Twitter and to obtain legal advice on proposed strategies.
“Recent reports on Twitter provide an outdated snapshot of just one aspect of our work in this space, and are not an accurate reflection of where we are now,” Carswell said.
The traffickers often use code words like “cp” for child pornography and are “deliberately as vague as possible” to avoid detection, the internal documents show. The more Twitter tackles certain keywords, the more users are incentivized to use obfuscated text, which is “more difficult for (Twitter) to automate,” the documents say.
Ghost Data’s Stroppa said such tricks would complicate the search for the material, but noted that his small team of five researchers, with no access to Twitter’s internal resources, was able to find hundreds of accounts within 20 days.
Twitter did not respond to a request for further comment.
Reporting by Sheila Dang in New York and Katie Paul in Palo Alto; Additional reporting by Dawn Chmielewski in Los Angeles; Editing by Kenneth Li and Edward Tobin
Our standards: The Thomson Reuters Trust Principles.