Is X Still Taking Steps to Police Harmful Content?

Amid rising concerns that X has become less safe under billionaire Elon Musk, the platform formerly known as Twitter is seeking to assure advertisers and critics that it still polices harassment.

(TNS) — Amid rising concerns that X has become less safe under billionaire Elon Musk, the platform formerly known as Twitter is seeking to assure advertisers and critics that it still polices harassment, hate speech and other offensive content.

From January to June, X suspended 5.3 million accounts and removed or labeled 10.7 million posts for violating its rules against posting child sexual exploitation materials, harassment and other harmful content, the company said in a 15-page transparency report scheduled to be released Wednesday. X said it received more than 224 million user reports during the first six months of this year.

It's the first time X has released a formal global transparency report since Musk completed his acquisition of Twitter in 2022. The company said last year it was reviewing how it approaches transparency reporting, but still released data about how many accounts and how much content were pulled down.

Safety issues have long dogged the social media platform, which has faced criticism from advocacy groups, regulators and others that the company doesn't do enough to moderate harmful content. But those fears heightened after Musk took over Twitter and laid off more than 6,000 people from the company.

The release of X's transparency report also comes as advertisers plan to cut their spending on the platform next year and the company escalates its battle with regulators. This year, X's chief executive, Linda Yaccarino, told U.S. lawmakers the company was restructuring its trust and safety teams and is building a trust and safety center in Austin, Texas.

Musk, who said last year that advertisers who were boycotting his platform could "go f— yourself," has also moderated his tone. At this year's Cannes Lions International Festival of Creativity, he said that "advertisers have a right to appear next to content that they find compatible with their brands."

When Musk took over Twitter, several changes he made raised alarms among safety experts. X reinstated previously suspended accounts including those of white nationalists, stopped enforcing its policy against COVID-19 misinformation and abruptly disbanded its Trust and Safety Council, an advisory group that included human rights activists, child safety organizations and other experts.

X has also grappled with scrutiny that it's become less transparent under Musk's leadership. The company, which was once publicly traded, became private after Musk purchased it for $44 billion.

The change meant that the social media platform no longer reported its quarterly user numbers and revenue publicly. Last year, X started charging for access to its data, making it tougher for researchers to conduct studies about the platform.

Concerns about the lack of moderation on X have also posed a threat to its advertising business. The World Bank in September halted paid advertising on the platform after its ads showed up under a racist post. Roughly 25% of advertisers expect to decrease their spending on X next year, and only 4% of advertisers think the platform's ads provide brand safety, according to a survey by the market research firm Kantar.

Some of the top problems that users reported on X involved posts that allegedly violated the platform's rules on harassment, violent content and hateful conduct, the platform's transparency report shows.

Musk, who has called himself a "free speech absolutist," has said on X that his approach to enforcing the platform's rules is to restrict the reach of potentially offensive posts rather than to take them down. Citing free speech concerns, he also sued California last year over a state law that lawmakers say aims to make social networks more transparent.

X's transparency report shows that roughly 2.8 million accounts were suspended for violating the platform's rules against child sexual exploitation, making up more than half of the 5.3 million accounts that were pulled down.

But the report also showed that X resorted to labeling user content in some cases rather than removing or suspending accounts.

X applied 5.4 million labels to content reported for abuse, harassment and hateful conduct, relying heavily on automated technology. Roughly 2.2 million pieces of content were taken down for violating those rules.

The platform's rules state that the site doesn't allow media depicting hateful imagery such as the Nazi swastika in live videos, account bios, profiles or header images. Other instances, though, must be marked as sensitive media. This week, X also made changes to a feature that allows users to block people on the platform. Under the change, blocked users will be able to see the posts of those who blocked them but will not be able to engage with those posts.

X also suspended nearly 464 million accounts for violating its rules against platform manipulation and spam. Musk vowed to "defeat the spam bots" on Twitter before he took over the platform. The company's report also included a metric called the "post violation rate" that showed users are unlikely to come across content that breaks the site's rules.

Meanwhile, X continues to face legal challenges in several countries including Brazil, whose Supreme Court blocked the site because Musk failed to comply with court orders to suspend certain accounts for posting hate speech. The company bowed to legal demands this week in an attempt to get reinstated. It has also been reporting content moderation data to regulators in places such as Europe and India.

The report included the number of requests X gets from government and law enforcement agencies. The company received 18,737 government requests for user account information and it disclosed information in about 53% of these cases.

Twitter started publicly reporting in 2012 the number of government requests it received for user information and content removal. The company's first transparency report, which also included data about copyright takedown notices, came after Google started releasing this data in 2010.

After revelations surfaced in 2013 that the National Security Agency had access to user data that Apple, Google, Facebook and other tech giants collected, a growing number of online platforms started to disclose more information about requests they received from the government and law enforcement.

© 2024 Los Angeles Times. Distributed by Tribune Content Agency, LLC.