In a blog post on Thursday, the California-based company announced the introduction of AI, including image matching and language understanding, to work alongside its existing human reviewers and identify and remove content more "quickly".
"We know we can do better at using technology - and specifically artificial intelligence - to stop the spread of terrorist content on Facebook," Monika Bickert, Facebook's director of global policy management, and Brian Fishman, the company's counterterrorism policy manager, said in the post.
"Although our use of AI against terrorism is fairly recent, it's already changing the ways we keep potential terrorist propaganda and accounts off Facebook.
"We want Facebook to be a hostile place for terrorists."
Such technology is already used to block child pornography from Facebook and other services such as YouTube, but Facebook had been reluctant to apply it to other, potentially less clear-cut, uses.
In most cases, the company only removed objectionable material if users first reported it.
Facebook and other internet companies have faced growing pressure from governments to identify and prevent the spread of "terrorist propaganda" and recruiting messages on their services.
Government officials have at times threatened to fine Facebook, which has nearly two billion users, and strip the broad legal protections it enjoys against liability for the content posted by its users.
Efforts welcomed
Facebook's announcement did not specifically mention this pressure, but it did acknowledge that "in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online".
It said Facebook wants "to answer those questions head on" and that it agrees "with those who say that social media should not be a place where terrorists have a voice".
The UK interior ministry welcomed Facebook's efforts, but said technology companies needed to go further.
"This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place," a ministry spokesman said on Thursday.
Among the AI techniques being used by Facebook is image matching, which compares photos and videos uploaded to Facebook against "known" terrorism images and videos.
Matches generally mean either that Facebook had previously removed that material, or that it had ended up in a database of such images that the company shares with YouTube, Twitter and Microsoft.
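The blog post does not say which matching technique Facebook uses; the shared industry database is a hash database, and systems of this kind are widely associated with hashing tools such as Microsoft's PhotoDNA. Purely as an illustration of the general idea, the minimal Python sketch below computes a simple perceptual "difference hash" and checks uploads against a set of known-bad hashes. Every name here is hypothetical, and the real systems are far more robust to cropping, re-encoding and other edits.

```python
# Illustrative sketch only: a simple perceptual "difference hash" (dHash)
# compared against known-bad hashes. This is NOT Facebook's algorithm.
from PIL import Image

def dhash(image_path, hash_size=8):
    """Return a 64-bit perceptual hash of the image at image_path."""
    # Grayscale, then resize to (hash_size+1) x hash_size pixels.
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            # Each bit records whether brightness increases left-to-right.
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def matches_known(image_path, known_hashes, max_distance=5):
    """True if the upload is within max_distance bits of any known hash."""
    h = dhash(image_path)
    return any(bin(h ^ k).count("1") <= max_distance for k in known_hashes)
```

A small Hamming-distance tolerance, rather than exact equality, is what lets near-duplicates (recompressed or slightly altered copies) still match.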
New techniques
Facebook is also developing "text-based signals" from previously removed posts that praised or supported terrorist organisations. It will feed those signals into a machine-learning system that, over time, will learn how to detect similar posts.
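The post gives no detail on the model itself. As a loose, hypothetical sketch of the idea of learning from previously removed posts, one could train a simple text classifier; the placeholder data and scikit-learn pipeline below are illustrative assumptions, not Facebook's system.

```python
# Illustrative sketch only: a bag-of-words classifier trained on posts
# previously removed (label 1) versus ordinary posts (label 0).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training corpora; placeholder strings stand in for real text.
removed_posts = ["<text of a previously removed post>", "<another removed post>"]
benign_posts = ["<an ordinary post>", "<another ordinary post>"]

texts = removed_posts + benign_posts
labels = [1] * len(removed_posts) + [0] * len(benign_posts)

# TF-IDF features over word unigrams and bigrams, fed to a linear model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Probability that a new post resembles previously removed material;
# high-scoring posts would be queued for human review rather than
# removed automatically.
score = model.predict_proba(["<a new post to screen>"])[0][1]
```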
In their blog post, Bickert and Fishman said that when Facebook receives reports of potential "terrorism posts", it reviews those reports urgently.
They also said that in the rare cases when Facebook uncovers evidence of imminent harm, it promptly informs the authorities.
The company admitted that "AI can't catch everything" and technology is "not yet as good as people when it comes to understanding" what constitutes content that should be removed.
To address these shortcomings, Facebook said it continues to use "human expertise" to review reports and determine their context.
The company had previously announced it was hiring 3,000 additional people to review content that was reported by users.
Facebook also said it will continue working with other tech companies, as well as government and intergovernmental agencies to combat the spread of "terrorism" online.
©2017 Al Jazeera (Doha, Qatar) Distributed by Tribune Content Agency, LLC.