Typical of domestic terrorists and violent extremists today, each of these culprits was active on social media, leaving an online record of words and thoughts related to their crimes. Building on military tactics for locating terror threats online, researchers at the Illinois Institute of Technology think machine learning and artificial intelligence could turn those social media posts into breadcrumb trails that governments and investigators can follow to identify anonymous accounts.
In a paper co-authored by assistant professors from Illinois Tech and the University of Nebraska, graduate students Andreas Vassilakos and Jose Luis Castanon Remy combined Maltego, a link-analysis tool included in the penetration-testing and digital forensics distribution Kali Linux, with a practice used by the military known as open source intelligence (OSINT). With Maltego, they compiled social media posts from Twitter, 4chan and Reddit and performed link analysis to find the same entity appearing in more than one place, for instance connecting a Twitter feed to a name in online court documents.
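To illustrate the idea, here is a minimal sketch in Python of that kind of cross-platform link analysis, using the networkx graph library rather than Maltego itself; the platforms, handles and matching rule are hypothetical stand-ins for the researchers' actual data and process.

```python
# Minimal sketch of cross-platform link analysis: build a graph of
# (platform, identifier) observations and surface identifiers that
# appear under more than one platform. All data below is hypothetical.
import networkx as nx

# Each observation ties an identifier (handle, email, real name) to
# the platform or document it was seen on.
observations = [
    ("twitter", "@example_handle"),
    ("reddit", "u/example_handle"),
    ("court_docs", "Jane Q. Public"),
    ("twitter", "Jane Q. Public"),   # name found in a Twitter bio
]

g = nx.Graph()
for platform, identifier in observations:
    # Normalize handles so "@example_handle" and "u/example_handle" match.
    key = identifier.lstrip("@").removeprefix("u/").lower()
    g.add_edge(("source", platform), ("entity", key))

# An entity node with degree > 1 was seen in more than one place,
# the kind of cross-platform overlap the researchers look for.
for node in g.nodes:
    if node[0] == "entity" and g.degree(node) > 1:
        sources = [n[1] for n in g.neighbors(node)]
        print(f"{node[1]!r} appears in: {sources}")
```

In a real investigation the matching rule would be far fuzzier, but the structure is the same: identifiers become nodes, and an identifier connected to more than one source is a lead worth pursuing.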
Maurice Dawson, one of the co-authoring professors and the director of Illinois Tech's Center for Cyber Security and Forensics Education, told Government Technology that linking one of those accounts to someone's real-world identity is time-consuming but possible to do manually. For example, if a threatening account posts a photo, an investigator could extract the photo's metadata, including location coordinates, plug those into Google, search for tweets posted within 50 meters of that coordinate and, from there, figure out which of the tweeting accounts belongs to someone who lives in that area.
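As a rough sketch of the first step Dawson describes, the snippet below uses the Pillow imaging library to read GPS coordinates from a photo's EXIF metadata and formats the sort of 50-meter radius query he mentions; the file name is hypothetical, and many platforms strip EXIF data on upload, so the approach only works on images that retain it.

```python
# Minimal sketch of the metadata step: extract GPS coordinates from a
# photo's EXIF tags with Pillow, then build the kind of radius query
# Dawson describes. The file name and radius are illustrative only.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def exif_gps(path):
    """Return (lat, lon) from a photo's EXIF GPS tags, or None."""
    exif = Image.open(path)._getexif() or {}
    gps = None
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
    if not gps or "GPSLatitude" not in gps:
        return None

    def to_deg(dms, ref):
        # EXIF stores degrees, minutes, seconds as rationals.
        d, m, s = (float(x) for x in dms)
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg

    return (to_deg(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_deg(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

coords = exif_gps("threatening_post.jpg")  # hypothetical file
if coords:
    lat, lon = coords
    # Twitter's v1.1 search API accepted a geocode parameter of the
    # form "lat,lon,radius"; 0.05km is roughly the 50 meters cited.
    print(f"geocode={lat:.6f},{lon:.6f},0.05km")
```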
One problem with the manual process: it is highly time-consuming, and there are already too few people doing these jobs. There are 464,420 cybersecurity job openings nationwide across the public and private sectors, according to cyberseek.org. Dawson said his research team is now writing the AI and machine-learning code to automate some of the scraping and link analysis, and he pointed to domestic terrorism and gang activity as possible use cases.
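The collection side of that automation might look something like the sketch below, which pulls recent posts from Reddit's public JSON listing and filters them by keyword; the subreddit and keyword are placeholders, and the team's actual pipeline has not been published.

```python
# Minimal sketch of an automated collection step: fetch recent posts
# from a public Reddit JSON listing and keep those matching a keyword.
# The subreddit and keyword below are hypothetical placeholders.
import requests

def recent_posts(subreddit, keyword, limit=25):
    url = f"https://www.reddit.com/r/{subreddit}/new.json"
    resp = requests.get(
        url,
        params={"limit": limit},
        headers={"User-Agent": "osint-research-sketch/0.1"},  # Reddit requires a UA
        timeout=10,
    )
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    return [
        (p["data"]["author"], p["data"]["title"])
        for p in posts
        if keyword.lower() in p["data"]["title"].lower()
    ]

for author, title in recent_posts("news", "threat"):
    print(author, "-", title)
```

Output like this could feed directly into the link-analysis graph shown earlier, which is the pairing of scraping and analysis the team is working to automate.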
“What we’re trying to do is find a way to make this fully open source and available to anyone who wants to do it, namely state and federal government, and have it automated. If we have a domestic terrorism event, let’s create an intelligence profile of this event. This profile can be created from tweets and stuff like that, so we’ve created the process, and now we’re continuing to use AI and machine learning to further automate this,” he said. “You could take this technology and identify who these people are and go to these entities even before something happens.”
Asked about the ethical issues with outing anonymous accounts online, Dawson acknowledged concerns about who might use such an AI tool. If it’s open source and available to anyone, he said, a stalker or domestic abuser could feasibly use it to find people, unless the government claimed the tool and required a license to use it. But he was convinced the demands of online threat assessment and cybersecurity are outgrowing the manpower to handle them.
“In order to mitigate the problem, you have to automate a lot of these tasks,” he said.