The round, which brings the company’s total funding amount up to about $37 million according to Crunchbase, signals confidence in the company’s ability to find a market for such technology as misinformation increasingly amplifies problems. The company, though based in the U.K., has worked with government agencies in the U.S., where misinformation about vaccines, election results and other topics has stunted the progress of life-saving anti-COVID measures and even led to political violence.
The company uses AI to flag possible misinformation on social media, as well as to identify patterns of communication, and then gives its customers a way to respond. There is a human element to the process, but at least a portion of the company’s technology is meant to automate the determination of what is likely to be true and what isn’t.
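To make the human-in-the-loop idea concrete, here is a minimal, purely illustrative sketch of such a triage pipeline. None of this reflects Logically’s actual system; the keyword lists, scoring function and threshold are all hypothetical stand-ins for the kind of model-driven scoring the article describes, showing only the routing logic: confident automated calls are decided by the machine, and uncertain ones are queued for a human reviewer.

```python
from dataclasses import dataclass

# Hypothetical keyword lists standing in for a real detection model.
KNOWN_NARRATIVES = {
    "vaccines": ["microchip", "magnetic"],
    "elections": ["rigged machines", "ballot dumping"],
}

@dataclass
class Verdict:
    post: str
    score: float        # 0.0 = no match, 1.0 = matches every known narrative
    needs_human: bool   # True when the automated score is too uncertain

def score_post(post: str) -> float:
    """Toy scorer: fraction of known narrative keywords present in the post."""
    text = post.lower()
    hits = sum(kw in text for kws in KNOWN_NARRATIVES.values() for kw in kws)
    total = sum(len(kws) for kws in KNOWN_NARRATIVES.values())
    return hits / total

def triage(post: str, auto_threshold: float = 0.5) -> Verdict:
    """Automate only confident calls; route mid-range scores to a human."""
    s = score_post(post)
    return Verdict(post, s, needs_human=0.0 < s < auto_threshold)
```

The design choice sketched here, automating the clear-cut cases while escalating ambiguous ones, mirrors the article’s point that the technology automates only “a portion” of the determination.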
“Given the evolving threat landscape and the increasingly sophisticated technology being used by adversarial actors, this investment will help us continue to stay at the forefront of the fight against misinformation, funding further research and development of end-to-end technology solutions that really break down and combat information threats,” said Lyric Jain, the company’s CEO, in a press release.
Logically is not alone in the space, though different companies take different approaches to the problem. AlphaVu, for example, has worked with the Virginia Department of Health to create a kind of digital focus group so the agency could identify general sentiment among different demographics of people in order to direct its public messaging. Vidrovr, which specializes in using AI to create insights into video content, emphasizes finding patterns over determining what’s true.
Much of Logically’s pitch is about the value of understanding such patterns, but the company also seeks to put mitigation and response tools in the hands of its government customers. That includes pre-crafted messaging from experts as well as the ability to alert social media networks to misinformation.
“If the response isn’t immediate, if it seems like, at least optics-wise, there’s any uncertainty in response, or if there’s a vacuum where a narrative is allowed to go unchallenged, that’s the space where the most harm occurs,” Jain told Government Technology in a previous interview. “And it’s really hard to convince people once they’ve been convinced of that narrative.”
The funding round was led by Vitruvian Partners, with participation from the Amazon Alexa Fund, XTX Ventures and the Northern Powerhouse Investment Fund.