The Metaverse Will Be the Next Breeding Ground for Bad Info

Metaverses could be fertile ground for misinformation to spread if left unchecked. Reducing that danger means seizing the moment and starting to think through tricky content moderation policies now.

Digital illustration of a person connecting to a metaverse. (Shutterstock/is.a.bella)
The dangers of misinformation and disinformation have been cast into sharp relief recently as outpourings of false narratives create confusion over Russia’s invasion of Ukraine, some of them pushed by Russia-backed media to “justify” the attacks. On the home front, the Cybersecurity and Infrastructure Security Agency (CISA) has warned that Russia may spread misinformation to bias U.S. policies.

Meanwhile, Americans continue to see the human toll of conspiracy theories about COVID-19 vaccines.

The fight against misinformation has often focused on its spread through social media platforms and sites that masquerade as news publications. But as communication channels evolve, many who examine the space anticipate that misinformation could move into a new arena — the metaverse.

Metaverses are often envisioned as immersive digital environments that persist even when users are logged out, and which use augmented and virtual reality. They are likely to mimic the real world to an extent and could virtually re-create some experiences like shopping and both professional and social meetings.

This is still a nascent area, so it is unclear how many organizations will launch these virtual environments or exactly what they will look like.

EARLY GLIMPSES OF A LARGER PROBLEM


Another kind of global, “always on” digital realm may offer some hints of how narratives could spread in the metaverse: online multiplayer video games. Such games let players interact in real time with other players around the world, creating a kind of large-scale online forum.

Daniel Kelley, director of strategy and operations for the Anti-Defamation League (ADL)’s Center for Technology and Society, wrote in a 2020 Slate article that such games give players the opportunity to spread information, genuine or manipulated, that game companies’ content moderation policies may not be prepared to handle. For example, a 2019 ADL survey found that 13 percent of U.S. residents who play online multiplayer games reported being exposed to disinformation about 9/11, and 8 percent said they saw disinformation about vaccinations.

Early forays into the metaverse have already hit hurdles.

Metaverse startup Sensorium Corp reportedly ran into a misinformation issue when demoing at a 2021 tech conference, according to Bloomberg. Participants could chat with virtual characters, including a bot that, “when simply asked what he thought of vaccines, began spewing health misinformation.” That included telling participants that vaccines can be more harmful than the diseases they are designed to defend against. Bloomberg reported that developers said they planned to make updates that would restrict what the bot can say about certain topics.
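Bloomberg did not detail how those updates would work, but one common approach is a keyword-based filtering layer that sits between the user and the bot’s reply generator. The sketch below is a minimal, hypothetical illustration of that technique; every name, topic and canned response in it is invented, not taken from Sensorium’s code.

```python
# A hypothetical sketch of topic restriction for a conversational bot,
# not Sensorium's actual fix. All names and responses are invented.

RESTRICTED_TOPICS = {
    "vaccine": "I'm not able to discuss health topics. Please consult a medical professional.",
    "election": "I can't comment on elections or politics.",
}

def filtered_reply(user_message: str, generate_reply) -> str:
    """Short-circuit to a canned response when a message touches a
    restricted topic; otherwise defer to the bot's normal generator."""
    lowered = user_message.lower()
    for keyword, canned in RESTRICTED_TOPICS.items():
        if keyword in lowered:
            return canned
    return generate_reply(user_message)

# filtered_reply("What do you think of vaccines?", bot.respond) would
# return the canned health deflection instead of free-form bot output.
```

Simple keyword matching like this is blunt; real systems would likely pair it with classifiers, but the basic short-circuit structure is the same.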

BuzzFeed reporters using Facebook parent Meta’s virtual reality entertainment platform, Horizon Worlds, also found apparent lapses in misinformation controls. The platform allowed them to design a private, invite-only area that they filled with audio and text-based conspiracy theories, without prompting content moderators to intervene.

“We called the world ‘The Qniverse,’ and we gave it a soundtrack: an endless loop of Infowars founder Alex Jones calling Joe Biden a pedophile and claiming the election was rigged by reptilian overlords. We filled the skies with words and phrases that Meta has explicitly promised to remove from Facebook and Instagram — ‘vaccines cause autism,’ ‘COVID is a hoax,’ and the QAnon slogan ‘where we go one we go all,’” one reporter wrote.

Horizon Worlds moderators did not intercede or take down the Qniverse until the reporters eventually used their status as journalists to contact Meta’s communications department and ask why. BuzzFeed employees’ earlier attempts to flag the Qniverse with in-platform reporting tools received the response that it “doesn’t violate our Content in VR policy.”

A PLACE FOR REGULATIONS?


Now is the time to get ahead of metaverse misinformation, according to political strategist and former Illinois Deputy Gov. Bradley Tusk.

He wrote recently that regulators should start thinking through the tricky regulatory questions surrounding metaverses, so they can push the space into developing along safer lines.

Regulators, think tanks and others may need to wait until platforms launch to see exactly what they’re dealing with before hammering out specific policy details and proposals. But now is the time to think through larger concepts. Those include how to address free speech and misinformation concerns, what truth-in-advertising and election campaigning regulations will be needed and which jurisdictions’ privacy policies should apply to metaverse operators and their far-flung individual and business users.

“The problems we have regulating technology companies now will be reproduced and amplified in the metaverse. You think policing state-sponsored disinformation is hard on Facebook and Twitter?” Tusk writes. “…How do we ensure accurate information prevails, especially in a context where the alteration of reality is the point? Life on the metaverse will not look or feel like real life, and that’s by design. So how do we keep people safe?”

Daniel Castro, Information Technology and Innovation Foundation (ITIF) vice president and director of its Center for Data Innovation, also raised concerns in a report published yesterday. He said that platforms and policymakers need to expect that bad actors will try to misuse platforms and must find the right techniques to curb abuses without overly stifling innocuous use.

He noted that new technology brings new risks, including that malicious metaverse users could easily create avatars in the likeness of other people in order to impersonate them.

Another challenge is that conversations in metaverses happen in real time, potentially in complex ways that require moderators to understand avatars’ gestures and the social context to grasp the full meaning. Such exchanges may also leave no traces behind for fact-checkers to evaluate, as in the case of unrecorded audio conversations, Castro wrote.

Platforms trying to prevent the spread of harmful content must decide how to strike a balance between safety and privacy, he writes: those aiming to clamp down could opt to listen in on as many user conversations as their employees and automated systems are capable of reviewing, but such mass surveillance could also chill non-malicious speech.
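One way to picture that trade-off is as a single tunable parameter: the fraction of live conversations a platform samples for review. The sketch below is a hypothetical illustration with invented names and an arbitrary rate; it is not drawn from Castro’s report or any real platform’s system.

```python
import random

# A minimal sketch of the coverage-versus-privacy dial: review only a
# sampled fraction of conversations. Rate and names are assumptions.

REVIEW_SAMPLE_RATE = 0.05  # 5% of conversations; raising this widens
                           # moderation coverage but surveils more speech

def maybe_queue_for_review(conversation_id: str, review_queue: list) -> bool:
    """Randomly select a conversation for automated or human review."""
    if random.random() < REVIEW_SAMPLE_RATE:
        review_queue.append(conversation_id)
        return True
    return False
```

The higher the rate, the more harmful content gets caught and the more innocuous speech gets overheard; the dial has no setting that maximizes both safety and privacy at once.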

At present, too many content moderation decisions fall on platforms, Castro said. Federal policymakers should help by giving guidance about what counts as malicious mis- or disinformation and how platforms should respond to it. Federal agencies and partners should also develop a framework for platforms to refer to — if they desire — when making content moderation decisions, he said. That could include advice about whether certain situations call for suspending users or simply limiting how the content can be shared.
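Such a framework could be as simple as a published lookup table from violation category to graduated response. The sketch below is a hypothetical rendering of that idea; the categories and actions are invented examples, not taken from ITIF’s report.

```python
# A hypothetical sketch of a reference framework mapping violation
# categories to graduated responses. All entries are invented examples.

MODERATION_FRAMEWORK = {
    "coordinated_disinformation": "suspend_user",
    "repeat_misinformation": "limit_sharing",
    "first_offense_misinformation": "label_and_warn",
}

def recommended_action(violation_category: str) -> str:
    """Look up the framework's suggested response, defaulting to human
    review for categories the framework does not cover."""
    return MODERATION_FRAMEWORK.get(violation_category, "escalate_to_human_review")
```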

And metaverse platforms as well as traditional social media platforms need a forum where they and federal law enforcement and intelligence agencies can communicate. This would allow them to better work together on identifying and responding to threats like disinformation campaigns, he said.
Jule Pattison-Gordon is a senior staff writer for Governing and a former senior staff writer for Government Technology, where she specialized in cybersecurity. She previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.