On Thursday, June 15, the McCain Institute hosted the third meeting of its Task Force on Defeating Disinformation Attacks on U.S. Democracy. The McCain Institute welcomed guest speakers from leading tech and social media companies, as well as expert intelligence analysts and researchers to discuss approaches to combating the spread of mis/disinformation online.
The panelists agreed that violent content, harassment, hate speech, and mis/disinformation online have contributed to an unhealthy information ecosystem, and that tech and media companies have an obligation to protect users and promote information integrity. As many of the world’s democracies prepare for upcoming elections, the spread of mis/disinformation threatens free and fair elections and may even incite violence. At this moment, prioritizing proactivity is key.
While mis/disinformation is not a new phenomenon, its scale has become unprecedented with the speed at which ideas are broadcast on digital platforms. One panelist asked: when anyone can contribute to the conversation, how could disinformation not spread? With these concerns in mind, the tech and media industries are invested in making content reliable and accurate while valuing the democratic flow of information. Approaches to these goals include using machine learning and human reviewers to crack down on harmful content, algorithmically prioritizing authoritative content to bring users reputable information, and exploring how user-end incentives, such as revenue-share monetization for creators, can encourage information integrity.
The analysts recommend that tech and media companies not only combat mis/disinformation on their platforms internally but also partner with independent researchers, who can produce metrics that inform models for mitigating unwanted content on digital platforms.
One possible solution focuses on adapting algorithms to reflect users’ values-based preferences – a step away from engagement-based algorithms informed by clicks, likes, and shares. This would require additional long-term research, such as surveying users with short questionnaires about how they feel about certain content. In the end, it could result in a healthier social media ecosystem.
Left unregulated, hate speech and harassment may lead to violence, and while machine learning can help detect these threats, human intelligence is needed to instruct AI on what to look for. Analysts are trained in the nuances of hate speech and can identify coded terms that are harder for AI models to detect. As one panelist mentioned, this work may produce small average changes, but potentially substantial—even lifesaving—changes for subgroups of the population.
The international reach of media platforms is among the steepest barriers to combating mis/disinformation, but progress is being made. One panelist noted that real-time AI translation is erasing language barriers, though challenges remain with less common languages. In many cases, human moderators are required to understand local languages and nuanced phrases, which can limit action on mis/disinformation shared in those languages. And while AI models are still very Westernized, tech companies can proactively remove biases from their models at every step in the process.
Another challenge of media’s international reach, one panelist highlighted, is that American standards for governing free speech are not universal. Any policy regarding content guidelines must therefore be evergreen and applicable to every user in every location.
The innovative approaches our panelists discussed, including using machine learning to enforce community guidelines, developing alternative algorithm models, improving digital literacy, and encouraging users to demand verification of authenticity, shed an optimistic light on the fight against mis/disinformation online. Still, challenges remain.
One panelist remarked that while real, actionable solutions to mis/disinformation did not exist in the past, they do now; implementing them, however, will require an engaged effort across the tech and media industries. A collaborative approach will be essential: companies must not only work together but also partner with external experts, academic researchers, and nongovernmental organizations to ensure best practices.
The McCain Institute values the time and contributions of each of our panelists and task force members. We look forward to continued engagement with tech and media companies in the ongoing work of the Task Force on Defeating Disinformation Attacks on U.S. Democracy.
These discussions are made possible by a grant from the John S. and James L. Knight Foundation and Microsoft.