Industry Perspectives
Insights from the field
Global Elections Playbook: AI Edition
Over the past few weeks, I’ve engaged in numerous discussions with my peers in the tech industry, friends in the political and policy realm, and former colleagues in the foreign policy sector. A recurring theme emerged: understanding how AI would influence global elections during a year when nearly 50% of the world is participating in the election process.
Many are inquiring about the impact of AI on the global election cycle and the ways in which companies should approach addressing the risks and opportunities it presents. Although it is acknowledged that generative AI will not single-handedly determine election outcomes, it remains a tool that can be readily accessed and exploited by ‘bad actors’ globally.
Generative AI is capable of producing content that may be used to disseminate misinformation and amplify disinformation campaigns on a global scale. Given the absence of an industry-wide mechanism for eradicating false information across all platforms, the responsibility falls upon the companies generating the content to establish necessary safeguards. This involves a range of measures, from policy formulation to email verification, geolocation tagging, and monitoring of slurs, dog whistles, and any content related to political parties, candidates, elected officials, and official voting information. Below, I outline some thoughts on structuring teams and prioritizing tasks for the upcoming election cycle.
How I would structure a team and build a framework for election integrity:
Global Elections Lead
Operations SME (Subject Matter Expert)
Data SME
Policy SME
Language SME
Legal SME
Partnerships SME
Communications SME
Crisis Response SME
While subject matter experts are crucial leaders, success ultimately hinges on proper team staffing. A truly global response program requires sufficient personnel to maintain a comprehensive on-call schedule; this is the only way to ensure effectiveness.
Framework:
1. Partnerships
In the US, collaborate with every state Secretary of State (SOS) office for fundamental training, with each state receiving a budget for election readiness. Focus on states that have already expressed concerns about election integrity. Partner with the Bipartisan Policy Center (BPC) to conduct red teaming exercises and educate on software usage. Similarly, since every country has an election commission, it’s vital to forge relationships with each to guarantee information accuracy and assist them in leveraging the platform effectively.
Engage with national and international organizations representing minority groups to ensure that slurs and specific phrases are incorporated into the system and that classifiers are in use. Such phrases should trigger flags in the system and block a response from being generated. Coverage should include variations in spelling, acronyms, and translations (a rough sketch of this kind of flagging appears at the end of this Partnerships section).
Work with global watchdog organizations that monitor misinformation trends and can report relevant trends and language to OpenAI, aiding in the update of classifiers that require monitoring. This is particularly important in countries governed by authoritarian regimes.
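To make the flagging described above concrete, here is a minimal sketch of a term-based pre-filter that partner-supplied slurs, dog whistles, and their spelling variants could feed into. It is purely illustrative: the placeholder terms, the normalization rules, and the function names are my own assumptions, and a production system would pair lists like this with trained classifiers, per-market language experts, and human review.

```python
import re
import unicodedata

# Illustrative only: in practice these lists would come from partner
# organizations and be maintained per market and per language.
BLOCKED_TERMS = {
    "example-slur",          # placeholder for partner-supplied slurs
    "example dog whistle",   # placeholder for coded political phrases
}

def normalize(text: str) -> str:
    """Lowercase, strip accents, and collapse whitespace so common
    spelling variants still match the blocked-term list."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return re.sub(r"\s+", " ", text.lower()).strip()

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt contains any blocked term and should be
    routed to review instead of generating a response."""
    normalized = normalize(prompt)
    return any(term in normalized for term in BLOCKED_TERMS)

if __name__ == "__main__":
    print(flag_prompt("Write a flyer using EXAMPLE-SLUR"))   # True
    print(flag_prompt("When is early voting in Georgia?"))   # False
```

A static list like this is only the floor; the point of the watchdog partnerships above is that the list and the classifiers behind it get updated as new coded language emerges.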
2. Operations/Policies
Utilize geo-targeting to pinpoint the locations of users generating campaign or election-related content using OpenAI products. This is just one of many steps to identify foreign interference by known actors worldwide.
For ballot and procedural inquiries, provide language that directs users to the official SOS website or CanIVote.org as an alternative, noting that state websites are often more current than national ones.
Similarly, for each country, provide language directing users to the official election commission or authority site. In authoritarian states, guide users to watchdog organizations that can offer accurate information regarding official election procedures.
For politically charged or subjective questions about candidates, offer a standard response such as:
“I am a language model. Please consult a search engine for more detailed information on specific political issues.”
For inquiries related to current elected officials, provide relevant information and include a disclaimer about political views, urging users to conduct their own research:
“Here’s information related to [official], the current holder of [position]. For details on their political views or campaign, please consult a search engine for more information.”
For requests involving the creation of election-related materials (ads, fundraising emails, campaign slogans, etc.), restrict access to verified campaign or elected office accounts (a rough routing sketch follows this framework). Other users should receive a message stating:
“Certain features are accessible only to verified account holders. Please log in with your verified account.”
3. Create internal policy exceptions tailored to each market, acknowledging specific trends, political organizations, slurs, etc.
4. Flag any content utilizing the names of candidates or elected officials for creating content, including images, that could be construed as deep fakes or misinformation.
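The operational policies in item 2 amount to a routing decision: given a query type and an account's verification status, return generated content, a canned redirect, or a refusal. Below is a minimal sketch of that routing logic using the sample messages above. It assumes the query type and verification status have already been determined upstream (by classifiers and account systems I do not model here) and is an illustration of the framework, not OpenAI's actual implementation.

```python
from enum import Enum, auto

class QueryType(Enum):
    BALLOT_PROCEDURE = auto()       # "How do I vote?", polling places, deadlines
    CANDIDATE_OPINION = auto()      # subjective or politically charged questions
    ELECTED_OFFICIAL_INFO = auto()  # factual info about current officeholders
    CAMPAIGN_MATERIAL = auto()      # ads, fundraising emails, slogans
    OTHER = auto()

def route_election_query(query_type: QueryType, country: str, verified_campaign: bool) -> str:
    """Return the response policy for an election-related query.
    Hypothetical sketch: a real system would rely on trained classifiers,
    not a hand-written enum."""
    if query_type is QueryType.BALLOT_PROCEDURE:
        if country == "US":
            return ("Please check your state Secretary of State website or "
                    "CanIVote.org for official voting information.")
        return "Please check your national election commission's official website for voting information."
    if query_type is QueryType.CANDIDATE_OPINION:
        return ("I am a language model. Please consult a search engine for more "
                "detailed information on specific political issues.")
    if query_type is QueryType.ELECTED_OFFICIAL_INFO:
        return ("Here is general information about the officeholder. For details on their "
                "political views or campaign, please consult a search engine.")
    if query_type is QueryType.CAMPAIGN_MATERIAL:
        if verified_campaign:
            return "GENERATE"  # allow generation only for verified campaign accounts
        return ("Certain features are accessible only to verified account holders. "
                "Please log in with your verified account.")
    return "GENERATE"

# Example: an unverified user asking for a fundraising email is refused.
print(route_election_query(QueryType.CAMPAIGN_MATERIAL, "US", verified_campaign=False))
```

The per-market exceptions in item 3 would sit on top of this: the same routing skeleton, but with country-specific term lists, parties, and carve-outs swapped in.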
—
The manner in which generative AI is utilized during this global election cycle will shape the nature and extent of regulations imposed by countries in the future. While the focus remains on traditional social media platforms, earnest discussions about the role of generative AI in society and in this election cycle will dictate the level of governmental intervention going forward.
OpenAI & The Global Election Cycle
AI amplifies the worst aspects of human nature. What does that mean for elections?
In January, I took some time to ‘red team’ OpenAI’s new election integrity system, announced during Sam Altman’s appearance at Davos. I chose to do this for several reasons, most importantly because of my experience working on the 2020 election at Meta (formerly Facebook) and my involvement in politics since 2012, both in campaigns and in the Senate. As the Elections Fellow at the Integrity Institute, focused on global elections, I was extremely curious about how OpenAI’s integrity system would function across more than 70 elections in democratic, hybrid regime, and authoritarian countries. I am also a CFR term member and have participated in countless meetings about the state of democracy and the global impact of elections, including areas important to me, such as violent extremism.
It’s crucial to understand that AI amplifies the worst aspects of human nature. While social platforms with public and private spaces create echo chambers, AI tools provide low-cost opportunities for bad actors, both domestic and international, to amplify their messaging, create GPTs through APIs, and figure out how to subvert current systems. My approach to this problem would be twofold: preventative and reactive. Although most scenarios we’re addressing will likely occur regardless of the guardrails in place, this doesn’t mean we should accept band-aid solutions for gaping wounds, especially during a technological renaissance. It’s also important to note that there is no perfect solution; it’s about iteration and creating broadly workable solutions, including space for necessary carve-outs.
OpenAI is, first and foremost, not a distribution platform like Instagram, Facebook, TikTok, or Snapchat. Instead, it's a platform where you can either create content or develop tools (GPTs) that serve multiple purposes. We're familiar with ChatGPT 3.5, 4, and DALL-E 3, along with the GPT Store, which offers tools enabling the AI, in some cases, to create essays indistinguishable from human writing (Humanizer Pro) and ‘bypass the most advanced AI detectors’. The potential impact AGI could have on future generations is astounding, but first we need to establish guardrails, and quickly.
NASS, the National Association of Secretaries of State, is the primary organizing authority for all SOS offices across the U.S. However, according to insiders, NASS is not sufficiently robust, necessitating that individual states devise their own policies and methods to address social media platforms and AI more broadly. One SOS official expressed concern about the volume of misinformation and disinformation, and how their office and local election offices will become so overwhelmed that the average citizen will find it impossible to discern truth from falsehood. Those working in this field are already stretched thin preparing for an election cycle rife with false narratives even without AI, and have limited capacity to prepare for what AI adds. They rely on platforms, and in this case OpenAI, to mitigate potential harm as much as possible.
The outcomes of these elections will also determine the extent to which users lose faith in OpenAI, which could lead to user attrition and ultimately less data for the LLM to train on. Depending on how many mistakes the company makes, OpenAI could become the next Facebook circa 2016, a company now attempting to redeem itself in the election space and demonstrate active risk mitigation.
Immediately after reading OpenAI’s election blog, I started to jot down a series of questions I wish I had clarity on; they fall within these four categories:
Access to Authoritative Voting Information & Prevention of Voter Interference
What happens to debunked content?
Election Prioritization — Which countries should be prioritized and why?
Knowledge Sharing and Liability
Below, I share some of my thinking from a policy, operations, strategic, and human rights lens. These questions led me to red team OpenAI’s products to understand how close they were to living up to their manifesto.
Access to Authoritative Voting Information & Prevention of Voter Interference
How is OpenAI approaching elections in countries outside India, the EU, and the United States, particularly in hybrid regimes and authoritarian nation-states where citizens still have the right to authoritative and accurate voting information? Does it maintain relationships with election authorities in each country? Does it collaborate with watchdog and international election monitoring organizations? How is it addressing slurs and politically charged language used to deter voting, and is it working with local vendors to capture language nuances in addition to creating lists of political figures, political parties, and contentious issues? Domestic and international actors are scrutinizing every election leading up to the EU Parliamentary election (June 6–9) to identify gaps in the system: 27 countries, more than 30 languages (not including English), and voting rules that vary by country for voter registration and for casting votes in person, electronically, or absentee, a patchwork that closely resembles the U.S. system. How will these actors penetrate and expose weaknesses within the platform’s defenses?
What happens to debunked content?
Elections are rapidly evolving, and something I learned firsthand in 2020 was that a minor change, such as an early vote date alteration by an authoritative source, could quickly spiral into a disinformation campaign. What happens to content created by one of OpenAI’s tools that was accurate on Monday but debunked by an election authority on Tuesday? What about the content that is already live and the users who used OpenAI to access this information — are they receiving notifications that the response to their query is no longer valid and is now false information? I’m concerned about users who unknowingly share false information, and the audience on other distribution platforms including traditional news outlets, Facebook, Instagram, Discord, TikTok, and X, who can view and process it. Does that information stay up? Is it removed? How is OpenAI collaborating with these companies to reduce the virality of content and ultimately remove it?
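One way to think about the debunk-propagation problem I'm raising is as a lookup from claims to the generations that cited them. The sketch below is entirely hypothetical; I have no visibility into how OpenAI stores this, and the data structure, function names, and example claim ID are my own assumptions. It simply shows the minimum bookkeeping needed to answer "who received content that relied on information that was later corrected?"

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical store mapping a claim ID to every generation that cited it.
# A real system would need durable storage and privacy review; this is a sketch.
claim_to_generations = defaultdict(list)  # claim_id -> [(user_id, generation_id)]

def record_generation(claim_id: str, user_id: str, generation_id: str) -> None:
    """Remember which users received content that relied on a given claim."""
    claim_to_generations[claim_id].append((user_id, generation_id))

def debunk_claim(claim_id: str, source: str) -> list[str]:
    """When an election authority corrects a claim, return the notifications
    that would be sent to affected users."""
    timestamp = datetime.now(timezone.utc).isoformat()
    return [
        f"[{timestamp}] Notify {user_id}: generation {generation_id} relied on "
        f"information later corrected by {source}."
        for user_id, generation_id in claim_to_generations[claim_id]
    ]

record_generation("early-vote-date-example", "user-123", "gen-456")
print(debunk_claim("early-vote-date-example", "the state election authority"))
```

Whether anything like this exists, and whether notifications would extend to the downstream platforms where the content was reshared, is exactly the open question.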
Election Prioritization — Which countries should be prioritized and why?
How is OpenAI prioritizing harm and risk, and where do elections in non-ideologically western markets stand? AGI is supposed to open up the world and push humanity forward, but what does that mean if fair and free elections in a handful of countries are top of mind for the company? This isn’t just a question for OpenAI but for all platforms involved in this global election cycle. What do investments in countries across the African Union or Latin America look like compared to investments in technology, human specialists, and government affairs teams for India, the EU, or the U.S.? Are decisions driven by market size, user base, and media discourse? Or are these companies, including OpenAI, concerned about more regulations that could constrain business and impede growth? Are they apprehensive about their relationships with governments that could result in legal takedown requests or demands for access to user data from journalists, activists, and dissenters?
Knowledge Sharing and Liability
What is the best way to place the responsibility on the user to find authoritative information themselves? Is it preferable to direct the user to an authoritative website, or does it make sense to share some information and then suggest the user conduct further research? Or does Google’s Bard have the right approach by deferring almost every election-related query to a Google search? The answer is unclear, and there is no definitive solution regarding how much is enough to prevent liability on the company’s part versus providing services that users depend on.
I have many more questions related to watermarks, international and domestic regulation, and how prioritization models are crafted. But as I dig deeper into this year’s election cycle, start looking at election interference reports from Taiwan, and gear up for Pakistan’s election, I have to wonder whether OpenAI will have enough time to address these questions and more before India’s national election, and hopefully before the EU Parliamentary election.
To learn more about my red teaming efforts, the questions used, and the responses provided by OpenAI, please reach out and I will be happy to share my findings.
Elections in the Digital Age: Navigating Technology, Policy and Global Impact
In an era where global politics intertwine with the threads of technology and diplomacy, elections emerge as pivotal moments that shape the course of nations and the collective future of the international community. The insights from thought leaders like Fareed Zakaria at Davos underscore the intricate dance between technology, electoral integrity, and global diplomacy. As we brace for a year marked by critical elections, with the US elections drawing particular scrutiny, the focus extends beyond domestic frontiers, touching upon broader concerns about global stability, the resilience of democratic institutions, and the strategic postures shaping our era.
The digital transformation of electoral processes, while heralding inclusivity and efficiency, also unveils a trove of challenges—cybersecurity threats, the proliferation of misinformation, and the manipulation of public discourse through digital platforms stand out as formidable concerns. In this digital age, the quest for truth is a constant battle, underscoring the paramount importance of robust, transparent, and secure electoral mechanisms. Technology, serving as both a beacon of democratic engagement and a potential vector for discord, requires a balanced approach to safeguard and enhance our democratic values.
Recent comments by former President Donald Trump regarding NATO have stirred significant discourse amidst these challenges. Trump's suggestion that he would encourage Russia to attack U.S. allies if they failed to meet their defense spending commitments has drawn sharp criticism from both the Biden administration and European leaders, highlighting concerns over America's reliability as an ally. These remarks not only underscore the geopolitical complexities in the lead-up to the US elections but also reflect on the broader implications of election rhetoric on international relations and security alliances.
Simultaneously, Meta's recent endeavors to fortify election integrity ahead of the upcoming elections shine a light on the proactive measures being taken by tech giants to navigate the minefield of digital information. With initiatives aimed at identifying AI-generated images and moderating political content on platforms like Threads, Meta's approach represents a critical front in the battle against misinformation and the manipulation of public opinion. These efforts, highlighted in recent articles from Axios and CNBC, illustrate the pivotal role of technology companies in shaping the electoral landscape and the discourse surrounding it.
The timing of the US elections, positioned at the year's end, magnifies their global impact, setting the tone for international agendas and geopolitical dynamics in the ensuing year. This period of global political recalibration is closely monitored by allies and adversaries alike, who gauge the outcomes to anticipate shifts in foreign policy, strategic alliances, and the overarching commitment to the liberal world order.
Amidst this global tableau, the US's stance on issues like immigration policy becomes a litmus test for its position on the world stage, influencing international relations and perceptions of humanitarianism. The complex narrative surrounding US immigration policy, stretching from Latin America to regions across the globe, serves as a reflection of the nation's values and priorities at a time when global attention is riveted on its electoral outcomes.
The rise of violent extremism, fueled by online platforms, adds another layer of complexity to the electoral discourse. Tech companies find themselves at the vanguard, tasked with the dual responsibility of facilitating open communication while preventing digital spaces from becoming breeding grounds for radical ideologies. The actions of companies like Meta, in their efforts to moderate content and counter misinformation, embody the critical balance between freedom of expression and the imperative to protect democratic discourse and integrity.
As we navigate the confluence of technology, elections, and global policy, the interconnected challenges and opportunities of this era demand a collaborative, nuanced approach. The integrity of elections, as a cornerstone of democratic societies, influences global policy and shapes international relations, underscoring the need for a concerted effort to uphold democratic values, combat extremism, and foster inclusivity. In this dynamic landscape, our collective resolve to champion these principles will dictate the course of history, ensuring that democracy is not only preserved but allowed to flourish as an emblem of global unity and human dignity.
Navigating Misinformation and Voter Challenges in India’s Election
As India gears up for Phase 1 of its 2024 general elections, the electorate faces a complex landscape of misinformation and voter deterrence challenges. Amid an expected heatwave on April 19, voter turnout may be significantly impacted, particularly in states like Gujarat, Maharashtra, Andhra Pradesh, Karnataka, Odisha, and Madhya Pradesh. The rising temperatures have spurred advisories from the Indian Health Ministry and the National Disaster Management Authority, but the decentralized nature of electoral awareness efforts, with states issuing individual guidelines, could lead to confusion and misinformation.
The issue of voter deterrence extends beyond the heatwave. There is ambiguity regarding polling locations, as schools and colleges close to serve as polling stations, yet there is no consolidated list available (the current ECI site is only available via VPN, and you have to search the Electoral Roll with your EPIC number) for third party fact checkers to verify these locations. This lack of clarity could lead to misinformation and potentially discourage voter participation.
Additionally, the integrity of the voting process is under scrutiny. Recent claims by a YouTuber, who was subsequently arrested, have fueled debates over the reliability of electronic voting machines. These claims advocate for a return to traditional ballot papers, adding another layer of complexity to the election narrative. Also, for the first time in India’s voting history, voters aged 85 and above will have the option to vote from their homes.
Misinformation also targets the candidates themselves. An independent study highlighting the criminal records and financial discrepancies among candidates could be twisted into misleading narratives, further muddying the waters of voter perception and choice.
Aside from a study by the Association for Democratic Reforms on candidates' criminal and financial backgrounds, misinformation campaigns are now shifting focus from standard corruption and extortion allegations, typically seen from parties like the Congress Party and the BJP, to broader themes that could disrupt voter participation itself.
When researching the upcoming Indian elections using AI platforms like OpenAI, Gemini, or Anthropic, or search engines such as Bing, Google, or Yahoo, be prepared to encounter a mix of accurate, outdated, or intentionally misleading information. This makes it essential to critically evaluate sources and verify facts.
As we approach another significant election phase in India, it’s crucial to sift through the noise and seek the truth amidst widespread misinformation. Feel free to connect with me for detailed analysis and discussions as we move closer to the elections.
China’s Long Game: How TikTok Marks a Move in the Tech Cold War
While the forced divestment of TikTok feels like a win for the US, with President Biden signing the foreign aid package that included the provision, it might be a short-sighted victory in the larger technological cold war with China.
Former Secretary of State Henry Kissinger's "Détente" strategy, focused on deterrence to prevent a full-blown US-Soviet war in the 1970s, offers valuable lessons. As Niall Ferguson argued in his Council on Foreign Relations essay, "Kissinger and the True Meaning of Détente," Kissinger, before his death, warned of a "new cold war" more dangerous due to technological advancements. While he likely focused on weapons and artificial intelligence, his words resonate with how tech companies are becoming pawns sacrificed in this modern cold war.
The divestment legislation, giving ByteDance nearly a year to find a US owner or face a ban, is just another move in this ongoing chess game. Last week, Apple, due to its "special" relationship with China, removed Meta's WhatsApp and Threads from its App Store at the CCP's request.
By May 2025, if ByteDance fails to find a buyer, Apple and Google might be forced to remove the app or restrict updates, effectively banning it in the US.
The Fallout: Legal Battles, Platform Shifts, and Economic Ramifications
This potential ban will likely trigger several consequences:
Legal Challenges: ByteDance might challenge the ban in the courts, potentially up to the Supreme Court, arguing it violates First Amendment rights.
Investor Scramble: Potential investors will assess the situation, with TikTok's market cap and pricing impacted by access to its algorithm and user data.
Platform Migration: Meta will likely launch campaigns to entice TikTok users to migrate to Instagram's Reels feature.
Marketing Shifts: Global advertisers will need to adjust their strategies if TikTok is banned. Politicians will also have to decide whether to keep campaigning on the platform.
A crucial, yet under-discussed, risk is the impact on the US economy. The thriving content creator economy on TikTok would be forced back to Meta platforms, which are still playing catch-up. While Reels are profitable, TikTok's superior algorithm reigns supreme. Does Meta have the infrastructure to accommodate this influx of creators seeking a new home?
Beyond TikTok: The Looming AI Battleground
The ban wouldn't just affect TikTok. ByteDance, similar to Meta, has a pipeline of unreleased applications, some potentially in the AI and autonomous intelligence space. If these products are launched outside the US, the US loses access to the data and expertise that come with those launches. This creates a situation where Meta and its apps are banned in China, while ByteDance and other Chinese tech companies face similar bans in the US.
Détente or Escalation: The Need for a National Conversation
The lack of federal privacy legislation is a root cause of this situation. Had such legislation existed, national security concerns surrounding US user data could have been addressed before TikTok's rise. Now, we question how many chess pieces we have left before reaching Détente with China, all because the US couldn't establish federal privacy laws.
The Taiwan Question: A Broader Geopolitical Impact
The implications of this tech cold war extend beyond economic concerns. We must also consider the impact on Taiwan, a crucial geopolitical flashpoint.
--
Additional Resources on this topic:
Decoder with Nilay Patel: Why the TikTok ban won’t solve the US’s online privacy problems https://www.theverge.com/2024/4/25/24140320/why-the-tiktok-ban-wont-sol…
Pivot: TikTok Ban Approaches, China’s App Crackdown, and Guest Dana Mattioli https://open.spotify.com/episode/6gpwXDzL6sByHH4B98nFjY
Foreign Affairs: Kissinger and the True Meaning of Détente https://www.foreignaffairs.com/united-states/kissinger-and-true-meaning…
Senate Passes TikTok ban bill, sending it to President Biden’s desk https://www.theverge.com/2024/4/23/24137638/senate-passes-tiktok-ban-bi…
Is 2024 the Year US-China Tensions Finally Trip up Apple? https://www.bloomberg.com/news/articles/2024-01-03/us-china-tensions-po…
The Significance of the EU Election for the U.S.
Last week, I joined Katie Harbath on her podcast to talk about the EU elections and what we could take away as political and technology specialists. A week later, and a day before the first presidential debate, I’ve started thinking about what the results of the EU election really mean for us—if anything.
Key Takeaways: The DSA guidelines will be even more helpful for future European elections and are something the US should look to replicate for our own elections, either as a standalone effort or tied into the national security efforts currently focused on platforms and AI.
While there was a lot of speculation around the impact of AI on this particular election cycle, the types of misinformation campaigns were low-tech, focusing on comments, memes, and traditional forms of misdirection content. EDMO published a daily bulletin throughout the cycle, along with detailed reports.
There was a report published by Maldita, an IFCN signatory that focuses on disinformation, that showed platforms did not act against half of the misinformation about elections (I highly suggest reading the report). On the other hand, companies like OpenAI busted propaganda campaigns from Russian, Chinese, Iranian and Israeli groups leading up to the EU Election.
What can we learn as trust and safety technology specialists that can be applied to the US election:
War rooms, policy plans, and tooling should start to roll out at least six months before the actual election and remain in place for at least three months afterward. In the DSA, there are guidelines around when companies should start to implement their election plans and when those companies should cease tracking, etc. In the US, there are major inflection points that begin almost a year before the actual November election. Here’s a breakdown of political flashpoints for the US election (a small scheduling sketch appears below):
Super Tuesday
Presidential debates
Political party conventions
GOTV (Get Out The Vote period/early voting)
Election Day
Inauguration Day
These are the major points, just focused on the presidential level, not the other federal, state, and local races taking place in all 50 states. One thing I flagged is to either focus on the US writ large and have broad-based policies or create carve-outs and dedicate teams that understand the political targeting for disinformation for a handful of states. For the US, those would be the battleground states, including Wisconsin, Michigan, Pennsylvania, Nevada, Arizona, Georgia, and North Carolina. In the EU, it was France, Germany, Poland, Italy, Slovenia, and Malta.
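To make the rollout window concrete, here is a small sketch that computes when war-room coverage should be active under the six-months-before, three-months-after rule of thumb, and checks it against the presidential-level flashpoints listed above. The dates are the publicly announced 2024 ones; the function and structure are my own illustration, not any platform's actual tooling.

```python
from datetime import date, timedelta

ELECTION_DAY = date(2024, 11, 5)

# Major 2024 US flashpoints at the presidential level.
FLASHPOINTS = {
    "Super Tuesday": date(2024, 3, 5),
    "First presidential debate": date(2024, 6, 27),
    "Republican National Convention": date(2024, 7, 15),
    "Democratic National Convention": date(2024, 8, 19),
    "Election Day": ELECTION_DAY,
    "Inauguration Day": date(2025, 1, 20),
}

def war_room_window(election_day: date) -> tuple[date, date]:
    """Rule of thumb: stand up at least six months before the election and
    keep operations running for at least three months afterward."""
    return election_day - timedelta(days=182), election_day + timedelta(days=91)

start, end = war_room_window(ELECTION_DAY)
for name, day in FLASHPOINTS.items():
    covered = start <= day <= end
    print(f"{name}: {day} {'covered' if covered else 'OUTSIDE window - needs a separate plan'}")
```

Running this shows Super Tuesday falling outside a strict six-month window, which is exactly why the US inflection points need planning almost a year out rather than treating the window as fixed.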
Back to the DSA: we all know technology is a polarizing issue in Washington, which makes it hard for anything to pass in Congress or be signed into law by the executive unless it relates to China (i.e., the TikTok ban). While members of the Senate Judiciary Committee want to pass laws that focus on privacy and CSAM, there’s been little actual movement. But at the executive level, the Biden administration is viewing AI as a national security issue, which is a course correction from how social media platforms have been viewed. Whichever candidate returns to the White House in 2025, adopting election guidelines that target social media platforms (distribution platforms) and AI companies (content creation platforms), phased in before the 2028 presidential election, should be a top priority. That requires a clear understanding that one size does not fit all and that protecting election information integrity is not the same as regulating to prevent innovation; it is about creating norms and standards that should be adopted across all media entities that touch any election.
Lastly, on the conservative swing in the EU and the uphill battle to bring the bloc back towards the center, with representatives from the center-right and center-left at the Parliament, Council, and country levels: the issues people are worried about are very similar to the ones faced here in the US, and, as in the EU and specifically France, they didn’t appear overnight. Immigration, wages, healthcare, and investment in wars abroad rather than in infrastructure at home are the same concerns everywhere and have been since 2016. The othering, fear-mongering, and anti-identity politics are easy things to exploit online. Those topics trigger an emotional response, which makes misinformation campaigns around certain candidates and issue areas easier to digest and amplify. Labeling misinformation and watermarking AI-generated content is just the baseline of what technology companies can do. But it’s more than technology companies that need to do better and invest in operational capacity; political parties and lawmakers need to understand what they’re dealing with. It’s not good enough to rely on the tech-savvy person on the team to explain technology, nor is it enough to just play with cool tools. Partnerships need to be created between companies and lawmakers, politicians, political parties, and the people who drive the campaigns. Technology can be used for good; it can show people where and how to vote and how to participate in democratic processes, but both sides need to make an effort. As technology and our dependency on it continue to grow, it’s imperative to get this right, now.
We cannot blame the outcome of the 2024 election on AI or on distribution platforms if we don’t like the outcome. For bad actors, the EU election was a test run for the US, and even though leaders in certain countries are not happy with the results, i.e., France, Germany, Belgium, they are not blaming technology companies for their downfall—because the responsibility lies with governing. In this case, the DSA was the best thing that could have happened, even if the election guidelines were recommendations. The US is still the Wild West, with a knowledge gap, and we’re running out of time to fix it.
Signals from the EU Parliamentary Election
In early December, I started paying attention to the EU election for a mix of reasons: how technology companies would enhance their election integrity efforts to ensure compliance with the DSA, how AI companies were approaching the election, and the broader political implications we could glean from the results. For all intents and purposes, the EU election was the closest parallel to the upcoming U.S. General election in terms of political trends. My concern wasn’t necessarily around social media platforms allowing bad actors to influence election results, but rather the growing popularity of populism and far-right conservatism. This trend includes a mix of Euroscepticism, anti-immigration sentiments, limiting support for Ukraine, and moving away from the Green Deal. An underlying ‘culture war’ narrative has also slowly taken hold over the past decade. My worry is whether these narratives could lead to political violence and, if they do, who would be targeted.
Regarding disinformation campaigns, EDMO’s daily bulletin proved to be a valuable resource across all 22 languages. Surprisingly, there was a lack of disinformation campaigns about the actual voting process—the procedural aspects. Instead, disinformation narratives focused on the ‘culture war,’ country-level involvement in the Ukraine-Russian war, and confusion around EU-level legislation. The EU Commission excelled in creating a strong, fact-based counter-narrative about the voting process, making it straightforward and easy to follow for every member state. This clarity made it difficult for misinformation about voting procedures to spread quickly. However, such misinformation still traveled widely across X, Meta Platforms, and TikTok. While we await analysis on how AI was used to spread disinformation, the content (both written and media-generated) was not sophisticated enough to significantly impact the election.
What does this mean for upcoming geopolitical flashpoints?
France’s Snap Legislative Election on June 30th and July 7th
UK’s General Election (originally slated for Q4) on July 4th
US Presidential Debate on June 27th
It’s unclear.
EDMO relied on a network of fact-checkers to provide updates on disinformation campaigns, but the database was not robust enough to identify micro-trends. Moderating ‘culture war’ content is incredibly nuanced. While some narratives play across many countries, others are so specific that unless there is a focus on individual member states, they may go unnoticed.
Certain tech companies focused on the EU as one large voting bloc instead of individual member states. This approach led to an increase in disinformation campaigns in targeted countries (France, Germany, Poland, Italy, Netherlands, Slovakia). These companies might have missed the bigger picture by not focusing on countries that wield significant global influence. Yes, the EU votes as a bloc, but countries like France, Italy, and Germany are part of the G7, with France a permanent member of the United Nations Security Council. Meanwhile, Slovenia and Malta are serving two-year terms as non-permanent members, and Denmark and Greece start their two-year non-permanent terms on January 1st, 2025.
For the UK, there may be a swing to the left, with the Labour Party possibly coming back into power, ending the Tories' decade-long tenure. General malcontent towards globalization and global elites has also contributed to political shifts, reflecting a broader trend. This sentiment ties into the rise of authoritarian challenges in Russia, Iran, North Korea, and China.
Lastly, while the EU Parliamentary election has concluded, four key roles still need to be filled: EU Commission President, European Council President, European Parliament President, and EU Foreign Policy Chief. Although MEPs have been chosen and parties have seen gains or losses, leadership is still very much undecided. The next EU election test will be the disinformation campaigns targeting current President of the European Commission Ursula von der Leyen, President of the European Council Charles Michel, European Parliament President Roberta Metsola, and EU Foreign-Policy Chief Josep Borrell.
In terms of the lean towards the center-right and far-right, we’re witnessing a difficult time across Western nations with the rise of conservatism in both traditional and flawed democracies. India was able to steady itself by forcing Modi’s hand to form a coalition government after the BJP didn’t secure enough seats to control the legislature. The snap National Assembly election in France, in reaction to the EU election results, will truly test whether the public supports what the National Rally (a far-right political group) is selling. This could either be a smart play for President Macron’s Renaissance party or result in him becoming a lame-duck president as his term ends in 2027. Watching this announcement in real time on F24 and listening to commentators' shocked reactions was surreal. Therefore, I am closely watching the first round of French voting (June 30th), coming just days after the first U.S. debate, to forecast the future of democracy and identify potential disinformation campaign vulnerabilities in the fall.
——-
Links for additional reading
Elections in the Digital Age: Navigating Technology, Policy and Global Impact (https://www.mereprotest.co/insights/globalelections)
Global Elections Playbook: AI Edition (https://integrityinstitute.org/blog/global-elections-playbook-ai-edition)
We Worked on Election Integrity At Meta. The EU - And All Democracies - Need to Fix the Feed Before It’s Too late (https://www.techpolicy.press/we-worked-on-election-integrity-at-meta-the-eu-and-all-democracies-need-to-fix-the-feed-before-its-too-late/)
EU Parliamentary Elections Confirm Sharp Right Turn/Snap Elections Called in France 30th June and 7th July (https://www.linkedin.com/pulse/eu-parliamentary-elections-confirm-sharp-right-turnsnap-tina-fordham-snjgf/)
EU Election Results - Europe Swings to the Right - EU Confidential from POLITICO https://shows.acast.com/61a657ec79ae560013721d13/66663ae1ba6e1700125767e7
EDMO EU Elections Disinfo Bulletin (https://edmo.eu/thematic-areas/european-elections/eu-elections-disinfo-bulletin/#issue46)
Why Disinformation Campaigns are the Most Lethal Form of Modern Warfare
Terrorist organizations are rarely, if ever, fully defeated; they are merely degraded, usually only for a period of time before they reap death and destruction all over again, albeit with a new name or leader. Misinformation has the same potential.
I remember checking my work email in 2020 and reading the handover notes from my colleagues in Dublin that mentioned something about a ‘Russian disinformation campaign.’ My role at that time was as one of the ‘crew leads’ for the 2020 US Election War Room at Meta (formerly Facebook). The job entailed running every aspect of the war room related to the election — from understanding if policies and integrity tools needed updates or creations to dealing with real-time crises. This wasn’t the first time the words ‘Russian disinformation campaign’ had come across my email, but it was the first time we had to devise a plan to investigate and determine if it was a real disinformation campaign led by a cyber farm in Eastern Europe.
The process wasn’t cut and dry. It involved uncovering layers, FB + IG page and friend connections, account history, reviewing infractions, determining locations, assessing possibilities of fake accounts, and finally discerning if there was a web of people or just one individual posting false information intentionally. What I’m describing is similar to what investigative journalists do when reporting or what analysts in the intelligence community compile for tactical decisions. By conducting these deep dives into disparate bits of information, we could formulate a hypothesis that would later be supported by concrete data. At Meta, uncovering and analyzing information wasn’t a job for just one person or team; we pulled people from various teams to investigate accounts for hours before formulating a suitable hypothesis, drafting a write-up, and sending it up the chain for approval on next steps. This was the process to uncover CIB — Coordinated Inauthentic Behavior.
Handling information flagged as misinformation or disinformation was a process that involved labeling and demotion before the content was reviewed by 3PFCs (Third Party Fact Checkers), based on priority level (infraction and real-world harm potential). The review process could take anywhere from 3 to 12 hours; while we had a prioritization system, we couldn’t force content selection or control who reviewed the content. If content wasn’t in English, we often leveraged internal Meta staff for translations and context before sharing, to grasp the scope and potential threat. We had the responsibility to ensure that content was properly vetted and to decide whether it stayed on the platform or was removed, based on factors including Facebook’s internal Community Standards, consultations with our legal teams, and the necessary context around a post to gauge potential real-world harm. Finally, we examined the virality probability of the content based on the user’s footprint (there’s a significant difference between someone with 1,000 followers and someone with 50,000). The Russian disinformation campaign was more sophisticated than a politician or celebrity posting incorrect early voting dates or polling locations after last-minute changes by a Secretary of State. The tactics the war room used for content moderation decisions resemble how legislators draft new policies or how people make judgment calls in everyday life. Our mandate, however, was to prevent real-world harm related to the 2020 Election.
While I no longer work at Meta, I am currently a Resident Fellow at the Integrity Institute, a think-tank focused on integrity across social platforms. Last weekend, following the terrorist attack against Israel by Hamas, a political party and terrorist organization in Palestine, I joined a call with a foreign policy think-tank to grasp what was happening in real-time. I felt at ease listening to Middle East experts discuss events and predict outcomes. Then, when I logged onto Instagram, I saw my timeline flooded with videos of the attacks, photos of children in dire situations, and commentary on the imminent war by self-proclaimed Middle East experts. Most of what I heard on the policy call didn’t reach mainstream channels. Instead, opinions from non-experts, along with bombing clips from 2015, were circulated; Hamas wasn’t labeled as a terrorist organization, and many couldn’t even locate Gaza on a map. I posted an IG story about being cautious around misinformation and sharing unverified information from non-trusted news sources, then shared my own perspective on the developing war. In 2019, I received my MA in International Relations from NYU with a focus on National Security and Intelligence, specializing in the Middle East and the Psychology of Jihadism. Although I transitioned into tech post-graduation, what I was witnessing made perfect sense. My master’s thesis focused on how platforms created havens for terrorist and right-wing organizations to organize, fundraise, and disseminate propaganda, potentially causing significant harm. I spent the subsequent days absorbing information, participating in discussions, and reading about the rampant spread of disinformation on social media platforms.
Disinformation campaigns aren’t new. In fact, before the era of social media, yellow journalism swayed citizen opinions on wars, politics, and the economic states of various nations. Disinformation campaigns led to the communist witch hunts of the 1950s and, post 9/11, to the unjust targeting of Sikhs and Muslims due to mistaken associations between Al-Qaeda and all Muslims. These campaigns have been present in every major election, from questioning President Obama’s American citizenship to rumors of Hillary Clinton’s alleged involvement in a sex ring from a pizza shop basement in D.C. While these examples might seem ludicrous, that’s precisely the point — once someone believes something outrageous, almost any other claim seems feasible. The habit of seeking reliable sources, understanding post authors, and seeing the bigger picture diminishes. We stop using the investigative tools we’ve learned in daily life to critically assess what experts or the general public say. Disinformation campaigns can be orchestrated by authoritarian regimes, agenda-driven organizations, or ordinary citizens. They can begin with a single post or video that gets reshared across multiple platforms, spreading virally.
The war between Israel and Hamas (and by extension, Iran, Hezbollah, and any other country that supports Iran) is complex and squarely resides in the grey. This fact alone makes it easy for disinformation campaigns to thrive because there is no single source of truth. The ‘truth’ depends on the country you reside in, ideology, and religious leaning. Since 2009, following the IDF military offensive nicknamed Operation ‘Cast Lead’ in Gaza, which resulted in the death of 1,383 Palestinians, including 333 children, Israel has engaged in wartime exercises and containment of the Gaza Strip, and we’ve had a front row seat to the carnage. It’s unsurprising that videos from 2015 and 2020 surfaced, and that photos of building demolitions and missile firings were so prevalent. The content was already there, making it a perfect and easy weapon for organizations promoting disinformation to deploy, stoking confusion and outrage.
As the conflict deepens and evolves into an even more intricate geopolitical dispute than it has over the past several decades, the disinformation campaigns from all parties could lead to miscalculations, placing more people in harm’s way and ultimately resulting in untold death and destruction.
— — — — —
While experts agree that the amplification of misinformation can increase around critical events, the implementation of design changes on platforms can significantly reduce the spread of misinformation. In 2022, the Integrity Institute, as part of their elections misinformation effort, created a dashboard that tracked misinformation across large social media companies, illustrating the true impact of platform design choices on the amplification of misinformation. What I’m suggesting are not solutions that will be impossible to implement, but ones that complement and amplify the work that trust and safety specialists have done. These can bypass months of testing and internal deliberation. First, it’s important to understand the difference between misinformation and disinformation:
Misinformation is the unintentional sharing of untrue content.
Disinformation is the intentional sharing of untrue content.
It’s very easy for misinformation to evolve into a disinformation campaign. We know it’s possible because there are reports detailing how this amplification occurs, often with the support of media platforms seeking engagement and revenue.
Below are solutions that both large traditional platforms and smaller ones can implement to protect their users:
Create more friction to prevent the easy sharing and reposting of content. Less friction allows users to share and repost content without hesitation. For example, users must take more steps on Instagram than on X to share content (either by DMs, email, etc.), which has proven to slow down the sharing of misinformation. I understand that this inherently goes against the business model of social media companies, but making it harder for users to easily re-share suspected false information has its merits.
Remove engagement-focused content ranking and recommendation systems and replace them with ranking systems that favor accurate information over engaging content.
Agree on industry-wide standards for media provenance, including AI-generated or enhanced media.
Enable origin tracking of content (e.g., videos, articles, photos) to identify, flag, and remove content identified as misinformation if it has been recognized on any social platform.
Develop an industry-wide standard for labeling content to improve user understanding, and for removing content marked as misinformation by either users or AI.
Establish industry-wide notification mechanisms that alert users when content has been marked as potential misinformation and is under review by fact-checkers.
Notify content creators when their content is marked as misinformation to help raise awareness about the prevalence of misinformation.
Demote flagged content immediately based on its harm level (a rough triage sketch follows this list). For instance, if there’s potential for real-world harm (e.g., calls to violence), then the content should be removed until a thorough review of both the content and the profile sharing it is conducted.
Create industry-wide prioritization and real-world harm standards to ensure consistent tracking and removal of false content across all platforms.
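As an illustration of the harm-level demotion and prioritization described in the last two items, here is a minimal sketch of how a platform might score flagged content and decide between removal, demotion, and queued review. The harm categories, the follower threshold, and the action strings are my own assumptions, not an industry standard; a real system would be tuned, audited, and paired with human review.

```python
from dataclasses import dataclass
from enum import Enum

class Harm(Enum):
    CALL_TO_VIOLENCE = 3   # potential real-world harm: remove pending review
    VOTING_PROCEDURE = 2   # false dates or polling places: demote and fast-track
    GENERAL_POLITICAL = 1  # other flagged political claims: queue for fact-check

@dataclass
class FlaggedContent:
    content_id: str
    harm: Harm
    follower_count: int  # reach is part of the virality risk

def triage(item: FlaggedContent) -> str:
    """Combine harm level with reach to pick an action.
    Thresholds are illustrative only."""
    virality_boost = 1 if item.follower_count >= 50_000 else 0
    score = item.harm.value + virality_boost
    if item.harm is Harm.CALL_TO_VIOLENCE:
        return "remove pending human review"
    if score >= 3:
        return "demote immediately and fast-track to fact-checkers"
    return "queue for standard fact-check review"

print(triage(FlaggedContent("post-1", Harm.VOTING_PROCEDURE, follower_count=80_000)))
print(triage(FlaggedContent("post-2", Harm.GENERAL_POLITICAL, follower_count=1_000)))
```

The value of an industry-wide standard is that the categories and thresholds in a scheme like this would be shared across platforms, so the same false claim is handled consistently wherever it spreads.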
For users of social platforms, there are simple steps you can take to stop the spread of misinformation. However, it’s worth noting that while traditional forms of social media content are the most visible carriers of misinformation, disinformation campaigns are also being waged on encrypted channels, including WhatsApp and Telegram. Here’s what you can do to protect yourself and your friends from spreading misinformation:
Put more effort into fact-checking information, even from verified accounts on social platforms. Just because an account is verified doesn’t mean the user lacks malicious intent or that it’s a genuine account rather than a bot. By understanding the source and fact-checking information through a quick Google search or using certain tools, you come closer to grasping what you’re reading or viewing.
Trust traditional news sources that adhere to a code of conduct regarding accurate information. They will issue retractions for false information, but primarily, they post only verified data. The Logically app is another tool you can use to help identify false information online.
If you encounter content that seems false or misleading, there are tools on all major platforms allowing you to report such content. If this type of content frequently appears in your timelines, consider reposting it alongside factual information to warn other users of its inaccuracy.
Check what trusted sources are saying about a topic. If reputable news outlets or officials (e.g., government agencies) are posting similar content, it’s more likely that the information has been verified. The International Fact-Checking Network consistently publishes fact-checks related to viral content and is a great resource.
Refrain from sharing content that you suspect may be misleading or contain misinformation.
We are missing a prime opportunity in classrooms to teach critical thinking skills and media literacy. While focusing on classics like Shakespeare is important, teenagers are actively using social media. They need tools to distinguish fact from fiction and to employ critical thinking skills. This will help them draw their own conclusions and ask more questions when digesting information.
Lastly, civil society organizations and think-tanks have a responsibility to use resources from the integrity community to help push for and build smart policies and methods that we know work. As the world gears up for another round of global elections, what organizations and technology companies build together now will determine how free and fair democratic elections will be. There will always be illiberal governments controlling narratives, targeting journalists, and activists, but in societies that remain democratic in nature, there’s hope to prevent further erosion leading to dictatorship by stopping the spread of misinformation.
On January 6th, I was in upstate NY when suddenly my friend called for me to look at the television and my phone’s ringtone went haywire. For hours, I spoke with my team at Meta about how to respond to the crisis in real time, all while watching the coup unfold on the steps of the U.S. Capitol, fueled by a lie perpetuated and shared by large swaths of the Republican Party. Extremist groups, such as the Proud Boys, weaponized that lie, turning it into disinformation. They then used images from the coup to continue fundraising and to heighten tensions among U.S. citizens. Hamas is doing the same. Governing bodies, like Meta’s Oversight Board, exemplify what slow governance might look like in an ideal world where power dynamics and narratives don’t constantly shift. However, in the face of disinformation campaigns, we can’t afford to wait for deliberations made in a vacuum that take, on average, 3 months and then require additional time for implementation. We need solutions now.
As this war escalates and the world enters a global election cycle, advocates, regulatory leaders, and legislators cannot afford to wait. For citizens who consume social media, using these tactics to stop false narratives, which are eroding the social norms this world was founded on, is the only way to curb the deliberate creation and sharing of misinformation.
Warfare might seem like a strange term to describe what’s unfolding globally, but we are living against the backdrop of global wars — some fought by traditional means and others waged using technology. The IDF, along with countries that support Israel, is embarking on a lengthy endeavor to control the narrative while responding to Hamas. Hamas, described as ‘an entity as much a network, movement, and ideology as it is an organization where its leadership can be killed, but something akin to it will survive,’ parallels misinformation. The lie doesn’t vanish with the removal of the original content, but without that original reference point, much like terrorist organizations, it can fade and lose relevance over time. Terrorist organizations are rarely, if ever, fully defeated; they are merely degraded, usually only temporarily, before they wreak havoc once more, often under a new name or leader. Misinformation carries the same potential. As Israel strives to eliminate Hamas and prevent a similar organization from emerging in the region, the integrity community should urge technology companies to implement these small yet significant changes before it’s too late.
I know firsthand the power of disinformation campaigns. Fighting them in traditional media (print and news) is challenging but has been achieved due to the diligence of journalists and fact-checkers. Combating disinformation through technology domestically is tough and merits precision. Yet, on a global scale, to prevent further violence, doing what’s difficult is essential, even if we don’t succeed the first time. As technology advances without the proper foundation across the industry, we risk becoming victims, unable to distinguish truth from fiction.
Why We Should Stop Blaming Technology for the Erosion of Democracy
The past 18 months have felt like a constant stream of one warning: “Elections will be overrun by misinformation and AI,” particularly for traditional and flawed democracies. Over the past 6 months, specific to the U.S. election, I have tracked misinformation claims in various forms, used tools to investigate whether content was AI-generated and improperly labeled, and watched as tech leaders amplified divisive narratives in ways that not only defied expectations but were operationally almost impossible to combat in real time. Yet focusing solely on technology as the cause overlooks deeper issues, including political extremism, polarization, and the erosion of public trust—factors that existed before the rise of social media and artificial intelligence.
Prior to the election, sociologist Larry Diamond observed in Foreign Affairs, “political extremism, polarization, and distrust have been on the rise even in long-established liberal democracies.” This trend is amplified by technology’s deep integration into politics, with social media and AI now fueling authoritarianism through surveillance, disinformation, and polarization. While I don’t believe technology’s role is inherently negative, if democracies on the edge of illiberalism don’t address extremism and misinformation, both online and offline, we risk becoming no better than our adversaries in the long run.
Throughout the past year, I’ve spoken publicly on misinformation and global elections, having kicked off the year working on the Indonesian election, followed by (in no particular order) the Indian, EU, UK, South African, and Turkish elections. My cautionary note has been that we can’t solely blame democracy’s challenges on the lack of guardrails or enforcement by technology companies. Disinformation warfare is a form of psychological manipulation that isn’t easily countered by last-minute media literacy campaigns, fact-checking, or even LLM system audits. False narratives, especially those entrenched for years, can’t be easily dispelled through memes, influencer campaigns, or fact-checked posts, especially when fact-checking is perceived as partisan and influencers are being paid by campaigns and foreign adversaries to spread misinformation.
An unexpected factor in this election cycle was the outsized influence of Elon Musk. His weaponization of X to amplify misinformation, relentless attacks against outspoken opponents of President-Elect Trump, and his funding of nonprofits creating "dark PACs" like Progress 2028 intensified the spread of falsehoods. Musk’s disregard for removing harmful misinformation created a significant setback to the tireless efforts of election integrity professionals, many of whom have worked months—and sometimes years—to safeguard the election process.
Yet, despite these challenges, progress was made. The US Intelligence Community, through CISA and the FBI, shared intel on foreign interference from Iran and Russia. On a regular cadence, Microsoft released reports on election interference with specific examples of misinformation campaigns. Media and fact-checking organizations, including Wired, PolitiFact, Snopes, and FactCheck.org, launched real-time fact-checking initiatives, often in multiple languages. Global news outlets focused on U.S. elections and increased public education on misinformation. And while we saw a sharp rise in sexist content targeting Vice President Harris and other female leaders, there was extensive media coverage of these harmful narratives, which allowed them to be scrutinized publicly.
This will be my sixth election cycle (presidential and midterm) in the U.S., and it’s probably the most important one in recent history due to how information has been shared and how infrastructure vulnerabilities have been exploited by foreign actors. There will be post-mortems, finger-pointing, and calls for changes to our election process, but right now, we’re still very much in the middle of a disinformation storm. The potential for political violence—and violence targeting people within protected classes—is growing and will likely continue throughout the year. After we all take a long and well-deserved break, we’ll also need to start understanding what this administration’s policies around technology will look like. Will all executive guidance on the responsible use of AI be thrown out the window? Will we finally see privacy legislation focused on platforms? And will TikTok continue to operate in the U.S. in 2025? These are questions that need answers, though we all need a break to see the forest for the trees. But if you’re like me and don’t believe in rest, here’s an outline of steps we should start looking toward to protect elections and everyday citizens during this transition into the Trump administration.
Expanding Regulatory Efforts for Election Integrity and Creating Guardrails that Work
Countries worldwide are increasingly implementing regulatory frameworks to protect election integrity, focusing on mitigating misinformation, ensuring transparency in political advertising, and regulating AI’s influence on digital platforms. These regulations aim to safeguard democratic processes and limit the manipulation of public opinion during elections, offering best practices that current legislators and government officials can adopt. A comprehensive technology reform is essential to promote innovation with appropriate guardrails. The EU’s Digital Services Act election guidelines, for instance, helped create a calmer environment for the recent Parliamentary elections. Other examples include Canada’s election advertising transparency regulations and Australia’s Electoral Commission’s expanded oversight to combat digital disinformation. The U.S. has an opportunity to develop its own framework before the 2028 presidential election. Rather than viewing the U.S. as an innovator and Europe as a regulator, we should pursue an adaptable model that incorporates the best practices from each.
European Union countries, under the Digital Services Act (DSA), have adopted a robust framework to create a safer digital ecosystem. The DSA places specific obligations on large online platforms to prevent the spread of illegal content, including election misinformation, and mandates clear guidelines on the transparency of political ads. By establishing guidelines for risk mitigation and requiring platforms to provide transparency in digital advertising, the DSA aims to combat misinformation more effectively.
Canada implemented the Elections Modernization Act (Bill C-76) in 2019, which mandates transparency for third-party election advertising, requires identification on partisan ads, and imposes spending limits on political activities leading up to federal elections. The legislation also includes mechanisms to ensure that digital platforms are accountable for any political content they host, reinforcing a fair and transparent election process.
Australia has taken a proactive approach with its Disinformation Register, managed by the Australian Electoral Commission (AEC). This initiative addresses false information surrounding election processes by debunking misleading claims and reinforcing trust in election integrity. Additionally, Australia requires explicit authorizations on election-related communications, encouraging voters to verify the sources of information they encounter.
Clear Steps for Technology Companies
To build these guardrails, our favorite technology companies must play a proactive role. Without their buy-in, and without the Trust and Safety + Integrity communities pushing them internally and externally, none of the necessary work will happen. The heads of every platform got very lucky because public attention shifted toward X, leaving less focus on how AI and platforms played a role in amplifying election-related misinformation. Here are three critical steps companies can take now to safeguard future elections (a small illustrative sketch of the transparency step follows the list):
Strengthen Transparency: Establish publicly accessible archives of political ads and content flagged as misinformation, and make the criteria for labeling or removing content accessible to researchers and the public.
Expand Real-Time Fact-Checking: Implement multilingual, AI-driven fact-checking mechanisms that can handle the sheer volume of claims during high-stakes periods, including collaborations with trusted non-partisan partners.
Combat Hate Speech More Proactively: Platforms should adopt clearer, enforced policies to remove hate speech targeting protected groups, especially during election cycles when such rhetoric often increases.
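To make the transparency step more concrete, here is a minimal, hypothetical sketch of what one record in a publicly accessible archive of flagged content or political ads could look like. The field names, values, and structure are my own illustrative assumptions, not any platform’s actual schema.

# Hypothetical sketch of a public transparency-archive record for flagged
# content or political ads; all names and fields are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ArchiveRecord:
    content_id: str                    # platform identifier for the post or ad
    content_type: str                  # e.g. "political_ad" or "organic_post"
    action_taken: str                  # e.g. "labeled", "removed", "demoted"
    policy_cited: str                  # the public policy the action was taken under
    languages: list[str] = field(default_factory=list)
    fact_check_urls: list[str] = field(default_factory=list)  # links to partner fact-checks
    archived_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def to_public_json(record: ArchiveRecord) -> str:
    """Serialize a record for a researcher-facing archive feed."""
    return json.dumps(asdict(record), indent=2)

if __name__ == "__main__":
    example = ArchiveRecord(
        content_id="example-123",
        content_type="political_ad",
        action_taken="labeled",
        policy_cited="election-misinformation",
        languages=["en", "es"],
        fact_check_urls=["https://example.org/fact-check/123"],  # placeholder URL
    )
    print(to_public_json(example))

Publishing records in a machine-readable form like this is what would make labeling and removal criteria auditable by researchers and the public, rather than something taken on faith.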
Empowering Non-Partisan Stakeholders in Tech and Policy
For those in tech and policy, especially non-partisan members of organizations, your voice and actions matter now more than ever. Here’s how you can help:
Advocate for Reform: Engage with local, state, and federal policymakers to share best practices and experiences from both U.S. and international elections. Encourage the creation of policy frameworks that address misinformation without stifling free speech.
Support Cross-Border Collaboration: Partner with international counterparts to exchange insights on successful interventions from the EU, Canada, Australia, and others. This global perspective can help the U.S. adopt the most effective and democratic tools.
Our democracy deserves stronger guardrails to prevent the weaponization of technology. Together, we can foster innovation while safeguarding the foundational values of our society.
[This article was originally published on November 14, 2024, by the Author for All Tech is Human, where she is the Senior Fellow for Information Integrity.]
Meta Policy Changes: Understanding Fact-Checking
Yesterday, Meta’s CEO and Founder, Mark Zuckerberg, announced massive rollbacks to the integrity programs that keep users safe across the three most popular applications in the U.S.: Facebook, Instagram, and Threads. The announcement came as a shock to many, but for some—like myself, who worked at Meta on both the Operational (Regulatory Escalations) and Governance (Global Governance of Meta Applications) sides—this was little surprise. The timing, announced less than a day after the 2024 election results were certified and a Republican Washington insider and Meta policy veteran was appointed as the President of Global Public Policy, was all a bit uncanny.
Yes, all of these things happened within a short window, but the policy changes Mark announced on Tuesday were years in the making. A few thoughts:
Companies are not democracies. As someone who worked in the U.S. Senate, I know it’s easy to forget that, at the end of the day, companies focus on revenue, while nation-states should focus on the well-being of their constituents—which, for companies like Meta, are consumers of their products. Content moderation is necessary for high-priority risk areas, but specific policies around hate speech, bullying, and harassment for protected classes are “nice to have” unless they could lead to real-world harm.
While these changes are only going into effect for three applications, it’s worth noting that WhatsApp, MetaAI, and Meta hardware (Ray-Bans and Oculus headsets) have potential for policy changes down the line that may or may not be covered by current global regulation. In the U.S., there’s still the Executive Order on AI, and in the EU, there’s the EU AI Act, along with emerging rules in a handful of non-Western countries. Everyone should keep an eye on the policy changes that may subtly appear over the upcoming months.
What’s most important is the signal these policy rollbacks send to other technology companies like Alphabet, TikTok, BlueSky, Snapchat, Discord, OpenAI, and emerging smaller platforms that people have integrated into their daily lives. More importantly, how will civil society, academia, and liberal democracies (and, in some cases, authoritarian ones that have strict privacy laws) circle their wagons to ensure the global community is safe from online harms?
Let’s break down what these policy changes mean for the U.S. and for countries that already have regulation in place to protect users—unlike in the U.S., where consumer protection laws don’t cover social media platforms.
Today, I’ll cover the update that’s on the top of everyone’s mind: the dissolution of the 3PFC for the U.S. market.
#1. Replace fact-checkers with Community Notes, starting in the U.S. (aka disbanding the Third-Party Fact-Checking Program, or 3PFC, in the U.S.)
What is actually happening.
The U.S. leg of the 3PFC is being deprecated starting this week. Fact-checking organizations, including PolitiFact, have stated that they will continue to do the important fact-checking work, but the working relationship they have with Meta will no longer continue. As of today, this only impacts U.S. partners, not global partners.
What everyone is concerned about.
That this is the end of Meta taking mis- and disinformation seriously. Which is fair—this is a turning point. But here’s the thing: the program has experienced an uphill battle since its launch. Additionally, the U.S. has one of the better-funded fact-checking programs; what’s at risk is the deprecation of the entire global program and the rollback of fact-checking interventions once content is live on the platform in countries outside the EU (which has misinformation guardrails baked into the DSA).
Interventions and protections.
The 3PFC program was good but flawed. Community Notes is very flawed and not a place where real fact-checking exists. There isn’t a binary answer or a one-step solution—misinformation needs to be tackled on two ends of the spectrum (a rough sketch of this two-stage flow follows the list):
A: Before it’s even posted online, handled by high-probability classifiers trained on data collected from a number of sources.
B: After it’s posted. If content isn’t automatically flagged and prevented from being uploaded because it didn’t reach the probability threshold, that’s where third-party fact-checkers come in. Flagged content gets routed to a system where fact-checkers can leave verified information, which then appears as part of an interstitial overlay on top of the content, giving users the option to learn more. This is crucial for people to understand and see that content is false and draw their own conclusions.
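As a rough illustration of that two-stage flow, here is a minimal, hypothetical sketch in Python. The thresholds, the stand-in classifier, and the review queue are assumptions I’ve invented for clarity; they do not describe Meta’s actual systems.

# Hypothetical sketch of the two-stage flow described above: (A) block
# high-probability misinformation before upload, (B) route borderline content
# to fact-checkers whose verified notes become interstitial overlays.
from dataclasses import dataclass
from typing import Optional

BLOCK_THRESHOLD = 0.90   # assumed probability above which an upload is rejected
REVIEW_THRESHOLD = 0.50  # assumed probability above which content goes to fact-checkers

@dataclass
class Post:
    post_id: str
    text: str
    interstitial: Optional[str] = None  # fact-checker note overlaid on the content, if any

def misinformation_score(post: Post) -> float:
    """Stand-in for a trained classifier; returns a probability in [0, 1]."""
    # A real system would call a model trained on partner and watchdog data.
    text = post.text.lower()
    if "the election was stolen" in text:
        return 0.95
    if "polling places are closing early" in text:
        return 0.60
    return 0.10

def handle_upload(post: Post, review_queue: list) -> bool:
    """Stage A: decide whether the content can be posted at all."""
    score = misinformation_score(post)
    if score >= BLOCK_THRESHOLD:
        return False                # rejected before it ever goes live
    if score >= REVIEW_THRESHOLD:
        review_queue.append(post)   # Stage B: queued for third-party fact-checking
    return True                     # posted, possibly with a later interstitial

def attach_fact_check(post: Post, note: str) -> None:
    """Stage B: a fact-checker's verified note becomes an interstitial overlay."""
    post.interstitial = note

if __name__ == "__main__":
    queue = []
    blocked = Post("1", "Breaking: the election was stolen!")
    borderline = Post("2", "Polling places are closing early everywhere!")
    print(handle_upload(blocked, queue))     # False: blocked at Stage A
    print(handle_upload(borderline, queue))  # True: posted, routed to fact-checkers
    attach_fact_check(queue[0], "Official polling hours are unchanged; check your state SOS site.")
    print(queue[0].interstitial)

The point of the sketch is the split itself: automated blocking only catches what a classifier is confident about, so everything in the middle depends on human fact-checkers, which is exactly the layer the 3PFC rollback removes.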
In a world where there is no more 3PFC and we rely on Community Notes instead, it will be much harder to distinguish factual information from what is posted by fake users or bots seeking to sow and amplify misinformation. There’s too much left to chance, and without proper content moderation, this is how false narratives spread. Unless a user follows a fact-checking account or that information is amplified by trustworthy sources, misinformation will stay online and false, dangerous narratives can and will spread. This is not a freedom of speech issue—this is a modern warfare issue that needs to be attacked on all sides. Without transparency into how information is gathered and pulled into the classifier system or about probability scoring, we are in for a wild ride. Misinformation warfare is the battlefield we’re currently in.
Like everyone else, I’m curious to see what the Oversight Board intends to do with this new development. Are they going to step in and provide fact-checking services for the U.S.? Will they advocate for the program to be strengthened in high-risk countries? Will the human rights defenders on the Board push Meta to invest in better solutions for fact-checking and misinformation handling across all issue areas—including elections, health care, terrorism, and misinformation around protected classes?
Next week, I’ll distill what global lobbying by the Trump Administration on behalf of companies like Meta looks like and how there are a series of levers that can be pulled related to trade, as the administration works to pull back “government overreach on censorship” of American companies. This is a complex dance that goes beyond social media platforms and includes AI companies as well.
[Originally published on January 8, 2025, on the Council on Foreign Relations Member Wall]