How the media in Taiwan adapted to combat election misinformation


As several countries prepare for an onslaught of local and foreign digital disinformation campaigns in their 2024 elections, Taiwan’s recent experience offers useful lessons for journalists and democracy advocates around the world, as well as a much-needed dose of hope.

Although the January 13 election was hit by a barrage of online misinformation, notably allegations of voter fraud and extreme warnings of a future war with China, new research and independent journalistic reporting reveal that local media, election officials, and fact-checkers in Taiwan largely succeeded in containing the attacks, using techniques such as early unmasking, smart regulation of communications, and a deliberate focus on trust in the media. After the vote, Lai Ching-te, head of the Democratic Progressive Party, a pro-sovereignty party opposed by China’s autocratic communist government, was elected president.

But the election in Taiwan also revealed trends in misinformation for journalists and fact-checkers in other countries to watch. The patterns include the use of generative AI in deepfakes, heightened propaganda by popular YouTube influencers, and foreign operations narratives designed to undermine trust in democracy rather than promote candidates.

In the webinar Disinformation and AI: What We’ve Learned So Far from Elections 2024, organized by the Thomson Foundation, three experts shared lessons from Taiwan and insights for journalists facing ongoing threats from disinformation.

Among the participants were Professor Chen-Ling Hung, director of the Institute of Journalism at National Taiwan University; electoral expert Rasto Kuzel, executive director of MEMO 98; and Jiore Craig, fellow at the Institute for Strategic Dialogue. The session was moderated by Caro Kriel, chief executive of the Thomson Foundation.

Use CrowdTangle’s Link Checker While You Can

Reporters should take advantage of the final months of CrowdTangle’s powerful link checker, which shows public Facebook and Instagram posts that share a given link, whether it points to an article, a YouTube video, or content on other platforms.

In contrast to the encouraging findings on Taiwan’s resilience, a sobering moment in the panel was the participants’ dismay at the closure of CrowdTangle, announced by Meta and scheduled for August 14, 2024. The tool has been the best digital investigative resource reporters have had in recent years to track and monitor online disinformation campaigns around the world.

“The organization where I work has benefited from access to CrowdTangle and we are very concerned about what will happen after August when Meta discontinues the tool, which has been absolutely crucial to our research,” Kuzel explained. “What CrowdTangle provided was access to public accounts and groups, and that’s something we’re going to continue to need.”

However, Craig encouraged reporters to treat this disappointing shutdown as an investigative deadline: use the tool intensively while it lasts to monitor current or emerging disinformation campaigns around any 2024 election.

“It’s an amazing tool, and one of the only ones we had that helped with attribution work,” Craig said of the link checking feature. “So install the tool while you can and research the attribution of any online property or news source you have questions about.”
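Before the shutdown, the link checker could also be queried programmatically through CrowdTangle’s Links endpoint. Below is a minimal Python sketch of how a reporter might build and run such a query; it assumes a valid dashboard API token (the token value here is a placeholder), and the endpoint, parameters, and response fields follow CrowdTangle’s public API documentation at the time of writing, so treat it as illustrative rather than definitive:

```python
import json
import urllib.parse
import urllib.request

# CrowdTangle's Links endpoint (available until the announced August 14, 2024 shutdown).
API_BASE = "https://api.crowdtangle.com/links"

def build_link_check_url(article_url: str, token: str, count: int = 100) -> str:
    """Build a /links query that returns public Facebook and Instagram
    posts sharing `article_url`. `token` comes from a CrowdTangle dashboard."""
    params = urllib.parse.urlencode({
        "link": article_url,   # the URL being traced across platforms
        "token": token,        # dashboard API token (placeholder in examples)
        "count": count,        # number of posts to return
        "sortBy": "date",      # newest shares first
    })
    return f"{API_BASE}?{params}"

def fetch_sharing_posts(article_url: str, token: str) -> list:
    """Fetch the list of public posts that shared the URL.
    Assumes the documented response shape: {"result": {"posts": [...]}}."""
    with urllib.request.urlopen(build_link_check_url(article_url, token)) as resp:
        data = json.load(resp)
    return data.get("result", {}).get("posts", [])
```

A reporter investigating attribution would scan the returned posts for the earliest sharers and the accounts driving the most interactions, which is the kind of work Craig describes above.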

Reporters can still find useful data in Meta’s content library and clues about disinformation in new tools created for journalists, such as Junkipedia, which has access to various social media platforms, suspicious websites, and numerous other resources. However, Kuzel warned that “the unavailability of data following Meta’s decision will also affect Junkipedia.”

(For a detailed explanation of new election disinformation threats and the tools to debunk them, read the chapter on disinformation and political communication in the Revised Elections Guide for Investigative Reporters. To analyze audio deepfakes, read GIJN’s comprehensive material on the topic, How to identify and investigate audio deepfakes, one of the main threats to the 2024 elections.)

Taiwan public media’s unified response to foreign influence operations

A report on combating disinformation in Taiwan’s most recent elections commissioned by the Thomson Foundation, and co-authored by Chen-Ling Hung and three colleagues, identified both coordinated digital attacks and a largely unified response to this wave of propaganda.

The report noted that “as the election period in Taiwan approaches, messages from troll groups have become increasingly alarmist, focusing on war rhetoric aimed at intimidating the Taiwanese people.” According to research by Taiwan AI Labs, among messages spreading Chinese state media propaganda, “content portraying an imminent Chinese military threat was prevalent, accounting for 25% of publications. In second place, making up 14.3% of the discourse, were narratives suggesting that the US was manipulating Taiwan towards a precarious military confrontation.”

Other types of disinformation, possibly foreign in origin, also included personal slander, including false claims about a candidate’s “illegitimate son” and the release of a 300-page ebook with false claims about the current president and allegedly inappropriate sexual behavior.

At first, the ebook confused Taiwanese researchers because, as Tim Niven of DoubleThink Lab told Foreign Policy magazine, “This is the age of social media. No one is going to read a book that looks like spam sent by email.”

But researchers quickly saw that the ebook served as “a script for generative AI videos” and was used as a supposedly legitimate source for disinformation campaigns. The result was videos on Instagram and TikTok featuring AI-generated avatars of news anchors and influencers who cited the book as a source of apparent authority and read short excerpts from the text.

The election research included interviews with five major news outlets about their strategies to counter misinformation: Taiwan Public TV Service (PTS), Central News Agency (CNA), Radio Taiwan International (RTI), Formosa Television (FTV), and TVBS Media. The authors stated that public media tended to collaborate with fact-checking organizations and “demonstrated a strong commitment to authentic news, using advanced technology and various strategies to identify and debunk misinformation.” They noted that some commercial outlets “had more difficulty with disinformation than public media due to political biases and profit motives,” but that overall outlets ran internal verification efforts and applied traditional journalistic inquiry to verify information, particularly audio and video content.

The report quoted CNA’s chief editor as saying that “posts on TikTok, which is considered deeply influenced by the Chinese government, were hardly cited in CNA’s coverage.”

In general, Taiwanese fact-checkers responded quickly to suspicious allegations, but sometimes not quickly enough. “What caught my attention in the report was an example of an AI deepfake in the election where it took them seven days to come up with a definitive answer,” noted Kriel. “That’s really wasted time, and that’s when conspiracy theories thrive.”

“We have also seen deepfakes in other countries and contexts, from Pakistan to the United Kingdom, Slovakia, the United States and elsewhere, and also outside the electoral context — for example, in Sudan, to advance the goals of people involved in the country’s civil war,” Craig explained. “This is making an already bad online threat landscape even worse.” (See GIJN’s recent webinar on election investigation and threats from AI-powered audio deepfakes.)

Despite some weaknesses and limited newsroom resources, Hung said early communication and concerted efforts across sectors have largely protected Taiwan’s election information environment from being overrun by false and distorted information.

“Cross-sector cooperation is needed to combat disinformation — fact-checkers, governments, traditional media, digital platforms and civil society,” Hung explained. “Advanced techniques and tools have been used to combat disinformation, and some organizations have invested in techniques to address AI disinformation, but this is not enough; we need to invest more.”

Hung said readers were generally prepared to identify false information and misleading methods thanks to early alerts from the media and civil society in the weeks and months before voting day, which created a “gradual improvement in the Taiwanese population’s awareness of and vigilance against disinformation.”

Why is trust in the media the best defense?

“The Taiwan example is a great demonstration of the power of trusted communicators in responding to AI-generated disinformation threats,” said Craig. “The outlet or any communicator who gains the trust of the audience has the opportunity to make more impactful choices when there is an attack of misinformation.”

She added: “For me, earning trust means both outreach and transparency, as well as prioritizing a voter audience over a target audience in order to reach voters where they are consuming information in 2024: short formats instead of long ones, radio, podcasts, and so on.”

Participants noted that consistent accuracy in other election reporting is essential to gaining the credibility needed to debunk disinformation efforts in “hot spots” — the run-up to voter registration deadlines and voting day itself, when the threat of deepfakes and incendiary claims is most acute.

Rather than trying to create political converts, Kuzel said the goal of most election disinformation campaigns in 2024 was to encourage widespread lack of political engagement and “kill activism.”

Craig agreed. “As bad actors try to break our trust, we become insecure and that makes us emotional, and then we get tired,” she explained. “And when we get tired, we are more easily controlled. It takes us to a place of disinterest.”

Kuzel said that anticipating, identifying and debunking false and damaging election information in advance is doable because bad actors reveal their intentions months before the election cycle actually begins. Recognizing and formulating responses to misleading content is essential, because otherwise embryonic threads of misinformation can take root.

“What we saw in the 2020 US elections was that manipulative narratives began to circulate a year before the elections, with Trump saying that there could be manipulation in voting by mail; and then he escalated this by not recognizing the results,” Kuzel noted. “We need to understand the threats and immunize the public against them in future campaigns.”


Photo by Lisanto 李奕良 via Unsplash.

This article was originally published on GIJN and republished on IJNet with permission.
