Parliamentary Speech on ELIONA – 15th October 2024 – Speech by He Ting Ru

Mr Speaker

Deepfakes, particularly malicious ones, pose a serious threat to our democratic processes, especially during elections. While exciting, technological advances in the field of generative AI bring new challenges in maintaining the integrity of our political and electoral system. The proliferation of highly realistic yet fabricated content – especially in the digital realm – poses a risk to our elections and, if we are not careful, will shake the trust citizens have in the democratic process here in Singapore.

It bears reiterating that deepfakes and digitally altered content are a very real and present danger to democracy. As the Minister mentioned earlier, the 2023 Slovak parliamentary elections showed how deepfakes can be used maliciously to sway the results of an election. Just two days before the elections, during the equivalent of our ‘cooling off day’, a fake audio clip surfaced, said to be a recording of pro-European candidate Michal Šimečka discussing electoral fraud with a prominent journalist. Both quickly denied its authenticity, but the clip went viral. Its impact was amplified by its release during the election’s “silence period”, during which the media are prohibited from discussing election-related developments. In that election, the pro-Russia candidate, Robert Fico, ultimately won, which naturally led to speculation about whether the deepfake contributed to Šimečka’s loss, given that he had been polling stronger than the ultimate victor. While political scientists on the whole concluded that the deepfake alone did not cause Šimečka to lose, the very speculation caused by its going viral laid bare how dangerous it is for a democracy to operate in an environment of low trust in public institutions, with a population prone to believing conspiracy theories.

WP therefore supports the introduction of the measures contained in this amendment Bill to combat the threat of digitally manipulated media, although I wish to raise two main areas of concern and to seek further clarification from the Minister on several technical points.

Exemptions for Authorised News Agencies

First, the new section 42L(4) contains a number of carve-outs from the ban on manipulated content in online election advertising during the election period, including for authorised news agencies. The reason given is to allow factual reporting. However, this is not sufficient reason to exempt these actors, as factual reporting should not require reproduction of the prohibited material. Indeed, consider a concerning scenario: authorised news agencies, when reporting on prohibited content, might inadvertently spread misinformation. In our attention-deficit world, many readers skim headlines and images without carefully reading the full article or captions. This creates a risk that such content, even when presented as part of factual reporting, could be mistaken for genuine content and go viral as “real news”. Thus, the very act of reporting by reproducing these prohibited materials might unintentionally amplify their reach and impact.

Our disquiet over creating such a two-tier media landscape leads to the question of how we can ensure that media entities exempt from the prohibitions of the Act do not publish such content without consequence. What mechanisms will be in place to hold these outlets accountable if they do publish or propagate prohibited content, whether intentionally or unintentionally? More specifically, does the Minister believe that the existing codes of practice governing authorised news agencies are sufficient to address the concerns raised above, or would further updates be needed to combat the unique risks associated with digitally manipulated content and deepfakes? Would there also be new codes or updates to the existing codes of practice – such as the promised new code of conduct which MDDI has said would be published to ensure social media companies do more to moderate content – and when is the expected publication date?

Carve-Out of Private Spaces

Second, the Bill exempts private or domestic communications: the new section 61MA(4) exempts private or domestic electronic communications between two or more individuals from the regulations. While we acknowledge the intent to protect personal privacy, we hope that this exemption does not become a Trojan horse used to overcome the Bill’s defences against disinformation, because disinformation often spreads rapidly through private channels. It is also no secret that modern communication platforms have blurred the lines between private and public spaces. What would be the standing of spaces such as private Facebook groups, private Telegram channels, locked Facebook profiles, or messaging group chats? Would whether a channel is “private” hinge on whether the people in the group know one another?

It is important to have clarity, as academics have found emerging evidence that propagandists increasingly exploit applications such as WhatsApp and Telegram – preying on their popularity, loose moderation policies and the trust within private networks. In Slovakia, the example I raised earlier, Telegram has become a haven for pro-Russian propaganda, and the deepfake of Šimečka spread widely on pro-Fico Telegram channels ahead of the election.

In view of this, can the Minister clarify how the exemption for private messages would address the risks associated with widespread disinformation spreading through these channels? What are the criteria to be used to determine whether or not a specific communication is private and therefore exempt from the prohibitions contained in the Bill? 

Other Clarifications

Aside from these, I seek clarifications in three broad areas: first, the scoping of the prohibitions; second, the reporting and investigation of alleged offences; and third, the potential misuse of the regime.

On the scoping of the prohibitions, I note that the prohibitions and offences apply only during election periods and are confined to Singapore. While this scoping necessarily follows from the Acts being amended, which govern our two types of elections in Singapore, how will prohibited content aimed at influencing political sentiment be treated when we are not in an official election period? Deceptive information may begin swaying public opinion well before an election is formally announced, especially as the potential window for calling a general election narrows over time.

After all, foreign disinformation groups are known to wage persistent, year-round disinformation campaigns to influence political outcomes. For example, the Government of Canada detected a Chinese ‘Spamouflage’ campaign that targeted various Canadian MPs, including the Prime Minister, the Leader of the Opposition and members of the Cabinet. The aim? To discredit and denigrate the targeted MPs by questioning their political and ethical standards, using deepfake videos and fake social media profiles. Canada’s Rapid Response Mechanism alerted the affected MPs, who were provided with advice and support on how to protect themselves from the campaign.

While the punishments outlined in this Bill are meant to act as strong deterrents, they do not fully address threats from those operating outside Singapore’s jurisdiction. How then will we effectively combat the risks of a foreign-coordinated campaign using prohibited content such as deepfakes?

Next, moving to the reporting and investigation of alleged offences.

Given that members of the public can report alleged prohibited content, what investigative capacity exists to look into such claims, and where does it sit? Who undertakes the investigation, and how long would it take to complete before any further action is taken? What resources – both manpower and otherwise – would be available to the Returning Officer (RO) and the Elections Department (ELD) to make the relevant decisions and take enforcement action? After all, Singapore is somewhat unique in having a very short campaign period, which, combined with the rapid spread of digital information, makes it all the more imperative that decisions on claims be made quickly.

Also, what happens after an offence is reported and a decision is made to issue a corrective order? Would the RO simultaneously ask both the poster and the platform to take the content down? What happens if there is a refusal to comply with the order? Further, for platforms, is the maximum fine of $1 million sufficient to compel giants such as Meta and TikTok to comply?

Would the Minister also be able to elaborate on the appeals process for those who disagree with a corrective order? This is particularly important, as technology has reached the stage where even experts disagree about whether a piece of content is real or doctored.

Finally, moving to the potential abuse of this process, particularly given our very short official election period.

While we often view deepfakes as malignant and harmful, elections such as the recently concluded Indian national elections have seen instances where generative AI and deepfake technology were used to manipulate videos of candidates in ways that benefit them. A classic example is the use of deepfake technology to show candidates speaking in languages or dialects that they do not themselves speak, in a misleading effort to endear themselves to certain segments of the electorate. Would such cases of “positive deepfakes” fall within the scope of the prohibition?

Also, given that anyone can make a report, what is the penalty for a member of the public who makes a false report? And how will this be communicated so that members of the public do not spam reports as an act of mischief?

Education as Inoculation

Finally, while we have focused a lot today on the potential harms and dangers that deepfakes may pose to democratic processes, some researchers have also warned against being overly alarmist. In a 2020 paper, Orben warns against what she terms “technology panics”, arguing that these can encourage quick fixes that “centralise control over truth”.

Instead, I think it would be more helpful to invest in nuanced and effective proactive public education strategies. One technique that appears promising is “pre-bunking” – the process of debunking lies, tactics or sources before they strike – because prevention is better than cure. It works like inoculation, aiming to build mental resilience against misinformation before people are exposed to its full force. Much like MINDEF’s Exercise SG Ready earlier this year, which involved simulated phishing exercises run by organisations, pre-bunking exposes people to weakened forms of misinformation and uses this to teach them to spot the manipulative techniques used by fake news peddlers.

This approach appears to work across different cultures and among those with differing political views, and should be integrated into our wider strategy for tackling the effects of misinformation on our population. Specific proposals could include enhancing media literacy education in schools and at other touchpoints, where our citizens can hone the critical-thinking skills necessary to navigate an increasingly complex information landscape. We can also use short-form content on social media and interactive online experiences to reach a wide audience, teaching them to recognise common manipulation techniques used in deepfakes and other misinformation campaigns.

To conclude, we support the addition of measures to tackle the harm that deepfakes can cause during the especially vulnerable period of an election campaign. However, there are a number of concerns and clarifications that I hope the Minister can address, as we work together to ensure that our democratic process does not come under threat from sophisticated manipulated media.