Mr Speaker, in Mandarin please.
Mr Speaker,
I support the intent of the Online Safety (Relief and Accountability) Bill.
It will provide victims of online harms with more timely avenues for redress, improve the safety of the online environment, deter and prevent online harmful activity, and promote reasonable and responsible conduct online.
We can, and should, ensure that victims receive more timely and comprehensive assistance. At the same time, the Workers' Party believes the Bill can be further improved, and we have therefore proposed several amendments to it.
We propose expanding the definition of "harmful online content". The Bill currently covers 13 categories of online harms, including the common ones of online harassment, doxxing, online stalking, intimate image abuse, and child sexual abuse material. We propose adding two further categories, namely the inducement of self-harm and suicide, and the sexual grooming of children and vulnerable adults. This would round out the scope of the Bill.
In addition, we can strengthen the fair and impartial implementation of the Bill. The Bill will establish an Online Safety Commission to assist victims of online harms through a unified regulatory and complaints mechanism.
We have proposed an amendment to make the courts the final arbiter of appeals, so as to improve the appeals mechanism and strengthen independent oversight.
Another proposal is to strengthen the transparency and accountability of the mechanism. We have proposed an amendment requiring the Commission to submit an annual report to Parliament. The report should include the number and types of reports received, turnaround times, the directions and orders issued, risk assessments and trend analyses of online harms, and the measures taken to protect personal privacy.
Publishing an annual report will make the mechanism more transparent and reassure the public that the Bill is working as intended.
Mr Speaker, I want to begin by expressing my support for the intent of the Online Safety (Relief and Accountability) Bill to:
- Provide victims with timely means of redress;
- Promote and improve online safety;
- Deter and prevent online harmful activity; and
- Promote accountability, as well as responsible and reasonable conduct online.
The statistics tell a troubling story. MDDI’s Perceptions of Digitalisation Survey found that 84% of Singapore residents encountered harmful content last year. One in three experienced harmful behaviour directly. A report by SG Her Empowerment shows that female youths were twice as likely to experience sexual harassment online.
Mr Speaker, online harms are not a mere inconvenience. They can be severe, life-altering threats to safety and peace of mind. Because online harms can be constant, invasive and hard to escape, they can be as harmful as, if not more harmful than, physical harm.
We know that the current system is inadequate. Victims of online harmful activities face challenges reporting such behaviour to online service providers. They report the content. They wait hours, usually days. Often, the content stays up.
IMDA’s 2024 report found that platforms took an average of 5 days or more to act on user reports of harmful content that violated their own community guidelines. Even then, most platforms acted appropriately on only half the harmful content reported. For harms that are by nature constant, invasive and hard to escape, 5 days can feel like forever.
This Bill offers victims more tools - ways to get harmful content taken down faster, and to hold perpetrators accountable. That matters, and I support it. At the same time, a good idea still needs good implementation.
Given that this Bill grants significant powers to a new agency - the Online Safety Commission - that will safeguard our rights in this fast-evolving space, I believe the Bill warrants examination with care and rigour.
Part 1 - Building a Commission We Can Trust
Mr Speaker, whether the Bill’s policy objectives are achieved hinges heavily on the proper functioning of the Online Safety Commission (OSC). The Commission will have an extensive mandate. It will receive and triage reports, investigate whether thresholds of harm are met, issue directions and orders, monitor their compliance and handle appeals. This is a lot of responsibility, and it is crucial to get three things right: capacity, independence and transparency.
First, will the Commission have what it takes to do the job? It will be on the frontlines, dealing with cases that will require an exacting mix of legal expertise, technical knowledge, and sound judgement under pressure, all set amidst the fast-moving digital landscape.
Given these responsibilities, I second my colleague He Ting Ru’s point that the Commission should be staffed and resourced like a quasi-judicial body, not a customer service centre. I look forward to the Minister’s clarification on the anticipated resourcing for the Commission in terms of manpower and budget.
- What specific expertise will be recruited? Will it include lawyers and technologists who can keep up with the fast-evolving digital landscape, mental health professionals who understand trauma, and people with lived experience of online harms?
- How will the Ministry ensure that there is trauma-informed care and processes for both the victims who engage with the Commission, and Commission staff who will have extensive interactions with these individuals?
Second, will the Commission remain independent when it matters most? The Commissioner is appointed by a Minister, and is subject to Ministerial direction. The Commissioner will make important assessments on the factual veracity and reputational harm of statements, a profoundly difficult and at times contentious exercise.
Unlike public agencies, political office holders themselves can make reports to the Commission. This means that the Commissioner may find herself having to make judgments about factual accuracy and reputational harm – judgments that could, in some cases, involve content critical of political office holders, possibly including the very Minister who appointed her.
To be clear, this is not a question about the integrity of any future Commissioner or Minister. It is about how good governance is not solely built on trust in individuals, but also on systems that work, and which remain independent and accountable. For the Commission to become a trusted institution that endures, we need to ensure that it is structurally resilient against potential conflict of interest and/or abuse.
I welcome the Minister to clarify what mechanisms there are to ensure the Commission and Commissioner’s operational independence, particularly when handling content that is politically sensitive.
Third, how will Singaporeans know that the Commission is working as intended? Mr Speaker, the sweeping powers granted to the Commission can only be justified if Singaporeans can see that they are being used effectively and appropriately. We deserve to know:
- Is the system working?
- Are reports being handled fairly and quickly?
- What kinds of harms are the most common, and are we adapting to emerging threats?
I invite the Minister to clarify how the Commission’s effectiveness will be measured, and to whom it will be held accountable. We believe that the Commission can and should do better to counteract the opacity of online service providers in regulating online harms. For this reason, the Workers’ Party has tabled an amendment requiring the Commission to publish annual reports. The report should contain:
- Number of reports received;
- Quantity and types of directions and orders issued by the Commissioner;
- Turnaround time for resolving reports;
- Findings on risk assessments and trends relating to online harms.
This is neither radical nor unprecedented. It is the standard practice of the Commission’s foreign counterparts with similar mandates - Australia’s eSafety Commissioner, the UK’s OFCOM and the European Union’s Digital Services Coordinators. Australia and the EU even go a step further to enshrine mandatory review of the legislation after a fixed period to ensure that it remains relevant.
Annual reports by the Commission will help build public confidence in its work. They will also help researchers, civil society groups and even government agencies better understand what is happening online so we can respond more effectively. Transparency is not a burden. It makes good regulation sustainable.
Related to transparency is the question of consistency. Beyond aggregate reporting in the form of annual reports, there is also the question of how the Commissioner makes individual decisions for each case and whether they create precedent. As the Bill does not require decisions made by the Commissioner to be published, my colleague Andre Low has sought clarification on whether the Commissioner will be bound by prior decisions or whether each case will be decided on fresh discretion. This matters because consistency in decision-making is fundamental to fairness. Published decisions - even in anonymised or partially redacted form - provide predictability and prevent arbitrary outcomes.
Part 2 - Identity information disclosure - Getting the Balance Right
Mr Speaker, another defining feature of the Bill is the powers it grants the Commissioner to unmask anonymous users. Section 52 allows the Commissioner to require an online service provider to obtain end-user identity information. Section 53 allows the end-user identity information to be disclosed to victims for a prescribed purpose, based on reasonable suspicion that the end-user engaged in harmful activity.
This power has real value.
- First, it serves as a deterrent. When perpetrators know they can be identified and held accountable, some might think twice before posting that intimate image, sending a threatening message or coordinating a harassment campaign.
- Second, it empowers victims. They deserve the option to pursue justice beyond content removal. They will have the option to pursue civil remedies under the new statutory torts. They can seek damages for the harm they have suffered. But none of that is possible if they do not know who they are taking to court.
While I support this power in principle, the bigger question is how it can be implemented responsibly.
Because, Mr Speaker, identity disclosure is a one-way door. Once disclosed, information cannot be undisclosed. And the risks are real, not theoretical. We have seen the harm that can follow when someone’s identity is exposed - harm that includes some of the very conduct this Bill targets, like doxxing, and in extreme cases extends to actual physical harm.
This is why the standard matters. The Bill allows identity information disclosure for a prescribed purpose based on “reasonable suspicion” of harmful activity - short of a formal determination. This means someone could be unmasked based on an allegation that may not ultimately be proven. There is also the question of what happens after disclosure. Yes, penalties exist for misuse. But penalties are reactive - they punish harm after it occurs. They do not prevent someone from using identity information for vigilantism, or sharing it with others.
Mr Speaker, I also want to acknowledge something that may be overlooked in this debate. Anonymity is not inherently bad. For some users, it serves as protection. Academic literature on online safety recognises the dual nature of online anonymity – it can enable authentic expression, even as it calls for education to promote constructive dialogue. It enables marginalised voices to speak up, facilitates frank discussion of sensitive topics and protects some from discrimination or violence.
This is not an argument against end-user identity information disclosure. It is an argument for getting the balance right so it works as intended - deterring perpetrators and empowering victims, while carefully calibrated to prevent misuse and protect legitimate anonymity. The threshold should be clear enough for consistent application, yet flexible enough to account for context and severity.
In this regard, I would appreciate the Minister’s clarifications on the following:
- On the standard of “reasonable suspicion”, what specific guidelines, training, and objective criteria will be used to ensure the definitions of the various “online harmful activities” are applied consistently, predictably, and fairly by the Commissioner?
- On “prescribed purpose”, what exactly are these purposes? Is it limited to pursuing civil action under the statutory torts? How will the Commission verify that applicants genuinely intend to use the information for a prescribed purpose?
- Safeguards for disclosure: What conditions will be imposed on the applicants who receive the identity information? Will there be consequences if they subsequently choose not to pursue the prescribed purpose? How will the Commission intervene if it suspects information was misused?
Part 3 - Prevention is Better than Cure
And finally, Mr Speaker, no enforcement mechanism, however well-designed, can eliminate online harms. Our goal should not be to create a system where Singaporeans constantly defer to regulatory power. We must continue to empower Singaporeans young and old to navigate the online world safely, think critically about what they encounter and act responsibly.
While it is important that victims have practical solutions to seek timely recourse, it is equally important to prevent that harm in the first place - how do we build a generation of digitally literate, resilient Singaporeans who can recognise risks, respond appropriately and support others?
We should actively work to normalise the reporting of harmful content, promote positive online behaviour and challenge the bystander effect. Research consistently shows that bystanders often fail to intervene or report, not from apathy, but from uncertainty about whether intervention is appropriate or how to do it effectively.
There is also potential for the Commission to be a resource that helps all Singaporeans navigate digital spaces more safely and constructively. Enforcement is but one pillar of work for the Commission’s global counterparts. Australia’s eSafety Commissioner and the UK’s OFCOM also invest in supporting, conducting and evaluating research on online safety.
Conducting demographic-specific research helps build a better, more up-to-date understanding of the changing nature of online risks, both in terms of content and medium, and helps inform policymaking. It also feeds into the development of accessible, regularly updated resources to increase the confidence, skills and online safety of citizens.
Conclusion
Mr Speaker, I reiterate my support for the fundamental intent of the Bill. Victims of online harmful activity need and deserve better recourse than what is available today, and the establishment of the Commission is a necessary step towards that.
In the near term, with the establishment of the Commission, we may see more reported instances of online harms – not because more harm is committed, but because fewer people will suffer in silence. This will be a positive development, as it means people trust the system enough to use it.
In the long term, online harms will evolve as quickly as technology itself because they are driven by perverse incentives – financial profit, manipulation or malice. This is why getting the Commission’s design right matters. It needs to be capable, independent and transparent enough to adapt to whatever comes next.
Mr Speaker, the true marker of Singapore’s success in safeguarding our digital spaces will not be easily quantified. It will lie in something harder to measure but more important:
- Do Singaporeans feel better equipped to identify and respond to online risks?
- Do we feel supported and empowered to seek help?
- Are we more likely to intervene and report harmful content?
- Is our online discourse becoming more constructive and less toxic?
If most Singaporeans can answer ‘yes’ to most or all of the above questions, then we know that we have made good progress in building not just a safer digital space, but a healthier digital society.


