Mr Chairman,
As Budget 2026 advances Singapore’s AI ambitions, we must confront a hard reality: Singaporeans are increasingly exposed to AI-generated misinformation and AI-powered scams at unprecedented scale and speed.
A 5 February article in Lianhe Zaobao documented a surge of sensational videos claiming that Prime Minister Lawrence Wong is being forced out and that intense internal power struggles are unfolding. These videos were generated entirely by AI within minutes, at a reported cost as low as one to two US dollars per 20-minute video.
MDDI has acknowledged observing multiple online accounts publishing such fabricated claims about Singapore’s domestic politics. An MDDI spokesperson quoted by Zaobao said that public education measures and resources have been rolled out, and urged the public to rely on official sources and refrain from sharing unverified content.
I welcome this response. But are these measures sufficient, given the scale and sophistication of AI-generated misinformation? And why was POFMA not invoked against those behind these videos?
Enforcement tools like POFMA alone cannot inoculate society against misinformation. We need a population equipped to question, verify, and critically assess what it sees online. What structured, long-term programmes will the Ministry develop to strengthen media literacy and critical thinking, especially among vulnerable groups such as seniors?
Will we expand community-based workshops, school curricula, and public campaigns that teach citizens practical verification steps, such as checking original footage, examining sources, and consulting authoritative channels? And can we harness AI itself to filter and flag suspicious content at scale?
If AI lowers the cost of deception to a dollar or two per video, the cost of inaction may be far higher. How will our national AI strategy ensure that Singaporeans are empowered to discern fact from fiction in an increasingly polluted information ecosystem?