Commentary: Momentum is growing worldwide to make social media less addictive
After a landmark US trial, Singapore is well-positioned to join the growing tide of regulators targeting social media platforms for addictive features, says Mark Cenite of NTU.
by Mark Cenite · CNA
SINGAPORE: A California jury found Meta and Google liable for deliberately designing platforms to keep young users glued to their screens, at the expense of their well-being, and for failing to warn users of the risks.
The social media giants were ordered to pay US$6 million in damages to a 20-year-old plaintiff, identified only as Kaley. The case was decided under the same legal doctrines used to hold car manufacturers and tobacco companies accountable for designing defective products.
Kaley has a history of depression, anxiety, suicidal thoughts and body dysmorphic disorder – severe distress over distorted perceptions of her body image. She convinced the jury that YouTube and Instagram, which she started using at ages 6 and 9, respectively, substantially contributed to her suffering.
A few million dollars may not alter a tech giant’s conduct, but Kaley’s lawyers are coordinating legal action involving thousands of other plaintiffs.
If legal losses continue, platforms may change familiar features that led to liability in Kaley’s case. Those features are also a focus of European Union regulations, and Singapore is well-positioned to consider targeting them as well.
PLATFORM FEATURES IN FOCUS
Kaley testified that she was drawn in by chasing likes and followers. She became preoccupied with comparing herself to similar users sharing everyday updates about outfits and birthday parties.
The genius of the legal strategy in Kaley's case was not arguing for liability based on content. For decades, potential claimants got nowhere in the US, where a 1996 law shields platforms from liability for harm stemming from users’ posts.
Kaley’s winning argument was based on specific features that platforms use to maximise engagement. Infinite scroll ensures there are always new posts to view, and video autoplay makes going down a rabbit hole effortless. Distressed by her social media use, Kaley was nonetheless unable to close the apps.
Social media platforms have a strong business incentive to do whatever works to keep us scrolling so they can show us more ads, even when we want to stop. Meta CEO Mark Zuckerberg testified in Kaley’s trial that it is not in a platform’s interest to upset users. But engagement metrics do not measure satisfaction – just time on the platform.
In the EU, regulations at various stages of development focus on the features that digital platforms use to engage users, such as infinite scroll, autoplay videos and counters that encourage users to maintain streaks of consecutive daily use. The emerging approach may push platforms to disable these features by default for minors and to provide adults with the option to switch them off.
IMPLICATIONS FOR SINGAPORE
In the university courses I teach, the undergraduates are social media natives. Many have mentioned in class discussions their own struggles with self-regulating social media use. They tend to be open to considering changes to platform features but are uncertain about their impact.
Singapore has increasingly focused on holding platforms responsible for hosting content, rather than just the individuals who posted it. The Infocomm Media Development Authority (IMDA) recently issued letters of caution to X and TikTok after finding serious weaknesses in their detection and removal of harmful content. They could face fines in Singapore if they do not improve.
Singapore’s new Online Safety Commission, set to begin operating by mid-2026, will be empowered to issue takedown orders to platforms and force the disclosure of perpetrators' identities, enabling victims to pursue civil lawsuits.
Laws targeting offensive content are only part of the picture, though. To help users like my students, Singapore's next regulatory frontier can target the addictive architecture itself.
App stores have offered different versions of social media apps to comply with various legal requirements, including strict EU regulations. If platforms are forced to disable addictive features in the EU or the US, they are unlikely to roll those protections out globally by choice. Ultimately, whether Singaporeans get access to safer, less addictive social media may depend on local laws demanding it.
NEW TECHNOLOGY, FAMILIAR THREATS
Guardrails against digital media addiction may take on new urgency with the rise of artificial intelligence chatbots. The top use of generative AI in 2025 was not writing or coding, but therapy and companionship, according to research reported in the Harvard Business Review.
Eyeing an IPO later this year, OpenAI announced it will display ads on the free and lower-tier subscription levels of ChatGPT.
Though OpenAI has stated it does not aim to maximise the time users spend on ChatGPT, the business incentive is clear. More time spent on a platform means more time to present ads. Chatbots appear poised to follow the social media business model of monetising user engagement.
The lesson from social media is that engagement-driven business models lead to predictable harm to some users. AI companies are already facing lawsuits over harm to users who developed emotional bonds with chatbots, including suits brought by family members of users who died by suicide. Platforms have had little incentive to fix features that are working as designed: keeping us engaged.
The verdict in Kaley’s case and the emerging EU approach point to options Singapore might consider. Your late-night doomscrolling or chats with AI aren’t just failures of your willpower. They are the result of platform design choices.
The questions now are whether social media platforms and AI chatbots will help users make informed choices and how impactful new safety features will be.
Dr Mark Cenite is Associate Dean (Undergraduate Education) at Nanyang Technological University’s College of Humanities, Arts, and Social Sciences, and teaches media law and artificial intelligence law at the Wee Kim Wee School of Communication and Information.