Amid the reams of distressing testimony and evidence that emerged from the coroner’s inquest into Molly Russell’s suicide, a short exchange brought the complexity of preventing online harm to children down to one, very human, response.
Asked by the barrister representing the Russell family whether some of the material Molly had viewed was safe for children to see, Judson Hoffman, Pinterest’s head of community operations, said he was not an expert. When pressed – “Would you show it to your children?” – his answer was, simply, “No.”
One thing that stood out from the evidence was the sheer volume of consumption and activity. As was widely reported, in the last several months of her life Molly engaged with tens of thousands of social media posts, including content that “raised concerns”.
It would no doubt be extremely hard to point to the individual pieces of content that led to Molly Russell’s tragic suicide. More likely, many hundreds or thousands of posts each had a small but cumulative negative effect on her mental health.
Molly’s family and others have worked hard to make sure the public and the government know what happened to her, so that action can be taken to make it less likely to happen to someone else. Sadly, in the course of our research, we see that experiences like Molly Russell’s are not unusual, even if the consequences are not always as stark. We repeatedly meet and hear about people engaging with large volumes of ever-more niche or ‘grey area’ content that causes them harm over time. The outcomes are thankfully often different, though still deeply concerning, but the pathways will be familiar to many who work on online harms.
The larger social media platforms regularly publish transparency reports detailing the volume of ‘bad’ accounts they’ve blocked and ‘bad’ content they’ve removed. This reflects conventional thinking on how online harms occur. The posited solution has been that some content can be labelled as inherently toxic and then taken down.
But as we have seen in our research – most notably in the projects we have done recently for Ofcom into how people are harmed and the risk factors that make online harm to children more likely – the reality is that content impacts different people in different ways, depending on context and what other content they may have engaged with.
Our work for Ofcom builds on a previous, as yet unpublished, project in which we tested the applicability of a framework developed for ‘offline’ health and safety to the online realm. This framework draws a distinction between hazards, risks and harms, and helps explain why harm occurs in some circumstances but not others.
The hazards, risks and harms framing implies that there aren’t ‘good’ and ‘bad’ units of content. A hazard is something with the potential to cause harm, but whether that harm actually occurs is determined by contextual risk factors. Rather than relying on assumptions about which hazards will cause harm and seeking to remove them, the model encourages us to observe when harm has occurred and work back through the contextual factors to identify which hazard or hazards contributed to it.
This framing also acknowledges that something hazardous can be valuable or useful in the right context. Think of a fitness video about working towards a slim waist: helpful to one person in one circumstance, disastrous to the wrong person in the wrong circumstances. This means that protecting users from harm will need to take account of more than just the content itself.
Our research for Ofcom showed that, when prompted to recall harmful experiences online, people – including children – often pointed to exposure to single pieces of shocking content, but these rarely resulted in significant or lasting trauma. In contrast, the more serious harm seemed to be associated with engagement with content over time, as happened with Molly Russell.
In many cases the content people saw was not unambiguously problematic, and in the right context might even have been considered positive or ‘helpful’. Yet it formed a pathway that led to harm. This has implications for regulation and for platforms’ duty of care, particularly towards younger users.
Some harmed respondents within the research felt the content they engaged with on these longer-term pathways was positive or supportive – until it became clear that it wasn’t. An online community around eating disorders, for example, could at first feel supportive. But over time engagement with it could reinforce negative beliefs and behaviours, as well as increase the likelihood that the user would be served more of the same – and potentially more hazardous – content. At least in some cases, users may not be the best judges of whether their interaction with content is positive or harmful. This should not come as a surprise, but it does create additional challenges for relying on mechanisms such as user reports to identify potentially harmful content.
The more we understand about what online harm really looks like – that it is caused not so much by ‘one-off’ pieces of ‘bad’ content as by cumulative engagement by users whose circumstances put them at greater risk of harm – the closer we can get to preventing children from having experiences online that none of us would want for a young person we care about.