Wouldn’t you like to know what children see on social media?
It’s easier said than done. Between secretive teens, private pocket-sized devices and constant switching between multiple accounts, very few parents can keep up.
Policymakers are often out of touch, and digital companies frequently choose, for whatever reason, not to scrutinise or publish the data they hold on what content child accounts are served.
For researchers, this is a huge challenge. When we run research with adults, we often collect 24-hour screen recordings to see how they use their phones. With children this is a much bigger ethical dilemma. What if we see a nude? What if we see a child at risk?
But something has to be done to understand what’s going on. We hear about the nightmarish consequences – the teen suicides, self-harm, eating disorders and chronic depression experienced by young people, apparently the result of social media.
When faced with this challenge, we started by asking: who, or what, is actually the subject of research here? To understand what children experience online, we’re much more interested in what online platforms are doing than in what children are doing.
And if the platforms themselves are the subject of our research – maybe we don’t need to involve children at all?
We designed an experiment to test what platforms show to children
For our research with 5Rights, “Pathways: How digital design puts children at risk”, we set up a series of profiles on popular social media apps – modelled on the real behaviours of genuine children aged 13-17. And we watched what happened to them.
We got more than we bargained for. Within hours, most profiles (registered with the genuine age of a child) had been followed by unknown adults, sent unsolicited messages or added to group-chats. Several were sent links to pornographic content.
With one ‘like’ of a fitness tip or a bikini model, our child-avatars were immediately recommended higher volumes of similar or more extreme content – weight loss advice, body-building guides, highly sexualised content.
One search of a term like ‘bodygoals’ or ‘porn’ unlocked access to even more extreme content. In one experiment we typed ‘selfharm’ into the search bar, and the results were, needless to say, shocking.
We can, with careful ethics and rigorous research design, effectively ‘mystery shop’ the digital world
This exercise was not as simple as setting up a few profiles, hitting like on some posts and scrolling away. To be able to make robust claims, we had to set up a watertight experiment.
From randomised sampling of real children’s profiles, to using individual devices to prevent cross-contamination of cookies or phone-level data – we took every precaution we could think of to make this a fair test.
We designed a multi-phased methodology that incrementally introduced more ‘input data’ about children (their age, interests, behaviours, etc.) to measure the impact on what they were shown on social media.
And we put in place strict ethical parameters to mitigate any unintended negative impact on real children on these platforms – e.g. we did not ‘like’ or otherwise engage with any evidently harmful content, in case it increased the likelihood of the algorithm pushing this content to other child users.
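To give a sense of the shape of that phased design, here is a minimal, purely illustrative sketch in Python. The field names, the three example phases and the run_phase helper are our own assumptions for illustration only; the actual study was conducted on real devices and apps, not through code like this.

```python
# Illustrative sketch only: field names, phases and run_phase() are hypothetical,
# not the study's actual tooling. The real experiment ran on individual physical
# devices; this just shows the shape of an incrementally phased design.
from dataclasses import dataclass, field


@dataclass
class Avatar:
    """A child-avatar profile, modelled on the behaviour of a real 13-17 year old."""
    age: int
    interests: list[str] = field(default_factory=list)
    behaviours: list[str] = field(default_factory=list)  # e.g. searches, likes


# Each phase incrementally adds more 'input data' about the child,
# so the effect of each new signal on recommendations can be observed.
PHASES = [
    {"label": "registration only", "interests": [], "behaviours": []},
    {"label": "+ interests", "interests": ["fitness"], "behaviours": []},
    {"label": "+ behaviours", "interests": ["fitness"], "behaviours": ["like:fitness_tip"]},
]


def run_phase(avatar: Avatar, phase: dict) -> list[str]:
    """Stand-in for a session on a real device: apply the phase's inputs, browse
    for a fixed time, and record what the platform recommends.
    Ethical rule: never 'like' or otherwise engage with harmful content."""
    avatar.interests = phase["interests"]
    avatar.behaviours = phase["behaviours"]
    return []  # recommended content would be logged via screen capture, not code


if __name__ == "__main__":
    avatar = Avatar(age=14)
    for phase in PHASES:
        shown = run_phase(avatar, phase)
        print(phase["label"], "->", len(shown), "items recorded")
```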
Importantly – it is a repeatable exercise. Social media platforms are constantly updating and changing, and there are regular announcements about fresh protections for young people who use them. Our avatar experiments were conducted throughout December 2020, so there was always the chance that some of what we saw would now be blocked or prevented.
In the run-up to today’s publication, we ran one additional avatar to test this, with much the same result.
More needs to be done
Even with these considerations, there are obvious limitations to what we can claim from this research. Our avatars spent a maximum of six minutes per day on the platform, when we know real children often spend many hours. Would this make it more likely for negative outcomes to occur? Or less? We can’t know for sure.
But it remains obvious that more research like this will be needed if the UK is going to effectively regulate social media. In other sectors, mystery shopping and tracking of business practice are large-scale, ongoing and facilitated by industry.
Despite all the calls for action and the promises of the tech firms, our expectations of transparency, scrutiny and accountability have not yet caught up with the digital world.