Cybersafety

Online abuse: teenagers might not report it because they often don’t see it as a problem

Protecting children from harm online is high on the political agenda right now. The UK government has set out plans to make social media companies legally responsible for protecting users, and MPs have criticised social media platforms for relying on users to report abuse.

This is a serious problem, especially if people who come across illegal material online don’t recognise it as such. While working as a news presenter, I helped run a project teaching thousands of children about social media laws, and I noticed patterns emerging in their responses to threatening, abusive and hateful messages online.


They said things like:

“You’re not physically doing anything. Things like this are said all the time. You can’t arrest everyone on the internet” – Year 12 pupil.

“Even though it’s disgusting, as long as there’s no physical violence, it’s okay. Free speech. It’s an opinion” – Year 13 pupil.

“Don’t think you could be arrested … Nothing happens on social media, no one gets into trouble, so many people say bad stuff” – Year 8 pupil.


So, in 2014, I began an academic study in which I gave 184 participants – aged 11 to 18 – different examples of social media posts and asked them how “risky” each one was, in terms of whether the person posting it might get in trouble.


Among the examples (informed by Crown Prosecution Service guidelines) were racist, homophobic and misogynistic material; threats of violence; potential harassment and a post suggesting image-based sexual abuse (commonly called “revenge porn”).

I asked young people to think of the different levels of risk like traffic lights: red for criminal risk (police involvement), orange for civil risk (legal action by other people), yellow for social risk (sanctions from school or family) and green for no risk. I also asked them why they thought abusive posts might not be a criminal risk. Here’s what I found.


Victim blaming

One example I used was a post which seemed to share a sexual video of a fictional person called “Alice” (presented as a comment with a link to a YouTube video). This created more disagreement than any other example, with different participants putting it under all four categories of risk. This is surprising, given that schools, the media and non-governmental organisations have all emphasised the risks of sharing indecent images. It’s even included in the Department for Education’s new guidelines for sex education.


Even so, some children argued a sender “couldn’t be in trouble” if Alice had agreed to the video in the first place – without even questioning whether she might have been pressured into it, which studies reveal is a common occurrence among young people. Indeed, even if Alice had consented to be filmed, sharing the video without her permission could still be illegal under two different laws, depending on whether she was under 18 or not.


Victim blaming is used as a way to downplay the responsibility of the people who share such content online. It also implies that victims should “just deal with it” themselves. In fact, the children in my study thought it more likely that Alice would sue a sender privately than involve the police.


Defending free speech

“Just saying”, “just joking” and “just an opinion” were common responses to online posts in my study – even to hate speech or threats, which could actually result in a criminal conviction. Freedom of speech can be wrongly viewed as a “catch-all right” for people to say whatever they like online. In some cases, children’s views mimicked alt-right arguments in favour of liberty, free speech and the right to offend.


In reality, freedom of expression has always had legal limits, and material inciting hatred and violence on the grounds of race, religion or sexual orientation is criminalised under the Public Order Act 1986.

Some teens believed even jokes had their limits, though – and most thought a joke bomb threat would result in prison. Ironically, this was the example my participants agreed on most, given that someone was famously acquitted over a similar tweet in 2012.


Tolerating abuse

Many children doubted there would be any serious consequences for social media abusers – a finding echoed in other studies. Some felt police wouldn’t “waste time” dealing with cyber-hate – which news reports suggest is probably accurate.


Others argued that hateful or threatening content is “tolerated” on social media, and so widespread as to be “normal”. And given the scale of online abuse against women, for example, they may have a point.

Younger children were more likely to think that police might get involved, whereas older teens put abusive posts in lower risk categories. It’s possible that as children grow up and spend more time online, they see a larger amount of abusive material shared without any obvious consequences, and assume it can’t be illegal. This is bad news for young people who might re-post or share abuse, but also for victims, who may think there’s no point seeking support.


Anything goes?

It’s often claimed we live in a “post-truth” or “post-moral” society. Within my focus groups, there was little debate over who was correct: despite the lack of agreement between the children, different viewpoints were treated as “equally valid”. Respecting others’ arguments is one thing, but failing to tell truth from lies is also a cause for concern.


Young people need to be given the tools to understand and critique arguments based on reliable evidence. Universal human rights are a good starting point for lawmakers trying to reach global agreement on what will (or won’t) be tolerated online. But young people also need to be educated to understand those rights. Otherwise, social media sites could simply become a space where anything goes.


This article is from The Generation Next website

https://www.generationnext.com.au