Is there such a thing as harmful content?

Disagreements flowed freely at the spiked-seminar on internet regulation.

Amol Rajan

Topics Science & Tech

As the internet has risen in popularity, concerns over the regulation of internet content have exploded. A seminar ‘Internet regulation: is there such a thing as harmful content?’ on 10 September, produced by spiked and Hill & Knowlton, tried to get to grips with this thorny question.

The two speakers were Roland Perry, founder of Internet Policy Agency, who argued that regulation was necessary; and Sandy Starr, editor of the spiked-IT section, who argued that it was not.

Roland Perry asserted that ‘the world is full of vulnerable people’ who needed protection from ‘various threats’. Perry said that the availability of harmful material on the internet was no more the fault of the internet service provider than it was of the personal computer being used. He suggested instead that ‘perhaps it is the fault of the content providers, or the sum of content providers’. Perry argued that dual emphasis was needed: protection of vulnerable people; and prosecution of culpable content providers.

Perry said that the pressure to curb access to harmful content would transcend different cultures. Obtaining a world-wide consensus on regulation of harmful content would prove difficult, he admitted, but given the level of vulnerability, and the number of vulnerable people, the case for regulation was pressing.

Sandy Starr countered Perry with five key points. First, he argued, there is an important distinction between actions and speech: actions can be harmful, but speech, strictly speaking, cannot. The problem with terms like ‘harmful content’ and ‘hate speech’ is that they confuse actions with speech. When we start condemning speech instead of action, asked Starr, how far is this from being an Orwellian thought-crime?

Second, he took up the fear that on the internet absolutely anybody could be doing absolutely anything at any time. The internet is associated with several unknown quantities, he said, but these needn’t necessarily be sinister: it could imply freedom, rather than danger.

Third, he dismissed self-regulation as a solution, describing it as ‘a dangerously invisible and unaccountable means of dealing with problematic content’. For regulation to be effective, he argued, it must take place in public view.

Fourth, he worried that panic about harmful content on the internet leads to the stigmatisation of the technology itself – potentially limiting its deployment in exciting or productive ways. The internet, he said, becomes the ‘villain of the piece’, a depository for blame or responsibility that belongs in human hands.

And fifth, he said that fears about internet content create a false impression of the society we live in, elevating the idea that children are constantly at risk from predatory adults – which could ‘threaten to undermine trust between people’. The internet ‘should be a medium for unbridled self-expression’, not an amplifier of fears.

The economic journalist Daniel Ben-Ami took up Perry’s argument, asking whether, if the world was really full of vulnerable people, anybody who used the internet could in fact be considered ‘safe’. Should the internet not be banned altogether? David Carr, of the Big Blog Company, questioned the validity of the label ‘vulnerable’: why, he asked, is someone over 65 or confined to a wheelchair any more vulnerable to harmful content than anybody else?

Many attendees questioned the category of ‘harmful content’. Dolan Cummings, of the Institute of Ideas, said that pictures themselves were not harmful; what was potentially harmful was the way in which pictures were used. spiked’s Jennie Bristow suggested that a peculiarity of our times is that we seem, above all else, vulnerable to our own emotions, and in need of protection from them.

Others argued that defining harmful content was possible, and a prerequisite to regulation. Mary-Louise Morris, of Childnet International, gave examples of children whom she worked with who had been adversely affected by internet material. One example of harmful content she cited was a derogatory picture of a seven-year-old: would that picture not be seriously harmful to the person who was in it?

Piers Benn, from Imperial College London, suggested making a distinction between ‘offensive’ and ‘harmful’ content: you could not regulate against people being offended. In any case, he said, ‘one can be corrupted by what one reads or thinks, but that does not necessitate mass-regulation’.

There were a number of disputes about the implementation of regulation. Dolan Cummings made the point that internet laws should, like all law, be enforceable, and that internet regulation was pointless unless it was carried out consistently and well. Dean Bubley, of Disruptive Analysis, suggested that there was a major problem in deciding who should regulate the internet: government apparatchiks were only experts in paper-use, and companies themselves couldn’t be trusted.

And there were disagreements about fears associated with online interaction. Virginia Chapman said that online communities were the ‘Wild West’ of the internet – suggesting that participation in online communities encouraged individuals to assume different personas, to become different characters from day-to-day life. This difference in behaviour had the potential to be dangerous, she argued, and so necessitated regulation.

But Alison Perrett endorsed Starr’s suggestion that a climate of fear was the most tangible product of internet regulation. She cited the example of her daughter being told to put clothes on because, while running around naked on a family holiday, ‘anybody could be taking pictures’.

To close the debate, Roland Perry noted that there was no watershed on the internet – if TV received such regulation, why shouldn’t the internet have the same? (Although how exactly such a watershed might be implemented was something that would need to be decided.) Finally, he argued that the ‘battleground needs to shift to those who are generating this harm’, with a stronger focus on the ‘villains’ who spread dangerous material around the internet.

Meanwhile, Starr argued that ‘regulation is a parent’s job’. The culture of fear created by notions of ‘harmful content’ should be inverted, he said, into a positive feeling about the remarkable power of new technologies.

Read on:

Communication breakdown, by Sandy Starr
