The Films, Videos, and Publications Classification (Urgent Interim Classification of Publications and Prevention of Online Harm) Amendment Bill introduces new criminal offences. It gives the Chief Censor the power to make immediate decisions to block content.
It also allows the government to create and deploy internet filters, which would screen out material the Chief Censor decides is objectionable. That could extend beyond violent material.
Response to Christchurch terror live-streaming
The bill matches the proposal first tabled in Cabinet last December by Internal Affairs Minister Tracey Martin.
It aims to update the Films, Videos, and Publications Classification Act 1993 after last year’s live-streaming of the Christchurch terror attack.
The focus is on stopping people or organisations from live-streaming objectionable or violent content. It does not target the companies that provide the infrastructure used to distribute it.
Take-down notices and filters
Yet carriers and hosts will need to comply with government-imposed take-down notices, which includes removing links to objectionable content. Failure to do so could result in civil action and fines.
The legislation will allow the Department of Internal Affairs to create internet filters. The DIA must consult with internet service providers before it launches a filter.
InternetNZ opposes the filter plan. In a media statement, CEO Jordan Carter says a filter can have dangerous side effects.
He also says: “The proposed filters would work at the network level, in a way that is a mile wide and a millimetre deep.
“People who want to get around these filters can easily do so by using a VPN, technology that many Kiwis have been using when working from home recently.”
Filters can be ineffective with violent material
As Carter points out, the problem with filters is that they often don't work as intended. Determined people who want to see or distribute objectionable material can work around them, while everyone else may suffer a degraded internet experience.
Internet filters are, by their nature, blunt tools. There's a trade-off between failing to block bad material and blocking harmless content.
The same goes for artistic content: filters are incapable of drawing the line in the right place.
False positives, false negatives
Researchers found that filters designed to shield young people from pornography might block 90 percent of adult content. At the same time, they can block up to a quarter of inoffensive pages.
Tinkering with the algorithms to permit more inoffensive material generally means letting more porn through.
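The trade-off the researchers describe can be sketched in a few lines of Python. This is a toy illustration only: the scores, thresholds and page counts below are invented for the example, and real filters classify far messier signals. The point is that a single threshold cannot reduce both error rates at once.

```python
# Toy model of a content filter: some hypothetical classifier assigns each
# page a risk score between 0 and 1, and the filter blocks anything at or
# above a chosen threshold. All numbers here are invented for illustration.

adult_scores = [0.95, 0.9, 0.88, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.4]  # should be blocked
benign_scores = [0.05, 0.1, 0.15, 0.2, 0.3, 0.35, 0.55, 0.6]            # should pass

def block_rates(threshold):
    """Return (fraction of adult pages blocked, fraction of benign pages wrongly blocked)."""
    caught = sum(s >= threshold for s in adult_scores) / len(adult_scores)
    overblocked = sum(s >= threshold for s in benign_scores) / len(benign_scores)
    return caught, overblocked

# A strict threshold catches most adult content but also blocks benign pages.
print(block_rates(0.5))  # (0.9, 0.25): 90% of adult content blocked, 25% of benign pages too

# Relaxing the threshold frees the benign pages, but lets more adult content through.
print(block_rates(0.7))  # (0.6, 0.0): no over-blocking, but 40% of adult content now passes
```

Raising the threshold is exactly the "tinkering" described above: every benign page you unblock is paid for by objectionable material slipping through.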
Filter advocates talk about artificial intelligence helping, but that often makes matters worse. Filters don't understand context or nuance, and AI is usually little better at either.
Much of the focus with internet filters is on dealing with web pages. These days they account for a fraction of online material. Peer-to-peer networks, instant messaging and social media platforms are a bigger problem.
There are other issues with filtering. Protecting young children might be straightforward, but teenagers are often more tech-savvy than the adults setting up the filters.
Filtering violent material can create a false sense of safety. It's the same when not-very-tech-savvy people install security software: feeling safe from malware, they relax and fall victim to phishing or other scams.
While governments often set up filters with good intent, they can be used for censorship, even shutting down political opponents.
On the positive side
In practice, the slippery-slope argument doesn't wash. New Zealand already has a voluntary filter blocking child abuse material. It appears to be working well, with no slippery-slope effect.
Determined viewers can bypass that filter, yet it stops everyday users from stumbling across objectionable material.
New Zealand’s child abuse filter lets service providers opt in, and there is independent oversight.
The planned filter in the new legislation would be compulsory. There is no mention of formal oversight.