
Statistics minister James Shaw launched the Algorithm Charter for Aotearoa New Zealand. He says it is a world first.

The charter might not seem a huge deal. Yet overseas experience suggests it could save lives.

Shaw’s press release says the charter will “give New Zealanders confidence that data is being used safely and effectively across government.”

Make that: “parts of government”. The charter is not compulsory. A total of 21 government departments have signed. The biggest data users, including Inland Revenue and the Ministry of Social Development, are on the list. The New Zealand Defence Force has signed; the Police have not.

New Zealanders would be more confident they would not be on the wrong end of a rogue algorithm if the charter was compulsory across government.

Ethical data use

The charter draws on work by the head of Statistics NZ, Liz MacPherson. She also has the title of chief data steward. MacPherson has been working on ethical data use in government.

Last year the government looked at how it used algorithms. It decided they needed more transparency. In July it set up a Data Ethics Advisory Group.

The thinking behind the charter is sound enough. Government departments use vast amounts of data. The software used to sift that data ranges from the straightforward to the highly complex.

This can work fine, but humans write algorithms. They can be biased or based on false premises. Algorithms can be broken. People using them can make bad decisions.

Algorithm chaos

There are plenty of stories of algorithms serving up inaccuracies and discriminatory decisions. The process is often opaque, and government employees have been known to hide behind bad decisions. The logic used to feed algorithms is often kept secret from the public.

When this happens, the consequences can be dire. At times the most vulnerable members of society can be at risk.

One of the worst examples of how bad this gets is Australia’s so-called Robodebt saga. Australians who had received welfare payments were automatically sent debt notices, often without explanation, when data matching between departments showed inconsistencies.

Many Robodebt demands were wrong. Fighting or even questioning the demands saw people descend into a Kafkaesque digital dystopia. There were suicides as a result.

Agencies signing the charter commit to explaining their algorithms to the people on the receiving end. The rules used are supposed to be transparent and published in plain English. Good luck with that one.

Fit for purpose

Elsewhere the New Zealand charter wants algorithm users to “make sure data is fit for purpose” by “understanding its limitations” and “identifying and managing bias”. It sounds good, but there is a danger public servants might push the meaning of those words to the limit.

Any agency signing the charter has to give the public a point of contact for enquiries about algorithms. The charter expects agencies to offer a way of appealing against algorithm decisions.

There’s a specific New Zealand twist. The charter asks agencies to take Māori views on data collection into account. This is important. Algorithms tend to be written by people from other cultures and Māori are disproportionately on the wrong end of bad decisions.

One area not covered in the documents published at the launch is how agencies might deal with data that is manipulated by external agencies. Given that government outsources data work, this could be a problem. There may even be cases where external organisations use proprietary algorithms.

A survey conducted by the Office of the Privacy Commissioner found that two-thirds of New Zealanders want more privacy regulation.

Less than a third of those surveyed are happy with things as they stand. Six percent of New Zealanders would like to see less regulation.

Women are more likely to want more privacy than men. The survey found Māori are more likely to be very concerned about individual privacy than others.

Business sharing private data

In general, New Zealanders are most concerned about businesses sharing personal information without permission. Three quarters of the sample worry about this. Almost as many, 72 percent, have concerns about theft of banking details. The same proportion worries about the security of their personal information online.

The use of facial recognition and closed circuit TV technology is of concern to 41 percent.

UMR Research conducted the survey earlier this year. It interviewed 1,398 New Zealanders.

The survey results appeared a week after Parliament passed the 2020 Privacy Act. They suggest the public broadly supports strengthening the way New Zealand regulates privacy.

Most of the changes to the Privacy Act bring it up to date. Parliament passed the previous Act in 1993 as the internet moved into the mainstream. There have been huge technology changes since then.

Justice Minister Andrew Little says the legislation introduces mechanisms to promote early intervention and risk management by agencies rather than relying on people making complaints after a privacy breach has already happened.

Mandatory notification

An important part of the new Act is mandatory privacy breach notification.

If an organisation or company has a breach that poses a risk, they are now required by law to notify the Privacy Commissioner and tell anyone affected.

The new Act also strengthens the role of the Privacy Commissioner.

The commissioner can issue a compliance notice telling data users to get their act together and comply with the Act. If they don’t, the commissioner can fine them up to $10,000.

Another update covers businesses and organisations that handle a New Zealander’s private data overseas. They must ensure whoever gets that information provides the same level of protection as New Zealand.

The rules apply to anyone. They don’t need to have a New Zealand physical presence. Yes, that means companies like Facebook.

There are also new criminal offences. It’s now a crime to destroy personal information if someone makes a request for it.

According to Botsight, I am “almost certainly a bot”. Or at least my Twitter account is.

Botsight says it uses artificial intelligence to decide if there is a human or a bot behind a Twitter account. The software was developed by NortonLifeLock, which was formerly part of Symantec.

The goal is to help fight disinformation campaigns. It’s hard to argue with the sentiment behind this.

Botsight in a browser

You install Botsight as a browser extension. NortonLifeLock says it works with the major browsers. It turns out that mainly means Chrome. There’s no support for Safari, and when I first tested the Firefox version it wasn’t delivering. These things happen with beta software. It’s no big deal.

Then, when Twitter is running in your browser, Botsight flags whether an account is likely to be human or a bot. You have to use the official Twitter website. A green flag shows an account that is likely to be human; red tells users to be wary.

The flags also show percentages. In my case the score is 80 percent; that’s enough for alarm bells to ring.

As Botsight says, I’m “almost certainly a bot”.

Botsight report

The developers say they collected terabytes of data then looked at a number of features to determine if an account is human or not. The software uses 20 factors to make this decision.

More AI nonsense

NortonLifeLock says its AI model detects bots with a high degree of accuracy. It’s a typical AI claim and like many of them, doesn’t stand up too well when tested in the real world.

No doubt a lot of Botsight users who encounter my Twitter wit and wisdom will assume the worst.

It’s not going to happen, but that could be grounds for a defamation action. Sooner or later someone is going to sue a bot for character assassination.

Like it says at the top of the story I’m on the wrong side of this equation.

What gives?

I asked NortonLifeLock how come I’m identified as a bot. Daniel Kats, the principal researcher at NortonLifeLock Research Group, says there are three main reasons.

The first is my Twitter handle: @billbennettnz.

Kats writes:

“The reporter’s handle is quite long, and contains many “bigrams” (groups of two characters) that are uncommon together. This is a sign of auto-generated handles (ex. lb, tn, nz). It’s also quite a long handle, which in our experience is common of bots.”

I didn’t have much choice here. My given name includes that tricky LB combination. I doubt changing Bill to William would have made any difference.

There are a lot of other Bill Bennetts in the world. Others got to the obvious Twitter handles first. Mine tells people I’m in New Zealand. Trust me, the alternatives look more bot-like.

The only practical way to change this is to kill the account and start Twitter again from scratch. It is an option.

Following too many

Botsight’s second alarm is triggered by my follow-to-follower ratio. It turns out that following 2888 people is “an unusually high number, especially in relation to the number of followers”. Kats says it is not common for a human to follow that many others.

Well, that’s partly because I use Twitter to follow people who might be news sources.

The idea of letting bots or AI bot detectors dictate behaviour bothers me. Yet, if Botsight thinks I’m a bot, it’s possible other researchers and analytical tools looking at my account think so too. We can’t have that. Perhaps I should cull my follow list.

So, please don’t take offence if you’re unfollowed. I need to look more human. Only up to a point. On one level I don’t care what a piece of software thinks about me. On another, a fair bit of work comes to me via the Twitter account, so it may need a bit more care and attention.

Not enough likes

The third sign that I’m a bot is that my number of favourites is low. Favourite is the official Twitter term for liking a tweet. Apparently I don’t do this as much as other humans.

On the other hand, I link to a lot of web posts. Linking lots and not favouriting much is, apparently, a sign of a bot.

The Botsight software could take note that I often get involved in discussion threads on Twitter. That’s something that a human would do, but would be beyond most bot accounts.
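The three signals Kats describes can be sketched as a toy scoring function. To be clear, everything below is my own invention for illustration: the bigram list, weights and thresholds are made up, and the real Botsight model reportedly uses around 20 factors that NortonLifeLock has not published.

```python
# Toy bot-likelihood score based on the three signals described above.
# All weights, thresholds and the bigram list are invented for illustration.

COMMON_BIGRAMS = {
    "th", "he", "in", "er", "an", "re", "on", "at", "en", "nd",
    "ti", "es", "or", "te", "of", "ed", "is", "it", "al", "ar", "to",
}

def rare_bigram_fraction(handle: str) -> float:
    """Fraction of adjacent character pairs that are uncommon in English."""
    pairs = [handle[i:i + 2].lower() for i in range(len(handle) - 1)]
    if not pairs:
        return 0.0
    rare = sum(1 for p in pairs if p not in COMMON_BIGRAMS)
    return rare / len(pairs)

def bot_score(handle: str, following: int, followers: int, likes: int) -> float:
    """Combine the three signals into a 0-1 score. Purely illustrative."""
    score = 0.0
    if len(handle) > 10:                         # long handles look auto-generated
        score += 0.2
    score += 0.4 * rare_bigram_fraction(handle)  # odd pairs like "lb", "tn", "nz"
    if followers and following / followers > 2:  # following far more than followed
        score += 0.2
    if likes < 100:                              # bots rarely "favourite" tweets
        score += 0.2
    return min(score, 1.0)

print(round(bot_score("billbennettnz", 2888, 1000, 50), 2))  # → 0.97
```

Run against numbers like mine, even this crude model cries bot, which is roughly the point: simple statistical heuristics will always misfire on humans with unusual handles and habits.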

From the bot’s mouth:

Well, there you have it. I’m a bot. Perhaps that means I should put my freelance rates up.

Of course, any AI model is only as good as the assumptions that are fed into it. This is where lots of them fall down. We’ve all heard stories of AI recruitment tools or bank loan tools that discriminate against women or minorities. Bias is hard coded.

This is nothing like as bad. On a personal level I’m not unduly worried or offended by Botsight. Yet it does give an insight into the power and potential misuse or misinterpretation of AI analysis.

Laurence Millar:

I do all my banking, travel booking, shopping and communicating online.  Surely in the 21st century, I should be able to vote online? If you are voting to elect the president of your sports club, then online voting is convenient and easy. But it should never be used to elect our government[…]

Source: Online voting? No thanks! – NZRise

It’s comforting to see someone as knowledgeable and experienced in government computing as Laurence Millar chooses to speak out about the dangers of online voting.

He makes all the points you might expect: the risks are too high and the rewards for ratbags are too tempting. We know for certain that criminals and unfriendly governments have intervened in election campaigns. Some even boast about it. So it’s realistic to assume they will turn their attention to an actual vote.

The reality is almost no computer system is foolproof. And few are immune from attackers who are prepared to throw enough resources at breaching security.

But there’s more. Millar writes:

…the chimera of manipulated votes is in itself sufficient to undermine confidence in the result of the election.

And this is just as likely to be the goal of those who would attack elections. Yes, they’d love to manipulate the vote. But they also want to undermine the very idea of a democratic vote.

This suits their purposes almost as much.

Millar’s other points are all valid. It’s worth reading the original post.

Yet something else bothers me about the idea of an online election in New Zealand. Typically projects of this nature are put out to tender and awarded to the lowest bidder.

Tender writers may talk about how the project won’t just go to the cheapest bid, but also about the values, privacy, security and yada, yada, yada that need to be embodied in the system.

We all know the reality. Lower prices win.

We’ve seen this time and time again. Tender responses may be full of piety and goody two-shoes language about protecting this and respecting that.

Words are cheap.

When push comes to shove, saving a few bucks here and there will impress the organisation issuing the tender more than anything else.

It always does.

And even if money is no object and the first tender goes to a first class bidder who does everything right, when it comes up for renewal someone else will be purchasing.

Or the next time. Or the time after that.

Sooner or later cheapskates or, just as bad, companies that are better at lobbying governments than delivering on promises will get the job.

Before you know it there will be an argument for, say, using an overseas cloud provider or a well known brand that hasn’t done a sterling job managing its own digital security in the past.

It is in the nature of these things. Sooner or later we are disappointed.

A virtual private network has its uses. But only in limited and narrow cases.

Most people don’t need a VPN. That won’t stop advertisers barraging you with scare stories.

The Electronic Frontier Foundation makes the point in Why public Wi-fi is a lot safer than you think: widespread use of HTTPS encryption means a virtual private network is often overkill.

“In general, using public Wi-Fi is a lot safer than it was in the early days of the Internet. With the widespread adoption of HTTPS, most major websites will be protected by the same encryption regardless of how you connect to them.”

If you are still scared of public Wi-fi, use a mobile data connection instead. It is far more secure and works out far cheaper in the long term.

Digital snake oil

VPNs are often sold to people who don’t need them. For most users they are digital snake oil. You might as well buy a charm to ward off evil spirits.

Companies selling virtual private network services charge a lot for not much. They are cheap to set up. Which means VPN margins are high. It’s a lucrative business.

If you are tech savvy you could build your own. It isn’t hard.
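For the technically inclined, a self-hosted setup might look like the minimal WireGuard client configuration below. This is a sketch under assumptions, not a recommendation of any particular software: the address, endpoint and keys are placeholders you would generate yourself.

```ini
# /etc/wireguard/wg0.conf on the client — every value is a placeholder
[Interface]
# Generate the key pair with: wg genkey | tee private.key | wg pubkey
PrivateKey = <client-private-key>
# Tunnel address assigned by your own server
Address = 10.0.0.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = <server-public-key>
# Your own server, not a commercial VPN endpoint
Endpoint = vpn.example.com:51820
# Route all traffic through the tunnel
AllowedIPs = 0.0.0.0/0
```

Bring the tunnel up with `wg-quick up wg0` and traffic flows through a server you control, rather than through a VPN company you have to trust.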

Although most people don’t need VPNs most of the time, a minority do.

Helpful when government is repressive

Say you live in or travel to a place where the government restricts internet activity. A VPN can help. In effect it digs a tunnel for your data to pass through firewalls and other digital obstacles.

At least, they do that until the government concerned cracks down on VPNs.

On my first visit to China a VPN helped me get around internet restrictions.

With a VPN I could use Gmail and Outlook.com to send mail. It let me connect to Google and popular social networks. I used it to connect to my WordPress account. There was no problem using iCloud or OneDrive with the VPN switched on.

None of this worked if I switched off my VPN.

What happens in China stays in China

By the time I returned two years later, China was better at frustrating the VPN.

My VPN’s activity was erratic. It disconnected again and again. Some of the time it didn’t work at all. It’s reasonable to assume governments have since become better at defeating VPNs.

That’s not to say a VPN isn’t useful in these circumstances. Governments tend to be more concerned about restricting their citizens. Overseas visitors are not the main target, so governments may tolerate some use.

Although I couldn’t use my VPN on public networks on my last China trip, I could use it from my hotel room.

Big end of town

You may also need a VPN if you work for a large corporation. They may insist you use a VPN when connecting to the digital mothership. Corporations can be targets for online criminals. Insisting on a VPN may reduce the threat.

HTTPS encrypts data end-to-end. People watching don’t know what’s going on in your messages, but they can view your metadata.

In other words, they know which sites you visit, but not the pages on a site. Metadata may be all a criminal needs to find vulnerabilities if they have other parts of the jigsaw.

This argument doesn’t apply when you use your device to check your bank balance or read Gmail. Knowing you’ve connected to Westpac or Gmail isn’t that helpful to a criminal.
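The split between visible metadata and encrypted content is easy to picture. As a rough sketch: on a real HTTPS connection the hostname leaks through DNS lookups and the TLS SNI field, while the path, query string and page content travel inside the encrypted session. The URL below is a made-up example.

```python
# Illustration of what a network observer sees on an HTTPS connection.
# The hostname travels in the clear (DNS, TLS SNI); the path and query
# are inside the encrypted session.
from urllib.parse import urlsplit

def visible_to_observer(url: str) -> dict:
    parts = urlsplit(url)
    hidden = parts.path + ("?" + parts.query if parts.query else "")
    return {"visible": parts.hostname, "encrypted": hidden}

print(visible_to_observer("https://example.com/accounts/balance?year=2020"))
# → {'visible': 'example.com', 'encrypted': '/accounts/balance?year=2020'}
```

So an observer learns that you visited example.com, but not which account page you looked at — which is why HTTPS blunts most of the public Wi-fi scare stories.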

Geo-blocking

A second practical VPN application is bypassing geo-blocking.

Bypassing a block doesn’t have to be illegal. There are legitimate reasons to do this. And there are activities that are, well, let’s say ambiguous.

Services like Netflix negotiate content rights on a territory by territory basis.

Say your favourite TV show is available to US Netflix customers but not in New Zealand.

A VPN can make your connection appear to be coming from wherever you choose. To Netflix, a New Zealand customer may appear to be in the US.

Using a VPN terminating in the US makes it look as though you live there. Some streaming services don’t ask questions if you use a New Zealand credit card to subscribe. Others do. There’s a wealth of expertise around the subject of getting past geo-blocks1.

Pirates, criminals, persons of interest

Pirates use VPNs to hide their illegal activities from authorities. There is no grey area here, piracy is illegal. By using a VPN their ISP has no idea what is going on, nor do the authorities.

There are worse criminal online acts where a VPN can cover the tracks, up to a point. One thing to keep in mind is that anyone looking hard enough can tell a VPN is being used.

Not all VPNs are created equal. Some are trustworthy, even if the sales pitch might be a touch insincere. Take extra care with free VPNs. They are often data gathering exercises. A free VPN may hide your information from your ISP and the authorities, but it is being stored elsewhere. These ratbags then share your data with other companies.

Some free VPNs are criminal in intent. As is often the case, the worst examples are in the Android world. Some Android VPNs push malware on to your computer.

“In 2017, researchers from Australia, the UK, and the US studied 234 VPN applications available on the Google Play Store. They discovered that more than a third of these apps used malware to track users’ online behaviour.”

CISO Magazine.

See also 29 VPN Services Owned by Six China-Based Organizations.

Virtual private network overview

At this point there’s little practical advice to offer readers other than “be wary of free VPNs”. If you are squeaky clean, don’t deal in secrets and don’t travel to locked down countries you don’t need a VPN. If you think you do need one, take care. It’s a minefield out there.

 


  1. Go and look elsewhere. It’s not hard to find.