
Bill Bennett


Tag: privacy

Five Eyes wants access to encrypted messages

New Zealand joined its Five Eyes security partners to ask social media companies like Facebook to allow access to encrypted data.

Five Eyes is a security partnership that includes the United States, Britain, Canada, Australia and New Zealand. India and Japan also took part in the move.

At first sight this looks like a continuation of a long campaign by Western governments to unravel digital encryption. I talked to Kathryn Ryan about this on RNZ Nine-to-Noon last week.

Governments say they worry that criminals and terrorists can use encryption to keep illegal online activity private. There’s no question this goes on.

Important role

The difference this time is that the governments acknowledge encryption plays an important role. It gives people privacy and enables online commerce including banking. This would be difficult to do without encryption.

When Justice Minister Andrew Little announced New Zealand’s support earlier this week he was clear that any access to encrypted data would require a warrant.

This would subject large technology companies like Facebook and Google to the same measures as local companies like Spark or Vodafone. New Zealand’s Telecommunications Interception Capability and Security Act (TICSA) means local companies must comply with proper warrants.

Hard to enforce

While New Zealand law applies to foreign technology giants, our system has little power to enforce warrants. An international agreement and a common legislative framework will make it easier for local law enforcement.

The UK and US have legislation to address this. Australia has anti-encryption legislation, which has not been effective because it can’t be enforced.

Five Eyes is not asking for carte blanche. At this stage it is making a request and asking the tech companies for their ideas.

The security partnership says it wants to embed public safety in system designs. This would let companies act against illegal content and activity without reducing user safety.

Five Eyes wants law enforcement access to content in a readable and usable format where an authorisation is lawfully issued. At the moment companies can respond to warrants with indecipherable encrypted data.

There are, as you’d expect, fears about privacy and freedom.

While we shouldn’t play these fears down, in part this is back to the question of social media companies taking more responsibility for what happens on their sites.

Encryption works

There’s a clear message here that governments remain frustrated by their inability to access encrypted material. In other words, encryption is working.

There’s a contradiction here: earlier in the week GCSB director Andrew Hampton talked about this on Nine-to-Noon. The relevant clip is the last few minutes of a long 27-minute interview.

He rightly talked about the “threat surface” and security vulnerabilities. Yet encryption is one of the best tools we have to reduce these threats and vulnerabilities.

This action is not about making tech companies give government agencies back doors into encryption. That has been discussed in the past.

Back doors are a bad idea because the moment there is an entry point for government agencies there is one for criminals and terrorists. It takes one law enforcement officer anywhere in the world to hand those keys over to a criminal.

Exam algo bias, fighting back against the boss snooping on you | RNZ

Microsoft Surface Duo, folding phone

Technology correspondent Bill Bennett joins Kathryn to talk about how the UK was forced to ditch exam results generated by a biased algorithm after student protests, how workers are fighting back against surveillance software when they’re working from home, and Microsoft’s new Surface Duo – is it a phone or a tablet? Microsoft calls it neither.

Source: Exam algo bias, fighting back against the boss snooping on you | RNZ

There’s a dystopian undertone to my session earlier today on RNZ Nine to Noon with Kathryn Ryan. There was some good discussion about the damage algorithms can do.

New Zealand’s bias challenging algorithm charter could save lives

Statistics minister James Shaw launched the Algorithm Charter for Aotearoa New Zealand. He says it is a world first.

The charter might not seem a huge deal. Yet overseas experience suggests it could save lives.

Shaw’s press release says the charter will “give New Zealanders confidence that data is being used safely and effectively across government.”

Make that: “parts of government”. The charter is not compulsory. A total of 21 government departments have signed. The biggest data users are there: Inland Revenue and the Ministry of Social Development are important. The New Zealand Defence Force has signed; the Police has not.

New Zealanders would be more confident they would not be on the wrong end of a rogue algorithm if the charter was compulsory across government.

Ethical data use

The charter draws on work by the head of Statistics NZ, Liz MacPherson. She also has the title of chief data steward. MacPherson has been working on ethical data use in government.

Last year the government looked at how it used algorithms. It decided they needed more transparency. In July it set up a Data Ethics Advisory Group.

The thinking behind the charter is sound enough. Government departments use vast amounts of data. The software used to sift that data is sometimes complex, sometimes straightforward.

This can work fine, but humans write algorithms. They can be biased or based on false premises. Algorithms can be broken. People using them can make bad decisions.

Algorithm chaos

There are plenty of stories of algorithms serving up inaccuracies and discriminatory decisions. The process is opaque, and government employees have been known to hide behind bad decisions. The logic used to feed algorithms is often kept secret from the public.

When this happens, the consequences can be dire. At times the most vulnerable members of society can be at risk.

One of the worst examples of how bad this gets is Australia’s so-called Robodebt saga. Australians who had received welfare payments were automatically sent debt notices, often without explanation, when data matching between different departments showed inconsistencies.

Many Robodebt demands were wrong. Fighting or even questioning the demands saw people descend into a Kafkaesque digital dystopia. There were suicides as a result.

Agencies signing the charter commit to explaining their algorithms to the people on the receiving end. The rules used are supposed to be transparent and published in plain English. Good luck with that one.

Fit for purpose

Elsewhere the New Zealand charter wants algorithm users to “make sure data is fit for purpose” by “understanding its limitations” and “identifying and managing bias”. It sounds good, but there is a danger public servants might push the meaning of those words to the limit.

Any agency signing the charter has to give the public a point of contact for enquiries about algorithms. The charter expects agencies to offer a way of appealing against algorithm decisions.

There’s a specific New Zealand twist. The charter asks agencies to take Māori views on data collection into account. This is important. Algorithms tend to be written by people from other cultures and Māori are disproportionately on the wrong end of bad decisions.

One area not covered in the documents published at the launch is how agencies might deal with data that is manipulated by external agencies. Given that government outsources data work, this could be a problem. There may even be cases where external organisations use proprietary algorithms.

Privacy regulation: New Zealand wants more

A survey conducted by the Office of the Privacy Commissioner found that two-thirds of New Zealanders want more privacy regulation.

Less than a third of those surveyed are happy with things as they stand. Six percent of New Zealanders would like to see less regulation.

Women are more likely than men to want more privacy regulation. The survey found Māori are more likely than others to be very concerned about individual privacy.

Business sharing private data

In general, New Zealanders are most concerned about businesses sharing personal information without permission. Three quarters of the sample worry about this. Almost as many, 72 percent, have concerns about theft of banking details. The same number has fears about the security of online personal information.

The use of facial recognition and closed circuit TV technology is of concern to 41 percent.

UMR Research conducted the survey earlier this year. It interviewed 1,398 New Zealanders.

The survey results appeared a week after Parliament passed the 2020 Privacy Act. They show the public is in broad support of the way New Zealand regulates privacy.

Most of the changes to the Privacy Act bring it up to date. Parliament passed the previous Act in 1993 as the internet moved into the mainstream. There have been huge technology changes since then.

Justice Minister Andrew Little says the legislation introduces mechanisms to promote early intervention and risk management by agencies rather than relying on people making complaints after a privacy breach has already happened.

Mandatory notification

An important part of the new Act is mandatory privacy breach notification.

If an organisation or company has a breach that poses a risk, they are now required by law to notify the Privacy Commissioner and tell anyone affected.

The new Act also strengthens the role of the Privacy Commissioner.

The commissioner can issue a compliance notice telling data users to get their act together and comply with the Act. If they don’t, the commissioner can fine them up to $10,000.

Another update covers businesses or organisations that deal with a New Zealander’s private data overseas. They must ensure whoever gets that information provides the same level of protection as New Zealand.

The rules apply to anyone. They don’t need a physical presence in New Zealand. Yes, that means companies like Facebook.

There are also new criminal offences. It’s now a crime to destroy personal information after someone has made a request for it.

Turns out I’m almost certainly a bot

According to Botsight, I am “almost certainly a bot”. Or at least my Twitter account is.

Botsight says it uses artificial intelligence to decide if there is a human or a bot behind a Twitter account. The software was developed by NortonLifeLock, which was formerly part of Symantec.

The goal is to help fight disinformation campaigns. It’s hard to argue with the sentiment behind this.

Botsight in a browser

You install Botsight as a browser extension. NortonLifeLock says it works with the major browsers. In practice that mainly means Chrome. There’s no support for Safari, and when I first tested the Firefox version it wasn’t working. These things happen with beta software. It’s no big deal.

Then, when Twitter is running in your browser, Botsight flags whether an account is likely to be human or a bot. You have to use the official Twitter website. A green flag shows an account that is likely to be human; red tells users to be wary.

The flags also show percentages. In my case the score is 80 percent. That’s enough for alarm bells to ring.

As Botsight says, I’m “almost certainly a bot”.

Botsight report

The developers say they collected terabytes of data then looked at a number of features to determine if an account is human or not. The software uses 20 factors to make this decision.

More AI nonsense

NortonLifeLock says its AI model detects bots with a high degree of accuracy. It’s a typical AI claim and like many of them, doesn’t stand up too well when tested in the real world.

No doubt a lot of Botsight users who encounter my Twitter wit and wisdom will assume the worst.

It’s not going to happen, but that could be grounds for a defamation action. Sooner or later someone is going to sue a bot for character assassination.

As it says at the top of the story, I’m on the wrong side of this equation.

Bot gives?

I asked NortonLifeLock why I’m identified as a bot. Daniel Kats, principal researcher at NortonLifeLock Research Group, says there are three main reasons.

The first is my Twitter handle: @billbennettnz.

Kats writes:

“The reporter’s handle is quite long, and contains many “bigrams” (groups of two characters) that are uncommon together. This is a sign of auto-generated handles (ex. lb, tn, nz). It’s also quite a long handle, which in our experience is common of bots.”

I didn’t have much choice here. My given name includes that tricky LB combination. I doubt changing Bill to William would have made any difference.

There are a lot of other Bill Bennetts in the world. Others got to the obvious Twitter handles first. Mine tells people I’m in New Zealand. Trust me, the alternatives look more bot-like.
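As a rough illustration of the bigram idea, here is a minimal sketch of my own devising. It is not NortonLifeLock’s code; the list of “rare” pairs is invented for the example, while the real model learns which pairs are uncommon from data.

```python
# Hypothetical sketch of bigram-based handle analysis.
# Not Botsight's actual code; RARE_PAIRS is invented for illustration.

def bigrams(handle: str) -> list[str]:
    """Return all adjacent two-character pairs in a handle."""
    name = handle.lstrip("@").lower()
    return [name[i:i + 2] for i in range(len(name) - 1)]

# Kats names "lb", "tn" and "nz" as pairs that are uncommon together,
# the kind of thing a model trained on human handles might flag.
RARE_PAIRS = {"lb", "tn", "nz"}

def rare_bigram_count(handle: str) -> int:
    """Count how many of the handle's bigrams fall in the rare set."""
    return sum(1 for b in bigrams(handle) if b in RARE_PAIRS)

print(bigrams("@billbennettnz"))
print(rare_bigram_count("@billbennettnz"))  # the handle contains all three
```

Run against @billbennettnz, the handle trips all three of the pairs Kats mentions, which is why the name alone pushes the score in the wrong direction.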

The only practical way to change this is to kill the account and start Twitter again from scratch. It is an option.

Following too many

Botsight’s second alarm is triggered by my follow-to-follower ratio. It turns out that following 2888 people is “an unusually high number, especially in relation to the number of followers”. Kats says it is not common for a human to follow that many others.

Well, that’s partly because I use Twitter to follow people who might be news sources.

The idea of letting bots or AI bot detectors dictate behaviour bothers me. Yet, if Botsight thinks I’m a bot, it’s possible other researchers and analytical tools looking at my account think so too. We can’t have that. Perhaps I should cull my follow list.

So, please don’t take offence if you’re unfollowed. I need to look more human. Only up to a point, though. On one level I don’t care what a piece of software thinks about me. On another, a fair bit of work comes to me via the Twitter account, so it may need a bit more care and attention.

Not enough likes

The third sign that I’m a bot is that my number of favourites is low. Favourite is the official Twitter term for liking a tweet. Apparently I don’t do this as much as other humans.

On the other hand, I link to a lot of web posts. Linking lots and not favouriting much is, apparently, a sign of a bot.
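To show how signals like these might combine, here is a toy heuristic of my own. Botsight’s real model uses around 20 factors and machine learning, not fixed thresholds; every weight and cutoff below is invented for illustration.

```python
# Hypothetical heuristic combining account signals into a bot score.
# Thresholds and weights are invented; this is not Botsight's model.

def bot_score(following: int, followers: int,
              favourites: int, tweets: int) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    # Signal: following far more accounts than follow back.
    if followers and following / followers > 2:
        score += 0.4
    # Signal: rarely favouriting relative to how much the account tweets.
    if tweets and favourites / tweets < 0.1:
        score += 0.4
    # Baseline uncertainty so no account scores a flat zero.
    score += 0.2
    return min(score, 1.0)

# Invented numbers loosely resembling the account described above.
print(bot_score(following=2888, followers=1000, favourites=50, tweets=5000))
```

With a high follow ratio and a low favourite rate both tripping, even this crude rule-set lands the account firmly in bot territory, which is roughly what happened to me.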

The Botsight software could take note that I often get involved in discussion threads on Twitter. That’s something that a human would do, but would be beyond most bot accounts.

From the bot’s mouth:

Well, there you have it. I’m a bot. Perhaps that means I should put my freelance rates up.

Of course, any AI model is only as good as the assumptions that are fed into it. This is where lots of them fall down. We’ve all heard stories of AI recruitment tools or bank loan tools that discriminate against women or minorities. Bias is hard coded.

This is nothing like as bad. On a personal level I’m not unduly worried or offended by Botsight. Yet it does give an insight into the power and potential misuse or misinterpretation of AI analysis.