
According to Botsight, I am “almost certainly a bot”. Or at least my Twitter account is.

Botsight says it uses artificial intelligence to decide if there is a human or a bot behind a Twitter account. The software was developed by NortonLifeLock, which was formerly part of Symantec.

The goal is to help fight disinformation campaigns. It’s hard to argue with the sentiment behind this.

Botsight in a browser

You install Botsight as a browser extension. NortonLifeLock says it works with the major browsers. It turns out that mainly means Chrome. There’s no support for Safari and when I first tested the Firefox version that wasn’t delivering. These things happen with beta software. It’s no big deal.

Then, when Twitter is running in your browser, Botsight flags whether an account is likely to be human or a bot. You have to use the official Twitter website. A green flag shows an account that is likely to be human; red tells users to be wary.

The flags also show percentages. In my case the score is 80 percent. That’s enough for alarm bells to ring.

As Botsight says, I’m “almost certainly a bot”.

Botsight report

The developers say they collected terabytes of data then looked at a number of features to determine if an account is human or not. The software uses 20 factors to make this decision.

More AI nonsense

NortonLifeLock says its AI model detects bots with a high degree of accuracy. It’s a typical AI claim and like many of them, doesn’t stand up too well when tested in the real world.

No doubt a lot of Botsight users who encounter my Twitter wit and wisdom will assume the worst.

It’s not going to happen, but that could be grounds for a defamation action. Sooner or later someone is going to sue a bot for character assassination.

Like it says at the top of the story, I’m on the wrong side of this equation.

What gives?

I asked NortonLifeLock how come I’m identified as a bot. Daniel Kats, the principal researcher at NortonLifeLock Research Group, says there are three main reasons.

The first is my Twitter handle: @billbennettnz.

Kats writes:

“The reporter’s handle is quite long, and contains many “bigrams” (groups of two characters) that are uncommon together. This is a sign of auto-generated handles (ex. lb, tn, nz). It’s also quite a long handle, which in our experience is common of bots.”

I didn’t have much choice here. My given name includes that tricky LB combination. I doubt changing Bill to William would have made any difference.

There are a lot of other Bill Bennetts in the world. Others got to the obvious Twitter handles first. Mine tells people I’m in New Zealand. Trust me, the alternatives look more bot-like.

The only practical way to change this is to kill the account and start Twitter again from scratch. It is an option.
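Botsight hasn’t published its model, but the handle heuristic Kats describes is easy to sketch. In this toy Python version the “rare bigram” list and the length threshold are invented for illustration; a real detector would learn bigram frequencies from the terabytes of data the developers mention:

```python
# Toy sketch of the bigram heuristic Kats describes.
# The rare-bigram set and length threshold are made up for illustration;
# a real model would learn bigram frequencies from training data.

RARE_BIGRAMS = {"lb", "tn", "nz", "xq", "zx", "qj"}  # hypothetical list

def bigrams(handle: str) -> list:
    """All overlapping two-character groups in a handle."""
    h = handle.lower().lstrip("@")
    return [h[i:i + 2] for i in range(len(h) - 1)]

def handle_suspicion(handle: str, max_len: int = 12) -> float:
    """Toy score: share of rare bigrams, nudged upward for long handles."""
    grams = bigrams(handle)
    if not grams:
        return 0.0
    rare = sum(1 for g in grams if g in RARE_BIGRAMS)
    score = rare / len(grams)
    if len(handle.lstrip("@")) > max_len:
        score += 0.1  # long handles are more bot-like, per Kats
    return min(score, 1.0)

print(round(handle_suspicion("@billbennettnz"), 2))  # → 0.35
```

Run against @billbennettnz, even this crude version trips over the lb, tn and nz pairs Kats mentions, while a short common name like @bob scores zero.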

Following too many

Botsight’s second alarm is triggered by my follow to follower ratio. It turns out that following 2888 people is “an unusually high number, especially in relation to the number of followers”. Kats says it is not common for a human to follow that many others.

Well, that’s partly because I use Twitter to follow people who might be news sources.

The idea of letting bots or AI bot detectors dictate behaviour bothers me. Yet, if Botsight thinks I’m a bot, it’s possible other researchers and analytical tools looking at my account think so too. We can’t have that. Perhaps I should cull my follow list.

So, please don’t take offence if you’re unfollowed. I need to look more human. Only up to a point. On one level I don’t care what a piece of software thinks about me. On another, I get a fair bit of work coming to me via the Twitter account, so it may need a bit more care and attention.

Not enough likes

The third sign that I’m a bot is that my number of favourites is low. Favourite is the official Twitter term for liking a tweet. Apparently I don’t do this as much as other humans.

On the other hand, I link to a lot of web posts. Linking lots and not favouriting much is, apparently, a sign of a bot.

The Botsight software could take note that I often get involved in discussion threads on Twitter. That’s something that a human would do, but would be beyond most bot accounts.

From the bot’s mouth

Well, there you have it. I’m a bot. Perhaps that means I should put my freelance rates up.

Of course, any AI model is only as good as the assumptions that are fed into it. This is where lots of them fall down. We’ve all heard stories of AI recruitment tools or bank loan tools that discriminate against women or minorities. Bias is hard coded.

This is nothing like as bad. On a personal level I’m not unduly worried or offended by Botsight. Yet it does give an insight into the power and potential misuse or misinterpretation of AI analysis.

Many US systems that process unemployment claims still run on COBOL, a 60-year-old programming language that relatively few coders understand.

IBM plans to offer free COBOL training to address overloaded US unemployment systems:

“IBM is releasing a free training course next week to teach the 60-year-old programming language COBOL to coders. It is also launching a forum where those with knowledge of the language can be matched with companies in need of help maintaining their critical systems.

“The moves come in response to desperate pleas by state governors for anyone with knowledge of COBOL to volunteer their time to help keep unemployment systems functioning, a critical need as the coronavirus has resulted in an unprecedented surge in people being laid off and having to claim unemployment benefits.”

Younger readers may be mystified by this. COBOL is a programming language from the early 1960s. It was mainly used to write business applications1 on mainframe computers.

It wouldn’t be accurate to say COBOL is like Basic, but there are some similarities. If you’ve used Basic, especially one of the earliest forms, you’d recognise some of the concepts.

Printing numbers, text

One of its big features back in the day was the ability to carefully place printed numbers and text in the right places on forms… so that computers could, among other things, write paper cheques.

This was a strength, but, in a sense it turned out to be a weakness, as we shall see later.

Another big feature of COBOL, something newer programming languages might learn from, is that it is self-documenting. That is, it is easy to write code in a way that other programmers can follow. This was important as it meant others could pick up the pieces at any point.

In 1975 I studied Computer Science A Level at a technology college in the UK. We mainly worked in Fortran, another early language, and Assembler, which is a kind of halfway house between machine code and a formal language. There was some Basic and some COBOL.

COBOL obsolete in the 1970s… sort of

At that time one of the Computer Science lecturers introduced us to COBOL. I remember him saying that the language was still important: if we were going to end up as programmers, it would probably be what we used first. However, he went on to say COBOL probably wouldn’t be around much longer.

That was 45 years ago. It is still going strong.

The headline mentions that COBOL is back from the dead again. That’s because it had a comeback in the late 1990s. Much of the problem known as the millennium bug, or Y2K, was to do with COBOL code.

In the late 1990s, many important programs, especially those used for billing systems and for government, used COBOL. The printing features had standard ways of handling numbers and, in particular, dates.

From memory, correct me if this is wrong, COBOL could handle the year number in more than one format, but almost every programmer stuck to using just two digits to show the year. So they represented 1975 as 75. When the year 2000 rolled around, most COBOL programs interpreted this as 1900. The error would then cause problems elsewhere in a program.
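That failure mode, and the “windowing” patch that deferred it, can be sketched in a few lines. This is Python rather than COBOL, and the pivot year of 20 is an illustrative assumption (real systems chose different pivots), but the logic mirrors what much of that era’s code effectively did:

```python
def parse_two_digit_year(yy: str) -> int:
    """Mimic a pre-Y2K program: assume every two-digit year is 19xx."""
    return 1900 + int(yy)

def parse_with_window(yy: str, pivot: int = 20) -> int:
    """A common Y2K patch: years below a pivot belong to 20xx instead."""
    y = int(yy)
    return (2000 if y < pivot else 1900) + y

# Stored as "75", read back correctly for decades...
assert parse_two_digit_year("75") == 1975

# ...until the rollover: "00" comes back as 1900, not 2000.
assert parse_two_digit_year("00") == 1900

# The windowed fix handles 2000 correctly...
assert parse_with_window("00") == 2000

# ...but only defers the bug: once the pivot year arrives,
# "20" reads as 1920 again.
assert parse_with_window("20") == 1920
```

The asserts pass silently, which is the point: windowed code looked fixed, yet with a pivot of 20 the same misreading simply reappears two decades later.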

COBOL’s second coming

So in the late 1990s, industry dragged COBOL programmers out of retirement to fix up the software some of them had written in the first place. I fielded calls from recruiters offering me huge money to work on fixing COBOL code.

Keep in mind that I have never worked as a programmer. I did one year of university Computer Science before realising this was not the career for me. And my only brush with COBOL was a few lectures during the mid 1970s. This tells you just how desperate businesses were to get their code fixed in the late 1990s.

Now COBOL is back in the spotlight as many US unemployment systems still use the language. They are busy right now with US unemployment rates going through the roof. IBM is training people to fix COBOL code.

Undead software

Here’s the weird thing about COBOL. It wasn’t even remotely cutting edge technology by 1980, yet a lot of mainframe programs depended on it. Companies rarely reworked programs with more modern programming languages. Instead they employed teams to keep maintaining and tweaking ancient code. Over time they dropped many old programs for newer technology, but not at the pace you might imagine.

Because everyone knew COBOL was past its sell-by date, people didn’t worry about the looming Y2K problem until the late 1990s. They assumed the old code would be replaced by then. But it wasn’t. Those programs kept churning on.

I found this reference at the Six Colors website:

Back in 2015 my pal Glenn Fleishman found the oldest computer program still in use for MIT Technology Review, and guess what:

In 1958, the United States Department of Defense launched a computerized contract-management system that it dubbed Mechanization of Contract Administration Services, or MOCAS (pronounced “MOH-cass”). The system was designed to use the latest in computation and output technology to track contracts in progress and payments to vendors.

Fifty-seven years later, it’s still going…. MOCAS is written in COBOL.

On and on

That same attitude applied in the late 1990s. Programmers patched code on the assumption that the clunky old COBOL programs that had made it to 2000 were not going to stay in use much longer. This led to another problem.

It’s not only COBOL that carried on this way. Microsoft Windows retained many original lines of code from the 1980s well into this millennium. I’m not sure if it is still true today, but for decades Microsoft Word included segments of ancient code. In many cases the app didn’t actually use the code, but developers were often unwilling to remove it until the next big clean up.


  1. In those days people rarely talked of applications. If they did, it was more often about the application of the computer to a task than a synonym for program. Likewise, the people who wrote programs were always programmers, I don’t recall hearing the term developer until well into the PC era. Again, if you remember otherwise, feel free to comment. I’m not infallible. Nor is my memory. ↩︎

If I didn’t promise to help you out in the next sentence, you’d probably have to look up skeuomorphism in a dictionary.

In simple terms the word means something designed to resemble the thing that used to do the job.1

The word may be unfamiliar. The idea is not.

Take the old Macintosh Address Book app. Before Apple modernised its software, the Address Book app looked like a paper address book.

You might also remember when computer operating system desktops had waste paper bin or trash can icons to tell you this is where you throw things away.

Skeuomorph central

The smartphone is skeuomorph central. Every iPhone has icons showing a torch, a telephone handset, a camera and so on. What each of these does is obvious. The envelope icon isn’t quite so apparent, yet you don’t need a PhD to figure out it is for email. Android phones have similar skeuomorphs.

Skeuomorphs don’t have to be software. Houses might have cladding where manufacturers made the building material resemble wooden boards or brick.

Soon electric vehicles in Europe will have to make noises so that pedestrians and others get an audio cue to take care.

Understanding

The idea behind skeuomorphism is that it helps you to better understand what you are looking at. It’s a visual clue telling you the purpose of the object. You see something familiar and, bingo, you know what that thing is going to do.

There’s a special breed of skeuomorph where the visual cue lives on long after the original item has disappeared from use.

Mr flippy floppy

Perhaps the best known is the floppy disk icon you sometimes see used to indicate the save function.

It’s getting on for 20 years since computers had built-in floppy disk drives. An entire generation has entered the workforce without ever having seen a floppy disk in action. And yet, everyone knows what that image is supposed to mean.

No doubt you have heard stories of young people encountering a real floppy disk for the first time. While they may not know what the item is, or how it is used, they often recognise it from the icon.

Time to put skeuomorphism to bed

While the thinking behind skeuomorphism makes sense, as far as software and operating systems go, its best days are in the past. Skeuomorphic designs are often fussy and ugly. They clutter things up. The images are often meaningless and what is represented is not always clear cut.

Yet there’s a Catch-22 here. I prefer minimalist design. It’s easier to focus on the job in hand when the software stays out of the way. I was about to say that when I’m writing, I prefer to start with a blank sheet of paper. Which is, of course, itself a skeuomorph.


  1. My Mac’s dictionary says: An object or feature which imitates the design of a similar artefact made from another material. ↩︎

Microsoft Word is my fourth favourite writing tool1.

I rarely use Word to write stories or blog posts. Yet, I never hesitate to renew my Office 365 subscription.

It sounds contradictory, but that NZ$165.00 Office 365 subscription is good value. That’s true even though I don’t use Word to write and I almost never open Excel. I go out of my way to not let PowerPoint into my life.

At this point you might think this is throwing money away. Open source fans reading this will be aghast.

But there is a method in my madness. Writing is my work. A typical year’s work is 250,000 words. My writing output was even higher for a few years. After more than 40 years in the business, I’ve written, and publishers have paid me for, at least 10 million words.

Most, not all, of the time, I’m paid by the word. Which means my ability to produce quality writing puts food on my table and a roof over my head.

Writing is talking to people, researching, checking then putting it all into words. Sometimes it is about revising my own work or dealing with words others have sent to me.

Microsoft Word is not optional

Like it or not, Microsoft Word is the lingua franca of digital writing. Almost everyone in the business uses it. It’s been more than two decades since an editor expected copy in anything other than Word format.

At this point, people who dislike Word might be thinking: “Yes, but everything else can save in a Word format. So it isn’t necessary to buy a subscription”.

They have a point. Except that sooner or later, something doesn’t convert between Word and another format.

The most troublesome issue is with edits marked using Microsoft’s Track Changes feature.

Yes, many non-Word writing applications can understand and deal with Track Changes markups. But this is not always straightforward.

The cost to me of failing to deal fast with one edit incident can be greater than the subscription price. It’s rare, but over 250,000 words, it happens a few times every year.

The value is even better when you take into account that the subscription includes licences for other family members. In effect my personal subscription costs 25 percent of $165: that’s $40 plus change.

Don’t go there

Even a quick dive down the troubleshooting rabbit hole costs more than that.

Multiply this by the two or more incidents a year and you can see that paying the subscription leaves me ahead. It’s a solid investment.

Open source fans tell me this attitude is wrong and that I’m paying a tax or even a ransom to Microsoft to be able to work. You could see it this way.

Yet it isn’t Microsoft that is holding me to ransom, it is the editors and publishers who commit to Word. If everyone accepted plain text2 I wouldn’t need to pay the fee.

It might be better to frame the fee as paying for membership of the hireable professional writers club. Either way, it’s a bargain.


  1. In my world it ranks behind IA Writer, Byword and Pages. ↩︎
  2. Text was fine for a long time. That changed about 20 years ago. ↩︎

Technology commentator Bill Bennett looks at how the millennium bug is back – because it never exactly went away. In trying to solve the problem, programmers pushed it back 20 years. And time’s up. He’ll also look at how Volvo is experimenting with adding noise to near-silent EVs, after research showed pedestrians were twice as likely to be involved in an accident with an EV as with a vehicle with a traditional engine. And is working remotely back in fashion in response to coronavirus?

Source: The Y2K bug makes a comeback | RNZ

I’m on RNZ Nine-to-Noon talking about technology.