
Technology product reviews: Science and anecdotes

A scientific approach.

Tech product reviews take many forms.

Some are scientific. Others are anecdotal.

Scientific reviews involve research, prising the back from things, taking them apart and dropping them on hard surfaces. Listening to noises. Measuring everything. Running battery life tests.

You come away from these tests with numbers. Often many numbers. Maybe you’ve heard of data journalism. This is similar: you need maths and statistics to make sense of the numbers.

Scientific reviews take time. And money. You need deep pockets to test things to breaking point.

Benchmarks

Benchmarks are one reason scientific reviews take so much time. You run them again and again to make sure the results hold. You draw up meaningful, measured comparisons with rival products. Then you put everything into context.
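As a rough illustration of why repeated runs matter, here is a minimal Python sketch. It is hypothetical, not the tooling any lab used: it runs a workload several times and reports the mean and spread, so a one-off spike doesn’t distort the comparison.

```python
import statistics
import time


def run_benchmark(workload, runs=5):
    """Run a benchmark callable several times and summarise the timings.

    Repetition smooths out one-off spikes (background tasks, thermal
    throttling) so comparisons between rival products mean something.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(timings),
        "stdev_s": statistics.stdev(timings) if runs > 1 else 0.0,
        "best_s": min(timings),
        "worst_s": max(timings),
    }


if __name__ == "__main__":
    # Hypothetical workload: summing a few million integers stands in
    # for a real benchmark suite.
    print(run_benchmark(lambda: sum(range(2_000_000))))
```

A real lab would run a full benchmark suite rather than a toy workload, but the principle is the same: repeat, average and only then compare.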

We used the scientific approach when I ran the Australian and New Zealand editions of PC Magazine.

This was in the 1990s. ACP, the publishing company I worked for, invested in a testing laboratory.

We had expensive test equipment and a range of benchmarking software and tools. Specialist technicians managed the laboratory and researched new ways to make in-depth comparisons. Like the rest of us working there, they were experienced technology journalists.

The scientific approach to product reviews

My PC Magazine colleague Darren Yates was a master at the scientific approach. He tackled the job as if it were an engineering problem. He was methodical and diligent.

You can’t do that in a hurry.

There were times when the rest of my editorial team pulled their hair out waiting for the last tests to complete before a print deadline. We may have cursed, but the effort was worth it.

Our test results were comprehensive. We knew to the microsecond, cent, bit, byte or milliamp what PCs and other tech products delivered.

There are still publications working along similar lines, although taking as much time as we did then is rare today.

Publishing industry pressure

It’s not only the cost of operating a laboratory. Today’s publishers expect journalists to churn out many more words for each paid hour than in the past. That leaves less time for in-depth analysis. Less time to weigh up the evidence, to go back over numbers and check them once again.

At the other end of the scale to scientific reviews are once-over-lightly descriptions of products. These are little more than lists of product highlights with a few gushing words tacked on. The most extreme examples are where reviewers write without turning the device on — or loading the software.

Some reviews are little more than rehashed public relations or marketing material.

The dreaded reviewers’ guide

Some tech companies send reviewers’ guides. Think of them as a preferred template for write-ups. I’ve seen published product reviews regurgitate this information, adding little that is original or critical.

That’s cheating readers.

Somewhere between the extremes are exhaustive, in-depth descriptions. These can run to many thousands of words and include dozens of photographs. They are ridiculously nit-picking at times. A certain type of reader loves this approach.

Much of what you read today is closer to the once-over-lightly end of the spectrum than the scientific or exhaustive approach.

Need to know

One area reviews often handle poorly is focusing on what readers need to know.

The problem is that need-to-know differs from one audience to another. Many Geekzone readers want in-depth technical details. If I write about a device, they want to know the processor, clock speed, RAM and so on.

When I write for NZ Business, I often ignore or downplay technical specifications.

Readers there are more interested in what something does and whether it delivers on its promises. Does it work? Does it make life easier? Is it worth the asking price?

Most of the time when I write here, my focus is on how things work in practice and how they compare with similar products. I care more about whether they aid productivity than about how they get there. I like the ‘one week with this tablet’ approach.

Beyond benchmarks

Benchmarks were important when applications always ran on PCs, not in the cloud. Back then, how software, processor, graphics and storage interacted was a big part of the user experience.

While speeds and processor throughput numbers matter for specialists, most of the time they are irrelevant.

How could you, say, make a meaningful benchmark of a device accessing Xero accounts?

Ten times the processor speed doesn’t make much difference to Xero, or to a writer typing text into Microsoft Word. It is important if you plough through huge volumes of local data.

I still mention device speed if it is noticeable. For most audiences benchmarks are not useful. But this does depend on context.

Context is an important word when it comes to technology product reviews.

Fast enough

Today’s devices are usually fast enough for most apps.

Much heavy-lifting now takes place in the cloud, so line speed is often as big an issue as processor performance. That will differ from user to user and even from time to time. If, say, you run Xero, your experience depends more on the connection speed than on your computer.
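To make that concrete, here is a rough, hypothetical Python sketch, not part of my review process: it times a single round trip to a placeholder web address against a modest local computation, so you can see where the waiting actually happens. The balance will vary with your connection and your machine, which is exactly the point.

```python
import time
import urllib.request

URL = "https://example.com/"  # placeholder stand-in for a cloud service


def timed(label, func):
    """Time a callable once and print the result in milliseconds."""
    start = time.perf_counter()
    func()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")


# One network round trip to the placeholder address.
timed("network round trip", lambda: urllib.request.urlopen(URL, timeout=10).read())

# A modest local computation for comparison.
timed("local computation ", lambda: sum(range(1_000_000)))
```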

Gamers and design professionals may worry about performance, but there is little value in measuring raw speed these days.

Instead, I prefer exploring whether devices are fit for the task. Then I write about how they fit with my work. I call this the anecdotal approach to reviewing. There has been the occasional mistake; my Computers Lynx review from 40 years ago was a learning experience.

Taking a personal approach this way gives others a starting point to relate a review to their own needs. My experience and use patterns almost certainly won’t match yours, but you can often project my experience onto your needs. I’m happy to take questions in the comments if people need more information.

Review product ratings

I’ve toyed with giving products ratings in my reviews. It was standard practice to do this in print magazines. We were careful about this at PC Magazine.

A lot of ratings elsewhere were meaningless. There was a heavy skew to the top of the scale. Depending on the scale used, more products got the top or second-top ranking than any other. Few rated lower than two-thirds of the way up the scale.

So much for the Bell Curve.

If a magazine review scale ran from, say, one to five stars, you’d rarely see any product score less than three. And even a score of three would be rare. I’ve known companies to launch legal action against publications awarding three or four stars. Better than average is hardly grounds for offence, let alone litigation.

As for all those five-star reviews: were reviewers saying a large proportion of products were perfect or near perfect? That’s unlikely. For any rating system to be meaningful, you’d expect to see a lot of one and two-star ratings.

That doesn’t happen.
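To see why the published distributions look wrong, here is a small Python sketch that tallies a set of invented star ratings. The numbers are made up for illustration, not real review data, but a histogram bunched at four and five stars is the top-heavy pattern described above.

```python
from collections import Counter

# Invented ratings for illustration only; real magazine data would be
# needed to draw any conclusion.
ratings = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 4, 4, 5, 3, 4]

distribution = Counter(ratings)
total = len(ratings)

for stars in range(1, 6):
    count = distribution[stars]
    share = count / total
    print(f"{stars} stars: {count:2d} reviews ({share:5.1%}) {'#' * count}")

# A meaningful scale would show plenty of one- and two-star scores;
# a distribution bunched at the top tells readers very little.
```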

Loss aversion

Once I heard an advertising sales exec (not working on my publication) tell a magazine advertiser: “we only review the good stuff”.

That’s awful.

Readers need to know what to avoid as much as what to buy. Indeed, research on loss aversion suggests losses feel roughly twice as painful as equivalent gains.

Where possible, I like to warn against poor products. Companies that make poor products usually know better than to send them out for review, so you’ll see fewer of them here, but it can happen.

My approach to reviewing products isn’t perfect. I’d like to do more scientific testing, but I don’t have the time or resources. Often the review loan is only for a few days, so extensive testing isn’t possible. Reviews here are unpaid, which means reviewing has to take second place behind paying jobs.