

User testing: the panic room beckons

Since Richard Jones’s post about SWORD and this project, there have been no blog posts here. That’s because we were supposed to be in user testing purdah. In fact, the testing process took a break in summer 2011 and did not resume as planned to test some recent updates that enable DepositMO clients to work with DSpace.

With user testing you have to be careful (and you can never be too careful in such matters, as our extended break will attest) not to pre-empt or prejudice the results with prior or leading information.

Having belatedly realised that user testing is now complete, we can resume the story of the project, which set out to change the Modus Operandi of repository deposit by enabling authors to deposit in-progress work directly from their preferred desktop applications. There is a lot to catch up on, so we are grateful to Balviar Notay and JISC for allowing an extended project schedule and the chance to complete the story.

Testing comes towards the culmination of that story, so before we get to the results we have to rewind to understand what we are testing and why. Ahead of a full paper, we will build that story in a series of blog posts here. You can expect these posts to cover background, the tools being tested, method and anything else needed to understand what we have been doing, as well as results and conclusions – all the things you would expect in a formal report of the work.

First, it may help to understand the effects of testing on the people involved, and the careful lines that have to be trodden in development and user testing.

There is a saying in commerce that the customer is always right, even if the commercial partner is saying this through gritted teeth. The equivalent on the Web is that the user is always right. For years Jakob Nielsen’s Alertbox newsletter on Web usability has shown that Web site developers ignore this at their peril. Users don’t have to answer for their decisions when it comes to using a Web site, or not. With a poorly designed Web site or service, what you will notice is that those users disappear. That is a ruthless outcome.

One way to avoid this, before it is too late, is user testing. When it comes to testing, the outcome can be just as ruthless and unforgiving, but in addition you are giving voice to, and recording the verdict of, the tester.

It can work the other way, of course: you may have a brilliant product and testing simply reinforces that. It is more likely, though, that the brilliant product you think you have created will emerge only once refinements, learned the hard way from user testing, have been implemented.

Remember, in the line of fire here is the developer of the product or service being tested. Good developers may pour their heart and soul into what they are developing, only for this to be crushed by testing. It’s easy to say developers have to be thick-skinned, but it is also understandable if their reaction is to dismiss and diminish the tester. What matters is that they understand that the tester, if properly chosen, should be representative of their next user, and that the user will not change a negative opinion unless something else about the object under test changes.

It’s important to distinguish, in the case of typical Web services, between software testing and user testing. Developers do software testing themselves, running their code through machines and debugging. This is not about code testing, however. Machines are more pliable and predictable than real users.

In this project we are fortunate to have others who are not developers to manage user testing, and take the developers out of the immediate test firing line. So here I am, test moderator and reporter, standing in that uneasy space between user and developer.

Don’t assume that makes the process less stressful. To find out what doesn’t work about the subject under test, it is essential that everything else does work. The test process has to be designed and appropriate for the purpose; users have to be recruited; venues found and booked; documentation produced and systems set up; all have to be ready on time. This is no different from any other meeting, except that where most meetings can recover from organisational oversights, a single mistake can invalidate your test and all the work that has gone into it. Users can be fickle, and so can developers when offered the opportunity to pass off any unfavourable result to other user factors.

On the eve of each test I review every arrangement to try to anticipate what can go wrong – every little boring detail. If a sense of panic fills emails sent at this stage, it’s because there is panic, until all potential fault lines are closed. This often continues until the moment before the test, when everyone and everything is finally and demonstrably ready. Only when the test is underway can you relax, a little.

A test moderator seeks to navigate between objective and outcome, through iteration if necessary, trying above all not to be blinded by allegiance, personality or self-interest, but laying the foundation for objective decisions about what to do next, whether by us or by others.

With all that understood, in the next post we will begin the story of the repository deposit tools produced by DepositMO, and how they fared in front of test users. What I can tell you now is that we had no disasters in organising the tests, from my point of view, and I think I would have known. Therefore, like an auditor, I shall present the report as a valid representation of the tools under test. Whether that survives the scrutiny of the other participants and vested interests, we shall see as the story unfolds, but rest assured I am covering my back there as well.

Next, we will start to learn more about the tools under test.
