Saturday, 9 April 2005

Training day

There are many different ways to measure software to get an idea of its quality. But even though many of these metrics produce hard numbers, the answer is still soft: you can't directly compare one app's stats against another's. e.g. App A has 80% code coverage, App B has 79%, so App A is better quality than App B => wrong.

One of my favourite indicators of the usability of software is how many "training issues" an app has. That's a way of describing the gaps between how the average user thinks the software should work the first time through and how it actually works.

Assume a user base of 100. The first time an issue comes up you document it (of course) and let the user know what the workaround (proper use) is. The second or third time, you put it in an FAQ. If it happens 4 or 5 times, it's time to think about a redesign, because that part just isn't working. For any application, if 5% of users report having problems with a part of the app, that's a great indication that you designed that part wrong.
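If it helps to see that rule of thumb written down, here's a rough sketch in Python. The issue names, the report log, and the triage function are all invented for illustration; only the thresholds come from the paragraph above.

```python
from collections import Counter

USER_BASE = 100  # assumed size of the user base, as in the example above

def triage(report_log):
    """Map each training issue to a next step, based on how often it's reported."""
    counts = Counter(report_log)
    actions = {}
    for issue, n in counts.items():
        if n >= 4 or n / USER_BASE >= 0.05:
            actions[issue] = "rethink the design"   # it's just not working
        elif n >= 2:
            actions[issue] = "add to the FAQ"
        else:
            actions[issue] = "document + explain the workaround"
    return actions

# Example: five separate users tripped over the save dialog,
# two over the import wizard, one over the export button.
reports = ["save dialog"] * 5 + ["import wizard"] * 2 + ["export button"]
print(triage(reports))
# {'save dialog': 'rethink the design',
#  'import wizard': 'add to the FAQ',
#  'export button': 'document + explain the workaround'}
```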

Training, like manual testing, isn't something you want to do a lot of, because it's one of the more expensive parts of delivering software. It's always a great idea to figure out how to spend less time and effort in these areas. That's why there is automated testing. And that's why you do usability testing / beta testing: to find these "training issues" and reduce them before you move to a larger audience.

It's all about the test / feedback loop.

1 comment:

  1. ...and also, if you work in a small company all of these training and support issues cost a noticeable amount of time and money. Sometimes it's worth avoiding support issues with a little up-front usability work.