Faculty
of Natural Sciences
Department of Computer Science
CSE2202: Human Computer Interaction I
Assignment #1
April 30th, 2022
Tariq Harris 1037257
1 Introduction
An automated usability testing tool should take into account the properties of an application's graphical user interface, the sequence of actions users perform as they complete specific tasks, their behavior and comments, and a description of those tasks. The tool should assess both the static and dynamic properties of the interface, as well as the navigational burden it imposes, and recommend changes or templates that would improve usability. The results should be quick and simple to interpret, and understandable to people who are not specialist testers.
2(a) Definition of Key Terms
- Usability - IEEE (1990) defines usability as "the ease with which a user can operate, prepare inputs for, and interpret outputs of a system or component." It consists of five major attributes, as outlined by Le Peuple and Scane (2003) and Nielsen (1993): learnability, efficiency, memorability, errors, and satisfaction.
- Usability Testing - There are numerous usability testing methodologies, each with its own advantages and drawbacks. Heuristic evaluations are relatively inexpensive, quick, and simple to perform; however, Le Peuple and Scane (2003), Scholtz (2006), and Spolsky (2001) claim that such evaluations are likely to identify only about half of the actual problems, raising a significant number of false problems while missing real ones. User testing, when done correctly, is likely to identify the majority of the major usability issues because it involves real users attempting real tasks, and it is very useful for collecting feedback on subjective aspects of usability such as satisfaction and aesthetic appeal.
- Ease of use - How easily a user can operate the different functions of the application.
- Ease of learning - How easily users can learn to use the application.
- Memorability - How easily a user can remember the application's different flows after exploring it for the first time.
- Error rate - How often users make mistakes while using the application, and how easily they can recover from those errors.
- Level of satisfaction - A subjective attribute concerning the satisfaction, or general opinion, users have about the product.
- Morae - Morae is a four-module commercial usability testing software package developed by TechSmith. The Recorder is installed on the test user's machine and records screen and system activity. The Remote Viewer allows testers to control the Recorder remotely and to view, hear, and annotate the recordings in real time. The Manager Analysis module allows testers to isolate segments of the recording and provides a search editor to locate user actions quickly; metrics such as time on task, number of clicks or pages viewed, and delay times are calculated automatically from time-stamped and indexed events on the video (a sketch of this kind of computation follows this list). The Manager Presentation module is used to edit, annotate (textually or with audio), and title video clips for use in Morae or in other programs such as Microsoft PowerPoint, optionally with videos not recorded in Morae. By facilitating the more efficient collection and use of recorded data, Morae automates much of a user test and significantly reduces the cost of conducting and analyzing it.
- Watchfire Bobby (WebXACT) - Watchfire Bobby is a popular web accessibility testing tool, with a limited version available for free online as WebXACT. Bobby navigates a website (including local pages and pages behind a firewall) to determine whether each page satisfies various accessibility requirements, such as readability by screen readers and the provision of text equivalents for all images, animated elements, audio, and video displays (a sketch of one such check follows this list).
- GUI Description - XML or other scripting languages can be used to fully describe an HDA's interface, in the same way that HTML describes the layout of web pages. The tool should include a parser to interpret such scripts and extract the data relevant for analysis (see the parsing sketch after this list).
- Recording user interaction sequences - Observing and video/audio-recording a user's interactions with an HDA, or with an equivalent working prototype, is the traditional method of capturing how a user interacts with it. The time and resources required for data collection and subsequent analysis make this one of the most expensive parts of a usability evaluation. Moreover, while most tape media are adequate for displaying images of people, they lack the resolution required to display images on computer screens, and the presence of observers, cameras, and other recording equipment frequently distracts or intimidates test users, causing them to perform poorly. A different recording procedure is clearly required; a sketch of a software-level alternative follows this list.
- User Behaviour and Comments - The traditional video recording practices provide useful footage of user behavior. Body language reveals the user's thoughts and feelings as they attempt the tasks; frowning, pauses, uncertain actions, and other expressions of confusion or frustration strongly suggest poor usability. The comments and thoughts of the users themselves are, of course, a good indicator of how usable the AUT was for the users. Although these tasks would undoubtedly be difficult to fully automate, according to Lee and Grice (2004), they should still be included in the usability evaluation of HDAs and thus the tool should aim to better support them.
- Task Definition - Providing a means for developers to input task definitions would be a significant challenge, as these can be broad in scope and either generic or highly specific to a particular HDA. For the purposes of templates or style guides, however, a more generic classification should suffice (a sketch of such a template follows this list).
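The sketch below illustrates the kind of computation Morae's Manager Analysis module performs over time-stamped events. The event-record format is entirely hypothetical (Morae's internal format is proprietary); Python is used purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical event record: Morae's actual data format is proprietary,
# so this only illustrates the kind of metric such a tool derives.
@dataclass
class Event:
    timestamp: float   # seconds since the session started
    kind: str          # e.g. "click", "page_view", "task_start", "task_end"

def time_on_task(events: list[Event]) -> float:
    """Elapsed time between the task-start and task-end markers."""
    start = next(e.timestamp for e in events if e.kind == "task_start")
    end = next(e.timestamp for e in events if e.kind == "task_end")
    return end - start

def count(events: list[Event], kind: str) -> int:
    """Number of events of a given kind, e.g. clicks or pages viewed."""
    return sum(1 for e in events if e.kind == kind)

session = [
    Event(0.0, "task_start"), Event(2.1, "click"), Event(5.4, "page_view"),
    Event(9.8, "click"), Event(14.2, "task_end"),
]
print(time_on_task(session))    # 14.2
print(count(session, "click"))  # 2
```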
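As an illustration of the sort of check Bobby/WebXACT performs, the sketch below flags img elements that lack a text equivalent (an alt attribute). This is a simplified assumption about one rule, not Bobby's actual rule set, and it uses only Python's standard-library HTML parser.

```python
from html.parser import HTMLParser

# Minimal sketch of one accessibility check: images with no alt text
# cannot be read by screen readers.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.problems.append(
                f"<img> missing alt text: {attributes.get('src', '?')}")

checker = AltTextChecker()
checker.feed('<p><img src="logo.png"><img src="photo.jpg" alt="A photo"></p>')
print(checker.problems)  # ['<img> missing alt text: logo.png']
```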
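A minimal sketch of the GUI-description parser mentioned above, assuming a purely hypothetical XML schema for an HDA interface (the element and attribute names are illustrative, not a standard):

```python
import xml.etree.ElementTree as ET

# Hypothetical GUI description; no standard schema is implied by the text,
# so these widget names are assumptions made for illustration.
GUI_XML = """
<window title="Login">
    <label text="Username"/>
    <textbox id="user"/>
    <label text="Password"/>
    <textbox id="pass" masked="true"/>
    <button id="submit" text="Log in"/>
</window>
"""

root = ET.fromstring(GUI_XML)

# Extract static properties for analysis, e.g. widget counts per type.
widget_counts = {}
for widget in root.iter():
    widget_counts[widget.tag] = widget_counts.get(widget.tag, 0) + 1
print(widget_counts)  # {'window': 1, 'label': 2, 'textbox': 2, 'button': 1}
```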
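One plausible software-level alternative to video is to instrument the AUT so that it writes time-stamped events to a log file as the user works. The sketch below is an assumption about how such a recorder could look; the action and target names are invented for illustration.

```python
import json
import time

# Sketch of a software-level recorder: instead of filming the screen,
# the instrumented application appends time-stamped events to a log.
class InteractionLogger:
    def __init__(self, path: str):
        self.path = path
        self.start = time.monotonic()

    def log(self, action: str, target: str) -> None:
        event = {"t": round(time.monotonic() - self.start, 3),
                 "action": action, "target": target}
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")

logger = InteractionLogger("session.jsonl")
# The AUT would call these from its own event handlers:
logger.log("click", "button:submit")
logger.log("keypress", "textbox:user")
```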
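A generic task-definition template along the lines suggested above might look like the sketch below; the field names and the category labels are assumptions, not an established task-description format.

```python
from dataclasses import dataclass, field

# A deliberately generic task template: specific HDAs would instantiate
# it, while templates and style guides can work from the category alone.
@dataclass
class TaskDefinition:
    name: str
    category: str                  # generic class, e.g. "data entry", "search"
    start_state: str               # screen or widget where the task begins
    success_state: str             # state that marks task completion
    expected_steps: list[str] = field(default_factory=list)

login_task = TaskDefinition(
    name="Log in",
    category="data entry",
    start_state="window:Login",
    success_state="window:Home",
    expected_steps=["textbox:user", "textbox:pass", "button:submit"],
)
```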


