Abstract
The computer industry has seen an explosive emergence of user interface management system (UIMS) toolkits in the last few years. However, there are no standards for the components of such toolkits, and no procedure for systematically evaluating or comparing these toolkits. With their proliferation, ad hoc evaluations and comparisons are constantly being done, without a formal, structured approach.
This paper will describe several of the problems involved in developing an evaluation procedure for UIMS, and will report on research that shows promise as an evaluation procedure producing quantifiable criteria for evaluating and comparing UIMS. Such a procedure could be used, for example, for choosing a UIMS for a particular human-computer interface development environment.
The procedure we have developed generates ratings for two dimensions:
- Functionality of the UIMS being evaluated, and
- Usability of the UIMS being evaluated.
Functionality refers to what the UIMS can do; that is, what interface styles, techniques, and features it can be used to produce. Usability refers to how well the UIMS does what it can do in terms of ease of use (a subjective, qualitative rating of how easy the UIMS is to use) and human performance (an objective, quantitative rating of how efficiently the UIMS can be used to perform a task).
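The two rating dimensions might, for instance, be combined into a single comparison score. The following Python sketch is purely illustrative: the `UIMSRating` structure, the 0–1 rating scales, and the weights are assumptions for exposition, not part of the procedure the paper reports.

```python
from dataclasses import dataclass

@dataclass
class UIMSRating:
    """Hypothetical container for the two dimensions described above
    (usability split into its subjective and objective components)."""
    functionality: float   # what the UIMS can produce, scaled 0..1 (assumed scale)
    ease_of_use: float     # subjective, qualitative usability rating, 0..1
    performance: float     # objective, quantitative human-performance rating, 0..1

def compare(a: UIMSRating, b: UIMSRating,
            weights: tuple = (0.5, 0.25, 0.25)) -> str:
    """Weighted comparison of two candidate UIMS; the weights are
    illustrative and would be set by the evaluating organization."""
    def score(r: UIMSRating) -> float:
        wf, we, wp = weights
        return wf * r.functionality + we * r.ease_of_use + wp * r.performance
    sa, sb = score(a), score(b)
    if sa > sb:
        return "A"
    if sb > sa:
        return "B"
    return "tie"
```

For example, a toolkit that rates highly on all three components would be preferred over one with middling ratings under any reasonable weighting.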
A significant by-product of this research is a practical taxonomy of types of human-computer interfaces, including interaction styles, features, and hardware, in addition to a taxonomy of types of interface development support provided by UIMS, and general characteristics of UIMS.
© 1990 Plenum Press, New York
Hix, D. (1990). Evaluation of Human-Computer Interface Development Tools: Problems and Promises. In: Zunde, P., Hocking, D. (eds) Empirical Foundations of Information and Software Science V. Springer, Boston, MA. https://doi.org/10.1007/978-1-4684-5862-6_18
Print ISBN: 978-1-4684-5864-0
Online ISBN: 978-1-4684-5862-6