J.C. Campos, C. Fayollas, M. Gonçalves, C. Martinie, D. Navarre, P. Palanque and M. Pinto
A "More Intelligent" Test Case Generation Approach through Task Models Manipulation
Proceedings of the ACM on Human-Computer Interaction, 1(EICS):9:1-9:20. 2017.

Abstract

Ensuring that an interactive application allows users to perform their activities and reach their goals is critical to the overall usability of the interactive application. Indeed, the effectiveness factor of usability directly refers to this capability. Assessing effectiveness is a real challenge for usability testing, as usability tests only cover a very limited number of tasks and activities. This paper proposes an approach towards automated testing of the effectiveness of interactive applications. To this end we resort to two main elements: an exhaustive description of users’ activities and goals using task models, and the generation of scenarios (from the task models) to be tested over the application. However, the number of scenarios can be very high (beyond the computing capabilities of machines) and we might end up testing multiple similar scenarios. In order to overcome these problems, we propose strategies based on task model manipulations (e.g., manipulating task nodes, operator nodes, information ...) resulting in a more intelligent test case generation approach. For each strategy, we investigate its relevance (both in terms of test case generation and in terms of validity compared to the original task models) and we illustrate it with a small example. Finally, the proposed strategies are applied to a real-size case study, demonstrating their relevance and validity for testing interactive applications.


@article{CamposFGMNPP:2017,
 author = {J.C. Campos and C. Fayollas and M. Gonçalves and C. Martinie and D. Navarre and P. Palanque and M. Pinto},
 journal = {Proceedings of the ACM on Human-Computer Interaction},
 title = {A ``More Intelligent'' Test Case Generation Approach through Task Models Manipulation},
 volume = {1},
 number = {EICS},
 month = jun,
 year = {2017},
 pages = {9:1--9:20},
 articleno = {9},
 numpages = {20},
 doi = {10.1145/3095811},
 publisher = {ACM},
 abstract = {Ensuring that an interactive application allows users to perform their activities and reach their goals is critical to the overall usability of the interactive application. Indeed, the effectiveness factor of usability directly refers to this capability. Assessing effectiveness is a real challenge for usability testing, as usability tests only cover a very limited number of tasks and activities. This paper proposes an approach towards automated testing of the effectiveness of interactive applications. To this end we resort to two main elements: an exhaustive description of users’ activities and goals using task models, and the generation of scenarios (from the task models) to be tested over the application. However, the number of scenarios can be very high (beyond the computing capabilities of machines) and we might end up testing multiple similar scenarios. In order to overcome these problems, we propose strategies based on task model manipulations (e.g., manipulating task nodes, operator nodes, information ...) resulting in a more intelligent test case generation approach. For each strategy, we investigate its relevance (both in terms of test case generation and in terms of validity compared to the original task models) and we illustrate it with a small example. Finally, the proposed strategies are applied to a real-size case study, demonstrating their relevance and validity for testing interactive applications.},
 paperurl = {https://repositorio.inesctec.pt/bitstream/123456789/6720/1/P-00M-ZA1.pdf}
}
